Add documentation for new losses and activations
Tom94 committed Oct 30, 2021
1 parent ef1cccc commit edf26b6
Showing 1 changed file with 44 additions and 2 deletions: DOCUMENTATION.md
@@ -14,10 +14,12 @@ Lightning fast implementation of small multi-layer perceptrons (MLPs). Restricte
{
"otype": "FullyFusedMLP", // Component type.
"activation": "ReLU", // Activation of hidden layers.
// Can be "ReLU" or "Sigmoid".
// Can be "ReLU", "Sigmoid",
// "Squareplus" or "Softplus".
"output_activation": "None", // Activation of the output layer.
// Can be "None", "ReLU", "Sigmoid",
// or "Exponential".
// "Exponential", "Squareplus" or
// "Softplus".
"n_neurons": 128, // Neurons in each hidden layer.
// May only be 32, 64 or 128.
"n_hidden_layers": 5, // Number of hidden layers.
@@ -112,6 +114,46 @@ The encoding used in Neural Radiance Caching [Müller et al. 2021] (to appear).

## Losses

### L1

Standard L1 loss.

```json5
{
"otype": "L1" // Component type.
}
```
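
Written out, the per-element term is simply the absolute difference between the network prediction $\hat{y}$ and the target $y$:

$$\mathcal{L}_{\mathrm{L1}}(\hat{y}, y) = |\hat{y} - y|$$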

### Relative L1

Relative L1 loss normalized by the network prediction.

```json5
{
"otype": "RelativeL1" // Component type.
}
```
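
As a formula, the same term divided by the magnitude of the prediction (a sketch; the small stabilizing $\epsilon$ in the denominator is an assumption and its exact value is not documented here):

$$\mathcal{L}_{\mathrm{rel.L1}}(\hat{y}, y) = \frac{|\hat{y} - y|}{|\hat{y}| + \epsilon}$$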

### MAPE

Mean absolute percentage error (MAPE). The same as Relative L1, but normalized by the target.

```json5
{
"otype": "MAPE" // Component type.
}
```
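
In formula form, the normalization simply moves to the target (again assuming a small stabilizing $\epsilon$; the conventional factor of 100% is typically dropped when MAPE is used as a training loss):

$$\mathcal{L}_{\mathrm{MAPE}}(\hat{y}, y) = \frac{|\hat{y} - y|}{|y| + \epsilon}$$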

### SMAPE

Symmetric mean absolute percentage error (SMAPE). The same as Relative L1, but normalized by the mean of the prediction and the target.

```json5
{
"otype": "SMAPE" // Component type.
}
```
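
The symmetric variant averages the two magnitudes in the denominator (the stabilizing $\epsilon$ is again an assumption):

$$\mathcal{L}_{\mathrm{SMAPE}}(\hat{y}, y) = \frac{|\hat{y} - y|}{\tfrac{1}{2}\left(|\hat{y}| + |y|\right) + \epsilon}$$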

### L2

Standard L2 loss.
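
Written out, the standard per-element term is the squared difference:

$$\mathcal{L}_{\mathrm{L2}}(\hat{y}, y) = \left(\hat{y} - y\right)^2$$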
