Neural network operations (mygrad.nnet)#
Layer operations#
batchnorm: Performs batch normalization on x.
conv_nd: Use filter_bank to perform strided N-dimensional, neural network-style convolutions over x.
max_pool: Perform max-pooling over the last N dimensions of a data batch.
gru: Performs a forward pass of sequential data through a Gated Recurrent Unit layer, returning the hidden descriptors produced by the layer's trainable parameters.
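As a quick illustration of these layer operations, the following is a minimal sketch of a convolution followed by max-pooling. The argument conventions shown (a keyword stride for conv_nd; a positional pooling-window shape and stride for max_pool) are assumptions about the API rather than details taken from this page.

>>> import numpy as np
>>> import mygrad as mg
>>> from mygrad.nnet import conv_nd, max_pool
>>> x = mg.Tensor(np.random.rand(2, 3, 8, 8))   # batch of 2, 3 channels, 8x8 inputs
>>> w = mg.Tensor(np.random.rand(4, 3, 3, 3))   # 4 filters with 3x3 kernels
>>> out = max_pool(conv_nd(x, w, stride=1), (2, 2), 2)
>>> out.shape
(2, 4, 3, 3)
>>> out.sum().backward()   # populates x.grad and w.grad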
Losses#
focal_loss: Return the per-datum focal loss.
margin_ranking_loss: Computes the average margin ranking loss.
multiclass_hinge: Computes the average multiclass hinge loss.
negative_log_likelihood: Returns the (weighted) negative log-likelihood loss between log-probabilities and y_true.
softmax_crossentropy: Given the classification scores of C classes for N pieces of data, applies the softmax normalization and computes the mean cross-entropy loss against the true labels.
softmax_focal_loss: Applies the softmax normalization to the input scores before computing the per-datum focal loss.
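For example, here is a minimal sketch of softmax cross-entropy on random scores, assuming the (scores, integer-labels) calling convention and a loss that is averaged over the batch:

>>> import numpy as np
>>> import mygrad as mg
>>> from mygrad.nnet.losses import softmax_crossentropy
>>> scores = mg.Tensor(np.random.rand(5, 3))   # N=5 data, C=3 classes
>>> labels = np.array([0, 2, 1, 1, 0])         # true class index per datum
>>> loss = softmax_crossentropy(scores, labels)
>>> loss.backward()            # d(loss)/d(scores) lands in scores.grad
>>> scores.grad.shape
(5, 3)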
Activations#
elu: Returns the exponential linear activation (ELU) elementwise along x.
glu: Returns the Gated Linear Unit A * σ(B), where A and B are split from x.
hard_tanh: Returns the hard hyperbolic tangent function.
leaky_relu: Returns the leaky rectified linear activation elementwise along x.
logsoftmax: Applies the log-softmax activation function.
selu: Returns the scaled exponential linear activation (SELU) elementwise along x.
sigmoid: Applies the sigmoid activation function.
softmax: Applies the softmax activation function.
soft_sign: Returns the soft sign function x / (1 + |x|).
relu: Applies the rectified linear unit activation function.
tanh: Hyperbolic tangent, element-wise.
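A minimal sketch applying two of these activations, assuming they are importable from mygrad.nnet.activations:

>>> import mygrad as mg
>>> from mygrad.nnet.activations import relu, softmax
>>> x = mg.Tensor([[-1.0, 0.5, 2.0], [3.0, -0.5, 0.0]])
>>> p = softmax(x)        # each row is normalized to a probability vector
>>> r = relu(x)           # negative entries are clamped to 0
>>> r.sum().backward()    # back-propagates through the activation
>>> x.grad.shape          # d(sum)/dx has the same shape as x
(2, 3)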
Initializers#
dirac: Initialize a Tensor according to the Dirac initialization procedure described by Zagoruyko and Komodakis.
glorot_normal: Initialize a Tensor according to the normal initialization procedure described by Glorot and Bengio.
glorot_uniform: Initialize a Tensor according to the uniform initialization procedure described by Glorot and Bengio.
he_normal: Initialize a Tensor according to the normal initialization procedure described by He et al.
he_uniform: Initialize a Tensor according to the uniform initialization procedure described by He et al.
normal: Initialize a Tensor by drawing from a normal (Gaussian) distribution.
uniform: Initialize a Tensor by drawing from a uniform distribution.
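A minimal sketch of drawing weight Tensors from two of these initializers, assuming each accepts the desired shape as positional integers:

>>> from mygrad.nnet.initializers import glorot_uniform, he_normal
>>> w1 = glorot_uniform(3, 4)   # e.g. weights for a 3 -> 4 dense layer
>>> w2 = he_normal(4, 2)        # e.g. weights for a 4 -> 2 dense layer
>>> w1.shape, w2.shape
((3, 4), (4, 2))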
Sliding Window View Utility#
sliding_window_view: Create a sliding window view over the trailing dimensions of an array.
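A minimal sketch, assuming sliding_window_view is importable from the top-level mygrad namespace and accepts (arr, window_shape, step):

>>> import numpy as np
>>> from mygrad import sliding_window_view
>>> x = np.arange(36).reshape(6, 6)
>>> windows = sliding_window_view(x, window_shape=(3, 3), step=1)
>>> windows.shape    # a 4x4 grid of 3x3 windows over the trailing two axes
(4, 4, 3, 3)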