mygrad.nnet.initializers.glorot_uniform

mygrad.nnet.initializers.glorot_uniform(*shape, gain=1, dtype=<class 'numpy.float32'>, constant=None)

Initialize a mygrad.Tensor according to the uniform initialization procedure described by Glorot and Bengio.

Parameters:

shape : Sequence[int]

The shape of the output Tensor. Note that ``shape`` must be at least two-dimensional.

gain : Real, optional (default=1)

The gain (scaling factor) to apply.

dtype : data-type, optional (default=float32)

The data type of the output tensor; must be a floating-point type.

constant : bool, optional (default=False)

If True, the returned tensor is a constant (it does not back-propagate a gradient).

Returns:
mygrad.Tensor, shape=``shape``

A Tensor with values initialized according to the Glorot uniform initialization.

Notes

Glorot and Bengio put forward this initialization in the paper "Understanding the Difficulty of Training Deep Feedforward Neural Networks": http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf

A Tensor \(W\) initialized in this way is drawn from the uniform distribution

\[U\left[-\frac{\sqrt{6}}{\sqrt{n_j+n_{j+1}}},\, \frac{\sqrt{6}}{\sqrt{n_j+n_{j+1}}}\right]\]

where \(n_j\) and \(n_{j+1}\) are the fan-in and fan-out of the layer being initialized, and the gain multiplies the bounds of this distribution.
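
Examples

A minimal usage sketch. For a two-dimensional shape, \(n_j + n_{j+1}\) is simply the sum of the two axis lengths; the checks below assume the standard mygrad Tensor attributes ``data`` (the underlying numpy array) and ``constant``:

>>> import numpy as np
>>> from mygrad.nnet.initializers import glorot_uniform

>>> # draw a 2 x 3 weight tensor; with gain=1 its values lie in
>>> # U[-sqrt(6)/sqrt(2 + 3), sqrt(6)/sqrt(2 + 3)]
>>> w = glorot_uniform(2, 3)
>>> w.shape
(2, 3)
>>> bound = np.sqrt(6 / (2 + 3))
>>> bool(np.all(np.abs(w.data) <= bound))
True

>>> # gain scales the bound; constant=True makes the returned tensor
>>> # non-differentiable (no gradient is back-propagated through it)
>>> w2 = glorot_uniform(64, 128, gain=np.sqrt(2), constant=True)
>>> w2.constant
True
>>> bound2 = np.sqrt(2) * np.sqrt(6 / (64 + 128))
>>> bool(np.all(np.abs(w2.data) <= bound2))
True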