mygrad.Tensor.null_gradients
- Tensor.null_gradients(clear_graph: bool = True)
Deprecated: Tensors will automatically have their computational graphs cleared during backprop. Simply involving a tensor in a new computational graph will null its gradient.
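For example, a minimal sketch of the automatic behavior described above (assuming the MyGrad 2.0 semantics stated in this deprecation note, where backprop clears the graph):

>>> import mygrad as mg
>>> x = mg.tensor(2.0)
>>> (x * 3).backward()  # backprop populates x.grad and clears the graph
>>> x.grad
array(3.)
>>> y = x + 1  # involving x in a new graph automatically nulls its gradient
>>> x.grad is None
True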
Sets the gradient for this tensor and for all preceding tensors in the computation graph to None. Additionally, the computational graph that terminates in this tensor can also be cleared during this process.
- Parameters:
- clear_graph : bool, optional (default=True)
If True, clear the computational graph in addition to nulling the gradients.
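As a brief, hedged sketch of the non-default case: clear_graph=False nulls the gradients while leaving any graph-clearing to the caller:

>>> import mygrad as mg
>>> x = mg.tensor(2.0)
>>> f = x * 3
>>> f.backward()
>>> x.grad
array(3.)
>>> f.null_gradients(clear_graph=False)  # null grads without clearing the graph
>>> x.grad is None
True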
Notes
It is advised to clear the computational graph when nulling gradients, i.e. invoke null_gradients(clear_graph=True) (or simply null_gradients()). This de-references all intermediate operations and tensors in the computational graph and thus permits garbage collection, freeing the memory that was used by the computational graph.

Examples
>>> import mygrad as mg
>>> x = mg.tensor(2)
>>> y = mg.tensor(3)
>>> w = x * y
>>> f = 2 * w
>>> f.backward()  # computes df/df, df/dw, df/dy, and df/dx
>>> any(tensor.grad is None for tensor in (f, w, x, y))
False
>>> f.null_gradients()  # set tensor.grad to None for all tensors in the graph
>>> all(tensor.grad is None for tensor in (f, w, x, y))
True
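For context, a minimal sketch of the pre-deprecation training-loop pattern in which null_gradients() was typically invoked after each update; the quadratic loss and the 0.1 learning rate are illustrative choices, not part of MyGrad's API:

>>> import mygrad as mg
>>> w = mg.tensor(1.0)
>>> for _ in range(3):
...     loss = (w - 4) ** 2     # each iteration builds a fresh computational graph
...     loss.backward()         # populates w.grad with d(loss)/dw
...     w.data -= 0.1 * w.grad  # gradient-descent step on the underlying array
...     loss.null_gradients()   # null the grads and free the graph's memory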