Automatic Differentiation with ``torch.autograd``
=================================================
When training neural networks, the most frequently used algorithm is
backpropagation. In this algorithm, parameters (model weights) are
adjusted according to the gradient of the loss function with respect
to each parameter.
To compute those gradients, PyTorch has a built-in differentiation engine
called ``torch.autograd``. It supports automatic computation of gradients for any
computational graph.
Consider the simplest one-layer neural network, with input ``x``,
parameters ``w`` and ``b``, and some loss function. It can be defined in
PyTorch in the following manner:
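A minimal sketch of such a network is shown below. The tensor sizes (an input of
size 5 and an output of size 3) and the choice of binary cross-entropy as the loss
function are illustrative assumptions, not requirements.

.. code-block:: python

    import torch

    x = torch.ones(5)   # input tensor (size 5 is an arbitrary choice)
    y = torch.zeros(3)  # expected output (size 3 is an arbitrary choice)

    # Parameters are created with requires_grad=True so that autograd
    # tracks operations on them and can compute gradients later.
    w = torch.randn(5, 3, requires_grad=True)
    b = torch.randn(3, requires_grad=True)

    z = torch.matmul(x, w) + b  # single linear layer
    loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)

Calling ``loss.backward()`` on this graph would then populate ``w.grad`` and
``b.grad`` with the gradients of the loss with respect to ``w`` and ``b``.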