The code performs the following tasks:

1. Setting up data: A set of input (x) and target (y) data points is defined, converted into PyTorch tensors X and Y, and cast to floating-point type.
2. Determining the device: The code checks whether a CUDA-capable GPU is available and sets the device to 'cuda' (GPU) or 'cpu' accordingly. The tensors X and Y are then transferred to that device.
3. Defining a neural network: A simple feed-forward neural network, MyNeuralNet, is defined with an input layer, one hidden layer with ReLU activation, and an output layer.
4. Model initialization and loss calculation: The random seed for PyTorch is set for reproducibility, an instance of the network is created and transferred to the device, and the mean squared error (MSE) loss between the model's prediction (_Y) and the target values (Y) is computed and printed.
5. Training with stochastic gradient descent (SGD): The SGD optimizer is initialized with a learning rate of 0.001, and the model is trained for 50 epochs. In each epoch the gradients are zeroed, a forward pass is done, the loss is computed, and backpropagation adjusts the model's weights. The loss for each epoch is stored in the loss_history list.
6. Visualizing the training loss: Using matplotlib, the loss values over the 50 epochs are plotted. This visualization helps in understanding how well the model is learning.
7. Modifying the neural network: The MyNeuralNet class is redefined to return not only the output of the network but also the output of the hidden layer. This network is then trained in the same manner as before, and the loss over 50 epochs is plotted again to visualize the training progress.
8. Inspecting the hidden-layer output: The output of the hidden layer for the input tensor X is retrieved and printed, providing insight into the intermediate representations the neural network has learned.
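The pipeline described above can be sketched roughly as follows. The actual data values, layer sizes, and variable names are not given in the summary, so the x/y pairs and the 2→8→1 architecture here are illustrative assumptions; only the overall structure (tensors, device selection, MSE loss, SGD at lr=0.001 for 50 epochs, loss_history) follows the description.

```python
import torch
import torch.nn as nn

# Toy input/target pairs; placeholders standing in for the original data.
x = [[1, 2], [3, 4], [5, 6], [7, 8]]
y = [[3], [7], [11], [15]]
X = torch.tensor(x).float()
Y = torch.tensor(y).float()

# Use the GPU when CUDA is available, otherwise fall back to the CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
X, Y = X.to(device), Y.to(device)

class MyNeuralNet(nn.Module):
    def __init__(self):
        super().__init__()
        # One hidden layer with ReLU activation; sizes are assumptions.
        self.input_to_hidden = nn.Linear(2, 8)
        self.activation = nn.ReLU()
        self.hidden_to_output = nn.Linear(8, 1)

    def forward(self, x):
        x = self.input_to_hidden(x)
        x = self.activation(x)
        return self.hidden_to_output(x)

torch.manual_seed(0)  # reproducibility
model = MyNeuralNet().to(device)
loss_fn = nn.MSELoss()

_Y = model(X)
print(loss_fn(_Y, Y))  # initial loss, before any training

# Train for 50 epochs with SGD, recording the loss at each epoch.
opt = torch.optim.SGD(model.parameters(), lr=0.001)
loss_history = []
for _ in range(50):
    opt.zero_grad()          # clear accumulated gradients
    loss = loss_fn(model(X), Y)
    loss.backward()          # backpropagate
    opt.step()               # update weights
    loss_history.append(loss.item())
```

Plotting loss_history with matplotlib (e.g. `plt.plot(loss_history)`) then reproduces the training-loss visualization described above.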
Overall, the code demonstrates how to set up, define, and train a simple neural network using PyTorch, and how to visualize the training process using matplotlib. The modification made to the neural network in the latter half of the code allows a deeper inspection of the network's inner workings, specifically the output of the hidden layer.
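The modified network that exposes its hidden layer might look like the sketch below. As before, the layer sizes and sample inputs are assumptions; the key point is that forward returns both the final prediction and the hidden activations, so the intermediate representation can be printed directly.

```python
import torch
import torch.nn as nn

class MyNeuralNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.input_to_hidden = nn.Linear(2, 8)
        self.activation = nn.ReLU()
        self.hidden_to_output = nn.Linear(8, 1)

    def forward(self, x):
        hidden = self.activation(self.input_to_hidden(x))
        output = self.hidden_to_output(hidden)
        # Return the prediction together with the hidden-layer output
        # so the intermediate representation can be inspected.
        return output, hidden

torch.manual_seed(0)
X = torch.tensor([[1, 2], [3, 4]]).float()
model = MyNeuralNet()
prediction, hidden_out = model(X)
print(hidden_out)  # one 8-dimensional hidden activation per input row
```

During training, the loss would be computed against the first returned value (the prediction), while the second value is used only for inspection.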
Tasks: Deep Learning Fundamentals
Task Categories: Deep Learning Fundamentals