Data Preparation: Lists x and y are defined, representing the input and target data respectively. These lists are converted into floating-point PyTorch tensors X and Y.

Device Configuration: The code checks whether a CUDA-enabled GPU is available. If so, device is set to 'cuda'; otherwise it is set to 'cpu'. The tensors X and Y are then moved to the chosen device.

Dataset and DataLoader Creation: A custom dataset class MyDataset is defined by subclassing PyTorch's Dataset class; it serves the input and target data for training. An instance ds is created from X and Y, and a DataLoader dl is defined with a batch size of 2 and shuffling enabled. This DataLoader fetches data in batches during training.

Neural Network Definition: A feed-forward neural network MyNeuralNet is defined with an input layer, a hidden layer with ReLU activation, and an output layer. An instance of this network, mynet, is created and transferred to the chosen device (either CPU or CUDA).

Loss Functions: Two ways to compute the mean squared error loss are presented: PyTorch's built-in MSELoss function and a custom function named my_mean_squared_error. The loss value from the built-in function is computed and printed.

Intermediate Representations: The intermediate representations of the input as it passes through the network's layers are extracted: the output of the input-to-hidden layer (input_to_hidden_layer) and the output after the hidden layer's activation function (hidden_layer_activation).

Throughout the code, the emphasis is on building a neural network and setting up the components needed for training: handling data with datasets and loaders, defining the model, and computing the loss.
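The steps above can be sketched end to end as follows. The actual contents of x and y and the hidden-layer width are not given in the summary, so the toy values and the size of 8 below are illustrative assumptions; the class and attribute names (MyDataset, MyNeuralNet, input_to_hidden_layer, hidden_layer_activation, my_mean_squared_error) follow the summary.

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

# Data preparation (toy values assumed; the summary does not give them)
x = [[1, 2], [3, 4], [5, 6], [7, 8]]
y = [[3], [7], [11], [15]]
X = torch.tensor(x).float()
Y = torch.tensor(y).float()

# Device configuration: use the GPU if available, else the CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'
X, Y = X.to(device), Y.to(device)

# Custom dataset wrapping the input and target tensors
class MyDataset(Dataset):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __len__(self):
        return len(self.x)
    def __getitem__(self, ix):
        return self.x[ix], self.y[ix]

ds = MyDataset(X, Y)
dl = DataLoader(ds, batch_size=2, shuffle=True)

# Feed-forward network: input layer -> ReLU -> output layer
class MyNeuralNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.input_to_hidden_layer = nn.Linear(2, 8)   # hidden width 8 assumed
        self.hidden_layer_activation = nn.ReLU()
        self.hidden_to_output_layer = nn.Linear(8, 1)
    def forward(self, x):
        x = self.input_to_hidden_layer(x)
        x = self.hidden_layer_activation(x)
        return self.hidden_to_output_layer(x)

mynet = MyNeuralNet().to(device)

# Loss, two ways: built-in MSELoss and a custom equivalent
loss_func = nn.MSELoss()
loss_value = loss_func(mynet(X), Y)
print(loss_value)

def my_mean_squared_error(y_pred, y_true):
    return ((y_pred - y_true) ** 2).mean()

print(my_mean_squared_error(mynet(X), Y))

# Intermediate representations of X as it flows through the layers
hidden_pre = mynet.input_to_hidden_layer(X)               # after the input layer
hidden_post = mynet.hidden_layer_activation(hidden_pre)   # after ReLU
```

The custom loss and nn.MSELoss produce the same value on the same predictions, which is a useful sanity check when replacing a built-in loss with your own.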
Tasks: Deep Learning Fundamentals, Loss Functions
Task Categories: Deep Learning Fundamentals
Published: 10/11/23
custom loss function
hidden layer output
Book: Modern Computer Vision
Chapter 2