# Exercise: Build and Train a Simple Neural Network

Objective: Create a simple neural network using PyTorch, train it on a dataset, and evaluate its performance.

Instructions:

  1. Define a Neural Network Model: Create a neural network with one hidden layer.
  2. Load a Dataset: Use the MNIST dataset for training and evaluation.
  3. Define a Loss Function and Optimizer: Choose an appropriate loss function and optimizer for the task.
  4. Train the Model: Implement a training loop to train the model on the MNIST dataset.
  5. Evaluate the Model: Test the model on a test set and print the accuracy.

Solution:

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# 1. Define the Neural Network Model
class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 128)  # Input layer (28*28) -> hidden layer (128 units)
        self.fc2 = nn.Linear(128, 10)       # Hidden layer (128 units) -> output layer (10 classes)

    def forward(self, x):
        x = x.view(-1, 28 * 28)      # Flatten the input
        x = torch.relu(self.fc1(x))  # Apply ReLU activation
        x = self.fc2(x)              # Output layer (raw logits)
        return x

# 2. Load the MNIST Dataset
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))  # Normalize images
])

trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=1000, shuffle=False)

# 3. Define Loss Function and Optimizer
model = SimpleNN()
criterion = nn.CrossEntropyLoss()                     # Loss function for classification
optimizer = optim.Adam(model.parameters(), lr=0.001)  # Optimizer

# 4. Train the Model
num_epochs = 5
for epoch in range(num_epochs):
    model.train()  # Set model to training mode
    running_loss = 0.0
    for data, targets in trainloader:
        optimizer.zero_grad()               # Clear previous gradients
        outputs = model(data)               # Forward pass
        loss = criterion(outputs, targets)  # Compute loss
        loss.backward()                     # Backward pass
        optimizer.step()                    # Update weights
        running_loss += loss.item()
    print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {running_loss/len(trainloader):.4f}')

# 5. Evaluate the Model
model.eval()  # Set model to evaluation mode
correct = 0
total = 0
with torch.no_grad():  # No need to compute gradients during evaluation
    for data, targets in testloader:
        outputs = model(data)
        _, predicted = torch.max(outputs, 1)  # Get the predicted classes
        total += targets.size(0)
        correct += (predicted == targets).sum().item()

accuracy = 100 * correct / total
print(f'Test Accuracy: {accuracy:.2f}%')
```
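As a follow-up, you may want to keep the trained weights and sanity-check the model on a single image. The sketch below is one way to do that, assuming it runs in the same script after the training loop; the filename simple_nn.pt is just an illustrative choice.

```python
# Save the learned weights (the filename is an arbitrary, illustrative choice)
torch.save(model.state_dict(), 'simple_nn.pt')

# Reload them into a fresh instance and classify one test image
restored = SimpleNN()
restored.load_state_dict(torch.load('simple_nn.pt'))
restored.eval()

image, label = testset[0]                  # One (1, 28, 28) tensor and its integer label
with torch.no_grad():
    logits = restored(image.unsqueeze(0))  # Add a batch dimension -> shape (1, 10)
    prediction = logits.argmax(dim=1).item()
print(f'Predicted: {prediction}, actual: {label}')
```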

Explanation:

  1. Define the Neural Network Model: The SimpleNN class stacks two fully connected layers; its forward method flattens each 28×28 image into a 784-dimensional vector, applies ReLU after the first layer, and returns raw logits for the 10 digit classes (see the first sketch after this list).
  2. Load the MNIST Dataset: Each image is converted to a tensor and normalized to roughly the [-1, 1] range; DataLoaders then batch the data for training and testing, shuffling only the training set.
  3. Define Loss Function and Optimizer: CrossEntropyLoss handles multi-class classification and expects raw logits, which is why forward applies no softmax (see the second sketch after this list); the Adam optimizer updates the model's parameters with a learning rate of 0.001.
  4. Train the Model: For each of the 5 epochs, the training loop processes the dataset in batches: it clears stale gradients, computes the loss, backpropagates, updates the weights, and prints the average loss per epoch.
  5. Evaluate the Model: With gradients disabled and the model in eval mode, predicted classes are compared against the test labels and the overall accuracy is printed.
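
To make the shapes in point 1 concrete, here is a minimal sketch (reusing the SimpleNN class from the solution) that pushes a dummy batch through an untrained model; the batch size of 4 is arbitrary.

```python
import torch

model = SimpleNN()
dummy = torch.randn(4, 1, 28, 28)  # A fake batch of 4 grayscale 28x28 images
logits = model(dummy)              # view(-1, 28 * 28) flattens it to shape (4, 784)
print(logits.shape)                # torch.Size([4, 10]): one raw logit per class
```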
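For point 3, note that nn.CrossEntropyLoss applies log-softmax and negative log-likelihood internally, which is why the network returns raw logits. A minimal sketch with made-up scores:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, 0.5, -1.0]])  # Raw scores for 3 classes, batch of 1
target = torch.tensor([0])                 # Index of the correct class
print(criterion(logits, target))           # ~0.24: class 0 already scores highest

# Equivalent by hand: negative log-softmax probability of the target class
print(-torch.log_softmax(logits, dim=1)[0, 0])
```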