# Getting PyTorch to Run on the GPU

PyTorch supports GPU acceleration, which can significantly speed up computations. Here’s how to use the GPU in PyTorch:

## Checking for GPU Availability

Before using the GPU, ensure that it’s available:

```python
import torch

# Check whether a CUDA-capable GPU is available
if torch.cuda.is_available():
    print("GPU is available.")
else:
    print("GPU is not available.")
```
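
In practice, it is common to run this check once, store the result in a torch.device, and reuse it everywhere. The sketch below assumes a single-GPU machine (device index 0); torch.cuda.get_device_name(0) simply reports which GPU was found.

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)

# Optional: report the name of the GPU that will be used (assumes device index 0)
if device.type == "cuda":
    print("GPU name:", torch.cuda.get_device_name(0))
```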

## Moving Tensors to the GPU

You can move tensors to the GPU with the .to() method or the .cuda() method. Here’s an example:

```python
import torch

# Create a tensor on the CPU and move it to the GPU
tensor_cpu = torch.tensor([1.0, 2.0, 3.0])
tensor_gpu = tensor_cpu.to('cuda')  # Move tensor to GPU
print("Tensor on GPU:", tensor_gpu)
```

Explanation:

- .to('cuda'): Moves the tensor to the default CUDA device.
- .cuda(): A shorthand that does the same thing; .to() is more flexible because it also accepts a torch.device object or 'cpu'.
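
The same methods work for models: calling .to() on an nn.Module moves all of its parameters and buffers at once. Below is a minimal sketch using a tiny placeholder model chosen purely for illustration.

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny placeholder model, just to show the pattern
model = nn.Linear(3, 1)

# Moving a module moves all of its parameters and buffers
model = model.to(device)

# Inputs must be on the same device as the model
x = torch.tensor([[1.0, 2.0, 3.0]]).to(device)
output = model(x)
print("Output device:", output.device)
```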

## Moving Tensors Back to the CPU

After computations are done on the GPU, you might want to move tensors back to the CPU:

```python
import torch

# Create a tensor on the GPU
tensor_gpu = torch.tensor([1.0, 2.0, 3.0]).to('cuda')

# Move the tensor back to the CPU
tensor_cpu = tensor_gpu.to('cpu')
print("Tensor on CPU:", tensor_cpu)
```

Explanation:

- .to('cpu'): Moves the tensor back to the CPU; the .cpu() method is an equivalent shorthand.
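
A common reason to move data back is NumPy interoperability: .numpy() only works on CPU tensors, so a GPU tensor has to be brought back first. A small sketch, assuming a CUDA device is available:

```python
import torch

tensor_gpu = torch.tensor([1.0, 2.0, 3.0]).to('cuda')

# tensor_gpu.numpy() would raise an error here because the data lives on the GPU;
# chaining .cpu().numpy() is the usual fix.
array = tensor_gpu.cpu().numpy()
print("NumPy array:", array)
```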

Let’s review the key points:

  1. Check GPU Availability: Use torch.cuda.is_available() to see if a GPU is available.
  2. Move Tensors to GPU: Use .to('cuda') to move tensors or models to the GPU.
  3. Move Tensors Back to CPU: Use .to('cpu') to move tensors back to the CPU.
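
Putting the three steps together, here is a device-agnostic sketch of the usual pattern; the model and input are placeholders chosen only to make the example self-contained.

```python
import torch
import torch.nn as nn

# 1. Check availability and pick a device once
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 2. Move the model and the input tensor to that device
model = nn.Linear(3, 1).to(device)              # placeholder model
x = torch.tensor([[1.0, 2.0, 3.0]]).to(device)  # placeholder input

# Tensors used in the same operation must be on the same device,
# otherwise PyTorch raises a device-mismatch error.
y = model(x)

# 3. Move the result back to the CPU for printing, NumPy, or saving
print("Result:", y.to('cpu'))
```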