# Basic Tensor Operations
- Addition
- Subtraction
- Multiplication
- Division
- Matrix Multiplication
- Element-wise Operations
# Tensor Operations
```python
import torch

# Create a tensor
vector = torch.tensor([1, 2, 3, 4, 5])

# Add 5 to each element
added_tensor = vector + 5
print("After Addition:", added_tensor)
```

Output:

```
After Addition: tensor([ 6,  7,  8,  9, 10])
```
```python
import torch

# Create a tensor
vector = torch.tensor([5, 6, 7, 8, 9])

# Subtract 2 from each element
subtracted_tensor = vector - 2
print("After Subtraction:", subtracted_tensor)
```

Output:

```
After Subtraction: tensor([3, 4, 5, 6, 7])
```
```python
import torch

# Create a tensor
vector = torch.tensor([1, 2, 3, 4, 5])

# Multiply each element by 2
multiplied_tensor = vector * 2
print("After Multiplication:", multiplied_tensor)
```

Output:

```
After Multiplication: tensor([ 2,  4,  6,  8, 10])
```
```python
import torch

# Create a tensor
vector = torch.tensor([10, 20, 30, 40, 50])

# Divide each element by 2
divided_tensor = vector / 2
print("After Division:", divided_tensor)
```

Output:

```
After Division: tensor([ 5., 10., 15., 20., 25.])
```
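Note that true division promotes integer tensors to floating point, which is why the result above prints with decimal points. When an integer result is wanted, floor division keeps the original dtype:

```python
import torch

vector = torch.tensor([10, 20, 30, 40, 50])
print((vector / 2).dtype)  # torch.float32 -- true division promotes to float
print(vector // 2)         # tensor([ 5, 10, 15, 20, 25]) -- stays integer
```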
```python
import torch

# Create two matrices
matrix1 = torch.tensor([[1, 2], [3, 4]])
matrix2 = torch.tensor([[2, 0], [1, 3]])

# Matrix multiplication
result_matrix = torch.mm(matrix1, matrix2)
print("Matrix Multiplication Result:\n", result_matrix)
```

Output:

```
Matrix Multiplication Result:
 tensor([[ 4,  6],
        [10, 12]])
```
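For 2-D tensors, `torch.mm` is equivalent to the `@` operator and to `torch.matmul`. This is not the same as `*`, which multiplies element by element:

```python
import torch

matrix1 = torch.tensor([[1, 2], [3, 4]])
matrix2 = torch.tensor([[2, 0], [1, 3]])

print(matrix1 @ matrix2)  # matrix product, same as torch.mm
print(matrix1 * matrix2)  # element-wise product: tensor([[ 2,  0], [ 3, 12]])
```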
```python
import torch

# Create two tensors
tensor1 = torch.tensor([1, 2, 3])
tensor2 = torch.tensor([4, 5, 6])

# Element-wise addition
elementwise_sum = tensor1 + tensor2
print("Element-wise Sum:", elementwise_sum)

# Element-wise subtraction
elementwise_diff = tensor1 - tensor2
print("Element-wise Difference:", elementwise_diff)

# Element-wise multiplication
elementwise_prod = tensor1 * tensor2
print("Element-wise Product:", elementwise_prod)

# Element-wise division
elementwise_div = tensor2 / tensor1
print("Element-wise Division:", elementwise_div)
```

Output:

```
Element-wise Sum: tensor([5, 7, 9])
Element-wise Difference: tensor([-3, -3, -3])
Element-wise Product: tensor([ 4, 10, 18])
Element-wise Division: tensor([4.0000, 2.5000, 2.0000])
```
These operations cover basic arithmetic and matrix operations on tensors, which are fundamental for many machine learning and data manipulation tasks.
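The scalar examples above rely on broadcasting: PyTorch stretches the scalar to match the tensor's shape. Broadcasting also works between tensors of compatible shapes; a minimal sketch:

```python
import torch

matrix = torch.tensor([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
row = torch.tensor([10, 20, 30])               # shape (3,)

# The row vector is broadcast across each row of the matrix
print(matrix + row)
# tensor([[11, 22, 33],
#         [14, 25, 36]])
```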
# Aggregation Operations
Aggregation in deep learning refers to combining multiple values or tensors to produce a single result. It is essential for summarizing data, reducing dimensionality, and collapsing a tensor along one or more axes, and aggregation operations are used throughout neural networks to extract meaningful information from data.
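For instance, a training loop typically aggregates per-sample losses into a single scalar before backpropagation. A minimal sketch (the loss values here are made up for illustration):

```python
import torch

# Hypothetical per-sample losses for a batch of 4 examples
per_sample_loss = torch.tensor([0.8, 0.2, 0.5, 0.1])

# Aggregate to one scalar loss for the whole batch
batch_loss = per_sample_loss.mean()
print(batch_loss)  # tensor(0.4000)
```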
# Common Aggregation Operations
Sum
Purpose: Computes the sum of all elements in a tensor or along a specific axis.
Example Use Case: Summing the outputs of different layers in a neural network.
Code Example:
```python
import torch

tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])

sum_all = torch.sum(tensor)          # sum over every element
sum_dim0 = torch.sum(tensor, dim=0)  # sum down each column
sum_dim1 = torch.sum(tensor, dim=1)  # sum across each row

print("Sum of all elements:", sum_all)     # Output: tensor(21)
print("Sum along dimension 0:", sum_dim0)  # Output: tensor([5, 7, 9])
print("Sum along dimension 1:", sum_dim1)  # Output: tensor([ 6, 15])
```

Mean
Purpose: Calculates the average value of elements in a tensor or along a specific axis.
Example Use Case: Averaging the predictions of a neural network for regression tasks.
Code Example:
```python
import torch

# torch.mean requires a floating-point tensor
tensor = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float)

mean_all = torch.mean(tensor)
mean_dim0 = torch.mean(tensor, dim=0)
mean_dim1 = torch.mean(tensor, dim=1)

print("Mean of all elements:", mean_all)     # Output: tensor(3.5000)
print("Mean along dimension 0:", mean_dim0)  # Output: tensor([2.5000, 3.5000, 4.5000])
print("Mean along dimension 1:", mean_dim1)  # Output: tensor([2., 5.])
```

Max
Purpose: Finds the maximum value in a tensor or along a specific axis.
Example Use Case: Identifying the highest activation value in a neural network layer.
Code Example:
```python
import torch

tensor = torch.tensor([[1, 5, 3], [4, 2, 6]])

# torch.max without a dim returns only the overall maximum (no indices)
max_all = torch.max(tensor)
# With a dim, torch.max returns both the values and their indices;
# torch.argmax returns just the indices
max_dim0, max_indices_dim0 = torch.max(tensor, dim=0)
max_dim1, max_indices_dim1 = torch.max(tensor, dim=1)

print("Max value of all elements:", max_all)  # Output: tensor(6)
print("Max along dimension 0:", max_dim0)     # Output: tensor([4, 5, 6])
print("Max along dimension 1:", max_dim1)     # Output: tensor([5, 6])
```

Min
Purpose: Finds the minimum value in a tensor or along a specific axis.
Example Use Case: Identifying the lowest activation value in a neural network layer.
Code Example:
```python
import torch

tensor = torch.tensor([[1, 5, 3], [4, 2, 6]])

# torch.min without a dim returns only the overall minimum (no indices)
min_all = torch.min(tensor)
min_dim0, min_indices_dim0 = torch.min(tensor, dim=0)
min_dim1, min_indices_dim1 = torch.min(tensor, dim=1)

print("Min value of all elements:", min_all)  # Output: tensor(1)
print("Min along dimension 0:", min_dim0)     # Output: tensor([1, 2, 3])
print("Min along dimension 1:", min_dim1)     # Output: tensor([1, 2])
```

Norm
Purpose: Computes various types of norms (e.g., L1, L2) of tensors to measure their magnitude.
Example Use Case: Regularizing neural network weights to prevent overfitting.
Code Example:
```python
import torch

tensor = torch.tensor([1, 2, 3], dtype=torch.float)

norm_l1 = torch.norm(tensor, p=1)  # L1 norm: |1| + |2| + |3|
norm_l2 = torch.norm(tensor, p=2)  # L2 norm: sqrt(1^2 + 2^2 + 3^2)

print("L1 Norm:", norm_l1)  # Output: tensor(6.)
print("L2 Norm:", norm_l2)  # Output: tensor(3.7417)
```

Count
Purpose: Counts the number of elements that meet a specific condition.
Example Use Case: Counting the number of non-zero elements in a tensor.
Code Example:
```python
import torch

tensor = torch.tensor([[1, 0, 3], [4, 0, 6]])

# A boolean mask summed as integers counts the True entries
count_nonzero = torch.sum(tensor != 0)
count_zero = torch.sum(tensor == 0)

print("Count of non-zero elements:", count_nonzero)  # Output: tensor(4)
print("Count of zero elements:", count_zero)         # Output: tensor(2)
```
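As a side note, PyTorch also provides `torch.count_nonzero`, which performs the non-zero count directly:

```python
import torch

tensor = torch.tensor([[1, 0, 3], [4, 0, 6]])
print(torch.count_nonzero(tensor))  # tensor(4)
```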
Let’s review the key points:
- Sum: Aggregates values by summing them.
- Mean: Computes the average value.
- Max: Finds the maximum value.
- Min: Finds the minimum value.
- Norm: Measures the magnitude using various norms.
- Count: Counts the number of elements meeting specific criteria.
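As a quick consolidated check, the sketch below applies each of these operations to a single tensor (the values are made up for illustration):

```python
import torch

x = torch.tensor([[1., 0., 3.], [4., 0., 6.]])

print("Sum:", torch.sum(x))                  # tensor(14.)
print("Mean:", torch.mean(x))                # tensor(2.3333)
print("Max:", torch.max(x))                  # tensor(6.)
print("Min:", torch.min(x))                  # tensor(0.)
print("L2 norm:", torch.norm(x))             # tensor(7.8740)
print("Non-zero count:", torch.sum(x != 0))  # tensor(4)
```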