[PyTorch Study] Learning PyTorch with Examples

怼烎 @ 2021-09-28 09:56

Learning PyTorch with Examples

  • Preface
  • Tensors
    • Warm-up: numpy
    • PyTorch: Tensors
  • Autograd
    • PyTorch: Tensors and autograd
    • PyTorch: Defining new autograd functions
  • nn module
    • PyTorch: nn
    • PyTorch: optim
    • PyTorch: Custom nn Modules
    • PyTorch: Control Flow + Weight Sharing

Preface

Advantages of PyTorch (a minimal sketch illustrating both points follows this list):

  • An n-dimensional Tensor class, similar to a numpy array, but able to run on GPUs
  • Automatic differentiation for building and training neural networks
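
A minimal sketch illustrating both points; it falls back to the CPU when no CUDA device is available:

import torch

# Point 1: Tensors look like numpy arrays but can live on the GPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
x = torch.randn(3, 4, device=device)

# Point 2: autograd tracks operations on Tensors and differentiates them automatically
w = torch.randn(4, 1, device=device, requires_grad=True)
loss = x.mm(w).pow(2).sum()
loss.backward()
print(w.grad.shape)  # torch.Size([4, 1]) -- d(loss)/d(w), computed automatically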

Tensors

Warm-up: numpy

The network structure is roughly:
input x -> hidden layer h -> ReLU activation h_relu -> output layer y_pred

  • input -> hidden layer h uses the weight matrix w1
  • hidden layer h -> h_relu applies the ReLU activation function
  • h_relu -> output layer uses the weight matrix w2

Optimization method:
gradient descent

Loss function:
squared error (the code below sums the squared errors rather than averaging them)

The gradients here are computed with matrix operations; for a least-squares objective J(θ), the matrix-form gradient and update are
$\text{gradient} = \dfrac{\partial J(\theta)}{\partial \theta} = X^T (X\theta - Y)$
$w = w - \alpha \cdot \text{gradient}$
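
A minimal, self-contained numpy sketch of this matrix-form update on a plain linear least-squares problem (the names X, Y, theta, alpha are illustrative and are not part of the two-layer example below):

import numpy as np

# Toy linear least-squares problem: J(theta) = 0.5 * ||X @ theta - Y||^2
np.random.seed(0)
X = np.random.randn(100, 5)
true_theta = np.random.randn(5, 1)
Y = X @ true_theta

theta = np.zeros((5, 1))
alpha = 5e-3  # learning rate

for _ in range(1000):
    gradient = X.T @ (X @ theta - Y)   # X^T (X theta - Y)
    theta = theta - alpha * gradient   # theta = theta - alpha * gradient

print(np.linalg.norm(theta - true_theta))  # close to 0 after convergence

The two-layer network below applies the same idea layer by layer via backpropagation.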

import numpy as np

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = np.random.randn(N, D_in)   # input
y = np.random.randn(N, D_out)  # target output

# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)

    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

PyTorch: Tensors

PyTorch Tensors can use the GPU to accelerate numerical computation. The setup looks like this; the rest of the training loop (sketched after the snippet) is the same as the numpy version, just written with Tensor operations:

import torch

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data on the chosen device
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)
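
Continuing from the snippet above, a sketch of the rest of the loop, mirroring the numpy version but written with Tensor operations:

w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: mm is matrix multiply, clamp(min=0) is ReLU
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss; .item() extracts a Python number from a scalar Tensor
    loss = (y_pred - y).pow(2).sum().item()
    print(t, loss)

    # Backprop written by hand with Tensor ops (same math as the numpy version)
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2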

Autograd

PyTorch: Tensors and autograd

Use PyTorch's automatic differentiation (autograd) to run the backward pass automatically instead of coding it by hand.

Example:

import torch

dtype = torch.float
device = torch.device("cpu")

N, D_in, H, D_out = 64, 1000, 100, 10

x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# requires_grad=True tells autograd to track operations on these weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: clamp(min=0) plays the role of ReLU
    y_pred = x.mm(w1).clamp(min=0).mm(w2)

    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())

    # Backward pass: autograd computes w1.grad and w2.grad
    loss.backward()

    # Update the weights without recording these operations in autograd
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad

        # Manually zero the gradients after the update
        w1.grad.zero_()
        w2.grad.zero_()
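
The final zero_() calls matter because .backward() accumulates into .grad rather than overwriting it; a tiny self-contained sketch of that behavior:

import torch

w = torch.tensor(2.0, requires_grad=True)

(3 * w).backward()
print(w.grad)  # tensor(3.)

(3 * w).backward()
print(w.grad)  # tensor(6.) -- the second backward added to the first

w.grad.zero_()
print(w.grad)  # tensor(0.)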

PyTorch: Defining new autograd functions

An autograd operation consists of two parts:

  1. Forward function: computes output Tensors from input Tensors
  2. Backward function: receives the gradient of the output with respect to some scalar (the loss) and computes the gradient of the input with respect to that same scalar

We can define our own autograd operator by subclassing torch.autograd.Function and implementing the forward and backward functions.
We can then use the new operator by calling the class's apply method (MyReLU.apply in the example below).

Example:

import torch


class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        # Save the input for the backward pass, then apply ReLU
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient of ReLU: pass the gradient through where input >= 0, zero elsewhere
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input


dtype = torch.float
device = torch.device("cpu")

N, D_in, H, D_out = 64, 1000, 100, 10

x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # Apply our custom Function via its .apply method
    relu = MyReLU.apply
    y_pred = relu(x.mm(w1)).mm(w2)

    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())

    loss.backward()

    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()
        w2.grad.zero_()

nn module

PyTorch: nn

The nn package defines a set of Modules, which are roughly equivalent to the layers of a neural network.
A Module receives input Tensors and computes output Tensors, and may also hold internal state such as Tensors containing learnable parameters (as the sketch after the example shows, each nn.Linear registers its own weight and bias Tensors).

An example of building a neural network with the nn package:

# -*- coding: utf-8 -*-
import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
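
A quick sketch that prints the parameters a Sequential model registers (dimensions follow the example above; the names 0.weight, 0.bias, 2.weight, 2.bias come from the positions of the Linear modules inside the Sequential):

import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1000, 100),
    torch.nn.ReLU(),
    torch.nn.Linear(100, 10),
)

# Each nn.Linear registers a weight and a bias Tensor with requires_grad=True
for name, p in model.named_parameters():
    print(name, tuple(p.shape), p.requires_grad)
# 0.weight (100, 1000) True
# 0.bias (100,) True
# 2.weight (10, 100) True
# 2.bias (10,) True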

PyTorch: optim

So far we have updated the model parameters by hand. That is easy enough for a simple optimizer such as stochastic gradient descent, but becomes tedious with more sophisticated optimizers such as AdaGrad or RMSProp.

The optim package abstracts the update step away and provides implementations of commonly used optimization algorithms; switching between them is a one-line change, as sketched after the example.

Note: zero the gradients (optimizer.zero_grad()) before each backward pass, because by default PyTorch accumulates gradients rather than overwriting them.

Example:

# -*- coding: utf-8 -*-
import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')

# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use Adam; the optim package contains many other
# optimization algorithms. The first argument to the Adam constructor tells the
# optimizer which Tensors it should update.
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for t in range(500):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(x)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because by default, gradients are
    # accumulated in buffers (i.e., not overwritten) whenever .backward()
    # is called. Check out the docs of torch.autograd.backward for more details.
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model parameters
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its parameters
    optimizer.step()
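
A small self-contained sketch of swapping optimizers; the tiny Linear model and random data here are placeholders, not the example above:

import torch

model = torch.nn.Linear(10, 1)
x, y = torch.randn(8, 10), torch.randn(8, 1)
loss_fn = torch.nn.MSELoss()

# Only this line changes between optimizers; all of these live in torch.optim.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
# optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
# optimizer = torch.optim.Adagrad(model.parameters(), lr=1e-2)

for t in range(10):
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()  # gradients accumulate by default, so clear them first
    loss.backward()
    optimizer.step()
    print(t, loss.item())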

PyTorch: Custom nn Modules

So far we have defined the network with a plain torch.nn.Sequential(...).
For more complex models we subclass torch.nn.Module and define a forward method; forward receives input Tensors and produces output Tensors.

Example:
a custom two-layer network

# -*- coding: utf-8 -*-
import torch


class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we instantiate two nn.Linear modules and assign
        them as member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must
        return a Tensor of output data. We can use Modules defined in the
        constructor as well as arbitrary operators on Tensors.
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred


# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Construct our model by instantiating the class defined above
model = TwoLayerNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters of the two
# nn.Linear modules which are members of the model.
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

for t in range(500):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

PyTorch: Control Flow + Weight Sharing

Because the computation graph is built dynamically on each forward pass, we can use ordinary Python control flow (if/else, loops, etc.) and share weights by reusing the same Module several times, which also achieves code reuse.

# -*- coding: utf-8 -*-
import random
import torch


class DynamicNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we construct three nn.Linear instances that we will
        use in the forward pass.
        """
        super(DynamicNet, self).__init__()
        self.input_linear = torch.nn.Linear(D_in, H)
        self.middle_linear = torch.nn.Linear(H, H)
        self.output_linear = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        For the forward pass of the model, we randomly choose either 0, 1, 2, or 3
        and reuse the middle_linear Module that many times to compute hidden layer
        representations.

        Since each forward pass builds a dynamic computation graph, we can use
        normal Python control-flow operators like loops or conditional statements
        when defining the forward pass of the model.

        Here we also see that it is perfectly safe to reuse the same Module many
        times when defining a computational graph. This is a big improvement from
        Lua Torch, where each Module could be used only once.
        """
        h_relu = self.input_linear(x).clamp(min=0)
        for _ in range(random.randint(0, 3)):
            h_relu = self.middle_linear(h_relu).clamp(min=0)
        y_pred = self.output_linear(h_relu)
        return y_pred


# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Construct our model by instantiating the class defined above
model = DynamicNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

for t in range(500):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
