PyTorch cheat sheet


Reposted from: https://hackmd.io/@rh0jTfFDTO6SteMDq91tgg/HkDRHKLrU

This page collects commonly used PyTorch modules and functions for quick reference.
For the complete and detailed docs, see the official PyTorch documentation (ver 1.2.0).

In addition, there are two versions of the PyTorch tutorial Colaboratory notebooks: one matches the in-class tutorial, and the other has more detailed explanations (including pre/post-processing, visualization, common utilities, etc.) for your reference.

One request: please copy the notebooks to your own Google Drive first, or use Playground mode, so the original versions are not disturbed. Thanks for your cooperation.

  • PyTorch cheat sheet

    • Tensor Operations [Docs]
    • Data Preparation [Docs]
    • NN (Neural Network) Model Construction [Docs]

      • Training
      • Testing
    • CNN (Convolutional Neural Networks)
    • RNN (Recurrent Neural Networks)
  • Change Log

Before we begin… (a note on notation)

  1. When an entry appears with the as syntax, it means the module is usually imported with the import ... as ... syntax. For example:

         torch
         ├── nn as nn

     indicates that in code we usually write

         import torch.nn as nn

     and the modules, classes, and functions under it then appear as nn.SOMETHING, just like

         import numpy as np # np.array...
         import pandas as pd # pd.read_csv...

  2. When an entry is wrapped in () parentheses, it denotes an object, and the entries under it are its methods. For example:

         torch
         └── (Tensor)
             ├── view
             └── item

     means view and item are methods of a torch.Tensor object.

  3. When an entry is wrapped in <> angle brackets, it refers to the same module that appeared earlier; a line connects it back to its first appearance in that section. For example:

         ├── functional as F ────────────────────────────┐
         │   ├── relu                                    │
         ...(some lines)...                              │
         └── <functional as F> <─────────────────────────┘
             └── nll_loss

     Here the two functional entries refer to the same module; it is written twice because it serves two different purposes.

Tensor Operations [Docs]

    torch
    ├── (Tensor)
    │   ├── view(*shape) # e.g. x.view(-1, 3, 12)
    │   │                ## the -1 dimension is inferred automatically
    │   └── item() # get the Python value if the Tensor is a scalar
    ├── empty(*size) # e.g. x = torch.empty(2, 3)
    ├── stack(tensors, dim=0)
    └── cat(tensors, dim=0)
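
A minimal usage sketch of the operations above (the shapes are illustrative assumptions):

    import torch

    x = torch.empty(2, 3)           # uninitialized 2x3 tensor
    y = x.view(-1)                  # flatten; -1 infers the size (here 6)
    s = torch.stack([x, x], dim=0)  # adds a new dim -> shape (2, 2, 3)
    c = torch.cat([x, x], dim=0)    # joins an existing dim -> shape (4, 3)
    v = x.sum().item()              # .item() extracts a Python scalar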

Data Preparation [Docs]

    torch
    └── utils
        └── data
            ├── Dataset # A class to override
            │           ## `__len__` & `__getitem__`
            ├── TensorDataset(data_tensor, target_tensor)
            ├── DataLoader(dataset, batch_size=1,
            │              shuffle=False,
            │              collate_fn=<function default_collate>)
            │   # define `collate_fn` yourself if needed
            └── sampler
                ├── SequentialSampler(data_source)
                └── RandomSampler(data_source)
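
A minimal sketch of a custom Dataset fed to a DataLoader (the data here is randomly generated for illustration):

    import torch
    from torch.utils.data import Dataset, DataLoader

    class MyDataset(Dataset):
        def __init__(self, n=100):
            self.x = torch.randn(n, 3)          # hypothetical features
            self.y = torch.randint(0, 2, (n,))  # hypothetical binary labels

        def __len__(self):
            return len(self.x)

        def __getitem__(self, idx):
            return self.x[idx], self.y[idx]

    loader = DataLoader(MyDataset(), batch_size=16, shuffle=True)
    for xb, yb in loader:  # xb: (16, 3), yb: (16,)
        pass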

NN (Neural Network) Model Construction [Docs]

This is PyTorch's core module; its docs are more involved and are split into:

  • torch.nn
  • torch.nn.functional
  • torch.nn.init
  • torch.optim
  • torch.autograd

Training

    torch
    ├── (Tensor)
    │   ├── backward()
    │   ├── cpu()
    │   ├── cuda()
    │   └── to(torch.device) # x = x.to(device)
    ├── cuda
    │   └── is_available()
    │       # if torch.cuda.is_available():
    │       ##     device = "cuda"
    │       ## else: device = "cpu"
    ├── nn as nn
    │   │### Models ###
    │   ├── Module
    │   │   ├── load_state_dict(torch.load(PATH))
    │   │   ├── train()
    │   │   └── eval()
    │   ├── Sequential(layers)
    │   │### Initializations ###
    │   ├── init
    │   │   └── uniform_(w) # in-place,
    │   │                   ## w is a `torch.Tensor`
    │   │### Layers ###
    │   ├── Linear(in_feat, out_feat)
    │   ├── Dropout(rate)
    │   │### Activations ###
    │   ├── Softmax(dim=None)
    │   ├── Sigmoid()
    │   ├── ReLU()
    │   ├── LeakyReLU(negative_slope=0.01)
    │   ├── Tanh()
    │   ├── GELU()
    │   ├── ReLU6() # Model Compression
    │   │   # --> corresponding functions
    │   ├── functional as F ────────────────────────────┐
    │   │   ├── softmax(input, dim=None)                │
    │   │   ├── sigmoid(input)                          │
    │   │   ├── relu(input)                             │
    │   │   ├── leaky_relu(input,                       │
    │   │   │              negative_slope=0.01)         │
    │   │   ├── tanh(input)                             │
    │   │   ├── gelu(input)                             │
    │   │   └── relu6(input)                            │
    │   │### Losses ###                                 │
    │   ├── MSELoss()                                   │
    │   ├── CrossEntropyLoss()                          │
    │   ├── BCELoss()                                   │
    │   ├── NLLLoss()                                   │
    │   │   # --> corresponding functions               │
    │   └── <functional as F> <─────────────────────────┘
    │       ├── mse_loss(input, target)
    │       ├── cross_entropy(input,
    │       │                 target: torch.LongTensor)
    │       ├── binary_cross_entropy(input, target)
    │       ├── log_softmax(input)
    │       └── nll_loss(log_softmax_output, target)
    │           # F.nll_loss(F.log_softmax(input), target)
    │### Optimizers ###
    ├── optim
    │   ├── (Optimizer)
    │   │   ├── zero_grad()
    │   │   ├── step()
    │   │   └── state_dict()
    │   ├── SGD(model.parameters(), lr=0.1, momentum=0.9)
    │   ├── Adagrad(model.parameters(), lr=0.01,
    │   │           lr_decay=0, weight_decay=0,
    │   │           initial_accumulator_value=0, eps=1e-10)
    │   ├── RMSprop(model.parameters(), lr=0.01,
    │   │           alpha=0.99, eps=1e-08, weight_decay=0,
    │   │           momentum=0)
    │   ├── Adam(model.parameters(), lr=0.001,
    │   │        betas=(0.9, 0.999), eps=1e-08,
    │   │        weight_decay=0)
    │   └── lr_scheduler
    │       └── ReduceLROnPlateau(optimizer)
    ├── load(PATH)
    ├── save(model, PATH)
    └── autograd
        └── backward(tensors)
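
A minimal training-loop sketch tying these pieces together (the model, loss, and data are illustrative assumptions; `loader` is the one from Data Preparation above):

    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"

    model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 2)).to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

    model.train()
    for epoch in range(10):
        for xb, yb in loader:
            xb, yb = xb.to(device), yb.to(device)
            optimizer.zero_grad()           # clear gradients from the last step
            loss = criterion(model(xb), yb) # forward pass + loss
            loss.backward()                 # backpropagate
            optimizer.step()                # update parameters

    torch.save(model.state_dict(), "model.pth") # save the weights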

Testing

    torch
    ├── nn
    │   └── Module
    │       ├── load_state_dict(torch.load(PATH))
    │       └── eval()
    ├── optim
    │   └── (Optimizer)
    │       └── state_dict()
    └── no_grad() # with torch.no_grad(): ...
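
A minimal evaluation sketch, under the same assumptions as the training example above:

    model.load_state_dict(torch.load("model.pth")) # restore saved weights
    model.eval()            # switch off dropout, use BatchNorm running stats

    correct, total = 0, 0
    with torch.no_grad():   # no gradient tracking at test time
        for xb, yb in loader:
            xb, yb = xb.to(device), yb.to(device)
            preds = model(xb).argmax(dim=1)
            correct += (preds == yb).sum().item()
            total += yb.size(0)
    print(f"accuracy: {correct / total:.3f}")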

CNN (Convolutional Neural Networks)

  • Convolutional Layers
  • Pooling Layers
  • torchvision docs

    torch
    ├── (Tensor)
    │ └── view(*shape)
    ├── nn
    │ │### Layers ###
    │ ├── Conv2d(in_channels, out_channels,
    │ │ kernel_size, stride=1, padding=0)
    │ ├── ConvTranspose2d(in_channels, out_channels,
    │ │ kernel_size, stride=1, padding=0,
    │ │ output_padding=0)
    │ ├── MaxPool2d(kernel_size, stride=None,
    │ │ padding=0, dilation=1)
    │ │ # stride default: kernel_size
    │ ├── BatchNorm2d(num_feat)
    │ └── BatchNorm1d(num_feat)
    ├── stack(tensors, dim=0)
    └── cat(tensors, dim=0)

    torchvision
    ├── models as models # Useful pretrained
    ├── transforms as transforms
    │ ├── Compose(transforms) # Wrapper
    │ ├── ToPILImage(mode=None)
    │ ├── RandomHorizontalFlip(p=0.5)
    │ ├── RandomRotation(degrees)
    │ ├── ToTensor()
    │ └── Resize(size)
    └── utils
        ├── make_grid(tensor, nrow=8, padding=2)
        └── save_image(tensor, filename, nrow=8, padding=2)
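
A minimal CNN sketch using the layers above (the channel counts and the 32x32 RGB input size are illustrative assumptions):

    import torch
    import torch.nn as nn
    from torchvision import transforms

    # hypothetical preprocessing pipeline
    transform = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.ToTensor(),  # PIL image -> (C, H, W) float tensor in [0, 1]
    ])

    class SimpleCNN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), # (3,32,32) -> (16,32,32)
                nn.BatchNorm2d(16),
                nn.ReLU(),
                nn.MaxPool2d(2),                            # -> (16,16,16)
            )
            self.fc = nn.Linear(16 * 16 * 16, num_classes)

        def forward(self, x):
            x = self.conv(x)
            x = x.view(x.size(0), -1) # flatten for the linear layer
            return self.fc(x)

    out = SimpleCNN()(torch.randn(4, 3, 32, 32)) # -> (4, 10)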

RNN (Recurrent Neural Networks)

  • Recurrent Layers
  • Gensim Word2Vec Docs

    torch
    ├── nn
    │ ├── Embedding(num_embed, embed_dim)
    │ │ # embedding = nn.Embedding(
    │ │ ## *(w2vmodel.wv.vectors.shape))
    │ ├── Parameter(params: torch.FloatTensor)
    │ │ # embedding.weight = nn.Parameter(
    │ │ ## torch.FloatTensor(w2vmodel.wv.vectors))
    │ ├── LongTensor # word indices are fed as torch.LongTensor
    │ │
    │ ├── LSTM(inp_size, hid_size, num_layers)
    │ │ # input: input, (h_0, c_0)
    │ └── GRU(inp_size, hid_size, num_layers)
    ├── stack(tensors, dim=0)
    └── cat(tensors, dim=0)

    gensim
    └── models
        └── word2vec
            └── Word2Vec(sentences) # sentences: a list of token lists
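
A minimal sketch wiring pretrained word2vec vectors into an LSTM (the tiny corpus and sizes are illustrative assumptions):

    import torch
    import torch.nn as nn
    from gensim.models import Word2Vec

    # hypothetical corpus: a list of token lists
    w2vmodel = Word2Vec([["hello", "world"], ["goodbye", "world"]],
                        vector_size=50, min_count=1) # `size=50` in older gensim

    num_embed, embed_dim = w2vmodel.wv.vectors.shape
    embedding = nn.Embedding(num_embed, embed_dim)
    embedding.weight = nn.Parameter(torch.FloatTensor(w2vmodel.wv.vectors))

    lstm = nn.LSTM(embed_dim, 32, num_layers=1, batch_first=True)

    idx = torch.LongTensor([[0, 1]])       # word indices, shape (batch, seq_len)
    out, (h_n, c_n) = lstm(embedding(idx)) # out: (1, 2, 32)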

Change Log

The complete structure is too large for quick lookup, so it was originally kept collapsed.

Complete structure of the commonly used parts of the PyTorch package

    torch
    ├── (Tensor)
    │   ├── view
    │   ├── item
    │   ├── cpu()
    │   ├── cuda()
    │   ├── to(torch.device)
    │   └── backward
    ├── nn
    │   ├── Module
    │   │   ├── load_state_dict
    │   │   ├── train
    │   │   └── eval
    │   ├── Sequential
    │   │   # Layers
    │   ├── Linear
    │   ├── Dropout
    │   │   ## CNN
    │   ├── Conv2d
    │   ├── ConvTranspose2d
    │   ├── MaxPool2d
    │   ├── BatchNorm2d
    │   ├── BatchNorm1d # GAN
    │   │   ## RNN
    │   ├── Embedding
    │   ├── LSTM
    │   ├── GRU
    │   │   # Loss functions
    │   ├── MSELoss
    │   ├── CrossEntropyLoss
    │   ├── BCELoss
    │   │   # Activations
    │   ├── Sigmoid
    │   ├── ReLU
    │   ├── Tanh
    │   ├── ReLU6 # Network Compression
    │   │   # Initializations
    │   ├── init
    │   │   └── uniform_
    │   ├── functional as F
    │   │   ├── relu
    │   │   ├── leaky_relu
    │   │   ├── gelu
    │   │   └── nll_loss
    │   └── Parameter
    ├── optim
    │   ├── SGD
    │   ├── RMSprop
    │   ├── Adagrad
    │   ├── Adam
    │   ├── AdamW
    │   ├── lr_scheduler
    │   └── (Optimizer)
    │       ├── zero_grad
    │       ├── state_dict
    │       └── step
    ├── utils
    │   └── data
    │       ├── Dataset
    │       ├── TensorDataset
    │       ├── DataLoader
    │       └── sampler
    │           ├── SequentialSampler
    │           └── RandomSampler
    ├── cuda
    │   └── is_available
    ├── autograd
    │   └── backward
    │   # tensor operations
    ├── no_grad
    ├── empty
    ├── stack
    ├── cat
    │   # model save/load
    ├── load
    └── save

    torchvision
    ├── transforms
    │   ├── Compose
    │   ├── ToPILImage
    │   ├── RandomHorizontalFlip
    │   ├── RandomRotation
    │   ├── ToTensor
    │   └── Resize
    ├── models
    └── utils
        ├── make_grid
        └── save_image
