
# Don't know what your neural network looks like? Here are some simple PyTorch visualization tricks (1)

陈扬


With hardware performance improving over the past few years, deep learning practitioners have been designing ever deeper and more complex neural networks. When we pick up a model from the open-source community, the authors sometimes do not release the model code directly, only a file of model parameters. To reproduce the algorithm, we often have to rebuild the network by hand, imitating the original authors' design. To make that kind of network design easier, I have put together some PyTorch network visualization methods drawn from my recent work.

All of the code below is open source: https://github.com/OUCMachineLearning/OUCML/blob/master/One%20Day%20One%20GAN/day11/pytorch_show_1.ipynb

## pytorch-summary

https://github.com/sksq96/pytorch-summary

This is the simplest way to print a PyTorch network structure, and the most environment-independent: a lightweight PyTorch extension in the style of Keras's `model.summary()`. Anyone who has used Keras will have seen its concise API for viewing a model summary, which is very useful when debugging a network. pytorch-summary is bare-bones code that replicates the same thing in PyTorch, aiming to provide information that complements what `print(your_model)` gives you.

### Usage

Install torchsummary with pip:

```bash
pip install torchsummary
```

Then:

```python
from torchsummary import summary
summary(your_model, input_size=(channels, H, W))
```

A simple example:

CNN for MNIST

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchsummary import summary

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # PyTorch v0.4.0+
model = Net().to(device)
summary(model, (1, 28, 28))
```
```
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 10, 24, 24]             260
            Conv2d-2             [-1, 20, 8, 8]           5,020
         Dropout2d-3             [-1, 20, 8, 8]               0
            Linear-4                   [-1, 50]          16,050
            Linear-5                   [-1, 10]             510
================================================================
Total params: 21,840
Trainable params: 21,840
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.06
Params size (MB): 0.08
Estimated Total Size (MB): 0.15
----------------------------------------------------------------
```
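As a sanity check on the `Param #` column (my own back-of-the-envelope arithmetic, not part of the torchsummary output): a `Conv2d` layer holds `(kernel_h * kernel_w * in_channels + 1 bias) * out_channels` parameters, and a `Linear` layer holds `(in_features + 1 bias) * out_features`:

```python
# Conv2d: (kh * kw * in_channels + 1) * out_channels
assert (5 * 5 * 1 + 1) * 10 == 260      # conv1
assert (5 * 5 * 10 + 1) * 20 == 5020    # conv2

# Linear: (in_features + 1) * out_features
assert (320 + 1) * 50 == 16050          # fc1
assert (50 + 1) * 10 == 510             # fc2
```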

### Visualizing the VGG from torchvision

```python
import torch
from torchvision import models
from torchsummary import summary

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
vgg = models.vgg11_bn().to(device)
summary(vgg, (3, 224, 224))
```
```
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 224, 224]           1,792
       BatchNorm2d-2         [-1, 64, 224, 224]             128
              ReLU-3         [-1, 64, 224, 224]               0
         MaxPool2d-4         [-1, 64, 112, 112]               0
            Conv2d-5        [-1, 128, 112, 112]          73,856
       BatchNorm2d-6        [-1, 128, 112, 112]             256
              ReLU-7        [-1, 128, 112, 112]               0
         MaxPool2d-8          [-1, 128, 56, 56]               0
            Conv2d-9          [-1, 256, 56, 56]         295,168
      BatchNorm2d-10          [-1, 256, 56, 56]             512
             ReLU-11          [-1, 256, 56, 56]               0
           Conv2d-12          [-1, 256, 56, 56]         590,080
      BatchNorm2d-13          [-1, 256, 56, 56]             512
             ReLU-14          [-1, 256, 56, 56]               0
        MaxPool2d-15          [-1, 256, 28, 28]               0
           Conv2d-16          [-1, 512, 28, 28]       1,180,160
      BatchNorm2d-17          [-1, 512, 28, 28]           1,024
             ReLU-18          [-1, 512, 28, 28]               0
           Conv2d-19          [-1, 512, 28, 28]       2,359,808
      BatchNorm2d-20          [-1, 512, 28, 28]           1,024
             ReLU-21          [-1, 512, 28, 28]               0
        MaxPool2d-22          [-1, 512, 14, 14]               0
           Conv2d-23          [-1, 512, 14, 14]       2,359,808
      BatchNorm2d-24          [-1, 512, 14, 14]           1,024
             ReLU-25          [-1, 512, 14, 14]               0
           Conv2d-26          [-1, 512, 14, 14]       2,359,808
      BatchNorm2d-27          [-1, 512, 14, 14]           1,024
             ReLU-28          [-1, 512, 14, 14]               0
        MaxPool2d-29            [-1, 512, 7, 7]               0
           Linear-30                 [-1, 4096]     102,764,544
             ReLU-31                 [-1, 4096]               0
          Dropout-32                 [-1, 4096]               0
           Linear-33                 [-1, 4096]      16,781,312
             ReLU-34                 [-1, 4096]               0
          Dropout-35                 [-1, 4096]               0
           Linear-36                 [-1, 1000]       4,097,000
================================================================
Total params: 132,868,840
Trainable params: 132,868,840
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 181.84
Params size (MB): 506.85
Estimated Total Size (MB): 689.27
----------------------------------------------------------------
```

The upside is that we get a very direct view of how much memory one batch of input tensors needs as it passes through the network; the downside is that we cannot see how the layers are connected to each other.
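Two quick follow-ups (my own sketch, not from the original post). First, the `Params size` figure is just the parameter count times 4 bytes per float32 value; second, for layer connectivity, PyTorch's built-in `repr` at least shows the module hierarchy, even if not the data flow:

```python
# Params size: 132,868,840 float32 parameters * 4 bytes each
print(132868840 * 4 / 1024 ** 2)  # ~506.85 MB, matching the table above

# print(vgg) shows the nested module structure, though not the connections
print(vgg)
```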

## HiddenLayer

https://github.com/waleedka/hiddenlayer

HiddenLayer is a lightweight library for neural network graphs and training metrics that works with PyTorch, TensorFlow, and Keras.
It is simple to use and works well in Jupyter Notebook. It is not meant to replace advanced tools such as TensorBoard, but rather to serve the cases where such tools are overkill for the task.

### Install graphviz

conda is recommended; it sets up the required environment in one step:

```bash
conda install graphviz python-graphviz
```

Otherwise, install the GraphViz binaries for your system and then the `graphviz` Python package with pip.

### Install HiddenLayer

```bash
pip install hiddenlayer
```

### A simple example

VGG16

```python
import torch
import torchvision.models
import hiddenlayer as hl

# VGG16 (note: torchvision's vgg16() is the variant without BatchNorm)
model = torchvision.models.vgg16()

# Build the HiddenLayer graph
# Jupyter Notebook renders it automatically
hl.build_graph(model, torch.zeros([1, 3, 224, 224]))
```

Before visualizing the network, we instantiate VGG and then feed HiddenLayer a rank-4 tensor of shape [1, 3, 224, 224] (equivalent to a single 224x224 RGB image) so it can trace the graph.

*(Figure: HiddenLayer graph of VGG16.)*

### Using transforms to fold residual blocks into compact nodes

```python
# ResNet152 (the original comment said "Resnet101", but resnet152 is what is built)
device = torch.device("cuda")
print("device = ", device)
model = torchvision.models.resnet152().cuda()

# Rather than using the default transforms, build custom ones to group
# nodes of residual and bottleneck blocks.
transforms = [
    # Fold Conv, BN, RELU layers into one
    hl.transforms.Fold("Conv > BatchNorm > Relu", "ConvBnRelu"),
    # Fold Conv, BN layers together
    hl.transforms.Fold("Conv > BatchNorm", "ConvBn"),
    # Fold bottleneck blocks
    hl.transforms.Fold("""
        ((ConvBnRelu > ConvBnRelu > ConvBn) | ConvBn) > Add > Relu
        """, "BottleneckBlock", "Bottleneck Block"),
    # Fold residual blocks
    hl.transforms.Fold("""ConvBnRelu > ConvBnRelu > ConvBn > Add > Relu""",
                       "ResBlock", "Residual Block"),
    # Fold repeated blocks
    hl.transforms.FoldDuplicates(),
]

# Display the graph using the transforms above
resnet152 = hl.build_graph(model, torch.zeros([1, 3, 224, 224]).cuda(), transforms=transforms)
```

*(Figure: HiddenLayer graph of ResNet152 with bottleneck and residual blocks folded.)*

Save the graph to an image file:

```python
resnet152.save("resnet152")
```
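As an aside, and as an assumption on my part about HiddenLayer's graphviz backend rather than something shown in the original notebook, `Graph.save` also appears to accept a `format` argument (the default output is PDF):

```python
# format="png" is an assumed parameter; check help(resnet152.save)
# in your installed version of hiddenlayer.
resnet152.save("resnet152", format="png")
```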

With the structure graph in hand, the natural next question is: how do we visualize what the model learns during training?

For example, we often see plots like this in papers:

*(Figure: example of training-metric curves as typically shown in papers.)*

```python
import os
import time
import random
import numpy as np
import torch
import torchvision.models
import torch.nn as nn
from torchvision import datasets, transforms
import hiddenlayer as hl
```
A simple simulated-training example:
```python
# New history and canvas objects
# (history1 is assumed to exist from an earlier, identical loop -- experiment 1;
# see the notebook linked above)
history2 = hl.History()
canvas2 = hl.Canvas()

# Simulate a training loop with two metrics: loss and accuracy
loss = 1
accuracy = 0
for step in range(800):
    # Fake loss and accuracy
    loss -= loss * np.random.uniform(-.09, 0.1)
    accuracy = max(0, accuracy + (1 - accuracy) * np.random.uniform(-.09, 0.1))

    # Log metrics and display them at certain intervals
    if step % 10 == 0:
        history2.log(step, loss=loss, accuracy=accuracy)

        # Draw two plots
        # Enclose them in a "with" context to ensure they render together
        with canvas2:
            canvas2.draw_plot([history1["loss"], history2["loss"]],
                              labels=["Loss 1", "Loss 2"])
            canvas2.draw_plot([history1["accuracy"], history2["accuracy"]],
                              labels=["Accuracy 1", "Accuracy 2"])
        time.sleep(0.1)
```

*(Figure: live loss and accuracy curves for the two experiments.)*

Serialize the results to disk and load them back:

```python
# Save experiments 1 and 2
history1.save("experiment1.pkl")
history2.save("experiment2.pkl")

# Load them again. To verify it's working, load them into new objects.
h1 = hl.History()
h2 = hl.History()
h1.load("experiment1.pkl")
h2.load("experiment2.pkl")
```
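A quick way to confirm the reload worked (a sketch of mine reusing the `draw_plot` API shown above) is to plot the two reloaded histories against each other:

```python
# Compare the reloaded experiments on one canvas
canvas = hl.Canvas()
with canvas:
    canvas.draw_plot([h1["loss"], h2["loss"]],
                     labels=["Experiment 1", "Experiment 2"])
```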
Displaying a metric as a pie chart:
```python
class MyCanvas(hl.Canvas):
    """Extending Canvas to add a pie chart method."""
    def draw_pie(self, metric):
        # Method name must start with 'draw_' for the Canvas to manage it automatically
        # Use the provided matplotlib Axes in self.ax
        self.ax.axis('equal')  # set square aspect ratio
        # Get the latest value of the metric
        value = np.clip(metric.data[-1], 0, 1)
        # Draw the pie chart
        self.ax.pie([value, 1 - value], labels=["Accuracy", ""])
```
```python
history3 = hl.History()
canvas3 = MyCanvas()  # my custom Canvas

# Simulate a training loop
loss = 1
accuracy = 0
for step in range(400):
    # Fake loss and accuracy
    loss -= loss * np.random.uniform(-.09, 0.1)
    accuracy = max(0, accuracy + (1 - accuracy) * np.random.uniform(-.09, 0.1))

    if step % 10 == 0:
        # Log loss and accuracy
        history3.log(step, loss=loss, accuracy=accuracy)
        # Log a fake image metric (e.g. an image generated by a GAN)
        image = np.sin(np.sum(((np.indices([32, 32]) - 16) * 0.5 * accuracy) ** 2, 0))
        history3.log(step, image=image)

        # Display
        with canvas3:
            canvas3.draw_pie(history3["accuracy"])
            canvas3.draw_plot([history3["accuracy"], history3["loss"]])
            canvas3.draw_image(history3["image"])
        time.sleep(0.1)
```

*(Figure: pie chart, metric curves, and the fake generated image updating during training.)*

### Hand-building a simple classification network
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
import numpy as np
import os
import argparse
from torchsummary import summary   # added so this block runs standalone
import hiddenlayer as hl           # added so this block runs standalone

# Simple Convolutional Network
class CifarModel(nn.Module):
    def __init__(self):
        super(CifarModel, self).__init__()
        self.c2d = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.features = nn.Sequential(
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.AdaptiveMaxPool2d(1)
        )
        self.classifier = nn.Sequential(
            nn.Linear(32, 32),
            # TODO: nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.Linear(32, 10))

    def forward(self, x):
        x_0 = self.c2d(x)
        x1 = self.features(x_0)
        self.feature_map = x_0  # stash the first conv's output so we can visualize it later
        x2 = x1.view(x1.size(0), -1)
        x3 = self.classifier(x2)
        return x3

model = CifarModel().cuda()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Show parameters
summary(model, (3, 32, 32))
hl.build_graph(model, torch.zeros([1, 3, 32, 32]).cuda())
```
```
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 16, 32, 32]             448
       BatchNorm2d-2           [-1, 16, 32, 32]              32
              ReLU-3           [-1, 16, 32, 32]               0
            Conv2d-4           [-1, 16, 32, 32]           2,320
       BatchNorm2d-5           [-1, 16, 32, 32]              32
              ReLU-6           [-1, 16, 32, 32]               0
         MaxPool2d-7           [-1, 16, 16, 16]               0
            Conv2d-8           [-1, 32, 16, 16]           4,640
       BatchNorm2d-9           [-1, 32, 16, 16]              64
             ReLU-10           [-1, 32, 16, 16]               0
           Conv2d-11           [-1, 32, 16, 16]           9,248
      BatchNorm2d-12           [-1, 32, 16, 16]              64
             ReLU-13           [-1, 32, 16, 16]               0
        MaxPool2d-14             [-1, 32, 8, 8]               0
           Conv2d-15             [-1, 32, 8, 8]           9,248
      BatchNorm2d-16             [-1, 32, 8, 8]              64
             ReLU-17             [-1, 32, 8, 8]               0
           Conv2d-18             [-1, 32, 8, 8]           9,248
      BatchNorm2d-19             [-1, 32, 8, 8]              64
             ReLU-20             [-1, 32, 8, 8]               0
 AdaptiveMaxPool2d-21            [-1, 32, 1, 1]               0
           Linear-22                   [-1, 32]           1,056
             ReLU-23                   [-1, 32]               0
           Linear-24                   [-1, 10]             330
================================================================
Total params: 36,858
Trainable params: 36,858
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 1.27
Params size (MB): 0.14
Estimated Total Size (MB): 1.42
----------------------------------------------------------------
```

*(Figure: HiddenLayer graph of CifarModel.)*

Load the data:
```python
# Data
print('==> Preparing data..')
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
```
==> Preparing data..
Files already downloaded and verified
Files already downloaded and verified
train_dataset.data    Tensor  uint8  (50000, 32, 32, 3)  min: 0.000  max: 255.000
train_dataset.labels  list    len: 50000  [6, 9, 9, 4, 1, 1, 2, 7, 8, 3]
test_dataset.data     Tensor  uint8  (10000, 32, 32, 3)  min: 0.000  max: 255.000
test_dataset.labels   list    len: 10000  [3, 8, 8, 0, 6, 6, 1, 6, 3, 1]
```
Train the classifier:
```python
step = (0, 0)  # tuple of (epoch, batch_ix)
cifar_history = hl.History()
cifar_canvas = hl.Canvas()

# Training loop
for epoch in range(10):
    train_iter = iter(trainloader)
    for batch_ix, (inputs, labels) in enumerate(train_iter):
        # Update global step counter
        step = (epoch, batch_ix)

        optimizer.zero_grad()
        inputs = inputs.to(device)
        labels = labels.to(device)

        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # Print statistics
        if batch_ix and batch_ix % 100 == 0:
            # Compute accuracy
            pred_labels = np.argmax(outputs.detach().cpu().numpy(), 1)
            accuracy = np.mean(pred_labels == labels.detach().cpu().numpy())

            # Log metrics to history
            cifar_history.log((epoch, batch_ix),
                              loss=loss, accuracy=accuracy,
                              conv1_weight=model.c2d.weight,
                              feature_map=model.feature_map[0, 1].detach().cpu().numpy())

            # Visualize metrics
            with cifar_canvas:
                cifar_canvas.draw_plot([cifar_history["loss"], cifar_history["accuracy"]])
                cifar_canvas.draw_image(cifar_history["feature_map"])
                cifar_canvas.draw_hist(cifar_history["conv1_weight"])
```

*(Figure: loss/accuracy curves, a first-layer feature map, and a histogram of conv1 weights updating as training proceeds.)*

Using a hook-style approach, we grabbed the output of the classifier's first convolutional layer (stashed in `self.feature_map` during `forward`) along with its `weight` parameter, and used `draw_image` and `draw_hist` to animate how they evolve as the model learns (the full implementation is in my GitHub repo linked above).

You can follow the same pattern to capture the feature-map output of any particular layer.
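For completeness, here is a minimal sketch (my own illustration, not from the original notebook) of the hook approach proper: `register_forward_hook` captures a layer's output without modifying the model's `forward` code.

```python
# Minimal forward-hook sketch; assumes the CifarModel instance `model` above.
feature_maps = {}

def save_feature_map(module, inputs, output):
    # Called on every forward pass of the hooked module
    feature_maps["c2d"] = output.detach().cpu()

hook = model.c2d.register_forward_hook(save_feature_map)
_ = model(torch.zeros([1, 3, 32, 32]).cuda())  # any forward pass fills the dict
hook.remove()                                  # remove the hook when done
print(feature_maps["c2d"].shape)               # torch.Size([1, 16, 32, 32])
```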
