@cleardusk · 2016-03-06



Server configuration

One K40 and two K20s.

    Device 0: "Tesla K40c"
      CUDA Driver Version / Runtime Version          7.5 / 7.5
      CUDA Capability Major/Minor version number:    3.5
      Total amount of global memory:                 11520 MBytes (12079136768 bytes)
      (15) Multiprocessors, (192) CUDA Cores/MP:     2880 CUDA Cores
      GPU Max Clock rate:                            745 MHz (0.75 GHz)
      Memory Clock rate:                             3004 Mhz
      Memory Bus Width:                              384-bit
      L2 Cache Size:                                 1572864 bytes
      Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
      Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
      Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
      Total amount of constant memory:               65536 bytes
      Total amount of shared memory per block:       49152 bytes
      Total number of registers available per block: 65536
      Warp size:                                     32
      Maximum number of threads per multiprocessor:  2048
      Maximum number of threads per block:           1024
      Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
      Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
      Maximum memory pitch:                          2147483647 bytes
      Texture alignment:                             512 bytes
      Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
      Run time limit on kernels:                     No
      Integrated GPU sharing Host Memory:            No
      Support host page-locked memory mapping:       Yes
      Alignment requirement for Surfaces:            Yes
      Device has ECC support:                        Enabled
      Device supports Unified Addressing (UVA):      Yes
      Device PCI Domain ID / Bus ID / location ID:   0 / 132 / 0
      Compute Mode:
         < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

    Device 1: "Tesla K20m"
      CUDA Driver Version / Runtime Version          7.5 / 7.5
      CUDA Capability Major/Minor version number:    3.5
      Total amount of global memory:                 4800 MBytes (5032706048 bytes)
      (13) Multiprocessors, (192) CUDA Cores/MP:     2496 CUDA Cores
      GPU Max Clock rate:                            706 MHz (0.71 GHz)
      Memory Clock rate:                             2600 Mhz
      Memory Bus Width:                              320-bit
      L2 Cache Size:                                 1310720 bytes
      Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
      Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
      Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
      Total amount of constant memory:               65536 bytes
      Total amount of shared memory per block:       49152 bytes
      Total number of registers available per block: 65536
      Warp size:                                     32
      Maximum number of threads per multiprocessor:  2048
      Maximum number of threads per block:           1024
      Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
      Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
      Maximum memory pitch:                          2147483647 bytes
      Texture alignment:                             512 bytes
      Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
      Run time limit on kernels:                     No
      Integrated GPU sharing Host Memory:            No
      Support host page-locked memory mapping:       Yes
      Alignment requirement for Surfaces:            Yes
      Device has ECC support:                        Enabled
      Device supports Unified Addressing (UVA):      Yes
      Device PCI Domain ID / Bus ID / location ID:   0 / 3 / 0
      Compute Mode:
         < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
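
The heading above mentions one K40 and two K20s, yet only two devices appear in this dump. A quick way to cross-check what the driver actually enumerates (a sketch; exact output depends on the installed driver):

    # list every GPU the NVIDIA driver sees, with index and UUID
    nvidia-smi -L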

Commands to install CUDA (Ubuntu 14.04, from the downloaded repo package)

    sudo gdebi cuda-repo-ubuntu1404_7.5-18_amd64.deb
    sudo apt-get update
    sudo apt-get install cuda
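
Before setting up the environment it is worth confirming that both the driver and the toolkit actually landed. A minimal check, assuming the default install location /usr/local/cuda-7.5:

    # driver version as reported by the loaded kernel module
    cat /proc/driver/nvidia/version
    # toolkit version; nvcc is not on PATH yet, so use the full path
    /usr/local/cuda-7.5/bin/nvcc --version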

Environment configuration in bash.bashrc

    export CUDA_HOME=/usr/local/cuda-7.5
    export LD_LIBRARY_PATH=${CUDA_HOME}/lib64
    export PATH=${CUDA_HOME}/bin:${PATH}
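
To apply the variables in the current shell and confirm the toolchain is picked up (assuming the file edited is the system-wide /etc/bash.bashrc):

    source /etc/bash.bashrc
    which nvcc        # expect /usr/local/cuda-7.5/bin/nvcc
    nvcc --version    # expect "release 7.5"
    echo $LD_LIBRARY_PATH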

Testing

    $ cuda-install-samples-7.5.sh ~
    $ cd ~/NVIDIA_CUDA-7.5_Samples
    $ cd 1_Utilities/deviceQuery
    $ make
    $ ./deviceQuery
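
deviceQuery should finish with "Result = PASS" and print the kind of per-device listing shown above. As an extra sanity check, the bandwidthTest sample in the same samples tree exercises host-device transfers (a sketch, assuming the samples were copied to ~ as above):

    cd ~/NVIDIA_CUDA-7.5_Samples/1_Utilities/bandwidthTest
    make
    ./bandwidthTest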

Rebuilding Caffe

It turns out the existing Caffe build was compiled against CUDA 7.0; the MNIST example fails because it cannot find the 7.0 runtime library:

    # run from the caffe directory
    ./examples/mnist/train_lenet.sh
    ./build/tools/caffe: error while loading shared libraries: libcudart.so.7.0: cannot open shared object file: No such file or directory
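
The fix is to rebuild Caffe against the 7.5 toolkit. A minimal sketch, assuming a standard Makefile-based Caffe checkout (the sed pattern and -j value are illustrative, not the exact commands used here):

    # point Caffe at CUDA 7.5 in Makefile.config
    sed -i 's|^CUDA_DIR :=.*|CUDA_DIR := /usr/local/cuda-7.5|' Makefile.config
    make clean
    make all -j8 && make test -j8 && make runtest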

Compiling OpenBLAS

https://github.com/xianyi/OpenBLAS/wiki/Installation-Guide
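
Roughly what the guide's Linux instructions boil down to (a sketch; the install PREFIX and the Makefile.config lines are assumptions chosen to match this server, not taken from the original note):

    git clone https://github.com/xianyi/OpenBLAS.git
    cd OpenBLAS
    make -j8
    sudo make install PREFIX=/opt/OpenBLAS
    # then point Caffe at it in Makefile.config:
    #   BLAS := open
    #   BLAS_INCLUDE := /opt/OpenBLAS/include
    #   BLAS_LIB := /opt/OpenBLAS/lib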

Installing cuDNN
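
For the cuDNN v3/v4 tarballs of this era the install is a straight copy into the toolkit tree, roughly as below (the tarball name is a placeholder for whichever archive was downloaded from NVIDIA); Caffe then needs USE_CUDNN := 1 enabled in Makefile.config and a rebuild:

    tar -xzvf cudnn-7.0-linux-x64-v3.0.tgz   # placeholder filename
    sudo cp cuda/include/cudnn.h /usr/local/cuda-7.5/include/
    sudo cp cuda/lib64/libcudnn* /usr/local/cuda-7.5/lib64/
    sudo chmod a+r /usr/local/cuda-7.5/include/cudnn.h /usr/local/cuda-7.5/lib64/libcudnn*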

Various builds

1. cuda + atlas, no cudnn

GPU-related

Checking GPU usage

    nvidia-smi
    nvidia-smi --loop=1
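
For logging or scripting, the CSV query form is handier than the full table (the fields are standard nvidia-smi query properties; adjust the interval as needed):

    nvidia-smi --query-gpu=index,name,utilization.gpu,memory.used,memory.total --format=csv --loop=1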

Using multiple GPUs

    caffe -gpu all
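
-gpu is a flag of the caffe tool itself, so for the MNIST runs below it can be passed straight to the train command; roughly (solver path as in the stock Caffe examples):

    # all three cards
    ./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt -gpu all
    # only the K40 (device 0)
    ./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt -gpu 0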

Tests (MNIST)

CUDA only, all GPUs (K40, K20, K20): 5'42''

    Sat Mar 5 13:52:29 CST 2016
    Sat Mar 5 13:58:11 CST 2016

CUDA only, K40 only: 4'42''

    Sat Mar 5 14:01:47 CST 2016
    Sat Mar 5 14:06:29 CST 2016

CUDA + OpenBLAS, K40: 4'49''

    Sat Mar 5 14:18:02 CST 2016
    Sat Mar 5 14:22:51 CST 2016

CUDA + cuDNN 3.0, K40: 41'', 37'', roughly 37'' on average

    #1
    Sat Mar 5 15:16:35 CST 2016
    Sat Mar 5 15:17:16 CST 2016
    #2
    Sat Mar 5 15:19:19 CST 2016
    Sat Mar 5 15:19:56 CST 2016

CUDA + cuDNN 4.0: about the same as above.
CUDA + cuDNN 4.0 + OpenBLAS: still about the same.
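
The timestamps above look like the output of date wrapped around each run; a simple way to reproduce that kind of wall-clock measurement (assuming the stock train_lenet.sh, which forwards extra arguments to caffe train):

    date; ./examples/mnist/train_lenet.sh -gpu 0; date
    # or simply:
    time ./examples/mnist/train_lenet.sh -gpu 0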
