TensorFlow2、CUDA10、cuDNN7.6.5

浅浅的花香味﹌ · 2023-07-02 03:27


日萌社

Artificial intelligence (AI): Keras, PyTorch, MXNet, TensorFlow, PaddlePaddle deep learning in practice (updated from time to time)


Installation

TensorFlow 2, CUDA 10, cuDNN 7.6.5

Anaconda3 (Python 3.7), TensorFlow 2, CUDA 10, cuDNN 7.6.5

Setting up a TensorFlow 2.0 environment

Installing Keras and TensorFlow on Windows (install CUDA and cuDNN first, then Keras and TensorFlow)

Download the NVIDIA driver: https://www.geforce.cn/drivers

TensorFlow 2.0 requires CUDA 10, so install driver version 410.48 or newer.
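Once the driver is installed, `nvidia-smi` prints the driver version in its top banner. As a minimal sketch (the `driver_ok` helper and the sample version strings are illustrative, not from the original post), the 410.48 minimum can be checked like this:

```python
# CUDA 10.0 requires NVIDIA driver >= 410.48 (per NVIDIA's release notes).
MIN_DRIVER = (410, 48)

def driver_ok(version: str) -> bool:
    """Compare a driver version string such as '418.67' against the minimum."""
    major, minor = (int(p) for p in version.split('.')[:2])
    return (major, minor) >= MIN_DRIVER

print(driver_ok('418.67'))  # -> True, new enough for CUDA 10
print(driver_ok('396.54'))  # -> False, too old
```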



CUDA and cuDNN downloads (Baidu Pan)

Link: https://pan.baidu.com/s/1oqYxOYob9MBuqHYsxkS-IA
Extraction code: y8ub

Link: https://pan.baidu.com/s/1YqfX0ObJSSUIaHYW3OW3cQ
Extraction code: 21lu

Link: https://pan.baidu.com/s/1yF7e6ntWpXpdPWFLN4gi2g
Extraction code: f1zi

  1. CUDA / TensorFlow version compatibility list: https://tensorflow.google.cn/install/source#linux
  2. CUDA downloads: https://developer.nvidia.com/cuda-toolkit-archive

CUDA, cuDNN, and TensorFlow version requirements

Note: cuDNN v7.4.2.24 may produce errors (verified first-hand), while cuDNN v7.6.5.32 does not. Although the officially recommended pairing is cuDNN v7.4.2.24, cuDNN v7.6.5.32 is the better choice.
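For reference, the tested-configuration table linked above can be condensed into a small lookup. This is an illustrative excerpt only (the `TESTED_CONFIGS` dict and `required_versions` helper are not part of the original post); consult the official page for the full table:

```python
# Excerpt of TensorFlow's tested GPU build configurations, transcribed
# from https://tensorflow.google.cn/install/source#linux.
TESTED_CONFIGS = {
    'tensorflow_gpu-2.0.0':  {'cuda': '10.0', 'cudnn': '7.4'},
    'tensorflow_gpu-1.14.0': {'cuda': '10.0', 'cudnn': '7.4'},
    'tensorflow_gpu-1.12.0': {'cuda': '9.0',  'cudnn': '7'},
}

def required_versions(tf_build: str) -> dict:
    """Look up the CUDA/cuDNN pairing a TensorFlow build was tested against."""
    return TESTED_CONFIGS[tf_build]

print(required_versions('tensorflow_gpu-2.0.0'))  # {'cuda': '10.0', 'cudnn': '7.4'}
```

As the note above says, the table's cuDNN 7.4 pairing for TensorFlow 2.0 can be swapped for 7.6.5 in practice.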

(The original post walked through the CUDA and cuDNN installers with a series of screenshots; the images are not recoverable here.)

Test whether TensorFlow can run on the GPU

A quick interactive check:

```python
>>> import tensorflow as tf
>>> tf.test.is_gpu_available()
```

A CPU-vs-GPU matrix-multiplication benchmark script:

```python
import tensorflow as tf
import timeit

# Allocate test matrices on the CPU.
with tf.device('/cpu:0'):
    cpu_a = tf.random.normal([10000, 1000])
    cpu_b = tf.random.normal([1000, 2000])
    print(cpu_a.device, cpu_b.device)

# Allocate the same-shaped matrices on the GPU.
with tf.device('/gpu:0'):
    gpu_a = tf.random.normal([10000, 1000])
    gpu_b = tf.random.normal([1000, 2000])
    print(gpu_a.device, gpu_b.device)

def cpu_run():
    with tf.device('/cpu:0'):
        c = tf.matmul(cpu_a, cpu_b)
    return c

def gpu_run():
    with tf.device('/gpu:0'):
        c = tf.matmul(gpu_a, gpu_b)
    return c

# Warm up: the first GPU calls include one-time initialization overhead.
cpu_time = timeit.timeit(cpu_run, number=10)
gpu_time = timeit.timeit(gpu_run, number=10)
print('warmup:', cpu_time, gpu_time)

# Timed runs.
cpu_time = timeit.timeit(cpu_run, number=10)
gpu_time = timeit.timeit(gpu_run, number=10)
print('run time:', cpu_time, gpu_time)
```

