TensorFlow Windows Build with GPU Support

Step-by-step Windows build

Although I have been using Caffe for research work (and it has served me well), I still follow the TensorFlow community closely. I recently noticed that TF now has a Windows build, so I tried it out myself.

Instructions: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/cmake

Pre-requisites:

Microsoft Windows 10

  1. Install the pre-requisites detailed above, and set up your environment.

    • The following commands assume that you are using the Windows Command Prompt (cmd.exe). You will need to set up your environment to use the appropriate toolchain, i.e. the 64-bit tools. (Some of the binary targets we will build are too large for the 32-bit tools, and they will fail with out-of-memory errors.) The typical command to set up your environment is:

      D:\temp> "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\amd64\vcvarsall.bat"
      
    • When building with GPU support, after installing the cuDNN zip file from NVIDIA, append its bin directory to your PATH environment variable. If TensorFlow fails to find the CUDA DLLs during initialization, check your PATH environment variable: it should contain the directory of the CUDA DLLs and the directory of the cuDNN DLL. For example:
      D:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin
      D:\local\cuda\bin
      
    • We assume that cmake and git are installed and in your %PATH%. If for example cmake is not in your path and it is installed in C:\Program Files (x86)\CMake\bin\cmake.exe, you can add this directory to your %PATH% as follows:
      D:\temp> set PATH=%PATH%;C:\Program Files (x86)\CMake\bin
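
      This prerequisite check can also be scripted. A minimal Python sketch, using only the standard library, that reports which of the assumed tools are missing from PATH (the tool names are simply the ones this guide relies on):

      ```python
      import shutil

      def missing_tools(tools):
          """Return the subset of `tools` that cannot be resolved via PATH."""
          return [t for t in tools if shutil.which(t) is None]

      # Tools this build guide assumes are available on %PATH%.
      required = ["cmake", "git"]
      for tool in missing_tools(required):
          print("WARNING: %s not found on PATH" % tool)
      ```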
      
  2. Clone the TensorFlow repository and create a working directory for your build:
    D:\temp> git clone https://github.com/tensorflow/tensorflow.git
    D:\temp> cd tensorflow\tensorflow\contrib\cmake
    D:\temp\tensorflow\tensorflow\contrib\cmake> mkdir build
    D:\temp\tensorflow\tensorflow\contrib\cmake> cd build
    D:\temp\tensorflow\tensorflow\contrib\cmake\build>
    
  3. Invoke CMake to create Visual Studio solution and project files.

    N.B. This assumes that cmake.exe is in your %PATH% environment variable. The other paths are for illustrative purposes only, and may be different on your platform. The ^ character is a line continuation and must be the last character on each line.

    D:\...\build> cmake .. -A x64 -DCMAKE_BUILD_TYPE=Release ^
    More? -DSWIG_EXECUTABLE=C:/tools/swigwin-3.0.10/swig.exe ^
    More? -DPYTHON_EXECUTABLE=C:/Users/%USERNAME%/AppData/Local/Continuum/Anaconda3/python.exe ^
    More? -DPYTHON_LIBRARIES=C:/Users/%USERNAME%/AppData/Local/Continuum/Anaconda3/libs/python35.lib
    

    To build with GPU support, append ^ to the last line above and continue with:

    More? -Dtensorflow_ENABLE_GPU=ON ^
    More? -DCUDNN_HOME="D:\...\cudnn"
    

    Note that the -DCMAKE_BUILD_TYPE=Release flag must match the build configuration that you choose when invoking msbuild. The known-good values are Release and RelWithDebInfo. The Debug build type is not currently supported, because it relies on a Debug library for Python (python35d.lib) that is not distributed by default.

    There are various options that can be specified when generating the solution and project files:

    • -DCMAKE_BUILD_TYPE=(Release|RelWithDebInfo): As noted above, this must match the build configuration that you choose when invoking MSBuild in step 4. The known-good values are Release and RelWithDebInfo; Debug is not currently supported because it relies on a Debug library for Python (python35d.lib) that is not distributed by default.
    • -Dtensorflow_BUILD_ALL_KERNELS=(ON|OFF). Defaults to ON. You can build a small subset of the kernels for a faster build by setting this option to OFF.
    • -Dtensorflow_BUILD_CC_EXAMPLE=(ON|OFF). Defaults to ON. Generate project files for a simple C++ example training program.
    • -Dtensorflow_BUILD_PYTHON_BINDINGS=(ON|OFF). Defaults to ON. Generate project files for building a PIP package containing the TensorFlow runtime and its Python bindings.
    • -Dtensorflow_ENABLE_GRPC_SUPPORT=(ON|OFF). Defaults to ON. Include gRPC support and the distributed client and server code in the TensorFlow runtime.
    • -Dtensorflow_ENABLE_SSL_SUPPORT=(ON|OFF). Defaults to OFF. Include SSL support (for making secure HTTP requests) in the TensorFlow runtime. This support is incomplete, and will be used for Google Cloud Storage support.
    • -Dtensorflow_ENABLE_GPU=(ON|OFF). Defaults to OFF. Include GPU support. If GPU is enabled you need to install the CUDA 8.0 Toolkit and cuDNN 5.1. CMake expects the location of cuDNN in -DCUDNN_HOME=path_where_you_unzipped_cudnn.
    • -Dtensorflow_BUILD_CC_TESTS=(ON|OFF). Defaults to OFF. This builds the C++ unit tests. There are many of them, and building them will take a few hours. After running cmake, build and execute the tests with
      MSBuild /p:Configuration=RelWithDebInfo ALL_BUILD.vcxproj
      ctest -C RelWithDebInfo
      
    • -Dtensorflow_BUILD_PYTHON_TESTS=(ON|OFF). Defaults to OFF. This enables python kernel tests. After building the python wheel, you need to install the new wheel before running the tests. To execute the tests, use
      ctest -C RelWithDebInfo
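
    With this many -D options, it can help to assemble the CMake invocation programmatically rather than retyping the continuation lines. A hedged Python sketch; the SWIG and cuDNN paths are placeholders taken from the examples above, not values your machine will necessarily have:

    ```python
    # Build a cmake command line from a dict of CMake cache options.
    def cmake_command(source_dir, options):
        flags = ["-D%s=%s" % (k, v) for k, v in sorted(options.items())]
        return " ".join(["cmake", source_dir, "-A", "x64"] + flags)

    # Illustrative placeholder values; substitute your own paths.
    opts = {
        "CMAKE_BUILD_TYPE": "Release",
        "SWIG_EXECUTABLE": "C:/tools/swigwin-3.0.10/swig.exe",
        "tensorflow_ENABLE_GPU": "ON",
        "CUDNN_HOME": "D:/local/cudnn",
    }
    print(cmake_command("..", opts))
    ```

    Copy the printed line into the Command Prompt, or keep it in a small batch file so GPU and CPU configurations stay reproducible.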
      
  4. Invoke MSBuild to build TensorFlow.

    To build the C++ example program, which will be created as a .exe executable in the subdirectory .\Release:

    D:\...\build> MSBuild /p:Configuration=Release tf_tutorials_example_trainer.vcxproj
    D:\...\build> Release\tf_tutorials_example_trainer.exe
    

    To build the PIP package, which will be created as a .whl file in the subdirectory .\tf_python\dist:

    D:\...\build> MSBuild /p:Configuration=Release tf_python_build_pip_package.vcxproj

Step 4 failed for me: one of the CMake-generated files under tensorflow\tensorflow\contrib\cmake\build\CMakeFiles\tf_core_gpu_kernels.dir\__\ was malformed. The fix: at line 81 of that file, change __VERSION__="MSVC" to __VERSION__=\"MSVC\" (the inner quotes need escaping).
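
If you would rather not edit the generated file by hand, the quote escaping can be scripted. A small sketch; the file path is deliberately left as a placeholder, since the exact generated file name varies between builds:

```python
# Escape the inner quotes of the broken __VERSION__ define produced by
# the CMake generator, i.e. __VERSION__="MSVC" -> __VERSION__=\"MSVC\".
def fix_version_define(text):
    return text.replace('__VERSION__="MSVC"', '__VERSION__=\\"MSVC\\"')

# Usage (the path is hypothetical; point it at the real file under
# CMakeFiles\tf_core_gpu_kernels.dir):
# with open(path) as f:
#     patched = fix_version_define(f.read())
# with open(path, "w") as f:
#     f.write(patched)
```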

After making that change, re-run Step 4 to generate the .whl file, then pip install *.whl and start experimenting.
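After installing the wheel, a quick sanity check that the GPU build actually sees a device. This sketch assumes the TensorFlow 1.x device_lib API (the version this guide builds) and degrades gracefully when TensorFlow is not importable:

```python
# Hedged sanity check for a freshly installed GPU wheel. Reports rather
# than crashes if TensorFlow is not importable in the current environment.
def check_gpu():
    try:
        from tensorflow.python.client import device_lib
    except ImportError:
        return "tensorflow not installed"
    types = {d.device_type for d in device_lib.list_local_devices()}
    return "GPU found" if "GPU" in types else "no GPU device"

print(check_gpu())
```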

Date: 2024-12-20 01:13:27
