TensorFlow-GPU setup with cuDNN and NVIDIA CUDA 9.0 on Ubuntu 18.04 LTS

Prerequisite: CUDA should already be installed on a machine with an NVIDIA graphics card.

 

CUDA Setup

The driver and CUDA toolkit installation is described in a previous blog post.

There is a slight change here, since the TensorFlow setup requires CUDA toolkit 9.0:

# Clean CUDA 9.1 and install 9.0
$ sudo /usr/local/cuda/bin/uninstall_cuda_9.1.pl
$ sudo rm -rf /usr/local/cuda-9.1
$ sudo ./cuda_9.0.176_384.81_linux.run --override

# Make sure environment variables are set for test
$ source ~/.bashrc 
$ sudo ln -s /usr/bin/gcc-6 /usr/local/cuda/bin/gcc
$ sudo ln -s /usr/bin/g++-6 /usr/local/cuda/bin/g++
$ cd ~/NVIDIA_CUDA-9.0_Samples/
$ make -j12
$ ./bin/x86_64/linux/release/deviceQuery

Test Successful
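A quick way to double-check that the 9.0 toolkit is the one active on the system (a sketch; assumes the installer created the default /usr/local/cuda symlink and that the PATH set in ~/.bashrc points at the 9.0 bin directory):

# the symlink should point at cuda-9.0 and nvcc should report release 9.0
$ ls -l /usr/local/cuda
$ nvcc --version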

cuDNN Setup

Referenced from a Medium blog post.

The following steps are pretty much the same as the installation guide using .deb files (strange that the cuDNN guide is better than the CUDA one).


  1. Go to the cuDNN download page (need registration) and select the latest cuDNN 7.1.* version made for CUDA 9.0.
  2. Download all 3 .deb files: the runtime library, the developer library, and the code samples library for Ubuntu 16.04.
  3. In your download folder, install them in the same order:
# (the runtime library)
$ sudo dpkg -i libcudnn7_7.1.4.18-1+cuda9.0_amd64.deb
# (the developer library)
$ sudo dpkg -i libcudnn7-dev_7.1.4.18-1+cuda9.0_amd64.deb
# (the code samples)
$ sudo dpkg -i libcudnn7-doc_7.1.4.18-1+cuda9.0_amd64.deb

# to remove cuDNN later, if needed
$ sudo dpkg -r libcudnn7-doc libcudnn7-dev libcudnn7
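Before building the samples, a lighter check that the packages registered correctly and which version the header reports (a quick sketch; the header path assumes the .deb layout used above):

$ dpkg -l | grep libcudnn
# prints CUDNN_MAJOR / CUDNN_MINOR / CUDNN_PATCHLEVEL
$ grep -m1 -A2 "#define CUDNN_MAJOR" /usr/include/cudnn.h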

Now, we can verify the cuDNN installation (below is just the official guide, which surprisingly works out of the box):

  1. Copy the code samples somewhere you have write access: cp -r /usr/src/cudnn_samples_v7/ ~/
  2. Go to the MNIST example code: cd ~/cudnn_samples_v7/mnistCUDNN.
  3. Compile the MNIST example: make clean && make -j4
  4. Run the MNIST example: ./mnistCUDNN. If your installation is successful, you should see Test passed! at the end of the output.
(cv3) rahul@Windspect:~/cv/cudnn_samples_v7/mnistCUDNN$ ./mnistCUDNN
cudnnGetVersion() : 7104 , CUDNN_VERSION from cudnn.h : 7104 (7.1.4)
Host compiler version : GCC 5.4.0
There are 2 CUDA capable devices on your machine :
device 0 : sms 28  Capabilities 6.1, SmClock 1582.0 Mhz, MemSize (Mb) 11172, MemClock 5505.0 Mhz, Ecc=0, boardGroupID=0
device 1 : sms 28  Capabilities 6.1, SmClock 1582.0 Mhz, MemSize (Mb) 11163, MemClock 5505.0 Mhz, Ecc=0, boardGroupID=1
Using device 0

...

Result of classification: 1 3 5
Test passed!

In case of a compilation error

Error

/usr/local/cuda/include/cuda_runtime_api.h:1683:101: error: use of enum ‘cudaDeviceP2PAttr’ without previous declaration
extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaDeviceGetP2PAttribute(int *value, enum cudaDeviceP2PAttr attr, int srcDevice, int dstDevice);
/usr/local/cuda/include/cuda_runtime_api.h:2930:102: error: use of enum ‘cudaFuncAttribute’ without previous declaration
 extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaFuncSetAttribute(const void *func, enum cudaFuncAttribute attr, int value);
                                                                                                      ^
In file included from /usr/local/cuda/include/channel_descriptor.h:62:0,
                 from /usr/local/cuda/include/cuda_runtime.h:90,
                 from /usr/include/cudnn.h:64,
                 from mnistCUDNN.cpp:30:

Solution: sudo vim /usr/include/cudnn.h

replace the line '#include "driver_types.h"' 
with '#include <driver_types.h>'
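The same edit can be applied non-interactively with sed (a one-liner sketch; the -i.bak flag keeps a backup copy of the header):

$ sudo sed -i.bak 's/#include "driver_types.h"/#include <driver_types.h>/' /usr/include/cudnn.h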

 

Configure the CUDA & cuDNN Environment Variables

Add the following to ~/.bashrc:

# CUPTI libraries (needed by TensorFlow) are at /usr/local/cuda/extras/CUPTI/lib64
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.0/lib64 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.0/lib 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/extras/CUPTI/lib64

source ~/.bashrc
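To confirm the variables are actually picked up in the current shell (a quick check; assumes the exports above were added to ~/.bashrc):

$ echo $PATH | tr ':' '\n' | grep cuda
$ echo $LD_LIBRARY_PATH | tr ':' '\n' | grep cuda
$ which nvcc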

TensorFlow installation

The Python environment is set up using a virtualenv located at /opt/pyenv/cv3:

$ source /opt/pyenv/cv3/bin/activate
$ pip install numpy scipy matplotlib 
$ pip install scikit-image scikit-learn ipython

Referenced from the official TensorFlow installation guide:

$ pip install --upgrade tensorflow      # for Python 2.7
$ pip3 install --upgrade tensorflow     # for Python 3.n
$ pip install --upgrade tensorflow-gpu  # for Python 2.7 and GPU
$ pip3 install --upgrade tensorflow-gpu==1.5 # for Python 3.n and GPU
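After installing the GPU build, a one-liner can confirm that it imports and enumerates the devices (a sketch; assumes the cv3 virtualenv is active):

$ python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"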

# remove tensorflow
$ pip3 uninstall tensorflow-gpu

Now, run a test

(cv3) rahul@Windspect:~$ python
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2018-08-14 18:03:45.024181: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-08-14 18:03:45.261898: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:03:00.0
totalMemory: 10.91GiB freeMemory: 10.75GiB
2018-08-14 18:03:45.435881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:04:00.0
totalMemory: 10.90GiB freeMemory: 10.10GiB
2018-08-14 18:03:45.437318: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0, 1
2018-08-14 18:03:46.100062: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-08-14 18:03:46.100098: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0 1
2018-08-14 18:03:46.100108: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N Y
2018-08-14 18:03:46.100114: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 1: Y N
2018-08-14 18:03:46.100718: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10398 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
2018-08-14 18:03:46.262683: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 9769 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:04:00.0, compute capability: 6.1)
>>> print(sess.run(hello))
b'Hello, TensorFlow!'

Looks like it is able to discover and use both NVIDIA GPUs.

KERAS

Now add Keras to the system:

pip install pillow h5py keras autopep8

Edit the configuration: vim ~/.keras/keras.json

{
    "image_data_format": "channels_last",
    "backend": "tensorflow",
    "epsilon": 1e-07,
    "floatx": "float32"
}

A test for Keras at the Python CLI would look like this:

(cv3) rahul@Windspect:~/workspace$ python
Python 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609] on linux
>>> import keras
Using TensorFlow backend.
>>>
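For a slightly stronger smoke test than the bare import, a tiny model can be built and run on random data (a sketch; assumes the TensorFlow backend configured above):

(cv3) rahul@Windspect:~/workspace$ python << 'EOF'
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# one Dense layer, one forward pass on random input
model = Sequential([Dense(1, input_shape=(4,))])
model.compile(optimizer='sgd', loss='mse')
print(model.predict(np.random.rand(2, 4)))
EOF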

 

END.

 


Quick Apt Repository way – NVIDIA CUDA 9.x on Ubuntu 18.04 LTS installation

This is the same NVIDIA CUDA 9.1 setup on Ubuntu 18.04 LTS, this time using the apt repository. This approach appears to work and is simple to work with. Reference is taken from this askubuntu discussion.

Look up the solution to the Nouveau issue in this blog post.

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo ubuntu-drivers autoinstall
sudo reboot
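After the reboot, a quick check that the proprietary driver is the one loaded (assuming ubuntu-drivers picked the NVIDIA driver):

$ nvidia-smi
$ cat /proc/driver/nvidia/version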

Now install the CUDA toolkit

sudo apt install gcc-6 g++-6
sudo apt install nvidia-cuda-toolkit


Run the installer

root@wind:~/Downloads# ./cuda_9.1.85_387.26_linux --override


Set up the environment variables

# Environment variables
export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib64 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib

Provide soft links for the gcc-6 compiler:

sudo ln -s /usr/bin/gcc-6 /usr/local/cuda/bin/gcc
sudo ln -s /usr/bin/g++-6 /usr/local/cuda/bin/g++
sudo reboot

Test

cd ~/NVIDIA_CUDA-9.1_Samples/
make -j4

Upon completion of the compilation, test using the deviceQuery binary:

$ cd ~/NVIDIA_CUDA-9.1_Samples/bin/x86_64/linux/release
$ ./deviceQuery


$ sudo bash -c "echo /usr/local/cuda/lib64/ > /etc/ld.so.conf.d/cuda.conf"
$ sudo ldconfig
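With that ld.so configuration in place, the CUDA runtime should now be resolvable by the dynamic linker (a quick check):

$ ldconfig -p | grep libcudart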

DONE

NVIDIA CUDA 9.x on Ubuntu 18.04 LTS installation

Guide

An installation guide to take you through the NVIDIA graphics driver as well as the CUDA toolkit setup on Ubuntu 18.04 LTS.

A. Know your cards

Verify which graphics card you have on your machine:

rahul@karma:~$ lspci | grep VGA
04:00.0 VGA compatible controller: 
NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
rahul@karma:~$ sudo lshw -C video
 *-display 
 description: VGA compatible controller
 product: GM204 [GeForce GTX 970]
 vendor: NVIDIA Corporation
 physical id: 0
 bus info: pci@0000:04:00.0
 version: a1
 width: 64 bits
 clock: 33MHz
 capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
 configuration: driver=nouveau latency=0
 resources: irq:30 memory:f2000000-f2ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:2000(size=128) memory:f3080000-f30fffff
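The "configuration: driver=nouveau" line above shows the card is currently bound to the open-source Nouveau driver; the loaded kernel module can also be checked directly (a quick check):

$ lsmod | grep nouveau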

Download the right driver

I downloaded version 390.67 for the GeForce GTX 970.


B. Nouveau problem kills your GPU rush

However, there are solutions available.

Here is what worked for me:

  1. Remove all NVIDIA packages (skip this if your system is freshly installed):
    sudo apt-get remove nvidia* && sudo apt autoremove

  2. Install the packages needed to build the kernel modules:
    sudo apt-get install dkms build-essential linux-headers-generic

  3. Now blacklist and disable the Nouveau kernel driver:
    sudo vim /etc/modprobe.d/nvidia-installer-disable-nouveau.conf

Insert the following lines into nvidia-installer-disable-nouveau.conf:

blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off

Save and exit.

  4. Disable the kernel Nouveau by typing the following command (nouveau-kms.conf may not exist; that is OK):
    rahul@wind:~$ echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
    options nouveau modeset=0

  5. Rebuild the initramfs:
    rahul@wind:~$ sudo update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-4.15.0-23-generic

  6. Reboot.

Run the installer in run-level 3:
$ sudo init 3
$ sudo bash
$ ./NVIDIA-Linux-x86_64-390.67.run
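Once the installer finishes and the machine is back at the normal runlevel, confirm that Nouveau is gone and the NVIDIA kernel module is in use (a quick check; assumes the driver installed successfully):

$ lsmod | grep nouveau   # should print nothing
$ lsmod | grep nvidia    # the nvidia modules should be listed
$ nvidia-smi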

Uninstall

More instructions are available on how to stop using the driver before uninstallation:
sudo nvidia-installer --uninstall

C. NVIDIA X Server Settings

Install this from the Ubuntu Software Center.

D. Start the CUDA related setup

We will need the CUDA toolkit 9.1, which supports the GTX 970 (compute capability 5.2). So download the local installer for Ubuntu.


I downloaded the “cuda_9.1.85_387.26_linux.run” local installation file.

$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt install nvidia-cuda-toolkit gcc-6

Steps are taken from the official CUDA 9.1 documentation:

  1. Perform the pre-installation actions.
  2. Disable the Nouveau drivers. We did this during the driver installation above.
  3. Reboot into text mode (runlevel 3). This can usually be accomplished by adding the number “3” to the end of the system’s kernel boot parameters. Change the runlevel with ‘sudo init 3’.
  4. Verify that the Nouveau drivers are not loaded. If the Nouveau drivers are still loaded, consult your distribution’s documentation to see if further steps are needed to disable Nouveau.
  5. Run the installer and follow the on-screen prompts:
$ chmod +x cuda_9.1.85_387.26_linux
rahul@wind:~/Downloads$ ./cuda_9.1.85_387.26_linux --override


Since we already installed the driver above, we answer NO to the NVIDIA accelerated graphics driver installation question.


This will install the CUDA components in the following locations:

  • CUDA Toolkit /usr/local/cuda-9.1
  • CUDA Samples $(HOME)/NVIDIA_CUDA-9.1_Samples

We can verify the graphics card using the nvidia-smi command.


Uninstallation

cd /usr/local/cuda-9.1/bin
sudo ./uninstall_cuda_9.1.pl

 

E. Environment Variables

rahul@wind:~$ vim ~/.bashrc

# Add the following to the environment variables
export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib64 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib

rahul@wind:~$ source ~/.bashrc
rahul@wind:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.1, 

 

F. Test

Ensure you have the right driver versions

rahul@wind:$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 390.67 Fri Jun 1 04:04:27 PDT 2018
GCC version: gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)

Change directory to the NVIDIA CUDA Samples and compile them

rahul@wind:~/NVIDIA_CUDA-9.1_Samples$ make

Now run the device query test

rahul@wind:~/NVIDIA_CUDA-9.1_Samples/bin/x86_64/linux/release$ ./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

 

END