Jetson AGX Xavier Development Kit Setup for Deep Learning (TensorFlow, PyTorch and Jupyter Lab) with JetPack 4.x SDK

NVIDIA Jetson AGX Xavier Developer Kit

There are many examples available that can be used to create a custom detector. Here, however, we set up a basic development environment to get started with TensorFlow, PyTorch and Jupyter Lab on the device.

Setup Jetson AGX Xavier Development Kit

  1. Flash the device using the NVIDIA SDK Manager
  2. Use the SDK Manager to also install JetPack and other components
  3. Install the Intel Wireless-AC 8265 card with Bluetooth
  4. Install M.2 NVMe SSD storage
  5. Move the rootfs to the SSD
  6. Refer to the Jetson Zoo for reference packages

With this we will have the AGX Xavier kit running JetPack 4.4 DP with the rootfs on the SSD and internet access via the Intel WiFi 8265.

Jetson Family of products (We are looking at AGX Xavier)

Deep Learning Environment/Framework Setup

  1. Set up virtualenvwrapper to create a Python environment for each framework
mkvirtualenv <environment_name> -p python3
  2. Install TensorFlow 1.15 and 2.1 with Python 3.6 and JetPack 4.4 DP
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
sudo apt-get install python3-pip
pip3 install -U pip
pip3 install -U pip testresources setuptools numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11

Make sure you install the Python packages above in each of the TensorFlow 1.15 and 2.1 virtual environments (tf1 and tf2) that we create next.

a.) Create Virtual Environment for Tensorflow 1.15 installation
mkvirtualenv tf1 -p python3

# TF-1.15
pip3 install --pre --extra-index-url 'tensorflow<2'

b.) Create Virtual Environment for TensorFlow 2.1.0 installation
mkvirtualenv tf2 -p python3

# TF-2.x
pip3 install --pre --extra-index-url tensorflow

c.) Test the installation with MNIST LeNet for both TensorFlow versions
Make sure to upgrade Keras to 2.2.4 (pip install keras==2.2.4)
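Before running the MNIST test, it helps to confirm which environment a kernel is actually using. A minimal stdlib-only sketch (the `check_package` helper is mine, not part of the original setup):

```python
import importlib.util
import sys

def check_package(name):
    """Report whether a package is importable in the current interpreter."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return "%s: NOT installed in %s" % (name, sys.prefix)
    return "%s: found at %s" % (name, spec.origin)

# Run inside each virtualenv (tf1, tf2) to confirm the right install
print(check_package("tensorflow"))
```

Running this inside tf1 and tf2 should report a path under the respective virtualenv's site-packages.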

3. Install PyTorch 1.5 in a virtualenv
mkvirtualenv tor -p python3

# Python 3.6 and Jetpack 4.4 DP
wget -O torch-1.5.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev 
pip3 install Cython testresources setuptools pybind11
pip3 install numpy torch-1.5.0-cp36-cp36m-linux_aarch64.whl

Select the version of torchvision to download depending on the version of PyTorch that you have installed:

PyTorch v1.0 - torchvision v0.2.2
PyTorch v1.1 - torchvision v0.3.0
PyTorch v1.2 - torchvision v0.4.0
PyTorch v1.3 - torchvision v0.4.2
PyTorch v1.4 - torchvision v0.5.0
PyTorch v1.5 - torchvision v0.6.0  <---- Selected for Installation 
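The compatibility table above can be captured in a small helper so scripts pick the matching torchvision branch automatically (a sketch using only the pairs listed above; the function name is mine):

```python
# PyTorch -> torchvision compatibility, from the table above
TORCHVISION_FOR_TORCH = {
    "1.0": "v0.2.2",
    "1.1": "v0.3.0",
    "1.2": "v0.4.0",
    "1.3": "v0.4.2",
    "1.4": "v0.5.0",
    "1.5": "v0.6.0",
}

def torchvision_branch(torch_version):
    """Return the torchvision git branch for a torch version like '1.5.0'."""
    major_minor = ".".join(torch_version.split(".")[:2])
    return TORCHVISION_FOR_TORCH[major_minor]

print(torchvision_branch("1.5.0"))  # v0.6.0
```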

Install torchvision

sudo apt-get install libjpeg-dev zlib1g-dev
git clone --branch v0.6.0 torchvision   # see the table above for the matching torchvision version
cd torchvision
python setup.py install
cd ../  # attempting to load torchvision from the build dir will result in an import error

Test the PyTorch installation using MNIST

4. Install Jupyter for development across these virtualenv/kernels

a.) Add /home/nv/.local/bin/ to PATH for local or user installed packages
export PATH=/home/nv/.local/bin:$PATH

b.) Install Jupyterlab and add kernel from virtualenv path

python3 -m pip install jupyterlab ipykernel
python3 -m jupyter  --version

Setup the python virtual env kernel spec file for virtual environment created at ~/.virtualenvs/tor/

python3 -m ipykernel install --user --name=tor 

Installed kernelspec tor in ~/.local/share/jupyter/kernels/tor

c.) Edit the kernel.json file in the env's kernelspec folder and change the default argv from "/usr/bin/python3" to "/home/nv/.virtualenvs/tor/bin/python"

{
 "argv": [
  "/home/nv/.virtualenvs/tor/bin/python",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
 ],
 "display_name": "tor",
 "language": "python"
}

Verify the available kernels using kernelspec. We have three custom kernels in the current setup, corresponding to the TensorFlow 1.15, TensorFlow 2.1 and PyTorch 1.5 installations, plus the default python3 kernel.

nv@agx$ python3 -m jupyter kernelspec list
Available kernels:
  python3    /home/nv/.local/share/jupyter/kernels/python3
  tf1        /home/nv/.local/share/jupyter/kernels/tf1
  tf2        /home/nv/.local/share/jupyter/kernels/tf2
  tor        /home/nv/.local/share/jupyter/kernels/tor

Start the Jupyter Lab server and select “tor” kernel for run

python3 -m jupyter lab --allow-root --ip= --no-browser

Tested with the PyTorch kernel in the "tor" virtual environment using the MNIST PyTorch example.

Now you can try all the great resources on NVIDIA’s web page

Thanks to JetsonHacks for all the great reference tutorials


Nvidia-Docker containers for your JupyterLab-based TensorFlow-GPU environment with a Mask R-CNN example, on Ubuntu 18.04 LTS

This will set up a convenient development environment for TensorFlow-based deep learning on NVIDIA cards using nvidia-docker containers. The work can happen in Jupyter Lab.




  1. Setup Python
  2. Setup Docker
  3. Setup Nvidia-Docker
  4. Create Deep learning container
  5. Run Mask R-CNN example in container
  6. Manage containers using Portainer



1. Setup Python

We need a virtual environment to work with, and for that we use virtualenvwrapper.

sudo apt-get install -y build-essential cmake unzip pkg-config ubuntu-restricted-extras git python3-dev python3-pip python3-numpy
sudo apt-get install -y freeglut3 freeglut3-dev libxi-dev libxmu-dev
sudo pip3 install virtualenv virtualenvwrapper

Edit ~/.bashrc file, add the following entry and source it

# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/

Create a virtual environment and install packages to it

mkvirtualenv cv3 -p python3
workon cv3
pip install numpy scipy scikit-image scikit-learn
pip install imutils pyzmq ipython matplotlib imgaug

More on this is available at Compile and Setup OpenCV 3.4.x on Ubuntu 18.04 LTS with Python Virtualenv for Image processing with Ceres, VTK, PCL


2. Setup Docker

Install docker-ce for Ubuntu 18.04, keeping in mind its compatibility with the nvidia-docker installation which comes next.

The repository setup is critical here; follow the instructions in the official installation guide.

Now install docker-ce:

sudo apt-get install docker-ce=5:19.03.2~3-0~ubuntu-bionic 
sudo apt-get install docker-ce-cli=5:19.03.2~3-0~ubuntu-bionic 
sudo apt-get install
sudo usermod -aG docker $USER
sudo systemctl enable docker 

Reboot the machine and run "docker run hello-world" to test.


3. Setup Nvidia-Docker

Install the nvidia-docker runtime, which allows containers to access the GPU hardware. Docker 19.03 is the needed docker version.

# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L | sudo apt-key add -
curl -s -L$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker

Test the runtime using nvidia-smi on official containers

# Test nvidia-smi with the latest official CUDA image
docker run --gpus all nvidia/cuda:9.0-base nvidia-smi

You can also configure docker to use the nvidia runtime by default:

# Edit /etc/docker/daemon.json so the config looks like below
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}


Advanced instructions for moving your docker image storage to a different location:

Ubuntu/Debian: edit your /etc/default/docker file with the -g option: DOCKER_OPTS="-g /mnt"

4. Create a Deep Learning Container

Use the Dockerfile with an example in the provided dl-lab-docker repository

git clone
cd dl-lab-docker

Now let's build an image and run it locally.

Make sure Nvidia-docker is installed and default runtime is nvidia.

docker build -t dl-lab-docker:latest . -f Dockerfile.dl-lab.xenial
I have a prebuilt docker image containing tensorflow-gpu==1.5.0 with CUDA 9.0 and cuDNN 7.0.5, which can be run as below

docker run --gpus all -it --ipc=host -p 8888:8888 \

Finally, access the Jupyter Lab page in your browser on port 8888.

This docker image is also available on my Docker Hub as vishwakarmarhl/dl-lab-docker, so you can skip the build and pull the image directly from Docker Hub.

Oh, and by the way: since this will be a development environment, do not rely on the code provided in the container. Mount your own development folders to code in.

docker run -it --ipc=host -v $(pwd):/module/host_workspace \
           -p 8888:8888 vishwakarmarhl/dl-lab-docker:latest



5. Container with Mask R-CNN in Jupyter Lab

The code directory contains a Dockerfile to make it easy to get up and running with TensorFlow via Docker.

Navigate to the mrcnn codebase and open up masker.ipynb


Below is an example run of the Mask R-CNN model.



6. Manage Containers using Portainer

Since we are dealing with docker containers, the number of containers quickly becomes messy in a development environment. We will manage them using the Portainer web interface.

docker volume create portainer_data 
docker run -d -p 8000:8000 -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer


The Portainer interface should be available on port 9000.


Finally, you can take a look at the created images in the Docker Hub (dl-lab-docker) repository, which automatically builds from the GitHub Dockerfile.




TensorFlow-GPU setup with cuDNN and NVIDIA CUDA 9.0 on Ubuntu 18.04 LTS

Prerequisite: CUDA should be installed on a machine with an NVIDIA graphics card


CUDA Setup

The driver and CUDA toolkit installation is described in a previous blogpost.

There is a slight change since the TensorFlow setup requires CUDA toolkit 9.0.

# Clean CUDA 9.1 and install 9.0
$ sudo /usr/local/cuda/bin/ 
$ sudo rm -rf /usr/local/cuda-9.1
$ sudo ./ --override

# Make sure environment variables are set for test
$ source ~/.bashrc 
$ sudo ln -s /usr/bin/gcc-6 /usr/local/cuda/bin/gcc
$ sudo ln -s /usr/bin/g++-6 /usr/local/cuda/bin/g++
$ cd ~/NVIDIA_CUDA-9.0_Samples/
$ make -j12
$ ./deviceQuery

Test Successful

cuDNN Setup

Referenced from a medium blogpost.

The following steps are pretty much the same as the installation guide using .deb files (strange that the cuDNN guide is better than the CUDA one).


  1. Go to the cuDNN download page (need registration) and select the latest cuDNN 7.1.* version made for CUDA 9.0.
  2. Download all 3 .deb files: the runtime library, the developer library, and the code samples library for Ubuntu 16.04.
  3. In your download folder, install them in the same order:
# (the runtime library)
$ sudo dpkg -i libcudnn7_7.1.4.18-1+cuda9.0_amd64.deb
# (the developer library)
$ sudo dpkg -i libcudnn7-dev_7.1.4.18-1+cuda9.0_amd64.deb
# (the code samples)
$ sudo dpkg -i libcudnn7-doc_7.1.4.18-1+cuda9.0_amd64.deb

# to remove them later:
$ sudo dpkg -r libcudnn7-doc libcudnn7-dev libcudnn7

Now, we can verify the cuDNN installation (below is just the official guide, which surprisingly works out of the box):

  1. Copy the code samples somewhere you have write access: cp -r /usr/src/cudnn_samples_v7/ ~/
  2. Go to the MNIST example code: cd ~/cudnn_samples_v7/mnistCUDNN.
  3. Compile the MNIST example: make clean && make -j4
  4. Run the MNIST example: ./mnistCUDNN. If your installation is successful, you should see Test passed! at the end of the output.
(cv3) rahul@Windspect:~/cv/cudnn_samples_v7/mnistCUDNN$ ./mnistCUDNN
cudnnGetVersion() : 7104 , CUDNN_VERSION from cudnn.h : 7104 (7.1.4)
Host compiler version : GCC 5.4.0
There are 2 CUDA capable devices on your machine :
device 0 : sms 28  Capabilities 6.1, SmClock 1582.0 Mhz, MemSize (Mb) 11172, MemClock 5505.0 Mhz, Ecc=0, boardGroupID=0
device 1 : sms 28  Capabilities 6.1, SmClock 1582.0 Mhz, MemSize (Mb) 11163, MemClock 5505.0 Mhz, Ecc=0, boardGroupID=1
Using device 0


Result of classification: 1 3 5
Test passed!

In case of a compilation error like:


/usr/local/cuda/include/cuda_runtime_api.h:1683:101: error: use of enum ‘cudaDeviceP2PAttr’ without previous declaration
extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaDeviceGetP2PAttribute(int *value, enum cudaDeviceP2PAttr attr, int srcDevice, int dstDevice);
/usr/local/cuda/include/cuda_runtime_api.h:2930:102: error: use of enum ‘cudaFuncAttribute’ without previous declaration
 extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaFuncSetAttribute(const void *func, enum cudaFuncAttribute attr, int value);
In file included from /usr/local/cuda/include/channel_descriptor.h:62:0,
                 from /usr/local/cuda/include/cuda_runtime.h:90,
                 from /usr/include/cudnn.h:64,
                 from mnistCUDNN.cpp:30:

Solution: sudo vim /usr/include/cudnn.h

replace the line '#include "driver_types.h"' 
with '#include <driver_types.h>'
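The same one-line substitution can be scripted instead of editing in vim (a sketch; the function name is mine, and you should run it against a copy of cudnn.h first):

```python
def fix_cudnn_include(text):
    """Replace the quoted driver_types.h include with the angle-bracket form."""
    return text.replace('#include "driver_types.h"', '#include <driver_types.h>')

# Usage (requires write access, e.g. run via sudo):
# with open("/usr/include/cudnn.h") as f:
#     src = f.read()
# with open("/usr/include/cudnn.h", "w") as f:
#     f.write(fix_cudnn_include(src))
```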


Configure the CUDA & cuDNN Environment Variables

# CUPTI libraries are at /usr/local/cuda/extras/CUPTI/lib64
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.0/lib64 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.0/lib 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/extras/CUPTI/lib64

source ~/.bashrc

TensorFlow installation

The python environment is setup using a virtualenv located at /opt/pyenv/cv3

$ source /opt/pyenv/cv3/bin/activate
$ pip install numpy scipy matplotlib 
$ pip install scikit-image scikit-learn ipython

Referenced from the official TensorFlow guide:

$ pip install --upgrade tensorflow      # for Python 2.7
$ pip3 install --upgrade tensorflow     # for Python 3.n
$ pip install --upgrade tensorflow-gpu  # for Python 2.7 and GPU
$ pip3 install --upgrade tensorflow-gpu==1.5 # for Python 3.n and GPU

# remove tensorflow
$ pip3 uninstall tensorflow-gpu

Now, run a test

(cv3) rahul@Windspect:~$ python
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2018-08-14 18:03:45.024181: I tensorflow/core/platform/] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-08-14 18:03:45.261898: I tensorflow/core/common_runtime/gpu/] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:03:00.0
totalMemory: 10.91GiB freeMemory: 10.75GiB
2018-08-14 18:03:45.435881: I tensorflow/core/common_runtime/gpu/] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:04:00.0
totalMemory: 10.90GiB freeMemory: 10.10GiB
2018-08-14 18:03:45.437318: I tensorflow/core/common_runtime/gpu/] Adding visible gpu devices: 0, 1
2018-08-14 18:03:46.100062: I tensorflow/core/common_runtime/gpu/] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-08-14 18:03:46.100098: I tensorflow/core/common_runtime/gpu/] 0 1
2018-08-14 18:03:46.100108: I tensorflow/core/common_runtime/gpu/] 0: N Y
2018-08-14 18:03:46.100114: I tensorflow/core/common_runtime/gpu/] 1: Y N
2018-08-14 18:03:46.100718: I tensorflow/core/common_runtime/gpu/] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10398 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
2018-08-14 18:03:46.262683: I tensorflow/core/common_runtime/gpu/] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 9769 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:04:00.0, compute capability: 6.1)
>>> print(sess.run(hello))
b'Hello, TensorFlow!'

Looks like it is able to discover and use the NVIDIA GPU


Now add keras to the system

pip install pillow h5py keras autopep8

Edit configuration, vim ~/.keras/keras.json

"image_data_format": "channels_last",
"backend": "tensorflow",
"epsilon": 1e-07,
"floatx": "float32"

A test for keras would be like this at the python CLI,

(cv3) rahul@Windspect:~/workspace$ python
Python 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609] on linux
>>> import keras
Using TensorFlow backend.




Anaconda for your Image Processing, Machine Learning, Neural Networks, Computer Vision development environment using VS Code

Python is a great language and I will not go into explaining why. Here is a brief setup for your development environment in case you are tinkering with computer vision problems and looking at learning neural networks on your Windows laptop.

Anaconda3 5.0

64 bit Download:

Install Anaconda with the default options.

  • Anaconda Navigator is a great place to look at your environment and activate them as per your need.
  • In case you want to have a Python 2x and 3x environment side by side, then you can create them in navigator. Here I have a base(root) setup with Python 3.6 and an additional Python 2.7 environment.
  • In order to use a particular environment you can click on that environment in the navigator or go to the Anaconda prompt and execute the following command
"(base)C:\Users\Karma>activate Py27"
  • To deactivate, use "(Py27)C:\Users\Karma>deactivate"
  • To create a new environment use the following command:
(base)C:\Users\Karma>conda create -n Py27 python=2.7 anaconda


Whenever you want to use a particular environment, just go to the environments section and activate it. This will set up your Python with the packages and versions configured in that environment. In the screenshot above I have tensorflow in my base environment, while it is always better to have a separate environment for this.

In case you are using Cmder like me then go for this:

Considering where you have installed your Anaconda
> C:\Anaconda3\Scripts\activate.bat C:\Anaconda3
> C:\Users\Karma\Anaconda3\Scripts\activate.bat C:\Users\Karma\Anaconda3
> conda info --envs
> conda activate py27
> conda deactivate

Let's try to use the package manager "conda" for the setup.

Run the following installation commands in the Anaconda Command Prompt, which opens showing the prompt as (C:\Anaconda3) C:\Users\Karma>:

In order to find packages, you should look at the Anaconda repository.

# Add the menpo channel and install opencv
conda config --add channels menpo
conda install -c menpo opencv

# or directly use conda-forge
conda install -c conda-forge opencv

# Install packages
conda install numpy
conda install scipy
conda install matplotlib

# List packages
conda list


If the OpenCV installation did not go through, we can use the pre-built Windows binaries maintained by Christoph Gohlke.

Download the wheel file. You can remove these modules later using "pip uninstall <package>"

(base)λ pip install opencv_python-3.4.0-cp36-cp36m-win_amd64.whl
Processing c:\users\karma\downloads\opencv_python-3.4.0-cp36-cp36m-win_amd64.whl
Installing collected packages: opencv-python
Successfully installed opencv-python-3.4.0
(base)λ pip install opencv_python-3.4.0+contrib-cp36-cp36m-win_amd64.whl
Processing c:\users\karma\downloads\opencv_python-3.4.0+contrib-cp36-cp36m-win_amd64.whl
Installing collected packages: opencv-python
Successfully installed opencv-python-3.4.0+contrib

In my case I used SIFT and SURF implementations which were made available in the contrib packages.

Now that we have the packages set, let's test them out on the python interpreter.
Use the following commands on the python CLI.

import numpy as np
import cv2
print(cv2.__version__)



To install this package with conda run:
conda install -c conda-forge tensorflow

Version changes based on the repository you are trying to download from.

I typically use VS Code but if you like smooth scrolling go for Sublime.

In VS Code I use the ms-python.python and tht13.python extensions to simplify my workspace.


Debugging is critical when working with any kind of code, so here is some configuration to get you started.

  • Verify that the workspace settings.json file has the right python path
"python.pythonPath": "C:\\Anaconda3\\python.exe"
  • Add a launch.json in your project .vscode folder with the following values
{
    "name": "Python",
    "type": "python",
    "request": "launch",
    "pythonPath": "${config:python.pythonPath}",
    "stopOnEntry": true,
    "console": "none",
    "program": "${file}",
    "cwd": "${workspaceFolder}",
    "debugOptions": [ "WaitOnAbnormalExit", "WaitOnNormalExit", "RedirectOutput" ]
}
This will get you set up for debugging, and this is how the debug interface looks once you have put breakpoints and stepped through the code.

Good Luck.