BlackFly S USB & Spinnaker SDK Integration with Jetson AGX Xavier Development Board

FLIR products are fairly simple to assemble; below are the components we used for this USB camera test setup. We will use the C++ and Python Spinnaker ARM64 packages for Ubuntu 18.04/16.04 on the Jetson AGX Xavier Development Board.

Read these prerequisites first:
a.) Spinnaker C++ SDK:
b.) USB-FS Throughput:
c.) Buffer handling:
d.) GenICam Standard:

All resources for Jetson cameras are available at elinux
BlackFly S USB3 Setup

Resources for Multiple camera setup:

GenICam Standards for Spinnaker Nodes:

1.) List the devices attached to the system
$ usb-devices

T: Bus=02 Lev=01 Prnt=01 Port=02 Cnt=02 Dev#= 3 Spd=5000 MxCh= 0
D: Ver= 3.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS= 9 #Cfgs= 1
P: Vendor=1e10 ProdID=4000 Rev=00.00
S: Manufacturer=FLIR
S: Product=Blackfly S BFS-U3-63S4C
S: SerialNumber=0133D049
C: #Ifs= 3 Cfg#= 1 Atr=80 MxPwr=896mA
I: If#= 0 Alt= 0 #EPs= 2 Cls=ef(misc ) Sub=05 Prot=00 Driver=(none)
I: If#= 1 Alt= 0 #EPs= 1 Cls=ef(misc ) Sub=05 Prot=01 Driver=(none)
I: If#= 2 Alt= 0 #EPs= 1 Cls=ef(misc ) Sub=05 Prot=02 Driver=(none)
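In this listing the camera is identified by FLIR's USB vendor ID, 1e10, on the P: line. As a sketch, that check can be scripted; the parsing helper below is our own, not part of Spinnaker:

```python
# Scan `usb-devices` output for FLIR cameras (USB vendor ID 1e10).
import re
import subprocess

FLIR_VENDOR_ID = "1e10"

def find_flir_devices(usb_devices_output):
    """Return the Product strings of FLIR devices in `usb-devices` output.

    `usb-devices` separates devices with blank lines, so we split on those
    and look for the vendor ID in each block.
    """
    products = []
    for block in usb_devices_output.split("\n\n"):
        if re.search(r"Vendor=%s\b" % FLIR_VENDOR_ID, block):
            m = re.search(r"S:\s*Product=(.+)", block)
            products.append(m.group(1).strip() if m else "unknown FLIR device")
    return products

if __name__ == "__main__":
    try:
        out = subprocess.check_output(["usb-devices"]).decode()
        print(find_flir_devices(out))
    except OSError:
        print("usb-devices not available on this system")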

Increase the USB memory limit
$ sudo sh -c 'echo 1000 > /sys/module/usbcore/parameters/usbfs_memory_mb'
$ cat /sys/module/usbcore/parameters/usbfs_memory_mb
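The 1000 MB figure is a generous upper bound; a rough way to reason about it is frames in flight × frame size. A hedged back-of-the-envelope sketch (the 3072x2048 resolution is assumed from the BFS-U3-63S4C's ~6.3 MP sensor; bytes per pixel and buffer count are illustrative, so check your camera's datasheet and stream settings):

```python
# Back-of-the-envelope USB-FS memory sizing for image buffers.
# Assumed for illustration: 3072x2048 (~6.3 MP), 1 byte/pixel
# (8-bit Bayer), 10 buffers per camera.

def usbfs_mb_needed(width, height, bytes_per_pixel, num_buffers, num_cameras=1):
    """Approximate USB-FS memory (in MB) needed to hold all image buffers."""
    frame_bytes = width * height * bytes_per_pixel
    total_bytes = frame_bytes * num_buffers * num_cameras
    return total_bytes / (1024 * 1024)

print(round(usbfs_mb_needed(3072, 2048, 1, 10)))  # -> 60
```

At the default 16 MB limit, fewer than three such frames fit in flight, which is why high-resolution or multi-camera rigs stall without raising it.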

2.) Spinnaker 64 Bit ARM SDK

  • Unzip the SDK archive and run the install script. The installer then prompts:
Would you like to add a udev entry to allow access to USB hardware?
If a udev entry is not added, your cameras may only be accessible by running Spinnaker as sudo.
[Y/n] $ Y
Launching udev configuration script…
This script will assist users in configuring their udev rules to allow
access to USB devices. The script will create a udev rule which will
add FLIR USB devices to a group called flirimaging. The user may also
choose to restart the udev daemon. All of this can be done manually as well.
Adding new members to usergroup flirimaging…
Usergroup flirimaging is empty
To add a new member please enter username (or hit Enter to continue):
$ nv
Adding user nv to group flirimaging group. Is this OK?
[Y/n] $ Y
Added user nv
Current members of flirimaging group: nv

Writing the udev rules file…
Do you want to restart the udev daemon?
[Y/n] $ Y
[ ok ] Restarting udev (via systemctl): udev.service.
Configuration complete.
A reboot may be required on some systems for changes to take effect.
Would you like to set USB-FS memory size to 1000 MB at startup (via /etc/rc.local)?
By default, Linux systems only allocate 16 MB of USB-FS buffer memory for all USB devices.
This may result in image acquisition issues from high-resolution cameras or multiple-camera set ups.
NOTE: You can set this at any time by following the USB notes in the included README.
[Y/n] $ Y
Launching USB-FS configuration script…
Created /etc/rc.local and set USB-FS memory to 1000 MB.
Installation complete.
Would you like to make a difference by participating in the Spinnaker feedback program?
[Y/n] $ n
Join the feedback program anytime at ""!
Thank you for installing the Spinnaker SDK.

3.) Python Env Setup

Select ARM64 version Download for Ubuntu 18.04
Path/Link: Spinnaker/Linux Ubuntu/Python/Ubuntu18.04/arm64/spinnaker_python-

$ sudo apt install libfreetype6-dev python3-tk
$ pip3 install matplotlib Pillow keyboard

nv@agx:~/spinnaker_python-$ sudo /home/nv/.virtualenvs/tor/bin/python

Library version:
Number of cameras detected: 1
Running example for camera 0…

Acquisition mode set to continuous…
Acquiring images…
Device serial number retrieved as 20172873…
Press enter to close the program..
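The transcript above comes from running the bundled Acquisition example. A minimal sketch of the same PySpin flow (hedged: it needs a connected camera and the spinnaker_python wheel installed; `frame_filename` is our own helper, not part of the SDK):

```python
def frame_filename(serial, index, ext="jpg"):
    """Build a frame filename like 'Acquisition-0133D049-0.jpg' (our own convention)."""
    return "Acquisition-%s-%d.%s" % (serial, index, ext)

def acquire(num_frames=10):
    import PySpin  # provided by the spinnaker_python wheel

    system = PySpin.System.GetInstance()
    cam_list = system.GetCameras()
    print("Number of cameras detected: %d" % cam_list.GetSize())
    cam = cam_list.GetByIndex(0)
    cam.Init()
    # Continuous acquisition, as in the example output above
    cam.AcquisitionMode.SetValue(PySpin.AcquisitionMode_Continuous)
    serial = cam.TLDevice.DeviceSerialNumber.GetValue()
    cam.BeginAcquisition()
    for i in range(num_frames):
        image = cam.GetNextImage()
        if not image.IsIncomplete():
            image.Save(frame_filename(serial, i))
        image.Release()
    cam.EndAcquisition()
    cam.DeInit()
    del cam
    cam_list.Clear()
    system.ReleaseInstance()

if __name__ == "__main__":
    try:
        acquire()
    except ImportError:
        print("PySpin not installed; install the spinnaker_python wheel first")
```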

We can see the Bug barrier at about 3 feet from the camera
– Focus is somewhere in the middle
– Iris range F1.4

Fujinon Lens on BlackFly Camera:

Jetson AGX Xavier Development Kit Setup for Deep Learning (Tensorflow, PyTorch and Jupyter Lab) with JetPack 4.x SDK

NVIDIA Jetson AGX Xavier Developer Kit

There are many examples available that can be used to create a custom detector. However, here we look at a basic development environment to get started with TensorFlow, PyTorch and Jupyter Lab on the device.

Setup Jetson AGX Xavier Development Kit

  1. NVIDIA SDK Manager for flashing
  2. Use the SDK Manager to also install JetPack and other components
  3. Install Intel WiFi AC 8265 with Bluetooth
  4. Install M.2 NVMe SSD storage
  5. Move the rootfs to SSD
  6. Jetson reference Zoo:

With this we will have the AGX Xavier kit running JetPack 4.4 DP with its rootfs on the SSD and internet access via the Intel WiFi 8265.

Jetson Family of products (We are looking at AGX Xavier)

Deep Learning Environment/Framework Setup

  1. Set up virtualenvwrapper and create a separate Python environment for each framework
mkvirtualenv <environment_name> -p python3
  2. Install TensorFlow 1.15 and 2.1 with Python 3.6 and JetPack 4.4 DP
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
sudo apt-get install python3-pip
pip3 install -U pip
pip3 install -U pip testresources setuptools numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11

Make sure you install the python packages as above for each of the TensorFlow 1.15 and 2.1 virtual environments (tf1 and tf2) that we create next.
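As a sketch, the pinned versions from the pip3 line above can be sanity-checked inside each activated virtualenv; `check_pins` is our own helper, not a standard tool:

```python
# Verify that the support packages pinned in the pip3 line above are
# installed at the expected versions in the active virtualenv.
PINS = {"numpy": "1.16.1", "future": "0.17.1", "mock": "3.0.5",
        "h5py": "2.9.0", "keras_preprocessing": "1.0.5",
        "keras_applications": "1.0.8", "gast": "0.2.2"}

def check_pins(pins, get_version):
    """Return (name, expected, found) tuples for every mismatched pin.

    `get_version` maps a package name to its installed version string,
    or None when the package is missing.
    """
    mismatches = []
    for name, expected in pins.items():
        found = get_version(name)
        if found != expected:
            mismatches.append((name, expected, found))
    return mismatches

if __name__ == "__main__":
    import pkg_resources

    def installed(name):
        try:
            return pkg_resources.get_distribution(name).version
        except pkg_resources.DistributionNotFound:
            return None

    print(check_pins(PINS, installed) or "all pins match")
```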

a.) Create Virtual Environment for Tensorflow 1.15 installation
mkvirtualenv tf1 -p python3

# TF-1.15
pip3 install --pre --extra-index-url 'tensorflow<2'

b.) Create Virtual Environment for TensorFlow 2.1.0 installation
mkvirtualenv tf2 -p python3

# TF-2.x
pip3 install --pre --extra-index-url tensorflow

c.) Test the installation with MNIST LeNet for both TensorFlow versions
Make sure to upgrade Keras to 2.2.4 (pip install keras==2.2.4)

3. Install PyTorch 1.5 in a virtualenv
mkvirtualenv tor -p python3

# Python 3.6 and Jetpack 4.4 DP
wget -O torch-1.5.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev 
pip3 install Cython testresources setuptools pybind11
pip3 install numpy torch-1.5.0-cp36-cp36m-linux_aarch64.whl

Select the version of torchvision to download depending on the version of PyTorch that you have installed:

PyTorch v1.0 - torchvision v0.2.2
PyTorch v1.1 - torchvision v0.3.0
PyTorch v1.2 - torchvision v0.4.0
PyTorch v1.3 - torchvision v0.4.2
PyTorch v1.4 - torchvision v0.5.0
PyTorch v1.5 - torchvision v0.6.0  <---- Selected for Installation 
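The table above is easy to encode as a lookup so a build script can pick the right torchvision branch and fail fast on an unknown PyTorch version (a sketch; the mapping is copied verbatim from the table):

```python
# torchvision version matching each PyTorch release (from the table above).
TORCHVISION_FOR_TORCH = {
    "1.0": "0.2.2",
    "1.1": "0.3.0",
    "1.2": "0.4.0",
    "1.3": "0.4.2",
    "1.4": "0.5.0",
    "1.5": "0.6.0",
}

def torchvision_branch(torch_version):
    """Map an installed torch version (e.g. '1.5.0') to the torchvision git branch."""
    major_minor = ".".join(torch_version.split(".")[:2])
    try:
        return "v" + TORCHVISION_FOR_TORCH[major_minor]
    except KeyError:
        raise ValueError("no known torchvision match for torch %s" % torch_version)

print(torchvision_branch("1.5.0"))  # -> v0.6.0
```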

Install torchvision

sudo apt-get install libjpeg-dev zlib1g-dev
git clone --branch v0.6.0 torchvision   # see above for version of torchvision to download
cd torchvision
python setup.py install
cd ../  # attempting to load torchvision from build dir will result in import error

Test Pytorch installation using MNIST

4. Install Jupyter for development across these virtualenv/kernels

a.) Add /home/nv/.local/bin/ to PATH for local or user installed packages
export PATH=/home/nv/.local/bin:$PATH

b.) Install Jupyterlab and add kernel from virtualenv path

python3 -m pip install jupyterlab ipykernel
python3 -m jupyter --version

Setup the python virtual env kernel spec file for virtual environment created at ~/.virtualenvs/tor/

python3 -m ipykernel install --user --name=tor 

Installed kernelspec tor in ~/.local/share/jupyter/kernels/tor

c.) Edit the kernel.json file in the env's kernelspec folder and change the default argv from "/usr/bin/python3" to "/home/nv/.virtualenvs/tor/bin/python"

 "argv": [
  "/home/nv/.virtualenvs/tor/bin/python",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
 ],
 "display_name": "tor",
 "language": "python"

Verify the available kernels using kernelspec for each Python virtual environment. We have three in the current setup, corresponding to the TensorFlow 1.15, TensorFlow 2.1 and PyTorch 1.5 installations.

nv@agx$ python3 -m jupyter kernelspec list
Available kernels:
  python3    /home/nv/.local/share/jupyter/kernels/python3
  tf1        /home/nv/.local/share/jupyter/kernels/tf1
  tf2        /home/nv/.local/share/jupyter/kernels/tf2
  tor        /home/nv/.local/share/jupyter/kernels/tor
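`kernelspec list` is essentially walking the kernels directory and reading each kernel.json. A hedged sketch of the same discovery (`list_kernels` is our own helper, and the directory is parameterized rather than hard-coded to ~/.local):

```python
import json
import os

def list_kernels(kernels_dir):
    """Map kernel name -> python executable by reading each kernel.json
    under kernels_dir (mirrors what `jupyter kernelspec list` scans)."""
    kernels = {}
    for name in sorted(os.listdir(kernels_dir)):
        spec_path = os.path.join(kernels_dir, name, "kernel.json")
        if os.path.isfile(spec_path):
            with open(spec_path) as f:
                spec = json.load(f)
            kernels[name] = spec["argv"][0]  # first argv entry is the interpreter
    return kernels

if __name__ == "__main__":
    try:
        print(list_kernels(os.path.expanduser("~/.local/share/jupyter/kernels")))
    except OSError:
        print("no user kernels directory found")
```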

Start the Jupyter Lab server and select “tor” kernel for run

python3 -m jupyter lab --allow-root --ip= --no-browser

Tested with the PyTorch kernel in the "tor" virtual environment and MNIST PyTorch example code.

Now you can try all the great resources on NVIDIA’s web page

Thanks to JetsonHacks for all the great reference tutorials


Quick Apt Repository way – NVIDIA CUDA 9.x on Ubuntu 18.04 LTS installation

This is the same NVIDIA CUDA 9.1 setup on Ubuntu 18.04 LTS, but using the apt repository. This approach works and is simpler to follow. Reference is taken from this askubuntu discussion.

Look up the solution to the Nouveau issue in this blog post.

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo ubuntu-drivers autoinstall
sudo reboot

Now install the CUDA toolkit

sudo apt install g++-6
sudo apt install gcc-6
sudo apt install nvidia-cuda-toolkit gcc-6


Run the installer

root@wind:~/Downloads# ./cuda_9.1.85_387.26_linux --override


Setup the environment variables

# Environment variables
export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib64 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib

Provide the soft link for the gcc-6 compiler

sudo ln -s /usr/bin/gcc-6 /usr/local/cuda/bin/gcc
sudo ln -s /usr/bin/g++-6 /usr/local/cuda/bin/g++
sudo reboot


cd ~/NVIDIA_CUDA-9.1_Samples/
make -j4

Upon completion of the compilation test using device query binary

$ cd ~/NVIDIA_CUDA-9.1_Samples/bin/x86_64/linux/release
$ ./deviceQuery


$ sudo bash -c "echo /usr/local/cuda/lib64/ > /etc/"
$ sudo ldconfig


NVIDIA CUDA 9.x on Ubuntu 18.04 LTS installation


An installation guide to take you through the NVIDIA graphics driver as well as the CUDA toolkit setup on an Ubuntu 18.04 LTS system.

A. Know your cards

Verify what graphics card you have on your machine

rahul@karma:~$ lspci | grep VGA
04:00.0 VGA compatible controller: 
NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
rahul@karma:~$ sudo lshw -C video
 description: VGA compatible controller
 product: GM204 [GeForce GTX 970]
 vendor: NVIDIA Corporation
 physical id: 0
 bus info: pci@0000:04:00.0
 version: a1
 width: 64 bits
 clock: 33MHz
 capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
 configuration: driver=nouveau latency=0
 resources: irq:30 memory:f2000000-f2ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:2000(size=128) memory:f3080000-f30fffff

Download the right driver

Downloaded version 390.67 for the GeForce GTX 970.


B. Nouveau problem kills your GPU rush

However, there are solutions available.

Here is what worked for me

  1. Remove all NVIDIA packages (skip this if your system is freshly installed):
    sudo apt-get remove nvidia* && sudo apt autoremove
  2. Install the packages needed to build kernel modules:
    sudo apt-get install dkms build-essential linux-headers-generic
  3. Now blacklist and disable the nouveau kernel driver:
    sudo vim /etc/modprobe.d/nvidia-installer-disable-nouveau.conf

Insert the following lines into nvidia-installer-disable-nouveau.conf:

blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off

Save and exit.

  4. Disable the kernel nouveau by typing the following command (nouveau-kms.conf may not exist; that is OK):
    rahul@wind:~$ echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
    options nouveau modeset=0
  5. Rebuild the initramfs:
    rahul@wind:~$ sudo update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-4.15.0-23-generic
  6. Reboot
Run the installer in runlevel 3
$ sudo init 3 
$ sudo bash
$ ./


More instructions on how to stop using the driver before uninstallation:
sudo nvidia-installer --uninstall

C. NVIDIA X Server Settings

Install this from the ubuntu software center.

D. Start the CUDA related setup

We will need CUDA Toolkit 9.1, which supports the GTX 970 (compute capability 5.2). So download the local installer for Ubuntu.


Downloaded the “*” local installation file.

$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt install nvidia-cuda-toolkit gcc-6

Steps are taken from the CUDA 9.1 official documentation

  1. Perform the pre-installation actions.
  2. Disable the Nouveau drivers. We did this in the driver installation above.
  3. Reboot into text mode (runlevel 3). This can usually be accomplished by adding the number "3" to the end of the system's kernel boot parameters, or by changing the runlevel with 'sudo init 3'.
  4. Verify that the Nouveau drivers are not loaded. If the Nouveau drivers are still loaded, consult your distribution’s documentation to see if further steps are needed to disable Nouveau.
  5. Run the installer and follow the on-screen prompts:
$ chmod +x cuda_9.1.85_387.26_linux
$ rahul@wind:~/Downloads$ ./cuda_9.1.85_387.26_linux --override
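Step 4 of the list above, verifying that the Nouveau drivers are not loaded, can be scripted by parsing `lsmod` output. A hedged sketch (the sample output in the comments is illustrative, not from this machine):

```python
import subprocess

def module_loaded(lsmod_output, module):
    """Check whether a kernel module appears in `lsmod` output.

    `lsmod` prints a header row, then one module per line with the
    module name in the first whitespace-separated column.
    """
    for line in lsmod_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if fields and fields[0] == module:
            return True
    return False

if __name__ == "__main__":
    try:
        out = subprocess.check_output(["lsmod"]).decode()
        print("nouveau loaded:", module_loaded(out, "nouveau"))
    except OSError:
        print("lsmod not available on this system")
```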


Since we already installed the driver above, we answer NO to the NVIDIA accelerated graphics driver installation question.


This will install the CUDA components in the following locations:

  • CUDA Toolkit /usr/local/cuda-9.1
  • CUDA Samples $(HOME)/NVIDIA_CUDA-9.1_Samples

We can verify the graphics card using the nvidia-smi command.



cd /usr/local/cuda-9.1/bin
sudo ./


E. Environment Variables

rahul@wind:~$ vim ~/.bashrc

# Add the following to the environment variables
export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib64 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib

rahul@wind:~$ source ~/.bashrc
rahul@wind:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.1, 


F. Test

Ensure you have the right driver versions

rahul@wind:$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 390.67 Fri Jun 1 04:04:27 PDT 2018
GCC version: gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)

Change directory to the NVIDIA CUDA Samples and compile them

rahul@wind:~/NVIDIA_CUDA-9.1_Samples$ make

Now run the device query test

rahul@wind:~/NVIDIA_CUDA-9.1_Samples/bin/x86_64/linux/release$ ./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)