BlackFly S USB & Spinnaker SDK Integration with Jetson AGX Xavier Development Board


FLIR products are fairly simple to assemble, and below is the setup we used for this USB camera test. We will use the C++ and Python Spinnaker ARM64 packages for Ubuntu 18.04/16.04 on the Jetson AGX Xavier Development Board.

Read the prerequisites:
a.) Spinnaker C++ SDK: http://softwareservices.flir.com/Spinnaker/latest
b.) USB-FS Throughput: https://www.flir.com/support-center/iis/machine-vision/application-note/understanding-usbfs-on-linux/
c.) Buffer handling: https://www.flir.com/support-center/iis/machine-vision/application-note/understanding-buffer-handling/
d.) GenICam Standard: https://www.emva.org/standards-technology/genicam/

All resources for Jetson cameras are available at elinux

https://elinux.org/Jetson/Cameras
BlackFly S USB3 Setup

Resources for Multiple camera setup: https://www.flir.com/support-center/iis/machine-vision/application-note/configuring-synchronized-capture-with-multiple-cameras

GenICam standards for Spinnaker nodes: https://www.flir.eu/support-center/iis/machine-vision/application-note/spinnaker-nodes/
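
Camera features (exposure, gain, device information, stream buffers) are all exposed as GenICam nodes that Spinnaker groups into nodemaps. As a minimal sketch of what that looks like from the Python bindings installed in step 3 below, the snippet lists the model and serial number of each detected camera from the transport-layer device nodemap (the node names are standard GenICam/SFNC names):

# Minimal sketch: read GenICam nodes from the transport-layer device nodemap.
# Assumes the PySpin wheel from step 3 below is installed.
import PySpin

system = PySpin.System.GetInstance()
cam_list = system.GetCameras()
print("Cameras detected:", cam_list.GetSize())

for cam in cam_list:
    nodemap = cam.GetTLDeviceNodeMap()          # readable without cam.Init()
    model = PySpin.CStringPtr(nodemap.GetNode("DeviceModelName"))
    serial = PySpin.CStringPtr(nodemap.GetNode("DeviceSerialNumber"))
    if PySpin.IsReadable(model) and PySpin.IsReadable(serial):
        print(model.GetValue(), serial.GetValue())
    del cam                                     # release the camera reference

cam_list.Clear()
system.ReleaseInstance()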

1.) List the devices attached to the system
$ usb-devices

T: Bus=02 Lev=01 Prnt=01 Port=02 Cnt=02 Dev#= 3 Spd=5000 MxCh= 0
D: Ver= 3.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS= 9 #Cfgs= 1
P: Vendor=1e10 ProdID=4000 Rev=00.00
S: Manufacturer=FLIR
S: Product=Blackfly S BFS-U3-63S4C
S: SerialNumber=0133D049
C: #Ifs= 3 Cfg#= 1 Atr=80 MxPwr=896mA
I: If#= 0 Alt= 0 #EPs= 2 Cls=ef(misc ) Sub=05 Prot=00 Driver=(none)
I: If#= 1 Alt= 0 #EPs= 1 Cls=ef(misc ) Sub=05 Prot=01 Driver=(none)
I: If#= 2 Alt= 0 #EPs= 1 Cls=ef(misc ) Sub=05 Prot=02 Driver=(none)

Increase the USB memory limit
$ sudo sh -c 'echo 1000 > /sys/module/usbcore/parameters/usbfs_memory_mb'
$ cat /sys/module/usbcore/parameters/usbfs_memory_mb
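
If acquisition later drops or returns incomplete frames, the USB-FS limit is the first thing to re-check. The same value can be read from Python at runtime; a small sketch using the sysfs path above:

# Read the current USB-FS memory limit (in MB) and warn if it is still the 16 MB default
USBFS_PARAM = "/sys/module/usbcore/parameters/usbfs_memory_mb"

with open(USBFS_PARAM) as f:
    limit_mb = int(f.read().strip())

print("usbfs_memory_mb =", limit_mb)
if limit_mb < 1000:
    print("Warning: USB-FS buffer is low; raise it to 1000 MB as shown above.")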

2.) Spinnaker 64 Bit ARM SDK
Ref: https://www.flir.com/support-center/iis/machine-vision/application-note/getting-started-with-the-nvidia-jetson-platform

  • Unzip the downloaded SDK archive and run install_spinnaker_arm.sh
Would you like to add a udev entry to allow access to USB hardware?
If a udev entry is not added, your cameras may only be accessible by running Spinnaker as sudo.
[Y/n] $ Y
Launching udev configuration script…
This script will assist users in configuring their udev rules to allow
access to USB devices. The script will create a udev rule which will
add FLIR USB devices to a group called flirimaging. The user may also
choose to restart the udev daemon. All of this can be done manually as well.
Adding new members to usergroup flirimaging…
Usergroup flirimaging is empty
To add a new member please enter username (or hit Enter to continue):
$ nv
Adding user nv to group flirimaging group. Is this OK?
[Y/n] $ Y
Added user nv
Current members of flirimaging group: nv

Writing the udev rules file…
Do you want to restart the udev daemon?
[Y/n] $ Y
[ ok ] Restarting udev (via systemctl): udev.service.
Configuration complete.
A reboot may be required on some systems for changes to take effect.
Would you like to set USB-FS memory size to 1000 MB at startup (via /etc/rc.local)?
By default, Linux systems only allocate 16 MB of USB-FS buffer memory for all USB devices.
This may result in image acquisition issues from high-resolution cameras or multiple-camera set ups.
NOTE: You can set this at any time by following the USB notes in the included README.
[Y/n] $ Y
Launching USB-FS configuration script…
Created /etc/rc.local and set USB-FS memory to 1000 MB.
Installation complete.
Would you like to make a difference by participating in the Spinnaker feedback program?
[Y/n] $ n
Join the feedback program anytime at "https://www.flir.com/spinnaker/survey"!
Thank you for installing the Spinnaker SDK.

3.) Python Env Setup

Select the ARM64 download for Ubuntu 18.04.
Path/Link: Spinnaker/Linux Ubuntu/Python/Ubuntu18.04/arm64/spinnaker_python-2.0.0.109-cp36-cp36m-linux_aarch64.tar.gz

Extract the tarball and pip-install the bundled PySpin wheel into the target virtual environment, then install the dependencies needed by the examples:

$ sudo apt install libfreetype6-dev python3-tk
$ pip3 install matplotlib Pillow keyboard

nv@agx:~/spinnaker_python-2.0.0.109-cp36-cp36m-linux_aarch64/Examples/Python3$ sudo /home/nv/.virtualenvs/tor/bin/python AcquireAndDisplay.py

Library version: 2.0.0.109
Number of cameras detected: 1
Running example for camera 0…
*** IMAGE ACQUISITION ***

Acquisition mode set to continuous…
Acquiring images…
Device serial number retrieved as 20172873…
Press enter to close the program..
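
AcquireAndDisplay.py grabs frames continuously and plots them with matplotlib. Stripped of the display code, the acquisition flow looks roughly like the sketch below (same pattern as the bundled example: buffer handling set to NewestOnly, continuous acquisition; the frame count and timeout here are illustrative):

# Condensed sketch of the acquisition flow in the bundled AcquireAndDisplay.py example
# (assumes one camera is connected; frame count and timeout are illustrative).
import PySpin

system = PySpin.System.GetInstance()
cam_list = system.GetCameras()
cam = cam_list.GetByIndex(0)
cam.Init()

# Deliver the newest frame and drop older ones (see the buffer-handling note above)
stream_nodemap = cam.GetTLStreamNodeMap()
handling_mode = PySpin.CEnumerationPtr(stream_nodemap.GetNode("StreamBufferHandlingMode"))
newest_only = handling_mode.GetEntryByName("NewestOnly")
handling_mode.SetIntValue(newest_only.GetValue())

# Continuous acquisition, matching the console output above
nodemap = cam.GetNodeMap()
acq_mode = PySpin.CEnumerationPtr(nodemap.GetNode("AcquisitionMode"))
acq_mode.SetIntValue(acq_mode.GetEntryByName("Continuous").GetValue())

cam.BeginAcquisition()
for i in range(10):                       # grab a handful of frames
    image = cam.GetNextImage(1000)        # 1000 ms timeout
    if not image.IsIncomplete():
        print(i, image.GetWidth(), "x", image.GetHeight())
    image.Release()
cam.EndAcquisition()

cam.DeInit()
del cam
cam_list.Clear()
system.ReleaseInstance()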

We can see the bug barrier at about 3 feet from the camera:
– Focus is set roughly in the middle of the range
– Iris at F1.4

Fujinon Lens on BlackFly Camera: http://mvlens.fujifilm.com/en/product/hfha.html

Jetson AGX Xavier Development Kit Setup for Deep Learning (TensorFlow, PyTorch and Jupyter Lab) with JetPack 4.x SDK

NVIDIA Jetson AGX Xavier Developer Kit
https://developer.nvidia.com/embedded/jetson-agx-xavier-developer-kit

There are many examples available at https://github.com/NVIDIA-AI-IOT/tf_trt_models that can be used to create a custom detector. However, here we set up a basic development environment to get started with TensorFlow, PyTorch and Jupyter Lab on the device.

https://developer.nvidia.com/embedded/twodaystoademo

Setup Jetson AGX Xavier Development Kit

  1. NVIDIA SDK Manager for flashing (https://docs.nvidia.com/sdk-manager/install-with-sdkm-jetson/index.html)
  2. Use the SDK manager to also install Jetpack and other components
  3. Install the Intel Wireless-AC 8265 Wi-Fi and Bluetooth card (https://www.jetsonhacks.com/2019/04/08/jetson-nano-intel-wifi-and-bluetooth/)
  4. Install M2 NVMe SSD storage (https://www.jetsonhacks.com/2018/10/18/install-nvme-ssd-on-nvidia-jetson-agx-developer-kit/)
  5. Move the rootfs to SSD (https://github.com/jetsonhacks/rootOnNVMe)
  6. Jetson reference Zoo: https://elinux.org/Jetson_Zoo

With this we have the AGX Xavier kit running JetPack 4.4 DP with the rootfs on SSD and internet access via the Intel Wireless-AC 8265.

Jetson Family of products (We are looking at AGX Xavier)

Deep Learning Environment/Framework Setup

  1. Set up virtualenvwrapper so that each framework gets its own Python environment
mkvirtualenv <environment_name> -p python3
  2. Install TensorFlow 1.15 and 2.1 with Python 3.6 and JetPack 4.4 DP
    (https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html)
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
sudo apt-get install python3-pip
pip3 install -U pip
pip3 install -U pip testresources setuptools numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11

Make sure you install the Python packages above in each of the TensorFlow 1.15 and 2.1 virtual environments (tf1 and tf2) that we create next.

https://forums.developer.nvidia.com/t/official-tensorflow-for-jetson-agx-xavier

a.) Create Virtual Environment for Tensorflow 1.15 installation
mkvirtualenv tf1 -p python3

# TF-1.15
pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'

b.) Create Virtual Environment for TensorFlow 2.1.0 installation
mkvirtualenv tf2 -p python3

# TF-2.x
pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow

c.) Test the installation with the MNIST LeNet example for both TensorFlow versions (https://forums.developer.nvidia.com/t/problem-to-install-tensorflow-on-xavier-solved/64991/11)
Make sure to upgrade Keras to 2.2.4 (pip install keras)
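
Before running the full LeNet example, a quick sanity check that the GPU is visible and a small model trains end to end can save time. A minimal sketch that works in both the tf1 and tf2 virtualenvs (this uses tf.keras rather than the standalone Keras package mentioned above; the tiny model and epoch count are illustrative):

# Quick sanity check for the tf1 and tf2 virtualenvs (workon tf1 / workon tf2).
import tensorflow as tf

print("TensorFlow:", tf.__version__)
if tf.__version__.startswith("1."):
    print("GPU available:", tf.test.is_gpu_available())
else:
    print("GPUs:", tf.config.list_physical_devices("GPU"))

# Train one epoch on MNIST to confirm the GPU path works end to end
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128,
          validation_data=(x_test, y_test))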


3. Install PyTorch 1.5 in virtualenv (https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-5-0-now-available)
mkvirtualenv tor -p python3

# Python 3.6 and Jetpack 4.4 DP
wget https://nvidia.box.com/shared/static/3ibazbiwtkl181n95n9em3wtrca7tdzp.whl -O torch-1.5.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev 
pip3 install Cython testresources setuptools pybind11
pip3 install numpy torch-1.5.0-cp36-cp36m-linux_aarch64.whl

Select the version of torchvision to download depending on the version of PyTorch that you have installed:

PyTorch v1.0 - torchvision v0.2.2
PyTorch v1.1 - torchvision v0.3.0
PyTorch v1.2 - torchvision v0.4.0
PyTorch v1.3 - torchvision v0.4.2
PyTorch v1.4 - torchvision v0.5.0
PyTorch v1.5 - torchvision v0.6.0  <---- Selected for Installation 

Install torchvision

sudo apt-get install libjpeg-dev zlib1g-dev
git clone --branch v0.6.0 https://github.com/pytorch/vision torchvision   # see above for version of torchvision to download
cd torchvision
python setup.py install
cd ../  # attempting to load torchvision from build dir will result in import error

Test the PyTorch installation using the MNIST example: https://github.com/pytorch/examples/blob/master/mnist/main.py
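
Before the MNIST run, a quick check confirms that both wheels import and that CUDA is visible from the tor virtualenv; a minimal sketch:

# Quick check that PyTorch and torchvision see the Xavier GPU
# (run inside the "tor" virtualenv with the wheels installed above).
import torch
import torchvision

print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.rand(2, 3, device="cuda")
    print("Sample tensor on GPU:", x)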


4. Install Jupyter for development across these virtualenv/kernels
Reference: https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html

a.) Add /home/nv/.local/bin/ to PATH for local or user installed packages
export PATH=/home/nv/.local/bin:$PATH

b.) Install Jupyterlab and add kernel from virtualenv path

python3 -m pip install jupyterlab ipykernel
python3 -m jupyter  --version

Set up the kernelspec for the Python virtual environment created at ~/.virtualenvs/tor/:

python3 -m ipykernel install --user --name=tor 

Installed kernelspec tor in ~/.local/share/jupyter/kernels/tor

c.) Edit the kernel.json file in the environment's kernelspec folder and change the default argv from “/usr/bin/python3” to “/home/nv/.virtualenvs/tor/bin/python”

 "argv": [
  "/home/nv/.virtualenvs/tor/bin/python",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
 ],
 "display_name": "tor",
 "language": "python"
}

Verify the kernels available using kernelspec for each Python virtual environment. We have three in the current setup, corresponding to the TensorFlow 1.15, 2.1 and PyTorch 1.5 installations (plus the default python3 kernel).

nv@agx$ python3 -m jupyter kernelspec list
Available kernels:
  python3    /home/nv/.local/share/jupyter/kernels/python3
  tf1        /home/nv/.local/share/jupyter/kernels/tf1
  tf2        /home/nv/.local/share/jupyter/kernels/tf2
  tor        /home/nv/.local/share/jupyter/kernels/tor

Start the Jupyter Lab server and select the “tor” kernel:

python3 -m jupyter lab --allow-root --ip=0.0.0.0 --no-browser

Tested with PyTorch Kernel at “tor” virtual environment and MNIST PyTorch code at https://github.com/pytorch/examples/blob/master/mnist/main.py
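
A quick way to confirm that a notebook is really running inside the intended virtualenv is to check the interpreter path from the kernel itself; in a cell with the “tor” kernel selected:

# The path should match the interpreter set in kernel.json, not /usr/bin/python3
import sys
print(sys.executable)   # expected: /home/nv/.virtualenvs/tor/bin/python

import torch
print(torch.__version__, torch.cuda.is_available())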

Now you can try all the great resources on NVIDIA’s web page https://developer.nvidia.com/embedded/twodaystoademo


Thanks to JetsonHacks for all the great reference tutorials

@Jetsonhacks

Quick Apt Repository way – NVIDIA CUDA 9.x on Ubuntu 18.04 LTS installation

This is the same NVIDIA CUDA 9.1 setup on Ubuntu 18.04 LTS, but done through the apt repository. This approach works and is simple to follow. The reference is taken from this askubuntu discussion.

Look up the solution to the Nouveau issue in this blog post.

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo ubuntu-drivers autoinstall
sudo reboot

Now install the CUDA toolkit

sudo apt install g++-6
sudo apt install gcc-6
sudo apt install nvidia-cuda-toolkit gcc-6


Run the installer

root@wind:~/Downloads# ./cuda_9.1.85_387.26_linux --override


Setup the environment variables

# Environment variables
export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib64 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib

Provide the soft link for the gcc-6 compiler

sudo ln -s /usr/bin/gcc-6 /usr/local/cuda/bin/gcc
sudo ln -s /usr/bin/g++-6 /usr/local/cuda/bin/g++
sudo reboot

Test

cd ~/NVIDIA_CUDA-9.1_Samples/
make -j4

Upon completion of the compilation test using device query binary

$ cd ~/NVIDIA_CUDA-9.1_Samples/bin/x86_64/linux/release
$ ./deviceQuery


$ sudo bash -c "echo /usr/local/cuda/lib64/ > /etc/ld.so.conf.d/cuda.conf"
$ sudo ldconfig

DONE

NVIDIA CUDA 9.x on Ubuntu 18.04 LTS installation

Guide

An installation guide to take you through the NVIDIA graphics driver as well as the CUDA toolkit setup on Ubuntu 18.04 LTS.

A. Know your cards

Verify what graphics card you have on your machine

rahul@karma:~$ lspci | grep VGA
04:00.0 VGA compatible controller: 
NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
rahul@karma:~$ sudo lshw -C video
 *-display 
 description: VGA compatible controller
 product: GM204 [GeForce GTX 970]
 vendor: NVIDIA Corporation
 physical id: 0
 bus info: pci@0000:04:00.0
 version: a1
 width: 64 bits
 clock: 33MHz
 capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
 configuration: driver=nouveau latency=0
 resources: irq:30 memory:f2000000-f2ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:2000(size=128) memory:f3080000-f30fffff

Download the right driver

We downloaded version 390.67 for the GeForce GTX 970.


B. Nouveau problem kills your GPU rush

However, there are solutions available.

Here is what worked for me:

  1. Remove all NVIDIA packages (skip this step if your system is freshly installed):
    sudo apt-get remove nvidia* && sudo apt autoremove

  2. Install the packages needed to build the kernel modules:
    sudo apt-get install dkms build-essential linux-headers-generic

  3. Block and disable the Nouveau kernel driver:
    sudo vim /etc/modprobe.d/nvidia-installer-disable-nouveau.conf
    

Insert the following lines into nvidia-installer-disable-nouveau.conf:

blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off

Save and exit.

  4. Disable the kernel Nouveau by typing the following command (nouveau-kms.conf may not exist; that is OK):
    rahul@wind:~$ echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
    options nouveau modeset=0

  5. Rebuild the initramfs:
    rahul@wind:~$ sudo update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-4.15.0-23-generic

  6. Reboot.
Run the installer in runlevel 3:
$ sudo init 3 
$ sudo bash
$ ./NVIDIA-Linux-x86_64-390.67.run

Uninstall

There are more instructions on how to stop using the driver before uninstalling it. To uninstall, run:
sudo nvidia-installer --uninstall

C. NVIDIA X Server Settings

Install this from the Ubuntu Software Center.

D. Start the CUDA related setup

We will need CUDA Toolkit 9.1, which supports the GTX 970 (Maxwell, compute capability 5.2; CUDA 9.1 requires compute capability 3.0 or higher). So download the local installer for Ubuntu.


We downloaded the “cuda_9.1.85_387.26_linux.run” local installation file.

$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt install nvidia-cuda-toolkit gcc-6

Steps are taken from the CUDA 9.1 official documentation

  1. Perform the pre-installation actions.
  2. Disable the Nouveau drivers. We did this in the driver installation above.
  3. Reboot into text mode (runlevel 3). This can usually be accomplished by adding the number “3” to the end of the system’s kernel boot parameters, or by changing the runlevel with ‘sudo init 3’.
  4. Verify that the Nouveau drivers are not loaded. If the Nouveau drivers are still loaded, consult your distribution’s documentation to see if further steps are needed to disable Nouveau.
  5. Run the installer and follow the on-screen prompts:
$ chmod +x cuda_9.1.85_387.26_linux
rahul@wind:~/Downloads$ ./cuda_9.1.85_387.26_linux --override


Since we already installed the driver above, we answer NO to the NVIDIA accelerated graphics driver installation question.


This will install the CUDA components in the following locations:

  • CUDA Toolkit /usr/local/cuda-9.1
  • CUDA Samples $(HOME)/NVIDIA_CUDA-9.1_Samples

We can verify the graphics card using the nvidia-smi command.


Uninstallation

cd /usr/local/cuda-9.1/bin
sudo ./uninstall_cuda_9.1.pl

 

E. Environment Variables

rahul@wind:~$ vim ~/.bashrc

# Add the following to the environment variables
export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib64 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib

rahul@wind:~$ source ~/.bashrc
rahul@wind:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.1, 

 

F. Test

Ensure you have the right driver versions

rahul@wind:$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 390.67 Fri Jun 1 04:04:27 PDT 2018
GCC version: gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)

Change directory to the NVIDIA CUDA Samples and compile them

rahul@wind:~/NVIDIA_CUDA-9.1_Samples$ make

Now run the device query test

rahul@wind:~/NVIDIA_CUDA-9.1_Samples/bin/x86_64/linux/release$ ./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

 

END