RP LIDAR A3 USB Integration with Jetson AGX Xavier Development Board

  1. Jetson AGX Xavier Dev Board Setup
  2. RPLIDAR A3 SDK : http://www.slamtec.com/en/support#rplidar-a-series
  3. Jetson Hacks Ref: https://www.jetsonhacks.com/2018/12/07/rplidar-a2-nvidia-jetson-development-kits/

1.) List the devices attached to the system
$ usb-devices

T:  Bus=01 Lev=02 Prnt=02 Port=02 Cnt=02 Dev#=  4 Spd=12  MxCh= 0
D:  Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs=  1
P:  Vendor=10c4 ProdID=ea60 Rev=01.00
S:  Manufacturer=Silicon Labs
S:  Product=CP2102 USB to UART Bridge Controller
S:  SerialNumber=0001
C:  #Ifs= 1 Cfg#= 1 Atr=80 MxPwr=100mA
I:  If#= 0 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=00 Prot=00 Driver=cp210x

2.) Install rplidar_sdk

$ git clone https://github.com/Slamtec/rplidar_sdk
$ cd rplidar_sdk/sdk
$ make
$ sudo adduser $USER dialout
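Note that the dialout membership only takes effect at the next login. As a quick sketch, group membership can be checked by parsing /etc/group-style content (in_group is a hypothetical helper, not a system tool):

```python
def in_group(etc_group_text, group, user):
    """Check whether `user` is listed under `group` in /etc/group-style content."""
    for line in etc_group_text.splitlines():
        fields = line.split(":")
        # /etc/group format: name:password:gid:member,member,...
        if len(fields) == 4 and fields[0] == group:
            return user in fields[3].split(",")
    return False

sample = "dialout:x:20:nv\nvideo:x:44:nv,other"
print(in_group(sample, "dialout", "nv"))   # True
print(in_group(sample, "dialout", "bob"))  # False
```

In practice you can simply run `groups` after re-logging in and look for dialout.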

3.) Run the ultra_simple example

$ cd output/Linux/Release/
$ ./ultra_simple /dev/ttyUSB0

nv@agx:$ cd ~/rplidar_sdk/sdk/output/Linux/Release
nv@agx:$ ./ultra_simple /dev/ttyUSB0 

Ultra simple LIDAR data grabber for RPLIDAR.
Version: 1.12.0
RPLIDAR S/N: B8949A87C5E392D5A5E492F82841316B
Firmware Ver: 1.27
Hardware Rev: 6
RPLidar health status : 0
   theta: 0.15 Dist: 00000.00 Q: 0 
   theta: 0.23 Dist: 00148.00 Q: 188 
   theta: 0.36 Dist: 00000.00 Q: 0 
   theta: 0.57 Dist: 00000.00 Q: 0 
   theta: 0.76 Dist: 00149.00 Q: 188 
   theta: 0.77 Dist: 00000.00 Q: 0 
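Each ultra_simple line is one polar sample: theta in degrees, distance in mm, and a quality value (quality 0 with zero distance means an invalid return). A minimal sketch, not part of the SDK, for converting such samples to Cartesian points for mapping:

```python
import math

def polar_to_cartesian(theta_deg, dist_mm):
    """Convert one RPLIDAR sample (angle in degrees, range in mm) to (x, y) in mm."""
    theta_rad = math.radians(theta_deg)
    return (dist_mm * math.cos(theta_rad), dist_mm * math.sin(theta_rad))

def filter_scan(samples, min_quality=10):
    """Drop zero-range/low-quality returns and convert the rest to Cartesian."""
    return [polar_to_cartesian(t, d) for (t, d, q) in samples
            if d > 0 and q >= min_quality]

# Samples as printed by ultra_simple: (theta, dist, quality)
scan = [(0.15, 0.0, 0), (0.23, 148.0, 188), (0.76, 149.0, 188)]
points = filter_scan(scan)
print(points)  # only the two valid returns survive
```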

BlackFly S USB & Spinnaker SDK Integration with Jetson AGX Xavier Development Board

FLIR products are fairly simple to assemble; below are the components we used for this USB camera test setup. We will use the C++ and Python Spinnaker ARM64 packages for Ubuntu 18.04/16.04, available for the Jetson AGX Xavier Development Board.

Read these pre-requisites:
a.) Spinnaker C++ SDK: http://softwareservices.flir.com/Spinnaker/latest
b.) USB-FS Throughput: https://www.flir.com/support-center/iis/machine-vision/application-note/understanding-usbfs-on-linux/
c.) Buffer handling: https://www.flir.com/support-center/iis/machine-vision/application-note/understanding-buffer-handling/
d.) GenICam Standard: https://www.emva.org/standards-technology/genicam/

All resources for Jetson cameras are available on the eLinux wiki

BlackFly S USB3 Setup

Resources for Multiple camera setup: https://www.flir.com/support-center/iis/machine-vision/application-note/configuring-synchronized-capture-with-multiple-cameras

GenICam Standards for Spinnaker Node: https://www.flir.eu/support-center/iis/machine-vision/application-note/spinnaker-nodes/

1.) List the devices attached to the system
$ usb-devices

T: Bus=02 Lev=01 Prnt=01 Port=02 Cnt=02 Dev#= 3 Spd=5000 MxCh= 0
D: Ver= 3.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS= 9 #Cfgs= 1
P: Vendor=1e10 ProdID=4000 Rev=00.00
S: Manufacturer=FLIR
S: Product=Blackfly S BFS-U3-63S4C
S: SerialNumber=0133D049
C: #Ifs= 3 Cfg#= 1 Atr=80 MxPwr=896mA
I: If#= 0 Alt= 0 #EPs= 2 Cls=ef(misc ) Sub=05 Prot=00 Driver=(none)
I: If#= 1 Alt= 0 #EPs= 1 Cls=ef(misc ) Sub=05 Prot=01 Driver=(none)
I: If#= 2 Alt= 0 #EPs= 1 Cls=ef(misc ) Sub=05 Prot=02 Driver=(none)

Increase the USB memory limit
$ sudo sh -c 'echo 1000 > /sys/module/usbcore/parameters/usbfs_memory_mb'
$ cat /sys/module/usbcore/parameters/usbfs_memory_mb
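To make this check scriptable, a small helper (hypothetical, with the sysfs path as a parameter so it can be tested off-device) can read the current allocation back:

```python
def usbfs_memory_mb(path="/sys/module/usbcore/parameters/usbfs_memory_mb"):
    """Return the current USB-FS buffer allocation in MB, or None if unreadable."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None

mb = usbfs_memory_mb()
if mb is not None and mb < 1000:
    print(f"USB-FS buffer is only {mb} MB; high-resolution cameras may drop frames")
```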

2.) Spinnaker 64 Bit ARM SDK
Ref: https://www.flir.com/support-center/iis/machine-vision/application-note/getting-started-with-the-nvidia-jetson-platform

  • Unzip the archive and run install_spinnaker_arm.sh
Would you like to add a udev entry to allow access to USB hardware?
If a udev entry is not added, your cameras may only be accessible by running Spinnaker as sudo.
[Y/n] $ Y
Launching udev configuration script…
This script will assist users in configuring their udev rules to allow
access to USB devices. The script will create a udev rule which will
add FLIR USB devices to a group called flirimaging. The user may also
choose to restart the udev daemon. All of this can be done manually as well.
Adding new members to usergroup flirimaging…
Usergroup flirimaging is empty
To add a new member please enter username (or hit Enter to continue):
$ nv
Adding user nv to group flirimaging group. Is this OK?
[Y/n] $ Y
Added user nv
Current members of flirimaging group: nv

Writing the udev rules file…
Do you want to restart the udev daemon?
[Y/n] $ Y
[ ok ] Restarting udev (via systemctl): udev.service.
Configuration complete.
A reboot may be required on some systems for changes to take effect.
Would you like to set USB-FS memory size to 1000 MB at startup (via /etc/rc.local)?
By default, Linux systems only allocate 16 MB of USB-FS buffer memory for all USB devices.
This may result in image acquisition issues from high-resolution cameras or multiple-camera set ups.
NOTE: You can set this at any time by following the USB notes in the included README.
[Y/n] $ Y
Launching USB-FS configuration script…
Created /etc/rc.local and set USB-FS memory to 1000 MB.
Installation complete.
Would you like to make a difference by participating in the Spinnaker feedback program?
[Y/n] $ n
Join the feedback program anytime at "https://www.flir.com/spinnaker/survey"!
Thank you for installing the Spinnaker SDK.

3.) Python Env Setup

Select ARM64 version Download for Ubuntu 18.04
Path/Link: Spinnaker/Linux Ubuntu/Python/Ubuntu18.04/arm64/spinnaker_python-

$ sudo apt install libfreetype6-dev python3-tk
$ pip3 install matplotlib Pillow keyboard

nv@agx:~/spinnaker_python-$ sudo /home/nv/.virtualenvs/tor/bin/python AcquireAndDisplay.py

Library version:
Number of cameras detected: 1
Running example for camera 0…

Acquisition mode set to continuous…
Acquiring images…
Device serial number retrieved as 20172873…
Press enter to close the program..

We can see the Bug barrier at about 3 feet from the camera
– Focus is somewhere in the middle
– Iris range F1.4

Fujinon Lens on BlackFly Camera: http://mvlens.fujifilm.com/en/product/hfha.html

Jetson AGX Xavier Development Kit Setup for Deep Learning (Tensorflow, PyTorch and Jupyter Lab) with JetPack 4.x SDK

NVIDIA Jetson AGX Xavier Developer Kit

There are many examples available at https://github.com/NVIDIA-AI-IOT/tf_trt_models that can be used to create a custom detector. However here we look at a basic development environment to get started with Tensorflow, PyTorch and Jupyter Lab on the device.


Setup Jetson AGX Xavier Development Kit

  1. NVIDIA SDK Manager for flashing (https://docs.nvidia.com/sdk-manager/install-with-sdkm-jetson/index.html)
  2. Use the SDK manager to also install Jetpack and other components
  3. Install Intel Wifi Ac 8265 with Bluetooth (https://www.jetsonhacks.com/2019/04/08/jetson-nano-intel-wifi-and-bluetooth/)
  4. Install M2 NVMe SSD storage (https://www.jetsonhacks.com/2018/10/18/install-nvme-ssd-on-nvidia-jetson-agx-developer-kit/)
  5. Move the rootfs to SSD (https://github.com/jetsonhacks/rootOnNVMe)
  6. Jetson reference Zoo: https://elinux.org/Jetson_Zoo

With this we will have the AGX Xavier Kit running JetPack 4.4 DP with rootfs on SSD and internet access via the Intel WiFi 8265.

Jetson Family of products (We are looking at AGX Xavier)

Deep Learning Environment/Framework Setup

  1. Setup VirtualEnvWrapper for each framework's Python install environment
mkvirtualenv <environment_name> -p python3
  2. Install TensorFlow 1.15 and 2.1 with Python 3.6 and JetPack 4.4 DP
sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran
sudo apt-get install python3-pip
pip3 install -U pip
pip3 install -U pip testresources setuptools numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11

Make sure you install the python packages as above for each of the TensorFlow 1.15 and 2.1 virtual environments (tf1 and tf2) that we create next.


a.) Create Virtual Environment for Tensorflow 1.15 installation
mkvirtualenv tf1 -p python3

# TF-1.15
pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'

b.) Create Virtual Environment for TensorFlow 2.1.0 installation
mkvirtualenv tf2 -p python3

# TF-2.x
pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow

c.) Test the installation MNIST LeNet for both the tensorflow versions (https://forums.developer.nvidia.com/t/problem-to-install-tensorflow-on-xavier-solved/64991/11)
Make sure to upgrade keras to 2.2.4 (pip install keras)

3. Install PyTorch 1.5 in virtualenv (https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-5-0-now-available)
mkvirtualenv tor -p python3

# Python 3.6 and Jetpack 4.4 DP
wget https://nvidia.box.com/shared/static/3ibazbiwtkl181n95n9em3wtrca7tdzp.whl -O torch-1.5.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev 
pip3 install Cython testresources setuptools pybind11
pip3 install numpy torch-1.5.0-cp36-cp36m-linux_aarch64.whl

Select the version of torchvision to download depending on the version of PyTorch that you have installed:

PyTorch v1.0 - torchvision v0.2.2
PyTorch v1.1 - torchvision v0.3.0
PyTorch v1.2 - torchvision v0.4.0
PyTorch v1.3 - torchvision v0.4.2
PyTorch v1.4 - torchvision v0.5.0
PyTorch v1.5 - torchvision v0.6.0  <---- Selected for Installation 
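The compatibility table above is easy to encode as a lookup, so a setup script can pick the right torchvision branch (torchvision_branch is an illustrative helper, not part of PyTorch):

```python
# Mapping taken from the PyTorch -> torchvision compatibility table above
TORCHVISION_FOR_TORCH = {
    "1.0": "v0.2.2",
    "1.1": "v0.3.0",
    "1.2": "v0.4.0",
    "1.3": "v0.4.2",
    "1.4": "v0.5.0",
    "1.5": "v0.6.0",
}

def torchvision_branch(torch_version):
    """Return the torchvision git branch for a torch version, or None if unknown."""
    key = ".".join(torch_version.split(".")[:2])  # keep major.minor only
    return TORCHVISION_FOR_TORCH.get(key)

print(torchvision_branch("1.5.0"))  # v0.6.0
```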

Install torchvision

sudo apt-get install libjpeg-dev zlib1g-dev
git clone --branch v0.6.0 https://github.com/pytorch/vision torchvision   # see above for version of torchvision to download
cd torchvision
python setup.py install
cd ../  # attempting to load torchvision from build dir will result in import error

Test Pytorch installation using MNIST https://github.com/pytorch/examples/blob/master/mnist/main.py

4. Install Jupyter for development across these virtualenv/kernels
Reference: https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html

a.) Add /home/nv/.local/bin/ to PATH for local or user installed packages
export PATH=/home/nv/.local/bin:$PATH

b.) Install Jupyterlab and add kernel from virtualenv path

python3 -m pip install jupyterlab ipykernel
python3 -m jupyter  --version

Setup the python virtual env kernel spec file for virtual environment created at ~/.virtualenvs/tor/

python3 -m ipykernel install --user --name=tor 

Installed kernelspec tor in ~/.local/share/jupyter/kernels/tor

c.) Edit the kernel.json file in the env's kernelspec folder and change the first "argv" entry from "/usr/bin/python3" to "/home/nv/.virtualenvs/tor/bin/python"

 "argv": [
  "/home/nv/.virtualenvs/tor/bin/python",
  "-m", "ipykernel_launcher",
  "-f", "{connection_file}"
 ],
 "display_name": "tor",
 "language": "python"

Verify the available kernels using kernelspec for each Python virtual environment. We have three custom kernels in the current setup, corresponding to the TensorFlow 1.15, TensorFlow 2.1, and PyTorch 1.5 installations (plus the default python3).

nv@agx$ python3 -m jupyter kernelspec list
Available kernels:
  python3    /home/nv/.local/share/jupyter/kernels/python3
  tf1        /home/nv/.local/share/jupyter/kernels/tf1
  tf2        /home/nv/.local/share/jupyter/kernels/tf2
  tor        /home/nv/.local/share/jupyter/kernels/tor

Start the Jupyter Lab server and select “tor” kernel for run

python3 -m jupyter lab --allow-root --ip= --no-browser

Tested with PyTorch Kernel at “tor” virtual environment and MNIST PyTorch code at https://github.com/pytorch/examples/blob/master/mnist/main.py

Now you can try all the great resources on NVIDIA’s web page https://developer.nvidia.com/embedded/twodaystoademo

Thanks to JetsonHacks for all the great reference tutorials


Visualize RGBD using RealSense d435i + ROS(Melodic) on Jetson Nano Dev Kit

Here we build the RealSense 435i library and install realsense-ros on the Jetson Nano, so that we can stream RGBD data on ROS topics and visualize it in RViz.

For ROS2 (Dashing)

Now, let's look at ROS (Melodic)

A. Setup guide for Jetson Nano Developer Kit

Go through the docs to flash the Ubuntu 18.04-based Jetson Linux image to a micro SD card


B. Setup Guide for RealSense 435i

1. Download the RealSense SDK library (release 2.31) onto the Jetson device

mkdir ~/ws && cd ~/ws
mkdir lib &&  cd lib
wget https://github.com/IntelRealSense/librealsense/archive/v2.31.0.tar.gz
tar -xvzf v2.31.0.tar.gz 
cd librealsense-2.31.0
  • Setup environment variables in ~/.bashrc file and source it
export PATH=${PATH}:/usr/local/cuda/bin
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64

2. Build instructions for Linux Ubuntu can be found at github install documentation for RealSense


sudo apt-get install -y git libssl-dev build-essential python3-dev
sudo apt-get install -y bc bzip2 xz-utils git-core vim-common
sudo apt-get install -y libusb-1.0-0-dev pkg-config libgtk-3-dev
sudo apt-get install -y libglfw3-dev libgl1-mesa-dev libglu1-mesa-dev

You may have to build the kernel and update to 4.15 if the default version 4.9 used in this blog is unsupported. Documentation for this is available at developer.ridgerun.com

A simpler option is the JetsonHacks repo at https://github.com/JetsonHacksNano/buildKernelAndModules


Build instructions were taken from the JetsonHacks Nano help page for compilation flags and environment variables.

Now, navigate to the librealsense root directory

mkdir build && cd build
sudo make uninstall && make clean
make -j3 && sudo make install

This step may take an hour or so, since we are compiling on the Jetson Nano.

Update the rules from the librealsense folder and reboot

sudo cp config/99-realsense-libusb.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules && udevadm trigger
sudo reboot

Connect your camera and open up “realsense-viewer” from command line.

Let us set up a VNC viewer with the XFCE4 desktop environment to check the camera views on the device.

sudo apt install -y xfce4 xfce4-goodies
sudo apt install -y tightvncserver

Go through the instructions in the documentation for VNC server.


C. Setup ROS on the Device

We will discuss how to set up ROS and connect the RealSense RGBD camera for SLAM. The setup guide for Ubuntu installation is available on the ROS website (http://wiki.ros.org/melodic/Installation/Ubuntu), but the Jetson Nano specifics are covered in the steps below.

1. Install ROS (Melodic) on the Jetson Nano

Once we have set up the catkin workspace, let's start the roscore server

nv@nvc2:~/ws/lib/installROS$ roscore
... logging to /home/nv/.ros/log/~/roslaunch-nvc2-32701.log 
Checking log directory for disk usage. This may take awhile
Press Ctrl-C to interrupt                                  
Done checking log file disk usage. Usage is <1GB.          
started roslaunch server       
ros_comm version 1.14.3                                    
 * /rosdistro: melodic                                     
 * /rosversion: 1.14.3                                     
auto-starting new master                                   
process[master]: started with pid [32712]                  
setting /run_id to c6c6f128-3efb-11ea-bcdf-00e04c680086
process[rosout-1]: started with pid [32725]                
started core service [/rosout]

To verify the topics, we can open up another tab and query them

nv@nvc2:~$ rostopic list                      

nv@nvc2:~$ rostopic info /rosout              
Type: rosgraph_msgs/Log                       
Publishers: None                              
 * /rosout (


2. Integrate the realsense-ros wrapper on the Jetson Nano

source ~/catkin_ws/devel/setup.bash
roslaunch realsense2_camera rs_rgbd.launch



  • RViz for visualizing the frames captured from ROS topics


D. Multiple cameras on Jetson Nano + ROS devices

Ref: https://www.intelrealsense.com/how-to-multiple-camera-setup-with-ros/

  • Consider three devices: one master desktop PC and two Jetson Nanos.
  • All streams are published from the Jetson devices to the desktop PC, which is the master node in this setup.
  • We also have a camera connected directly to the PC, which publishes to the master on localhost.

1. Master (Desktop PC with ROS)

  • Run roscore and configure the master IP that will be discovered by the other devices
rahul@Windspect:~/catkin_ws$ source ~/catkin_ws/devel/setup.bash
rahul@Windspect:~/catkin_ws$ roscore
 * /rosdistro: kinetic
 * /rosversion: 1.12.14
auto-starting new master
process[master]: started with pid [4213]
  • Run a ROS launch on the Desktop PC
$ roslaunch realsense2_camera rs_camera.launch camera:=cam3


2. Jetson Devices (nvc1 or nvc2)

nv@nvc2:~/catkin_ws$ source ~/.bashrc 
$ source ~/catkin_ws/devel/setup.bash
$ roslaunch realsense2_camera rs_camera.launch camera:=cam_nvc2 \
realsense2_camera (nodelet/nodelet)
realsense2_camera_manager (nodelet/nodelet)


process[cam_nvc2/realsense2_camera_manager-1]: started with pid [10666]
process[cam_nvc2/realsense2_camera-2]: started with pid [10667]
[ INFO] [1580172320.682329166]: Initializing nodelet with 4 worker threads.
[ INFO] [1580172321.007386126]: RealSense ROS v2.2.11
[ INFO] [1580172321.007555920]: Running with LibRealSense v2.31.0
[ INFO] [1580172321.550959040]:
 27/01 16:45:22,062 WARNING [547474108800] (types.cpp:49) hwmon command 0x4f failed. Error type: No data to return (-21).
[ INFO] [1580172322.116796378]: Device with serial number 843112070672 was found.

[ INFO] [1580172322.116927526]: Device with physical ID 2-1.4-4 was found.
[ INFO] [1580172322.117483108]: Device with name Intel RealSense D435I was found.
[ WARN] [1580172322.126162156]: Error extracting usb port from device with physical ID: 2-1.4-4
Please report on github issue at https://github.com/IntelRealSense/realsense-ros
[ INFO] [1580172322.178399048]: getParameters...
[ INFO] [1580172325.118570723]: setupDevice...
[ INFO] [1580172325.118670047]: JSON file is not provided
[ INFO] [1580172325.118737809]: ROS Node Namespace: cam_nvc2
[ INFO] [1580172325.118836665]: Device Name: Intel RealSense D435I
[ INFO] [1580172325.118891770]: Device Serial No: 843112070672
[ INFO] [1580172325.118964323]: Device physical port: 2-1.4-4
[ INFO] [1580172325.119104950]: Device FW version:

[ INFO] [1580172325.122388024]: Setting Dynamic reconfig parameters.
[ INFO] [1580172332.420782203]: Done Setting Dynamic reconfig parameters.
[ INFO] [1580172332.474350002]: depth stream is enabled - width: 640, height: 480, fps: 30, Format: Z16
[ INFO] [1580172332.475074336]: infra1 stream is enabled - width: 640, height: 480, fps: 30, Format: Y8
[ INFO] [1580172332.475693826]: infra2 stream is enabled - width: 640, height: 480, fps: 30, Format: Y8
[ INFO] [1580172332.529029433]: color stream is enabled - width: 640, height: 480, fps: 30, Format: RGB8
[ INFO] [1580172332.530210546]: setupPublishers...
[ INFO] [1580172332.590997109]: Expected frequency for depth = 30.00000
[ INFO] [1580172332.651019700]: Expected frequency for infra1 = 30.00000
[ INFO] [1580172332.693332837]: Expected frequency for infra2 = 30.00000
[ INFO] [1580172332.750939191]: Expected frequency for color = 30.00000
[ INFO] [1580172332.898780035]: setupStreams...

27/01 16:45:33,861 WARNING [547205673344] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: No data available, number: 61
[ INFO] [1580172333.887263938]: RealSense Node Is Up!


This is a view from one camera on PC and another on Jetson Nano. All streams are published over ROS to the master on PC





Nvidia-Docker containers for your JupyterLab based Tensorflow-gpu environment with MRCNN example, on Ubuntu 18.04 LTS

This will set up a convenient TensorFlow-based deep learning development environment on NVIDIA cards using nvidia-docker containers. The work can happen in Jupyter Lab.

Github: https://github.com/vishwakarmarhl/dl-lab-docker



  1. Setup Python
  2. Setup Docker
  3. Setup Nvidia-Docker
  4. Create Deep learning container
  5. Run Mask R-CNN example in container
  6. Manage containers using Portainer



1. Setup Python

We need a virtual environment to work in, and for that we use virtualenvwrapper

sudo apt-get install -y build-essential cmake unzip pkg-config ubuntu-restricted-extras git python3-dev python3-pip python3-numpy
sudo apt-get install -y freeglut3 freeglut3-dev libxi-dev libxmu-dev
sudo pip3 install virtualenv virtualenvwrapper

Edit ~/.bashrc file, add the following entry and source it

# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh

Create a virtual environment and install packages to it

mkvirtualenv cv3 -p python3
workon cv3
pip install numpy scipy scikit-image scikit-learn
pip install imutils pyzmq ipython matplotlib imgaug

More on this is available at Compile and Setup OpenCV 3.4.x on Ubuntu 18.04 LTS with Python Virtualenv for Image processing with Ceres, VTK, PCL


2. Setup Docker

Install docker-ce for Ubuntu 18.04, keeping in mind its compatibility with the nvidia-docker installation that comes next.

The repository setup is critical here; follow the instructions in the official installation guide at https://docs.docker.com/install/linux/docker-ce/ubuntu/

Just install docker-ce now

sudo apt-get install docker-ce=5:19.03.2~3-0~ubuntu-bionic 
sudo apt-get install docker-ce-cli=5:19.03.2~3-0~ubuntu-bionic 
sudo apt-get install containerd.io
sudo usermod -aG docker $USER
sudo systemctl enable docker 

Reboot the machine now and run “docker run hello-world” for test


3. Setup Nvidia-Docker

Install nvidia-docker runtime which allows containers to access the GPU hardware. The needed docker version is Docker 19.03 

# Add the package repositories
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker

Test the runtime using nvidia-smi on official containers

# Test nvidia-smi with the latest official CUDA image
docker run --gpus all nvidia/cuda:9.0-base nvidia-smi

You can also configure docker to use the nvidia runtime by default by editing /etc/docker/daemon.json and restarting docker

# Edit the config and verify it looks like below
sudo nano /etc/docker/daemon.json

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

Advanced instructions for moving your docker image storage to a different location

Reference: https://forums.docker.com/t/how-do-i-change-the-docker-image-installation-directory/1169

Ubuntu/Debian: edit your /etc/default/docker file and set the -g option, e.g. DOCKER_OPTS="-g /mnt"

4. Create a Deep Learning Container

Use the Dockerfile with an example in the provided dl-lab-docker repository

git clone https://github.com/vishwakarmarhl/dl-lab-docker.git
cd dl-lab-docker

Now let's build the image locally. Make sure nvidia-docker is installed and the default runtime is nvidia.

docker build -t dl-lab-docker:latest . -f Dockerfile.dl-lab.xenial

I have a prebuilt docker image containing tensorflow-gpu==1.5.0 with CUDA 9.0 and cuDNN 7.0.5, which can be run as below

docker run --gpus all -it --ipc=host -p 8888:8888 \

Finally access the Jupyter Lab page at

This docker image is also available on my Docker Hub as vishwakarmarhl/dl-lab-docker, so instead of building you can pull the image directly from Docker Hub.

Oh, by the way: since this will be a development environment, do not rely on the code provided in the container. Mount your own development folders instead.

docker run -it --ipc=host -v $(pwd):/module/host_workspace \
           -p 8888:8888 vishwakarmarhl/dl-lab-docker:latest



5. Container with Mask R-CNN in Jupyter Lab

The code directory contains a Dockerfile to make it easy to get up and running with TensorFlow via Docker.

Navigate to the mrcnn codebase as below and open up masker.ipynb


Below is an example run of the Mask R-CNN model taken from https://github.com/matterport/Mask_RCNN



6. Manage Containers using Portainer

Since we are dealing with docker containers, the number of containers quickly becomes messy in a development environment. We will manage this with the Portainer interface

docker volume create portainer_data 
docker run -d -p 8000:8000 -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer


The Portainer interface should be available on port 9000 (mapped above)


Finally, you can take a look at the created images in the Docker Hub (dl-lab-docker) repository, which automatically builds from the github Dockerfile




Realsense 435i (Depth & RGB) Multi Camera Setup and OpenCV-Python wrapper (Intel RealSense SDK 2.0 Compiled from Source on Win10)


Multiple RealSense depth cameras appear to be accessible from different source identifiers/locations (especially via DirectShow on Windows). I am not interested in hardware sync, so I will use an enforced timestamp to associate the captures.

Intel RealSense Depth Camera D435i: https://www.intelrealsense.com/depth-camera-d435i

Ref: https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration

Intel RealSense SDK 2.0 code: https://github.com/IntelRealSense/librealsense

I installed the RealSense SDK 2.23.0 version in my Windows 10 environment. All the example projects on GitHub focus on C++, but the Python wrapper examples provide what we need.

Build Python wrapper and tests using CMake & Visual Studio on Windows 10

Build the RealSense python wrapper from source code


Source: https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python

  1. Git clone the librealsense2 source code
  2. Install Visual Studio Community edition for compiling the source
  3. Run CMake Gui “cmake-gui.exe”.
  4. “Browse Source” and configure the path. Example “C:/Dev/librealsense”
  5. Create “build” folder on librealsense path folder. Click “Browse Build” button and configure the path. Example “C:/Dev/librealsense/build”.
  6. Check on “Grouped” and “Advanced” checkbox
  7. Click “Configure” button.
  8. On “Specify the generator for this project”, Select “Visual Studio 16 2019”
  9. Then click “Finish” button.
  10. After configuration is done, on “BUILD” node, check “BUILD_PYTHON_BINDINGS” checkbox button.
  11. Then click “Configure” button again.
  12. In “Ungrouped Entries”, identify variable “PYTHON_EXECUTABLE”, and change the path to anaconda python.exe in your virtual environment path as
    “C:/Python3/python.exe” or “C:/ProgramData/Anaconda3/envs/cv3/python.exe”
  13. Then click “Generate” button, after all is done, click “Open Project”.
  14. Alternately, open Visual Studio IDE, select the librealsense2.sln solution and build all in Release or Debug mode.
  15. The libraries are built and available in folder c:/Dev/librealsense/build/Release if you built in Release mode
    1.  cp pybackend2.cp35-win_amd64.pyd pybackend2.pyd
    2.  cp pyrealsense2.cp35-win_amd64.pyd pyrealsense2.pyd
  16. Copy the libraries to the relevant environment
    1. copy “pybackend2.pyd”, “pyrealsense2.pyd” and “realsense2.dll” to “DLLs” folder. Example “C:/ProgramData/Anaconda3/envs/cv3/DLLs”
    2. copy “realsense2.lib” to “libs” folder. Example “C:/ProgramData/Anaconda3/envs/cv3/libs”
  17. Test the python bindings and realsense libraries
    1. Activate your Anaconda environment. Example: conda activate cv3
    2. Test the python opencv RGB & Depth visualization code and see the results as per the image below

Test Example: Opencv RGB & Depth viewer

cd C:/Dev/librealsense/wrappers/python/examples
python opencv_viewer_example.py


Now let's write a test code for a multiple camera setup

Test 1: enlist all the devices

import pyrealsense2 as rs
import numpy as np
import cv2

DS5_product_ids = ["0AD1", "0AD2", "0AD3", "0AD4", "0AD5", "0AF6", "0AFE", "0AFF", "0B00", "0B01", "0B03", "0B07", "0B3A"]

def find_device_that_supports_advanced_mode() :
    ctx = rs.context()
    devices = ctx.query_devices()
    print("D: ", devices)
    devs = []
    for dev in devices:
        if dev.supports(rs.camera_info.product_id) and str(dev.get_info(rs.camera_info.product_id)) in DS5_product_ids:
            if dev.supports(rs.camera_info.name):
                print("Found device that supports advanced mode:", dev.get_info(rs.camera_info.name), " -> ", dev)
                devs.append(dev)  # collect the matching device
    if not devs:
        raise Exception("No device that supports advanced mode was found")
    return devs

devs = find_device_that_supports_advanced_mode()


(cv3) λ python senserealtest.py
D:  <pyrealsense2.device_list object at 0x00000237B1C3B688>
Found device that supports advanced mode: Intel RealSense D435I  ->  <pyrealsense2.device: Intel RealSense D435I (S/N: 843112070672)>
Found device that supports advanced mode: Intel RealSense D435I  ->  <pyrealsense2.device: Intel RealSense D435I (S/N: 843112070632)>
Found device that supports advanced mode: Intel RealSense D435I  ->  <pyrealsense2.device: Intel RealSense D435I (S/N: 843112070689)>

Test 2: Stream images from all the devices and display them

Use the device manager helper class from the box_dimensioner_multicam example project. Also look at the fix provided for an issue observed in the device manager class.

import pyrealsense2 as rs
import numpy as np
import cv2

from realsense_device_manager import DeviceManager

def visualise_measurements(frames_devices):
    """
    Visualise the color image from each of the devices.

    frames_devices : dict
        The frames from the different devices
        keys: str
            Serial number of the device
        values: [frame]
            frame: rs.frame()
                The frameset obtained over the active pipeline from the realsense device
    """
    for (device, frame) in frames_devices.items():
        color_image = np.asarray(frame[rs.stream.color].get_data())
        text_str = device
        cv2.putText(color_image, text_str, (50, 50), cv2.FONT_HERSHEY_PLAIN, 2, (0, 255, 0))
        # Visualise the results
        window_title = 'Color image from RealSense Device Nr: ' + device
        cv2.imshow(window_title, color_image)
        cv2.waitKey(1)

# Define some constants
resolution_width = 1280 # pixels
resolution_height = 720 # pixels
frame_rate = 15  # fps
dispose_frames_for_stablisation = 30  # frames

try:
    # Enable the streams from all the intel realsense devices
    rs_config = rs.config()
    rs_config.enable_stream(rs.stream.depth, resolution_width, resolution_height, rs.format.z16, frame_rate)
    rs_config.enable_stream(rs.stream.infrared, 1, resolution_width, resolution_height, rs.format.y8, frame_rate)
    rs_config.enable_stream(rs.stream.color, resolution_width, resolution_height, rs.format.bgr8, frame_rate)

    # Use the device manager class to enable the devices and get the frames
    device_manager = DeviceManager(rs.context(), rs_config)
    device_manager.enable_all_devices()

    # Allow some frames for the auto-exposure controller to stabilise
    for _ in range(dispose_frames_for_stablisation):
        device_manager.poll_frames()

    while True:
        frames = device_manager.poll_frames()
        visualise_measurements(frames)

except KeyboardInterrupt:
    print("The program was interrupted by the user. Closing the program...")



This will show you multiple windows, one for each camera connected to the system
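As noted at the top of this post, without hardware sync the captures can be associated by an enforced timestamp. Below is a minimal nearest-timestamp matcher; the helpers, serial numbers, and millisecond timestamps are illustrative, not part of the RealSense SDK:

```python
def nearest_frame(frames, ts, tolerance_ms):
    """Return the (timestamp, frame) pair closest to ts, or None if outside tolerance."""
    best = min(frames, key=lambda f: abs(f[0] - ts))
    return best if abs(best[0] - ts) <= tolerance_ms else None

def associate(captures, tolerance_ms=50):
    """Group frames from multiple cameras by nearest timestamp.

    captures: dict mapping camera serial -> list of (timestamp_ms, frame).
    The first serial (sorted) acts as the reference camera; each of its frames
    yields one group mapping serial -> frame for every camera whose closest
    frame lies within tolerance_ms.
    """
    serials = sorted(captures)
    ref, others = serials[0], serials[1:]
    groups = []
    for ts, frame in captures[ref]:
        group = {ref: frame}
        for s in others:
            match = nearest_frame(captures[s], ts, tolerance_ms)
            if match is not None:
                group[s] = match[1]
        groups.append(group)
    return groups

# Illustrative serial numbers and millisecond timestamps
caps = {
    "843112070672": [(1000, "color_a0"), (1066, "color_a1")],
    "843112070632": [(1010, "color_b0"), (1070, "color_b1")],
}
print(associate(caps))  # one group per reference-camera frame
```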


That should be good for today !!!

References can be found at the following places:





Raspberry PI 3 B+ as an Access point/bridge on a local wireless network

A Raspberry PI 3 B+ based setup is expected here. This is detailed in a previous blog.

0. Create a PI based Network Access point using static host (pi@navx.local)

The main reference describing how to create a DHCP-server-based WiFi bridge is at http://ardupilot.org/dev/docs/making-a-mavlink-wifi-bridge-using-the-raspberry-pi.html


Hostname & Setup: two Raspberry Pis. One is the access point; the other connects to it for communication over wireless LAN.

navx.local -> pi@ 
-> This Raspberry PI is the WiFi access-point/hotspot host

nava.local -> pi@ 
-> This Raspberry PI is configured to connect to the above access-point

A. Setup or configure WiFi SSID on Raspberry (pi@nava.local)

Link: https://www.raspberrypi.org/documentation/configuration/wireless/wireless-cli.md

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Connect the Raspberry Pi to the WiFi access point by editing the wpa_supplicant file with the new SSID and password.

Alternatively, you can use raspi-config.

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf
sudo ifdown wlan0
sudo ifup wlan0
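For reference, the network entry in wpa_supplicant.conf has this shape (the SSID and passphrase below are placeholders, not values from this setup):

```
network={
    ssid="YourNetworkSSID"
    psk="YourPassphrase"
}
```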

B. Change the hostname using raspi-config or using /etc/hosts & /etc/hostname file  (pi@nava.local & pi@navx.local)

Install avahi with the following commands on all the Raspberry Pi’s: https://www.howtogeek.com/167190/how-and-why-to-assign-the-.local-domain-to-your-raspberry-pi/

sudo apt-get install avahi-daemon
# Update boot startup for avahi-daemon
sudo insserv avahi-daemon
sudo update-rc.d avahi-daemon defaults

Install Bonjour on windows for access, discovery and then configure IPv6 on raspberry PI’s

# Enable IPv6 on RPi 
sudo modprobe ipv6

Add an ipv6 entry on a new line in the /etc/modules file

# Apply the new configuration with:
sudo /etc/init.d/avahi-daemon restart

The Raspberry Pis should now be addressable from other machines as navx.local and nava.local,

ssh pi@navx.local
ssh pi@nava.local


C. Stop the static IP and connect back to WiFi on wlan0 (pi@navx.local)

You may have been connected to another WiFi network with internet access before moving to this configuration. A quick way to connect back to that WiFi is below.

Disable and stop hostapd and the DHCP server, then reboot

sudo update-rc.d hostapd disable
sudo update-rc.d isc-dhcp-server disable
sudo service hostapd stop
sudo service isc-dhcp-server stop

Move the previous network config to the current setup

cp /etc/network/interfaces.backup /etc/network/interfaces

Revert the interfaces file on RPi Raspbian by copying the contents of the backup to /etc/network/interfaces. The backed-up content is shown below.

(cv3) pi@navx:~ $ cat interface.dyninternet.backup
# interfaces(5) file used by ifup(8) and ifdown(8)
# Please note that this file is written to be used with dhcpcd
# For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'

# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d

auto lo
iface lo inet loopback

iface eth0 inet manual

allow-hotplug intwifi0
iface intwifi0 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

allow-hotplug wlan0
iface wlan0 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

allow-hotplug wlan1
iface wlan1 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

D. Start the static IP configured DHCP server for the access point (pi@navx.local)

Start hostapd and the DHCP server, and enable the services

sudo service hostapd start
sudo service isc-dhcp-server start

sudo update-rc.d hostapd enable
sudo update-rc.d isc-dhcp-server enable

sudo reboot

Scan wlan0 for the SSID "NavxStation"

sudo iwlist wlan0 scanning essid NavxStation

Connect from the Raspberry Pi (pi@nava.local) as described in section A.


E. Provide a fixed IP address on the host DHCP server (pi@navx.local)


Find the MAC address of the client with ifconfig:

ifconfig wlan0

The "HWaddr" or "ether" value is the MAC address, for example "c7:35:ce:fd:8e:a1".

Edit the /etc/dhcp/dhcpd.conf file and add the following towards the end for a fixed assignment.

host machine1_nava {
  hardware ethernet XX:XX:XX:XX:XX:XX;
  fixed-address XXX.XXX.XXX.XXX;
}

Check the currently leased connections

cat /var/lib/dhcp/dhcpd.leases

Also, you can verify connected devices using

sudo iw dev wlan0 station dump 
sudo arp
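If you want this check scripted, a small stdlib-only Python sketch (a hypothetical helper, not part of the original setup) can pull the IP/MAC pairs out of the dhcpd.leases file:

```python
import re

def parse_leases(text):
    """Return a list of (ip, mac) pairs from dhcpd.leases content."""
    pairs = []
    # Each lease block starts with 'lease <ip> {' and may contain
    # a 'hardware ethernet <mac>;' line inside the braces
    for block in re.finditer(r'lease\s+([\d.]+)\s*\{(.*?)\}', text, re.S):
        ip, body = block.group(1), block.group(2)
        mac = re.search(r'hardware ethernet\s+([0-9a-fA-F:]+);', body)
        if mac:
            pairs.append((ip, mac.group(1)))
    return pairs
```

Point it at /var/lib/dhcp/dhcpd.leases on the access point to list the active assignments.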




Brokerless ZeroMq to share live image/video data stream from Raspberry PI B+ (Uses OpenCV)

Make sure you have the following:

  1. OpenCV 3.x compiled and installed on the Raspberry Pi
  2. Raspberry Pi B+ with Raspbian and a USB webcam attached


Read through ZeroMQ in 100 words for a brief description





Now let's go through the simple code I wrote for publishing and subscribing to a live webcam stream from a Raspberry Pi to a workstation.


Make sure you have the dependencies and imports as below

import os, sys, datetime
import json, base64

import cv2
import zmq
import numpy as np
import imutils
from imutils.video import FPS


""" Publish """

  • In this piece of code we are creating a ZeroMQ context, preparing to send data to 'tcp://localhost:5555'
  • The OpenCV camera API is used to capture a frame from the device at 640×480
  • The FPS module from imutils is used to estimate the frame rate of this capture
  • The byte buffer read from the webcam is base64-encoded and sent as a string over the ZeroMQ TCP socket connection
  • Each buffer is sent out on the TCP socket in a loop
def pubVideo(config):
    context = zmq.Context()
    footage_socket = context.socket(zmq.PUB)
    # 'tcp://localhost:5555'
    ip = ""
    port = 5555
    target_address = "tcp://{}:{}".format(ip, port)
    print("Publish Video to ", target_address)
    footage_socket.connect(target_address)
    impath = 0 # For the first USB camera attached
    camera = cv2.VideoCapture(impath)  # init the camera
    camera.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    print("Start Time: ", datetime.datetime.now())
    fps = FPS().start()
    while True:
        try:
            buffer = capture(config, camera)
            if not isinstance(buffer, (list, tuple, np.ndarray)):
                continue  # skip failed captures
            buffer_encoded = base64.b64encode(buffer)
            footage_socket.send_string(buffer_encoded.decode('ascii'))
            # Update the FPS counter
            fps.update()
        except KeyboardInterrupt:
            # stop the timer and display FPS information
            fps.stop()
            print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
            print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))
            print("\n\nBye bye\n")
            break
    print("End Time: ", datetime.datetime.now())


""" Subscribe """

  • The ZeroMQ subscriber is listening on 'tcp://*:5555'
  • As each string is received, it is decoded and converted to an image using OpenCV
  • We use OpenCV to visualize this frame in a window
  • Every frame sent over the ZeroMQ TCP socket is visualized and appears as a live video stream
def subVideo(config):
    context = zmq.Context()
    footage_socket = context.socket(zmq.SUB)
    port = 5555
    bind_address = "tcp://*:{}".format(port) # 'tcp://*:5555'
    print("Subscribe Video at ", bind_address)
    footage_socket.bind(bind_address)
    footage_socket.setsockopt_string(zmq.SUBSCRIBE, str(''))
    while True:
        try:
            frame = footage_socket.recv_string()
            img = base64.b64decode(frame)
            npimg = np.frombuffer(img, dtype=np.uint8)
            source = cv2.imdecode(npimg, 1)
            cv2.imshow("image", source)
            cv2.waitKey(1)  # needed for imshow to refresh
        except KeyboardInterrupt:
            print("\n\nBye bye\n")
            break
    cv2.destroyAllWindows()

Github: https://github.com/vishwakarmarhl/cnaviz/blob/master/imcol/CvZmq.py



In the GitHub code above, you can run the test as follows:

PUB: Webcam source machine

python CvZmq.py pub --pubip= --pubport=5555

SUB: Target visualization machine

python CvZmq.py sub --subport=5555


Sensor data using Dronekit from Navio2 and Raspberry Pi B+ running ArduPilot copter stack

Before doing anything here, go through the setup below and start the ArduCopter service after the Navio2 setup.

Setup -> Navio2 with Raspberry Pi 3 B+ for the Ardupilot flight controller setup


Setup Dronekit-Python

– We will go through DroneKit; however, another option is pymavlink (https://www.ardusub.com/developers/pymavlink.html)
Hello Drone


– From source, in Python 3.5 with dronekit 2.9.1: http://python.dronekit.io/contributing/developer_setup_windows.html
git clone https://github.com/dronekit/dronekit-python.git

cd dronekit-python && python setup.py build && python setup.py install
pip install dronekit==2.9.1 future==0.15.2 monotonic==1.2 pymavlink==2.0.6
Successfully installed dronekit-2.9.1 future-0.15.2 monotonic-1.2 pymavlink-2.0.6

Ardupilot configuration

cd dronekit-python\examples\vehicle_state

Edit TELEM variables in PI@/etc/default/arducopter


  • Listen as a TCP server on 14550 (Runs only one client connection at a time)

Edit the /etc/default/arducopter file

TELEM1="-A tcp:"

Run on the client (the IP address of the Raspberry Pi goes in the CLI)

python vehicle_state.py --connect tcp:


  • Broadcast to dedicated Client (Like Mission Planner or Mavproxy)

Edit the /etc/default/arducopter file (Configure IP of the Mission Planner/GCS client)

TELEM1="-A udp:"

Run on client

(cv2) λ python vehicle_state.py --connect udp:



(cv3) λ python NavDrn.py

Get some vehicle attribute values:
Autopilot Firmware version: APM:Copter-3.5.5
Major version number: 3
Minor version number: 5
Patch version number: 5
Release type: rc
Release version: 0
Stable release?: True
Autopilot capabilities
Supports MISSION_FLOAT message type: True
Supports PARAM_FLOAT message type: True
Supports MISSION_INT message type: True
Supports COMMAND_INT message type: True
Supports PARAM_UNION message type: False
Supports ftp for file transfers: False
Supports commanding attitude offboard: True
Supports commanding position and velocity targets in local NED frame: True
Supports set position + velocity targets in global scaled integers: True
Supports terrain protocol / data handling: True
Supports direct actuator control: False
Supports the flight termination command: True
Supports mission_float message type: True
Supports onboard compass calibration: True
Global Location: LocationGlobal:lat=0.0,lon=0.0,alt=17.03
Global Location (relative altitude): LocationGlobalRelative:lat=0.0,lon=0.0,alt=17.03
Local Location: LocationLocal:north=None,east=None,down=None
Attitude: Attitude:pitch=0.02147647738456726,yaw=2.0874133110046387,roll=-0.12089607864618301
Velocity: [0.0, 0.0, -0.02]
GPS: GPSInfo:fix=1,num_sat=0
Gimbal status: Gimbal: pitch=None, roll=None, yaw=None
Battery: Battery:voltage=0.0,current=None,level=None
EKF OK?: False
Last Heartbeat: 0.6720000000204891
Rangefinder: Rangefinder: distance=None, voltage=None
Rangefinder distance: None
Rangefinder voltage: None
Heading: 119
Is Armable?: False
System status: CRITICAL
Groundspeed: 0.005300579592585564
Airspeed: 0.0
Armed: False
For simulation, use Dronekit-SITL with Mission Planner.

Source code: NavDrn.py
# Import DroneKit-Python (https://github.com/dronekit/dronekit-python)
from dronekit import connect, VehicleMode
import json 
import data.log as logger


"""
References:
    Dronekit: http://python.dronekit.io/guide/quick_start.html
    Dronekit.Vehicle: http://python.dronekit.io/automodule.html#dronekit.Vehicle

Derived from:
"""

""" Class for DroneKit related operations """
class NavDrn():

    def __init__(self):
        self.log = logger.getNewFileLogger(__name__,"sens.log")

    def printVehicleInfo(self):
        # Get some vehicle attributes (state)
        print("\nGet some vehicle attribute values:")
        print(" Autopilot Firmware version: %s" %  self.vehicle.version)
        print("   Major version number: %s" %  self.vehicle.version.major)
        print("   Minor version number: %s" %  self.vehicle.version.minor)
        print("   Patch version number: %s" %  self.vehicle.version.patch)
        print("   Release type: %s" %  self.vehicle.version.release_type())
        print("   Release version: %s" %  self.vehicle.version.release_version())
        print("   Stable release?: %s" %  self.vehicle.version.is_stable())
        print(" Autopilot capabilities")
        print("   Supports MISSION_FLOAT message type: %s" %  self.vehicle.capabilities.mission_float)
        print("   Supports PARAM_FLOAT message type: %s" %  self.vehicle.capabilities.param_float)
        print("   Supports MISSION_INT message type: %s" %  self.vehicle.capabilities.mission_int)
        print("   Supports COMMAND_INT message type: %s" %  self.vehicle.capabilities.command_int)
        print("   Supports PARAM_UNION message type: %s" %  self.vehicle.capabilities.param_union)
        print("   Supports ftp for file transfers: %s" %  self.vehicle.capabilities.ftp)
        print("   Supports commanding attitude offboard: %s" %  self.vehicle.capabilities.set_attitude_target)
        print("   Supports commanding position and velocity targets in local NED frame: %s" %  self.vehicle.capabilities.set_attitude_target_local_ned)
        print("   Supports set position + velocity targets in global scaled integers: %s" %  self.vehicle.capabilities.set_altitude_target_global_int)
        print("   Supports terrain protocol / data handling: %s" %  self.vehicle.capabilities.terrain)
        print("   Supports direct actuator control: %s" %  self.vehicle.capabilities.set_actuator_target)
        print("   Supports the flight termination command: %s" %  self.vehicle.capabilities.flight_termination)
        print("   Supports mission_float message type: %s" %  self.vehicle.capabilities.mission_float)
        print("   Supports onboard compass calibration: %s" %  self.vehicle.capabilities.compass_calibration)
        print(" Global Location: %s" % self.vehicle.location.global_frame)
        print(" Global Location (relative altitude): %s" % self.vehicle.location.global_relative_frame)
        print(" Local Location: %s" % self.vehicle.location.local_frame)
        print(" Attitude: %s" % self.vehicle.attitude)
        print(" Velocity: %s" % self.vehicle.velocity)
        print(" GPS: %s" % self.vehicle.gps_0)
        print(" Gimbal status: %s" % self.vehicle.gimbal)
        print(" Battery: %s" % self.vehicle.battery)
        print(" EKF OK?: %s" % self.vehicle.ekf_ok)
        print(" Last Heartbeat: %s" % self.vehicle.last_heartbeat)
        print(" Rangefinder: %s" % self.vehicle.rangefinder)
        print(" Rangefinder distance: %s" % self.vehicle.rangefinder.distance)
        print(" Rangefinder voltage: %s" % self.vehicle.rangefinder.voltage)
        print(" Heading: %s" % self.vehicle.heading)
        print(" Is Armable?: %s" % self.vehicle.is_armable)
        print(" System status: %s" % self.vehicle.system_status.state)
        print(" Groundspeed: %s" % self.vehicle.groundspeed)    # settable
        print(" Airspeed: %s" % self.vehicle.airspeed)    # settable
        print(" Mode: %s" % self.vehicle.mode.name)    # settable
        print(" Armed: %s" % self.vehicle.armed)    # settable
        print(" Channel values from RC Tx:", self.vehicle.channels)
        print(" Home location: %s" % self.vehicle.home_location)    # settable
        print(" ----- ")

    """ Initialize the TCP connection to the vehicle """
    def init(self, connectionString="tcp:localhost:14550"):
        except Exception as e:
            self.log.error("{0}\n Retry ".format(str(e)))

    """ Connect to the Vehicle """
    def connectVehicle(self, connectionString="tcp:localhost:14550"):
        self.log.info("Connecting to vehicle on: %s" % (connectionString,))
        self.vehicle = connect(connectionString, wait_ready=True)
        return self.vehicle

    """ Close connection to the Vehicle """

    def closeVehicle(self):
        self.log.info("Closed connection to vehicle")

if __name__ == "__main__":
    navDrn = NavDrn()
    # Establish connection with the Raspberry Pi with Navio2 hat on
    connectionString = "tcp:"
    navDrn.init(connectionString)
    while True:
        try:
            # Print the data
            navDrn.printVehicleInfo()
        except KeyboardInterrupt:
            print("\n Bye Bye")
            break
    # Close vehicle object
    navDrn.closeVehicle()

Navio2 with Raspberry Pi 3 B+ for the Ardupilot flight controller setup

Load the Raspberry Pi image provided by Emlid, which has ROS and ArduPilot pre-installed.

Controller Setup

Component/Part Name                        Documentation/Link           Description
NAVIO2 Kit                                 Ardupilot Navio2 Overview    Sensor HAT for Pi
CanaKit Raspberry Pi 3 B+                  Pi & Navio2 Setup            Compute for flight
DJI F330 Flamewheel (or similar ARF Kit)   Copter Assembly guide        Frames, Motors, ESCs, Propellers
Radio Controller (Transmitter)             Review of the RC products    RC Transmitter
ELP USB FHD01M-L36 Camera                  ELP USB Webcam               2MP Camera



(cv2) pi@nava:~/workspace/cnaviz/imcol $ ps -eaf | grep ardu
root 1909 1 0 16:36 ? 00:00:00 /bin/sh -c /usr/bin/arducopter $TELEM1 $TELEM2
root 1910 1909 15 16:36 ? 00:15:48 /usr/bin/arducopter -A udp:


Set up a Python 2 environment and clone the Navio2 repository

sudo apt-get install build-essential libi2c-dev i2c-tools python-dev libffi-dev
mkvirtualenv cv2 -p python2
pip install smbus-cffi
git clone https://github.com/emlid/Navio2.git
cd Navio2

Run the tests:

(cv2) pi@nava:~/Navio2/Python $ emlidtool test
2018-08-20 19:03:23 nava root[2337] INFO mpu9250: Passed
2018-08-20 19:03:23 nava root[2337] INFO adc: Passed
2018-08-20 19:03:23 nava root[2337] INFO rcio_status_alive: Passed
2018-08-20 19:03:23 nava root[2337] INFO lsm9ds1: Passed
2018-08-20 19:03:23 nava root[2337] INFO gps: Passed
2018-08-20 19:03:23 nava root[2337] INFO ms5611: Passed
2018-08-20 19:03:23 nava root[2337] INFO pwm: Passed
2018-08-20 19:03:23 nava root[2337] INFO rcio_firmware: Passed
ArduPilot should be stopped while running the Navio2 tests:
sudo systemctl stop arducopter
(cv2) pi@nava:~/Navio2/Python $ python Barometer.py
Temperature(C): 39.384754 Pressure(millibar): 1010.329778
Temperature(C): 39.333014 Pressure(millibar): 1010.368464
(cv2) pi@nava:~/Navio2/Python $ python AccelGyroMag.py -i mpu
Selected: MPU9250
Connection established: True
Acc: -2.442 +9.428 +0.958 Gyr: -0.030 +0.011 -0.010 Mag: -3489.829 +30.680 +0.000
Acc: -2.504 +9.596 +1.063 Gyr: -0.023 +0.004 -0.012 Mag: -55.946 +6.677 +31.255
Acc: -2.346 +9.495 +0.924 Gyr: -0.023 +0.007 -0.007 Mag: -57.394 +5.955 +31.255
Acc: -2.370 +9.567 +1.020 Gyr: -0.030 +0.006 -0.014 Mag: -55.765 +6.497 +30.731
(cv2) pi@nava:~/Navio2/Python $ python GPS.py
Longitude=0 Latitude=0 height=0 hMSL=-17000 hAcc=4294967295 vAcc=4082849024
Longitude=0 Latitude=0 height=0 hMSL=-17000 hAcc=4294967295 vAcc=4083043328
(cv2) pi@nava:~/Navio2/Python $ python ADC.py
A0: 5.0100V A1: 0.0440V A2: 0.0160V A3: 0.0160V A4: 0.0180V A5: 0.0220V
A0: 5.0370V A1: 0.0440V A2: 0.0180V A3: 0.0140V A4: 0.0160V A5: 0.0240V
A0: 5.0370V A1: 0.0440V A2: 0.0160V A3: 0.0140V A4: 0.0160V A5: 0.0240V
(cv2) pi@nava:~/Navio2/Python $ sudo python LED.py
LED is yellow
LED is green
LED is cyan
LED is blue
LED is magenta
LED is red
LED is yellow
LED is green
LED is cyan