Realsense 435i (Depth & RGB) Multi Camera Setup and OpenCV-Python wrapper (Intel RealSense SDK 2.0 Compiled from Source on Win10)


Multiple RealSense depth cameras appear under different source identifiers/locations (especially via DirectShow on Windows). I am not interested in hardware sync, so I will use an enforced host timestamp to associate the captures across devices.
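The enforced-timestamp idea can be sketched in plain Python: stamp each capture with the host clock, then pair frames across cameras by nearest timestamp within a tolerance. This is a minimal illustration with hypothetical capture tuples, not RealSense API calls:

```python
def nearest_pair(captures_a, captures_b, tolerance=0.02):
    """Pair each (timestamp, frame) from camera A with the closest capture
    from camera B, dropping pairs further apart than `tolerance` seconds."""
    pairs = []
    for ts_a, frame_a in captures_a:
        # Find the capture from camera B closest in time to this one
        ts_b, frame_b = min(captures_b, key=lambda c: abs(c[0] - ts_a))
        if abs(ts_b - ts_a) <= tolerance:
            pairs.append((frame_a, frame_b))
    return pairs

# Hypothetical captures: (host_timestamp_seconds, frame_id)
cam_a = [(0.000, "a0"), (0.066, "a1"), (0.133, "a2")]
cam_b = [(0.002, "b0"), (0.070, "b1"), (0.200, "b2")]
print(nearest_pair(cam_a, cam_b))  # → [('a0', 'b0'), ('a1', 'b1')]
```

The third frame from camera A is dropped because no camera-B capture falls within the 20 ms tolerance; tighten or loosen the tolerance based on your frame rate.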

Intel RealSense Depth Camera D435i:


Intel RealSense SDK 2.0 code

I installed RealSense SDK version 2.23.0 on my Windows 10 environment. Most of the example projects on GitHub focus on C++; however, the Python wrapper ships examples that provide the functionality we need.

Build Python wrapper and tests using CMake & Visual Studio on Windows 10

Build the RealSense python wrapper from source code



  1. Git clone the librealsense2 source code
  2. Install Visual Studio Community edition for compiling the source
  3. Run CMake Gui “cmake-gui.exe”.
  4. “Browse Source” and configure the path. Example “C:/Dev/librealsense”
  5. Create “build” folder on librealsense path folder. Click “Browse Build” button and configure the path. Example “C:/Dev/librealsense/build”.
  6. Check on “Grouped” and “Advanced” checkbox
  7. Click “Configure” button.
  8. On “Specify the generator for this project”, Select “Visual Studio 16 2019”
  9. Then click “Finish” button.
  10. After configuration is done, on “BUILD” node, check “BUILD_PYTHON_BINDINGS” checkbox button.
  11. Then click “Configure” button again.
  12. In “Ungrouped Entries”, find the variable “PYTHON_EXECUTABLE” and change its path to the python.exe of your Anaconda virtual environment, e.g.
    “C:/Python3/python.exe” or “C:/ProgramData/Anaconda3/envs/cv3/python.exe”
  13. Then click “Generate” button, after all is done, click “Open Project”.
  14. Alternately, open Visual Studio IDE, select the librealsense2.sln solution and build all in Release or Debug mode.
  15. The libraries are built and available in folder c:/Dev/librealsense/build/Release if you built in Release mode
    1.  cp pybackend2.cp35-win_amd64.pyd pybackend2.pyd
    2.  cp pyrealsense2.cp35-win_amd64.pyd pyrealsense2.pyd
  16. Copy the libraries to the relevant environment
    1. copy “pybackend2.pyd”, “pyrealsense2.pyd” and “realsense2.dll” to “DLLs” folder. Example “C:/ProgramData/Anaconda3/envs/cv3/DLLs”
    2. copy “realsense2.lib” to “libs” folder. Example “C:/ProgramData/Anaconda3/envs/cv3/libs”
  17. Test the python bindings and realsense libraries
    1. Activate your Anaconda environment. Example: conda activate cv3
    2. Test the python opencv RGB & Depth visualization code and see the results as per the image below
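Before running the viewer, a quick stdlib-only sanity check can confirm that the active interpreter matches the `PYTHON_EXECUTABLE` you configured in CMake and that the copied `.pyd` files are importable:

```python
import sys
import importlib.util

# Should print the python.exe you pointed CMake at (e.g. the cv3 env)
print("Interpreter:", sys.executable)

# Check whether the copied bindings are on this interpreter's path
for module in ("pyrealsense2", "pybackend2"):
    spec = importlib.util.find_spec(module)
    if spec is not None:
        print(module, "-> found at", spec.origin)
    else:
        print(module, "-> NOT found; re-check the DLLs folder copy step")
```

If either module reports NOT found, the `.pyd` files were likely copied into a different environment's `DLLs` folder than the one currently active.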

Test Example: OpenCV RGB & Depth viewer

cd C:/Dev/librealsense/wrappers/python/examples
python opencv_viewer_example.py


Now let's write test code for a multiple camera setup.

Test 1: enlist all the devices

import pyrealsense2 as rs
import numpy as np
import cv2

# Product IDs of the D400-series devices that support advanced mode
DS5_product_ids = ["0AD1", "0AD2", "0AD3", "0AD4", "0AD5", "0AF6", "0AFE", "0AFF", "0B00", "0B01", "0B03", "0B07", "0B3A"]

def find_device_that_supports_advanced_mode():
    ctx = rs.context()
    devices = ctx.query_devices()
    print("D: ", devices)
    devs = []
    for dev in devices:
        if dev.supports(rs.camera_info.product_id) and str(dev.get_info(rs.camera_info.product_id)) in DS5_product_ids:
            if dev.supports(
                print("Found device that supports advanced mode:", dev.get_info(, " -> ", dev)
    if not devs:
        raise Exception("No device that supports advanced mode was found")
    return devs

devs = find_device_that_supports_advanced_mode()


(cv3) λ python
D:  <pyrealsense2.device_list object at 0x00000237B1C3B688>
Found device that supports advanced mode: Intel RealSense D435I  ->  <pyrealsense2.device: Intel RealSense D435I (S/N: 843112070672)>
Found device that supports advanced mode: Intel RealSense D435I  ->  <pyrealsense2.device: Intel RealSense D435I (S/N: 843112070632)>
Found device that supports advanced mode: Intel RealSense D435I  ->  <pyrealsense2.device: Intel RealSense D435I (S/N: 843112070689)>

Test 2: Stream images from all the devices and show

Use the device manager helper class from the box_dimensioner_multicam example project. Also look at the fix provided for an issue observed in the device manager class.

import pyrealsense2 as rs
import numpy as np
import cv2

from realsense_device_manager import DeviceManager

def visualise_measurements(frames_devices):
    Show the color image captured from each of the connected devices.

    frames_devices : dict
    	The frames from the different devices
    	keys: str
    		Serial number of the device
    	values: [frame]
    		frame: rs.frame()
    			The frameset obtained over the active pipeline from the realsense device
    for (device, frame) in frames_devices.items():
        color_image = np.asarray(frame[rs.stream.color].get_data())
        text_str = device
        cv2.putText(color_image, text_str, (50, 50), cv2.FONT_HERSHEY_PLAIN, 2, (0, 255, 0))
        # Visualise the results, one window per device
        text_str = 'Color image from RealSense Device Nr: ' + device
        cv2.imshow(text_str, color_image)
        cv2.waitKey(1)

# Define some constants
resolution_width = 1280 # pixels
resolution_height = 720 # pixels
frame_rate = 15  # fps
dispose_frames_for_stablisation = 30  # frames

    # Enable the streams from all the intel realsense devices
    rs_config = rs.config()
    rs_config.enable_stream(rs.stream.depth, resolution_width, resolution_height, rs.format.z16, frame_rate)
    rs_config.enable_stream(rs.stream.infrared, 1, resolution_width, resolution_height, rs.format.y8, frame_rate)
    rs_config.enable_stream(rs.stream.color, resolution_width, resolution_height, rs.format.bgr8, frame_rate)

    # Use the device manager class to enable the devices and get the frames
    device_manager = DeviceManager(rs.context(), rs_config)

    # Allow some frames for the auto-exposure controller to stabilise
    for _ in range(dispose_frames_for_stablisation):
        frames = device_manager.poll_frames()

    while True:
        frames = device_manager.poll_frames()

except KeyboardInterrupt:
    print("The program was interrupted by the user. Closing the program...")

    cv2.destroyAllWindows()



This will show you multiple windows, one for each camera connected to the system.
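The windows above show the color stream; raw 16-bit depth frames need scaling into 8-bit before they display well. A NumPy-only sketch of the usual normalization (the `depth_scale` and `max_dist_m` values are illustrative; real code should query the device's scale via `depth_sensor.get_depth_scale()` and pass the result to `cv2.applyColorMap`/`cv2.imshow`):

```python
import numpy as np

def depth_to_uint8(depth_frame, depth_scale=0.001, max_dist_m=4.0):
    """Scale a 16-bit depth image (in sensor units) into 0-255 for display.
    depth_scale converts sensor units to metres; distances beyond
    max_dist_m are clamped so nearby structure keeps most of the range."""
    meters = depth_frame.astype(np.float32) * depth_scale
    clipped = np.clip(meters, 0.0, max_dist_m)
    return (clipped / max_dist_m * 255).astype(np.uint8)

# Fake 2x2 "depth" frame in millimetre units (0 m, 1 m, 2 m, 8 m)
fake_depth = np.array([[0, 1000], [2000, 8000]], dtype=np.uint16)
print(depth_to_uint8(fake_depth))
```

The 8 m reading clamps to 255 rather than washing out the scale for the closer pixels.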


That should be good for today !!!


Brokerless ZeroMQ to share a live image/video data stream from Raspberry Pi B+ (uses OpenCV)

Make sure you have the following:

  1. OpenCV 3.x compiled and installed on the Raspberry Pi
  2. Raspberry Pi B+ with Raspbian and a USB webcam attached


Read through ZeroMQ in 100 words for a brief description





Now let's go through the simple code I wrote for publishing and subscribing to a live webcam stream from a Raspberry Pi to a workstation.


Make sure you have the dependencies and imports as below

import os, sys, datetime
import json, base64

import cv2
import zmq
import numpy as np
import imutils
from imutils.video import FPS


""" Publish """

  • In this piece of code we create a ZeroMQ context and prepare to send data to 'tcp://localhost:5555'
  • The OpenCV camera API is used to capture a frame from the device at 640×480
  • The FPS module from imutils is used to estimate the frame rate of this capture
  • The byte buffer read from the webcam is base64-encoded and sent as a string over the ZeroMQ TCP socket connection
  • Continue to send each buffer out on the TCP socket
def pubVideo(config):
    context = zmq.Context()
    footage_socket = context.socket(zmq.PUB)
    ip = "localhost"  # the subscriber machine's IP; passed as --pubip in the repo code
    port = 5555
    target_address = "tcp://{}:{}".format(ip, port)  # 'tcp://localhost:5555'
    print("Publish Video to ", target_address)
    impath = 0  # For the first USB camera attached
    camera = cv2.VideoCapture(impath)  # init the camera
    camera.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    print("Start Time: ",
    fps = FPS().start()
    while True:
            # Grab a frame and JPEG-encode it into a byte buffer
            grabbed, frame =
            if not grabbed:
            ret, buffer = cv2.imencode('.jpg', frame)
            if not isinstance(buffer, (list, tuple, np.ndarray)):
            buffer_encoded = base64.b64encode(buffer)
            # Update the FPS counter
        except KeyboardInterrupt:
            # stop the timer and display FPS information
            print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
            print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))
            print("\n\nBye bye\n")
    print("End Time: ",


""" Subscribe """

  • The ZeroMQ subscriber is listening on 'tcp://*:5555'
  • As each string is received, it is base64-decoded and converted to an image using OpenCV
  • We use OpenCV to visualize this frame in a window
  • Every frame sent over the ZeroMQ TCP socket is visualized and appears as a live video stream
def subVideo(config):
    context = zmq.Context()
    footage_socket = context.socket(zmq.SUB)
    port = 5555
    bind_address = "tcp://*:{}".format(port)  # 'tcp://*:5555'
    print("Subscribe Video at ", bind_address)
    footage_socket.setsockopt_string(zmq.SUBSCRIBE, '')
    while True:
            frame = footage_socket.recv_string()
            img = base64.b64decode(frame)
            # np.frombuffer replaces the deprecated np.fromstring
            npimg = np.frombuffer(img, dtype=np.uint8)
            source = cv2.imdecode(npimg, 1)
            cv2.imshow("image", source)
        except KeyboardInterrupt:
            print("\n\nBye bye\n")
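The publisher/subscriber pair relies on a simple wire format: JPEG bytes, base64-encoded into an ASCII string for `send_string`/`recv_string`. The round trip can be verified without a camera or a socket; the `fake_jpeg` bytes below stand in for a real encoded frame:

```python
import base64

# Stand-in for a JPEG buffer (real frames come from cv2.imencode)
fake_jpeg = b"\xff\xd8\xff\xe0 pretend this is a JPEG frame \xff\xd9"

# Publisher side: bytes -> ASCII-safe string suitable for send_string
wire = base64.b64encode(fake_jpeg).decode("ascii")
assert wire.isascii()

# Subscriber side: string -> original bytes, ready for cv2.imdecode
recovered = base64.b64decode(wire)
assert recovered == fake_jpeg
print("round trip ok,", len(wire), "chars on the wire")
```

Base64 inflates the payload by about a third; for higher throughput you could use `send`/`recv` with raw bytes instead, at the cost of the human-readable string interface.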




Using the GitHub code above, you can run the test as follows:

PUB: Webcam source machine

python pub --pubip= --pubport=5555

SUB: Target visualization machine

python sub --subport=5555


Compile and Setup OpenCV 3.4.x on Ubuntu 18.04 LTS with Python Virtualenv for Image processing with Ceres, VTK, PCL

OpenCV: Open Source Computer Vision Library



OpenCV Source:


A. Setup an external HDD/SSD for this setup


B. Environment (Ubuntu 18.04 LTS)


Python3 setup

Install the needed packages in a Python virtualenv. Refer to the similar Windows Anaconda setup, or look at the Ubuntu-based info here.

sudo apt-get install -y build-essential cmake unzip pkg-config 
sudo apt-get install -y ubuntu-restricted-extras
sudo apt-get install -y python3-dev python3-numpy
sudo apt-get install -y git python3-pip virtualenv
sudo pip3 install virtualenv
rahul@karma:~$ virtualenv -p /usr/bin/python3 cv3
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /home/rahul/cv3/bin/python3
Also creating executable in /home/rahul/cv3/bin/python
Installing setuptools, pkg_resources, pip, wheel...

Activate and Deactivate the python Environment

rahul@karma:~$ source ~/cv3/bin/activate
(cv3) rahul@karma:~$ python
Python 3.6.5 (default, Apr 1 2018, 05:46:30) 
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> print("Test",4*5)
Test 20
>>> exit()
(cv3) rahul@karma:~$ deactivate

Alternatively, a great way to use virtualenv is to use Virtualenvwrappers

sudo pip3 install virtualenv virtualenvwrapper

Add these to your ~/.bashrc file

# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/

Now, run “source ~/.bashrc” to set the environment

Create a Virtual environment
rahul@karma:~$ mkvirtualenv cv3 -p python3
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /home/rahul/.virtualenvs/cv3/bin/python3
Also creating executable in /home/rahul/.virtualenvs/cv3/bin/python
Installing setuptools, pkg_resources, pip, wheel...done.
virtualenvwrapper.user_scripts creating /home/rahul/.virtualenvs/cv3/bin/predeactivate
virtualenvwrapper.user_scripts creating /home/rahul/.virtualenvs/cv3/bin/postdeactivate
virtualenvwrapper.user_scripts creating /home/rahul/.virtualenvs/cv3/bin/preactivate
virtualenvwrapper.user_scripts creating /home/rahul/.virtualenvs/cv3/bin/postactivate
virtualenvwrapper.user_scripts creating /home/rahul/.virtualenvs/cv3/bin/get_env_details
(cv3) rahul@karma:~$
Activate/Deactivate virtual env
rahul@karma:~$ workon cv3
(cv3) rahul@karma:~$ deactivate 

Install basic packages for the computer vision work.

(cv3) rahul@karma: pip install numpy scipy scikit-image scikit-learn  
pip install imutils pyzmq ipython matplotlib
pip install dronekit==2.9.1 future==0.15.2 monotonic==1.2 pymavlink==2.0.6

Java installation from this blog

sudo add-apt-repository ppa:linuxuprising/java
sudo apt update
sudo apt install oracle-java10-installer
sudo apt install oracle-java10-set-default
sudo apt-get install ant
Packages needed for OpenCV and others
GTK support for GUI features, camera support (libv4l), media support (ffmpeg, gstreamer), etc. Additional packages for image formats, mostly downloaded from the ubuntu-restricted-extras repository.


sudo apt-get install -y libjpeg-dev libpng-dev libtiff-dev ffmpeg
sudo apt-get install -y libjpeg8-dev libjasper-dev libpng12-dev libtiff5-dev
sudo apt-get install -y libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install -y libxvidcore-dev libx264-dev libvorbis-dev
sudo apt-get install -y libgtk2.0-dev libgtk-3-dev ccache imagemagick
sudo apt-get install -y liblept5 leptonica-progs libleptonica-dev
sudo apt-get install -y qt5-default libgtk2.0-dev libtbb-dev
sudo apt-get install -y libatlas-base-dev gfortran libblas-dev liblapack-dev 
sudo apt-get install -y libdvd-pkg libgstreamer-plugins-base1.0-dev
sudo apt-get install -y libmp3lame-dev libtheora-dev
sudo apt-get install -y libxine2-dev libv4l-dev x264 v4l-utils
sudo apt-get install -y libopencore-amrnb-dev libopencore-amrwb-dev

# Optional dependencies
sudo apt-get install -y libprotobuf-dev protobuf-compiler
sudo apt-get install -y libgoogle-glog-dev libgflags-dev
sudo apt-get install -y libgphoto2-dev libeigen3-dev libhdf5-dev doxygen


VTK for SFM Modules

SFM setup:

sudo apt-get install libxt-dev libglew-dev libsuitesparse-dev
sudo apt-get install tk8.5 tcl8.5 tcl8.5-dev tcl-dev


# However, if you want to build Ceres as a *shared* library, 
# You must, add the following PPA:
sudo add-apt-repository ppa:bzindovic/suitesparse-bugfix-1319687
sudo apt-get update
sudo apt-get install libsuitesparse-dev
git clone
cd ceres-solver
mkdir build && cd build
export CXXFLAGS="-std=c++11" 
cmake ..
make -j4
make test
sudo make install


sudo apt-get install libblas-dev libblas-doc liblapacke-dev liblapack-doc


VTK Setup,

Configure and build with QT support

git clone git:// VTK
cd VTK
mkdir VTK-build
cd VTK-build
cmake -DVTK_Group_Qt:BOOL=ON ..
make -j4
sudo make install
$ cp -r ~/cv/VTK/VTK-build/lib/python3.6/site-packages/* ~/.virtualenvs/cv3/lib/python3.6/site-packages/
$ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib:/usr/local/lib"
$ sudo ldconfig

cd flann-1.8.4-src/ && mkdir build && cd build
cmake ..
make -j4 
sudo make install



sudo apt-get install -y libusb-1.0-0-dev libusb-dev libudev-dev
sudo apt-get install -y mpi-default-dev openmpi-bin openmpi-common 
sudo apt-get install -y libboost-all-dev libpcap-dev
sudo apt-get install -y libqhull* libgtest-dev
sudo apt-get install -y freeglut3-dev pkg-config
sudo apt-get install -y libxmu-dev libxi-dev
sudo apt-get install -y mono-complete
sudo apt-get install -y openjdk-8-jdk openjdk-8-jre
git clone 
cd pcl && mkdir build && cd build 
CXXFLAGS="-std=gnu++11" cmake -DBUILD_apps=ON \
 -DBUILD_apps_point_cloud_editor=ON \
 -DBUILD_apps_cloud_composer=ON \
 -DBUILD_apps_modeler=ON \
 -DBUILD_apps_3d_rec_framework=ON \
 -DBUILD_examples=ON ..
make -j8 
sudo make install

Official OpenCV installation

wget -O
wget -O

In case of Raspberry PI 3 B+, this blog worked for me.



Configure OpenCV with CMake

$ cd ~/cv/opencv-3.4.2 && mkdir build && cd build
$ cmake -D BUILD_opencv_java=OFF \
  -D OPENCV_EXTRA_MODULES_PATH=~/cv/opencv_contrib-3.4.2/modules \
  -D PYTHON_EXECUTABLE=~/.virtualenvs/cv3/bin/python \
  ..

Make sure the Python 3 interpreter and other dependencies are configured correctly.

Compiling with CUDA (Setup instructions)

$ cmake -D WITH_CUDA=ON \
  -D OPENCV_EXTRA_MODULES_PATH=~/cv/opencv_contrib-3.4.2/modules \
  -D PYTHON_EXECUTABLE=~/.virtualenvs/cv3/bin/python \
  ..


Compile, Install and Verify
(cv3) rahul@karma:~/cv/opencv-3.4.2/build$ make -j4
$ sudo make install
$ sudo sh -c 'echo "/usr/local/lib" >> /etc/'
$ sudo ldconfig
$ pkg-config --modversion opencv
Setup the cv shared libraries
(cv3) rahul@karma$ ls -l /usr/local/lib/python3.6/site-packages
total 5172
-rw-r--r-- 1 root staff 5292240 Jul 12 13:32
# or use the find command 
$ find /usr/local/lib/ -type f -name "cv2*.so"
$ cd /usr/local/lib/python3.6/site-packages/
$ mv
$ cd ~/.virtualenvs/cv3/lib/python3.6/site-packages/
$ ln -s /usr/local/lib/python3.6/site-packages/

C. Test

(cv3) rahul@karma:~$ python
Python 3.6.5 (default, Apr 1 2018, 05:46:30) 
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information
>>> import cv2
>>> cv2.__version__
'3.4.2'
>>> exit()
(cv3) rahul@karma:~$