Raspberry Pi 3 B+ as an access point/bridge on a local wireless network

A Raspberry Pi 3 B+ based setup is assumed here; it is detailed in a previous blog post.

0. Create a Pi-based network access point using a static host (pi@navx.local)

The major reference describing how to create a DHCP-server-based WiFi bridge is available here: http://ardupilot.org/dev/docs/making-a-mavlink-wifi-bridge-using-the-raspberry-pi.html


Hostname & setup: two Raspberry Pi’s. One is the access point and the other connects to it for communication over the wireless LAN.

navx.local -> pi@192.168.42.1 
-> This Raspberry PI is the WiFi access-point/hotspot host

nava.local -> pi@192.168.42.13 
-> This Raspberry PI is configured to connect to the above access-point

A. Set up or configure the WiFi SSID on the Raspberry Pi (pi@nava.local)

Link: https://www.raspberrypi.org/documentation/configuration/wireless/wireless-cli.md

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Connect the Raspberry Pi to the WiFi access point by editing the wpa_supplicant file with the new SSID and password.

Alternatively, you can use raspi-config.

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf
sudo ifdown wlan0
sudo ifup wlan0
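
For reference, the network block appended to wpa_supplicant.conf looks like the following (the SSID matches this setup's access point; the passphrase is a placeholder):

network={
    ssid="NavxStation"
    psk="<your-passphrase>"
}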

B. Change the hostname using raspi-config or the /etc/hosts & /etc/hostname files (pi@nava.local & pi@navx.local)

Install avahi with the following commands on all the Raspberry Pi’s: https://www.howtogeek.com/167190/how-and-why-to-assign-the-.local-domain-to-your-raspberry-pi/

sudo apt-get install avahi-daemon
# Update boot startup for avahi-daemon
sudo insserv avahi-daemon
sudo update-rc.d avahi-daemon defaults

Install Bonjour on Windows for access and discovery, then configure IPv6 on the Raspberry Pi’s:
https://peterlaszlo.wordpress.com/2013/06/27/bonjour-avahi-rpi-windows/

# Enable IPv6 on RPi 
sudo modprobe ipv6

Add an ipv6 entry on a new line in the /etc/modules file so the module loads at boot.
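
One way to append it non-interactively (assumes the entry is not already present):

echo "ipv6" | sudo tee -a /etc/modules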

# Apply the new configuration with:
sudo /etc/init.d/avahi-daemon restart

The Raspberry Pi’s should now be addressable from other machines as navx.local and nava.local:

ssh pi@navx.local
ssh pi@nava.local
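
A quick check from another machine on the LAN (assuming an mDNS resolver such as avahi or Bonjour is running there):

ping navx.local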

 

C. Stop the static IP and connect back to WiFi on wlan0 (pi@navx.local)

If you were connected to another WiFi network with internet access before moving to this configuration, a quick way to connect back to that network is below.

Disable and stop hostapd and the DHCP server:

sudo update-rc.d hostapd disable
sudo update-rc.d isc-dhcp-server disable
sudo service hostapd stop
sudo service isc-dhcp-server stop

Restore the previous network config:

sudo cp /etc/network/interfaces.backup /etc/network/interfaces

Revert the interfaces file on Raspbian by copying the backup content into /etc/network/interfaces. The backed-up content is shown below.

(cv3) pi@navx:~ $ cat interface.dyninternet.backup
# interfaces(5) file used by ifup(8) and ifdown(8)
# Please note that this file is written to be used with dhcpcd
# For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'

# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d

auto lo
iface lo inet loopback

iface eth0 inet manual

allow-hotplug intwifi0
iface intwifi0 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

allow-hotplug wlan0
iface wlan0 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

allow-hotplug wlan1
iface wlan1 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

D. Start the static IP configured DHCP server for the access point (pi@navx.local)

Start hostapd and the DHCP server, enable both services, and reboot:

sudo service hostapd start
sudo service isc-dhcp-server start

sudo update-rc.d hostapd enable
sudo update-rc.d isc-dhcp-server enable

sudo reboot

Scan wlan0 for the SSID “NavxStation”:

sudo iwlist wlan0 scanning essid NavxStation

Connect from the other Raspberry Pi (pi@nava.local) as described in section A.

 

E. Provide fixed IP address on the host dhcp server (pi@navx.local)

https://blog.monotok.org/setup-raspberry-pi-dhcp-server/

The “HWaddr” or “ether” value in the ifconfig output is the MAC address; in this example, say “c7:35:ce:fd:8e:a1”.

ifconfig wlan0

Edit the /etc/dhcp/dhcpd.conf file and add the following towards the end for a fixed assignment:

host machine1_nava {
  hardware ethernet XX:XX:XX:XX:XX:XX;
  fixed-address 192.168.42.13;
}
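
Restart the DHCP server so the fixed assignment takes effect:

sudo service isc-dhcp-server restart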

Check the currently leased connections

cat /var/lib/dhcp/dhcpd.leases

Also, you can verify connected devices using

sudo iw dev wlan0 station dump 
sudo arp

 

 

 


Brokerless ZeroMq to share live image/video data stream from Raspberry PI B+ (Uses OpenCV)

Make sure you have the following:

  1. OpenCV 3.x compiled and installed on the Raspberry Pi
  2. Raspberry Pi B+ with Raspbian and a USB webcam attached

ZeroMQ

Read through ZeroMQ in 100 words for a brief description


Installation: http://zeromq.org/bindings:python
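
For the Python binding used below, installation via pip should work (imutils is also needed by this post's code):

pip install pyzmq imutils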

 

Code

Now let’s go through the simple code I wrote for publishing and subscribing to a live webcam stream from a Raspberry Pi to a workstation.

 

Make sure you have the dependencies and imports as below

import os, sys, datetime
import json, base64

import cv2
import zmq
import numpy as np
import imutils
from imutils.video import FPS

 

""" Publish """

  • In this piece of code we create a ZeroMQ context and prepare to send data to ‘tcp://localhost:5555’
  • The OpenCV camera API is used to capture a frame from the device at 640×480
  • The FPS module from imutils is used to estimate the frame rate of this capture
  • The byte buffer read from the webcam is base64-encoded and sent as a string over the ZeroMQ TCP socket
  • Each captured buffer is sent out on the TCP socket in a loop

def pubVideo(config):
    context = zmq.Context()
    footage_socket = context.socket(zmq.PUB)
    # 'tcp://localhost:5555'
    ip = "127.0.0.1"
    port = 5555
    target_address = "tcp://{}:{}".format(ip, port) 
    print("Publish Video to ", target_address)
    footage_socket.connect(target_address)
    impath = 0 # For the first USB camera attached
    camera = cv2.VideoCapture(impath)  # init the camera
    camera.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    print("Start Time: ", datetime.datetime.now())
    fps = FPS().start()
    while True:
        try:
            buffer = capture(config, camera)
            if not isinstance(buffer, (list, tuple, np.ndarray)):
                break
            buffer_encoded = base64.b64encode(buffer)
            footage_socket.send_string(buffer_encoded.decode('ascii'))
            # Update the FPS counter
            fps.update()
            cv2.waitKey(1)
        except KeyboardInterrupt:
            # stop the timer and display FPS information
            fps.stop()
            print("[INFO] elasped time: {:.2f}".format(fps.elapsed()))
            print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))
            camera.release()
            cv2.destroyAllWindows()
            print("\n\nBye bye\n")
            break
    print("End Time: ", datetime.datetime.now())

 

""" Subscribe """

  • The ZeroMQ subscriber listens on ‘tcp://*:5555’
  • As each string is received, it is decoded and converted back to an image using OpenCV
  • OpenCV is used to visualize the frame in a window
  • Every frame sent over the ZeroMQ TCP socket is visualized, appearing as a live video stream

def subVideo(config):
    context = zmq.Context()
    footage_socket = context.socket(zmq.SUB)
    port = 5555
    bind_address = "tcp://*:{}".format(port) # 'tcp://*:5555'
    print("Subscribe Video at ", bind_address)
    footage_socket.bind(bind_address)
    footage_socket.setsockopt_string(zmq.SUBSCRIBE, str(''))
    while True:
        try:
            frame = footage_socket.recv_string()
            img = base64.b64decode(frame)
            npimg = np.frombuffer(img, dtype=np.uint8)  # np.fromstring is deprecated
            source = cv2.imdecode(npimg, 1)
            cv2.imshow("image", source)
            cv2.waitKey(1)
        except KeyboardInterrupt:
            cv2.destroyAllWindows()
            print("\n\nBye bye\n")
            break

Github: https://github.com/vishwakarmarhl/cnaviz/blob/master/imcol/CvZmq.py

 

Run

In the GitHub code above, you can run the test as follows.

PUB: Webcam source machine

python CvZmq.py pub --pubip=127.0.0.1 --pubport=5555

SUB: Target visualization machine

python CvZmq.py sub --subport=5555
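
When the two sides run on different machines, the SUB side binds and the PUB side connects, so point --pubip at the subscriber's address (the address below is hypothetical):

python CvZmq.py pub --pubip=192.168.42.13 --pubport=5555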

 

Sensor data using Dronekit from Navio2 and Raspberry Pi B+ running ArduPilot copter stack

Before doing anything here, go through the setup below and start the ArduCopter service after completing the Navio2 setup.

Setup -> Navio2 with Raspberry Pi 3 B+ for the Ardupilot flight controller setup


Setup Dronekit-Python

– We will go through DroneKit; however, another option is pymavlink (https://www.ardusub.com/developers/pymavlink.html)
Hello Drone
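
A minimal “hello drone” sketch (the connection string is an assumption; substitute the Pi's address and the TELEM port configured below):

from dronekit import connect

# Connect to the vehicle over TCP and wait for parameters to download
vehicle = connect("tcp:172.31.254.100:14550", wait_ready=True)
print("Firmware: %s" % vehicle.version)
print("Mode: %s" % vehicle.mode.name)
vehicle.close()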

Install

– From source, with Python 3.5 and dronekit 2.9.1: http://python.dronekit.io/contributing/developer_setup_windows.html
git clone https://github.com/dronekit/dronekit-python.git

cd dronekit-python && python setup.py build && python setup.py install
Alternatively,
pip install dronekit==2.9.1 future==0.15.2 monotonic==1.2 pymavlink==2.0.6
Successfully installed dronekit-2.9.1 future-0.15.2 monotonic-1.2 pymavlink-2.0.6

Ardupilot configuration

cd dronekit-python\examples\vehicle_state

Edit the TELEM variables in /etc/default/arducopter on the Pi.

 

  • Listen as a TCP server on 14550 (runs only one client connection at a time)

Edit the /etc/default/arducopter file

TELEM1="-A tcp:0.0.0.0:14550"

Run on the client (the Raspberry Pi’s IP address goes on the CLI):

python vehicle_state.py --connect tcp:172.31.254.100:14550

 

  • Broadcast to a dedicated client (like Mission Planner or MAVProxy)

Edit the /etc/default/arducopter file (configure the IP of the Mission Planner/GCS client):

TELEM1="-A udp:172.31.254.15:14550"

Run on the client:

(cv2) λ python vehicle_state.py --connect udp:0.0.0.0:14550

 

Result

(cv3) λ python NavDrn.py

Get some vehicle attribute values:
Autopilot Firmware version: APM:Copter-3.5.5
Major version number: 3
Minor version number: 5
Patch version number: 5
Release type: rc
Release version: 0
Stable release?: True
Autopilot capabilities
Supports MISSION_FLOAT message type: True
Supports PARAM_FLOAT message type: True
Supports MISSION_INT message type: True
Supports COMMAND_INT message type: True
Supports PARAM_UNION message type: False
Supports ftp for file transfers: False
Supports commanding attitude offboard: True
Supports commanding position and velocity targets in local NED frame: True
Supports set position + velocity targets in global scaled integers: True
Supports terrain protocol / data handling: True
Supports direct actuator control: False
Supports the flight termination command: True
Supports mission_float message type: True
Supports onboard compass calibration: True
Global Location: LocationGlobal:lat=0.0,lon=0.0,alt=17.03
Global Location (relative altitude): LocationGlobalRelative:lat=0.0,lon=0.0,alt=17.03
Local Location: LocationLocal:north=None,east=None,down=None
Attitude: Attitude:pitch=0.02147647738456726,yaw=2.0874133110046387,roll=-0.12089607864618301
Velocity: [0.0, 0.0, -0.02]
GPS: GPSInfo:fix=1,num_sat=0
Gimbal status: Gimbal: pitch=None, roll=None, yaw=None
Battery: Battery:voltage=0.0,current=None,level=None
EKF OK?: False
Last Heartbeat: 0.6720000000204891
Rangefinder: Rangefinder: distance=None, voltage=None
Rangefinder distance: None
Rangefinder voltage: None
Heading: 119
Is Armable?: False
System status: CRITICAL
Groundspeed: 0.005300579592585564
Airspeed: 0.0
Mode: STABILIZE
Armed: False
For simulation, use DroneKit-SITL with Mission Planner.
Source code: NavDrn.py
# Import DroneKit-Python (https://github.com/dronekit/dronekit-python)
from dronekit import connect, VehicleMode
import json 
import data.log as logger


"""

Docs:
    Dronekit: http://python.dronekit.io/guide/quick_start.html
    Dronekit.Vehicle: http://python.dronekit.io/automodule.html#dronekit.Vehicle

Derived from:
    http://python.dronekit.io/examples/vehicle_state.html
"""

""" Class for DroneKit related operations """
class NavDrn():

    def __init__(self):
        self.log = logger.getNewFileLogger(__name__,"sens.log")


    def printVehicleInfo(self):
        self.vehicle.wait_ready(True)
        # Get some vehicle attributes (state)
        print("\nGet some vehicle attribute values:")
        print(" Autopilot Firmware version: %s" %  self.vehicle.version)
        print("   Major version number: %s" %  self.vehicle.version.major)
        print("   Minor version number: %s" %  self.vehicle.version.minor)
        print("   Patch version number: %s" %  self.vehicle.version.patch)
        print("   Release type: %s" %  self.vehicle.version.release_type())
        print("   Release version: %s" %  self.vehicle.version.release_version())
        print("   Stable release?: %s" %  self.vehicle.version.is_stable())
        print(" Autopilot capabilities")
        print("   Supports MISSION_FLOAT message type: %s" %  self.vehicle.capabilities.mission_float)
        print("   Supports PARAM_FLOAT message type: %s" %  self.vehicle.capabilities.param_float)
        print("   Supports MISSION_INT message type: %s" %  self.vehicle.capabilities.mission_int)
        print("   Supports COMMAND_INT message type: %s" %  self.vehicle.capabilities.command_int)
        print("   Supports PARAM_UNION message type: %s" %  self.vehicle.capabilities.param_union)
        print("   Supports ftp for file transfers: %s" %  self.vehicle.capabilities.ftp)
        print("   Supports commanding attitude offboard: %s" %  self.vehicle.capabilities.set_attitude_target)
        print("   Supports commanding position and velocity targets in local NED frame: %s" %  self.vehicle.capabilities.set_attitude_target_local_ned)
        print("   Supports set position + velocity targets in global scaled integers: %s" %  self.vehicle.capabilities.set_altitude_target_global_int)
        print("   Supports terrain protocol / data handling: %s" %  self.vehicle.capabilities.terrain)
        print("   Supports direct actuator control: %s" %  self.vehicle.capabilities.set_actuator_target)
        print("   Supports the flight termination command: %s" %  self.vehicle.capabilities.flight_termination)
        print("   Supports mission_float message type: %s" %  self.vehicle.capabilities.mission_float)
        print("   Supports onboard compass calibration: %s" %  self.vehicle.capabilities.compass_calibration)
        print(" Global Location: %s" % self.vehicle.location.global_frame)
        print(" Global Location (relative altitude): %s" % self.vehicle.location.global_relative_frame)
        print(" Local Location: %s" % self.vehicle.location.local_frame)
        print(" Attitude: %s" % self.vehicle.attitude)
        print(" Velocity: %s" % self.vehicle.velocity)
        print(" GPS: %s" % self.vehicle.gps_0)
        print(" Gimbal status: %s" % self.vehicle.gimbal)
        print(" Battery: %s" % self.vehicle.battery)
        print(" EKF OK?: %s" % self.vehicle.ekf_ok)
        print(" Last Heartbeat: %s" % self.vehicle.last_heartbeat)
        print(" Rangefinder: %s" % self.vehicle.rangefinder)
        print(" Rangefinder distance: %s" % self.vehicle.rangefinder.distance)
        print(" Rangefinder voltage: %s" % self.vehicle.rangefinder.voltage)
        print(" Heading: %s" % self.vehicle.heading)
        print(" Is Armable?: %s" % self.vehicle.is_armable)
        print(" System status: %s" % self.vehicle.system_status.state)
        print(" Groundspeed: %s" % self.vehicle.groundspeed)    # settable
        print(" Airspeed: %s" % self.vehicle.airspeed)    # settable
        print(" Mode: %s" % self.vehicle.mode.name)    # settable
        print(" Armed: %s" % self.vehicle.armed)    # settable
        print(" Channel values from RC Tx:", self.vehicle.channels)
        print(" Home location: %s" % self.vehicle.home_location)    # settable
        print(" ----- ")

    """ Initialize the TCP connection to the vehicle """
    def init(self, connectionString="tcp:localhost:14550"):
        try:
            self.connectVehicle(connectionString)
        except Exception as e:
            self.log.error("{0}\n Retry ".format(str(e)))
            self.connectVehicle(connectionString)

    """ Connect to the Vehicle """
    def connectVehicle(self, connectionString="tcp:localhost:14550"):
        self.log.info("Connecting to vehicle on: %s" % (connectionString,))
        self.vehicle = connect(connectionString, wait_ready=True)
        return self.vehicle

    """ Close connection to the Vehicle """

    def closeVehicle(self):
        self.vehicle.close()
        self.log.info("Closed connection to vehicle")

if __name__ == "__main__":
    navDrn = NavDrn()
    # Establish connection with the Raspberry Pi with Navio2 hat on
    connectionString = "tcp:172.31.254.100:14550"
    navDrn.init(connectionString)
    while True:
        try:
            # Print the data
            navDrn.printVehicleInfo()
        except KeyboardInterrupt:
            print("\n Bye Bye")
            break
    # Close vehicle object
    navDrn.closeVehicle()

Navio2 with Raspberry Pi 3 B+ for the Ardupilot flight controller setup

Load the Raspberry Pi Image provided by Emlid which has ROS and ardupilot pre-installed.

Controller Setup

Component/Part Name                      | Documentation/Link        | Description
NAVIO2 Kit                               | Ardupilot Navio2 Overview | Sensor HAT for Pi
CanaKit Raspberry Pi 3 B+                | Pi & Navio2 Setup         | Compute for flight
DJI F330 Flamewheel (or similar ARF Kit) | Copter Assembly guide     | Frames, Motors, ESCs, Propellers
Radio Controller (Transmitter)           | Review of the RC products | RC Transmitter
ELP USB FHD01M-L36 Camera                | ELP USB Webcam            | 2MP


Ardupilot

Verify that the arducopter process is running:
(cv2) pi@nava:~/workspace/cnaviz/imcol $ ps -eaf | grep ardu
root 1909 1 0 16:36 ? 00:00:00 /bin/sh -c /usr/bin/arducopter $TELEM1 $TELEM2
root 1910 1909 15 16:36 ? 00:15:48 /usr/bin/arducopter -A udp:172.31.254.175:14550

Examples

Set up a Python 2 environment and clone the Navio2 repository:

sudo apt-get install build-essential libi2c-dev i2c-tools python-dev libffi-dev
mkvirtualenv cv2 -p python2
pip install smbus-cffi
git clone https://github.com/emlid/Navio2.git
cd Navio2
Run tests
(cv2) pi@nava:~/Navio2/Python $ emlidtool test
2018-08-20 19:03:23 nava root[2337] INFO mpu9250: Passed
2018-08-20 19:03:23 nava root[2337] INFO adc: Passed
2018-08-20 19:03:23 nava root[2337] INFO rcio_status_alive: Passed
2018-08-20 19:03:23 nava root[2337] INFO lsm9ds1: Passed
2018-08-20 19:03:23 nava root[2337] INFO gps: Passed
2018-08-20 19:03:23 nava root[2337] INFO ms5611: Passed
2018-08-20 19:03:23 nava root[2337] INFO pwm: Passed
2018-08-20 19:03:23 nava root[2337] INFO rcio_firmware: Passed
ArduPilot should be stopped while running the Navio2 tests:
sudo systemctl stop arducopter
Barometer
(cv2) pi@nava:~/Navio2/Python $ python Barometer.py
Temperature(C): 39.384754 Pressure(millibar): 1010.329778
Temperature(C): 39.333014 Pressure(millibar): 1010.368464
Accelerometer
(cv2) pi@nava:~/Navio2/Python $ python AccelGyroMag.py -i mpu
Selected: MPU9250
Connection established: True
Acc: -2.442 +9.428 +0.958 Gyr: -0.030 +0.011 -0.010 Mag: -3489.829 +30.680 +0.000
Acc: -2.504 +9.596 +1.063 Gyr: -0.023 +0.004 -0.012 Mag: -55.946 +6.677 +31.255
Acc: -2.346 +9.495 +0.924 Gyr: -0.023 +0.007 -0.007 Mag: -57.394 +5.955 +31.255
Acc: -2.370 +9.567 +1.020 Gyr: -0.030 +0.006 -0.014 Mag: -55.765 +6.497 +30.731
GPS
(cv2) pi@nava:~/Navio2/Python $ python GPS.py
gpsFix=0
Longitude=0 Latitude=0 height=0 hMSL=-17000 hAcc=4294967295 vAcc=4082849024
gpsFix=0
Longitude=0 Latitude=0 height=0 hMSL=-17000 hAcc=4294967295 vAcc=4083043328
ADC
(cv2) pi@nava:~/Navio2/Python $ python ADC.py
A0: 5.0100V A1: 0.0440V A2: 0.0160V A3: 0.0160V A4: 0.0180V A5: 0.0220V
A0: 5.0370V A1: 0.0440V A2: 0.0180V A3: 0.0140V A4: 0.0160V A5: 0.0240V
A0: 5.0370V A1: 0.0440V A2: 0.0160V A3: 0.0140V A4: 0.0160V A5: 0.0240V
LED
(cv2) pi@nava:~/Navio2/Python $ sudo python LED.py
LED is yellow
LED is green
LED is cyan
LED is blue
LED is magenta
LED is red
LED is yellow
LED is green
LED is cyan

 

Tensorflow-GPU setup with cuDNN and NVIDIA CUDA 9.0 on Ubuntu 18.04 LTS

Prerequisite: CUDA should be installed on a machine with an NVIDIA graphics card.

 

CUDA Setup

The driver and CUDA toolkit installation is described in a previous blogpost.

There is a slight change here, since the TensorFlow setup requires CUDA toolkit 9.0.

# Clean CUDA 9.1 and install 9.0
$ sudo /usr/local/cuda/bin/uninstall_cuda_9.1.pl 
$ sudo rm -rf /usr/local/cuda-9.1
$ sudo ./cuda_9.0.176_384.81_linux.run --override

# Make sure environment variables are set for test
$ source ~/.bashrc 
$ sudo ln -s /usr/bin/gcc-6 /usr/local/cuda/bin/gcc
$ sudo ln -s /usr/bin/g++-6 /usr/local/cuda/bin/g++
$ cd ~/NVIDIA_CUDA-9.0_Samples/
$ make -j12
$ ./deviceQuery

Test Successful

cuDNN Setup

Referenced from a Medium blogpost.

The following steps are pretty much the same as the installation guide using .deb files (strange that the cuDNN guide is better than the CUDA one).


  1. Go to the cuDNN download page (need registration) and select the latest cuDNN 7.1.* version made for CUDA 9.0.
  2. Download all 3 .deb files: the runtime library, the developer library, and the code samples library for Ubuntu 16.04.
  3. In your download folder, install them in the same order:
# (the runtime library)
$ sudo dpkg -i libcudnn7_7.1.4.18-1+cuda9.0_amd64.deb
# (the developer library)
$ sudo dpkg -i libcudnn7-dev_7.1.4.18-1+cuda9.0_amd64.deb
# (the code samples)
$ sudo dpkg -i libcudnn7-doc_7.1.4.18-1+cuda9.0_amd64.deb

# remove 
$ sudo dpkg -r libcudnn7-doc libcudnn7-dev libcudnn7

Now, we can verify the cuDNN installation (below is just the official guide, which surprisingly works out of the box):

  1. Copy the code samples somewhere you have write access: cp -r /usr/src/cudnn_samples_v7/ ~/
  2. Go to the MNIST example code: cd ~/cudnn_samples_v7/mnistCUDNN.
  3. Compile the MNIST example: make clean && make -j4
  4. Run the MNIST example: ./mnistCUDNN. If your installation is successful, you should see Test passed! at the end of the output.
(cv3) rahul@Windspect:~/cv/cudnn_samples_v7/mnistCUDNN$ ./mnistCUDNN
cudnnGetVersion() : 7104 , CUDNN_VERSION from cudnn.h : 7104 (7.1.4)
Host compiler version : GCC 5.4.0
There are 2 CUDA capable devices on your machine :
device 0 : sms 28  Capabilities 6.1, SmClock 1582.0 Mhz, MemSize (Mb) 11172, MemClock 5505.0 Mhz, Ecc=0, boardGroupID=0
device 1 : sms 28  Capabilities 6.1, SmClock 1582.0 Mhz, MemSize (Mb) 11163, MemClock 5505.0 Mhz, Ecc=0, boardGroupID=1
Using device 0

...

Result of classification: 1 3 5
Test passed!

In case of compilation error

Error

/usr/local/cuda/include/cuda_runtime_api.h:1683:101: error: use of enum ‘cudaDeviceP2PAttr’ without previous declaration
extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaDeviceGetP2PAttribute(int *value, enum cudaDeviceP2PAttr attr, int srcDevice, int dstDevice);
/usr/local/cuda/include/cuda_runtime_api.h:2930:102: error: use of enum ‘cudaFuncAttribute’ without previous declaration
 extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaFuncSetAttribute(const void *func, enum cudaFuncAttribute attr, int value);
                                                                                                      ^
In file included from /usr/local/cuda/include/channel_descriptor.h:62:0,
                 from /usr/local/cuda/include/cuda_runtime.h:90,
                 from /usr/include/cudnn.h:64,
                 from mnistCUDNN.cpp:30:

Solution: sudo vim /usr/include/cudnn.h

replace the line '#include "driver_types.h"' 
with '#include <driver_types.h>'

 

Configure the CUDA & cuDNN Environment Variables

# CUPTI libraries are at /usr/local/cuda/extras/CUPTI/lib64
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.0/lib64 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.0/lib 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/extras/CUPTI/lib64

source ~/.bashrc

TensorFlow installation

The Python environment is set up using a virtualenv located at /opt/pyenv/cv3:

$ source /opt/pyenv/cv3/bin/activate
$ pip install numpy scipy matplotlib 
$ pip install scikit-image scikit-learn ipython

Referenced from the official Tensorflow guide 

$ pip install --upgrade tensorflow      # for Python 2.7
$ pip3 install --upgrade tensorflow     # for Python 3.n
$ pip install --upgrade tensorflow-gpu  # for Python 2.7 and GPU
$ pip3 install --upgrade tensorflow-gpu==1.5 # for Python 3.n and GPU

# remove tensorflow
$ pip3 uninstall tensorflow-gpu

Now, run a test

(cv3) rahul@Windspect:~$ python
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2018-08-14 18:03:45.024181: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-08-14 18:03:45.261898: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:03:00.0
totalMemory: 10.91GiB freeMemory: 10.75GiB
2018-08-14 18:03:45.435881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:04:00.0
totalMemory: 10.90GiB freeMemory: 10.10GiB
2018-08-14 18:03:45.437318: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0, 1
2018-08-14 18:03:46.100062: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-08-14 18:03:46.100098: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0 1
2018-08-14 18:03:46.100108: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N Y
2018-08-14 18:03:46.100114: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 1: Y N
2018-08-14 18:03:46.100718: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10398 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
2018-08-14 18:03:46.262683: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 9769 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:04:00.0, compute capability: 6.1)
>>> print(sess.run(hello))
b'Hello, TensorFlow!'

Looks like it is able to discover and use the NVIDIA GPU
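
To double-check that ops actually land on the GPU, TensorFlow 1.x can log device placement when the session is created; a small sketch:

import tensorflow as tf

# Ask TF 1.x to log which device each op is assigned to
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.constant([1.0, 2.0, 3.0], name='a')
    b = tf.constant([4.0, 5.0, 6.0], name='b')
    print(sess.run(a + b))  # placement log lines should mention /device:GPU:0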

KERAS

Now add keras to the system

pip install pillow h5py keras autopep8

Edit configuration, vim ~/.keras/keras.json

{
  "image_data_format": "channels_last",
  "backend": "tensorflow",
  "epsilon": 1e-07,
  "floatx": "float32"
}

A quick test for Keras at the Python CLI looks like this:

(cv3) rahul@Windspect:~/workspace$ python
Python 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609] on linux
>>> import keras
Using TensorFlow backend.
>>>
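
Beyond the import, a tiny end-to-end sanity check (random data; nothing here is specific to this setup) confirms that Keras can build and fit a model on the TensorFlow backend:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Tiny random-data regression that just exercises the backend
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(16,)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

x = np.random.rand(256, 16)
y = np.random.rand(256, 1)
model.fit(x, y, epochs=1, batch_size=32)  # GPU log lines should appear here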

 

END.

 

Quick Apt Repository way – NVIDIA CUDA 9.x on Ubuntu 18.04 LTS installation

This is the same NVIDIA CUDA 9.1 setup on Ubuntu 18.04 LTS, done using the apt repository instead. This approach works and is simpler to set up. The reference is taken from this askubuntu discussion.

Look up the solution to the Nouveau issue in this blogpost.

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo ubuntu-drivers autoinstall
sudo reboot

Now install the CUDA toolkit

sudo apt install g++-6
sudo apt install gcc-6
sudo apt install nvidia-cuda-toolkit gcc-6


Run the installer

root@wind:~/Downloads# ./cuda_9.1.85_387.26_linux --override


Setup the environment variables

# Environment variables
export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib64 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib

Provide the soft link for the gcc-6 compiler

sudo ln -s /usr/bin/gcc-6 /usr/local/cuda/bin/gcc
sudo ln -s /usr/bin/g++-6 /usr/local/cuda/bin/g++
sudo reboot

Test

cd ~/NVIDIA_CUDA-9.1_Samples/
make -j4

Upon completion of the compilation, test using the deviceQuery binary:

$ cd ~/NVIDIA_CUDA-9.1_Samples/bin/x86_64/linux/release
$ ./deviceQuery


$ sudo bash -c "echo /usr/local/cuda/lib64/ > /etc/ld.so.conf.d/cuda.conf"
$ sudo ldconfig

DONE

NVIDIA CUDA 9.x on Ubuntu 18.04 LTS installation

Guide

An installation guide to take you through the NVIDIA graphics driver as well as CUDA toolkit setup on an Ubuntu 18.04 LTS.

A. Know your cards

Verify what graphics card you have on your machine

rahul@karma:~$ lspci | grep VGA
04:00.0 VGA compatible controller: 
NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
rahul@karma:~$ sudo lshw -C video
 *-display 
 description: VGA compatible controller
 product: GM204 [GeForce GTX 970]
 vendor: NVIDIA Corporation
 physical id: 0
 bus info: pci@0000:04:00.0
 version: a1
 width: 64 bits
 clock: 33MHz
 capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
 configuration: driver=nouveau latency=0
 resources: irq:30 memory:f2000000-f2ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:2000(size=128) memory:f3080000-f30fffff

Download the right driver

I downloaded version 390.67 for the GeForce GTX 970.


B. Nouveau problem kills your GPU rush

However, there are solutions available.

Here is what worked for me:

  1. Remove all NVIDIA packages (skip this if your system is freshly installed):
    sudo apt-get remove nvidia* && sudo apt autoremove
    
  2. Install some packages needed to build the kernel modules:
    sudo apt-get install dkms build-essential linux-headers-generic
    
  3. Now blacklist and disable the nouveau kernel driver:
    sudo vim /etc/modprobe.d/nvidia-installer-disable-nouveau.conf
    

Insert the following lines into nvidia-installer-disable-nouveau.conf:

blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off

Save and exit.

  4. Disable the kernel nouveau driver by typing the following command (nouveau-kms.conf may not exist; that is OK):
    rahul@wind:~$ echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
    options nouveau modeset=0
    
  5. Build the new kernel image:
    rahul@wind:~$ sudo update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-4.15.0-23-generic
    
  6. Reboot.

Run the installer in run-level 3:

$ sudo init 3 
$ sudo bash
$ ./NVIDIA-Linux-x86_64-390.67.run

Uninstall

More instructions on how to stop using the driver before uninstallation:
sudo nvidia-installer --uninstall

C. NVIDIA X Server Settings

Install this from the Ubuntu Software Center.

D. Start the CUDA related setup

We will need the CUDA toolkit 9.1, which supports the GTX 970 (compute capability 5.2). So download the local installer for Ubuntu.


Downloaded the “cuda_9.1.85_387.26_linux.run” local installation file.

$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt install nvidia-cuda-toolkit gcc-6

Steps are taken from the CUDA 9.1 official documentation

  1. Perform the pre-installation actions.
  2. Disable the Nouveau drivers. We did this in the driver installation above.
  3. Reboot into text mode (runlevel 3). This can usually be accomplished by adding the number “3” to the end of the system’s kernel boot parameters, or by changing the runlevel with ‘sudo init 3’.
  4. Verify that the Nouveau drivers are not loaded. If the Nouveau drivers are still loaded, consult your distribution’s documentation to see if further steps are needed to disable Nouveau.
  5. Run the installer and follow the on-screen prompts:
$ chmod +x cuda_9.1.85_387.26_linux
rahul@wind:~/Downloads$ ./cuda_9.1.85_387.26_linux --override


Since we already installed the driver above, we say NO to the NVIDIA accelerated graphics driver installation question.


This will install the CUDA stuff in the following locations

  • CUDA Toolkit /usr/local/cuda-9.1
  • CUDA Samples $(HOME)/NVIDIA_CUDA-9.1_Samples

We can verify the graphics card using the nvidia-smi command.


Uninstallation

cd /usr/local/cuda-9.1/bin
sudo ./uninstall_cuda_9.1.pl

 

E. Environment Variables

rahul@wind:~$ vim ~/.bashrc

# Add the following to the environment variables
export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib64 
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-9.1/lib

rahul@wind:~$ source ~/.bashrc
rahul@wind:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.1, 

 

F. Test

Ensure you have the right driver versions

rahul@wind:$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module 390.67 Fri Jun 1 04:04:27 PDT 2018
GCC version: gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)

Change directory to the NVIDIA CUDA Samples and compile them

rahul@wind:~/NVIDIA_CUDA-9.1_Samples$ make

Now run the device query test

rahul@wind:~/NVIDIA_CUDA-9.1_Samples/bin/x86_64/linux/release$ ./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

 

END