Docker: More Than A Cute Whale

Deploying computing to the edge using Docker containers and Azure IoT Edge

Written by Sean Edmond

The number of IoT devices has exploded over the last decade. More than 20 billion IoT devices are connected globally today, and that number is expected to triple within the next 15 years.  These devices have also increased in complexity and now integrate cameras, microphones and other sensors.

While the cloud has helped us centralize computing for these IoT devices, there are clear advantages to moving computing away from the cloud and deploying analytics to these “edge devices”:

  • The bandwidth required to transmit a video or audio stream to the cloud is high (and expensive)

  • Image and audio processing in the cloud puts stress on server computing resources

  • Communication to and from the cloud introduces latency that time-sensitive operations may not be able to tolerate, especially if the connection is lost

  • Private data isn’t exposed to the internet

  • Sensor fusion can be performed locally, right beside the sensors

https://www.ptgrey.com/edge-computing

This spring we were awarded a project to use Azure IoT Edge to deploy Docker modules to a hardware device we created in Fall 2017.  The Docker modules we created interact with all of the hardware sensors on the device. Here’s how we did it!

First Phase – Electrical Development

 

We designed the device from the ground up.  This included selecting all of the sensors and completing the electrical design.

Hardware Components included:

  • i.MX6 SOM from SolidRun
  • MIPI CSI-2 camera
  • I2S microphone
  • I2C devices for sensing temperature, humidity, pressure, time-of-flight, ambient light, motion
  • plastic enclosure


The project only had the schedule and budget to complete one board revision.  To de-risk the camera design, we also created a breakout board for the sensor that attached to the IMX6Q development board.

First Phase – Firmware Development


We used Yocto to build our own Linux distribution. The advantage of using Yocto is that you can build your own stripped-down version of Linux, specifically designed for your embedded application.

We were able to get a head start on the camera kernel driver with the custom OV9281 breakout board.  The driver was coded to be Video4Linux (V4L2) compatible for easy integration with NXP’s board support package.  Similarly, the I2S microphone driver was coded to plug into the ALSA framework. The camera driver required several kernel patches to support our sensor format (which truly tested our skills)!  Creating drivers for the I2C sensors was a breeze.
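
Once the drivers are in place, reading a sensor from user space is simple. As a rough illustration (the sysfs path below is an assumption; the exact node depends on the driver and board), a temperature reading can be pulled from the hwmon interface exposed by the kernel like this:

/* read_temp.c - minimal sketch: read a temperature from a hwmon sysfs node.
 * The path is hypothetical; check /sys/class/hwmon/ on your own board. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "/sys/class/hwmon/hwmon0/temp1_input"; /* assumed node */
    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    long millideg;
    if (fscanf(f, "%ld", &millideg) != 1) {
        fprintf(stderr, "unexpected sysfs format\n");
        fclose(f);
        return EXIT_FAILURE;
    }
    fclose(f);

    printf("temperature: %.3f C\n", millideg / 1000.0); /* hwmon reports millidegrees */
    return EXIT_SUCCESS;
}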

We had a few challenges booting the board and had to make modifications to u-boot and the Device Tree.  With the board up and running we were finally able to do our final phase of validation. The board was a rev A success!

After assembling the units and focusing the camera lenses, we delivered the final devices with a few sample applications for accessing the sensors.  The plan was that our customer would develop their own application using the SDK exported by Yocto.

First Phase – Deployment in the Field


One of the challenges our customer faced with the initial delivery was how difficult it was to compile and deploy applications for the target:

 

  • Building applications with Yocto is possible, but not practical, because every developer needs to be running Ubuntu and have a minimum of 150 GB of free disk space
  • The system lacked an easy mechanism to deploy compiled binaries and dependent libraries remotely.  Re-flashing and re-installing SD cards was an option, but it required taking apart the enclosure
  • Yocto can generate an SDK that provides a “sysroot” and gcc for cross-compiling.  However, every time a new library is required, the SDK has to be regenerated. Re-generating the SDK is time-consuming, requires Yocto, and is a pain to distribute to all application developers
  • Debugging on the target is difficult.  Developers wanted a way to test their applications off the target
  • Developers were required to develop in C or C++.  They wanted the ability to perform analytics in other languages

Second Phase – The Solution: Azure IoT Edge

Clearly, we needed to enhance the device to enable application developers to easily create and deploy their analytics software to the edge.  Our customer came to us for a second project phase to install Azure IoT Edge on the devices.

Azure IoT Edge works by deploying Docker containers to IoT Edge devices.  A Docker container is a “lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings”.  Azure IoT Edge modules communicate with each other or with the cloud using either a JSON message interface or a websocket.

https://stackoverflow.com/questions/16047306/how-is-docker-different-from-a-normal-virtual-machine

This provided the perfect solution for easily developing and deploying software:

  • Applications can be developed in any language, as the message or websocket interface is language-agnostic
  • Dependent libraries and packages are installed in each Docker image
  • Azure IoT Edge provides all of the functionality to manage and monitor the connection to each device, as well as the interface to deploy and monitor the Docker modules
  • Docker images can be created for any target, so Windows developers are able to test modules on their Windows machines before deploying to the target
https://docs.microsoft.com/en-us/azure/iot-edge/how-iot-edge-works

Second Phase – Installing Docker CE and Azure IoT Edge on the Device

 

The first goal was to complete the Azure IoT quick start on our device:
(https://docs.microsoft.com/en-us/azure/iot-edge/quickstart).

 

Installing Docker CE using Yocto onto our ARM target was challenging:

 

  • We had to update all Yocto layers from Jethro to Rocko to add virtualization features (this included updating to the Freescale community BSP)
  • Added the meta-virtualization layer
  • Added the required kernel configurations (CONFIG_BRIDGE=y was the only one missing for us)
  • Added a Yocto recipe to install Docker CE (we used the one from yoctoproject.org)
  • Added Yocto recipes to install the Azure IoT Edge runtime control tool and its dependent Python packages (a sketch of these configuration changes follows this list)
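
For illustration, the build configuration changes amounted to something like the following (the layer path and recipe names are placeholders; our Azure IoT Edge recipes were written in-house):

# conf/bblayers.conf - add the virtualization layer (path is a placeholder)
BBLAYERS += "/path/to/sources/meta-virtualization"

# conf/local.conf - enable container support and install Docker CE plus the
# Azure IoT Edge runtime control tool (recipe names are placeholders)
DISTRO_FEATURES_append = " virtualization"
IMAGE_INSTALL_append = " docker azure-iot-edge-runtime-ctl"

# kernel configuration fragment, applied through our kernel recipe:
# CONFIG_BRIDGE=y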


Docker CE requires kernel version 3.10+.  Fortunately, the SolidRun version of the kernel we were using was 3.14, so we didn’t have to update our kernel patches!

Since the ARM cores on the devices are pretty slow, we had to patch one of the timeouts.  In docker.py, we changed "DEFAULT_TIMEOUT_SECONDS = 60" to "DEFAULT_TIMEOUT_SECONDS = 180".

The Azure IoT Edge agent and hub modules take up quite a bit of disk space, and more disk space is required for future deployment of modules.  We decided to maximize the rootfs on our SD card by adding the following to our layer.conf: IMAGE_ROOTFS_SIZE = "15271040"

 

Our customer wanted Azure IoT Edge to start automatically on boot.  We added init/shutdown scripts to perform the following (a sketch of the init script follows the list):

 

  • Start the Docker daemon
  • "iotedgectl setup" on bring-up, using the configured connection string
  • "iotedgectl start" on bring-up
  • "iotedgectl login" on bring-up (to log in to Docker registries)
  • "iotedgectl stop" on reboot
  • "iotedgectl uninstall" on reboot
  • "docker system prune" on reboot (through testing, we found this was required to remove unused containers so the system didn’t run out of disk space)
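
As a rough sketch (SysV-style; the connection string handling and registry credentials are placeholders, and the iotedgectl options should be checked against your version of the tool):

#!/bin/sh
# /etc/init.d/iotedge (sketch) - bring Azure IoT Edge up and down with the system
CONN_STR="<device connection string>"   # placeholder - store securely in practice

case "$1" in
  start)
    /etc/init.d/docker start                          # make sure the Docker daemon is running
    iotedgectl setup --connection-string "$CONN_STR"  # configure the runtime for this device
    iotedgectl login --address <registry> --username <user> --password <password>
    iotedgectl start
    ;;
  stop)
    iotedgectl stop
    iotedgectl uninstall
    docker system prune -f                            # reclaim space from unused containers
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac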


We ran into issues trying to start the Azure IoT Edge hub.  We were able to resolve them by changing the user to root in the edgeHub’s container create options: "Env": ["EdgeHubUser=root"]

With the installation complete, we were able to see the simulated temperature sensor messages reach the cloud!  Now it was time for us to create Azure IoT Edge modules for interacting with the hardware sensors.

Second Phase – Creating Azure IoT Edge Modules to Interact with the Sensors

 

The main challenge of interfacing with the hardware sensors is that the kernel interface is in C (which is currently not a supported language for Azure IoT Edge modules). Therefore, we had to create a C binary for the ARM target to interface with the kernel drivers. Then, we used a TCP socket to transmit the sensor information to a C# program.  The C# program handled all of the interactions with the Azure IoT Hub.

The camera was the most fun to get working (it’s always rewarding to be able to “see” your work).  The customer requested a 100 image buffer with the images saved in the file system. They also requested that images could be sent to the cloud using the message interface. Here’s a brief description of the architecture.

The C program (a simplified sketch follows this list):

 

  • Is compiled using Yocto for the ARM target
  • Acts as the TCP server for the C# program
  • Opens the video device (/dev/video0) and uses Video4Linux ioctl() calls to configure the camera
  • Grabs raw images and saves them as a file in /data/images/
  • Optionally converts the raw image to PNG format
  • Uses the TCP socket to send the image file name
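
To make the split concrete, here is a heavily simplified sketch of the C side (the port, resolution, pixel format and file naming are illustrative only, and the real program uses proper V4L2 streaming I/O and error recovery rather than a simple read()):

/* camera_server.c - simplified sketch of the camera capture + TCP server loop */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/videodev2.h>

#define PORT   5000     /* port number is an assumption */
#define WIDTH  1280
#define HEIGHT 800

int main(void)
{
    /* 1. Open the video device and configure the camera with V4L2 ioctl() calls */
    int cam = open("/dev/video0", O_RDWR);
    if (cam < 0) { perror("open /dev/video0"); return 1; }

    struct v4l2_format fmt = {0};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = WIDTH;
    fmt.fmt.pix.height = HEIGHT;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_GREY;   /* raw mono frames, as an example */
    if (ioctl(cam, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    /* 2. Act as the TCP server and wait for the C# module to connect */
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(PORT);
    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
    listen(srv, 1);
    int client = accept(srv, NULL, NULL);
    if (client < 0) { perror("accept"); return 1; }

    /* 3. Grab raw frames, save them under /data/images/, and send each file name */
    unsigned char *frame = malloc(fmt.fmt.pix.sizeimage);
    for (int i = 0; i < 100; i++) {                /* 100-image buffer, per the requirements */
        if (read(cam, frame, fmt.fmt.pix.sizeimage) < 0) { perror("read frame"); break; }

        char name[64];
        snprintf(name, sizeof(name), "/data/images/image_%d", i);
        FILE *out = fopen(name, "wb");
        if (!out) { perror("fopen"); break; }
        fwrite(frame, 1, fmt.fmt.pix.sizeimage, out);
        fclose(out);

        send(client, name, strlen(name) + 1, 0);   /* tell the C# client which file is ready */
    }

    free(frame);
    close(client);
    close(srv);
    close(cam);
    return 0;
}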


The C# program:

 

  • Is based on Azure’s temperature sensor example
  • Starts the C binary process
  • Acts as a TCP client to C program
  • Uses the TCP socket to receive the image file name
  • Uses the JSON message interface to transmit either the file name or the image binary data as a base64-encoded string
  • Gets and sets configurable camera properties from the Azure IoT Hub (such as resolution, frame rate, exposure, gain)


The Docker container for the module (a Dockerfile sketch follows this list):

 

  • Installs the C binary using the "COPY" command in the Dockerfile
  • Installs dependent libraries in the Docker container with the Dockerfile (using "RUN apt-get").  For the camera we required ImageMagick to convert the raw images into PNG
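
As a sketch of the hardware-specific part of the Dockerfile (the base image, binary name and paths are placeholders, and the lines that publish and start the C# program are omitted):

# Dockerfile (sketch) for the camera module
FROM arm32v7/debian:stretch-slim

# install dependent libraries with apt-get; ImageMagick converts the raw frames to PNG
RUN apt-get update && apt-get install -y imagemagick && rm -rf /var/lib/apt/lists/*

# install the cross-compiled C binary that interfaces with the kernel drivers
COPY camera_capture /usr/local/bin/camera_capture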

Since Docker containers are completely isolated from the host, we had to “mount” the video device (/dev/video0) and data volume (/data) into the container.  This is easily accomplished using the “container create options” in the Azure IoT Edge portal (an equivalent docker run command for local testing is shown after the snippet):

{
  "HostConfig": {
    "Binds": [
      "/data:/data"
    ],
    "Devices": [
      {
        "PathOnHost": "/dev/video0",
        "PathInContainer": "/dev/video0",
        "CgroupPermissions": "rwm"
      }
    ]
  }
}
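
Outside of IoT Edge, the same mounts can be passed to docker run directly, which is handy when testing a module by hand (the image name is a placeholder):

docker run --rm -v /data:/data --device /dev/video0:/dev/video0 <camera-module-image>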

We replicated the methodology used for the camera for the I2C sensors and the microphone.  After powering on the device, all of the Docker containers are downloaded automatically. I was able to monitor the device-to-cloud messages received at my Azure IoT Hub:

[IoTHubMonitor] Message received from [_IOT_EDGE]:
{
  "imageInfo": {
    "imageName": "/data/images/image_58",
    "imageBase64String": null
  },
  "timeCreated": "2018-06-06T17:11:31.2710461+00:00"
}

Lessons Learned


The storage driver that Docker uses is important! Since we were on kernel 3.14, we were stuck with the vfs storage driver. As a result, our Docker containers take up a huge amount of space (all 5 containers consume about 8 GB). The overlay2 storage driver, which is Docker’s preferred driver and consumes significantly less disk space, requires kernel version 4.0 or later.

We should have placed all of our sensors into one Docker container instead of 3 separate ones. This would reduce the deployment time and the disk space required. Docker containers are pretty cool! We’re definitely going to consider using them for future projects. The tool is much more powerful than its cute whale logo suggests 🙂

If you have any questions about the project, don’t hesitate to contact me!

Sean Edmond
sean.edmond@mistywest.com