Machine Learning II – Detecting Lanes On A Highway

Written by Div Gill

With all the hype around self-driving cars these days, I thought it would be fun to try to detect lanes on a highway.

Lane detection is one of the techniques currently used to offer driver-assist features and, maybe in the future, fully driverless operation. We won’t go all the way to detecting lanes to the degree a self-driving car requires; instead the goal is to remove most things in the image except the lines on the highway. After that, the hard work will be done and locating the lines should be much easier.

We will be marking everything we think is a lane as one value and everything else as another, thus creating a binary image. The techniques we will use are also very low overhead and thus can be implemented in real time on a mobile system.

In [49]:

from scipy import misc
from scipy import ndimage
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.pyplot import imshow
%matplotlib inline
from IPython.core.pylabtools import figsize


In [50]:

img = misc.imread('highway.jpeg')

Image source:*qtuIbycQUWjP0hUtY9Zj_g.jpeg

In Figure 1 (above) we have an image of a highway on a nice sunny day. It’s in colour, but most of what we need to do to detect the lanes we can do without colour; it’ll make things much simpler since we need to only deal with grayscale values and not separate RGB values.

In [165]:

figsize(15, 10)
plt.title('Fig 1: Test Image')
imshow(img)




Below (Figure 2), we convert the image to grayscale using a simple equation that combines the RGB values of the image to produce a single grayscale value.

In [166]:

figsize(15, 10)
# Convert the image to grayscale using the standard luma weights
R = img[:, :, 0]
G = img[:, :, 1]
B = img[:, :, 2]
img_gray = R * 299. / 1000 + G * 587. / 1000 + B * 114. / 1000
plt.title('Fig 2: Grayscale Image')
imshow(img_gray, cmap='gray')




To do most of the work of detecting the lanes on the road, we will be using image filters. I won’t go into the mathematical details, but simply put: image filters allow us to globally remove certain kinds of information from an image.

The first of these filters is a Gaussian blur filter. This filter blurs the image, and the degree of blurring can be controlled by a single parameter. Another way of thinking about a blurring filter is that it doesn’t allow “sharp” features in the image to pass through; it filters them out. The higher the degree of blurring, the less the filter allows to pass.
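As a quick sanity check of that intuition, here is a standalone toy example (not part of the lane pipeline): filtering pure noise, which is all “sharp” features, with two different blur strengths and confirming that the stronger blur passes less of it through.

```python
import numpy as np
from scipy import ndimage

rng = np.random.RandomState(0)
noise = rng.randn(10000)  # pure high-frequency content

mild = ndimage.gaussian_filter(noise, sigma=2)
strong = ndimage.gaussian_filter(noise, sigma=4)

# The stronger blur blocks more of the sharp variation,
# so its output wobbles less around zero.
print(noise.std(), mild.std(), strong.std())
```

The standard deviations shrink as sigma grows, which is exactly the “higher blur, less passes through” behaviour described above.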

In [167]:

img_filtered_1 = ndimage.filters.gaussian_filter(img_gray, 2)

figsize(15, 10)
plt.title('Fig 3: Blurred Image factor of 2')
imshow(img_filtered_1, cmap='gray')




Below (Figure 3) is our grayscale image blurred by a factor of 2. You can see it’s a bit duller than the original.

Next up, in Figure 4, we have the same grayscale image blurred by a factor of 4. As you can see, the image has been blurred quite a lot. Now, compare Figures 3 and 4 to see which features in these images were affected by the blurring and which were not. It may be hard to see, but the lanes are clearer in Figure 3 than in Figure 4.

What would happen if you subtracted a more blurred image from a less blurred one? The two images will have most things in common, but the less blurred one allows slightly more ‘sharp’ features to pass than the more blurred one blocks. The result of the subtraction leaves only the ‘sharp’ features that the less blurred image allowed through.
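Before applying this to the image, here is the same idea on a toy 1-D signal (an illustrative sketch, not part of the pipeline): a step edge, i.e. a ‘sharp’ feature, survives the subtraction, while the flat regions cancel out.

```python
import numpy as np
from scipy import ndimage

# A 1-D signal: flat, then a sharp step edge at index 50.
signal = np.zeros(100)
signal[50:] = 255.0

less_blurred = ndimage.gaussian_filter(signal, sigma=1)
more_blurred = ndimage.gaussian_filter(signal, sigma=3)
dog = less_blurred - more_blurred  # difference of Gaussians

# Away from the edge the two blurred versions agree, so the
# difference is near zero; around the edge it is large.
print(abs(dog[:40]).max(), abs(dog[45:55]).max())
```

This is the “difference of Gaussians” trick the next cells apply in 2-D.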

In [168]:

img_filtered_2 = ndimage.filters.gaussian_filter(img_gray, 4)

figsize(15, 10)
plt.title('Fig 4: Blurred Image factor of 4')
imshow(img_filtered_2, cmap='gray')




Below in Figure 5 is the result of subtracting one blurred image from another. As you can see, only ‘sharp’ features remain. However, before doing the subtraction, we first smooth the original image. This dulls it a bit, but we do this because noise in the image shows up as the very sharpest features, and we are not interested in those. By pre-smoothing, the noise gets blurred out before the difference is taken.

In [169]:

img_smooth = ndimage.filters.gaussian_filter(img_gray, 8)

img_filtered_1 = ndimage.filters.gaussian_filter(img_smooth, 0.1)
img_filtered_2 = ndimage.filters.gaussian_filter(img_smooth, 1.2)
img_dog = img_filtered_1 - img_filtered_2
img_dog = img_dog * (255 / img_dog.max())
plt.title('Fig 5: Difference of Gaussians')
imshow(img_dog, cmap='gray')




Now, let’s threshold the image and convert it into a binary image. All values above 50 are set to 255 (the maximum value allowed) and all other values are set to 0. The result is a binary image. Notice in Figure 6 (below) how the lanes on the road survive but nothing else on the road does. This is because the transition from the white lanes to the dark road is very sharp and is thus preserved with high intensity. Other, less sharp features also survive the difference of Gaussians, but their intensities are lower, and our threshold doesn’t allow them to pass through.

In [174]:

img_thresh = np.where(img_dog <= 50, 0, 255)
plt.title('Fig 6: Threshold Image')
imshow(img_thresh, cmap='gray')




We could finish there, but there is an additional step we could take: using line detection kernels (one for horizontal lines and one for vertical lines), we can extract the boundaries of the surviving lanes.

In [175]:

kernel_horoz = np.array([[-1, -1, -1],
                         [ 2,  2,  2],
                         [-1, -1, -1]])

kernel_vert = np.array([[-1, 2, -1],
                        [-1, 2, -1],
                        [-1, 2, -1]])

img_lines_horoz = ndimage.convolve(img_thresh, kernel_horoz)
img_lines_horoz = np.where(img_lines_horoz <= 15, 0, 255)

img_lines_vert = ndimage.convolve(img_thresh, kernel_vert)
img_lines_vert = np.where(img_lines_vert <= 15, 0, 255)

img_lines = img_lines_vert + img_lines_horoz

figsize(15, 10)


In [172]:

plt.title('Fig 7: Line Extracted Image')
imshow(img_lines, cmap='gray')




In [173]:

plt.subplot(2, 1, 1)
plt.title('Fig 8a: Initial Image')
imshow(img)
plt.subplot(2, 1, 2)
plt.title('Fig 8b: Final Image')
imshow(img_lines, cmap='gray')




When you compare the final image with the initial image we started with (Figures 8a and 8b, above), you can see that we did quite a lot of work and have successfully extracted the lanes. You would still need a few more steps to make what we have useful, but the above demonstrates what you can do with simple image filters.

A comment a reader could make is that the threshold values we chose were tuned for this specific example image and would not work as well on other images taken in different lighting. This is true; however, using methods like histogram normalization, one can normalize the images coming into the lane detection algorithm and thus solve this problem (to a degree).
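As a rough sketch of what such a normalization step could look like, here is a minimal histogram-equalization pass written with plain NumPy (the function name and details are illustrative assumptions, not part of the pipeline above):

```python
import numpy as np

def equalize_histogram(img_gray):
    """Spread a grayscale image's intensities over the full 0-255 range.

    Maps each pixel through the image's own cumulative distribution,
    which makes a downstream fixed threshold less sensitive to the
    overall brightness of the scene.
    """
    hist, _ = np.histogram(img_gray.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    # Look up each pixel's rank in the CDF and rescale to 0-255.
    return (255 * cdf[np.clip(img_gray.astype(int), 0, 255)]).astype(np.uint8)

# A dim synthetic image (values squeezed into 40-80) gets stretched
# out to cover nearly the full 0-255 range after equalization.
dim = np.linspace(40, 80, 64 * 64).reshape(64, 64)
eq = equalize_histogram(dim)
print(dim.min(), dim.max(), eq.min(), eq.max())
```

In practice one would apply something like this (or a library routine such as OpenCV's `equalizeHist`) to `img_gray` before the difference of Gaussians, so the same threshold works across differently lit frames.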

My final note is that simple filters like these are not actually the best way to solve this problem. State-of-the-art lane detection algorithms use much more sophisticated techniques that are invariant to brightness changes and robust to outliers, and they have proven far more reliable than approaches built on simple filters like ours. Nonetheless, even those algorithms use image filters in their processing steps, so understanding how these filters work is quite useful.