The 411 on Lidar

By: Div Gill, Engineering Physicist EIT, MistyWest

The term lidar comes up quite a lot when talking about self-driving cars. In this blog post, we will discuss what lidar is, why it is such a useful tool for autonomous driving, and the pros and cons associated with its use.

The term lidar stands for Light Detection and Ranging. Technically, lidar covers a broad range of sensors that use electromagnetic radiation in the visible, infrared and UV bands to measure distances to objects, but its modern use refers to a few specific types of sensors. There are many variations of lidars, but at a minimum a lidar is capable of doing a line scan. This is achieved by performing spot distance measurements with a laser of a given wavelength and either rotating the entire unit or, more commonly, bouncing the laser off a rotating mirror. The result is a set of data points along a line. These are both the most common and the cheapest types of lidars.

In the early days of autonomous driving, such units were used to scan the road ahead of a car. They were mounted on the roof, typically at varying angles with respect to the ground. For example, in Figure 1 the Stanford car used for the DARPA Grand Challenge has 5 lidars mounted at different angles. This gives the car 5 line scans in front of it at various distances, perpendicular to the car's motion. As the car moves forward, the data from the lidars is stitched together to create a 3D map of the road and the obstacles ahead.

Figure 1: Stanford self-driving car for the 2005 DARPA Grand Challenge. Source:
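As a rough sketch of the line-scan idea, here is how a sweep of time-of-flight readings becomes a line of 2D points. The start angle, angular step, and sample timing below are made-up values for illustration, not parameters of any particular sensor:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def line_scan_points(tof_seconds, start_deg=-45.0, step_deg=1.0):
    """Convert a sweep of time-of-flight readings into 2D (x, y) points.

    Each reading is the round-trip time for one laser pulse; the mirror
    advances `step_deg` between pulses, sweeping a line across the scene.
    """
    points = []
    for i, t in enumerate(tof_seconds):
        r = C * t / 2.0                           # round trip -> one-way range
        a = math.radians(start_deg + i * step_deg)
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# A surface ~15 m away, straight ahead, returns in about 100 ns:
pts = line_scan_points([1e-7], start_deg=0.0)
```

Stacking many such scans as the car moves forward is what turns these 2D lines into a 3D map.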

Modern lidars work much like the Stanford car's setup, but instead of multiple separate units, a single unit contains multiple laser-receiver pairs at various angles. One such lidar, shown in Figure 2, is made by the company Velodyne.

Figure 2: Velodyne Lidar unit. Source:

The resulting point cloud is shown in Figure 3. Each circle in the image comes from a single laser-receiver pair being rotated 360 degrees, so by counting the circles you can tell how many laser-receiver pairs the unit contains. You can also see that the angle of each laser is chosen to keep the circles equidistant from each other when scanning a flat surface in front of the car.

Figure 3: Velodyne point cloud. Source:
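The equidistant-ring geometry follows from a little trigonometry: a laser tilted down by angle a from height h hits flat ground at horizontal distance h / tan(a). A minimal sketch of choosing the tilt angles (the mounting height and ring spacing are assumed values, not Velodyne specifications):

```python
import math

def ring_angles(sensor_height_m, ring_spacing_m, n_lasers):
    """Depression angles (degrees below horizontal) that place each
    laser's ground ring at equal radial spacing on flat ground.

    A laser tilted down by angle a from height h hits the ground at
    horizontal distance h / tan(a), so for rings at d, 2d, 3d, ...
    we solve a_n = atan(h / (n * d)).
    """
    return [math.degrees(math.atan2(sensor_height_m, n * ring_spacing_m))
            for n in range(1, n_lasers + 1)]

# Sensor 2 m up, rings every 2 m on the ground:
angles = ring_angles(sensor_height_m=2.0, ring_spacing_m=2.0, n_lasers=4)
# The nearest ring needs the steepest tilt; angles shrink for farther rings.
```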

Most companies conducting research in self-driving cars currently use such 3D lidars.

The final category of lidars relevant to self-driving cars is solid state lidars. A solid state lidar is any unit that can do 3D scans without any moving parts. Eliminating moving parts is thought to be a fundamental necessity if lidars are to become more robust and less expensive. There are currently various techniques for achieving solid state operation. One of them is the so-called flash lidar. The idea is very simple: instead of having multiple laser-receiver pairs, you have one laser and multiple receivers arranged in a grid, like the CMOS sensor in a modern digital camera. The laser is fired through a diffuser that turns the point coming out of the laser into a diffused flash of light. As this light bounces off objects, some of it is reflected back to the flash lidar, where it passes through a lens and onto the array of receivers. Each receiver measures the time it takes for light to hit it relative to when the laser was fired. The result is a depth image very similar to what commercial depth cameras like the Kinect provide. The major difference is that flash lidars are designed to have a measurement range of 100 meters or more. This means a powerful laser, large and expensive optics, and very sensitive optical receivers. The result is a lidar unit that is currently too expensive for use in autonomous driving. Figure 4 shows a diagram of how a flash lidar works.

Figure 4: Diagram of a flash lidar. Source:
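The per-pixel conversion a flash lidar performs is just time-of-flight to distance, done once per receiver in the grid. A minimal sketch (the grid size and arrival times here are invented for illustration):

```python
C = 299_792_458.0  # speed of light, m/s

def flash_depth_image(arrival_times_s):
    """Turn a 2D grid of per-receiver arrival times (seconds after the
    flash was fired) into a depth image in metres.

    Every receiver shares the same single flash, so each pixel's depth
    is the one-way distance implied by its round-trip time.
    """
    return [[C * t / 2.0 for t in row] for row in arrival_times_s]

# A 1x2 "image": one return from ~30 m and one from ~75 m.
depth = flash_depth_image([[2.0e-7, 5.0e-7]])
```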

The real advantage of flash lidars (other than their solid-state nature) is that they capture an instant 3D image of the world. Figure 5 shows a flash lidar imaging a plane taking off. As you can see, the 3D point cloud captures the plane and the rotating propeller; because the whole frame is captured at once, the propeller is frozen in time, without distortion. A conventional 3D lidar, which takes time to sweep the scene, would have captured a rotating disk instead.

Figure 5: Flash Lidar Image. Source:
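The "rotating disk" effect can be shown with a small simulation: a scanning lidar spreads its samples across the scan period, so a spinning blade is seen at many different phases, while a flash lidar samples everything at one instant. The scan rate and propeller speed below are assumed values, chosen only to make the smearing obvious:

```python
import math

def blade_phases_seen(scan_period_s, prop_rps, n_samples):
    """Phase (radians) of a spinning propeller blade at each of the
    scanner's azimuth samples, spread evenly across one scan period.

    A flash lidar effectively takes all samples at t = 0 and sees a
    single phase; a scanning lidar sees the blade at many phases,
    smearing it into a disk.
    """
    dt = scan_period_s / n_samples
    return [(2 * math.pi * prop_rps * i * dt) % (2 * math.pi)
            for i in range(n_samples)]

# 10 Hz scan (0.1 s per sweep), propeller at 40 rev/s: the blade turns
# 4 full revolutions during one scan, so the samples cover the whole circle.
phases = blade_phases_seen(scan_period_s=0.1, prop_rps=40.0, n_samples=100)
```

Setting the scan period to zero models the flash case: every sample then sees the blade at the same phase.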

It’s worth mentioning that there are semi solid state lidars out there as well. They are like flash lidars, but instead of a 2D grid of receivers they have a 1D line of receivers. The laser hits a rotating mirror and then a diffuser that turns the laser dot into a laser line. This line sweeps back and forth, scanning the scene in front and giving us a 3D image. Note the difference between this setup and the Velodyne setup: a single laser gives us an instantaneous set of data points that lie on a line, and the number of points captured depends only on the number of receivers, which greatly simplifies the design. One downside of such a design is that the field of view is usually limited to less than 180 degrees. Most likely, such semi solid state devices will be the first type of lidar to end up in production cars.

Now that we understand what lidars are with respect to the self-driving car industry, a good question to ask is why use lidars over other sensors. The answer varies depending on who you ask. According to Tesla, for example, you don’t use lidar at all. Instead, Tesla plans to use an array of high dynamic range cameras to identify obstacles, lanes, road signs and everything else needed for self-driving. Tesla’s play is to use cheap hardware and push the complexity of understanding the car’s surroundings into software. This means Tesla requires much more complicated algorithms and much more processing power than someone who relies on lidars as their primary sensors. The benefit, however, is that Tesla doesn’t have to wait for lidar tech to mature. Instead, they are selling cars that are “hardware ready” for autonomous driving and continually updating the software. Can Tesla’s approach work? Well, consider that human drivers have 2 very high resolution, high dynamic range cameras as their primary sensors, and are currently the state of the art when it comes to driving a car.
Tesla’s latest car models have 8 cameras all over the car. They are nowhere near as good as the human eye, of course, and a Tesla does not possess anywhere near the computing power of a human brain, but recent advances in neural networks and deep learning suggest that a powerful modern GPU is good enough for many tasks previously thought to be doable only by humans. Time will tell.

In contrast, if you ask companies like Google, Uber, and almost every other company working on fully self-driving cars, the answer is that lidar is the sensor to use. Lidar gives a full 3D map of the environment out of the box. All that is required is to interpret the point cloud and identify cars, pedestrians, the road, etc. This is a much simpler task than taking the 2D images provided by cameras and inferring 3D information from them. Lidar is also self-illuminating, so unlike cameras it works in all lighting conditions. Lidars are thus a much more robust and less computationally expensive choice than cameras, and therefore a much lower risk, which is why most companies have chosen this path.

However, the lidars most of these companies are currently using are not ready to enter the consumer market because of cost and robustness. Current units are simply too expensive and break easily. Two lidars facing each other can also interfere with one another; the level of interference depends on the type of lidar, but no variation is risk free. With all their limitations, lidars are still the future of the self-driving market. Their measurement precision and scanning capability are unmatched. But much work needs to be done to make lidars cheap enough for the mass market, robust enough to handle the beating they will unavoidably take, and reliable enough to be entrusted with human lives.

Figure 6: Lidar Comparison