Autonomous driving based on sensor fusion
The most important premise in road traffic is that road users do not collide with stationary objects or other vehicles. Safety therefore takes the highest priority. To enable safe mobility in a complex 3D world, sufficient distance to surrounding objects and vehicles must be maintained at all times. In manually controlled vehicles, the driver is responsible for maintaining these distances and avoiding collisions with other road users. At higher levels of automation, however, the vehicle itself is responsible for sensing its surroundings. Environment perception is therefore an essential factor in automated driving, and sensor networks consisting of ultrasound, camera, radar, and LiDAR will be integrated into the vehicles of the future.
But why are so many different sensor systems needed for automated driving?
Is LiDAR a “crutch”?
According to Tesla CEO Elon Musk, at least one sensor technology is not necessary for autonomous driving. At an investor conference in 2019, he explained that LiDAR sensors are unnecessary and that cameras with capable algorithms, in combination with radar technology, are sufficient for automated driving functions. His argument that LiDAR sensors are too expensive and too large to be built into current production vehicles may have been true so far – this is precisely why the Blickfeld technology was developed – but relying on cameras and radar alone is not a safe approach today.
A recent incident on a highway in Taiwan shows why: a truck had tipped over, blocking the lanes, with the white tarpaulin covering its roof facing the approaching traffic. A Tesla crashed into the truck without braking. Fortunately, the truck carried no cargo, so nobody was hurt. How did this accident happen? Since the vehicle did not slow down as it approached the obstacle, it can be assumed that the so-called Autopilot was switched on. A human driver would probably have reacted at least shortly before the impact. Tesla’s Autopilot is based on a sensor suite without LiDAR; it relies on cameras supported by radar and ultrasound. The image recognition software, which analyzes the recorded camera data and thus provides the basis for driving decisions, could not make sense of the unfamiliar sight of the overturned truck and did not even detect an object in its own lane: it misclassified the truck’s tarpaulin and did not interpret the white surface as an obstacle.
Cameras – the eyes of cars?
Cameras are similar to our human eyes – they capture images as we see them, in color. What camera recordings lack, however, is the third dimension, which is necessary for measuring distances. This ability is essential when it comes to avoiding objects. The human brain interprets the recorded 2D information to estimate distances, while cameras require image recognition software to perform this feat.
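The loss of the third dimension can be made concrete with the standard pinhole camera model: every 3D point along a ray through the camera center lands on the same pixel, so the image alone cannot distinguish near from far. A minimal sketch (the focal length and point coordinates are illustrative values, not real camera parameters):

```python
# Pinhole camera model: a 3D point (X, Y, Z) projects to pixel
# coordinates (f*X/Z, f*Y/Z). Every point on the same ray through
# the camera center produces the identical pixel.

def project(point, focal_length=1000.0):
    """Project a 3D point (meters) to 2D pixel coordinates."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

near = (1.0, 0.5, 10.0)   # a point 10 m away
far = (2.0, 1.0, 20.0)    # a different point, twice as far

# Both project to the identical pixel -- the image alone cannot
# tell them apart, so depth must be inferred by software.
print(project(near))  # (100.0, 50.0)
print(project(far))   # (100.0, 50.0)
```

This ambiguity is exactly the gap that image recognition software has to bridge by interpretation.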
The problem with image recognition: in order to interpret images, algorithms must be trained on labeled examples of previously encountered situations. This is achieved with the help of artificial intelligence, machine learning, and thousands of test kilometers – both real and simulated. But what happens when the vehicle encounters an unknown situation? Covering this so-called “long tail,” i.e. capturing all those situations that are not part of everyday driving and can be described as extraordinary, is a challenge that has not yet been solved. As long as it remains unsolved, a camera cannot safely serve as the sole sensor technology on which automated driving functions are based. The necessary interpretation of camera data by algorithms creates room for errors, which ultimately endanger the safety of road users.
LiDARs: Leaving no room for interpretation
Sensor technologies such as LiDAR leave no room for interpretation as to whether an object is on the roadway: the sensor emits laser pulses that are reflected by surrounding objects and detected again by the sensor, and the pulse’s time of flight yields the distance directly. LiDARs thus capture 3D data natively and skip the intermediate step of inferring the third dimension from 2D images. If there is an obstacle in front of the vehicle, LiDAR sensors detect it reliably and early, identifying its exact dimensions and, above all, its distance to the vehicle.
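The distance measurement itself involves no interpretation at all: it follows directly from the round-trip time of the laser pulse. A minimal sketch of this time-of-flight calculation (the pulse timing is an illustrative value):

```python
# LiDAR ranging: distance follows directly from the laser pulse's
# round-trip time (time of flight). The pulse travels to the target
# and back, hence the division by two.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_tof(round_trip_seconds):
    """Distance (m) to the reflecting surface from round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~334 ns corresponds to roughly 50 m.
t = 333.6e-9
print(f"{distance_from_tof(t):.1f} m")  # 50.0 m
```

Note how the nanosecond timescale illustrates the precision a LiDAR’s electronics must achieve: one meter of range corresponds to only a few nanoseconds of flight time.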
However, the type of object in the vehicle’s path can also be a crucial factor, as not every object is an obstacle that requires the vehicle to brake. The various sensor technologies classify objects in different ways: LiDAR sensors, for example, identify clusters of points in the sensor data, and based on the size of these clusters, objects can be sorted into categories such as cars, motorcycles, or pedestrians. To identify, say, a plastic bag blowing across the road as harmless, the analysis of camera data – which, as described above, relies on image recognition software – is again helpful. Cameras are also needed to recognize road signs, because LiDAR sensors do not record color.
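The size-based categorization of point clusters can be sketched as follows; the bounding-box heuristic and the thresholds are simplified illustrations for this article, not Blickfeld’s actual perception pipeline:

```python
# Illustrative size-based classification of LiDAR point clusters:
# each cluster of ground-plane points is reduced to its bounding
# box, and its longest extent selects a rough object category.
# Thresholds are invented for illustration.

def bounding_box_size(cluster):
    """Length and width (m) of a cluster of (x, y) ground-plane points."""
    xs = [p[0] for p in cluster]
    ys = [p[1] for p in cluster]
    return max(xs) - min(xs), max(ys) - min(ys)

def classify(cluster):
    length, width = bounding_box_size(cluster)
    longest = max(length, width)
    if longest > 6.0:
        return "truck"
    if longest > 3.0:
        return "car"
    if longest > 1.5:
        return "motorcycle"
    return "pedestrian"

# A roughly car-sized cluster (4.2 m x 1.8 m footprint):
car_points = [(0.0, 0.0), (4.2, 0.0), (4.2, 1.8), (0.0, 1.8)]
print(classify(car_points))  # car
```

Such geometric cues tell the vehicle *that* and *how large* something is, while the camera contributes *what* it is – which is exactly why the two complement each other.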
Autonomous driving without LiDAR won’t work
Thus, every sensor technology has its advantages, its disadvantages, and its place. It is clear that redundancy in the sensor network is necessary to ensure the safety of vehicles with automated driving functions: none of the available sensor technologies can enable autonomous driving on its own. Incidents such as the accident in Taiwan described above also clearly show that LiDAR sensors should not be left out of these sensor networks. After all, automated vehicles need to be one thing above all else – safe. With LiDAR sensors, autonomous vehicles come a major step closer to achieving this goal.
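The value of such redundancy can be illustrated with a hypothetical decision rule (this is a sketch for this article, not any manufacturer’s actual logic): if braking requires only one modality to confirm an obstacle within the safety distance, a camera misclassification like the one in Taiwan cannot silently veto a LiDAR detection.

```python
# Hypothetical redundant braking rule: brake if ANY sensor reports
# a confirmed obstacle within the safety distance. A single failed
# modality (e.g. a camera misclassifying a white tarpaulin) then
# cannot suppress the decision on its own.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    obstacle_in_lane: bool
    distance_m: Optional[float]  # None if the sensor gives no range

def should_brake(camera: Detection, lidar: Detection,
                 safety_distance_m: float = 40.0) -> bool:
    for d in (camera, lidar):
        if (d.obstacle_in_lane and d.distance_m is not None
                and d.distance_m < safety_distance_m):
            return True
    return False

# Camera misses the white tarpaulin; LiDAR still ranges the truck.
camera = Detection(obstacle_in_lane=False, distance_m=None)
lidar = Detection(obstacle_in_lane=True, distance_m=35.0)
print(should_brake(camera, lidar))  # True
```

The design choice is deliberately asymmetric: false-positive braking is uncomfortable, but a missed obstacle is fatal, so independent sensors vote with “or,” not “and.”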