12. March 2021 - Providentia Editors

Many sensors, one digital twin: How does it work?

In the coming months, the number of sensors in the Providentia++ project will more than triple. This will create challenges, especially for data fusion experts such as Leah Strand from TU Munich.

Ms. Strand, the research project uses a sensor network consisting of area scan cameras, radars, and lidars. Why are so many different sensors necessary?

Each sensor has its own “field of view”: It observes the scene from its own angle. In a multi-sensor setup that integrates many sensors, this view can be extended as desired, and the individual strengths of the sensors can be exploited. A camera is light-dependent, so it can only be used during the day, while a radar provides good data even at night and also measures vehicles’ speeds very precisely. Lidar’s strength lies in the 3D detection of objects, meaning that pedestrians or cyclists, for example, can be detected well in three dimensions at close range.
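To make that complementarity concrete, here is a minimal sketch in Python. The capability table and its values are simplifications of what is said in the interview, not project specifications.

```python
# Illustrative capability table: which modality covers which requirement.
# The traits below are simplified from the interview, not project specs.
SENSOR_TRAITS = {
    "camera": {"works_at_night": False, "measures_speed": False, "detects_3d_shape": False},
    "radar":  {"works_at_night": True,  "measures_speed": True,  "detects_3d_shape": False},
    "lidar":  {"works_at_night": True,  "measures_speed": False, "detects_3d_shape": True},
}

def covered_by(requirement):
    """Return the sensors in the network that satisfy a given requirement."""
    return [name for name, traits in SENSOR_TRAITS.items() if traits[requirement]]

print(covered_by("works_at_night"))   # ['radar', 'lidar']
print(covered_by("measures_speed"))   # ['radar']
```

No single modality covers every requirement, which is exactly why the network combines all three.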

The sensors operate on different principles. A radar works with radio waves, a lidar works with laser beams, and video cameras “capture” light. How can the data be combined into a single image of the traffic?

There are essentially two possibilities: low-level fusion and high-level fusion. In low-level fusion, the raw data from the individual sensors is fused directly. The advantage is that this uses the maximum amount of information contained in the raw data. However, fusion algorithms that operate on raw data do not adapt well to new sensors, scenes, and so on, which complicates development. We therefore rely on high-level fusion, in which the sensor-specific evaluations are encapsulated. We first evaluate the raw data from each sensor individually: in the case of an area scan camera, this is a two-dimensional pixel matrix containing color and light intensities; in the case of lidar, it is a three-dimensional point cloud. From this we obtain abstracted object detections, which have the advantage of being independent of the sensors’ measurement principles. Only after an object has been detected and classified by a sensor-specific detection algorithm – for the camera images, for example, this is done with a neural network based on the YOLOv4 architecture – does the actual fusion of the data begin.
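As an illustration of what such encapsulated, sensor-agnostic detections can look like, here is a minimal Python sketch. It is not the project’s code: the field names, variance values, and converter functions are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Detection:
    """Sensor-agnostic object detection: the common currency of high-level fusion.

    All fields live in a shared world frame, so downstream fusion never needs
    to know which sensor (or measurement principle) produced them.
    """
    position: Tuple[float, float]             # (x, y) in metres
    velocity: Optional[Tuple[float, float]]   # (vx, vy) in m/s; None if not measured
    object_class: str                         # e.g. "car", "truck", "pedestrian"
    position_var: Tuple[float, float]         # per-axis measurement variance

def camera_to_detection(box_world_xy, object_class):
    # A camera detector (e.g. a YOLOv4-style network) yields a class and a box;
    # after projection into the world frame it carries no velocity information.
    return Detection(position=box_world_xy, velocity=None,
                     object_class=object_class, position_var=(0.5, 0.5))

def radar_to_detection(xy, vxy):
    # Radar measures position and speed precisely but cannot classify objects.
    return Detection(position=xy, velocity=vxy,
                     object_class="unknown", position_var=(0.3, 0.3))

# Both sensors now feed the same type into the fusion stage:
detections = [
    camera_to_detection((120.0, 3.5), "car"),
    radar_to_detection((120.4, 3.6), (33.1, 0.0)),
]
```

Because the fusion stage only ever sees `Detection` objects, adding a new sensor type amounts to writing one more converter – which is the adaptability argument for high-level fusion made above.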

So you put the data into a common form and use this as the basis for your further work …

Yes, we have created evaluation pipelines for the individual sensors, so to speak. However, the objects generated from them can be faulty, and they are also subject to certain measurement inaccuracies. These inaccuracies can be compensated for by means of a Bayesian state estimator (also known as a filter). It compares the measurements with a motion model and then calculates how likely it is that the object is located at a certain point. Since a vehicle’s movements are governed by physical laws, they can be described with such dynamic motion models, and measurements are only accepted as valid if they describe a physically possible movement. For this procedure to work well and for the system to handle measurement inaccuracies robustly, the parameterization of the state estimator is crucial, which requires a great deal of system knowledge and experience.

The filter ultimately provides a probability distribution that reflects where objects are most likely to be located, based on the data from each sensor. These probability distributions, which are the result at the end of each sensor pipeline, can then be fused together to produce a consistent result – the digital twin. Uncertainties and inaccuracies shrink as more independent sources, in our case the sensors, confirm the same result. In the livestream on our Innovation Mobility website, for example, you can see that the detection of the vehicles is already very precise.
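The interview does not name a specific estimator; as one textbook instance of a Bayesian state estimator, the sketch below shows a one-dimensional Kalman filter with a constant-velocity motion model and a plausibility gate. All parameter values are illustrative assumptions, not the project’s parameterization.

```python
import numpy as np

# Minimal 1D constant-velocity Kalman filter with measurement gating.
# State x = [position, velocity]; all numbers are illustrative.
dt = 0.1                                  # sensor cycle time in seconds
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
H = np.array([[1.0, 0.0]])                # we only measure position
Q = np.diag([0.01, 0.1])                  # process noise: how far the model may drift
R = np.array([[0.25]])                    # measurement noise of the sensor

x = np.array([[0.0], [20.0]])             # initial estimate: 0 m, 20 m/s
P = np.diag([1.0, 1.0])                   # initial uncertainty

def step(x, P, z):
    # Predict: propagate the state with the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Gate: reject measurements that no plausible movement explains.
    innovation = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    if (innovation.T @ np.linalg.inv(S) @ innovation).item() > 9.0:  # ~3-sigma gate
        return x_pred, P_pred             # ignore the outlier, keep the prediction
    # Update: blend prediction and measurement by their uncertainties.
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innovation
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new                   # Gaussian: mean and covariance

for z in [2.1, 4.0, 5.9, 40.0, 9.8]:      # 40.0 is an implausible jump -> gated out
    x, P = step(x, P, np.array([[z]]))
```

Each step yields exactly the kind of probability distribution described above: a Gaussian with mean x and covariance P. The distributions coming out of independent sensor pipelines can then be fused, and the fused uncertainty shrinks as more sensors confirm the same state.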

Work on the expansion of the test section is underway. Fifty new sensors – for the first time including lidars – will be mounted on new sensor masts and overhead signs. Can the sensor network then simply be scaled up at the push of a button?

We are currently preparing to make the system scalable to a virtually unlimited number of measurement points. As a first step, we are adapting the system architecture and the algorithms, and in doing so we will proceed differently than before. Up to now, we have merged the data for the entire test section into a global digital twin in the back end. Central fusion is too complex for the entire stretch from the A9 highway to Garching-Hochbrück, however, and it does not scale in terms of computing power and network requirements. In the future, there will therefore no longer be one digital twin of the entire test section, but rather a series of local digital twins in relevant areas, and evaluations will no longer run via a central node but in a decentralized manner. A further challenge is integrating the lidars, as we are using these for the first time. The important thing is to extract meaningful information from the point clouds and to get the 3D object detection working flawlessly, which will still present us with some challenges. As soon as the sensor-specific detection pipeline for the lidar is up and running, nothing should stand in the way of our local digital twins – precisely because of the high-level fusion architecture.
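One way the decentralized layout could be organized is sketched below; the segment boundaries and the routing rule are purely illustrative assumptions about the architecture described above, not the project’s design.

```python
# Sketch of the decentralized idea: instead of one global twin in the back
# end, each road segment has its own local twin, and detections are routed
# to the twin that covers their position. Boundaries are illustrative.
SEGMENTS = [(0.0, 500.0), (500.0, 1000.0), (1000.0, 1500.0)]  # metres along the road

class LocalTwin:
    """Fusion node responsible for one stretch of the test section."""
    def __init__(self, start, end):
        self.start, self.end = start, end
        self.tracks = {}                  # track_id -> latest state estimate

    def covers(self, s):
        return self.start <= s < self.end

    def ingest(self, track_id, state):
        # In a real system, local fusion and tracking would run here.
        self.tracks[track_id] = state

twins = [LocalTwin(a, b) for a, b in SEGMENTS]

def route(track_id, s, state):
    """Deliver a detection only to the local twin covering position s."""
    for twin in twins:
        if twin.covers(s):
            twin.ingest(track_id, state)
            return twin
    return None  # outside the instrumented area

route("veh-42", 730.0, {"x": 730.0, "v": 31.0})  # handled by the second twin only
```

Because each twin only processes traffic in its own area, adding more measurement points adds more local nodes rather than growing a single central fusion problem.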

Learn more about multi-sensor fusion

Note: The publication above is interesting because algorithms based on random finite set (RFS) theory are also used in the Providentia++ system.
