26. September 2020 - Providentia editors

Bird’s-eye view: More precision for the digital twin

The digital twin of traffic is only as good as the precision of the data allows. Just how accurate are the measurements of the cameras and radars mounted on overhead signs on the A9 highway, and how precisely does the digital twin, created by tracking and sensor data fusion, map the current traffic situation? To find out, fortiss, in cooperation with the German Aerospace Center (DLR), dispatched a helicopter with an integrated camera system to determine the exact position of the vehicles on the highway. Researcher Annkathrin Krämmer from fortiss explains what the “bird’s-eye view” revealed.

Ms. Krämmer, you are responsible for data fusion in the Providentia++ project, and you develop digital twins. How do you go about this?

We use cameras and radars that allow us to determine the position, speed, and type of vehicles along the road. We are currently merging data from eight cameras and four radars – combining the data in the best possible way. We use the strengths of each sensor, so to speak, and also apply complementary knowledge, such as the time sequence. In this way, we create a complete digital twin over the entire section of the highway: a digital image of the traffic situation. In order to use the digital twin for a wide variety of applications, such as planning in autonomous vehicles or for the further development of the algorithms in the Providentia system, we have to evaluate how accurately it maps the actual vehicle positions. To do this, we need a ground truth, that is, the exact positions of all vehicles on the road, so that we can compare these with the positions we have calculated.
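The evaluation described here boils down to comparing each vehicle position computed by the digital twin with the corresponding ground-truth position. A minimal sketch of such a comparison, assuming both systems report positions in a common metric coordinate frame (the function and example coordinates are hypothetical, not part of the Providentia system):

```python
import math

def position_errors(twin_positions, ground_truth, max_match_dist=5.0):
    """Match each ground-truth vehicle to the nearest digital-twin
    detection and return the Euclidean position errors in meters.

    Positions are (x, y) tuples in a shared metric coordinate frame.
    Matches farther apart than max_match_dist are treated as misses.
    """
    errors = []
    for gt_x, gt_y in ground_truth:
        best = None
        for tw_x, tw_y in twin_positions:
            d = math.hypot(tw_x - gt_x, tw_y - gt_y)
            if best is None or d < best:
                best = d
        if best is not None and best <= max_match_dist:
            errors.append(best)
    return errors

# Hypothetical example: three vehicles observed by both systems.
ground_truth = [(0.0, 0.0), (50.0, 3.5), (120.0, 7.0)]
twin = [(1.8, 0.4), (52.1, 3.2), (121.5, 6.5)]
errors = position_errors(twin, ground_truth)
mean_error = sum(errors) / len(errors)
```

A real evaluation would additionally need a proper assignment between the two sets (e.g. one-to-one matching) and time synchronization between the helicopter frames and the sensor measurements; the nearest-neighbor matching above is a simplification.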

And this is why the helicopter came into play …

The advantage of a view from a height of 300 meters is that the nearly perpendicular overhead view of the road minimizes projection errors, and occlusions are not an issue. The center points and outlines of the vehicles can therefore be determined very precisely, independently of our system. The German Aerospace Center’s high-resolution cameras, together with the highly accurate georeferencing methods they have developed, offer an accuracy of six centimeters and thus a perfect basis for comparison with our digital twin.

How precise is the digital twin at the moment?

We have seen that most of the deviations are on the order of a vehicle’s dimensions, about two meters. But there are also outliers with larger errors. One reason is that the cameras currently detect vehicles from either the front or the back, not their full extent, so the measured point is offset from the true vehicle center. Given the average vehicle length in the test field of 4.80 meters, this systematic offset alone accounts for 2.40 meters – the distance to the center of the vehicle. Seen in this light, the majority of our measurements are quite good. Our task now is to factor in the vehicle dimensions, which will allow us to determine the exact center of each vehicle. In addition, we have seen that the largest measurement errors occur where the vehicles are farthest away from the cameras and radars. The reason for this is the calibration of our sensors: an angular deviation of just 0.1 degree grows into a position error that keeps increasing with distance over several hundred meters.
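The two error sources mentioned here are simple to quantify: the half-length offset toward the vehicle center, and the lateral error caused by an angular calibration deviation, which grows linearly with distance. A short sketch of that arithmetic (the numbers follow the interview; the function itself is illustrative):

```python
import math

# Average vehicle length on the test stretch, per the interview (m).
AVG_VEHICLE_LENGTH = 4.80

# Detecting only the front or rear of a vehicle shifts the measured
# point by half its length relative to the true center.
center_offset = AVG_VEHICLE_LENGTH / 2  # 2.40 m

def lateral_error(distance_m, angle_deg):
    """Lateral position error produced by an angular calibration
    deviation of angle_deg at a given range; grows with distance."""
    return distance_m * math.tan(math.radians(angle_deg))

# A 0.1-degree misalignment at ranges of several hundred meters:
e_200 = lateral_error(200.0, 0.1)  # roughly 0.35 m
e_400 = lateral_error(400.0, 0.1)  # roughly 0.70 m, twice as large
```

This illustrates why the outliers cluster at the far end of the sensors’ range: the angular error is constant, but the resulting position error scales with distance.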

How can these errors be reduced?

The calibration of our devices must be as precise as possible. The more the overhead signs on which the cameras and radars are mounted vibrate, for example, the greater the risk that the calibration drifts and introduces large errors into the system. That’s why our goal is for the sensors to self-calibrate in real time. This autocalibration would especially reduce errors over greater distances. Another idea is to mount lidars on the overhead signs and to determine the exact vehicle dimensions from the point clouds they create.

Are you satisfied with the current state of research?

The verification of our digital twin by the German Aerospace Center’s camera system has shown us that our system works well under good weather conditions and in “normal” traffic, especially when we factor in the lengths of the vehicles. In the next step – and this will also be an important part of Providentia++ – we are working on the robustness of the digital twin. Bad weather, extreme light conditions, and numerous occlusions resulting from heavy traffic must not affect the quality of the digital twin. Ultimately, the system must guarantee an optimal image of the traffic – no matter how many vehicles are on the road and no matter how poor conditions are.

Learn more about the architecture of the Providentia system, the creation of the digital twin, and its evaluation in the research report:

Providentia – A Large-Scale Sensor System for the Assistance of Autonomous Vehicles and Its Evaluation

Picture: fortiss, 2020


1. July 2022

Cognition Factory: Evaluate and visualize camera data

Since the beginning of research on the digital twin, AI specialist Cognition Factory GmbH has focused on processing camera data. In the meantime, Dr. Claus Lenz has deployed a large-scale platform.


1. July 2022

Digital real-time twin of traffic: ready for series production

Expanding the test track, deploying new sensors, decentralizing the software architecture, fusing sensor data for 24/7 operation of a real-time digital twin, and releasing data sets to the public: TU Munich has decisively advanced the Providentia++ research project.


11. May 2022

Elektrobit: From test lab to stationary data

Elektrobit lays the foundation for Big Data evaluations of traffic data. Simon Tiedemann on the developments in P++.