February 19, 2021 - Providentia Editors

Providentia++ consortium meeting: New sensor stations in early 2021

Five new sensor stations are already planned. More than 50 new cameras, radars, and lidars will go into operation in spring 2021. At the first Providentia++ (P++) consortium meeting, it became clear that the real-time digital twin is becoming increasingly robust and precise, thus laying the foundation for the primary goal of connected traffic – greater safety.

AT A GLANCE

  • The test route will be expanded by 5 sensor stations in spring 2021.
  • The message rate almost doubled in 2020, and live traffic is visualized in the virtualization tool CARLA.
  • Individual sensor failures will be compensated for in the overall system.
  • The database for training neural networks will be expanded.
  • In the future, vehicles will be reliably detected despite brief concealment.
  • Driving maneuvers can be classified and analyzed with the help of a data pool.
  • Object detection and classification will be expanded to pedestrians and cyclists, and corresponding software will be scalable via modules.


The two sensor stations on the A9 highway in Garching won’t be alone much longer: Starting next spring, five more are expected to be added, equipped with more than 50 new area scan cameras, radars, and lidars. In addition, the test section of the highway will be extended by 2.5 kilometers to a total of 3.5 kilometers. To be included for the first time are lidars, 360-degree cameras, and event-based cameras. At the most interesting measuring point of the project alone – the intersection of Schleißheimer Strasse and Zeppelinstrasse in Garching-Hochbrück, called “S110” by the scientists – more than 15 new sensors are slated to be installed. All of the above-mentioned types of cameras will be used there in order to capture the complexity of the urban intersection, with its pedestrians, cyclists, and new movement scenarios. “Surveys have begun, the first soil samples have been taken, and the construction companies will start in January,” says Christian Creß, a scientist at the Technical University of Munich (TUM), who expects the first measurements to begin in March of next year.

TU Munich: Providentia++ test section to be expanded by 2.5 kilometers, more than 50 new area scan cameras, radars, and lidars to be added

Creß’s presentation at the initial (virtual) consortium meeting of the Providentia++ research project in early December was especially welcomed by the project’s industry and research partners. Over the course of 2020, the TUM team has already achieved nighttime operation with radars, visualized live traffic in real time with the virtualization tool CARLA, nearly doubled the message rate (to 25 hertz), and made a data interface available 24/7. The successive expansion of the test section, beginning in spring 2021, brings the first data collection in the urban environment within reach.

Intel: Integrating Providentia++ infrastructure in Mobileye’s RSS concept

The project’s progress means that Ralf Gräfe, from the technology group Intel, will receive live data from event-based cameras for the first time in the framework of Providentia++. This will allow him to calculate and predict maneuvers that have been simulated in CARLA on the basis of real data. Gräfe’s main concerns are the safety and reliability of the digital twin system. “Even if errors occasionally occur, the digital twin needs to work safely overall,” says the program manager at Intel Labs Europe, who also speaks of the “error tolerance of multimodal perception.” If individual sensors fail, the digital twin should still provide solid information. A further challenge is camera-lidar fusion: merging image information from the area scan cameras with point clouds from the lidars into aggregated, plausible information. To use the external infrastructure for maneuver strategies, Intel relies on the RSS (Responsibility-Sensitive Safety) concept of its subsidiary Mobileye: Vehicles already use the technology to observe the environment and to recognize and react to objects – keeping a safe distance from the vehicle ahead, for example. With the P++ infrastructure, multiple vehicles can be observed at the same time and parallel maneuver strategies can be fed back from the infrastructure to the vehicles. “The RSS enhancements based on the P++ infrastructure will be incorporated into CARLA and be available as open source in the future,” explains Gräfe, who expects to gain new insights for the “less structured” urban traffic in particular.
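The safe-distance idea at the heart of RSS can be illustrated with the published minimum longitudinal gap formula. The sketch below is generic, and the parameter values are illustrative assumptions, not Mobileye’s calibrated implementation:

```python
def rss_safe_distance(v_rear, v_front, rho=1.0, a_max=3.0, b_min=4.0, b_max=8.0):
    """Minimum safe longitudinal gap (m) per the RSS formula.

    v_rear, v_front: speeds in m/s; rho: response time in s;
    a_max: max acceleration of the rear vehicle during the response time;
    b_min: the rear vehicle's minimum (comfortable) braking deceleration;
    b_max: the front vehicle's maximum braking deceleration.
    Default values are illustrative only.
    """
    v_resp = v_rear + rho * a_max          # rear speed after the response time
    d = (v_rear * rho                      # distance covered while reacting
         + 0.5 * a_max * rho ** 2          # plus worst-case acceleration
         + v_resp ** 2 / (2 * b_min)       # rear vehicle's braking distance
         - v_front ** 2 / (2 * b_max))     # minus front vehicle's braking distance
    return max(0.0, d)
```

With both vehicles at 30 m/s and these parameters, the formula yields a required gap of just over 111 m; the gap shrinks as the front vehicle speeds up, since it then needs more distance to stop.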

Valeo: Optimizing environmental model for precise evaluation of driving situations

P++ partner Valeo’s research should ultimately also serve “to evaluate traffic situations and to manage comfortable vehicle reactions,” explains Dr. Jens Honer. But first, the perception, tracking, and data fusion expert at Valeo wants to concentrate on continuing to optimize the model of the environment, overall vehicle control, and system integration. Optimizing the model of the environment involves merging the lidar points with data from the fish-eye cameras and high-resolution maps (soon also from P++ partner 3D Mapping), for example. The evaluation of the respective driving situation is a prerequisite for overall vehicle control. For this purpose, Valeo has already implemented a reference algorithm to “augment” the P++ data, which according to Honer “considerably expands” the database for training a neural network. In addition, Valeo’s test vehicles are in operation on the test section of the A9 highway. The “Rec01,” for example, is equipped with 47 sensors and can “synchronize its sensors and the P++ system to GPS time by means of a dedicated time server,” Honer explains. In 2021, the Valeo expert has three goals:

  • A robust fusion system, to fully exploit the data from the cameras and lidars, for example.
  • A system to interpret and evaluate traffic situations with adaptive algorithms, in order to control safe and comfortable vehicle reactions.
  • A solid database in combination with evaluation algorithms, to enable objective evaluation of the system performance.
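The GPS-time synchronization Honer describes is what makes vehicle and infrastructure data comparable in the first place: once both sides share a time base, detections can be associated by nearest timestamp. A minimal sketch, where the 20 ms tolerance is an assumption of ours (half the 40 ms period of a 25-hertz stream), not a Valeo parameter:

```python
import bisect

def match_by_timestamp(vehicle_ts, infra_ts, tol=0.02):
    """Pair each vehicle timestamp with the nearest infrastructure
    timestamp within `tol` seconds. Both lists hold GPS-time seconds
    and must be sorted ascending. Unmatched entries pair with None."""
    pairs = []
    for t in vehicle_ts:
        i = bisect.bisect_left(infra_ts, t)
        candidates = infra_ts[max(i - 1, 0):i + 1]  # the neighbors straddling t
        best = min(candidates, key=lambda c: abs(c - t), default=None)
        if best is None or abs(best - t) > tol:
            best = None
        pairs.append((t, best))
    return pairs
```

For example, with infrastructure messages at 0.00, 0.04, 0.08, and 0.12 s, a vehicle measurement at 0.05 s pairs with the 0.04 s message, while one at 0.30 s finds no partner within tolerance.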

fortiss: Making the digital twin even more reliable

These developments go hand in hand with the research of other partners such as fortiss, whose main goal is to make the digital twin as robust as possible, so that it can enable good results and predictions even in poor visibility conditions (such as at night) or in challenging traffic situations. The focus is currently on three topics: data fusion, including tracking; movement prediction; and the development of a real-time platform. To determine the current performance of the system for tracking vehicles on the road, the German Aerospace Center (DLR) provided support to the research institute from the air (see also Bird’s-eye view: More precision for the digital twin). The goal: to establish a “ground truth” that can serve as a basis for all further developments. An evaluation based on this data has shown that the Providentia++ system is very exact and identifies vehicles reliably. This performance will now be transferred to more complex traffic scenarios. In the case of high vehicle density, active “occlusion management” can make it possible to continue tracking vehicles even when they are briefly concealed from view. In addition, movement prediction based on neural networks will be able to determine future trajectories with the highest degree of probability. To be able to handle hardware failure, fortiss relies on a real-time platform. Its task: transferring applications that fail due to a hardware problem to other hardware via “application migration.” In 2021 fortiss will further perfect these approaches, continue making algorithmic improvements to the test vehicle fortuna and the Providentia++ system, and build a demonstrator for real-time applications.
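The idea behind tracking through brief concealment can be shown with a constant-velocity coasting step, i.e. the predict half of a Kalman filter. This is an illustrative one-dimensional sketch under our own simplifying assumptions, not the fortiss tracker:

```python
def track_through_occlusion(detections, dt=0.04):
    """Keep a track alive through short occlusions by coasting.

    `detections` is a list of 1-D positions (m) or None where the
    vehicle was concealed; dt is the frame period (0.04 s at 25 hertz).
    When a detection is missing, the position is extrapolated from the
    last estimated velocity instead of dropping the track. A real
    tracker would run a full Kalman filter; this keeps only the
    predict step to show the principle."""
    positions = []
    pos, vel = None, 0.0
    for z in detections:
        if z is not None:
            if pos is not None:
                vel = (z - pos) / dt   # crude velocity estimate from consecutive fixes
            pos = z                    # measurement overrides the prediction
        elif pos is not None:
            pos = pos + vel * dt       # coast through the occlusion
        positions.append(pos)
    return positions
```

With a vehicle moving at constant speed, the coasted positions during the gap line up with the detections before and after it, so the track identity survives the concealment.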

Elektrobit: Making stationary data usable

More than 700 hours of data from the two existing sensor stations on the A9 highway have already been stored by Elektrobit in its EB Assist Test Lab. The data pool, created in the Microsoft Azure cloud, enables Providentia++ partners to use stationary data. Elektrobit receives the data via Providentia++’s Internet interface, extracts metadata, analyzes it 24/7, and stores and processes it in real time. A preliminary analysis of the 10-minute segments determines which data is of interest and which is less so. Hot storage data flows into the scene database (see also Elektrobit: Making research data available through Test Lab). Of particular interest to Simon Tiedemann of Elektrobit: the classification of driving maneuvers. It is interesting, for example, to find situations in which vehicles tailgate, swerve, or pass each other, and to relate these events to the weather and time of day. “The five new sensor stations will allow us to make valuable new data available in our data pool,” says Tiedemann, the Test Lab product manager.
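A maneuver classifier of the kind Tiedemann describes could, for instance, flag tailgating by thresholding time headway over a sustained interval. The thresholds and input format below are illustrative assumptions of ours, not Elektrobit’s actual criteria:

```python
def find_tailgating(samples, min_headway=1.0, min_duration=2.0):
    """Flag intervals where time headway (gap / follower speed) stays
    below `min_headway` seconds for at least `min_duration` seconds.

    `samples` is a list of (timestamp_s, gap_m, follower_speed_mps)
    tuples, sorted by time. Returns a list of (start, end) timestamps.
    Thresholds are illustrative, not project-calibrated values."""
    events, start = [], None
    for t, gap, v in samples:
        headway = gap / v if v > 0 else float("inf")
        if headway < min_headway:
            if start is None:
                start = t              # a candidate tailgating interval begins
        else:
            if start is not None and t - start >= min_duration:
                events.append((start, t))
            start = None
    if start is not None and samples and samples[-1][0] - start >= min_duration:
        events.append((start, samples[-1][0]))   # interval runs to end of segment
    return events
```

Events found this way could then be joined with weather and time-of-day metadata, as the article suggests.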

Cognition Factory: Software for detecting and classifying road users is now modular

When it comes to expanding the test section, Claus Lenz from Cognition Factory also sees new requirements. The core task of his company is to detect and classify objects in real time. The addition of five new sensor stations to the two existing ones in the spring will bring many new cameras, lidars, and radars into operation. The software must therefore be scalable. That’s why Lenz is looking to modularization: “We’ve built a software management system that is expandable. We use software containers as encapsulated software modules with their own operating system.” The advantage: These software containers are easy to control and monitor, and they can be tested remotely. “We have already tested the software,” Lenz says. “It can be rolled out to any number of other nodes.” Lenz is already able to switch from camera to camera on his screen and knows, for example, what proportion of the total traffic is made up of cars, trucks, and buses, as well as how busy the individual lanes are. Very soon, the algorithms will also detect cyclists and pedestrians.
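The per-camera traffic statistics Lenz mentions amount to counting class labels per detection stream. A minimal sketch (the label names are examples, not the software’s actual class list):

```python
from collections import Counter

def traffic_composition(detections):
    """Share of each object class among detections, as a per-camera
    statistic might report it. `detections` is any iterable of class
    labels, e.g. "car", "truck", "bus"."""
    counts = Counter(detections)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()} if total else {}
```

Running such a module per camera, inside its own container, is what makes the statistics trivially scalable: each new sensor station just adds more container instances reporting the same shape of result.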

“The real-time digital twin is reality” (Prof. Alois Knoll)

The first Providentia++ consortium meeting has shown that “the real-time digital twin is reality,” says Prof. Alois Knoll, leader of the consortium and the Chair of Robotics, Artificial Intelligence and Real-time Systems at the TU Munich. “And it is also clear that over the course of the year, we have continued to perfect the toolchain together with our partners. This is very important, especially in view of our next goals – studying urban space and developing value-added services.”
