
ADAS Testing: From Subjective Customer Preferences To Objective Validation At Scale

Advanced Driver Assistance Systems (ADAS) have been around for quite a while: Modern vehicles almost inevitably come with assistance features such as Adaptive Cruise Control (ACC), Lane-Keep Assist (LKA), Blind Spot Monitors, Traffic Sign Recognition and others intended to make driving safer and more comfortable. In contrast to fully Autonomous Vehicles, ADAS is a huge and profitable market today, and will likely remain one for the foreseeable future.

If you’ve driven a car with such features in the last few years, though, you may have noticed that the performance of these systems varies – sometimes by quite a lot: While some LKA systems do a good job of keeping your vehicle on track, others react too late, or overcorrect and send you across the opposite lane border rather than properly centering the car between the lane markings. Similarly, some ACC systems make for a smooth ride, while others apply the brakes when another vehicle cuts in after overtaking – rather than simply letting its higher speed widen the gap again, as most human drivers would.

Why ADAS Performance Varies So Much

One of the reasons for the varying performance of ADAS features is that reliable, objective testing procedures for public roads have been rare.

During real-world testing, car makers and their suppliers routinely record all onboard data from sensors, actuators and more – allowing for in-depth analysis of system failures, near-misses and similar incidents after a drive. However, these onboard systems only record their own version of events – figuratively speaking, what the car thinks happened during the drive. If you want to iron out false positives and false negatives, you need to compare this questionable version of events to a more trusted data set – reference data, or “ground truth” data.

Using Lane-Keep Assist as an example: If a test vehicle missed a lane border, overcorrected, or failed completely, you need to closely examine the actual environment and lane borders at that exact position – as well as the vehicle’s relative position and pose at that specific moment.

In the past, this has been a massive undertaking, requiring lots of on-site engineering manpower and high-precision measurements that were only possible in closed-off, controlled environments – in other words, on proving grounds.

And while proving grounds are an amazing asset for automotive testing, the total scope and variability of their test routes are by definition limited – which makes it a challenge to optimize a system for use on hundreds of thousands of kilometers of open road. Similarly, standardized tests as defined by Euro NCAP or ISO don’t come close to capturing the variety of roads and scenarios a vehicle is sure to encounter during its lifecycle.

To use another metaphor: Imagine driver training that only takes place on a single safety course, with drivers unable to improve further after passing the test and entering public roads.

To better optimize for the real world, more real-world testing is required – and it needs to happen without sacrificing the precision and fidelity of established approaches: It is not feasible to rely on subjective feedback from test drivers that “it felt maybe a little bit strange somewhere back there.”

This leads us to a second reason why ADAS features differ so much from brand to brand: The lack of an objective standard for how they should perform – and what the criteria for optimal driving pleasure might be. The following portion of this article describes a field-tested approach to solving both of these problems.

From Subjective Feelings To Objective KPIs and Measurements

In a collaboration with the performance car maker Porsche, the companies GeneSys, MdynamiX and atlatec as well as research partner Adrive Living Lab of the Kempten University of Applied Sciences have created a solution to bridge this gap. Together, the partners have introduced a testing process that allows for objective ADAS validation at scale and on open roads, as featured in ATZ magazine (Automobiltechnische Zeitschrift).

The approach consists of four steps – in the aforementioned project, it was applied to validate LKA performance:

  1. Definition of objective criteria for ADAS performance
  2. Creation of ground truth environment/HD map data
  3. Accurate recording of vehicle position/pose reference data during test drives
  4. In-depth analysis of relevant driving situations and their recreation in virtual space/simulation

Defining Objective Criteria For Driving Pleasure

The first challenge is already one of the hardest: How do you quantify an emotional quality criterion such as driving pleasure, or the feeling of safety? To solve this, the Kempten University of Applied Sciences has developed a model with three layers: subjective customer assessment, subjective expert assessment and, finally, objective vehicle signals.

Translating driver’s subjective feelings into measurable vehicle signals. © Kempten University of Applied Sciences*

A series of test drives, workshops, benchmarking campaigns and more produces insights into customers’ subjective preferences for how a feature (e.g. an LKA) should perform. These insights are then refined into categories and sub-categories by automotive experts. Finally, the results are matched with related vehicle-level signals and the expected intensity to be measured for each of them, on a scale from none (0) to high (9).
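As a rough sketch of that last matching step, the mapping from expert-defined sub-categories to vehicle signals and expected intensities could be expressed as a lookup table, with measured intensities compared against it. All category, signal and intensity values below are illustrative assumptions, not the actual project data:

```python
# Hypothetical layer-3 table: (sub-category, vehicle signal) -> expected
# intensity on the 0 (none) to 9 (high) scale described in the article.
EXPECTED_INTENSITY = {
    ("centering", "lateral_offset"): 2,
    ("smoothness", "steering_torque_gradient"): 3,
    ("reaction", "yaw_rate_overshoot"): 1,
}

def intensity_deviation(measured):
    """Per (sub-category, signal) pair: absolute deviation of the measured
    intensity from the expert-defined expectation (both on the 0-9 scale)."""
    return {key: abs(measured.get(key, 0) - expected)
            for key, expected in EXPECTED_INTENSITY.items()}

# A hypothetical test drive where lateral centering misses expectations:
deviations = intensity_deviation({
    ("centering", "lateral_offset"): 5,
    ("smoothness", "steering_torque_gradient"): 3,
    ("reaction", "yaw_rate_overshoot"): 2,
})
```

Large deviations would then flag sub-categories where the assisted driving experience diverges from customer preference.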

Generating Ground Truth Data At Scale

Automotive OEMs and suppliers of ADAS technology already do test drives on a defined set of routes: These routes are chosen for factors like their variety, internationality, likelihood of certain events and more – and can cover hundreds or thousands of kilometers of public roads across multiple continents.

These routes are a great resource for objective ADAS testing on public roads – if you have access to high-accuracy measurements of their features and a way to generate reference data for the trajectories your test vehicles drive over them.

To this end, the High Definition (HD) mapping capabilities of atlatec are leveraged to create 3-dimensional maps of the test routes – in this case, public roads around Stuttgart and Kempten in Southern Germany.

Mapped lane border types and positions, as seen from a vehicle perspective and above. © atlatec

The produced HD maps contain information on lane border types and positions with inch-perfect accuracy. atlatec’s vision-based approach to mapping allows for consistently high accuracy, even in areas with bad GPS reception.

The finished maps are exported into a multi-layered data format allowing for localization and matching of vehicle poses in real time. For the described collaboration, a variation of OpenDRIVE was used.
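To illustrate what such an export looks like, OpenDRIVE stores roads as XML, with lane borders described by `roadMark` elements inside lane sections. The snippet below reads road mark types from a hand-written toy excerpt using only Python’s standard library – it is a simplified illustration of the format, not an actual atlatec export:

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written OpenDRIVE-style document: one road, one lane
# section, a broken (dashed) left border and a solid right border.
OPENDRIVE_XML = """
<OpenDRIVE>
  <road id="1" length="120.0">
    <lanes>
      <laneSection s="0.0">
        <left>
          <lane id="1" type="driving">
            <roadMark sOffset="0.0" type="broken" width="0.12"/>
          </lane>
        </left>
        <right>
          <lane id="-1" type="driving">
            <roadMark sOffset="0.0" type="solid" width="0.12"/>
          </lane>
        </right>
      </laneSection>
    </lanes>
  </road>
</OpenDRIVE>
"""

def lane_marks(xml_text):
    """Map lane id -> road mark type for every lane in the document."""
    root = ET.fromstring(xml_text)
    marks = {}
    for lane in root.iter("lane"):
        mark = lane.find("roadMark")
        if mark is not None:
            marks[lane.get("id")] = mark.get("type")
    return marks

print(lane_marks(OPENDRIVE_XML))  # prints {'1': 'broken', '-1': 'solid'}
```

A real OpenDRIVE map additionally carries the reference-line geometry and lane widths that make real-time localization and pose matching possible.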

Test Drive Recording At High Fidelity

Accurately recording and recreating all trajectories driven during testing is made possible by an Automotive Dynamic Motion Analyzer (ADMA) unit by GeneSys: a high-precision motion sensor which allows for differential GNSS correction and is designed specifically for vehicle dynamics testing.

Variations of the ADMA system on equipped vehicles. © GeneSys

Based on this technology, new test methods had to be developed for the objective evaluation of driving characteristics in the ADAS/AD context. Driver input, road and traffic input, control interventions and the resulting vehicle reaction/movement should all be evaluated in six degrees of freedom. For automated lateral control, detailed knowledge of the road excitation (essentially road markings and surface geometry) and the driver input is necessary in order to evaluate the resulting vehicle reaction accordingly. For assisted longitudinal guidance, detailed knowledge of the surrounding traffic is required.
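The six degrees of freedom mentioned above are the three translations plus roll, pitch and yaw. A minimal sketch of such a vehicle state, with a helper comparing an onboard pose estimate against a reference pose, might look as follows – field names are illustrative, not the GeneSys ADMA interface:

```python
from dataclasses import dataclass
import math

@dataclass
class Pose6DOF:
    """Vehicle state in six degrees of freedom."""
    x: float; y: float; z: float           # position [m]
    roll: float; pitch: float; yaw: float  # orientation [rad]

def pose_error(onboard, reference):
    """Return (position error [m], heading error [rad]) between an onboard
    pose estimate and the high-precision reference pose."""
    dpos = math.sqrt((onboard.x - reference.x) ** 2 +
                     (onboard.y - reference.y) ** 2 +
                     (onboard.z - reference.z) ** 2)
    dyaw = abs(onboard.yaw - reference.yaw)
    return dpos, dyaw
```

In practice, streams of such poses from both systems would be time-synchronized and compared sample by sample.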

Like all sensors, environmental sensors such as cameras, radar or lidar are error-prone and not available or sufficiently accurate in every situation. This can have a significant impact on driving characteristics: For example, a camera may not reproduce the curvature of a road accurately, which can cause difficulties for the lane-keeping controller. This repeatedly leads to uncertainty about whether the experienced driving characteristics result from the poor performance of sensors, trajectories, controllers or actuators – or from the poor response of the vehicle as influenced by steering, axles, tires and chassis control systems.

To investigate this cause-and-effect chain, a much more accurate reference measurement method is needed as “ground truth”.

“Ground Truth“ measurement method © MdynamiX**/atlatec

In addition, an optimized measuring steering wheel allows for precise recording of steering speed/angle and torque/gradient. In the collaboration with Porsche, an original steering wheel was used to fully preserve the brand- and model-specific haptics, control functions and other details, ensuring realistic driver/vehicle interaction.

For the test drives, a comprehensive catalog of defined maneuvers and situations is created by MdynamiX and the University of Applied Science Kempten, ensuring that all relevant scenarios are encountered and recorded.

Example of a defined driving maneuver for LKA testing. © MdynamiX***

Turning Data Into Insights And Reproducible Scenarios For Simulation

Suitable algorithms allow for precise calculations and the automatic generation of KPI values from the recorded data. For example, the yaw rate and lateral acceleration recorded by the reference system – based on the ground truth curvature – can be matched with the measurements from the onboard system, allowing for accurate measurement of the production system’s deviation from the actual/reference values.
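One simple KPI of this kind is the root-mean-square deviation between the onboard signal and the reference measurement. The sketch below computes it for a yaw-rate trace; the sample values are made up for illustration:

```python
import math

def rms_deviation(onboard, reference):
    """RMS of the sample-wise difference between two equal-length,
    time-synchronized signals."""
    assert len(onboard) == len(reference)
    squared = [(a - b) ** 2 for a, b in zip(onboard, reference)]
    return math.sqrt(sum(squared) / len(squared))

onboard_yaw_rate   = [0.10, 0.12, 0.15, 0.11]  # [rad/s], onboard estimate
reference_yaw_rate = [0.10, 0.10, 0.13, 0.11]  # [rad/s], reference system
kpi = rms_deviation(onboard_yaw_rate, reference_yaw_rate)
```

The same pattern applies to lateral acceleration or lateral offset, each compared against its ground-truth counterpart.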

Comparing objective criteria (top) and side offsets for straight/curved driving (bottom). © Kempten University of Applied Sciences****

To gain further insights from the data, the digitalized test routes can be imported into automotive simulation tools. This allows for additional MIL/SIL/HIL tests as well as immersive Driver-in-the-Loop simulations.

Additionally, select scenarios encountered and recorded on real-world test drives can be reproduced – allowing for variation of parameters to further narrow down performance limitations.

Supplementing the test setup with atlatec sensor equipment also makes it possible to accurately record real-world traffic and re-create other vehicles’ trajectories in simulation: This is particularly useful when validating ADAS features that are supposed to react to dynamic agents (e.g. Adaptive Cruise Control or Emergency Brake Assist). “Scenario fuzzing” then allows for manipulation of the real-world situation and can aid in the hunt for edge/corner cases.
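The core idea of scenario fuzzing can be sketched in a few lines: Starting from one recorded situation, its parameters are perturbed to generate many variations for simulation. The parameter names and ranges below are illustrative assumptions, not a real scenario description:

```python
import random

# A hypothetical recorded cut-in situation, reduced to three parameters.
BASE_SCENARIO = {
    "cut_in_gap_m": 12.0,      # gap to the cutting-in vehicle
    "cut_in_speed_kmh": 95.0,  # its speed
    "ego_speed_kmh": 100.0,    # ego vehicle speed
}

def fuzz(base, n, spread=0.2, seed=42):
    """Return n scenario variations, scaling each parameter by a random
    factor in [1 - spread, 1 + spread]. Seeded for reproducibility."""
    rng = random.Random(seed)
    return [{k: v * rng.uniform(1 - spread, 1 + spread)
             for k, v in base.items()}
            for _ in range(n)]

variants = fuzz(BASE_SCENARIO, n=100)
```

Each variant can then be replayed in simulation to probe where the system under test starts to misbehave.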

“Scenario Fuzzing” – the creation of variations from real-world traffic situations. © atlatec

Additional Reading/References

If you’d like to explore this topic further and in more scientific detail, we recommend the following resources:

M. Höfer, F. Fuhr, B. Schick and P. E. Pfeffer, “Attribute-based development of driver assistance systems,” in 10th International Munich Chassis Symposium 2019, P. E. Pfeffer, Ed., Wiesbaden: Springer Fachmedien Wiesbaden, 2020, pp. 293–306.

J. Nesensohn, S. Levéfre, D. Allgeier, B. Schick and F. Fuhr, “An Efficient Evaluation Method for Longitudinal Driver Assistance Systems within a Consistent KPI based Development Process.”

S. Keidler, D. Schneider, J. Haselberger, K. Mayannavar and B. Schick, “Entwicklung fahrstreifengenauer Ground Truth Karten für die objektive Eigenschaftsbewertung von automatisierten Fahrfunktionen” [Development of lane-accurate ground truth maps for the objective attribute assessment of automated driving functions], in 17. VDI-Fachtagung, Hannover, 2019.

B. Schick, C. Seidler, S. Aydogdu and Y.-J. Kuo, “Driving experience vs. mental stress with automated lateral guidance from the customer’s point of view,” in Proceedings, 9th International Munich Chassis Symposium 2018, P. Pfeffer, Ed., Wiesbaden: Springer Fachmedien Wiesbaden, 2019, pp. 27–44, doi: 10.1007/978-3-658-22050-1_5.

S. Aydogdu, B. Schick and M. Wolf, “Claim and Reality? Lane Keeping Assistant – The Conflict Between Expectation and Customer Experience,” in 27. Aachener Kolloquium, Aachen, 2018.

D. Schneider, B. Schick, B. Huber and H. Lategahn, “Measuring Method for Function and Quality of Automated Lateral Control Based on High-precision Digital ‘Ground Truth’ Maps,” in 34. VDI/VW-Gemeinschaftstagung Fahrerassistenzsysteme und Automatisiertes Fahren 2018, Wolfsburg, 2018.

B. Schick, F. Fuhr, M. Höfer and P. E. Pfeffer, “Eigenschaftsbasierte Entwicklung von Fahrerassistenzsystemen” [Attribute-based development of driver assistance systems], ATZ Automobiltechnische Zeitschrift, vol. 121, no. 4, pp. 70–75, 2019, doi: 10.1007/s35148-019-0006-2.

To learn more from the parties involved, feel free to reach out directly:

atlatec

Contact: Dr. Henning Lategahn
Email: lategahn@atlatec.de
LinkedIn: https://www.linkedin.com/in/henning-lategahn/

GeneSys

Contact: Peter Arnold
Email: arnold@genesys-offenburg.de
LinkedIn: https://www.linkedin.com/in/peter-arnold-16584355/

Kempten University of Applied Sciences

Contact: Prof. Bernhard Schick
Email: Bernhard.Schick@hs-kempten.de
LinkedIn: https://www.linkedin.com/in/bernhard-schick-1556aa4/

MdynamiX

Contact: Matthias Niegl
Email: Matthias.Niegl@mdynamix.de
LinkedIn: https://www.linkedin.com/in/matthias-niegl-53a661161/

Image source:

Title image: „Eigenschaftsbasierte Entwicklung von Fahrerassistenzsystemen“, ATZ Automobiltech Z
*„Eigenschaftsbasierte Entwicklung von Fahrerassistenzsystemen“, ATZ Automobiltech Z
**„Eigenschaftsbasierte Entwicklung von Fahrerassistenzsystemen“, ATZ Automobiltech Z
***„Eigenschaftsbasierte Entwicklung von Fahrerassistenzsystemen“, ATZ Automobiltech Z
****„Eigenschaftsbasierte Entwicklung von Fahrerassistenzsystemen“, ATZ Automobiltech Z


How Accurate Are HD Maps for Autonomous Driving and ADAS Simulation?

In our mission to create digital twins of real-world roads, our team at atlatec has taken on a number of HD mapping projects all over the world, delivering HD maps and 3D models for autonomous vehicle operations and AV/ADAS simulation.

Along the way, we’ve discovered a number of topics and questions that are of relevance to almost all project partners involved – and we want to take the opportunity to discuss some of these in more detail. To start, we’ve decided to answer one of the most prominent and frequent questions we get:

“How accurate are HD maps?”

Maintaining high accuracy is one of the biggest challenges in building HD maps of real-world roads – and a rather complex one. Let’s dive right in and start by looking at what accuracy means in this context:

What does “accuracy” mean for HD maps?

With regard to accuracy, there are two main focus points that determine the quality of an HD map:

  • Global accuracy (positioning of a feature on the face of the Earth)
  • Local accuracy (positioning of a feature in relation to road elements around it).

It is important to note that, in terms of road mapping, accuracy is an index that cannot be derived from a single variable. With regard to our mapping technology, accuracy is directly dependent on three potential sources of error:

Stages of the mapping process and accuracy-related errors that may originate from them.

Global/GPS error

The global accuracy of maps is generally bound by the accuracy of GPS: This challenge is the same for map providers all over the world. The main cause of this type of error is poor GPS signal quality, which is most often degraded when driving a survey vehicle in areas covered by roof-like structures (most commonly under bridges and in tunnels) or surrounded by tall buildings (in street canyons).

The challenge of accurately determining the position of one’s sensor pod is a very old one. It is basically the same problem seafarers faced when navigating by the stars: In order to accurately pinpoint your goal and chart a course, you first need to determine where you are. Similarly, in HD mapping, you can’t answer the question “Where is this sign we’re detecting?” unless you first answer the question “Where are we currently positioned?”

As a result, your ability to accurately survey a road and its surroundings is directly dependent on your ability to first pinpoint the position and pose of your survey vehicle – along all trajectories it was driven. Any errors in determining a sensor’s position and pose will subsequently result in a global accuracy error of the map created from this sensor’s data.

To maintain a high degree of global accuracy, our sensor pods contain survey-grade differential GPS sensors: This ensures optimized signal reception and allows us to supplement the real-time satellite signal with correction data from GPS base stations, which exist almost all over the world. In combination, such correction data significantly enhances accuracy compared to using only GPS satellite signals.
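The intuition behind differential correction can be sketched simply: A base station at a precisely known position measures its own GPS error, and that correction is applied to the survey vehicle’s measurement. Real RTK/DGPS operates on the raw satellite observables rather than finished positions, so the snippet below (with made-up coordinates) only conveys the idea:

```python
def dgps_correct(rover_measured, base_measured, base_known):
    """Apply the base station's observed position error to the rover
    (survey vehicle) measurement, component by component."""
    correction = tuple(k - m for k, m in zip(base_known, base_measured))
    return tuple(r + c for r, c in zip(rover_measured, correction))

# The base station is known to sit at (100.0, 200.0) but currently
# measures itself at (101.2, 198.9); the same offset is assumed to
# affect the nearby rover.
corrected = dgps_correct((510.8, 303.6), (101.2, 198.9), (100.0, 200.0))
```

Because atmospheric and satellite errors are highly correlated over short distances, subtracting the base station’s error removes most of the rover’s error too.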

atlatec combines GPS satellite and base station data for optimal global accuracy.

Local error

Before a survey vehicle enters a tunnel and after it comes out at the other end, the differential GPS receiver usually provides accurate global coordinates to determine its position. However, as mentioned above, attempting to track its movement on a global scale whilst it is driving through a tunnel produces error – there is no GPS satellite signal underground.

This is where the stereo cameras come in: Imagery collected from the two calibrated cameras while driving through a tunnel allows us to compute and track the pose and motion of a survey vehicle using computer vision technology. To further supplement accuracy, we add another, redundant sensor in the form of a motion sensor, or IMU (inertial measurement unit).

At the processing stage, we then use sensor fusion to combine the data from the cameras, GPS and IMU to reconstruct the trajectory of a survey vehicle and its surroundings, maintaining high accuracy throughout the entire data set. The advantages of using computer vision stand out in contrast to systems that are mainly IMU-based: Their main drawback is that, in areas with no or poor GPS, the trajectory of a vehicle can drift and may only be corrected once the GPS signal is recovered. In the context of autonomous driving, such errors cannot be tolerated.

Using imagery collected from the stereo cameras allows us to recreate a very consistent trajectory, even in GPS-denied areas. If the GPS signal is lost for a very long distance, though, drift/local referencing error will eventually occur, as is the case for all known approaches.


The benefit of using a camera-based approach – also called visual odometry – over an IMU-based system lies in how the error accumulates over time: Whereas the total error of an IMU accumulates in a cubic fashion (at a factor of x³, with x being the distance travelled), atlatec’s vision-based approach only leads to linear accumulation of error.
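The practical consequence of these growth shapes can be sketched numerically. The coefficients below are arbitrary placeholders (only the cubic-vs-linear shapes follow the text), but they show how quickly a cubic term overtakes a linear one over distance:

```python
def imu_error(distance_m, k=1e-9):
    """Cubic error growth, as described for IMU-only dead reckoning."""
    return k * distance_m ** 3

def vo_error(distance_m, k=1e-3):
    """Linear error growth for visual odometry (here: 0.1% of distance)."""
    return k * distance_m

# With these (illustrative) coefficients, the IMU looks fine over short
# stretches but is overtaken by its cubic growth on longer GPS outages.
for d in (100, 500, 2000):
    print(f"{d:5d} m  imu: {imu_error(d):8.3f} m  vo: {vo_error(d):6.3f} m")
```

However the coefficients are chosen, a cubic curve eventually crosses any linear one, which is why bounding drift linearly matters for long GPS-denied stretches.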

Mapping in a tunnel: atlatec’s computer vision-based approach maintains high accuracy even in GPS-denied areas.

Sampling error

This type of error is caused by incorrect calculation of the distance between a point of interest (for example a stop line or a traffic light) and a sensor pod camera. Local sampling, or annotation, takes place after the collected data is translated into a 3D model; it is the process of labelling features within this model, making them identifiable to simulation tools or autonomous vehicles. In other words, annotation translates 3D imagery, which humans can easily understand and process, into a vectorized “digital twin” that can be processed by algorithms and AI.
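For a stereo camera pair, the distance to a feature follows from triangulation: depth Z = f · B / d, where f is the focal length in pixels, B the baseline between the two cameras and d the disparity in pixels. The calibration values below are illustrative, not atlatec’s actual setup:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a feature from stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# e.g. f = 1400 px, 30 cm baseline, 21 px disparity -> about 20 m away
z = stereo_depth(1400.0, 0.30, 21.0)
```

Because depth varies with 1/d, a fixed one-pixel disparity error translates into a larger depth error the farther away a feature is – one reason sampling precision drops with distance from the survey trajectory.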

A 3D model of an intersection, with lane geometry/topology and other features annotated.

In order to annotate road objects accurately, we use a combination of AI and manual work, which will be discussed in more detail later on.

What is atlatec’s approach to creating accurate HD maps?

Attempting to deal with all three causes of accuracy error in the practice of road mapping poses a number of challenges both in terms of software and hardware development. Our mapping technology employs a number of tools and solutions which allow us to achieve high HD map accuracy in a cost-effective manner.

Portable, camera-based mapping setup

At atlatec, we use a sensor setup that is mainly camera-based. Having two cameras and a GPS receiver at a fixed distance from each other in a small, portable box allows us to map roads worldwide with very little logistical difficulty: The metal case containing all sensors is the size of a suitcase and can be set up on any car in a matter of minutes.

Leveraging the survey-grade differential GPS and our computer vision expertise as explained above, we accurately recreate all trajectories driven during data acquisition. As both our hardware and software are developed in-house, the sensor pods’ configuration and the pipeline for processing their data are heavily optimized for each other.


Loop closure

A strong emphasis on extremely accurate loop closure is crucial for creating coherent datasets. Our survey methods include driving on every lane of a road we set out to map – extending the initial driving duration, but ensuring higher data quality (and eliminating occlusion issues). The main reason this increases mapping accuracy is that, by driving on every lane of the same stretch of road, the same road object can be detected multiple times, enabling us to determine its global position more accurately. The process of bringing the sensor data from these multiple survey trajectories together into one consistent result is called loop closure.

To exemplify this, let’s say a vehicle equipped with an atlatec sensor pod drives on a lane framed by dashed lane borders. The survey vehicle will drive on that lane at least once (the example of driving past a desired point on the road twice is represented in the schematic image below as trajectory a and trajectory b). Moreover, the vehicle will also drive on the neighboring lanes (if there are any) as part of the same survey session, which starts and ends at the same location. In turn, at the annotation stage, we are able to represent, for example, a corner of any individual dash as a point in a 3D map.

In complex cases such as sharp turns, where a certain point can be absent from some trajectories, we will still be able to determine the position of a dash accurately. The reason is that, thanks to loop closure, our data sets are very coherent: This makes it possible to connect the data acquired from both stereo cameras and track key points across the multiple trajectories in which they are visible.
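A toy illustration of why multiple passes help: Averaging several noisy position estimates of the same dash corner reduces the error of the fused estimate. Real loop closure jointly optimizes whole trajectories rather than averaging points, so this only conveys the intuition; the coordinates are made up:

```python
def fuse_observations(observations):
    """Average several noisy (x, y) estimates of the same map point."""
    n = len(observations)
    return (sum(p[0] for p in observations) / n,
            sum(p[1] for p in observations) / n)

# Estimates of the same dash corner seen from trajectories a, b and a
# third pass on the neighboring lane:
fused = fuse_observations([(10.02, 5.01), (9.97, 4.98), (10.01, 5.01)])
```

Statistically, averaging n independent estimates shrinks the standard error by a factor of about the square root of n, which is why every additional observation of the same feature tightens the map.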

Capturing the same point multiple times allows us to accurately determine its position, even when it is hardly visible from certain perspectives.

Human-assisted AI

Our third and main strength is our software. Data retrieved from the stereo cameras, the GPS receiver and the IMU is first pre-processed in order to accurately reconstruct driving trajectories and mitigate potential incoherences from driving in areas with poor GPS signal. Following this stage, we use a combination of AI and manual work to reconstruct a broad spectrum of road objects in a virtual environment. Although our software can detect and identify a wide range of road elements accurately, integration of manual work is an important step in ensuring high accuracy and consistency throughout the entire map.

Top view of an automatically processed map, showing lane geometry/topology annotated in color.

How accurate are atlatec HD maps?

Based on thousands of kilometers of HD maps we’ve created all over the world and the results of various tests and audits, we conclude that accuracy errors will be lower than the following for 95% of atlatec HD map coverage:

Global/GPS accuracy

In areas with good GPS reception we achieve a global accuracy of less than 3 cm deviation using satellite signals and correction data from base stations.

In GPS-denied areas, however, inaccuracy rises with distance traveled through the area, being largest in its middle. This means that the maximum GPS error can be expressed as a percentage of the distance traveled through a GPS-denied area: We have quantified this through repeated tests which indicate that this value is less than 0.5%.

For instance, if we drive through a tunnel that is 500 meters long, our GPS-based estimate of the survey vehicle’s global position will deviate by no more than 1.25 m from the truth in the middle of that tunnel.
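As arithmetic, the worked example looks like this: The maximum error occurs at the tunnel’s midpoint, where half the tunnel length has been traveled without GPS, and is bounded by 0.5% of that distance:

```python
def max_gps_denied_error(tunnel_length_m, error_rate=0.005):
    """Worst-case global position error inside a GPS-denied stretch:
    largest at the midpoint, bounded by error_rate (0.5%) times the
    distance traveled to reach it."""
    return (tunnel_length_m / 2) * error_rate

err = max_gps_denied_error(500.0)  # 250 m traveled * 0.5% = 1.25 m
```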

As this is still a relatively high margin of error, we leverage computer vision as discussed above to mitigate the error on a local level:

Local accuracy (drift)

By using computer vision technology to reconstruct the trajectory driven on any route we can work around GPS, keeping consistency and accuracy at a high level even in tunnels and urban canyons.

As mentioned above, the error that occurs when relying on visual odometry accumulates far more slowly than with e.g. MEMS-IMU-based approaches: Within a certain horizon (h) around a survey vehicle, the drift of the reference trajectory will contribute an error of less than 0.1% * h.

For instance, a feature located 20 meters from a survey vehicle will not be displaced by more than 2 cm due to local drift of the reference trajectory.

Sampling accuracy

Within a corridor 10 meters wide around the mapping trajectory, features in the finished 3D model can be surveyed with less than 4 cm deviation. At larger lateral distances, precision drops.

Which kind of accuracy matters most for HD maps?

We have taken on a number of mapping projects all over the world so far, a typical customer use case being the creation of 3D models for (ADAS) simulation. With that in mind, it is important to note that, when it comes to virtual testing environments, the relevance of the accuracy errors mentioned above can differ.

For simulation use cases, low local and sampling errors are usually of the highest significance, while global accuracy and GPS positioning are often irrelevant. In fact, GPS receivers weren’t even part of our sensor setup in the beginning: This is due to the nature of virtual testing, where what matters is that the local environment is reproduced accurately – e.g. when simulating lane-keep assistance on a digital twin of real-world lane geometry. As long as the positioning of the vehicle in relation to the road elements around it is correct, it usually does not matter where on the globe those elements are located. We will discuss map development for simulation in more detail in a separate article.

If you want to see for yourself how atlatec data can boost simulation, you can download a free sample map of downtown San Francisco here – provided in the OpenDRIVE format, which is supported by a growing number of simulation tools.