As a business developer, I'm a "matchmaker" between atlatec's HD map/scenario portfolio and the needs of automotive and mobility companies in ADAS and AVs. I try to keep on top of market and technology trends and enjoy sharing with other industry stakeholders.
As usual at the end of each month, we’ve prepared a brief automotive news overview to help you keep track of the hottest headlines.
This time, we’re especially happy to include some news of our own! Additionally, we found interesting developments at Volvo Trucks and their partnership with Aurora, Polestar’s long-term commitment to building the first climate-neutral EV, and Nvidia’s DRIVE Orin AI-computing platform, which Volvo Cars has opted to use for their AVs.
I hope you enjoy the read, and make sure to watch our latest fireside chat, which is already available on YouTube.
While actual autonomous vehicles may still be a few years out, the L1/L2 ADAS domain is already going stronger than ever. That’s why we’re happy to publish some news of our own this week: A detailed look at a solution for ADAS validation that brings capabilities and fidelity previously limited to proving grounds to public road testing.
Take a look at this solution for creating and leveraging reference data at scale as it was piloted by Porsche and built together by atlatec and our partners GeneSys, MdynamiX and the Kempten University of Applied Sciences. We’re quite excited to share this and hope you will take an interest, too: If you have any thoughts, we’d love to hear them!
Autonomous trucks are often regarded as perhaps the first instance of actual production AVs we’re likely to encounter on the open road. Reasons include the focus on (relatively non-complex) highway routes, saving human drivers the grind of long-distance trips, as well as the clear business case to be made in the logistics domain.
The latter, of course, relies on actual commercialization – towards which Volvo Trucks may have just taken another step, announcing a partnership with AV stack provider Aurora. The mutual goal: To bring autonomous hub-to-hub truck operations to North America – and thus bringing everyone a step closer to encountering actual AVs on public roads.
Basically every car maker and their suppliers are currently asking themselves, “How can we reduce carbon emissions a bit more – and perhaps offset the rest?” This is, of course, a relevant effort; and it continues to produce reductions in CO2, NOx and other pollutants of a few percent every year (or at least every time a new emissions standard is announced).
However, instead of asking for 10% less, Geely-owned Polestar has chosen to question everything about themselves, aiming for 100% elimination of emissions – including not only the operations lifecycle of their new “Polestar 0” model, but also the entire supply chain and production, moving away from toxic materials in chassis and batteries.
That asking bigger questions leads to bigger answers is something tech companies like Google have known for a long time (see “10X thinking”) – it will be exciting to see its effects on automotive and manufacturing, and whether others will follow suit!
More news from Sweden, and thus from Geely, who are also the proud owners of Volvo Cars: As was announced during NVIDIA’s GTC this month, the car maker has chosen their “DRIVE Orin” system to enable its passenger cars to drive themselves.
As Volvo Cars has previously announced, they’re skipping Level 3 entirely, instead aiming for L4 operations on highways as their debut on the autonomous vehicles stage. The first vehicle to come with the new NVIDIA SoC is the next-gen XC90, in which it will work hand in hand with ADAS software developed by Zenseact and LiDAR sensors supplied by Luminar.
Stay tuned for the atlatec industry newsletter coming at the end of May! In the meantime, feel free to reach out to us if you have any questions.
Get automotive industry news directly to your mailbox – sign up for the atlatec newsletter.
Advanced Driver Assistance Systems (ADAS) have been around for quite a while: Modern-day vehicles inevitably come with assistance features such as Adaptive Cruise Control (ACC), Lane-Keep Assist (LKA), Blind Spot Monitors, Traffic Sign Recognition and other features supposed to make driving safer and more comfortable. In contrast to Autonomous Vehicles, ADAS is a huge and profitable market today, and will likely remain so for the foreseeable future.
If you’ve driven a car with such features in the last few years, though, you might have realized that the performance of these systems can vary – sometimes by quite a lot: While some LKA systems do a good job of keeping your vehicle on track, others tend to react too late, or overcorrect and send you right across the opposite lane border rather than properly centering the car between them. Similarly, some ACCs make for a smooth ride, while others may apply the brakes when another vehicle cuts you off after overtaking – rather than just letting their higher speed widen the gap between you, as most human drivers would do.
Join live panel discussion “How to scale ADAS testing with objective KPIs“. Register here.
Why ADAS Performance Varies So Much
One of the reasons for the varying performance of ADAS features is that reliable, objective testing procedures for public roads have been rare.
During real-world testing, car makers and their suppliers routinely record all onboard data from sensors, actuators and more – allowing for in-depth analysis of system failures, near-misses and similar incidents after a drive. However: These onboard systems only record their own version of events – figuratively speaking, what the car thinks happened during the drive. If you want to iron out false positives/negatives, you need to compare this questionable version of events to a more trusted data set – reference data, or “ground truth” data.
Using Lane-Keep Assist as an example: If a test vehicle missed a lane border, overcorrected, or failed completely, you need to closely examine the actual environment/lane borders in that exact position – as well as the vehicle’s relative position and pose in that specific moment.
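As a toy illustration of the kind of comparison this requires, here is a minimal sketch of computing a vehicle’s signed lateral offset from a ground-truth lane line. All names and the data layout are illustrative assumptions, not atlatec’s actual API:

```python
import numpy as np

def lateral_offset(vehicle_xy, centerline):
    """Signed lateral offset (m) of a vehicle position from a lane line,
    given as an (N, 2) array of ground-truth map points.

    Positive values mean the vehicle is left of the line in driving
    direction. Illustrative sketch only, not a real atlatec interface.
    """
    pts = np.asarray(centerline, dtype=float)
    p = np.asarray(vehicle_xy, dtype=float)
    a, b = pts[:-1], pts[1:]          # segment start/end points
    ab = b - a
    # Project p onto each segment, clamped to the segment ends
    t = np.clip(np.einsum("ij,ij->i", p - a, ab) /
                np.einsum("ij,ij->i", ab, ab), 0.0, 1.0)
    proj = a + t[:, None] * ab
    d = np.linalg.norm(p - proj, axis=1)
    i = int(np.argmin(d))
    # Sign via the 2D cross product of segment direction and offset vector
    off = p - proj[i]
    cross = ab[i, 0] * off[1] - ab[i, 1] * off[0]
    return float(np.copysign(d[i], cross))
```

In a real pipeline, this comparison would run over every recorded pose along the test drive, against the HD map’s lane geometry at that position.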
In the past, this has been a massive undertaking, requiring lots of on-site engineering manpower and high-precision measurements that were only possible in closed-off, controlled environments – in other words, on proving grounds.
And while proving grounds are an amazing asset for automotive testing, the total scope and variability of their test routes are by definition limited – which makes it a challenge to optimize a system for use on hundreds of thousands of kilometers of open road. Similarly, standardized tests as defined by EuroNCAP or ISO don’t come close to capturing the variety of roads and scenarios a vehicle is sure to encounter during its lifecycle.
To use another metaphor: Imagine driver training only taking place on a single safety course, with drivers unable to learn better performance after passing the test and entering public roads.
To better optimize for the real world, more real-world testing is required – and it needs to happen without sacrificing the precision and fidelity of known approaches: It is not feasible to rely on subjective feedback from test drivers that “it felt maybe a little bit strange somewhere back there.”
This leads us to a second reason why ADAS features differ so much from brand to brand: The lack of an objective standard for how they should perform – and what the criteria for optimal driving pleasure might be. The following portion of this article describes a field-tested approach to solving both of these problems.
From Subjective Feelings To Objective KPIs and Measurements
In a collaboration with the performance car maker Porsche, the companies GeneSys, MdynamiX and atlatec as well as research partner Adrive Living Lab of the Kempten University of Applied Sciences have created a solution to bridge this gap. Together, the partners have introduced a testing process that allows for objective ADAS validation at scale and on open roads, as featured in ATZ magazine (Automobiltechnische Zeitschrift).
The approach consists of 4 steps – in the aforementioned project, it was applied to validate LKA performance:
The definition of objective criteria for ADAS performance
The creation of ground truth environment/HD map data
The accurate recording of vehicle position/pose reference data during test drives
In-depth analysis of relevant driving situations and their recreation in the virtual space/simulation
Defining Objective Criteria For Driving Pleasure
The first challenge is already one of the hardest: How do you quantify an emotional quality criterion such as driving pleasure, or the feeling of safety? To solve this challenge, the Kempten University of Applied Sciences has developed a model of 3 layers: Subjective customer assessment, subjective expert assessment and finally a layer of objective vehicle signals.
A series of test drives, workshops, benchmarking campaigns and more produces insights into customers’ subjective preferences for how a feature (e. g. an LKA) should perform. These insights are then refined into categories and sub-categories by automotive experts. Finally, the results are matched with related vehicle-level signals and the expected intensity to be measured for each of them, on a scale from none (0) to high (9).
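The outcome of that three-layer refinement can be pictured as a simple criteria catalog. The category names, signal names and intensities below are invented examples of such a catalog’s shape, not the project’s actual results:

```python
# Illustrative only: criteria, signals and intensities are made-up
# examples of the 3-layer model's output, not the real catalog.
LKA_CRITERIA = {
    "lane centering accuracy": {
        "signal": "lateral_offset_m",
        "expected_intensity": 8,   # 0 = none ... 9 = high
    },
    "correction smoothness": {
        "signal": "steering_wheel_torque_gradient",
        "expected_intensity": 3,
    },
    "perceived safety in curves": {
        "signal": "lateral_acceleration",
        "expected_intensity": 5,
    },
}

def signals_to_log(criteria):
    """Vehicle-level signals that must be recorded to score the criteria."""
    return sorted({c["signal"] for c in criteria.values()})
```

A catalog like this doubles as a recording checklist: every signal it references has to be captured during the test drives described below.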
Generating Ground Truth Data At Scale
Automotive OEMs and suppliers of ADAS technology already do test drives on a defined set of routes: These routes are chosen for factors like their variety, internationality, likelihood of certain events and more – and can cover hundreds or thousands of kilometers of public roads across multiple continents.
These routes are a great resource for objective ADAS testing on public roads – if you have access to high-accuracy measurements of their features and a way to generate reference data for the trajectories your test vehicles drive over them.
To this effect, the High Definition (HD) mapping capabilities of atlatec are leveraged to create 3-dimensional maps of test routes – in this case, of public roads around Stuttgart and Kempten in Southern Germany.
The finished maps are exported into a multi-layered data format allowing for localization and matching of vehicle poses in real time. For the described collaboration, a variation of OpenDRIVE was used.
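The real-time matching step can be sketched as a nearest-lane lookup against the map’s geometry layer. The point-based distance and the map structure below are deliberate simplifications, standing in for a multi-layered OpenDRIVE export rather than reproducing atlatec’s actual format:

```python
import numpy as np

# Stand-in for the map's geometry layer: lane id -> centerline points.
lanes = {
    "lane_1": np.array([[0.0, 0.0], [50.0, 0.0]]),
    "lane_2": np.array([[0.0, 3.5], [50.0, 3.5]]),
}

def match_lane(pose_xy, lanes):
    """Return the id of the lane whose sampled centerline points lie
    closest to the pose. A production matcher would use true segment
    distances plus heading; point distance suffices for this sketch."""
    p = np.asarray(pose_xy, dtype=float)
    def dist(line):
        return float(np.min(np.linalg.norm(line - p, axis=1)))
    return min(lanes, key=lambda k: dist(lanes[k]))
```

With the matched lane known for every timestamped pose, each recorded signal can be related to the ground-truth geometry at that exact position.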
Test Drive Recording At High Fidelity
Accurately recording and recreating all trajectories driven during testing is made possible by an Automotive Dynamic Motion Analyzer (ADMA) unit by GeneSys: a high-precision motion sensor which allows for differential GNSS correction and is designed specifically for vehicle dynamics testing.
Based on this technology, new test methods had to be developed for the objective evaluation of driving characteristics in the ADAS/AD context. Driver input, road and traffic input, control interventions and the resulting vehicle reaction/movement all need to be evaluated in six degrees of freedom. For automated lateral control, detailed knowledge of the road excitation (essentially road markings and surface geometry) and of the driver input is necessary in order to evaluate the resulting vehicle reaction accordingly. For assisted longitudinal guidance, detailed knowledge of the surrounding traffic is required.
Like all sensors, environmental sensors such as cameras, radar or lidar are error-prone and not available or sufficiently accurate in every situation. This can have a significant impact on driving characteristics: For example, a camera may not reproduce the curvature of a road accurately, which can cause difficulties for the lane-keeping controller. This repeatedly leads to uncertainty as to whether the experienced driving characteristics result from the poor performance of sensors, trajectories, controllers and actuators, or from the response of the vehicle itself, as influenced by steering, axles, tires and chassis control systems.
In order to investigate this cause-and-effect chain, a much more accurate reference measurement method is needed as “ground truth”.
In addition, an optimized measuring steering wheel allows for precise recording of steering speed/angle and torque/gradient. In the collaboration with Porsche, an original steering wheel was used to fully preserve the brand- and model-specific haptics, control functions and other details, ensuring realistic driver/vehicle interaction.
For the test drives, a comprehensive catalog of defined maneuvers and situations is created by MdynamiX and the Kempten University of Applied Sciences, ensuring that all relevant scenarios are encountered and recorded.
Turning Data Into Insights And Reproducible Scenarios For Simulation
The use of suitable algorithms makes for precise calculations and the automatic generation of KPI values from the recorded data. For example, the yaw rate and lateral acceleration recorded by the reference system – based on the ground truth curvature – can be matched with the measurements from the onboard system, allowing for accurate measurement of the production system’s deviation from the actual/reference values.
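A minimal sketch of that deviation measurement, assuming both signals have already been resampled to common timestamps (the KPI names and choices here are illustrative, not the project’s actual definitions):

```python
import numpy as np

def deviation_kpis(reference, onboard):
    """Deviation of an onboard signal (e.g. yaw rate in rad/s) from
    time-aligned ground-truth reference values. Illustrative KPI set."""
    ref = np.asarray(reference, dtype=float)
    obd = np.asarray(onboard, dtype=float)
    err = obd - ref
    return {
        "rms_error": float(np.sqrt(np.mean(err ** 2))),   # overall deviation
        "max_abs_error": float(np.max(np.abs(err))),      # worst single sample
        "bias": float(np.mean(err)),                      # systematic offset
    }
```

Separating bias from RMS error is useful here: a constant offset points at calibration, while a large zero-bias RMS error points at noise or control behavior.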
To gain further insights from the data, the digitalized test routes can be imported into automotive simulation tools. This allows for additional MIL/SIL/HIL tests as well as immersive Driver-in-the-Loop simulations.
Additionally, select scenarios encountered and recorded on real-world test drives can be reproduced – allowing for variation of parameters to further narrow down performance limitations.
If you’d like to explore this topic further and in more scientific detail, we recommend the following resources:
M. Höfer, F. Fuhr, B. Schick and P. E. Pfeffer, “Attribute-based development of driver assistance systems” in 10th International Munich Chassis Symposium 2019, P. E. Pfeffer, Ed., Wiesbaden: Springer Fachmedien Wiesbaden, 2020, pp. 293–306.
J. Nesensohn, S. Levéfre, D. Allgeier, B. Schick, and F. Fuhr, “An Efficient Evaluation Method for Longitudinal Driver Assistance Systems within a Consistent KPI based Development Process”.
S. Keidler, D. Schneider, J. Haselberger, K. Mayannavar and B. Schick, “Entwicklung fahrstreifengenauer Ground Truth Karten für die objektive Eigenschaftsbewertung von automatisierten Fahrfunktionen” in 17. VDI-Fachtagung, Hannover, 2019.
B. Schick, C. Seidler, S. Aydogdu and Y.-J. Kuo, “Driving experience vs. mental stress with automated lateral guidance from the customer’s point of view” in Proceedings, 9th International Munich Chassis Symposium 2018, P. Pfeffer, Ed., Wiesbaden: Springer Fachmedien Wiesbaden, 2019, pp. 27–44, doi: 10.1007/978-3-658-22050-1_5.
S. Aydogdu, B. Schick and M. Wolf, “Claim and Reality? Lane Keeping Assistant – The Conflict Between Expectation and Customer Experience” in 27. Aachener Kolloquium, Aachen, 2018.
D. Schneider, B. Schick, B. Huber and H. Lategahn, “Measuring Method for Function and Quality of Automated Lateral Control Based on High-precision Digital ‘Ground Truth’ Maps” in 34. VDI/VW-Gemeinschaftstagung Fahrerassistenzsysteme und Automatisiertes Fahren 2018: Wolfsburg, 07. und 08. November 2018, 2018.
B. Schick, F. Fuhr, M. Höfer and P. E. Pfeffer, “Eigenschaftsbasierte Entwicklung von Fahrerassistenzsystemen”, ATZ Automobiltech Z, vol. 121, no. 4, pp. 70–75, 2019, doi: 10.1007/s35148-019-0006-2.
To learn more from the parties involved, feel free to reach out directly:
Title image: „Eigenschaftsbasierte Entwicklung von Fahrerassistenzsystemen“, ATZ Automobiltech Z
As we approach the end of March, let’s look back at the headlines that made noise this month. In this issue: Tesla, Honda, and Volvo. This month we are especially excited about the release of atlatec’s brand-new website. We would like to thank everyone who participated in this challenging project and contributed to the result that we are ready to present. Feel free to check out atlatec.de and let us know what you think.
Tesla is one of those companies that tends to polarize people: You’re either a real fan or a pronounced sceptic, with little middle ground between “Teslaratis” and outspoken critics.
One large reason for that is Tesla’s “Full Self Driving” (FSD) feature – on which, apparently, Tesla is pretty divided itself: While Elon Musk has repeatedly praised the system as an actual self-driving feature on Twitter, his lawyers argue the polar opposite in front of the DMV: A new trove of emails, revealed after a public records request, shows that Tesla’s lawyers adamantly claim FSD to be nothing but an L2 driver assist feature – with no perspective or even a plan to turn it into anything resembling autonomous driving, under any conditions.
The article contains a link to the emails if you want to dive in yourself. An additional takeaway that was very interesting to us: Tesla lawyer Eric Williams references the Model 3 handbook, clarifying that FSD will indeed have trouble in areas for which proper map data is not available and may very well be unable to recognize stop signs and traffic lights due to inaccurate maps. Once again, quite the contrast to the messages of Musk himself, who has called reliance on (HD) maps “a really bad idea” before.
So there it is, the first Level 3 system on the market that will actually allow you to take your hands off the wheel while the car takes over responsibility for driving.
Honda debuted its first L3 feature this month, the “Traffic Jam Pilot”, which can drive autonomously in bumper-to-bumper highway traffic while the “driver” is free to enjoy the infotainment system or otherwise occupy themselves – provided they remain able to take back operations when the system signals that this is required.
Honda reports they’ve driven 1.3 million kilometers for testing, and have simulated around 10 million scenarios in preparation. Still, the company wants to make sure they’re not moving too fast: The feature will only be available to 100 leasing customers to start with, and they’re limiting it to speeds up to 50 km/h rather than the 60 mph the regulation allows for.
Volvo Cars is one company that has been behind some massive innovations in automotive over the decades: The three-point safety belt, SIPS/side airbags and limiting all new vehicles to a 180 km/h top speed, to name a few. The first and the last were pretty controversial at the time (the latter as recently as 2020), but Volvo did what they thought was right anyway.
The next chapter in that legacy may be ahead: Volvo Cars has announced they see “no long-term future for cars with an internal combustion engine” and will sell nothing but electric vehicles by 2030. By 2025, half of the fleet shall be fully electric already, with hybrids making up the other 50%.
In addition to this massive overhaul, they also want to modernize the customer experience in order to make car sales more digital and mainly online-driven, only offering in-person assistance where customers really want it (e. g. around test drives and delivery).
This month, we have some news of our own, and we’re pretty excited about it: After loads of discussions, drafting, designing and reworking, we are happy to announce the launch of our all-new atlatec.de website.
So, why the do-over? First of all, we wanted to reflect the degree of maturity that we’ve achieved over time: Working for international automotive OEMs and Tier1 suppliers as well as other leading companies in the mobility sector, we thought it was high time to get rid of what our CEO lovingly called “Mickey Mouse animations” and replace similar young-blood gimmicks with actual footage of our work.
Secondly, we wanted to present said work in a more customer-oriented manner: Rather than focusing on what we find interesting ourselves, the new website breaks down our solutions by customer use cases, such as HD maps/scenarios for simulation or maps for AV/ADAS production systems. For those and more, atlatec.de now offers dedicated pages focusing on specific, related parts of our portfolio: All the relevant info is curated in one place, the rest left to explore elsewhere, for those who want to do so.
If you decide to take a look at the new website, we’d love to hear your thoughts on it: Let us know by simply replying to this email or shoot us a message on LinkedIn!
I hope this overview helps you to stay on top of the industry news. Make sure to watch the latest fireside chat with the atlatec team on YouTube.
Stay tuned for the atlatec industry newsletter coming end of April!
News from TuSimple, Motional, New Flyer, AIMotive, and MathWorks
As usual, at the end of the month the atlatec team prepares a short overview of the automotive news we found the most interesting. Enjoy the summary and make sure to watch our latest Zoom talk – it is already available on YouTube.
There has been a lot of news from China about robotaxi rollouts in the last few months; now comes a huge leap for autonomous trucks: TuSimple, a four-year-old startup, has received approval to operate a fleet of 5,000 fully self-driving trucks without safety drivers on board.
This is also interesting news for investors in the space: TuSimple expects to turn a net profit of $300 million thanks to this move – while eyeing an IPO in 2021 that might lead to a $7 billion valuation.
Motional, the joint venture by Hyundai and Aptiv, will begin to offer driverless rides in Las Vegas, joining companies such as Waymo and Cruise. A “safety steward” (with somewhat unspecified responsibilities) will apparently be on board, but the permit issued by the state of Nevada allows for an empty driver’s seat.
An interesting detail is that operations are reportedly focused on “suburban residential areas”, which arguably make for a good use of AVs: Offering a bridge across the “last mile” gap between public transit stations and people’s homes might make more sense than deploying an ever-rising number of vehicles in city centers, where public transportation is usually at its best and most dense.
Speaking of public transportation: Why are we reading so much about autonomous trucks and robotaxis, but rarely hear of autonomous buses? Reasons behind that might be the challenge of navigating massive vehicles in dense, busy urban environments – but apparently New Flyer, North America’s biggest producer of buses, feels up to that: Their first autonomous model, an electric Xcelsior, will begin testing in 2022.
There’s also advantages over other AV use cases according to New Flyer president Chris Stoddart: “One of the nice things is the ability to pre-map the routes, when you can run your vehicle around that route and pre-map it so that you have some redundancy and don’t have to rely completely on your various visual systems all the time […] When your average bus speed is only 12.5 mph that certainly helps.”
There are lots of providers of tools for AV/ADAS simulation, and mostly they seem to stick to their own devices, each attempting to build the best solution they can independently of other players in the space. It’s a refreshing change to see some collaboration here, with AImotive and MathWorks integrating their “aiSim” and “RoadRunner” offerings:
This will apparently allow for an easy import of road models created in RoadRunner (formerly by VectorZero) into aiSim, an ISO 26262/ASIL-D-certified simulation platform. Since RoadRunner in turn provides the ability to import real-world OpenDRIVE HD maps (e. g. by atlatec), this might indeed make for a compelling toolchain, coupling access to realistic environment models with sophisticated virtual sensor simulation. If you happen to be using/trialing this solution, we’d love to hear some impressions!
We hope you enjoyed this issue. Stay tuned for the upcoming automotive news overview at the end of March. Get the overview directly to your mailbox – sign up for the atlatec newsletter.
When I initially saw the headlines about this, I was intrigued by words like “partnership” and “collaboration” between Waymo and Daimler Trucks North America. Upon closer reading of the press pieces, it turns out this partnership amounts to: Daimler selling trucks to a customer (who happens to be Waymo). Apparently, the Freightliner team at Daimler will not be involved in the “Waymofication” of the vehicles and have no insight whatsoever.
Seems like a lot of buzz for “OEM sells vehicles”, but serves to highlight the conflict of legacy OEMs and Silicon Valley software companies: Will the Daimlers of the world become the new Tier1s in the world of autonomous driving? Let’s wait and see – after all, Daimler Trucks still has its own AV project going on with Torc Robotics …
There have been a lot of news items this year about OEMs electrifying their model ranges; the most recent examples include GM and Volkswagen, whose chairman called EVs “the only reasonable option” for the future.
One piece that was not quite as popular was this one from Volvo Trucks – which piqued my interest because electrification in commercial vehicles (save for buses) hasn’t been that much of a hot topic in my opinion. That might change quite soon, with Volvo promising EV options for their entire range, starting next year in Europe.
Cut down on delivery times and budget demands for HD maps: The atlatec OpenDRIVE database gives you instant access to over 1,000 km of real-world HD maps. Our founder and CEO Dr. Henning Lategahn calls the atlatec database “Mapflix for Simulation”: it is just as easy to access, and cost-efficient.
It’s actually happening: Starting in Q1 of next year, the public will be able to buy a new Honda, capable of L3 automation – the first SAE level to actually be considered “automated driving” rather than “driving support”. To start, the vehicles will only be taking over operation on highways, and only in limited situations, such as stop-and-go traffic. To me personally, that’s one of the most tedious driving situations, though, so automating it should be a great value add for people in areas prone to traffic jams.
Easily the most underreported piece of news to me this month: The UK has decided to ban the sale of new petrol/diesel driven vehicles from 2030 (hybrids from 2035). Sure, Norway is 5 years earlier – but the UK is a rather different animal, both in terms of population and economy. While I feel this is an exceptionally brave move and hope to see it turn into a success, I remain somewhat sceptical: The required infrastructure alone will be a massive feat – and ten years can be a much shorter time, especially if you are also dealing with Brexit and a worldwide pandemic right when you start.
And some more news from atlatec: We’ve released an expanded version of our San Francisco HD map sample – one that includes 3D assets and textures, for use in Carla, VTD and other simulations, entirely free! Visit the article to read more about the data, which was created in a collaboration with Trian Graphics, see a video and grab a download link. And if you do: Be sure to tell us what you think!
Just like last month, we got on a Zoom talk with Henning Lategahn and Tom Dahlström to discuss some of these news – the video is now available on YouTube. We hope you enjoy this issue!
I hope you had lovely holidays and would like to extend my best wishes for the new year! 2020 is going to be an exciting one for us at atlatec, and I hope the same is true for you. Speaking of exciting stuff, here’s some news from our industry that stuck out this month – enjoy the read:
Just a few weeks after Daimler’s new chairman Ola Källenius announced the company would cut down its investment in robotaxis, the car maker has now launched a pilot service in San Jose, collaborating with Tier1 supplier Bosch. The service is only open to a select group of pilot users (who are company employees) and there’s going to be both a safety driver and a separate engineer on board with passengers. However: “Daimler and Bosch hope to begin offering service to the general public in San Jose as soon as possible” – let’s see when that will turn out to be.
It’s been 4 years since BMW, Daimler and Audi teamed up to buy HERE from Nokia, aiming to build proprietary mapping competence rather than relying on (and being dependent upon) US tech companies. Now they’re going to share ownership with Mitsubishi and Japanese telco provider Nippon Telegraph and Telephone (NTT). The goal, according to HERE’s CEO Edzard Overbeek: “[F]urther diversifying our shareholder base beyond automotive, which is important given the appeal and necessity of location technology across geographies and industries.”
60 years ago, when Volvo invented the 3-point seat belt, they decided to open the patent to other car makers for free: The potential for saving lives was more important than clinging to intellectual property. If you thought such decisions for the common good couldn’t happen in today’s economy (like I did), Elon Musk proved you wrong this December, opening Tesla’s EV patents to other companies. The move might also make sense from a business standpoint, however: If it helps to drive the electrification of traffic as a whole, it stands to reason that more customers will look to buy an EV – and thus consider a Tesla.
This one’s less of a news item in the sense that it describes a new technological feat by GM’s self-driving car company Cruise – but I felt it’s an important piece, taking a step back to reflect on the automotive industry’s overall approach and asking the question whether we’re even solving for the right problems: “Despite making up less than 1% of all vehicle miles traveled, ride-sharing has added further congestion, more emissions, and potentially even decreased safety in our cities from over-tired and overworked drivers.”
In closing, I have some atlatec news to offer in the form of a video: We are now able to offer real-world traffic data (in addition to maps) for use in simulators, such as IPG’s CarMaker. We and our pilot OEM customer for this technology are confident that this kind of real-world content will be very helpful for digital validation of AV/ADAS systems that are supposed to react to traffic and other moving agents, such as adaptive cruise control, cross-traffic alerts, adaptive high-beam control and more – what do you think?
If you have any remarks about the pieces linked above, please don’t hesitate to leave a comment or reach out! I’m always happy to have a conversation and remain available by email or on LinkedIn. Speak soon!
Reminder: We also offer this monthly Automotive Innovation overview as a newsletter – if that sounds interesting to you, you’re more than welcome to sign up here.
If you work in the ADAS/Autonomous Vehicles field, you are probably familiar with HD maps – virtual recreations of real-world roads including their 3D profile, driving rules, inter-connectivity of lanes etc.
A lot of these HD maps go into the simulation domain, where car makers and suppliers leverage them to train new ADAS/AV systems or for verification/validation of features from those domains. The reason to use HD maps of real-world roads (rather than just generic, fictional routes created from scratch) is simple: In the end, you want your system to perform in the real world – so you want to optimize for real-world conditions as early as possible, starting in simulation. As we all know, the real world is nothing if not random, and you will encounter many situations you would rarely find in generic data sets.
So far, so good: These HD maps can be used to properly train lane-keep assistance or lane-departure warning systems, validate speed limit sign detection and many other systems. However, a map only contains the static features of an environment – what about ADAS/AV features that are supposed to react to other traffic participants? Emergency braking systems, cross-traffic alerts or adaptive cruise control are all required to perform differently, depending on how other cars, bikes or pedestrians around the vehicle are acting. For proper training or testing of such systems in simulation, HD maps alone are not sufficient.
Scenario data: HD maps plus traffic data
The solution to bridge this gap is rather obvious: You add traffic to “populate” your HD maps.
Real-world traffic, of course, is probably even more complex than real-world maps. Attempting to generically reconstruct even the simplest traffic situations, such as a number of drivers coming to a halt at an intersection, will fall short most of the time: Real drivers are human beings of infinite complexity, each with their own driving style, preferences and vastly varying experience – and sometimes we just have a bad day.
Once again, you want to optimize for real-world traffic situations while still in the simulation stage – so why not bring the real world into the virtual domain once more, capturing vehicles, pedestrians and more? The result is a set of real-world scenarios: perfectly aligned combinations of HD maps and traffic, both built using data captured from survey runs on real roadways. Here’s a side-by-side comparison, taken from an atlatec scenario:
As you may have noticed, not all connecting arms along the route are part of the HD map used for this scenario, so some vehicles seem to appear or disappear off-road. This is a perfect example of recreating only the features that are of interest for any given case – or of manipulating the data in ways that allow for testing of rarer cases: For example, you would want your front-view/radar system to correctly identify a vehicle pulling onto a road, even if it was pulling out “from nowhere”. This leads us right to the next topic: How do you use real-world data to simulate more extreme situations, or even accidents?
Scenario fuzzing: Manipulating real-world data for edge case identification
If you want to find out where the limitations of your system lie, you will have to simulate some scenarios that are beyond its performance limits – failures, in short. This is of course a challenge: Looking at an emergency brake assist (EBA) as an example, you can hardly collect real-world data for instances where it failed – it is impractical to keep driving and hope for an accident to occur just to record it. This is where “scenario fuzzing” comes into play.
Using a toolchain that is optimized for scenario-based simulation, you can select certain variables of a scenario and manipulate them slightly. For example, you could raise the speed of the survey vehicle by a few km/h, or decrease the distance at which another car cuts in front of you. Keep doing this in ever so slight increments, and you will eventually end up with a fuzzed scenario where the EBA will no longer be able to prevent a crash – finding what’s commonly referred to as an edge case, or system performance limit. Combine the fuzzing for both variables (speed of the ego vehicle and cut-in distance), and you will identify which speed allows for which minimum distance and vice versa – resulting in a corner case, an instance where two edge cases meet.
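The incremental sweep described above can be sketched in a few lines of code. This is purely illustrative: run_simulation() is a hypothetical stand-in (a real run would query a simulation tool such as IPG CarMaker), and the toy braking model and its constants are assumptions made for this example.

```python
# Illustrative sketch of scenario fuzzing. run_simulation() is a hypothetical
# stand-in for a real simulator run; the braking model and its constants
# (reaction time, deceleration) are assumptions made for this example.
def run_simulation(ego_speed_kmh, cut_in_distance_m):
    """Toy EBA check: braking succeeds if the stopping distance fits the gap."""
    v = ego_speed_kmh / 3.6                       # km/h -> m/s
    reaction_s, decel_mps2 = 0.3, 8.0             # assumed constants
    stopping_distance = v * reaction_s + v * v / (2 * decel_mps2)
    return stopping_distance < cut_in_distance_m  # True = crash avoided

def fuzz_cut_in(ego_speed_kmh, start_m=80.0, step_m=0.5):
    """Shrink the cut-in distance in small increments until the EBA can no
    longer prevent a crash; the returned value marks the edge case."""
    d = start_m
    while d > step_m and run_simulation(ego_speed_kmh, d - step_m):
        d -= step_m
    return d

# Sweeping the second variable (ego speed) as well maps out corner cases:
for speed_kmh in (80, 100, 120):
    print(speed_kmh, "km/h -> minimum safe cut-in distance:",
          fuzz_cut_in(speed_kmh), "m")
```

Swapping the toy model for actual simulator runs, and sweeping both variables against each other, yields exactly the speed-versus-distance frontier of corner cases described above.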
Here is a different example, showing a white vehicle pulling out to reveal a stationary car on the lane ahead – in reality, on the left, it pulls out with ample time left for an emergency braking maneuver. In the manipulated version, on the right, it pulls out later, leaving less time for the system to identify and react to the hazard:
Leveraging scenario fuzzing of recorded data allows you to reap both benefits: Enhancing simulation realism and relevance by using real-world data and identifying edge/corner cases by incrementally manipulating scenario variables.
To discover this topic in more depth, and to see some video examples (including the one where the above screenshot was taken from), we recommend the presentation “Edge Case Hunting in Scenario Based Virtual Validation of AVs” from this year’s “Apply & Innovate” hosted by IPG Automotive:
Recording scenarios during Field Operational Testing
One opportunity to encounter a multitude of relevant scenarios or even edge cases in the real world is the Field Operational Testing (FOT) phase: This is when OEMs, Tier1s or their partners conduct test drives on open roads, over thousands and thousands of kilometers. These test drives take place when a system is considered safe enough for testing in public, as a prerequisite to final approval by regulators.
Of course, it is not uncommon to spot ADAS performance issues in the FOT phase – that’s what it’s for, after all. Typically, these will be issues that occur very rarely, either in very specific situations (such as near-edge cases) or only after a certain time of operation – this is, after all, the first time a system is being tested at scale in reality.
When such a situation occurs, it is a treasure trove of information for validation and verification engineers: All the onboard data recorded during these drives is analyzed in as much detail as possible, attempting to identify the cause and to fix errors, if there were any. However, onboard data will only tell you what a system “thought” happened: If you are hunting for false negatives or false positives (e. g. in sensor data), you need to match this data against what really happened.
To this end, you can leverage scenario recording during FOT: When test vehicles are equipped to record HD map and traffic data, you can recreate the exact situation in which a system failure or near-failure occurred – including the precise road layout, a vehicle’s precise position and pose as well as the traffic that occurred around it at that time.
Replaying these “micro scenarios” in simulation allows for much more comprehensive insights into the situation surrounding a performance issue identified during FOT. Additionally, by fuzzing the data you can play around with infinite “what if” questions, further drilling down into the precise cause and severity of any errors.
If you have any questions or would like to discuss how to leverage scenarios for your work, don’t hesitate to reach out via email or request a meeting with us.
There are a lot of differences of opinion in the autonomous vehicle space, but one thing everyone can agree on:
Virtual training and validation of AV/ADAS systems and components are a key factor in achieving the massive scale of testing which is necessary.
To this end, we at atlatec are always working to support more simulation tools, allowing our customers to continue working with their toolchain of choice when leveraging our HD maps.
Today, we are happy to announce the newest addition to our list of supported simulation software: Cognata.
Cognata is a cloud-based simulation platform designed specifically for autonomous driving and ADAS applications and is used for AV training, validation and analysis. It offers several datasets to test AV components against, such as traffic lights, signs, pedestrians and vehicles.
By importing atlatec HD maps, Cognata customers will have the added benefit of training and testing on road environments that are highly accurate digital twins of real-world routes, ensuring more robust system performance and results closer to a real-world test drive. The maps are supplied in the OpenDRIVE format.
If you are a Cognata user and interested in learning more about how to leverage our HD maps, please reach out via email or schedule a call with us.
Another month is over – and while Covid-19 seems to be flaring up again all over the world, it’s not the only news out there: The automotive industry, which was hit hard by the Corona crisis, has produced some interesting news items this October. Here’s my personal overview of what stood out:
This month, Tesla released the beta version of its “Full Self-Driving” system to a limited batch of paying customers. The response has been mixed, and there’s lots of video material out there showing situations which FSD apparently handles well – or not. This article got a bit lost in the wake of all this – but I feel it emphasizes an underlying conflict of any self-driving tech relying on drivers’ attention: The better the self-driving performance and user experience, the less attention “drivers” will pay – and the less they’ll be prepared to take over in critical situations. Tesla’s user experience is apparently the worst at keeping drivers’ attention in auto mode, as per this recent NCAP analysis.
Bonus: I also recommend a look at this Twitter thread by Voyage CEO Oliver Cameron who took the time to analyze footage from one of the first test drives in detail.
Waymo, arguably a leader in the autonomous vehicles domain, reached another milestone this month: The Google company will open up its driverless robotaxi service in Phoenix to about 1,000 app users, who can now request rides without safety drivers onboard. Remote operators will be on standby to take control of the vehicles if necessary, but Waymo expects little work for them.
Cruise, a subsidiary of General Motors, is not yet offering rides to the public but got approval by the Californian DMV “to test five autonomous vehicles without a driver behind the wheel on specified streets within San Francisco.” This is the fifth permit for driverless testing after Waymo, AutoX, Nuro and Zoox and it comes with some restrictions: “The Cruise vehicles are designed to operate on roads with posted speed limits not exceeding 30 miles per hour, during all times of the day and night, but will not test during heavy fog or heavy rain, the DMV said.”
There’s an ongoing discussion about the ethics of these public-roads tests: On the one hand, companies are supposed to “verify vehicles are capable of operating without a driver” to get a permit, but on the other hand those tests are being conducted with the specific purpose of verifying this in the first place – they are tests, after all. This has potential for further controversy, and further underlines the need for comprehensive, real-world-based simulation ahead of on-road operations.
This was relatively unnoticed news this month, but I find it worth noting because GEELY’s Lynk & Co. brand is attempting to redesign one of the basic fundamentals in automotive: The relation between car ownership and access to (car-based) mobility.
What Lynk & Co. is offering with the new “01” model is a built-in car-sharing platform, complete with mobile apps to unlock vehicles by phone etc. Individuals can take out a lease on a 01 (around 500 EUR a month, including service by Volvo dealerships) and then offer it for use via the platform – defining when it’s available and how much they want to charge to rent it out to other users, who don’t pay for vehicles/leases themselves. Sure, car sharing is nothing new – but if done right, this could bring a new level of convenience to the game which might really make a difference.
I find this move a) very brave, because it essentially means a commitment by GEELY to sell fewer cars, and b) very innovative coming from an OEM, because it doesn’t attempt to solve any and every mobility challenge by adding more, or better, vehicles but instead truly treats mobility as a commodity. The new car – and service – will pilot in Amsterdam, arguably one of the European cities that has done the most to move away from traditional car ownership models.
How Accurate Are HD Maps for Autonomous Driving and ADAS Simulation? – via atlatec
It’s definitely one of the most frequently asked questions for us here at atlatec: “How accurate are HD maps?” It sounds innocent, but answering it correctly is rather complex. However, we feel that the question is important, both when it comes to safety in autonomous vehicle operations and regarding the validity of simulations based on real-world maps.
This month, we’ve therefore taken the time to answer the question comprehensively; taking a close look at what accuracy really means in the context of HD maps – and of course we’re also putting numbers to what atlatec achieves in this domain.
As a first this month, we took to Zoom to discuss some of these news items internally – and we recorded it: Tune in on our YouTube channel to hear what our CEO, Henning Lategahn, thinks about the developments at Tesla and Lynk & Co. and to get some more explanation on the topic of HD map accuracy!
In our mission to create digital twins of real world roads, our team at atlatec has taken on a number of HD Mapping projects all over the world, delivering HD maps and 3D models for autonomous vehicle operations and AV/ADAS simulation.
Along the way, we’ve discovered a number of topics and questions that are of relevance to almost all project partners involved – and we want to take the opportunity to discuss some of these in more detail. To start, we’ve decided to answer one of the most prominent and frequent questions we get:
“How accurate are HD maps?”
Maintaining high accuracy is one of the biggest challenges in building HD maps of real-world roads – and a rather complex one. Let’s dive right in and start by looking at what accuracy means in this context:
What does “accuracy” mean for HD maps?
With regard to accuracy, there are two main focus points that determine the quality of an HD map:
Global accuracy (positioning of a feature on the face of the Earth)
Local accuracy (positioning of a feature in relation to road elements around it).
It is important to note that, in terms of road mapping, accuracy is an index that cannot be derived from a single variable. With regard to our mapping technology, accuracy depends directly on three potential sources of error:
Global accuracy of maps is generally bound by the accuracy of GPS: This challenge is the same for map providers all over the world. With regard to this type of error, then, the main cause is poor GPS signal quality. It is most often affected when driving a survey vehicle in areas that are covered by roof-like structures (most commonly under bridges and through tunnels), as well as surrounded by tall buildings (within street canyons).
The challenge of accurately determining the position of one’s sensor pod is a very old one. It’s basically the same one seafarers used to face when navigating by the stars: In order to accurately pinpoint your goal and chart a course, you first need to determine where you are located. Similarly, in HD mapping, you can’t answer the question “Where is this sign we’re detecting?” unless you first answer the question “Where are we currently positioned?”.
As a result, your ability to accurately survey a road and its surroundings is directly dependent on your ability to first pinpoint the position and pose of your survey vehicle – along all trajectories it was driven. Any errors in determining a sensor’s position and pose will subsequently result in a global accuracy error of the map created from this sensor’s data.
To maintain a high degree of global accuracy, our sensor pods contain survey-grade differential GPS sensors: This ensures optimized signal reception and allows us to supplement the real-time satellite signal with correction data from GPS base stations, which exist almost all over the world. In combination, such correction data significantly enhances accuracy compared to using only GPS satellite signals.
Before a survey vehicle enters a tunnel and after it comes out at the other end, the differential GPS receiver usually provides accurate global coordinates to determine its position. However, as mentioned above, attempting to track its movement on a global scale whilst it is driving through a tunnel produces error – there is no GPS satellite signal underground.
This is where the importance of the stereo cameras comes in: Imagery that we collect from the two calibrated cameras whilst driving through a tunnel allows us to compute and track the pose and motion of a survey vehicle using computer vision technology. To further supplement accuracy, we add another, redundant sensor in the form of a motion sensor, or IMU (inertial measurement unit).
When it comes to the processing stage, then, we use sensor fusion to combine the data from the cameras, GPS and IMU to successfully reconstruct the trajectory of a survey vehicle and its surroundings, maintaining high accuracy throughout the entire data set. The advantages of using computer vision technology stand out in contrast to other systems that are mainly IMU-based: Their main side effect is that, in areas with no or poor GPS, the trajectory of a vehicle can drift and may only be corrected once the GPS signal is recovered. In the context of autonomous driving, such errors cannot be afforded.
Using imagery collected from the stereo cameras allows us to recreate a very consistent trajectory, even in GPS-denied areas. If the GPS signal is lost for a very long distance, though, drift/local referencing error will eventually occur, as is the case for all known approaches.
The benefit of using a camera-based approach – also called visual odometry – over an IMU-based system lies in the nature of how the error accumulates over time: whereas the total error of an IMU accumulates in a cubic fashion (at a factor of x³, with x being the distance travelled), atlatec’s vision-based approach only makes for linear accumulation of error.
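To make the difference in growth behavior tangible, here is a minimal numeric sketch. The proportionality constants are invented for illustration – only the growth orders (linear vs. cubic) reflect the statement above.

```python
# Minimal sketch of how drift grows with distance travelled. The constants
# k and c are made up for illustration; only the growth orders (linear for
# visual odometry, cubic for an IMU-only system) mirror the text.
def vision_drift_m(x_m, k=1e-3):
    """Visual odometry: error grows linearly with distance x."""
    return k * x_m

def imu_drift_m(x_m, c=1e-9):
    """IMU-only dead reckoning: error grows with the cube of distance x."""
    return c * x_m ** 3

for x in (100, 500, 1000, 2000):
    print(f"{x:5d} m  vision: {vision_drift_m(x):7.3f} m"
          f"  imu: {imu_drift_m(x):7.3f} m")
```

With these (invented) constants the two curves cross around the 1 km mark; beyond that, the cubic IMU error rapidly dominates – which is why long GPS-denied stretches favor the camera-based approach.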
This type of error is caused by incorrect calculation of the distance between a point of interest (for example a stop line or a traffic light) and a sensor pod camera. Local sampling, or annotation, takes place after the collected data is translated into a 3D model and is the process of labelling features within this model, thus making them identifiable to simulation tools or autonomous vehicles. In other words, annotation is the process of translating 3D imagery, which humans can easily understand and process, into a vectorized “digital twin” which can be processed by algorithms and AI.
In order to annotate road objects accurately, we use a combination of AI and manual work, which will be discussed in more detail later on.
What is atlatec’s approach to creating accurate HD maps?
Attempting to deal with all three causes of accuracy error in the practice of road mapping poses a number of challenges both in terms of software and hardware development. Our mapping technology employs a number of tools and solutions which allow us to achieve high HD map accuracy in a cost-effective manner.
Portable, camera-based mapping setup
At atlatec, we use a sensor setup that is mainly camera-based. Having two cameras and a GPS receiver at a fixed distance from each other in a small, portable box allows us to map roads worldwide with very little logistical difficulty: The metal case containing all sensors is the size of a suitcase and can be set up on any car in a matter of minutes.
Leveraging the survey-grade differential GPS and our computer vision expertise as explained above, we manage to accurately recreate all trajectories driven during data acquisition. As both our hardware and software are developed in-house, the sensor pods’ configuration and the pipeline for processing their data are heavily optimized for each other.
Achieving extremely accurate loop closure is a crucial step in the creation of coherent datasets. Our survey method includes driving on every lane of a road we set out to map, extending the initial driving duration but ensuring higher data quality (and eliminating occlusion issues). The main reason this increases mapping accuracy is that, by driving on every lane of the same stretch of road, the same road object can be detected multiple times, enabling us to determine its global position more accurately. The process of bringing the sensor data from these multiple survey trajectories together into one consistent result is what’s called loop closure.
To exemplify this, let’s say a vehicle equipped with an atlatec sensor pod drives on a lane framed by dashed lane borders. The survey vehicle will drive on that lane at least once (the example of driving past a desired point on the road twice is represented in the schematic image below as trajectory a and trajectory b). Moreover, the vehicle will also drive on its neighboring lanes (if there are any) as part of the same survey session which starts and ends at the same location. In turn, once it comes to the annotation stage, we will be able to represent, for example, a corner of any individual dash as a point in a 3D map.
In complex cases, such as sharp turns where a certain point can be absent from some trajectories, we will still be able to determine the position of a dash accurately. The reason is that, thanks to loop closure, our data sets are very coherent: That makes it possible to connect the data acquired from both stereo cameras and track key points across the multiple trajectories in which they are visible.
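The intuition behind observing the same feature from several trajectories can be sketched in a few lines. This is not atlatec’s actual pipeline, just a hypothetical toy example in which each trajectory contributes one noisy estimate of the same lane-dash corner, and fusing the estimates tightens the result:

```python
import statistics

# Toy illustration (not atlatec's actual pipeline): each survey trajectory
# yields one noisy (x, y) estimate of the same lane-dash corner; averaging
# the independent estimates reduces the remaining error.
def fuse_observations(observations):
    xs, ys = zip(*observations)
    return (statistics.mean(xs), statistics.mean(ys))

# Hypothetical estimates from trajectories a, b and c (coordinates in meters):
obs = [(10.02, 5.01), (9.97, 4.98), (10.01, 5.03)]
print(fuse_observations(obs))
```

Real loop closure is far more involved (it jointly optimizes all trajectories and key points), but the principle is the same: more independent observations of a feature yield a more accurate position for it.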
Our third and main strength is our software. Data retrieved from the stereo cameras, the GPS receiver and the IMU is first pre-processed in order to accurately reconstruct driving trajectories and mitigate potential incoherences from driving in areas with poor GPS signal. Following this stage, we use a combination of AI and manual work to reconstruct a broad spectrum of road objects in a virtual environment. Although our software can detect and identify a wide range of road elements accurately, integration of manual work is an important step in ensuring high accuracy and consistency throughout the entire map.
How accurate are atlatec HD maps?
Based on thousands of kilometers of HD maps we’ve created all over the world and the results of various tests and audits, we conclude that accuracy errors will be lower than the following for 95% of atlatec HD map coverage:
In areas with good GPS reception we achieve a global accuracy of less than 3 cm deviation using satellite signals and correction data from base stations.
In GPS-denied areas, however, inaccuracy rises with distance traveled through the area, being largest in its middle. This means that the maximum GPS error can be expressed as a percentage of the distance traveled through a GPS-denied area: We have quantified this through repeated tests which indicate that this value is less than 0.5%.
For instance, if we drive through a tunnel that is 500 meters long, our GPS-based estimation of the global position of a survey vehicle will deviate no more than 1.25 m from the truth in the middle of that tunnel.
As this is still a relatively high margin of error, we leverage computer vision as discussed above to mitigate the error on a local level:
Local accuracy (drift)
By using computer vision technology to reconstruct the trajectory driven on any route we can work around GPS, keeping consistency and accuracy at a high level even in tunnels and urban canyons.
As mentioned above, the error that occurs when relying on visual odometry accumulates far more slowly than with e. g. MEMS-IMU-based approaches: Within a certain horizon (h) around a survey vehicle, the drift of the reference trajectory will contribute an error of less than 0.1%*h.
For instance, a feature located at 20 meters distance from a survey vehicle will not be displaced by more than 2 cm due to local drift of the reference trajectory.
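Written out as arithmetic, the two bounds above work as follows. The numbers are the ones stated in the text; the helper functions themselves are just for illustration:

```python
# The accuracy bounds from the text, written out as simple arithmetic.
# The helper functions are illustrative; the rates come from the figures above.
def gps_denied_error_m(stretch_length_m, rate=0.005):
    """Max global error at the middle of a GPS-denied stretch:
    0.5% of the distance travelled to reach that point."""
    return rate * (stretch_length_m / 2)

def local_drift_error_m(feature_distance_m, rate=0.001):
    """Local drift bound within a horizon h around the vehicle: 0.1% * h."""
    return rate * feature_distance_m

print(gps_denied_error_m(500))   # the 500 m tunnel example: 1.25 m
print(local_drift_error_m(20))   # a feature 20 m away: 0.02 m, i.e. 2 cm
```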
Inside a corridor of 10 meters’ width around the mapping trajectory, features in the finished 3D model can be surveyed with less than 4 cm deviation. At larger lateral distances, precision will drop.
Which kind of accuracy matters most for HD maps?
We have taken on a number of mapping projects all over the world so far, a typical customer use case being the creation of 3D models for (ADAS) simulation. With that in mind, it is important to note that, when it comes to virtual testing environments, the relevance of the accuracy errors mentioned above can differ.
Usually, for simulation use cases, low local and sampling errors are of the highest significance, while global accuracy and GPS positioning are often irrelevant in this context. In fact, GPS receivers weren’t even part of our sensor setup in the beginning: This is due to the nature of virtual testing, where what matters is that the local environment is reproduced accurately – e. g. in the process of simulating lane-keep assistance on a digital twin of real-world lane geometry. As long as the positioning of the vehicle in relation to road elements is correct, it usually does not matter where on the globe these road elements are located. We will discuss map development for simulation in more detail in a separate article.
If you want to see for yourself how atlatec data can boost simulation, you can download a free sample map of downtown San Francisco here – provided in the OpenDRIVE format, as supported by a growing number of simulation tools.