S&P Global Mobility hosted a discussion with experts on XiL testing's role in the automotive industry, focusing on autonomous vehicles.
Continuing our series of round tables, S&P Global Mobility held a discussion with three experts about XiL (X-in-the-Loop) testing and its role in the automotive industry, particularly for autonomous vehicles (AVs).
The adoption of XiL testing is driven by the need to integrate the many interconnected systems in AVs, enabling communication between subsystems and scalable validation. The panel discusses how setting up an effective XiL testing environment requires accurate virtual environments, well-correlated component models and good hardware connectivity. Challenges include improving the realism of simulations, accurately simulating sensor data and ensuring models are correlated with real-world data.

The main risk of relying solely on XiL testing is investing in designs that are not physically achievable; mitigation involves always correlating simulations with real-world references and ensuring models are accurate. XiL testing's future lies in reducing development costs and testing subsystems in parallel. Although it can reduce the need for prototypes, they remain key for validating real-world performance and addressing issues such as noise, vibration and harshness, which arise from complex physical interactions that are difficult to simulate accurately.
Pictured: Matt Dustan, director of laboratory test systems at AB Dynamics; Dave Kirkman, technical director at AB Dynamics simulation group; and Josh Wreford, head of sales at rFpro.
Sources: AB Dynamics; rFpro.
The following is an edited transcript of the conversation:
S&P Global Mobility: Let’s start with the basics: what does XiL encompass, and how is it being utilized by the automotive industry?
Dave Kirkman (DK): XiL testing is the ability to integrate external elements into your simulation. The three main elements are software-in-the-loop (SiL), hardware-in-the-loop (HiL) and driver-in-the-loop (DiL), which is where my experience lies.
XiL testing is now extremely important in terms of rapidly developing autonomous vehicles. It enables you to test the hardware and software of subsystems in parallel before migrating to a prototype vehicle and allows manufacturers to make critical design decisions early in the development chain. This makes a big difference in terms of being able to get the vehicle to production quickly.
Josh Wreford (JW): I think Dave has summed it up quite nicely there. XiL testing enables complete subsystems to be thoroughly developed before advancing to on-track testing. As a SiL specialist, the thing I find exciting is the amount of testing that can be done before any hardware exists; this is hugely valuable. Initial testing can be done on any standard desktop computer with your various component models, without requiring specialist equipment, test benches or prototypes. This is particularly important for autonomous vehicles as the quantity of software required has increased significantly.
Matt Dustan (MD): Exactly, so by the point you are ready to commit to producing hardware you are already very confident in your solution. And that is where I come in as a HiL test machine engineer. Once it is available, we can put the real subsystem hardware into the development loop. This is generally still performed at an early stage to continue frontloading the engineering process as much as possible.
For example, we can use a DiL simulator to enable a human to drive a virtual vehicle with the real steering or suspension hardware. So as the virtual vehicle is driven over a bump in the road, that compression can be fed into the real suspension system on a test rig and the resultant forces passed back into the simulation to close the loop.
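To make the closed loop Matt describes more concrete, below is a minimal Python sketch of a suspension HiL exchange. The rig interface is replaced by a simple spring-damper function so the script runs on its own; all function names and parameter values are invented for illustration and do not represent any particular vendor's equipment or API.

```python
# Minimal sketch of a suspension HiL closed loop (hypothetical interfaces).
# The physical test rig is replaced by a spring-damper stand-in so the script
# runs standalone; in a real setup rig_apply_displacement() would command the
# rig actuator and return the force measured by its load cell.

import math

DT = 0.001            # 1 kHz loop rate, a typical HiL update frequency
SPRUNG_MASS = 400.0   # kg, illustrative quarter-car sprung mass

def road_profile(t):
    """Virtual road input: a single 30 mm bump centred at t = 1 s."""
    return 0.03 * math.exp(-((t - 1.0) / 0.05) ** 2)

def rig_apply_displacement(compression, rel_velocity):
    """Stand-in for the rig: returns the suspension force it would measure."""
    stiffness, damping = 30000.0, 2500.0   # N/m and N*s/m, placeholder values
    return stiffness * compression + damping * rel_velocity

z, z_dot, prev_road = 0.0, 0.0, 0.0   # sprung-mass state, measured from equilibrium
peak = 0.0
for step in range(3000):
    t = step * DT
    road = road_profile(t)
    road_dot = (road - prev_road) / DT
    prev_road = road
    compression = road - z                               # command sent to the rig
    force = rig_apply_displacement(compression, road_dot - z_dot)
    z_ddot = force / SPRUNG_MASS                         # measured force closes the loop
    z_dot += z_ddot * DT
    z += z_dot * DT
    peak = max(peak, abs(z))
print(f"Peak sprung-mass displacement: {peak * 1000:.1f} mm")
```

In a real installation the spring-damper stand-in disappears: the displacement command drives the rig actuator and the returned force comes from the hardware, while the vehicle model and loop timing stay essentially the same.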
XiL testing is increasingly being utilized by original equipment manufacturers, but what is driving this adoption, particularly for autonomous vehicles?
MD: Josh has just touched on it. With autonomy, we suddenly have a lot of systems that have previously been isolated now having to talk to each other, so the vehicle is becoming a lot more connected through software.
Autonomous functions are interacting with safety-critical systems, so doing the ground-up development on real roads or on a proving ground is not recommended. This is one of the key drivers for the increased adoption of XiL testing.
JW: I think it all comes down to cost, efficiency and scalability. The ability to run iterations of simulations on the cloud automatically is essential for autonomous vehicle developers. The scale of tests that can be achieved is only limited by the available computing power. This allows OEMs to rigorously validate the safety-critical software components of autonomous vehicles relatively cheaply and at great speed.
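As a hedged illustration of the scalability Josh describes, the sketch below farms a grid of scenario parameters out to parallel workers. The "simulation" is a trivial braking-distance check standing in for a full SiL run, and the parameter grid and pass criterion are invented for the example.

```python
# Illustrative sketch of scaling scenario tests across workers, as described
# for cloud-based SiL. run_scenario() is a placeholder for launching the full
# simulation stack; in practice each call would run a complete virtual test.

from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_scenario(params):
    speed_mps, mu = params                                  # approach speed, road friction
    braking_distance = speed_mps ** 2 / (2 * mu * 9.81)     # simple flat-road stopping model
    return {"speed": speed_mps, "mu": mu, "pass": braking_distance < 40.0}

if __name__ == "__main__":
    # Sweep a small grid of speeds and surface frictions; on a cloud cluster
    # this grid would contain thousands of parameter combinations.
    grid = list(product([10, 15, 20, 25, 30], [0.3, 0.5, 0.7, 0.9]))
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_scenario, grid))
    failures = [r for r in results if not r["pass"]]
    print(f"{len(failures)} of {len(results)} scenarios failed")
```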
DK: My specialist area is driver-in-the-loop simulators, and you might think I will be out of a job when it comes to helping develop autonomous vehicles. But we have customers who have come to us for DiL simulators specifically for the development of autonomous vehicles. At the end of the day, there is still a human sitting in that vehicle, and manufacturers need to ensure the passenger is comfortable and doesn’t feel sick, so they will use it again.
Ride and comfort become key considerations in autonomous vehicles. There is also the human factors side of development that needs to be assessed. In a semi-autonomous vehicle, how does the vehicle hand over control? Or how does a passenger communicate their needs to the vehicle? It's always about the human-machine interface, and that’s where a DiL simulator really comes into its own.
MD: It’s true, that human interaction and subjective assessment is key to vehicle development. Our steering test machine, the SSTM (Steering System Test Machine), was originally developed as a steering characterization rig. But we have since added HiL capability to the machine, enabling steering feel to be assessed, and it is increasingly being used for that; HiL testing is now driving demand for the product.
What are the key components involved in setting up an effective XiL testing environment?
JW: I would say one of the most critical components is the virtual environment where you are conducting simulations. This must replicate the real world as accurately as possible, including an HD road surface, 3D objects with material definitions and environmental characteristics such as time of day and the weather. This ideally needs to be based on real-world locations, creating a digital twin, so you can easily correlate your simulations. The component models also need to be highly accurate and well correlated: the vehicle model, tire model, camera sensor model and so on.
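A hypothetical scenario description, written as a plain Python dictionary, shows how the ingredients Josh lists might fit together. The field names and file references are invented for illustration and do not follow any particular tool's schema.

```python
# Invented scenario description illustrating the components of an XiL virtual
# environment: a digital-twin location, surface and material data, lighting
# and weather, plus the correlated component models that sit in the loop.

scenario = {
    "environment": {
        "location": "digital_twin_of_real_test_route",      # surveyed real road
        "road_surface": {"source": "lidar_scan.laz", "resolution_m": 0.01},
        "objects": [{"mesh": "barrier.fbx", "material": "galvanised_steel"}],
        "time_of_day": "17:30",
        "weather": {"condition": "light_rain", "visibility_m": 800},
    },
    "models": {
        "vehicle": "correlated_vehicle_model.json",
        "tire": "thermal_tire_model.json",
        "sensors": [{"type": "camera", "model": "validated_camera_model.json"}],
    },
}
```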
MD: I am with Josh that at the heart of everything, especially relating to autonomy, is the need for a good virtual environment. It is then critical to have good connectivity between the various systems and their [electronic control units], and they need to function as they would in the real world. You need to understand the specific testing that you want to do and ensure you have the appropriate hardware for in-the-loop testing for those systems.
DK: Correlation, correlation, correlation. This is the single most important component of XiL testing to ensure you can rely on your digital models. It’s the only way to ensure you don’t go down the wrong development path.
Your hardware is also key. You need a high-fidelity system that's capable of communicating at the right frequency so that you're getting as much information as possible. I think that's the most important part of any XiL simulation.
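A minimal sketch of what that correlation step can look like in practice: compare a simulated channel against the measured reference and only trust the model once the agreement metrics sit within an agreed tolerance. The two signals and the thresholds implied below are synthetic placeholders, not real rig or proving-ground data.

```python
# Compare a simulated signal against measured reference data before trusting
# the model. The signals here are synthetic; in practice they would be logged
# test data and the matching XiL output on the same time base.

import math

def rmse(sim, meas):
    """Root-mean-square error between two equally sampled signals."""
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(sim, meas)) / len(sim))

def pearson(sim, meas):
    """Pearson correlation coefficient between two signals."""
    n = len(sim)
    mean_s, mean_m = sum(sim) / n, sum(meas) / n
    cov = sum((s - mean_s) * (m - mean_m) for s, m in zip(sim, meas))
    var_s = sum((s - mean_s) ** 2 for s in sim)
    var_m = sum((m - mean_m) ** 2 for m in meas)
    return cov / math.sqrt(var_s * var_m)

simulated = [math.sin(0.1 * i) for i in range(200)]
measured = [math.sin(0.1 * i) + 0.05 * math.cos(0.9 * i) for i in range(200)]

# Acceptance thresholds are project-specific; these metrics are illustrative.
print(f"RMSE = {rmse(simulated, measured):.3f}, r = {pearson(simulated, measured):.3f}")
```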
What are some of the challenges faced when implementing XiL testing for autonomy, and how can they be overcome?
DK: The biggest challenge is to first get your models good enough to run in the real-time environment. That's the first building block of any XiL simulation. Then, once you've got some confidence in the models, you progress to hardware-in-the-loop, which helps to correlate them.
MD: Definitely, at the early stages of development, you'll get away with maybe quite crude simulations of some of the hardware. But as the design is refined and gets locked in, that's when you want to start using the real hardware in the loop to give you that precise feedback and correlate your simulations as soon as possible.
If you don't have good benchmark data and are relying solely on simulations, then you are going to head down the wrong path. At the start of everyone's journey into autonomy and virtualization, you must use good benchmark data.
JW: I couldn’t agree more; the more accurate your models and virtual environment are, the more confident you can be in your results. Increasing the realism of simulation is probably the biggest challenge. When simulating an environment for the human eye, there are certain efficiencies that can be made to trick the driver. But you can’t trick a sensor, so you have no choice but to replicate the real world as best you can. That’s why we have developed our ray tracing rendering engine at rFpro. It simulates each light ray in a scene to create photorealistic images.
We have found the best way to improve your component models is to collaborate with component manufacturers. For example, we have recently started working with Sony to improve the fidelity of camera models.
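The per-ray principle Josh refers to can be shown with a toy example: one primary ray is traced from the camera through every pixel and shaded by whether it hits the scene. A production renderer such as rFpro's adds full light transport, materials and sensor-specific effects; everything in this sketch, from the camera model to the single-sphere scene, is invented for illustration.

```python
# Toy per-pixel ray casting: fire one primary ray per pixel and test it
# against a single sphere. Real sensor simulation traces many more rays with
# physically based materials and lighting; this only shows the basic idea.

import math

def ray_sphere(origin, direction, centre, radius):
    """Return distance to the first hit, or None if the ray misses the sphere."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

WIDTH, HEIGHT = 40, 20
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # One primary ray per pixel, fired from a pinhole camera at the origin.
        dx = (x - WIDTH / 2) / WIDTH
        dy = (y - HEIGHT / 2) / HEIGHT
        norm = math.sqrt(dx * dx + dy * dy + 1.0)
        direction = [dx / norm, dy / norm, 1.0 / norm]
        hit = ray_sphere([0, 0, 0], direction, [0, 0, 3], 1.0)
        row += "#" if hit else "."
    print(row)
```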
What are the current limitations of the approach, and how is this likely to be overcome?
JW: One of the biggest limitations is being able to accurately simulate everything, as I have just mentioned. But it is improving all the time, and collaboration with other industry leaders has become a big part of this, whether that is component manufacturers providing digital models of their products or government bodies helping to improve traffic modeling and crash scenario databases.
DK: Josh is right, it is getting better and better every day, but I believe one of the issues holding XiL testing back is simply that it is not yet common practice. The technology exists today to carry out XiL testing across many vehicle subsystems, so as the benefits of simulation become clearer, adoption will accelerate. One of the biggest advantages is the parallelization of development: being able to test each subsystem in isolation and all together. Typically, in any OEM, you'll have a department just working on one particular part of the vehicle, and XiL allows you to bring all those parts together in a much faster way.
What areas of autonomy will benefit most from XiL testing and why?
MD: It must be the perception systems. The ability to run thousands of computations, limited only by your available computing power, is of tremendous value. Mixing the software of the AI with the real sensor hardware in the same loop in a simulation is infinitely safer than doing the same testing on the road. This also enables the interactions with the steering and braking systems to be safely explored. So, the perception systems and the general safety of testing autonomous vehicles will benefit the most from XiL capabilities.
DK: Exactly. The ability to run hundreds of simulations offline allows you to rapidly develop and train the perception systems; this is where autonomy benefits the most from XiL testing.
Again, there are lots of elements where a human is interacting with the vehicle, whether as a passenger or even a pedestrian, so DiL becomes very important. Autonomous vehicle manufacturers are very keen that passengers can carry on with day-to-day activities when in the car, such as working on a laptop or watching a movie, all while feeling safe and comfortable. There are a number of different technologies out there to help improve the passenger experience, but they are still relatively low in maturity. XiL testing enables you to mature that technology to the point that you can then deploy it safely in the real world.
JW: I think they have hit the nail on the head. Perception systems definitely benefit the most. XiL testing is particularly useful for exploring edge cases and rare events that are challenging and sometimes dangerous to replicate in physical testing or occur infrequently in real-world driving. But almost every single area of autonomous vehicle development will benefit from XiL testing.
Can you provide examples of specific automotive systems or components that can be effectively tested using XiL methodologies?
JW: As I say, almost every area of a vehicle will benefit from using XiL methodologies. I think the perception systems, improving sensor fusion algorithms, object detection and tracking, is the most exciting. But then being able to assess how those affect the chassis and safety systems is very powerful.
MD: Exactly, testing how interconnected systems work together is of great value. A good example would be the steering system. AB Dynamics’ SSTM allows the real steering hardware and software to be tested using a simulated vehicle model. It enables a driver sitting in a simulator or at the rig to provide subjective feedback on the system without the need for a prototype vehicle. This is not only important for semi-autonomous vehicles but for fully autonomous vehicles too. Lane keeping functions or handover processes can be evaluated before progressing to on-track testing. We can even deliberately inject faults into the ECU to understand how it fails and to make sure it does so safely.
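A hedged sketch of the fault-injection idea Matt mentions: a sensor signal is corrupted partway through a run and the test checks that the controller degrades safely. The "ECU" here is a plain Python function standing in for the real steering controller on the rig, and all gains, limits and the safety criterion are invented for the example.

```python
# Inject a fault mid-run and check the control logic responds safely. The
# steering_ecu() function is a stand-in for the real controller under test.

def steering_ecu(lane_offset_m, sensor_valid):
    """Toy lane-keep controller: returns (torque_nm, fault_flagged)."""
    if not sensor_valid:
        # Expected safe behaviour: flag the fault and command zero torque.
        return 0.0, True
    return max(min(-2.0 * lane_offset_m, 3.0), -3.0), False

def run_with_fault(inject_at_step):
    lane_offset = 0.5                           # start half a metre off-centre
    for step in range(100):
        sensor_ok = step < inject_at_step       # the sensor "fails" at this step
        torque, fault = steering_ecu(lane_offset, sensor_ok)
        if not sensor_ok and (torque != 0.0 or not fault):
            return False                        # unsafe response to the fault
        lane_offset += 0.05 * torque            # crude plant: torque steers back to centre
    return True

print("Fails safely:", run_with_fault(inject_at_step=40))
```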
DK: Personally, I've always been fascinated with the science of the human interacting with a vehicle and how that vehicle behaves, the vehicle dynamics basically. A DiL simulator gives a human that sensation to feel what’s going on with the vehicle and this subjective assessment is critical to ride and handling. In my opinion, it is key to get this right in an autonomous vehicle because if you go down the wrong path it is very expensive to resolve, particularly if the issue is identified close to production. Testing this using XiL is critical.
Can you share any success stories where XiL testing has been instrumental in identifying and resolving critical issues in autonomy systems?
MD: We worked closely with a North American OEM developing a new semi-autonomous vehicle. A lot of the technology was completely new, so by using XiL testing they were able to take a very immature software toolchain and rapidly build in an extensive level of robustness in the laboratory. Without XiL testing, at least some of this work would have had to be conducted on public roads, so from a safety point of view it was hugely beneficial.
JW: One of the biggest providers of autonomous technologies, Waymo, is using software-in-the-loop extensively to develop and test its automated driving system. I believe it also employs hardware-in-the-loop and vehicle-in-the-loop (ViL) methodologies to a lesser extent too.
Our ray-tracing rendering engine has been designed specifically for SiL development and particularly for the training of AV systems. Our customers are using this to ‘drive’ hundreds of thousands of virtual miles every single day.
DK: Unfortunately, from my side, a lot of our current work with autonomous vehicle developers is confidential so we are unable to talk about specifics. However, when I was working in Formula 1 in the 2000s, we used the simulator to develop systems like traction control and automatic gear changes, using real hardware and software-in-the-loop. The engineers who were working on these systems said it was absolutely invaluable to be able to run tests on the simulator before they got anywhere near the track.
We even assessed concept ideas during regulation changes. For example, you can very quickly evaluate the impact of a four-wheel drive system and rule that in or out of your development path early before spending valuable resources on the design and manufacture.
This is a great example of how simulation and XiL testing allow you to rapidly develop a new technology in an environment that doesn't cost very much money. These very same benefits translate to the development of autonomous vehicles too.
It is believed that virtual validation is the only way to thoroughly develop autonomous vehicles and XiL testing will be a key part of that, but what are the risks or limitations of relying solely on XiL testing? And how can these risks be mitigated?
MD: You can define a set of vehicle characteristics in a digital vehicle model without having to outline any physical attributes. So, one of the risks is to invest in a design in the virtual world that’s not actually physically achievable. The way to mitigate this is to always take a step back to the CAD stage and ask the question, can this be physically done? Will it fit? Again, it comes back to correlating the simulation with a known reference point from the real world.
DK: Exactly, it goes back to the need to always correlate your simulations as much as possible. The biggest risk is believing in a model that’s incorrect. If your model is wrong, you will head down the wrong path. And then you could be in for a very expensive and rude shock when you migrate to the real world. The only way to mitigate this is to correlate your models.
The entire industry is moving towards virtual validation. Virtual NCAP testing is a great example of this. It is the only way to assess an autonomous vehicle’s performance safely and thoroughly. It provides exact control of the environment and the test scenario and enables that to be consistently repeated.
Where do you see the future of XiL testing and how will it impact autonomous development?
DK: Specifically on the simulator side of things, I think OEMs are looking to reduce the cost of development. High-quality dynamic simulators are a significant investment and OEMs now need a number of them to keep up with development demands. So, we are creating a range of platforms that cater to different niche areas of development. They won’t have the full capabilities of a top-level simulator but will be optimized for that area of vehicle development. This approach means we can significantly reduce the cost of the system.
More generally, it is without question moving towards ViL testing — having all the vehicle subsystems, both hardware and software, tested in parallel. We are already seeing customers working on brake systems, steering systems and powertrain dynos, for example, and we're starting to build those blocks together and link them in a simulation. The next step is to actually do the whole vehicle, and that would accelerate development further.
MD: I agree. There is a real shift towards parallel testing taking place. As a provider of HiL capable machines, we are seeing customers increasingly wanting to do this.
We are also seeing a shift towards more cloud computing. We might move away from having sophisticated computing power on-site at the OEM and instead move to it being hosted in the cloud by an external company. This is particularly beneficial for autonomous start-ups that can avoid the upfront investment costs and instead rent the computing power, switching it on and off as needed.
JW: We touched on it earlier in the conversation, that vehicles and their subsystems are becoming more and more connected. As the complexity of autonomous technologies continues to grow, XiL testing provides a cost-effective and controlled environment to test and validate the functionality, performance and integration of these systems.
I think we will also continue to increase the fidelity of simulation and as this becomes more advanced it will enable even more testing to be done as we move towards virtual validation. It will be an integral part of the validation process, providing evidence of safety, reliability, and compliance with regulatory standards.
Will XiL testing remove the need for prototypes?
DK: That's certainly the ambition. Simulation has developed significantly over the last 20 years. For example, we now have very accurate thermodynamic tire models that provide exceptional correlation. As these building blocks continue to improve, we will only become more and more confident in making that step. Yes, I think we will get there, but when will we get there? I don’t know.
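As a reduced illustration of what a thermodynamic tire model captures, the sketch below lets tread temperature rise with sliding energy and fall with cooling, and makes grip a function of that temperature. Correlated production models track multiple tread layers, carcass temperature and pressure; every coefficient here is invented purely for illustration.

```python
# Very reduced sketch of a thermal tire effect: grip depends on tread
# temperature, and temperature is driven by frictional heating and cooling.
# All values are placeholders, not a calibrated model.

import math

DT = 0.1        # s, time step
AMBIENT = 25.0  # °C
T_OPT = 85.0    # temperature of peak grip, °C (illustrative)

def grip(temp_c):
    """Friction coefficient falls off either side of the optimum temperature."""
    return 1.3 * math.exp(-((temp_c - T_OPT) / 40.0) ** 2)

temp = AMBIENT
for step in range(600):                 # one minute of sustained hard cornering
    slip_power = 4.0                    # frictional heating term, placeholder
    cooling = 0.05 * (temp - AMBIENT)   # convective cooling term
    temp += (slip_power - cooling) * DT * 0.2
    if step % 100 == 0:
        print(f"t={step * DT:5.1f}s  temp={temp:6.1f}°C  mu={grip(temp):.2f}")
```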
MD: We are already seeing examples where that is starting to happen, certainly at the subsystem level. However, I personally believe you will always need that validation step using a real prototype, but the hope is that the product will be a lot more mature when the prototype hits the road.
JW: I agree with Matt. While XiL testing significantly reduces the reliance on prototypes during the development process, they remain an essential part of autonomous system development. Prototypes play a crucial role in validating real-world performance, testing hardware integration, conducting system-level testing, and gathering user feedback.
Autonomous vehicles are likely to be based on battery electric platforms where rattles, vibrations and squeaks can be more prominent. These potential warranty issues are often identified at the prototype stage once all the subsystems come together in the vehicle. How difficult is it to simulate these noise, vibration and harshness (NVH) issues?
JW: NVH issues often arise from the physical interaction of various components within the vehicle, such as vibrations caused by the powertrain, suspension or body structure. It can be incredibly difficult to simulate the complex dynamics and interactions that lead to NVH issues.
MD: Yes, very difficult indeed. For example, rubber is a very complex material, and it is difficult to accurately replicate in simulation. But the material plays a key role in NVH when it comes to suspension joints and mounting points, for example. However, this is a key focus area for OEMs, and we are seeing a new set of test rigs being developed to help explore this very issue.
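To give a sense of why this is hard, even the simplest NVH abstraction, a single mass on a rubber mount, shows a sharp resonance whose position and height depend on mount properties that in reality shift with frequency, amplitude and temperature. The constant-parameter sketch below deliberately ignores those dependencies; the mass, stiffness and damping values are illustrative only.

```python
# Forced response of a mass on an idealised rubber mount, evaluated at a few
# excitation frequencies. Real rubber stiffness and damping vary with
# frequency, amplitude and temperature, which this constant-parameter model
# deliberately leaves out.

import math

MASS = 20.0        # kg, component sitting on the mount
STIFFNESS = 8.0e4  # N/m
DAMPING = 60.0     # N*s/m (lightly damped, as rubber mounts often are)

def steady_state_amplitude(freq_hz, force_n=10.0):
    """Closed-form amplitude of a forced mass-spring-damper at one frequency."""
    w = 2.0 * math.pi * freq_hz
    denom = math.sqrt((STIFFNESS - MASS * w * w) ** 2 + (DAMPING * w) ** 2)
    return force_n / denom

# Sweep through a few frequencies, including the undamped natural frequency.
for f in (2, 5, 10, math.sqrt(STIFFNESS / MASS) / (2 * math.pi), 20, 40):
    print(f"{f:6.1f} Hz -> {steady_state_amplitude(f) * 1000:7.3f} mm")
```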
About Dave Kirkman, technical director, AB Dynamics simulation group
Dave Kirkman has more than 20 years of experience in designing and developing DiL simulators. With a background in Formula 1, managing Williams Racing’s simulator department, Dave is now developing next-generation DiL platforms for the development of road cars, working with some of the world’s largest OEMs.
About Matt Dustan, director of laboratory test systems, AB Dynamics
Matt Dustan is responsible for the development of AB Dynamics laboratory test products, including the SSTM (Steering System Test Machine) and the SPMM (Suspension Parameter Measurement Machine), both of which are HiL capable.
About Josh Wreford, head of sales, rFpro
Josh Wreford is head of sales for the automotive division at rFpro and has been instrumental in helping OEMs and Tier 1 suppliers implement and advance their SiL programs. He performed extensive SiL testing at McLaren Automotive before moving to rFpro.
S&P Global Mobility comment:
Jeremy Carlson, associate director for the Autonomy practice with S&P Global Mobility, says: “Simulations serve as a useful tool, enabling autonomous vehicles to confront edge cases and other high-risk scenarios without endangering costly prototypes or other road users. It is nonetheless a challenge to craft simulation environments that fully immerse vehicles, encode their hardware and software, and engage with testing using real-world physics. Evaluating the immense amounts of data from these highly scalable, virtual tests is no easy task either. This roundtable discussion underscores the ever-evolving nature of development and testing that is fundamental to future deployment and ongoing improvements in automated and autonomous vehicles.”
S&P Global Mobility provides insight into OEM and supplier technology strategies across the full spectrum of autonomous, automated and assisted features — including new Level 2+ criteria. Our enhanced Autonomy Forecasts provide access to detailed model-level technology specifications: from application content and feature packaging to a snapshot of all sensors installed in a single vehicle version. In addition to key strategic questions around automated and autonomous technology development, OEM sourcing structures and emerging suppliers, the enhanced Autonomy Forecasts also employ a new application-software and sensor-hardware pricing model, the latest regulatory insights and consumer survey inputs to build a comprehensive outlook for the automotive industry.
Copyright © 2024 S&P Global Inc. All rights reserved.