
Testing of Autonomous Vehicles: Technology, Methods, and Protocols

Years ago, engineers began working on making cars drive themselves. This journey started with the Automated Highway System, which tested the idea of cars driving automatically on dedicated roads. Here, we discuss the testing of autonomous vehicles: the technology, methods, and protocols involved.

Over time, the technology improved, and we now have Advanced Driver Assistance Systems (ADAS): helpful features such as automatic lane-keeping and adaptive cruise control, available in many vehicles. Projects for fully self-driving cars often suggest that autonomous fleets will soon be widespread. However, making a handful of AVs work under controlled conditions with expert safety drivers is very different from deploying millions of vehicles that can navigate the unpredictable real world.

Some argue that if a self-driving car has demonstrated its abilities over a significant distance (even hundreds of thousands of kilometers), it is ready for large-scale use. However, testing in different situations alone may not be enough to guarantee that these cars are safe for widespread deployment.


Infeasibility of Complete Testing:

Individual testing alone cannot guarantee the safety of autonomous vehicles. A well-known challenge is that it is impossible to test such systems exhaustively enough to confirm that they operate correctly and dependably in all situations.

The prospect of a million self-driving cars operating daily raises safety concerns. Testing autonomous vehicles at this scale is challenging, and achieving absolute safety is impossible.

Imagine a fleet of one million cars, each running for one hour daily: that adds up to a million operating hours every day. If the goal is to experience at most one catastrophic computer failure every 1,000 days, the safety target becomes a mean time between catastrophic failures (MTBF) of 10^9 hours. This is similar to the failure rates allowed for aircraft, and it still means accepting the possibility of several devastating failures over the fleet's lifespan.
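
To make the arithmetic concrete, here is a minimal sketch of the calculation, using only the illustrative figures from the scenario above (fleet size, daily usage, and the failure goal are assumptions of the thought experiment, not measured data):

```python
# Back-of-the-envelope fleet reliability arithmetic (illustrative values).
fleet_size = 1_000_000            # vehicles (assumed)
hours_per_day = 1                 # operating hours per vehicle per day (assumed)
fleet_hours_per_day = fleet_size * hours_per_day          # 1e6 fleet hours/day

days_between_failures = 1_000     # goal: one catastrophic failure per 1,000 days
mtbf_hours = fleet_hours_per_day * days_between_failures  # required MTBF

print(f"Required MTBF: {mtbf_hours:.0e} hours")           # 1e+09 hours
```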

To check this failure rate by testing alone, we would need to conduct at least a billion hours of testing, even assuming a representative testing environment. Building a physical fleet for such broad testing without risking the public is impractical. Hence, alternative validation methods, such as simulation, formal proofs, fault injection, and gaining field experience with the technology in non-critical roles, are necessary.

Testing autonomous vehicle systems is even more challenging than testing regular software systems. Traditional testing methods fall short, making it essential to explore new approaches. Validation strategies include simulation, formal proofs, fault injection, step-by-step increases in fleet size, gaining experience with the technology in non-critical roles, and human reviews.

Testing may still be a primary means of validation for less critical computing systems. If failures are low in severity and exposure, they might be acceptable at a higher occurrence rate. For instance, if a particular failure once every 1,000 hours is tolerable, testing for several thousand hours might validate that failure rate. This doesn't mean abandoning software quality processes. Instead, it means adopting a suitable testing and failure-monitoring strategy to ensure components meet acceptable failure rates.
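
As a rough illustration of why "several thousand hours" is the right order of magnitude, here is a sketch of the standard zero-failure demonstration test. It assumes exponentially distributed failures, a common textbook reliability model rather than anything prescribed above:

```python
import math

def zero_failure_test_hours(target_mtbf_hours: float, confidence: float) -> float:
    """Hours of failure-free testing needed to claim the target MTBF at the
    given confidence level, under an exponential failure model."""
    return target_mtbf_hours * math.log(1.0 / (1.0 - confidence))

# Demonstrating a 1,000-hour MTBF at 90% confidence takes ~2,303 failure-free
# hours -- i.e., "several thousand hours" of testing.
print(round(zero_failure_test_hours(1_000, 0.90)))
```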

Ensuring the safety of autonomous vehicles requires a fundamentally new approach to testing and validation. While testing is crucial, traditional methods alone are insufficient. The industry needs to explore a combination of simulation, formal proofs, fault injection, field experience, and human reviews to guarantee the safety of these advanced technologies on our roads.

The V model as a starting point:

System-level testing alone may not be enough. That's why it is essential to establish a more robust development framework when creating safety-critical software.

The V model is a standard method used in software development and testing, and it can be adapted for the validation and testing of autonomous vehicles. The V model represents the development process and the corresponding testing phases in a V-shaped diagram: each stage of development has a matching testing phase, and testing is integrated throughout the development process.

The V model is like a roadmap for creating and checking things. Imagine it as the letter “V.” On the left side, you start with what you need and move through planning, designing, and building, breaking things into smaller parts at each step, like puzzle pieces.

On the right side of the “V,” you test and check those pieces, putting them together to ensure everything works. You keep doing this until you’re sure the whole thing works well.
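
As a quick illustration, here is the usual pairing of left-leg development phases with right-leg testing phases. This is a minimal sketch: the phase names are the commonly used ones, and real V-model variants add more levels:

```python
# Left leg (development) paired with right leg (verification) of the "V".
V_MODEL = [
    ("System requirements", "Acceptance testing"),
    ("Architecture design", "Integration testing"),
    ("Module design",       "Unit testing"),
]

for dev_phase, test_phase in V_MODEL:
    print(f"{dev_phase:20s} <-> {test_phase}")
```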

The usual roadmap might not fit perfectly, because making a self-driving car involves unique challenges. So we must be extra careful in figuring out how to apply this roadmap to make sure these cars are safe and work as they should.

Driver out of the Loop:

The biggest challenge with full AVs is that they are designed so the driver does not have to drive. In conventional cars, drivers control the vehicle. In self-driving cars, the idea is that the car manages everything on its own. This means we can't rely on the driver to give directions or take control while the vehicle is moving; the car needs to handle everything without human input.

Autonomy Architecture approach:

Under ISO 26262, which governs the functional safety of automotive systems, handing control to the computer suggests two ways to assess risk. One option is to rate controllability as “C3: difficult to control or uncontrollable.” This works when the resulting risk is low, but the system needs a high Automotive Safety Integrity Level (ASIL) when risks are higher.

Another way to handle high-ASIL autonomy is ASIL decomposition through a monitor/actuator setup with separation between the two. In this setup, the actuator performs the primary function while the monitor checks and validates its behavior. If the actuator misbehaves, the monitor shuts down the entire function, creating a fail-silent system.

 

The advantage is that the actuator can have a low ASIL, provided the monitor has a high ASIL and detects all faults. This is useful when the monitor can be simpler than the actuator, shrinking the size of the high-ASIL component. The downside is that when something goes wrong, both modules shut down, taking the function offline.
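
A minimal sketch of the monitor/actuator pattern is shown below, assuming a simple speed-envelope check. The class names, limits, and injected defect are hypothetical; a real ISO 26262 decomposition involves far more than this:

```python
class Actuator:
    """Low-ASIL 'doer': computes the control command."""
    def command(self, target_speed: float) -> float:
        # Imagine a subtle defect that overshoots the requested speed.
        return target_speed * 1.05

class Monitor:
    """High-ASIL 'checker': validates commands against a safety envelope."""
    MAX_SAFE_SPEED = 30.0  # assumed limit for this sketch

    def is_safe(self, cmd: float) -> bool:
        return 0.0 <= cmd <= self.MAX_SAFE_SPEED

def control_step(actuator: Actuator, monitor: Monitor, target: float):
    cmd = actuator.command(target)
    if not monitor.is_safe(cmd):
        return None  # fail-silent: the monitor shuts the whole function down
    return cmd

print(control_step(Actuator(), Monitor(), 29.0))  # None: 30.45 exceeds envelope
```

Note that the complexity lives in the actuator; the monitor only needs to know what “safe” looks like, which is why it can carry the high ASIL at a manageable cost.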

Regardless of the approach, detecting faults in autonomy functions and keeping the system safe is essential. This requires a fail-operational, degraded-mode autonomy capability.
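
In code terms, fail-operational behavior means the fault response is a degraded driving mode rather than a plain shutdown. A toy sketch, with mode names invented for illustration:

```python
def select_mode(fault_detected: bool) -> str:
    """Fail-operational fallback: degrade instead of simply going silent."""
    if fault_detected:
        # e.g., limit speed, pull to the shoulder, and stop safely
        return "minimal_risk_maneuver"
    return "full_autonomy"

print(select_mode(True))   # minimal_risk_maneuver
print(select_mode(False))  # full_autonomy
```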

Requirements challenges:

Taking the driver out of the picture means relying on software to control the vehicle. The software must deal with unexpected situations: bad weather, strange driving behaviors, equipment failures, and even rare events like animals crossing the road or unusual local driving conventions.

With numerous self-driving vehicles on the road, they are likely to encounter all kinds of unusual situations that we cannot enumerate in a set of rules. Some events are so rare or unexpected that predicting them in advance and writing specific requirements for each one is impractical.

The traditional way of making a plan, as in the V model, is to list all the rules and requirements up front. That needs to be rethought for self-driving cars, because too many unusual things can happen on the road. The challenge is figuring out how to teach the software to handle these unexpected situations without having a rule for every single one.

Operational concept approach:

When dealing with the complications of requirements for technologies like autonomous vehicles, one approach is to start with limited setups and gradually expand. Developers already do this by focusing autonomous vehicle testing on specific geographic areas. The idea is to define an “operational concept”: the set of conditions under which the technology operates.

 

You can limit these operational concepts in various ways, such as:

Road Access:

Specify where the technology works, like on highways, in specific lanes, rural areas, suburbs, closed campuses, or urban streets.

Visibility: 

Specify when the technology operates: daytime, nighttime, or in fog, haze, smoke, rain, or snow.

Vehicular Environment:

Specify conditions like self-parking in a closed garage, using autonomous-only lanes, or requiring marker transponders on non-autonomous vehicles.

External Environment:

Consider infrastructure support, pre-mapped roads, or driving in convoys with human-driven cars.

Speed: 

Limit the technology to certain speeds, reducing the impact of failures and providing more room for recovery.

By choosing specific conditions from these categories, you are not trying to make things more difficult. Instead, you're simplifying the initial scenarios to understand the requirements better. This strategy is like taking small steps to introduce advanced features in a controlled manner. Once you're confident in understanding and meeting the requirements for one set of conditions, you can gradually add more scenarios. This step-by-step approach helps manage complexity and avoids the vast challenges that would arise from trying to tackle everything at once.
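
One natural way to make an operational concept machine-checkable is to write it down as explicit configuration and gate the autonomy on it. The sketch below is hypothetical; the field names and values are invented for illustration:

```python
# Hypothetical operational-concept configuration for an early deployment.
ODD = {
    "roads": {"highway", "dedicated_lane"},
    "visibility": {"daytime", "light_rain"},
    "max_speed_kph": 40,
}

def within_odd(road: str, visibility: str, speed_kph: float) -> bool:
    """Autonomy may engage only when conditions fall inside the declared ODD."""
    return (road in ODD["roads"]
            and visibility in ODD["visibility"]
            and speed_kph <= ODD["max_speed_kph"])

print(within_odd("highway", "daytime", 35))   # True: inside the ODD
print(within_odd("urban_street", "fog", 35))  # False: refuse or hand back
```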

Machine Learning Systems:

AVs need to make many decisions to drive correctly. These decisions involve understanding the surroundings and controlling the vehicle. To get this right, various settings must be tuned precisely, like calibrating camera models and deciding how much weight to give different risks, such as swerving or stopping to avoid obstacles on the road. This often involves machine learning, which allows computers to learn from examples rather than being explicitly programmed.

There are different approaches to machine learning, such as learning from demonstration or using supervised and unsupervised methods. The idea is to use training examples to create the rules the vehicle follows when driving.

One challenge is that the training data effectively becomes the system’s requirements. In other words, what the vehicle learns from the training data becomes its rules for operating. Trying to avoid this by setting requirements for collecting training data shifts the problem to a different level.
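
A toy example makes the point concrete. In the sketch below (the distances and labels are invented), the “brake” rule the system ends up with comes entirely from the examples, not from any written specification, so changing the data changes the requirement:

```python
# Hypothetical training examples: (distance_to_obstacle_m, should_brake)
examples = [(5.0, True), (8.0, True), (20.0, False), (35.0, False)]

# "Training": place the decision threshold midway between the farthest
# braking example and the nearest non-braking example.
farthest_brake = max(d for d, brake in examples if brake)        # 8.0
nearest_no_brake = min(d for d, brake in examples if not brake)  # 20.0
threshold = (farthest_brake + nearest_no_brake) / 2              # 14.0 m

def should_brake(distance_m: float) -> bool:
    return distance_m < threshold

print(should_brake(10.0))  # True -- behavior derived from data, not a spec
```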

Deciding what makes good training data and how to validate it is still an open question. This might involve understanding the characteristics of the data and the processes used to collect it. Despite the complex-sounding terminology, this is essentially about teaching autonomous vehicles to drive by showing them many examples and checking that the right lessons were learned.

 
