The Implicit Controllability Pitfall for autonomous system safety
(This is an excerpt of our SSS 2019 paper: Koopman, P., Kane, A. & Black, J., "Credible Autonomy Safety Argumentation," Safety-Critical Systems Symposium, Bristol UK, Feb. 2019. Read the full text here)
A safety case must account not only for failures within the autonomy system, but also for failures within the vehicle. With a fully autonomous vehicle, responsibility for managing potentially unsafe equipment malfunctions that were previously mitigated by a human driver falls to the autonomy.
A subtle pitfall when arguing based on conformance to a safety standard is neglecting that assumptions made when assessing a subsystem might have been violated or changed by the use of autonomy. Of particular concern for ground vehicles is the “controllability” aspect of an ISO 26262 ASIL analysis. (Severity and exposure might also change for an autonomous vehicle due to different usage patterns and should also be considered, but are beyond the scope of this discussion.)
The risk analysis of an underlying conventional vehicle according to ISO 26262 requires taking into account the severity, exposure, and controllability of each hazard (ISO 2011). The controllability aspect assumes a competent human driver is available to react to and mitigate equipment malfunctions. Taking credit for some degree of controllability generally reduces the integrity requirements of a component. This idea is especially relevant for Advanced Driver-Assistance Systems (ADAS) safety arguments, in which it is assumed that the driver will intervene in a timely manner to correct any vehicle misbehavior, including potential ADAS malfunctions.
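To make the effect concrete, here is a minimal sketch (ours, not the paper's) of the ASIL determination table from ISO 26262 Part 3. The table happens to be equivalent to summing the S, E, and C class indices, which makes the controllability credit easy to see:

```python
# Minimal sketch of ISO 26262 ASIL determination (ours, not the paper's).
# The standard's table maps Severity (S1-S3), Exposure (E1-E4), and
# Controllability (C1-C3) to an ASIL; it is equivalent to summing the
# class indices: 10 -> D, 9 -> C, 8 -> B, 7 -> A, anything lower -> QM.

def asil(s: int, e: int, c: int) -> str:
    """Map S, E, C class indices to an ASIL."""
    assert 1 <= s <= 3 and 1 <= e <= 4 and 1 <= c <= 3
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(
        s + e + c, "QM")

# Taking credit for a competent driver (C1) keeps the required integrity
# low; removing that credit (C3) raises it by two levels for this hazard.
print(asil(s=3, e=4, c=1))  # ASIL B: hazard assumed simply controllable
print(asil(s=3, e=4, c=3))  # ASIL D: no human driver left to intervene
```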
With a fully autonomous vehicle, responsibility for managing potentially unsafe equipment malfunctions that were previously mitigated by a human driver falls to the autonomy. That means that all the assumed capabilities of a human driver that have been built into the safety arguments regarding underlying vehicle malfunctions are now imposed upon the autonomy system.
If the autonomy design team does not have access to the analysis behind underlying equipment safety arguments, there might be no practical way to know what all the controllability assumptions are. In other words, the makers of an autonomy kit might be left guessing what failure response capabilities they must provide to preserve the correctness of the safety argumentation for the underlying vehicle.
The need to mitigate some malfunctions is likely obvious, but we have found that “obvious” is in the eye of the beholder. Some examples of assumed human driver interventions that we have noted, or even experienced first-hand, include:
- Pressing hard on the brake pedal to compensate for loss of power assist
- Controlling the vehicle in the event of a tire blowout
- Path planning after catastrophic windshield damage from debris impact
- Manually pumping brakes when anti-lock brake mechanisms time out due to excessive activation on slick surfaces
- Navigating by ambient starlight after a lighting system electrical failure at speed, while bringing the vehicle to a stop
- Attempting to mitigate the effects of uncommanded propulsion power
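Each of these implicit assumptions becomes an explicit requirement on the autonomy. One way to make them visible is to record a mapping from credible vehicle malfunctions to the detection and response capabilities the autonomy must provide. Below is a sketch of such bookkeeping, with illustrative entries of our own rather than an authoritative list:

```python
# Hypothetical sketch: recording implicit controllability assumptions as
# explicit autonomy requirements. All entries are illustrative examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class ControllabilityRequirement:
    malfunction: str   # vehicle-level fault previously left to the driver
    detection: str     # how the autonomy is expected to notice it
    response: str      # the driver behavior the autonomy must replace

REQUIREMENTS = [
    ControllabilityRequirement(
        "loss of brake power assist",
        "brake pressure vs. deceleration mismatch",
        "command substantially higher braking force"),
    ControllabilityRequirement(
        "tire blowout",
        "wheel-speed and yaw-rate anomaly",
        "counter-steer and bring the vehicle to a controlled stop"),
    ControllabilityRequirement(
        "lighting system electrical failure at speed",
        "lighting circuit monitor",
        "stop safely using sensors that tolerate low light"),
]
```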
Creating a thorough safety argument will require either obtaining or reverse engineering all the controllability assumptions made in the design of the underlying vehicle. Then, the autonomy must be assessed to have an adequate ability to provide the safety relevant controllability assumed in the vehicle design, or an alternate safety argument must be made.
For cases in which the controllability assumptions are not available, there are at least two approaches that should both be used by a prudent design team. First, FMEA, HAZOP, and other appropriate analyses should be performed on vehicle components and safety relevant functions to ensure that the autonomy can react in a safe way to malfunctions. Such an analysis will likely struggle with whether it is safe to assume that the worst types of malfunctions will be adequately mitigated by the vehicle itself, without autonomy intervention.
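As one concrete shape for such an analysis, a HAZOP-style pass applies deviation guidewords to each safety relevant vehicle signal to enumerate candidate malfunctions for engineering review. The guidewords and signals below are illustrative, not a complete analysis:

```python
# Minimal HAZOP-style sketch: apply deviation guidewords to safety
# relevant vehicle signals to enumerate candidate malfunctions.
# Guideword and signal lists are illustrative, not complete.
from itertools import product

GUIDEWORDS = ["NO", "MORE", "LESS", "REVERSE", "LATE", "INTERMITTENT"]
SIGNALS = [
    "commanded brake torque",
    "reported direction of motion",
    "propulsion power",
    "steering angle feedback",
]

# Each (guideword, signal) pair is a candidate malfunction that needs a
# disposition: not credible, mitigated by the base vehicle alone, or
# requiring an autonomy-level reaction.
for word, signal in product(GUIDEWORDS, SIGNALS):
    print(f"{word}: {signal}")
```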
Second, defects reported on comparable production vehicles should be considered as credible malfunctions of the non-autonomous portions of any vehicle control system since they have already happened in production systems. Such malfunctions include issues such as the drivetrain reporting the opposite of the current direction of motion, uncommanded acceleration, significant braking lag, loss of headlights, and so on (Koopman 2018a).
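Field-reported defects of this kind can also be exercised directly through fault injection in simulation or on a test bench. The sketch below injects one of the defect classes named above, a drivetrain report that contradicts the true direction of motion; every interface name here is invented for illustration:

```python
# Hypothetical fault-injection sketch for one field-reported defect class:
# the drivetrain reporting the opposite of the current direction of motion
# (Koopman 2018a). All interfaces are stand-ins, not a real vehicle API.

def inject_reversed_gear_report(true_velocity_mps: float) -> dict:
    """Build a vehicle state whose gear report contradicts actual motion."""
    return {
        "velocity_mps": true_velocity_mps,   # ground truth: moving forward
        "reported_direction": "REVERSE",     # injected contradictory report
    }

def direction_fault_detected(state: dict) -> bool:
    """Plausibility check the autonomy must perform for itself: does the
    reported direction contradict measured motion?"""
    moving_forward = state["velocity_mps"] > 0.5
    return moving_forward and state["reported_direction"] != "FORWARD"

# The injected defect must trip the check so the autonomy can fall back to
# a minimal risk maneuver instead of trusting the drivetrain report.
assert direction_fault_detected(inject_reversed_gear_report(10.0))
```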
- ISO (2011) Road vehicles -- Functional Safety -- Management of functional safety, ISO 26262, International Standards Organization, 2011.
- Koopman, P., (2018a), Potentially deadly automotive software defects, https://betterembsw.blogspot.com/2018/09/potentially-deadly-automotive-software.html, Sept. 25, 2018.