The Discounted Failure Pitfall for autonomous system safety

The Discounted Failure Pitfall: Arguing that something is safe because it has never failed before doesn't work if you keep discounting its failures based on that same reasoning. A particularly tricky pitfall occurs when a proven-in-use argument is based upon a lack of observed field failures...
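(A back-of-the-envelope note of my own, not from the post: if a component's true per-mission failure probability is p and you observe N failure-free missions, the 95% upper confidence bound on p solves (1 - p)^N = 0.05, which works out to roughly p = 3/N -- the statistical "rule of three." So 10,000 clean missions only bound the failure rate to about 1 in 3,300, far short of typical safety targets.)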

The “Small” Change Fallacy for autonomous system safety

The “small” change fallacy: In software, even a single-character change to source code can cause catastrophic failure. Short of an extremely rigorous change analysis process, there is no such thing as a "small" change to software. Headline: July 22, 1962: Mariner 1 Done...
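To make that concrete, here is a minimal C sketch of my own (illustrative names and fault codes, not code from the post) in which deleting a single character -- "==" becoming "=" -- silently turns a comparison into an assignment and flips the guard's behavior:

    /* Hypothetical sketch: a one-character edit with a large effect. */
    #include <stdio.h>

    #define FAULT_OVERPRESSURE 7   /* illustrative fault code */

    static void emergency_vent(void) { puts("venting"); }

    static void check_fault(int fault_code) {
        if (fault_code == FAULT_OVERPRESSURE) {  /* correct: vents only on fault 7 */
            emergency_vent();
        }
    }

    static void check_fault_after_small_change(int fault_code) {
        if (fault_code = FAULT_OVERPRESSURE) {   /* one char deleted: assigns 7, always true */
            emergency_vent();
        }
    }

    int main(void) {
        check_fault(0);                      /* silent, as intended */
        check_fault_after_small_change(0);   /* vents spuriously */
        return 0;
    }

A decent compiler will warn about this particular one, but many equally tiny edits (a flipped comparison operator, an off-by-one bound, a wrong constant) compile cleanly and pass casual review.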

Assurance Pitfalls When Using COTS Components

Assurance Pitfalls When Using COTS Components: Using a name-brand, familiar component doesn't automatically ensure safety. It is common to repurpose Commercial Off-The-Shelf (COTS) software or components for use in critical autonomous vehicle applications. These include components...

Proven In Use: The Violated Assumptions Pitfall for Autonomous Vehicle Validation

The Violated Assumptions Pitfall: Arguing that something is proven in use must address whether the new use involves sufficiently similar assumptions and operational conditions to the old one. The proven-in-use argumentation pattern uses field experience of a component (or, potentially, an engineering...

Pitfall: Arguing via Compliance with an Inappropriate Safety Standard

For a standard to help ensure you are actually safe, you not only have to actually follow it; it must also be a suitable standard in terms of domain, scope, and the level of integrity it assures. Proprietary safety standards often don't actually make you safe. Historically, some car makers...

The Implicit Controllability Pitfall for autonomous system safety

The Implicit Controllability Pitfall: A safety case must account for not only failures within the autonomy system, but also failures within the vehicle. With a fully autonomous vehicle, responsibility for managing potentially unsafe equipment malfunctions that were previously...

Edge Cases and Autonomous Vehicle Safety -- SSS 2019 Keynote

Here is my keynote talk for SSS 2019 in Bristol UK.

Edge Cases and Autonomous Vehicle Safety

Making self-driving cars safe will require a combination of techniques. ISO 26262 and the draft SOTIF standards will help with vehicle control and trajectory stages of the autonomy pipeline. Planning might be made safe using a doer/checker architectural pattern...
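For readers new to the pattern, here is a deliberately oversimplified C sketch of the doer/checker idea (my own illustration with invented names and an invented speed limit, not material from the talk): a complex, hard-to-verify doer proposes a command, and a simple checker that can be verified to a high integrity level gates it against a safety envelope.

    /* Hypothetical doer/checker sketch; all names and limits are invented. */
    #include <stdio.h>

    #define MAX_SAFE_SPEED_MPS 10.0   /* assumed safety envelope for this example */

    /* Doer: complex, hard-to-verify planner (stubbed out here). */
    static double doer_propose_speed(void) {
        return 14.2;   /* pretend the planner requests an unsafe speed */
    }

    /* Checker: simple enough to verify independently to a high integrity level. */
    static double checker_gate(double proposed_mps) {
        if (proposed_mps > MAX_SAFE_SPEED_MPS) {
            return MAX_SAFE_SPEED_MPS;   /* clamp to a known-safe action */
        }
        return proposed_mps;
    }

    int main(void) {
        double cmd = checker_gate(doer_propose_speed());
        printf("actuated speed: %.1f m/s\n", cmd);   /* 10.0, not 14.2 */
        return 0;
    }

A real checker would typically trigger a safe fallback maneuver rather than just clamping a value, but the architectural role is the same: the safety argument rests on the simple checker, not on the clever doer.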

Command Override Anti-Pattern for autonomous system safety

Command Override Anti-Pattern: Don't let a non-critical "Doer" override the safety-critical "Checker." If you do, the Doer can tell the Checker that something is safe when it isn't. (First in a series of postings on pitfalls and fallacies we've seen used in safety assurance...
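In terms of the hypothetical sketch from the keynote entry above (again, invented names, not code from the post), the anti-pattern appears when the checker accepts a "trust me" input from the doer that can silence its veto:

    /* ANTI-PATTERN sketch: the doer supplies a flag that defeats the checker. */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_SAFE_SPEED_MPS 10.0

    static double gate_with_override(double proposed_mps, bool doer_says_safe) {
        if (proposed_mps > MAX_SAFE_SPEED_MPS && !doer_says_safe) {
            return MAX_SAFE_SPEED_MPS;
        }
        return proposed_mps;   /* the non-critical doer has overridden the check */
    }

    int main(void) {
        /* The doer asserts "safe" and an unsafe command reaches the actuators. */
        printf("actuated speed: %.1f m/s\n", gate_with_override(14.2, true));
        return 0;
    }

The structural fix is that nothing the doer outputs may weaken the check: the checker's verdict on actuation must be final.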

Credible Autonomy Safety Argumentation Paper

Here is our paper on pitfalls in safety argumentation for autonomous systems for SSS 2019. My keynote talk will mostly be about perception stress testing, but I'm of course happy to talk about this paper as well at the meeting.

Credible Autonomy Safety Argumentation
Philip Koopman, Aaron Kane, Jen Black
Carnegie Mellon University, Edge Case Research...