Ethical Problems That Matter for Self Driving Cars

It's time to get past the irrelevant Trolley Problem and talk about ethical issues that actually matter in the real world of self-driving cars. Here's a starter list involving public road testing, human driver responsibilities, safety confidence, and grappling with how safe is safe enough.


  • Public Road Testing. Public road testing clearly puts non-participants such as pedestrians at risk. Is it OK to test on unconsenting human subjects? Unless the government has given explicit permission to road test in a particular location, arguably that is exactly what is (or has been) happening. An argument that simply having a "safety driver" mitigates risk is clearly insufficient in light of the tragic fatality in Tempe, AZ last year.
  • Expecting Human Drivers to be Super-Human. High-end driver assistance systems might be asking the impossible of human drivers. Simply warning the driver that (s)he is responsible for vehicle safety doesn't change the well-known fact that humans struggle to supervise high-end autonomy effectively, and that humans are prone to abusing highly automated systems. This raises questions such as:
    • At what point is it unethical to hold drivers accountable for tasks that require what amount to super-human abilities and performance?
    • Are there viable ethical approaches to solving this problem? For example, if a human unconsciously learns how to game a driver monitoring system (e.g., via falling asleep with eyes open -- yes, that is a thing) should that still be the human driver's fault if a crash occurs?
    • Is it OK to deploy technology that will result in drivers being punished for not being super-human, if the result is that the total death rate declines?
  • Confidence in Safety Before Deployment. There is work arguing that deployment is acceptable once a vehicle is even slightly safer than a human driver (https://www.rand.org/blog/articles/2017/11/why-waiting-for-perfect-autonomous-vehicles-may-cost-lives.html). But there isn't much discussion of the next level of detail: what does that really mean in practice? Important ethical sub-topics include:
    • Who decides when a vehicle is safe enough to deploy? Should that decision be made by a company on its own, or subject to external checks and balances? Is it OK for a company to deploy a vehicle they think is safe based on subjective criteria alone: "we're smart, we worked hard, and we're convinced this will save lives"?
    • What confidence is required for the actual prediction of casualties from the technology? If you are only statistically 20% confident that your self-driving car will be no more dangerous than a human driver, is that enough? (A back-of-envelope sketch of what such confidence numbers mean follows this list.)
    • Should limited government resources that could be used for addressing known road safety issues (drunk driving, driving too fast for conditions, lack of seat belt use, distracted driving) be diverted to support self-driving vehicle initiatives using an argument of potential public safety improvement?
  • How Safe is Safe Enough? Even if we understand the relationship between an aggregate safety goal and self-driving car technology, where do we set the safety knob?  How will the following issues affect this?
    • Will risk homeostasis apply? There is an argument that there will be pressure to turn up the speed/traffic volume dials on self-driving cars to increase permissiveness and traffic flow until the same risk as manual driving is reached. (Think more capable cars resulting in crazier roads with the same net injury and fatality rates.)
    • Is it OK to deploy initially with a higher expected death rate than human drivers, under an assumption that systems will improve over time, reducing the total number of deaths in the long run? (And is it OK for this improvement to be assumed rather than proven to be likely?)
    • What redistribution of victim demographics is OK? If fewer passengers die but more pedestrians die, is that OK if the net death rate is the same? Is it OK if deaths disproportionately occur to specific sub-populations? Did any evaluation of safety before deployment account for these possibilities?
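
To make the statistical-confidence question concrete, here is a minimal back-of-envelope sketch (my own illustration, not taken from any cited source). It assumes fatalities follow a simple Poisson process and uses a rough human-driver baseline of about one fatality per 100 million miles; given zero fatalities observed, the confidence that the true rate is at or below the baseline works out to 1 - exp(-rate x miles):

```python
# Back-of-envelope: confidence that an AV's fatality rate is no worse than
# a human baseline, given fatality-free test mileage (Poisson zero-event
# model; the baseline rate is a rough illustrative assumption).
import math

HUMAN_FATALITY_RATE = 1.0 / 100_000_000  # assumed: ~1 fatality per 100M miles

def confidence_rate_at_or_below(miles_without_fatality: float,
                                target_rate: float) -> float:
    """Confidence that the true fatality rate is <= target_rate, given
    zero fatalities observed over the given mileage."""
    return 1.0 - math.exp(-target_rate * miles_without_fatality)

for miles in (20e6, 100e6, 300e6):
    c = confidence_rate_at_or_below(miles, HUMAN_FATALITY_RATE)
    print(f"{miles/1e6:5.0f}M fatality-free miles -> {c:4.0%} confidence")
```

Under these assumptions, 20 million fatality-free miles only buys roughly 20% confidence, and it takes on the order of 300 million fatality-free miles to reach 95% confidence. That is why "how confident are you, really?" is a substantive question rather than a formality.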
I don't purport to have the definitive answers to any of these problems (except a proposal for road testing safety, cited above). And it might be that some of these problems are more or less answered. The point is that there is so much important, relevant ethical work to be done that people shouldn't be wasting their time trying to apply the Trolley Problem to AVs. I encourage follow-ups with pointers to relevant work.

If you're still wondering about Trolley-esque situations, see this podcast and the corresponding paper. The short version from the abstract of that paper: Trolley problems are "too contrived to be of practical use, are an inappropriate method for making decisions on issues of safety, and should not be used to inform engineering or policy." In general, it should be incredibly rare for a safely designed self-driving car to get into a no-win situation, and if one does occur, the vehicle isn't going to have information about the potential victims, and/or isn't going to have the control authority to actually behave as suggested in the experiments, any time soon, if ever.

Here are some links to more information about applying ethics to technical systems in general (@IEEESSIT) and autonomy in particular (https://ethicsinaction.ieee.org/), as well as the IEEE P7000 standard series (https://www.standardsuniversity.org/e-magazine/march-2017/ethically-aligned-standards-a-model-for-the-future/).


Car Drivers Do More than Drive

How will self-driving cars handle all the non-driving tasks that drivers also perform? How will they make unaccompanied kids stop sticking their heads out the window?

[Image: https://pixabay.com/photos/stuffed-animals-classic-car-driving-2798333/]
Hey Kids -- Don't stick your heads out the window!

The conversation about self-driving cars is almost all about whether a computer can safely perform the "dynamic driving task." As well it should be -- at first.  If that part isn't safe, then there isn't much to talk about.

But, looking forward, human drivers do more than drive. They also provide adult supervision (and, on a good day, mature judgement) about the operation of the vehicle in other respects. If you've never heard the phrase "stop doing that right now or I swear I'm going to stop the car!" then probably you've never ridden in a car with multiple children. And yet, we're already talking about sending kids to school in an automated school bus. Presumably the point is to avoid the cost of human supervision.

But is putting a bunch of kids in a school bus without an adult a good idea?  Will the red-faced person on the TV monitor yelling at the kids really be effective?  Or just provide entertainment for already screaming kids?

But there's more than that to consider. Here's my start at a list of things human drivers (including vehicle owners, taxi drivers, and so on) do that aren't really driving.

Some tasks will arguably be done by a fleet maintenance function:
  • Preflight inspection of vehicle. (Flat tires, structural damage.)
  • Preflight correction of issues. (Cleaning off snow and ice. Cleaning windshield.)
  • Ensure routine maintenance has been performed. (Vehicle inspections, good tires, fueling/charging, fluid top-off if needed.)
  • Maintain vehicle interior cleanliness.  And we're not just talking about empty water bottles here. (Might require taking vehicle out of service for cleaning up motion sickness results. But somehow the maintenance crew needs to know there has been a problem.)
But some things have to happen on the road when no human driver is present. Examples include:
  • Ensure vehicle occupants stay properly seated and secured.
  • Keep vehicle occupants from doing unsafe things. (Hand out window, head out sunroof, fighting, who knows what. Generally providing adult supervision. Especially if strangers or kids are sharing a vehicle.)
  • Responding to cargo that comes loose.
  • Emergency egress coordination (e.g., getting sleeping children, injured, and mobility impaired passengers out of vehicle when a dangerous situation occurs such as a vehicle fire)
Anyone who seriously wants to build vehicles that don't have a designated "person in charge" (which is the driver in conventional vehicles) will need to think through all these issues, and likely more. Any argument that a self-driving vehicle is safe for unattended service will need to address them as well. (UL 4600 is intended to cover all this ground.)

Can you think of any other non-driving tasks that need to be handled?

Evolution of the Motor Vehicle

  • 1860: The Frenchman Lenoir constructs the first internal-combustion engine; this powerplant relies on city gas as its fuel source. Thermal efficiency is in the 3% range.
  • 1867: Otto and Langen display an improved internal-combustion engine at the Paris International Exhibition. Its thermal efficiency is approximately 9%.
  • 1876: Otto builds the first gas-powered engine to utilise the four-stroke compression cycle. At virtually the same time, Clerk constructs the first gas-powered two-stroke engine in England.
  • 1883: Daimler and Maybach develop the first high-speed four-cycle petrol engine, using a hot-tube ignition system.
  • 1885: The first automobile from Benz (patented in 1886). First self-propelled motorcycle from Daimler.
  • 1886: First four-wheeled motor carriage with petrol engine from Daimler.
  • 1887: Bosch invents the magneto ignition.
  • 1889: Dunlop in England produces the first pneumatic tyres.
  • 1893: Maybach invents the spray-nozzle carburettor. Diesel patents his design for a heavy oil-burning powerplant employing the self-ignition concept.
  • 1897: MAN presents the first workable diesel engine. First Electromobile from Lohner-Porsche.
  • 1913: Ford introduces the production line to automotive manufacturing. Production of the Tin Lizzy (Model T) begins; by 1925, 9,109 were leaving the production line each day.
  • 1916: The Bavarian Motor Works are founded.
  • 1923: First motor lorry powered by a diesel engine, produced by Benz-MAN.
  • 1936: Daimler-Benz inaugurates series production of passenger cars propelled by diesel engines.
  • 1938: The VW Works are founded in Wolfsburg.
  • 1949: First low-profile tyre and first steel-belted radial tyre produced by Michelin.
  • 1954: NSU-Wankel constructs the rotary engine.
  • 1966: Electronic fuel injection (D-Jetronic) for standard production vehicles produced by Bosch.
  • 1970: Seatbelts for driver and front passengers.
  • 1978: Mercedes-Benz installs the first Antilock Braking System (ABS) in vehicles.
  • 1984: Debut of the airbag and seatbelt tensioning system.
  • 1985: Advent of a catalytic converter designed for operation in conjunction with closed-loop mixture control, intended for use with unleaded fuel.
  • 1997: Electronic Stability Program (ESP) systems. Toyota builds the first passenger car with a hybrid drive. Alfa Romeo introduces the common-rail direct injection (CRDI) system for diesel engines.
  • As of 2000: Advanced driver assistance systems, such as parking assistance, distance warning systems, and lane change assistance.


(Source: Modern Automotive Technology: Fundamentals, Service, Diagnostics, 2nd English edition. The German edition was written by technical instructors, engineers, and technicians; editorial office (German edition): R. Gscheidle, Studiendirektor, Winnenden – Stuttgart. Verlag Europa-Lehrmittel, Nourney, Vollmer GmbH & Co. KG, Düsselberger Straße 23, 42781 Haan-Gruiten, Germany.)

Other Autonomous Vehicle Safety Argument Observations

We've seen some teams get it right. And some get it wrong. Don't make these mistakes if you're trying to ensure your autonomous vehicle is safe.

Defective disengagement mechanisms. Generally this involves an arbitrary fail-active autonomy failure being able to prevent successful disengagement by a human supervisor. As a concrete example, a system might read the state of the disengagement activation mechanism (the "big red button") as an I/O device fed directly into the primary autonomy computer, rather than using an independent safing mechanism. This is a special case of a single point of failure in the form of the autonomy computer.
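
A minimal sketch of the architectural difference (all class and function names here are hypothetical, invented for illustration; this is not any particular vehicle's design):

```python
# Sketch: why the disengagement path must not depend on the health of the
# autonomy computer it exists to override. All names are hypothetical.

class BigRedButton:
    def __init__(self) -> None:
        self.is_pressed = False

class Actuators:
    def __init__(self) -> None:
        self.autonomy_engaged = True
    def cut_autonomy(self) -> None:
        self.autonomy_engaged = False

# ANTI-PATTERN: the button is just another input polled by the primary
# autonomy computer. A fail-active fault that stops input processing
# also disables disengagement -- a single point of failure.
def autonomy_computer_step(button: BigRedButton, actuators: Actuators,
                           input_processing_alive: bool) -> None:
    if input_processing_alive and button.is_pressed:
        actuators.cut_autonomy()

# SAFER PATTERN: an independent safing channel (e.g., a simple relay or a
# small supervisor processor) connects the button to actuation directly,
# so it still works when the autonomy computer has failed active.
def independent_safing_step(button: BigRedButton, actuators: Actuators) -> None:
    if button.is_pressed:
        actuators.cut_autonomy()

if __name__ == "__main__":
    button, actuators = BigRedButton(), Actuators()
    button.is_pressed = True
    autonomy_computer_step(button, actuators, input_processing_alive=False)
    print("after failed autonomy computer:", actuators.autonomy_engaged)  # True (!)
    independent_safing_step(button, actuators)
    print("after independent safing path:", actuators.autonomy_engaged)   # False
```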

Assuming perception failures are independent. Some arguments assume independent failures of multiple perception modes, even though different sensors can share common-cause failure modes (for example, weather that degrades camera and lidar at the same time, or an unusual object that no training set covered). While there is clearly utility in creating a safety case for the non-perception parts of an autonomous vehicle, one must argue rather than assume the safety of perception to create a credible safety case at the vehicle level.
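
A toy numeric illustration of how large the gap can be (all rates below are invented for illustration only):

```python
# Toy illustration: independence assumptions can wildly overstate safety
# when perception channels share a common cause. All rates are made up.
p_camera_miss = 1e-3   # assumed per-object miss rate, camera channel
p_lidar_miss  = 1e-3   # assumed per-object miss rate, lidar channel

# Independence assumption: both channels almost never miss together.
p_both_independent = p_camera_miss * p_lidar_miss        # 1e-6

# Common-cause model: some fraction of conditions (e.g., dense fog)
# defeats both channels at once.
p_shared_cause = 1e-4                                    # assumed
p_both_common_cause = p_shared_cause + p_camera_miss * p_lidar_miss

print(f"independence model : {p_both_independent:.1e}")   # 1.0e-06
print(f"common-cause model : {p_both_common_cause:.1e}")  # ~1.0e-04 (100x worse)
```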

Requiring perfect human supervision of autonomy. Humans are well known to struggle when assigned such monitoring tasks. Koopman et al. (2019) cover this topic in more detail as it relates to autonomous vehicle road testing safety.

Dismissing a potential fault as "unrealistic" without supporting data. For example, argumentation might state that a lightning strike on a moving vehicle is unrealistic or could not happen in the "real world," despite data to the contrary (e.g., Holle 2008). To be sure, this does not mean that something like a lightning strike must be completely mitigated by keeping the vehicle fully operational. Rather, such faults must be considered in risk analysis. Dismissing hazards without risk analysis, based on a subjective assertion that they are "unrealistic," results in a safety case with insufficient evidence.
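
A minimal risk-screening sketch of why "rare" is not the same as "unrealistic" (every number below is a placeholder assumption, not a measurement):

```python
# Fleet-scale screening: hazards that are rare per vehicle-hour can still
# be near-certainties across a deployed fleet. All numbers are assumed.
fleet_vehicles    = 10_000    # assumed fleet size
hours_per_vehicle = 3_000     # assumed operating hours per vehicle-year
strikes_per_hour  = 1e-7      # assumed lightning-strike rate per operating hour

expected_strikes_per_year = fleet_vehicles * hours_per_vehicle * strikes_per_hour
print(f"expected fleet strikes per year: {expected_strikes_per_year:.1f}")  # ~3.0
# A few strikes per year across the fleet means the hazard belongs in the
# risk analysis -- it cannot credibly be waved away as "unrealistic."
```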

Using multi-channel comparison approaches for autonomy. In general, autonomy algorithms are nondeterministic, sensitive to initial conditions, and have many acceptable (or at least safe) behaviors for any given situation. Architectural approaches based on voting diverse autonomy algorithms tend to run into a problem of deciding whether the outputs are close enough to be valid. Averaging and other similar approaches are not necessarily appropriate. As a simple example, the average of veering to the right and veering to the left to avoid an obstacle could result in hitting the obstacle dead-on.
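
A toy example of the averaging failure mode (numbers invented for illustration):

```python
# Two diverse planners each produce a safe steering command around an
# obstacle dead ahead; a naive average of their outputs is unsafe.
channel_a_deg = -30.0   # swerve left around the obstacle: safe
channel_b_deg = +30.0   # swerve right around the obstacle: safe

averaged = (channel_a_deg + channel_b_deg) / 2.0
print(f"averaged steering command: {averaged:+.1f} deg")  # +0.0: straight ahead

# A comparison voter fares no better: the two safe outputs differ by 60
# degrees, so any "close enough?" threshold flags a mismatch even though
# both channels are behaving acceptably.
```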

Confusion about fault vs. failure. While there is a widely recognized terminology document for dependable system design (Avizienis 2004), we have found that there is widespread confusion about the terms fault and failure in practical use. This is especially true when discussing malfunctions that are not due to a component fault, but rather a requirements gap or an excursion from the intended operational environment. It is beyond the scope of this paper to attempt to resolve this, but we note it as an area worthy of future work and particular attention in interdisciplinary discussions of autonomy safety.

(This is an excerpt from our SSS 2019 paper: Koopman, P., Kane, A. & Black, J., "Credible Autonomy Safety Argumentation," Safety-Critical Systems Symposium, Bristol, UK, Feb. 2019. Read the full text here.)

  • Avizienis, A., Laprie, J.-C., Randell, B. & Landwehr, C. (2004) "Basic concepts and taxonomy of dependable and secure computing," IEEE Transactions on Dependable and Secure Computing, 1(1):11-33, 2004.
  • Holle, R. (2008) "Lightning-caused deaths and injuries in the vicinity of vehicles," American Meteorological Society Conference on Meteorological Applications of Lightning Data, 2008.
  • Koopman, P. and Latronico, B. (2019) "Safety Argument Considerations for Public Road Testing of Autonomous Vehicles," SAE WCX, 2019.