The UK already has a pretty good answer for how to do self-driving car testing safely. US stakeholders could learn something from it.
You can see the document for yourself at:
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/446316/pathway-driverless-cars.pdf

As industry and various governmental organizations decide what to do in response to the tragic pedestrian accident in Tempe, Arizona, it's worth looking abroad to see what others have done. As it turns out, the UK Department for Transport issued a 14-page document in July 2015: "The Pathway to Driverless Cars: A Code of Practice for testing." It covers test drivers, vehicle equipment, licensing, insurance, data recording, and more. So far so good, and kudos for specifically addressing the topic of test platform safety that long ago!
As I'd expect from a UK safety document, there is a lot to like. I'm not going to try to summarize it all, but here are some comments on specific sections that are worth noting. Overall, I think the content is useful and will help improve safety when testing Autonomous Vehicle (AV) technology on public roads. My only criticism is that it doesn't go quite far enough in a couple of places.
First, it is light on making sure that the safety process is actually performing as intended. For example, they say it's important to make sure that test drivers are not fatigued, which is good. But they don't explicitly say that you need to take operational data to confirm that the procedures intended to mitigate fatigue are actually resulting in alert drivers. Similarly, they say that test drivers need time to react, but they don't require feedback to make sure that on-road vehicles are actually leaving drivers enough time to react during operation. (Note that this is tricky to get right, because distracted drivers take longer to react, so you really need to ensure that field operations are leaving sufficient reaction-time margin.)
In fairness, they do say "robust procedures," and to a safety person it should be obvious that this includes taking data to make sure the procedures are actually working. Nonetheless, I've found in practice that it's important to spell out the need for feedback to correct safety issues. In a high-stakes environment such as the autonomy race to market, it's only natural that testers will be under pressure to cut corners. The only way I know of to ensure that "aggressive" technology maturation doesn't cross the line into being unsafe is to have continual feedback from field operations confirming that the assumptions and strategy underlying the safety plan are actually effective and working as intended. For example, you should detect and correct systemic problems with safety driver alertness long before you experience a pedestrian fatality.
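To make the feedback loop concrete, here is a minimal sketch (in Python, with made-up event fields and thresholds, none of which come from the UK document) of how recorded takeover events could be checked against the reaction-time assumption in the safety plan:

```python
from statistics import mean

# Hypothetical takeover events gathered from road testing; the field names and
# numbers are illustrative assumptions, not from the UK document.
# "warning_to_hazard_s" is how much time the vehicle left between alerting the
# safety driver and the point where intervention was needed; "reaction_s" is
# how long the driver actually took to take over.
takeover_events = [
    {"warning_to_hazard_s": 4.0, "reaction_s": 1.2},
    {"warning_to_hazard_s": 3.5, "reaction_s": 2.9},   # distracted driver: slow reaction
    {"warning_to_hazard_s": 2.0, "reaction_s": 1.0},
]

REQUIRED_MARGIN_S = 1.5  # assumed minimum spare time the safety case requires

def margin(event):
    """Spare time left after the driver actually reacted."""
    return event["warning_to_hazard_s"] - event["reaction_s"]

margins = [margin(e) for e in takeover_events]
violations = [m for m in margins if m < REQUIRED_MARGIN_S]

print(f"mean margin: {mean(margins):.2f} s")
print(f"events below required margin: {len(violations)} of {len(margins)}")
# A rising violation count is the feedback signal: the safety plan's assumption
# about driver reaction time is not holding up in field operation.
```

The point is not the specific numbers, but that the data is routinely collected and checked, so the safety argument gets corrected before a mishap rather than after one.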
Second, although they say it's important for the takeover mechanism to work, they don't specifically require designing it according to a suitable functional safety standard. Again, for a safety person this should be obvious, and quite possibly it was so obvious to the authors of this document that they didn't bother mentioning it. But again it's worth spelling out.
To be clear, any on-road testing of AV technology should be no more dangerous than the normal operation of a human-driven, non-autonomous vehicle. That's the whole purpose of having a safety driver! But getting safety drivers to be that good in practice can be a challenge. Rather than succumb to pessimism about whether testing can actually be safe, I say let the AV developers prove that they can handle this challenge with a transparent, public safety argument. (See also my previous posting on safe AV testing for a high-level take on things.)
The UK testing safety document is well worth considering by any governmental agency or AV company contemplating how on-road testing of AV technology should be done.
Below are some more detailed notes. The bullets are from the source document, with some informal comments after each bullet:
- 1.3: ensure that on-road testing "is carried out with the minimum practicable risk"
This appears to be invoking the UK legal concepts of ALARP ("As Low As Reasonably Practicable") and SFAIRP ("So Far As Is Reasonably Practicable"). These are technical concepts, not intuitive ones. You can't simply say "this ought to be OK because I think it's OK." Rather, you need to demonstrate via a rigorous engineering process that you've done everything reasonably practicable to reduce risk.
- 3.4 Testing organisations should:
- ... Conduct risk analysis of any proposed tests and have appropriate risk management strategies.
- Be conscious of the effect of the use of such test vehicles on other road users and plan trials to manage the risk of adverse impacts.
It's not OK to start driving around without having done some work to understand and mitigate risks.
- 4.16 Testing organisations should develop robust procedures to ensure that test drivers and operators are sufficiently alert to perform their role and do not suffer fatigue. This could include setting limits for the amount of time that test drivers or operators perform such a role per day and the maximum duration of any one test period.
The test drivers have to stay alert. Simply setting the limits isn't enough. You have to actually make sure the limits are followed, that there isn't undue pressure for drivers to skip breaks, and in the end you have to make sure that drivers are actually alert. Solving alertness issues by firing sleepy drivers doesn't fix any systemic problem with alertness -- it just gives you fresh drivers who will have just as much trouble staying alert as the drivers you just fired.
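As a rough illustration of what "making sure the limits are followed" might look like in practice, here is a small sketch with hypothetical shift records and limits; the same data also reveals whether the alertness problem is systemic rather than a matter of individual drivers:

```python
# Hypothetical shift records; names, dates, and limits are illustrative assumptions.
shifts = [
    {"driver": "A", "date": "2018-03-01", "hours": 3.5, "fatigue_alarms": 0},
    {"driver": "A", "date": "2018-03-02", "hours": 5.0, "fatigue_alarms": 1},
    {"driver": "B", "date": "2018-03-01", "hours": 4.5, "fatigue_alarms": 2},
]

MAX_HOURS_PER_DAY = 4.0  # assumed policy limit of the kind section 4.16 describes

# Check that the stated limits are actually being followed...
over_limit = [s for s in shifts if s["hours"] > MAX_HOURS_PER_DAY]

# ...and look for a systemic alertness problem across the whole fleet,
# rather than blaming individual drivers.
alarms_per_hour = sum(s["fatigue_alarms"] for s in shifts) / sum(s["hours"] for s in shifts)

print(f"shifts exceeding the daily limit: {len(over_limit)}")
print(f"fleet-wide fatigue alarms per driving hour: {alarms_per_hour:.2f}")
```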
- 4.20 Test drivers and operators should be conscious of their appearance to other road users, for example continuing to maintain gaze directions appropriate for normal driving.
This appears to address the problem of other road users interacting with an AV. The theory seems to be that if, for example, the test driver makes eye contact with a pedestrian at a crosswalk, the pedestrian can expect that even if the vehicle makes a mistake the test driver will intervene to give the pedestrian the right of way. This seems like a sensible requirement, and it could also help the safety driver remain engaged with the driving task.
- 5.3 Organisations wishing to test automated vehicles on public roads or in other public places will need to ensure that the vehicles have successfully completed in-house testing on closed roads or test tracks.
- 5.4 Organisations should determine, as part of their risk management procedures, when sufficient in-house testing has been completed to have confidence that public road testing can proceed without creating additional risk to road users. Testing organisations should maintain an audit trail of such evidence.
You should *not* be doing initial development on public roads. You should be using extensive analysis and simulation to be pretty sure everything is going to work before you ever get near a public road. On-road testing should be a check that things are OK and that there are no surprises. (Moreover, surprises should be fed back to development to avoid similar surprises in the future.) You should have written records showing that you've done the right amount of validation before you ever operate on public roads. (emphasis added)
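As one way to picture the audit trail called for in 5.4, here is a minimal sketch of an evidence gate; the evidence categories are my own illustrative guesses at what "sufficient in-house testing" might need to cover:

```python
# A minimal sketch of an audit-trail gate; the evidence categories are
# illustrative assumptions, not a list from the UK document.
REQUIRED_EVIDENCE = {
    "hazard_analysis",        # risk analysis (section 3.4)
    "simulation_results",     # bench testing and simulation (section 5.21)
    "closed_track_results",   # closed road / test track results (section 5.3)
    "takeover_validation",    # evidence the manual takeover works (section 5.18)
}

def ready_for_public_roads(evidence_on_file: dict) -> bool:
    """Allow public road testing only if every required evidence item is recorded."""
    missing = REQUIRED_EVIDENCE - {k for k, v in evidence_on_file.items() if v}
    for item in sorted(missing):
        print(f"blocking public road testing: missing {item}")
    return not missing

# Example: one item is missing, so the gate blocks public road testing.
evidence = {
    "hazard_analysis": "docs/hazard_log_v3.pdf",
    "simulation_results": "reports/sim_campaign_12.html",
    "closed_track_results": "reports/track_week_7.pdf",
    "takeover_validation": None,
}
print(ready_for_public_roads(evidence))
```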
- 5.5 Vehicle sensor and control systems should be sufficiently developed to be capable of appropriately responding to all types of road user which may typically be encountered during the test in question. This includes more vulnerable road users for example disabled people, those with visual or hearing impairments, pedestrians, cyclists, motorcyclists, children and horse-riders.
Part of your development should include making sure the system can deal with at-risk road users. This means there should be a minimal chance that a pedestrian or other at-risk road user will be put into danger by the AV even without safety driver intervention. (The safety driver should be handling unexpected surprises, and not be relied upon as a primary control mechanism during road testing.)
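One simple way to track this during development might be a coverage check of validation evidence against the road-user categories listed in 5.5; the class names below are illustrative assumptions about how a perception stack might label them:

```python
# Road-user categories paraphrased from section 5.5; labels are illustrative.
REQUIRED_ROAD_USER_CLASSES = {
    "pedestrian", "cyclist", "motorcyclist", "child", "horse_rider",
}

# Hypothetical set of classes the perception stack has validation evidence for.
validated_classes = {"pedestrian", "cyclist", "motorcyclist", "car", "truck"}

missing = REQUIRED_ROAD_USER_CLASSES - validated_classes
if missing:
    print("perception validation does not yet cover:", ", ".join(sorted(missing)))
```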
- 5.8 This data should be able to be used to determine who or what was controlling the vehicle at the time of an incident. The data should be securely stored and should be provided to the relevant authorities upon request. It is expected that testing organisations will cooperate fully with the relevant authorities in the event of an investigation
With regard to data recording, there should be no debate over whether the autonomy was in control at the time of the mishap. (How can it possibly be that a developer says "we're not sure if the autonomy was in control at the time of the mishap"? Yet I've heard this on the news more than once.) It's also important to be transparent about the role of autonomy in the moments just before any mishap. For example, if the autonomy disengages a fraction of a second before impact, it's unreasonable to just blame the human driver without a more thorough investigation.
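To show why this should be straightforward, here is a minimal sketch of answering "who was in control?" from a control-mode log, including a check for a disengagement just before the incident; the log format and lookback window are assumptions for illustration:

```python
from bisect import bisect_right

# Hypothetical control-mode log: (timestamp_s, mode) recorded whenever control
# authority changes. The layout and values are illustrative assumptions.
control_log = [
    (0.0,    "AUTONOMY"),
    (812.4,  "MANUAL"),    # safety driver takeover
    (955.0,  "AUTONOMY"),
    (1203.7, "MANUAL"),    # disengagement shortly before the incident
]

DISENGAGE_LOOKBACK_S = 5.0  # assumed window that still implicates the autonomy

def control_at(t):
    """Mode in effect at time t, assuming the log is sorted by timestamp."""
    i = bisect_right([ts for ts, _ in control_log], t) - 1
    return control_log[i][1] if i >= 0 else "UNKNOWN"

def incident_report(t_incident):
    mode = control_at(t_incident)
    recent_switch = any(
        mode_after == "MANUAL" and 0 <= t_incident - ts <= DISENGAGE_LOOKBACK_S
        for ts, mode_after in control_log
    )
    return {"mode_at_incident": mode, "disengaged_just_before": recent_switch}

print(incident_report(1204.5))
# {'mode_at_incident': 'MANUAL', 'disengaged_just_before': True}
# A report like this makes it hard to simply blame the human driver when the
# autonomy handed over control only a moment before impact.
```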
- 5.18 Ensuring that the transition periods between manual and automated mode involve minimal risk will be an important part of the vehicle development process and one which would be expected to be developed and proven during private track testing prior to testing on public roads or other public places.
It's really important that manual takeover by a safety driver actually works. As mentioned above, the takeover system should be designed to a suitable level of safety (e.g., according to ISO 26262).
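As a rough sketch of why the takeover path deserves functional safety rigor, consider the kind of handover logic involved; the timing value and interfaces below are illustrative assumptions, not anything from ISO 26262 or the UK document:

```python
import time

# If this logic (or the hardware it runs on) fails silently, the safety driver
# cannot actually take over, which is exactly the kind of failure mode a
# functional safety analysis is meant to catch.

TAKEOVER_TIMEOUT_S = 0.5  # assumed deadline for the manual path to confirm control

def request_manual_takeover(driver_confirmed, command_safe_stop):
    """Hand control to the driver, or fall back to a safe stop if the handover fails."""
    deadline = time.monotonic() + TAKEOVER_TIMEOUT_S
    while time.monotonic() < deadline:
        if driver_confirmed():          # e.g., steering or brake input detected
            return "MANUAL"
        time.sleep(0.01)
    command_safe_stop()                 # handover failed: degrade to a minimal-risk condition
    return "SAFE_STOP"

# Example with stubbed-out vehicle interfaces:
print(request_manual_takeover(lambda: False, lambda: print("commanding safe stop")))
```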
- 5.21 ... All software and revisions have been subjected to extensive and well documented testing. This should typically start with bench testing and simulation, before moving to testing on a closed test track or private road. Only then should tests be conducted on public roads or other public places.
Again, testing should be used to confirm that the design is right, not as an iterative drive-fix-drive approach that gradually beats the system into proper operation via brute-force road testing.
These comments are based on a preliminary reading of the document, and my thoughts on it might change over time.