(Obsolete article) Automation Levels for Vehicle Testing
(This article is for archival purposes only. A newer take on this topic is here:
https://safeautonomy.blogspot.com/2021/11/regulating-AVs-with-human-drivers.html)
Summary: Regulation of vehicle automation testing on public roads should not be based on SAE Levels, but instead on a simple taxonomy: conventional vehicles, automated test platforms, and production automated vehicles. Automated test platforms depend on a human safety driver, and should be operated in conformance with the SAE J3018 testing safety standard. Production automated vehicles should not put other road users at increased risk compared to conventional vehicles, and should not use human drivers as a #MoralCrumpleZone.
A serious shortcoming of the normal SAE Levels for vehicle automation (as described in SAE J3016) with regard to testing is that they deal with "design intent" rather than how the vehicle is actually operated during testing. Use of these Levels in regulations has resulted in situations that put the public at substantive risk, with the most notable concerns associated with the Tesla so-called Full Self-Driving (FSD) "beta" test program.
Because the SAE Levels consider "design intent," an SAE Level 4 highly automated vehicle might actually be a true "self-driving" car operating in a limited Operational Design Domain (ODD) with no human safety supervision. Or, it might be a preliminary prototype vehicle that can barely stay on the road without human driver intervention. Worse, it might be in the dangerous middle ground -- good enough to induce driver complacency, but bad enough that the complacent driver is at elevated risk of causing an avoidable crash. Those are dramatically different operational concepts, but they all earn the same Level 4 classification so long as the "design intent" is to eventually create a Level 4 "self-driving" car.
Because it is ambiguous which vehicles operating on the road might be Level 4, the SAE Level system can be aggressively gamed. For example, a vehicle that is in reality an early Level 4 prototype being tested on public roads can be portrayed to regulators as a Level 2 vehicle to evade regulations based on SAE J3016. This can be done by saying "there is a driver responsible for safety, so that makes it Level 2." That can be done improperly even if the real plan is to later announce a change of heart and deploy as Level 4. (See this article for how this relates to Tesla FSD beta public road testing.)
It is clear that the SAE Levels are not working out for regulatory purposes. The situation is a common one: a taxonomy that works well as a neutral engineering description breaks down when there is an incentive to exploit its loopholes and ambiguities. In this case the reward for claiming something is Level 2 when the design intent is really Level 4 is evasion of regulatory supervision for public road testing. That is not the only safety loophole in J3016, but it is the one subject to the most egregious exploitation at the moment.
One approach to addressing the Level 2 Loophole would be to regulate all Level 2 vehicles as highly automated vehicles, on a par with Level 3-5 vehicles. While this might work, it does not tease apart the important difference between testing immature development software and deploying mature Level 2 systems. It's too big a hammer, and indeed there is a better way.
For road testing purposes, regulators should focus on both the operational concept and technology maturity of the vehicle being operated rather than on what might eventually be built as a product. In other words "design intent" isn't relevant to the risk being presented to road users when a vehicle veers into opposing traffic. Avoiding crashes is the issue, not engineering taxonomies.
We propose abandoning the use of SAE Levels for regulatory purposes. Instead, regulations related to automated vehicles that have a human driver assigned to a safety role should be based on whether it is a vehicle automation test platform vs. a mature, deployed product.
The simple version is that an automated test platform is one that requires special driver training and care to compensate for unsafe vehicle behavior. An automated vehicle (one that is not a test platform) is one that any driver can operate safely without special training, with about the same degree of care and skill used in driving a conventional vehicle safely.
Here's the detail, with an illustrative sketch of both rules after the two definitions:
(1) A vehicle is automated if a computer-based system exerts sustained control over the vehicle path.
Notes: This means that if steering is automated, it's an automated vehicle. Speed control might or might not be automated, but is not relevant to the classification. "Automated" excludes features that momentarily bump or nudge steering. "Sustained" is a concept from J3016. As a practical matter, let's say "sustained" means more than 5 to 10 seconds of continuous automated steering, regardless of whether the human driver's hands are on or off the steering wheel.
(2) A vehicle is a test platform if a human driver is expected to compensate for hazardous automation behavior in a way that requires skills above and beyond those of a typical conventional vehicle driver.
Notes: Skills go beyond ordinary driving skill to include both attention self-management and reacting to sudden unsafe vehicle behaviors (e.g., a vehicle that might do "the worst thing at the worst time"). This includes requiring vigilance beyond what is reasonably enforced by any installed driver monitoring system. Signs of unsafe testing include vehicle operation that results in violation of road rules such as driving through red traffic signals, veering into opposing traffic, and failing to yield to pedestrians. Generally, if an elevated rate of incidents or mishaps is blamed on drivers not compensating for a dangerous vehicle behavior, that provides strong evidence that the vehicle in question is a test platform (and that testing practices need to be improved). Designation of a vehicle software release as a "beta" or "test" release is strong evidence that the vehicle is a test platform. Driver disclaimers saying that the software is not acceptably safe without exceptional driver care are strong evidence that the vehicle is a test platform. There is a presumption that special qualification and training of test drivers, as well as testing protocols, will be required for test platforms.
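To make the two rules concrete, here is a minimal sketch of how they might be checked mechanically. Everything in it -- the telemetry format, the evidence field names, and the choice of 10 seconds from the suggested 5 to 10 second range -- is an illustrative assumption, not anything taken from J3016, J3018, or a real data logger.

```python
# Illustrative sketch of rules (1) and (2) as predicates. All data
# formats and names are hypothetical assumptions for this article.

from dataclasses import dataclass

SUSTAINED_THRESHOLD_S = 10.0  # upper end of the suggested 5-10 second range


def has_sustained_automated_steering(samples):
    """Rule (1): the vehicle is automated if any run of continuous
    automated steering exceeds the threshold. `samples` is a
    time-ordered list of (timestamp_seconds, steering_automated)
    tuples. Speed control and hands-on-wheel status are deliberately
    ignored, per the notes above."""
    run_start = None
    for timestamp, automated in samples:
        if automated:
            if run_start is None:
                run_start = timestamp  # a new run of automated steering begins
            if timestamp - run_start > SUSTAINED_THRESHOLD_S:
                return True
        else:
            run_start = None  # human steering resumes; the run is broken
    return False


@dataclass
class Evidence:
    """Hypothetical indicators drawn from the notes on rule (2)."""
    labeled_beta_or_test: bool                 # release designated "beta" or "test"
    disclaims_safety_without_extra_care: bool  # disclaimer demands exceptional care
    elevated_incident_rate: bool               # red lights run, opposing-traffic veers, etc.
    incidents_blamed_on_drivers: bool          # mishaps attributed to driver lapses


def is_test_platform(ev: Evidence) -> bool:
    """Rule (2): the human driver is expected to compensate for hazardous
    automation behavior using skills beyond those of a typical driver."""
    return (
        ev.labeled_beta_or_test
        or ev.disclaims_safety_without_extra_care
        or (ev.elevated_incident_rate and ev.incidents_blamed_on_drivers)
    )
```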
Regulation of any vehicle with a driver in it is now simplified into three types (pulled together in a decision sketch after the list):
- Conventional: this is an ordinary vehicle; normal rules apply.
- Automated test platform: Operation of such vehicles should be done in accordance with SAE J3018, which covers safety driver skills and operational safety procedures. Crashes while automation is turned on are generally attributed to a failure of the safety driver to cope with dangerous vehicle behavior, with dangerous behavior being an expectation for any test platform. In other words, safety responsibility rests with the safety driver, not the automation. Test organizations should convince regulators that testing will overall present an acceptably low risk to other road users. This covers all vehicles currently said to be Level 4/5 test vehicles, and also any other Level 2 or Level 3 vehicles that make demands on driver attention and reaction capabilities that are excessive for drivers without special test training.
- Automated vehicle: This is an automated production vehicle. A civilian driver should be able to handle this vehicle in automated mode without any special training beyond general familiarization that must be completed by every driver of the vehicle. (Familiarization applies not just to the purchaser -- it needs to be mandatory for friends, family, and rental car users before automation can be activated. If there is no mandatory familiarization, then safety credit for it cannot be taken.) Violation of traffic rules or crashes at a rate higher than for drivers of conventional vehicles should be considered prima facie evidence that civilian drivers cannot handle the vehicle without special training, and an indication of a vehicle design defect. (Alternately, risky driving indicates that it is really an automated test platform being misrepresented as an automated vehicle.) In other words, human drivers of a production automated vehicle should not serve as Moral Crumple Zones by being asked to perform beyond civilian driver capabilities to compensate for system shortcomings and work-in-progress system defects. This can include fully automated vehicles that do not require human driver intervention.
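Putting the pieces together, the three types reduce to a short decision procedure. This sketch reuses the two hypothetical predicates above and, again, is illustrative only rather than a proposed regulatory mechanism.

```python
# Illustrative decision procedure combining the two sketched rules.

def classify_vehicle(samples, evidence: Evidence) -> str:
    if not has_sustained_automated_steering(samples):
        return "conventional"             # ordinary vehicle; normal rules apply
    if is_test_platform(evidence):
        return "automated test platform"  # operate per SAE J3018
    return "automated vehicle"            # production vehicle; no Moral Crumple Zone


# Example: a "beta" release with 30 seconds of continuous automated steering.
telemetry = [(float(t), True) for t in range(31)]
beta_release = Evidence(True, True, False, False)
print(classify_vehicle(telemetry, beta_release))  # -> automated test platform
```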