Conformance Metrics (Metrics Episode 13)
Metrics that evaluate progress in conforming to an appropriate safety standard can help track safety during development. Beware of weak conformance claims, such as a claim that only the hardware, but not the software, conforms to a safety standard.
Conformance metrics have to do with how extensively your system conforms to a safety standard.
A typical software or systems safety standard has a large number of requirements that must be met to conform to the standard, with each requirement often called a clause. An example of a clause might be something like "all hazards shall be identified," and another clause might be "all identified hazards shall be mitigated." (Strictly speaking, a clause is typically a numbered statement in the standard in the form of a "shall" requirement that usually has a lot more words in it than those simplified examples.)
There are often extensive tables of engineering techniques or technical mitigation measures that need to be applied based on the risk presented by each hazard. For example, mitigating a low risk hazard might just need normal software quality practices, while a life critical hazard might need dozens or hundreds of very specific safety and software quality techniques to make sure the software is not going to fail in use. The higher the risk, the more table entries need to be performed in design, validation, and deployment.
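To make that concrete, here is a minimal sketch in Python of how a risk-based technique table might be checked. The integrity level names follow ISO 26262's ASIL lingo, but the specific techniques and their assignment to levels are hypothetical placeholders, not the contents of any real standard's tables.

```python
# Hypothetical mapping from integrity level to required techniques.
# Real standards define these in detailed normative tables; the
# entries below are illustrative only.
REQUIRED_TECHNIQUES = {
    "QM":     {"coding standard", "code review"},
    "ASIL-A": {"coding standard", "code review", "unit testing"},
    "ASIL-B": {"coding standard", "code review", "unit testing",
               "static analysis"},
    "ASIL-C": {"coding standard", "code review", "unit testing",
               "static analysis", "structural coverage"},
    "ASIL-D": {"coding standard", "code review", "unit testing",
               "static analysis", "structural coverage",
               "formal verification"},
}

def missing_techniques(level: str, performed: set[str]) -> set[str]:
    """Techniques the table requires at this level but not yet performed."""
    return REQUIRED_TECHNIQUES[level] - performed

# A life critical (ASIL-D) component with only two techniques done:
print(missing_techniques("ASIL-D", {"code review", "static analysis"}))
```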
The simplest metric related to a safety standard is a simple yes/no question: do you actually conform to the standard?
However, there are nuances that matter. Conforming to a standard might mean a lot less than you think, for a number of reasons. So one way to measure the value of a conformance statement is to ask about the scope of the conformance and any assessment that was performed to confirm it. For example, does the conformance cover just hardware components, or both hardware and software? It's fairly common to see claims of conformance to an appropriate safety standard that cover only the hardware, and that's a problem if a lot of the safety critical functionality is actually in the software.
If it does cover the software, what is the scope? Is it just the self-test software that exercises the hardware (again, a common conformance claim that omits important aspects of the product)? Does it include the operating system? Does it include all the application software that's relevant to safety? What exactly is the claim of conformance made about? Is it just a single component within a very large system? Is it a subsystem? Is it the entire vehicle? Does it cover both the vehicle and its cloud infrastructure and the communications to the cloud? Does it cover the system used to collect the training data that is assumed to be accurate when creating a safety critical machine learning based system? And so on. So if you see a claim of conformance, be sure to ask exactly what the claim applies to, because it might not be everything that matters for safety.
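One way to keep those scope questions straight is to write the claim down as structured data. This is a minimal sketch with hypothetical field names; a real conformance claim would also cite the standard edition, the assessed configuration, and the assessment report.

```python
from dataclasses import dataclass, field

@dataclass
class ConformanceClaim:
    """Illustrative record of what a conformance claim actually covers."""
    standard: str
    covers_hardware: bool = False
    covers_software: bool = False
    covered_items: set = field(default_factory=set)

def scope_gaps(claim: ConformanceClaim, safety_relevant: set) -> set:
    """Safety-relevant items that the claim never covered."""
    return safety_relevant - claim.covered_items

claim = ConformanceClaim("ISO 26262", covers_hardware=True,
                         covered_items={"sensor board", "self-test software"})
print(scope_gaps(claim, {"sensor board", "self-test software",
                         "planning application", "cloud link"}))
```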
Conformance can also have different levels of credibility, ranging across: "well, it's in the spirit of the standard." Or: "we use an internal standard that we think is equivalent to this international standard." Or: "our engineering team decided that we meet it." Or: "a team inside our company thinks we meet it, but they report to the engineering manager, so there's pressure on them to say yes." Or: "conformance evaluation is done by a robustly separated group inside our company." Or: "conformance evaluation is done via a qualified external assessment with a solid track record for technical integrity."
Depending on the system, any one of these categories might be appropriate. But for life critical systems, you need as much independence and actual standards conformance as you can get. If you hear a claim of conformance, it's reasonable to ask: how do you know you conform to the extent that matters, and is the group assessing conformance independent enough and credible enough for this particular application?
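If it helps to make that ladder measurable, one can treat it as an ordinal scale and set a minimum rung for a given application. Here is a minimal sketch under that assumption; the level names paraphrase the list above, and where to set the threshold is a judgment call, not something the standards dictate.

```python
from enum import IntEnum

class AssessmentCredibility(IntEnum):
    # Ordinal scale paraphrasing the credibility ladder above;
    # a higher value means more independence in the assessment.
    SPIRIT_OF_THE_STANDARD = 1
    INTERNAL_EQUIVALENT_STANDARD = 2
    ENGINEERING_SELF_CLAIM = 3
    INTERNAL_TEAM_SAME_MANAGER = 4
    INDEPENDENT_INTERNAL_GROUP = 5
    QUALIFIED_EXTERNAL_ASSESSOR = 6

def credible_enough(claimed: AssessmentCredibility,
                    required: AssessmentCredibility) -> bool:
    """True if the assessment is at least as independent as required."""
    return claimed >= required

# For a life critical system we might demand external assessment:
print(credible_enough(AssessmentCredibility.INDEPENDENT_INTERNAL_GROUP,
                      AssessmentCredibility.QUALIFIED_EXTERNAL_ASSESSOR))
```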
Another dimension of conformance metrics is: how much of the standard is actually being conformed to? Is it only some chapters, or all of them? Sometimes we're back to the case where only the hardware conformed, so only one chapter of a system standard that would otherwise cover both hardware and software was really examined. Is it only the minimum basics? Some standards have a significant amount of text that some treat as optional (in the lingo: "non-normative clauses"). In some standards, most of the text is not actually required to claim conformance. So was only the required text addressed, or were the optional parts addressed as well?
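A simple way to measure that is to track conformance per clause and report coverage separately for normative and non-normative clauses. The clause identifiers and statuses in this sketch are invented for illustration.

```python
# Hypothetical per-clause conformance status.
clauses = {
    "5.4.1": {"normative": True,  "conforms": True},
    "5.4.2": {"normative": True,  "conforms": False},
    "5.4.3": {"normative": False, "conforms": False},
    "6.4.1": {"normative": True,  "conforms": True},
}

def coverage(status: dict, normative_only: bool) -> float:
    """Fraction of selected clauses with demonstrated conformance."""
    selected = [c for c in status.values()
                if c["normative"] or not normative_only]
    return sum(c["conforms"] for c in selected) / len(selected)

print(f"normative coverage: {coverage(clauses, True):.0%}")   # 67%
print(f"overall coverage:   {coverage(clauses, False):.0%}")  # 50%
```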
Is the integrity level appropriate? A component might conform to a lower ASIL than you really need for your application, but it still has the conformance stamp of the standard on it. That can be a problem if, for example, something was assessed for noncritical functions and you want to use it in a life critical application. Is the scope of the claimed conformance appropriate? For example, you might have dozens of safety critical functions in a system, but only three or four were actually checked for conformance and the rest were not. You can say it conforms to a standard, but the problem is that there are pieces that really matter that were never checked for conformance.
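The integrity level question reduces to a simple ordinal check: does the assessed level meet or exceed what the application needs? A minimal sketch, using ISO 26262 ASIL names for concreteness:

```python
# ASILs ordered from least to most stringent (QM = quality managed,
# i.e., no ASIL assigned).
ASIL_ORDER = ["QM", "ASIL-A", "ASIL-B", "ASIL-C", "ASIL-D"]

def asil_sufficient(assessed: str, required: str) -> bool:
    """True if the assessed integrity level meets the application's need."""
    return ASIL_ORDER.index(assessed) >= ASIL_ORDER.index(required)

# A component assessed for a noncritical function falls short of a
# life critical application's needs:
print(asil_sufficient("ASIL-B", "ASIL-D"))  # False
```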
Has the standard been aggressively tailored in a way that weakens the value of the conformance claim? Some standards permit skipping clauses that don't matter to safety in a particular application, but with funding and deadline pressures there might be an incentive to drop clauses that really do matter. So it's important to understand how tailored the standard was. Was the full standard applied, or were pieces left out that really should have been included?
Now to be sure, sometimes limited conformance along all of these dimensions makes perfect sense. It's okay to do that so long as, first of all, you don't compromise safety: you're only leaving out things that don't matter to safety. Second, you're crystal clear about what you're claiming, and you don't ask more of the system than it can really deliver for safety.
Typically, signs of aggressive tailoring or conformance to only part of a standard are problematic for life critical systems. It's common to see misunderstandings based on one or more of these issues. Somebody claims conformance to a standard but does not disclose the limitations, and somebody else gets confused and says: oh well, the safety box has been checked, so there's nothing to worry about. But in fact safety is a problem, because the conformance claim is much narrower than what is required for safety in that application.
During development (before the design is complete), partial conformance, and measuring progress against partial conformance, can actually be quite helpful. Ideally, there's a safety case that documents the conformance plan and lists how you plan to conform to all the aspects of the standard you care about. Then you can measure progress against the completeness of the safety case. The progress is probably not linear, and not every clause takes the same amount of effort. But still, just looking at what fraction of the standard you've achieved conformance to internally can be very helpful for managing the engineering process.
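As a sketch of such a progress metric, suppose the safety case plan lists each item you intend to conform to along with the status of its evidence. The items and statuses below are hypothetical.

```python
# Hypothetical safety case plan: planned conformance item -> status.
safety_case_plan = {
    "hazard analysis complete":  "done",
    "all hazards mitigated":     "in progress",
    "structural coverage shown": "not started",
    "tool qualification done":   "done",
}

def conformance_progress(plan: dict) -> float:
    """Fraction of planned items with complete conformance evidence.
    Progress is not linear; items differ greatly in effort."""
    return sum(status == "done" for status in plan.values()) / len(plan)

print(f"{conformance_progress(safety_case_plan):.0%} of planned items done")
```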
Near the end of the design validation process, you can do mock conformance checks. The metric there is the number of problems found with conformance, which basically amounts to counting bug reports filed against the safety case rather than against the software itself.
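That metric is easy to tally. In this minimal sketch, each mock-check finding is logged like a bug report against the safety case; the clause numbers and severities are invented.

```python
from collections import Counter

# Hypothetical findings from a mock conformance check.
findings = [
    {"clause": "5.4.2", "severity": "major", "open": True},
    {"clause": "6.4.1", "severity": "minor", "open": True},
    {"clause": "6.4.1", "severity": "minor", "open": False},  # fixed
]

open_by_severity = Counter(f["severity"] for f in findings if f["open"])
print(dict(open_by_severity))  # {'major': 1, 'minor': 1}
```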
Summing up, conforming to relevant safety standards is an essential part of ensuring safety, especially in life critical products. There are a number of metrics, measures, and ways to assess how much that conformance is actually going to help your safety. It's important to make sure you've conformed to the right standards, with the right scope, and with the right amount of tailoring, so that you're actually hitting all the things you need to in the engineering, validation, and deployment process to ensure you're appropriately safe.
For the podcast version of this posting, see: https://archive.org/details/metrics-14-safety-standard-conformance-metrics
Thanks to podcast producer Jackie Erickson.