Most Important Things You Should Check on Your Vehicle Before Going on a Road Trip

Every once in a while, we get excited to take that road adventure, right? Whether it's for business or pleasure, there is a must-do checklist before setting off on your road trip. 

If you are asking, 'Why would I need to check my vehicle before a road trip?' well, road trips, particularly long ones, can be fairly hard on your car. 


A small issue can be amplified and may end up damaging key parts that are expensive to repair. Here are the top six basic things to check before a road trip.

1. Leaks and hoses

The first thing to inspect is leaks and hoses. If you see bulges or blisters on any of the hoses, you should replace them. 


This is because bulges and blisters are signs of weakness within the wall of the hose and can lead to bursting and leaks. Also, verify that all clamps are biting firmly into the rubber to prevent leaks, and replace any hose that has pinhole leaks.

2. Engine oil and coolant

If you have been putting off an oil change, it is high time you replaced the oil before making your trip. A road trip will subject your car's engine to heavy-duty work and stress. 


If your oil is in bad condition, there is a real possibility of the engine failing. When you change your engine oil and coolant, make sure you use the recommended grade and brand of oil and coolant.

Also read: Simple Ways To Solve A Problem with Different Types of Car Oils 

3. Tyres

Tyres that are in proper condition will give you much-needed peace of mind while travelling. What exactly should you check on your car's tyres? There are two important factors you must not miss. 


These are the tread and the tyre pressure. The tread should be in good condition in order to offer maximum traction. At the same time, make sure the pressure in all four tyres is set according to the figures printed on the placard on the driver's door jamb or fuel filler door.

Also read: Pros And Cons Of Thick Tyres Of Cars

               Tires when to change them: Do You Really Need It? This Will Help You Decide!

4. Lighting

Driving without lights is an offence, but far worse, it is dangerous to drive without working lights at night. Start by checking the required lights. 


These include the tail lights and side lights. You should also test the headlights, number plate light, and brake lights, as well as the turn signals. In addition, verify that the fog lights and hazard warning lights are working properly.

5. Brakes

Did you know that brake fluid absorbs moisture over time? Indeed, as brake fluid ages, it absorbs moisture, which can corrode braking components. 


If you find that your brake fluid has turned the colour of maple syrup, you definitely need to replace it. Have the brake pads inspected and the worn ones replaced.

Also read: Anti-lock Braking System(ABS), Working and Diagnosis of ABS

                 Little Known Ways to How to Know When Car Brakes Need Work

6. Fuel cap

The fuel cap plays a basic role in the fuel system. It prevents evaporation and leakage of fuel vapours, and keeps contaminants from getting into the fuel tank. 

Verify that your fuel cap is securely fastened before starting your trip. Note that when your car detects an issue in the fuel system, the check engine light comes on. Use an OBD2 scanner to clear the light after refastening the fuel cap.
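
As a rough illustration, here is a minimal Python sketch using the third-party python-OBD package. The GET_DTC and CLEAR_DTC command names are assumptions about that library's API, so check its documentation and your adapter before relying on this.

```python
# Sketch only: assumes the python-OBD package (pip install obd) and an
# ELM327-style adapter. The command names below are assumptions -- verify
# them against the library documentation for your version.
import obd

connection = obd.OBD()  # auto-detects the adapter's serial port

# Read any stored diagnostic trouble codes (a loose fuel cap typically
# stores an evaporative-system code).
dtc_response = connection.query(obd.commands.GET_DTC)
print("Stored trouble codes:", dtc_response.value)

# After tightening the fuel cap, clear the codes so the check engine
# light can go out once the system re-checks itself.
connection.query(obd.commands.CLEAR_DTC)
```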


Checking these six things before your road trip is a great way to stay one step ahead of your vehicle breaking down when you need it the most. You can also visually inspect your vehicle to catch any sign of approaching trouble.


Happy New Year 2021 Wishes, Images, Shayari, Status, Wallpapers

Welcome to another special blog from Explore Automotive. In this special post we share Happy New Year 2021 wishes, images, shayari, status, HD wallpapers, and free image downloads.


What You Should Know About Using Natural Gas (CNG) Vehicles

Will I get the Same Fuel Mileage and Vehicle Performance?

Contrary to popular belief, natural gas carries more British Thermal Units (BTUs) than gasoline. The energy content of gasoline depends on the season and on oxygenated fuel additives such as methyl tertiary-butyl ether (MTBE) [used to raise oxygen levels] that reduce emissions.


BTU levels for gasoline (petrol)

114,500 for a summer gallon of gasoline with no additives

112,500 for a winter gallon of gasoline with no additives

112,000 for summer blends with MTBE

110,210 for summer blends with ethanol

112,210 for winter blends with ethanol

BTU levels for natural gas

114,000 year-round

BTU levels for diesel

129,500

The clear winner, and the most expensive, is diesel at 129,500 BTUs. Since filling stations do not have to advertise that they are using a 10% ethanol blend in their fuel, most don't; nevertheless, most stations dispense this blend.
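
To put those numbers in perspective, here is a small, purely illustrative Python calculation comparing the BTU figures quoted above. It assumes fuel economy scales directly with energy content, which is a simplification; real-world results also depend on engine tuning.

```python
# Energy content per gallon (or gallon equivalent), from the figures above.
btu_per_gallon = {
    "summer gasoline, no additives": 114_500,
    "winter gasoline, no additives": 112_500,
    "summer blend with MTBE": 112_000,
    "summer blend with ethanol": 110_210,
    "winter blend with ethanol": 112_210,
    "natural gas (CNG)": 114_000,
    "diesel": 129_500,
}

baseline = btu_per_gallon["summer blend with ethanol"]  # a common pump blend
for fuel, btu in btu_per_gallon.items():
    # Rough simplification: assume mileage scales with energy content.
    change_pct = (btu - baseline) / baseline * 100
    print(f"{fuel:32s} {btu:>7,} BTU  ({change_pct:+.1f}% vs. summer ethanol blend)")
```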

What is the Difference Between Dedicated and Converted Vehicles?

Dedicated vehicles

Run on 100% natural gas

Dependent on CNG fueling stations

Cannot shut off the fuel if the engine is overheating

Costly engine modifications to cope with zero lubrication and high heat levels

High maintenance costs

Decreased power and HP

Decreased mileage

High conversion costs

Converted vehicles

Run on gasoline or natural gas

Run on a diesel + natural gas blend

Double the driving range

Switch fuels while driving at the flip of a switch

Start on gasoline or diesel to lubricate the engine, then switch to natural gas

Low maintenance costs

Increased power and HP

Increased mileage

Low conversion costs

What is the Difference Between Aspirated and Direct Injection CNG Kits?

Aspirated: These systems are the original, basic technology. Aspirated kits usually cost less since they are relatively simple and easy to install. They have a closed-loop controller that receives feedback from an oxygen sensor to keep the vehicle operating optimally.

The performance of these systems is limited because the air intake must be throttled down for them to work properly. 

In addition, aspirated CNG kits tend to trigger the "Check Engine" light, which is a common occurrence. This application is intended for older vehicles with a carburetor.

Direct Injection: These systems are the leading edge of CNG conversion kit technology. They use a powerful chip that supports the initial automatic adjustment and calibration system. 

Since these systems are designed around advanced electronics and deliver far superior performance, they cost more than aspirated kits.

They have evolved considerably in recent years. The latest technology in these systems uses extensive electronic sensors, injectors, and a 32-bit computer that integrates with the vehicle's computer. This gives the best mileage and performance. A "Check Engine" light is not triggered by these installations.

Are There Special Maintenance Needs for CNG Converted Vehicles?

Natural gas engines work basically the same way as gasoline engines. An air-fuel mixture is injected into the intake manifold, drawn into the combustion chamber, and then ignited by a spark plug. 

Most engine service requirements are essentially the same and can be handled by a dealer, auto shop, or trained technician. If a retrofit repair is required, for example a faulty injector or loose compression fittings, it is ultimately handled by the installer.

Do Conversions Increase my Maintenance Costs?

Converted vehicles require less maintenance and HALF the usual oil and filter changes. Using CNG maintains the oil's consistency, causing less wear on engine parts such as the pistons, valves, and cylinder head. 

The gases burn more efficiently, leaving no harmful or dirty carbon deposits, which keeps your oil cleaner. 

Using CNG also extends the life of the engine, largely because of the retention of the oil film that collects on the cylinder walls. This retention keeps oil from migrating to other engine parts, unlike engines running on gasoline or diesel.

Conversion Kits

Gasoline CNG conversion kits run on either 100% gasoline or 100% natural gas (bi-fuel CNG conversions). A CNG tank is added to hold the natural gas. 

The addition of another fuel tank doubles the driving range. Bi-fuel conversions greatly reduce your fuel and maintenance costs. Since the fuel combusts completely, your engine stays much cleaner. 

These conversions also reduce oil and filter changes. The cleaner-burning fuel extends the life of your cars and trucks.

Diesel CNG conversion kits blend a mixture of both diesel and natural gas (dual-fuel). This greatly reduces the consumption of expensive diesel fuel. 

Expect an increase in your mileage and a decrease in the upkeep of costly oil and filter changes. The air/natural gas mixture is denser. This denser, higher-octane mixture produces more power and increased torque. The cleaner-burning fuel extends the life of your trucks and buses.

Benefits to the Vehicle

Trucks and cars running on CNG:

Natural gas delivers higher BTU levels than gasoline (125 versus 90) because of the ethanol and oxygenated fuel additives in the gasoline.

Reduces fuel and maintenance costs by approximately 40%.

Combusts without self-ignition and protects your engine from knocking, even on engines with higher compression and efficiency levels.

Combusts 100% because of a proper air-to-natural-gas mixture at ambient temperatures.

The oil that lubricates the engine is less contaminated, cutting oil and filter changes in half.

There is no soot, and the spark plugs are kept clean. Oil on the walls of the engine cylinders is not washed off.

Combustion gases are not corrosive. Because the metals are not attacked, the exhaust pipe and mufflers last longer.

The gaseous nature of the fuel eliminates fuel washing down the cylinder walls during rapid acceleration, with the benefit of reducing wear on metal surfaces.

The engine delivers great performance flexibility during acceleration, without irregularities or backfires, even at low speed.

Converted vehicles can switch from CNG to gasoline by simply pressing a button while driving.

A CNG/gasoline bi-fuel system doubles the vehicle's driving range.

CNG conversions run more smoothly and quietly.

Vehicles work in all terrain, even across mountains. For example, a truck with a 37-ton load was driven at over 4,800 meters of altitude in the Peruvian Andes in May 2008.

CNG works well in any climate. Since the fuel does not freeze, even at low temperatures, the vehicle is always ready to be used.

The system is completely reversible and can be transferred to other vehicles after inspection.

Three Main Components to a Vehicle Conversion

CNG Conversion Kits

CNG Tanks

NFPA-52 certified high-pressure parts

The CNG tank can be mounted in several places. The most common locations are in the trunk or in the bed of a truck. 

A high-pressure fuel line routes the natural gas through a filter and then to the engine bay. The newly installed two-stage regulator reduces the high tank pressure down to working levels closer to 36 PSI.

The added noise-reduction injectors then meter the gas. This process is controlled by the system's ECU, which is programmed to meter the natural gas to match the vehicle's or truck's original fuel timing. 

Sensors and computers adjust the fuel-air mixture for top performance. CNG conversion kits do not reduce your vehicle's or truck's performance.
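
The paragraph above describes a closed-loop process: the ECU meters gas through the injectors and trims the mixture based on sensor feedback. Here is a minimal, hypothetical sketch of that idea in Python; the target value, gain, and pulse-width limits are invented for illustration and are not taken from any real conversion kit.

```python
# Illustrative closed-loop fuel trim, not real ECU firmware. The target
# lambda, gain, and clamp limits below are made-up example values.
TARGET_LAMBDA = 1.0        # stoichiometric air/fuel target
GAIN = 0.05                # proportional correction gain
MIN_PW, MAX_PW = 1.5, 8.0  # injector pulse width limits, milliseconds

def trim_pulse_width(current_pw_ms, measured_lambda):
    """Nudge injector pulse width toward the target air/fuel ratio.

    measured_lambda > 1.0 means the mixture is lean (too much air), so the
    injector pulse is lengthened to add fuel; < 1.0 means rich, so shorten it.
    """
    error = measured_lambda - TARGET_LAMBDA
    new_pw = current_pw_ms * (1.0 + GAIN * error)
    return max(MIN_PW, min(MAX_PW, new_pw))

# Example: slightly lean reading from the oxygen sensor.
print(trim_pulse_width(current_pw_ms=3.2, measured_lambda=1.05))
```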

Fueling at CNG stations is much like filling up with gasoline or diesel and takes a similar amount of time. The two tanks will double your driving range. You still need to use gasoline, as it is required to start, lubricate, and warm up the vehicle.

CNG conversion kits are programmed to protect against excessive heat and sudden changes in driving conditions. 

Should you need to brake hard, suddenly accelerate, or climb a steep hill, your fuel system will automatically switch back to gasoline for the required extra power.

Converted vehicles work much the same as the original. Adding the new fuel system is completely non-intrusive and easy to reverse. 

Sequential port injection systems from trusted manufacturers do not alter or interfere with any of the original engine manufacturer (OEM) factory settings or fuel system, nor do they modify any engine electronics or controls. Since there is no tampering, there is no violation of anti-tampering laws. 

Benefits of Steam Car Wash

Steam car wash is the technique of using steam vapor to clean a car's exterior and interior. For a long time, steam has been an important tool in the cleaning industry because of its effectiveness and sanitizing power. Today, steam has become an ever-growing asset to the car wash industry.


The steam generated by the steamer is hot enough to lift off any grease and grime, produces enough pressure to break down dirt, and is gentle enough not to damage the surface. 

In much less time than a conventional car wash, the steamer not only thoroughly washes your vehicle but also sanitizes the interior and refurbishes greasy engine components. 

It can even reach the tightest spaces and fittings that would otherwise be completely inaccessible with a pressure washer.

We have a variety of accessories that complement your car wash, from extended-length hoses and guns to brushes for stubborn dirt and spray bottles for wax and added shine!

Why go for Steam Wash?

Car steam washing is a new era of the car wash. It washes your car from every angle, including those corners that are otherwise impossible to reach.

We highly recommend a car steam wash because it cleans your car from every corner and freshens it up too. The sanitizing ability of dry vapor steam minimizes the need for chemicals, sharply decreasing the harmful residue that can reach our storm drains. 


And the main purpose is to save water because, as everybody knows, water scarcity is becoming a massive problem. So all of us have to think about it and do something in that area. 

I found that people use 100 liters or more of water to wash a single vehicle. That is a lot of water for one car, and many people wash their cars very regularly. 

But with a car steam wash, you can wash your car with only 20 liters of water. Now just imagine how much water is saved by choosing a steam wash.
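
Using the figures mentioned above (roughly 100 liters for a conventional wash versus about 20 liters for a steam wash), here is a tiny back-of-the-envelope calculation; the weekly wash frequency is just an assumed example.

```python
# Back-of-the-envelope water savings, using the figures quoted above.
conventional_wash_l = 100   # liters per wash (approximate, from the text)
steam_wash_l = 20           # liters per wash (approximate, from the text)
washes_per_year = 52        # assumed: one wash per week

saved_per_wash = conventional_wash_l - steam_wash_l
saved_per_year = saved_per_wash * washes_per_year
print(f"Water saved per wash: {saved_per_wash} L")
print(f"Water saved per year at one wash a week: {saved_per_year} L")
# -> 80 L per wash, 4,160 L per year for a single vehicle.
```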

With hardly any wastewater to clean up, not only will you save money on buying a storage tank, but you will also avoid hefty fines under wastewater regulations. 

Moreover, by using fewer chemical cleaners, you will save on regularly buying chemical products. Environmentally friendly and cost-effective in the long run.


Cleaning Applications:

Clean the engine safely.

Clean interiors, exteriors, engine compartments, door jambs, floor mats, tires, trunks, upholstery, wheel wells, and hard-to-reach nooks and crannies.

Remove stains from upholstery in less time and using less water than an extractor.

Chemical-free sanitation.

Remove dust, scuffs, and grease.

Deodorize and sterilize surfaces.


Different Types of Car Floor Mat Materials

Car floor mats are the first line of defense for your car, truck, or SUV's valuable factory flooring. That gives floor mats the tough task of withstanding varying degrees of abuse day in and day out, while providing an extra degree of comfort and complementing the vehicle's interior styling. 


Vehicle manufacturers have tried to address the unique problems floor mats face by creating a wide range of material types, each aimed at providing adequate protection for a vehicle's intended use. 

Aftermarket carpet and floor mat makers offer products using materials and colors designed to match the OEMs' while offering improvements over the originals.

What Original Materials Are Available?

Cut Pile

Cut Pile is made of 100% Nylon yarn. 

Tufted to a 1/8 gauge cut pile, it contains 14 ounces of yarn per square yard. 

Cut Pile has been the original material in most domestic vehicles since around 1974. 

Cut Pile material can have Mass backing. Mass backing is a roughly 45-mil-thick EVA material. It is an excellent sound and heat barrier and improves the overall appearance of the carpet after installation. 

Cut Pile material width is 78 inches.

Loop

Loop material is made of 100% 6,6 Nylon yarn called Raylon. 

Tufted to a 1/8 gauge, Loop contains 20 ounces of yarn per square yard. 

Loop material was originally used in vehicles manufactured before 1974. 

Loop material can have Mass backing. Loop material width is 78 inches.

Nylon

Nylon is made of 100% Nylon yarn. 

Tufted to a 1/8 gauge, Nylon contains 12 ounces of yarn per square yard. 

Nylon material was originally used in late-1960s model Fords. 

Nylon material is available with our optional Mass backing. 

Nylon material width is 78 inches.

Truvette

Truvette is made of 100% Nylon yarn. 

Tufted to a 5/64 gauge, Truvette contains 14 ounces of yarn per square yard. 

Truvette material was introduced in the mid-1990s for Corvettes. 

Truvette material can have Mass backing. Mass backing is a roughly 45-mil-thick EVA material. 

Truvette material width is 78 inches.

Daytona

Daytona weave carpet is made of Cotton, Nylon, and Rayon yarn. 

Daytona contains 27.5 ounces of yarn per square yard and is a loop-style carpet. 

Daytona weave carpet was introduced around 1954 for GM vehicles.

Foam backing only; Daytona cannot be molded or Mass backed. It is hand-cut and sewn with the utmost quality control. 

Daytona material width is 54 inches.

Tuxedo

Tuxedo is made of Nylon and Olefin fiber yarns. 

Tufted to a 1/8 gauge, Tuxedo contains 23 ounces of yarn per square yard. 

Tuxedo material can have Mass backing. 

Tuxedo material width is 52 inches.

Gros Point

Gros Point material is made of 100% Nylon yarn. 

Gros Point contains 31.5 ounces of yarn per square yard and is a fine loop-style carpet. 

Gros Point was created for the early-model classic muscle and full-size passenger vehicles built during the 1950s and 1960s. 

Foam backing only; Gros Point cannot be molded or Mass backed. It is hand-cut and sewn with the utmost quality control. 

Gros Point material width is 54 inches.

What Are The Aftermarket Options?

Essex

Available as an optional upgrade for most of our ACC floor mats 

Essex is made of 100% Nylon yarn. 

Tufted to a 1/10 gauge cut pile, Essex contains 22.5 ounces of yarn per square yard, giving it a plush and rich look and feel. 

Essex has been available as an aftermarket material choice since the late 2000s. It is a premium, modern version of the Cut Pile material. It can be specified for practically any model vehicle. 

Essex material can have Mass backing. Mass backing is a roughly 45-mil-thick EVA material. It is an excellent sound and heat barrier and improves the overall look of the carpet after installation. 

Essex material width is 78 inches.
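
For quick comparison, the specifications above can be collected into a small data structure; the numbers below are simply copied from the descriptions in this post.

```python
# Material specs as described above: yarn weight (oz/sq yd), gauge, width
# (inches), and whether Mass backing is available. Values copied from this post.
materials = {
    "Cut Pile":   {"oz_per_sq_yd": 14.0, "gauge": "1/8",  "width_in": 78, "mass_backing": True},
    "Loop":       {"oz_per_sq_yd": 20.0, "gauge": "1/8",  "width_in": 78, "mass_backing": True},
    "Nylon":      {"oz_per_sq_yd": 12.0, "gauge": "1/8",  "width_in": 78, "mass_backing": True},
    "Truvette":   {"oz_per_sq_yd": 14.0, "gauge": "5/64", "width_in": 78, "mass_backing": True},
    "Daytona":    {"oz_per_sq_yd": 27.5, "gauge": None,   "width_in": 54, "mass_backing": False},
    "Tuxedo":     {"oz_per_sq_yd": 23.0, "gauge": "1/8",  "width_in": 52, "mass_backing": True},
    "Gros Point": {"oz_per_sq_yd": 31.5, "gauge": None,   "width_in": 54, "mass_backing": False},
    "Essex":      {"oz_per_sq_yd": 22.5, "gauge": "1/10", "width_in": 78, "mass_backing": True},
}

# Example: list materials from heaviest to lightest yarn weight.
for name, spec in sorted(materials.items(), key=lambda kv: -kv[1]["oz_per_sq_yd"]):
    print(f"{name:10s} {spec['oz_per_sq_yd']:>5} oz/sq yd, {spec['width_in']} in wide")
```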

Safety Performance Indicator (SPI) metrics (Metrics Episode 14)

SPIs help ensure that assumptions in the safety case are valid, that risks are being mitigated as effectively as you thought they would be, and that fault and failure responses are actually working the way you thought they would.

Safety Performance Indicators, or SPIs, are safety metrics defined in the Underwriters Laboratories 4600 standard. The 4600 SPI approach covers a number of different ways to approach safety metrics for a self-driving car, divided into several categories.

One type of 4600 SPI safety metric is a system-level safety metric. Some of these are lagging metrics such as the number of collisions, injuries and fatalities. But others have some leading metric characteristics because while they’re taken during deployment, they’re intended to predict loss events. Examples of these are incidents for which no loss occurs, sometimes called near misses or near hits, and the number of traffic rule violations. While by definition, neither of these actually results in a loss, it’s a pretty good bet that if you have many, many near misses and many traffic-rule infractions, eventually something worse will happen.

Another type of 4600 metric is intended to deal with ineffective risk mitigation. An important type of SPI relates to measuring that hazards and faults are not occurring more frequently than expected in the field.

Here’s a narrow but concrete example. Let’s assume your design takes into account that you might lose one in a million network packets due to corrupted data being detected. But out in the field, you’re dropping every tenth network packet. Something’s clearly wrong, and it’s a pretty good chance that undetected errors are slipping through. You need to do something about that situation to maintain safety.
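
As a toy illustration of that kind of SPI, the Python sketch below compares an observed field rate against the rate assumed in the safety case and flags when the assumption is violated; the margin and the numbers are invented examples.

```python
# Toy SPI check: is a field-observed failure rate consistent with the rate
# assumed in the safety case? The margin below is a made-up example value.
def spi_rate_check(name, failures, opportunities, assumed_rate, margin=10.0):
    """Flag when the observed rate exceeds the assumed rate by `margin` times."""
    observed_rate = failures / opportunities
    violated = observed_rate > assumed_rate * margin
    status = "VIOLATED" if violated else "ok"
    print(f"{name}: observed {observed_rate:.2e} vs assumed {assumed_rate:.2e} -> {status}")
    return violated

# Safety case assumed one corrupted packet per million; field data says 1 in 10.
spi_rate_check("network packet corruption", failures=1_000, opportunities=10_000,
               assumed_rate=1e-6)
```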

A broader example is that a very rare hazard might be deemed not to be risky because it just essentially never happens. But just because you think it almost never happens doesn’t mean that’s what happens in the real world. You need to take data to make sure that something you thought would happen to one vehicle in the fleet every hundred years isn’t in fact happening every day to someone, because if that’s the case, you badly misestimated your risk.

Another type of SPI for field data is measuring how often components fail or behave badly. For example, you might have two redundant computers so that if one crashes, the other one will keep working. Consider one of those computers is failing every 10 minutes. You might drive around for an entire day and not really notice there’s a problem because there’s always a second computer there for you. But if your calculations assume a failure once a year and it’s failing every 10 minutes, you’re going to get unlucky and have both fail at the same time a lot sooner than you expected. 

So it’s important to know that you have an underlying problem, even though it’s being masked by the fault tolerance strategy.

A related type of SPI has to do with classification algorithm performance for self-driving cars. When you’re doing your safety analysis, it’s likely you’re assuming certain false positive and false negative rates for your perception system. But just because you see those in testing doesn’t mean you’ll see those in the real world, especially if the operational design domain changes and new things pop up that you didn’t train on. So you need a SPI to monitor the false negative and false positive rates to make sure that they don’t change from what you expected.

Now, you might be asking, how do you figure out false negatives if you didn’t see it? But in fact, there’s a way to approach this problem with automatic detection. Let’s say that you have three different types of sensors for redundancy and you vote three sensors and go with the majority. Well, that means every once in a while, one of the sensors can be wrong and you still get safe behavior. But what you want to do is take a measurement of how often the one wrong happens, because if it happens frequently, or the faults on that sensor correlate with certain types of objects, those are important things to know to make sure your safety case is still valid.
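
Here is a minimal sketch of that idea: a two-out-of-three vote over object classifications that also records how often each sensor is outvoted, which is the quantity of interest for this SPI. The sensor names and readings are invented example data.

```python
from collections import Counter

# Minimal 2-out-of-3 vote over per-frame object classifications that also
# counts how often each sensor is outvoted. Sensor names and readings are
# invented example data.
disagreements = Counter()
frames = 0

def vote(readings):
    """Return the majority classification; tally sensors that were outvoted."""
    global frames
    frames += 1
    counts = Counter(readings.values())
    winner, votes = counts.most_common(1)[0]
    if votes < 2:  # no majority at all: a surprise, to be handled elsewhere
        return None
    for sensor, value in readings.items():
        if value != winner:
            disagreements[sensor] += 1
    return winner

vote({"camera": "pedestrian", "lidar": "pedestrian", "radar": "unknown"})
vote({"camera": "cyclist", "lidar": "cyclist", "radar": "cyclist"})

# A rising per-sensor minority rate, or disagreement correlated with certain
# object types, means the safety case assumptions need a second look.
for sensor, count in disagreements.items():
    print(f"{sensor}: outvoted in {count} of {frames} frames")
```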

A third type of 4600 metric is intended to measure how often surprises are encountered. There’s another segment on surprises, but examples are the frequency at which an object is classified with poor confidence, or a safety relevant object flickers between classifications. These give you a hint that something is wrong with your perception system, and that it’s struggling with some type of object. If this happens constantly, then that indicates a problem with the perception system. It might indicate that the environment has changed and includes novel objects not accounted for by training data. Either way, monitoring for excessive perception issues is important to know that your perception performance is degraded, even if an underlying tracking system or other mechanism is keeping your system safe.

A fourth type of 4600 metric is related to recoveries from faults and failures. It is common to argue that safety-critical systems are in fact safe because they use fail-safes and fall-back operational modes. So if something bad happens, you argue that the system will do something safe. It’s good to have metrics that measure how often those mechanisms are in fact invoked, because if they’re invoked more often than you expected, you might be taking more risks than you thought. It’s also important to measure how often they actually work. Nothing’s going to be perfect. And if you’re assuming they work 99% of the time but they only work 90% of the time, that dramatically changes your safety calculations.
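
A hedged sketch of what tracking those two quantities might look like, with invented numbers for the expected rates:

```python
# Toy fallback-mechanism SPI: how often is the fallback invoked, and how
# often does it actually work? Expected values are invented examples.
EXPECTED_INVOCATIONS_PER_1K_HOURS = 2.0
ASSUMED_SUCCESS_RATE = 0.99

def fallback_spi(invocations, successes, operating_hours):
    invocation_rate = invocations / (operating_hours / 1000.0)
    success_rate = successes / invocations if invocations else 1.0
    print(f"Invocations per 1,000 h: {invocation_rate:.1f} "
          f"(expected {EXPECTED_INVOCATIONS_PER_1K_HOURS})")
    print(f"Fallback success rate:   {success_rate:.3f} "
          f"(assumed {ASSUMED_SUCCESS_RATE})")
    if (invocation_rate > EXPECTED_INVOCATIONS_PER_1K_HOURS
            or success_rate < ASSUMED_SUCCESS_RATE):
        print("-> safety case assumptions need to be revisited")

fallback_spi(invocations=12, successes=10, operating_hours=1500)
```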

It’s useful to differentiate between two related concepts. One is safety performance indicators, SPIs, which is what I’ve been talking about. But another concept is key performance indicators, KPIs. KPIs are used in project management and are very useful to try and measure product performance and utility provided to the customer. KPIs are a great way of tracking whether you’re making progress on the intended functionality and the general product quality, but not every KPI is useful for safety. For example, a KPI for fuel economy is great stuff, but normally it doesn’t have that much to do with safety.

In contrast, an SPI is supposed to be something that’s directly traced to parts of the safety case and provides evidence for the safety case. Different types of SPIs include making sure the assumptions in the safety case are valid, that risks are being mitigated as effectively as you thought they would be, and that fault and failure responses are actually working the way you thought they would. Overall, SPIs have more to do with whether the safety case is valid and the rate of unknown surprise arrivals is tolerable. All these areas need to be addressed one way or another to deploy a safe self-driving car.

For the podcast version of this posting, see: https://archive.org/details/metrics-15-safety-performance-indicator-metrics

Thanks to podcast producer Jackie Erickson.

Conformance Metrics (Metrics Episode 13)

Metrics that evaluate progress in conforming to an appropriate safety standard can help track safety during development. Beware of weak conformance claims such as only hardware, but not software, conforms to a safety standard.

Conformance metrics have to do with how extensively your system conforms to a safety standard. 

A typical software or systems safety standard has a large number of requirements to meet the standard, with each requirement often called a clause. An example of a clause might be something like "all hazards shall be identified" and another clause might be "all identified hazards shall be mitigated."  (Strictly speaking, a clause is typically a numbered statement in the standard in the form of a "shall" requirement that usually has a lot more words in it than those simplified examples.)

There are often extensive tables of engineering techniques or technical mitigation measures that need to be done based on the risk presented by each hazard. For example, mitigating a low risk hazard might just need normal software quality practices, while a life critical hazard might need dozens or hundreds of very specific safety and software quality techniques to make sure the software is not going to fail in use. The higher the risk, the more table entries need to be performed in design validation and deployment.

The simplest metric related to a safety standard is a simple yes/no question: Do you actually conform to the standard? 

However, there are nuances that matter. Conforming to a standard might mean a lot less than you might think for a number of reasons. So one way to measure the value of that conformance statement is to ask about the scope of the conformance and any assessment that was performed to confirm the conformance. For example, does the conformance cover just hardware components and not software, or both hardware and software? It’s fairly common to see claims of conformance to an appropriate safety standard that only covered the hardware, and that’s a problem if a lot of the safety critical functionality is actually in the software.

If it does cover the software, what scope? Is it just the self test software that exercises the hardware (again, a common conformance claim that omits important aspects of the product)? Does it include the operating system? Does it include all the application software that’s relevant to safety? What is the claim of conformance actually being made on? Is it just a single component within a very large system? Is it a subsystem? Is it the entire vehicle? Does it cover both the vehicle and its cloud infrastructure and the communications to the cloud? Does it cover the system used to collect training data that is assumed to be accurate to create a safety critical machine learning based system? And so on. So if you see a claim of conformance, be sure to ask what exactly the claim applies to, because it might not be everything that matters for safety.

Also conformance can have different levels of credibility ranging from – well it’s "in the spirit of the standard."  Or "we use an internal standard that we think is equivalent to this international standard." Or "our engineering team decided we think we meet it." Or "a team inside our company thinks we meet it but they report to the engineering manager so there’s pressure upon them to say yes." Or "conformance evaluation is done by a robustly separated group inside our company." Or "conformance evaluation is done via qualified external assessment with a solid track record for technical integrity." 

Depending on the system, any one of these categories might be appropriate. But for life critical systems, you need as much independence and actual standards conformance as you can get. If you hear a claim of conformance it’s reasonable to ask: well, how do you know you conform to the extent that matters, and is the group assessing conformance independent enough and credible enough for this particular application?

Another dimension of conformance metrics is: how much of the standard is actually being conformed to? Is it only some chapters or all of the chapters? Sometimes we’re back to where only the hardware conformed so they really only looked at one chapter of a system standard that would otherwise cover hardware and software. Is it only the minimum basics? Some standards have a significant amount of text that some treat as optional (in the lingo: "non-normative clauses"). In some standards most of the text is not actually required to claim conformance. So did only the required text get addressed or were the optional parts addressed as well?

Is the integrity level appropriate? It might conform to a lower ASIL than you really need for your application, but it still has the conformance stamp to the standard on it. That can be a problem if you are using, for example, something assessed for noncritical functions and you want to use it in a life critical application. Is the scope of the claimed conformance appropriate? For example, you might have dozens of safety critical functions in a system, but only three or four were actually checked for conformance and the rest were not. You can say it conforms to a standard, but the problem is there are pieces that really matter that were never checked for conformance.

Has the standard been aggressively tailored so that it weakens the value of the claimed conformance? Some standards permit skipping some clauses if they don’t matter to safety in that particular application, but with funding and deadline pressures, there might be some incentive to drop out clauses that really might matter. So it’s important to understand how tailored the standard was. Was it the full standard, or were pieces left out that really do matter?

Now to be sure, sometimes limited conformance on all these paths makes perfect sense. It’s okay to do that so long as, first of all, you don’t compromise safety. So you’re only leaving out things that don’t matter to safety. Second, you’re crystal clear about what you’re claiming and you don’t ask more of the system than it can really deliver for safety. 

Typically, signs of aggressive tailoring or conformance to only part of a standard are problematic for life critical systems. It’s common to see misunderstandings based on one or more of these issues. Somebody claims conformance to a standard but does not disclose the limitations, and somebody else gets confused and says, oh, well, the safety box has been checked so nothing to worry about. But in fact, safety is a problem because the conformance claim is much narrower than is required for safety in that application.

During development (before the design is complete), partial conformance and measuring progress against partial conformance can actually be quite helpful. Ideally, there’s a safety case that documents the conformance plan and has a list of how you plan to conform to all the aspects of the standard you care about. Then you can measure progress against the completeness of the safety case. The progress is probably not linear, and not every clause takes the same amount of effort. But still, just looking at what fraction of the standard you’ve achieved conformance to internally can be very helpful for managing the engineering process.
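
As an illustration of measuring that kind of progress, here is a minimal sketch that tracks clause status in a dictionary; the clause identifiers and statuses are invented placeholders, not taken from any actual standard.

```python
# Toy conformance-progress metric. Clause IDs and statuses below are
# invented placeholders, not real clauses from UL 4600 or any other standard.
clause_status = {
    "5.1.1": "done",          # evidence complete and reviewed
    "5.1.2": "in_progress",
    "5.2.1": "done",
    "5.2.2": "tailored_out",  # excluded with a documented, safety-neutral rationale
    "6.1.1": "not_started",
}

applicable = [s for s in clause_status.values() if s != "tailored_out"]
done = sum(1 for s in applicable if s == "done")
print(f"Conformance progress: {done}/{len(applicable)} applicable clauses "
      f"({100 * done / len(applicable):.0f}%)")
print(f"Tailored out: {len(clause_status) - len(applicable)} clause(s) "
      "(each needs a documented rationale)")
```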

Near the end of the design validation process, you can do mock conformance checks. The metric there is the number of problems found with conformance, which basically amounts to bug reports against the safety case rather than against the software itself.

Summing up, conforming to relevant safety standards is an essential part of ensuring safety, especially in life critical products. There are a number of metrics, measures and ways to assess how well that conformance actually is going to help your safety. It’s important to make sure you’ve conformed to the right standards, you’ve conformed with the right scope and that you’ve done the right amount of tailoring so that you’re actually hitting all the things that you need to in the engineering validation and deployment process to ensure you’re appropriately safe.

For the podcast version of this posting, see: https://archive.org/details/metrics-14-safety-standard-conformance-metrics

Thanks to podcast producer Jackie Erickson.

Surprise Metrics (Metrics Episode 12)

You can estimate how many unknown unknowns are left to deal with via a metric that measures the surprise arrival rate.  Assuming you're really looking, infrequent surprises predict that they will be infrequent in the near future as well.

Your first reaction to thinking about measuring unknown unknowns may be how in the world can you do that? Well, it turns out the software engineering community has been doing this for decades: they call it software reliability growth modeling. That area’s quite complex with a lot of history, but for our purposes, I’ll boil it down to the basics.

Software reliability growth modeling deals with the problem of knowing whether your software is reliable enough, or in other words, whether or not you’ve taken out enough bugs that it’s time to ship the software. All things being equal, if the same complete system test reveals 10 times more defects in the current release than in the previous release, it’s a good bet your new release is not as reliable as your old one.

On the other hand, if you’re running a weekly test/debug cycle with a single release, so every week you test it, you remove some bugs, then you test it some more the next week, at some point you’d hope that the number of bugs found each week will be lower, and eventually you’ll stop finding bugs. When the number of bugs per week you find is low enough, maybe zero, or maybe some small number, you decide it’s time to ship. Now that doesn’t mean your software is perfect! But what it does mean is there’s no point testing anymore if you’re consistently not finding bugs. Alternately, if you have a limited testing budget, you can look at the curve over time of the number of bugs you’re discovering each week and get some sort of estimate about how many bugs you would find if you continued testing for additional cycles.

At some point, you may decide that the number of bugs you’ll find and the amount of time it will take simply isn’t worth the expense. And especially for a system that is not life critical, you may decide it’s just time to ship. A dizzying array of mathematical models has been proposed over the years for the shape of the curve of how many more bugs are left in the system based on your historical rate of how often you find bugs. Each one of those models comes with significant assumptions and limits to applicability. 

But the point is that people have been thinking about this for more than 40 years in terms of how to project how many more bugs are left in a system even though you haven’t found them. And there’s no point trying to reinvent all those approaches yourself.
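
As a toy illustration of the general idea (not one of the published models), the sketch below fits a simple exponential decay to weekly bug counts and extrapolates the next few weeks; the weekly counts are invented example data.

```python
import math

# Invented example data: bugs found in each weekly test/debug cycle.
weekly_bugs = [42, 30, 21, 16, 11, 8, 6, 4]

# Fit bugs_per_week ~ a * exp(-b * week) by least squares on the log of the
# counts. This is only a toy illustration of the reliability-growth idea,
# not one of the published models mentioned above.
weeks = list(range(len(weekly_bugs)))
logs = [math.log(c) for c in weekly_bugs]
n = len(weeks)
mean_w = sum(weeks) / n
mean_l = sum(logs) / n
slope = (sum((w - mean_w) * (lg - mean_l) for w, lg in zip(weeks, logs))
         / sum((w - mean_w) ** 2 for w in weeks))
a, b = math.exp(mean_l - slope * mean_w), -slope

print(f"Fitted arrival rate: {a:.1f} * exp(-{b:.2f} * week)")
for week in range(n, n + 3):
    print(f"Predicted bugs found in week {week}: {a * math.exp(-b * week):.1f}")

# Rough estimate of bugs still to be found if testing continued indefinitely
# (sum of the fitted geometric tail).
remaining = a * math.exp(-b * n) / (1 - math.exp(-b))
print(f"Rough estimate of bugs remaining: {remaining:.0f}")
```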

Okay, so what does this have to do with self-driving car metrics?

Well, it’s really the same problem. In software tests, the bugs are the unknowns, because if you knew where the bugs were, you’d fix them. You’re trying to estimate how many unknowns there are or how often they’re going to arrive during a testing process. In self-driving cars, the unknown unknowns are the things you haven’t trained on or haven’t thought about in the design. You’re doing road testing, simulation and other types of validation to try and uncover these. But it ends up in the same place. You’re trying to look for latent defects or functionality gaps and you’re trying to get an idea of how many more are left in the system that you haven’t found yet, or how many you can expect to find if you invest more resources in further testing.

For simplicity, let’s call the things in self-driving cars that you haven’t found yet surprises. 

The reason I put it this way is that there are two fundamentally different types of defects in these systems. One is you built the system the wrong way. It’s an actual software bug. You knew what you were supposed to do, and you didn’t get there. Traditional software testing and traditional software quality will help with those, but a surprise isn’t that. 

A surprise is a requirements gap or something in the environment you didn’t know was there. Or a surprise has to do with imperfect knowledge of the external world. But you can still treat it as a similar, although different, class from software defects and go at it the same way. One way to look at this is that a surprise is something you didn’t realize should be in your ODD, and therefore is a defect in the ODD description. Or, you didn’t realize the surprise could kick your vehicle out of the ODD, which is a defect in the model of ODD violations that you have to detect. You’d expect that surprises that can lead to safety-critical failures are the ones that need the highest priority for remediation.

To create a metric for surprises, you need to track the number of surprises over time. You hope that over time, the arrival rate of surprises gets lower. In other words, they happen less often and that reflects that your product has gotten more mature, all things being equal. 

If the number of surprises gets higher, that could be a sign that your system has gotten worse at dealing with unknowns, or could also be a sign that your operational domain has changed, and more novel things are happening than used to because of some change in the outside world. That requires you to update your ODD to reflect the new real world situation. Either way, a higher arrival rate of surprises means you’re less mature or less reliable and a lower rate means you’re probably doing better.
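
A minimal sketch of tracking such an arrival rate from logged surprise timestamps; the data and the review windows are invented for illustration.

```python
from datetime import datetime, timedelta

# Invented example log: timestamps of confirmed surprises (true requirements
# or ODD gaps, after triage has filtered out ordinary design bugs).
surprise_log = [
    datetime(2021, 3, 1, 9, 30),
    datetime(2021, 3, 4, 14, 10),
    datetime(2021, 3, 20, 11, 5),
    datetime(2021, 4, 18, 16, 45),
]

def arrival_rate(log, window_days, as_of):
    """Surprises per 1,000 test hours over the trailing window (assumes
    round-the-clock testing just to keep the example short)."""
    start = as_of - timedelta(days=window_days)
    recent = [t for t in log if t >= start]
    test_hours = window_days * 24
    return len(recent) / test_hours * 1000

now = datetime(2021, 4, 30)
print(f"Last 30 days: {arrival_rate(surprise_log, 30, now):.2f} surprises per 1,000 h")
print(f"Last 60 days: {arrival_rate(surprise_log, 60, now):.2f} surprises per 1,000 h")
```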

This may sound a little bit like disengagements as a metric, but there’s a profound difference. That difference applies even if disengagements on road testing are one of the sources of data.

The idea is that measuring how often you disengage, that a safety driver takes over, or the system gives up and says, “I don’t know what to do” is a source of raw data. But the disengagements could be for many different reasons. And what you really care about for surprises is only disengagements that happened because of a defect in the ODD description or some other requirements gap.

Each incident that could be a surprise needs to be analyzed to see if it was a design defect, which isn’t really an unknown unknown. That’s just a mistake that needs to be fixed.

But some incidents will be true unknown unknown situations that require re-engineering or retraining your perception system or another remediation to handle something you didn’t realize until now was a requirement or operational condition that you need to deal with. Since even with a perfect design and perfect implementation, unknowns are going to continue to present risk, what you need to be tracking with a surprise metric is the arrival of actual surprises.

It should be obvious that you need to be looking for surprises to see them. That’s why things like monitoring near misses and investigating the occurrence of unexpected, but seemingly benign, behavior matters. Safety culture plays a role here. You have to be paying attention to surprises instead of dismissing them if they didn’t seem to do immediate harm. A deployment decision can use the surprise arrival rate metric to get an approximate answer of how much risk will be taken due to things missing from the system requirements and test plan. In other words, if you’re seeing surprises arrive every few minutes or every hour and you deploy, there’s every reason to believe that will continue to happen about that often during your initial deployment.

If you haven’t seen a surprise in thousands or hundreds of thousands of hours of testing, then you can reasonably assume that surprises are unlikely to happen every hour once you deploy. (You can always get unlucky, so this is playing the odds to be sure.)

To deploy, you want to see the surprise arrival rate reduced to something acceptably low. You’ll also want to know the system has a good track record so that when a surprise does happen, it’s pretty good at recognizing something has gone wrong and doing something safe in response.

To be clear, in the real world, the arrival rate of surprises will probably never be zero, but you need to measure that it’s acceptably low so you can make a responsible deployment decision.

For the podcast version of this posting, see: https://archive.org/details/metrics-13-surprise-metrics

Thanks to podcast producer Jackie Erickson.

Operational Design Domain Metrics (Metrics Episode 11)

Operational Design Domain metrics (ODD metrics) deal with both how thoroughly the ODD has been validated as well as the completeness of the ODD description. How often the vehicle is forcibly ejected from its ODD also matters.


An ODD is the designer’s model of the types of things that the self-driving car is intended to deal with. The actual world, in general, is going to have things that are outside the ODD. As a simple example, the ODD might include fair weather and rain, but snow and ice might be outside the ODD because the vehicle is intended to be deployed in a place where snow is very infrequent.

Despite designers’ best efforts, it’s always possible for the ODD to be violated. For example, if the ODD is Las Vegas in the desert, the system might be designed for mostly dry weather or possibly light rain. But in fact, in Vegas, once in a while, it rains and sometimes it even snows. The day that it snows, the vehicle will be outside its ODD, even though it’s deployed in Las Vegas.

There are several types of ODD safety metrics that can be helpful. One is how well validation covers the ODD. What that means is whether the testing, analysis, simulation and other validation actually cover everything in the ODD, or have gaps in coverage.

When considering ODD coverage it’s important to realize that ODDs have many, many dimensions. There is much more to an ODD than just geo-fencing boundaries. Sure, there’s day and night, wet versus dry, and freeze versus thaw. But you also have traffic rules, condition of road markings, the types of vehicles present, the types of pedestrians present, whether there are leaves on the trees that affect LIDAR localization, and so on. All these things and more can affect perception, planning, and motion constraints.

While it’s true that a geo-fence area can help limit some of the diversity in the ODD, simply specifying a geo-fence doesn’t tell you everything you need to know, nor does it mean you’ve covered all the things that are inside that geo-fenced area. Metrics for ODD validation can be based on a detailed model of what’s actually in the ODD -- basically an ODD taxonomy of all the different factors that have to be handled and how well testing, simulation, and other validation cover that taxonomy.
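
A toy sketch of such a coverage metric over a deliberately tiny, invented ODD taxonomy; a real taxonomy would have far more dimensions and entries.

```python
from itertools import product

# Deliberately tiny, invented ODD taxonomy -- a real one has many more
# dimensions (road markings, actor types, foliage, local rules, ...).
odd_taxonomy = {
    "lighting": ["day", "night"],
    "weather": ["dry", "rain"],
    "road": ["divided highway", "urban street"],
}

# Which taxonomy combinations have validation evidence (tests/simulation)?
covered = {
    ("day", "dry", "divided highway"),
    ("day", "dry", "urban street"),
    ("night", "dry", "divided highway"),
    ("day", "rain", "urban street"),
}

all_cells = set(product(*odd_taxonomy.values()))
coverage = len(covered & all_cells) / len(all_cells)
print(f"ODD validation coverage: {coverage:.0%}")
print("Uncovered combinations:")
for cell in sorted(all_cells - covered):
    print("  ", cell)
```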

Another type of metric is how well the system detects ODD violations. At some point, a vehicle will be forcibly ejected from its ODD even though it didn’t do anything wrong, simply due to external events. For example, a freak snowstorm in the desert, a tornado, or the appearance of a completely unexpected new type of vehicle can force a vehicle out of its ODD with essentially no warning. The system has to recognize when it has exited its ODD and be safe. A metric related to this is how often ODD violations are happening during testing and on the road after deployment.

Another metric is what fraction of ODD violations are actually detected by the vehicle. This could be a crucial safety metric, because if an ODD violation occurs and the vehicle doesn’t know it, it might be operating unsafely. Now it’s hard to build a detector for ODD violations that the vehicle can’t detect (and such failures should be corrected). But this metric can be gathered by root cause analysis whenever there’s been some sort of system failure or incident. One of the root causes might simply be failure to detect an ODD violation.

Coverage of the ODD is important, but an equally important question is how good is the ODD description itself? If your ODD description is missing many things that happen every day in your actual operational domain (the real world), then you’re going to have some problems.

A higher-level metric to talk about is ODD description quality. That is likely to be tied to other metrics already mentioned in this and other segments. Here are some examples. The frequency of ODD violations can help inform the coverage metric of the ODD against the operational domain. Frequency of motion failures could be related to motion system problems, but could also be due to missing environmental characteristics in your ODD. For example, cobblestone pavers are going to have significantly different surface dynamics than a smooth concrete surface and might come as a surprise when they are encountered. 

Frequency of perception failures could be due to training issues, but could also be something missing from the ODD object taxonomy. For example, a new aggressive clothing style or new types of vehicles. The frequency of planning failures could be due to planning bugs, but could also be due to the ODD missing descriptions of informal local traffic conventions.

Frequency of prediction failures could be prediction issues, but could also be due to missing a specific class of actors. For example, groups of 10 and 20 runners in formation near a military base might present a challenge if formation runners aren't in training data. It might be okay to have an incomplete ODD so long as you can always tell when something is happening that forced you out of the ODD. But it’s important to consider that metric issues in various areas might be due to unintentionally restricted ODD versus being an actual failure of the system design itself.

Summing up, ODD metric should address how well validation covers the whole ODD and how well the system detects ODD violations. It’s also useful to consider that a cause of poor metrics and other aspects of the design might in fact be that the ODD description is missing something important compared to what happens in the real world.

For the podcast version of this posting, see: https://archive.org/details/metrics-12-odd-metrics

Thanks to podcast producer Jackie Erickson.


Prediction Metrics (Metrics Episode 10)

You need to drive not where the free space is, but where the free space is going to be when you get there. That means perception classification errors can affect not only the "what" but also the "future where" of an object.

Prediction metrics deal with how well a self driving car is able to take the results of perception data and predict what happens next so that it can create a safe plan. 

There are different levels of prediction sophistication required depending on operational conditions and desired own-vehicle capability. The first, simplest prediction capability is no prediction at all. If you have a low speed vehicle in an operational design domain in which everything is guaranteed to also be moving at low speeds and be relatively far away compared to the speeds, then a fast enough control loop might be able to handle things based simply on current object positions. The assumption there would be everything’s moving slowly, it’s far away, and you can stop your vehicle faster than things can get out of control.  (Note that if you move slowly but other vehicles move quickly, that violates the assumptions for this case.)

The prediction basically amounts to, nothing moves fast compared to its distance. But even here, a prediction metric can be helpful because there’s an assumption that everything is moving slow compared to its distance away. That assumption might be violated by nearby objects moving slowly but a little bit too fast because they’re so close, or by far away things moving fast such as a high speed vehicle in an urban environment that is supposed to have a low speed limit. The frequency at which the assumption is violated that things move slowly compared to the distance away will be an important safety metric.
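
Here is a minimal sketch of monitoring that assumption; the speeds, distances, deceleration figure, and safety margin are invented example values.

```python
# Toy monitor for the "nothing moves fast compared to its distance" assumption.
# Numbers below are invented example values, not tuned for any real vehicle.
OWN_MAX_DECEL = 3.0   # m/s^2, assumed comfortable stopping capability
SAFETY_MARGIN = 2.0   # require this multiple of our worst-case stopping time

violations = 0

def check_object(distance_m, closing_speed_mps, own_speed_mps):
    """Count a violation when an object could close the gap before we can stop."""
    global violations
    time_to_stop = own_speed_mps / OWN_MAX_DECEL
    time_to_reach = distance_m / max(closing_speed_mps, 0.01)
    if time_to_reach < SAFETY_MARGIN * time_to_stop:
        violations += 1
    return violations

# A fast-closing vehicle only 20 m away violates the low-speed-everywhere assumption.
check_object(distance_m=20.0, closing_speed_mps=20.0, own_speed_mps=2.0)
print(f"Assumption violations logged: {violations}")
```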

For self driving cars that operate at more than a slow crawl, you’ll start to need some sort of prediction based on likely object movement. You often hear: "drive to where the free space is," with the free space being the open road space that’s safe for a car to maneuver in. 

But that doesn’t actually work once you’re doing more than about walking speed, because it isn’t where the free space is now that matters. What you need to do is to drive to where the free space is going to be when you get there. Doing that requires prediction because many of the things on the road move over time, changing where the free space is one second from now, versus five seconds from now, versus 10 seconds from now.

A starting point for prediction is assuming that everything maintains the same speed and direction as it currently has, updating the speeds and directions periodically as you run your control loop. Doing this requires tracking so that you know not only where something is, but also what its direction and speed are. That means that with this type of prediction, metrics having to do with tracking accuracy become important, including distance, direction of travel and speed. 

For safety it isn’t perfect position accuracy on an absolute coordinate frame that matters, but rather whether tracking is accurate enough to know if there’s a potential collision situation or other danger. It’s likely that better accuracy is required for things that are close and things that are moving quickly toward you and in general things that pose collision threats. 

For more sophisticated self driving cars, you’ll need to predict something more sophisticated than just tracking data. That’s because other vehicles, people, animals and so on will change direction or even change their mind about where they’re going or what they’re doing. 

From a physics point of view, one way to look at this is in terms of derivatives. The simplest prediction is the current position. A slightly more sophisticated prediction has to do with the first derivative: speed and direction. An even more sophisticated prediction would be to use the second derivative: acceleration and curvature. You can even use the third derivative: jerk or change in acceleration. To the degree you can predict these things, you’ll be able to have a better understanding of where the free space will be when you get there.
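
To make the derivative idea concrete, here is a small kinematic sketch comparing a constant-velocity prediction with one that also uses acceleration; the numbers are arbitrary examples.

```python
# Predict future position using successively higher derivatives of motion.
# All numbers are arbitrary example values (1D for simplicity).

def predict(x, v, a=0.0, j=0.0, t=1.0):
    """Position after t seconds from position x, velocity v, acceleration a,
    and jerk j."""
    return x + v * t + 0.5 * a * t**2 + (1.0 / 6.0) * j * t**3

x0, v0, a0 = 10.0, 4.0, -1.5   # a decelerating object 10 m ahead
for t in (1.0, 2.0, 3.0):
    cv = predict(x0, v0, t=t)       # constant-velocity prediction
    ca = predict(x0, v0, a0, t=t)   # adds the second derivative
    print(f"t={t:.0f}s  const-velocity: {cv:5.1f} m   with accel: {ca:5.1f} m")

# The gap between the two grows with the prediction horizon, which is why
# longer-horizon planning benefits from higher-order (and probabilistic)
# prediction of where the free space will be.
```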

From an every day point of view, the way to look at it is that real things don’t stand still -- they move. But when they’re moving, they change direction, they change speed, and sometimes they completely change what they’re trying to do, maybe doubling back on themselves. 

An example of a critical scenario is a pedestrian standing on a curb waiting for a crossing light. Human drivers use the person’s body language to tell whether the pedestrian is at risk of stepping off the curb even though they’re not supposed to be crossing. While that’s not perfect, most drivers will have stories of the time they didn’t hit someone because they noticed the person was distracted by looking at their cell phone or the person looked like they were about to jump into the road and so on. If you only look at speed and possibly acceleration, you won’t handle cases in which a human driver would say, “That looks dangerous. I’m going to slow down to give myself more reaction time in case behavior changes suddenly.” 

It isn’t just the current trajectory that matters for a pedestrian. It’s what the pedestrian’s about to do, which might be a dramatic change from standing still to running across the street to catch a bus. 

The same would hold true for a human driver of another vehicle when you have some telltale available that suggests they’re about to swerve or turn in front of you. For even more sophisticated predictions, you probably don’t end up with a single prediction, but rather with a probability cloud of possible positions and directions of travel over time, where keeping on the same path might be the most probable. But maximum command authority maneuvers (right turn, left turn, accelerate, decelerate) might all be possible with lower, but not zero, probability. Given how complicated prediction can be, metrics might have to be more complicated than simply "did you guess exactly right?" There’s always going to be some margin of error in any prediction, but you need to predict in a way that results in acceptable safety even in the face of surprises.

One way to handle the prediction is to take a snapshot of the current position and the predicted movement. Wait a few control loop cycles, some fraction of a second or a second. Then check to see how it turned out. In other words, you can just wait a little while, see how well your prediction turned out and keep score as to how good your prediction is. In terms of metrics, you need some sort of bounds on the worst case error of prediction. Every time that bound is violated, it is potentially a safety-related event and should be counted as a metric. Those bounds might be probabilistic in nature, but at some point there has to be a bound as to what is acceptable prediction error and what’s not.
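
A minimal sketch of that scoring loop: record a prediction, wait, compare against what was actually observed later, and count violations of an error bound. The bound and data here are invented example values.

```python
# Toy prediction scorer: compare predictions made earlier against positions
# actually observed later, and count bound violations as a metric.
ERROR_BOUND_M = 0.5   # invented example of an acceptable prediction error

pending = {}          # (object_id, check_time) -> predicted position
violations = 0
checks = 0

def record_prediction(object_id, check_time, predicted_position):
    pending[object_id, check_time] = predicted_position

def score_observation(object_id, time, observed_position):
    """Call when ground truth for a previously predicted time becomes available."""
    global violations, checks
    predicted = pending.pop((object_id, time), None)
    if predicted is None:
        return
    checks += 1
    if abs(observed_position - predicted) > ERROR_BOUND_M:
        violations += 1

record_prediction("ped_17", check_time=12.0, predicted_position=3.0)
score_observation("ped_17", time=12.0, observed_position=4.2)  # off by 1.2 m
print(f"Prediction bound violations: {violations}/{checks}")
```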

To the degree that prediction is based on object type, for example, you’re likely to assume a pedestrian typically cannot go as fast as a bicycle, but that a pedestrian can jump backwards and pivot turn. You might want to know if the type-specific prediction behavior is violated. For example, a pedestrian suddenly going from stop to 20 miles per hour crossing right in front of your car, might be a competitive sprinter that’s decided to run across the road, but more likely signals that electric rental scooters have arrived in your town and you need to include them in your operational design domain.

Prediction metrics might be related to the metrics for correct object classification if the prediction is based on the class of the object. 

Summing up, sophisticated prediction of behavior might be needed for highly permissive operation in complex dense environments. If you’re in a narrow city street with pedestrians close by and other things going on, you’re going to need really good prediction. Metrics for this topic should focus not only on motion measurement accuracy and position accuracy, but also on the ability to successfully predict what happens next, even if a particular object performs a sudden change in direction, speed, and so on. In the end, your metric should help you understand the likelihood that you’ll correctly interpret where the free space is going to be so that your path planner can plan a safe path.

For the podcast version of this posting, see: https://archive.org/details/metrics-11-prediction-metrics

Thanks to podcast producer Jackie Erickson.