Autonomous Cars: The Level 5 Fallacy

It feels good to predict the emergence of self-driving (SD) cars: It has to happen because it’d be cool if it did. But when? Betting your company on even an approximate estimate of “Level Five: Full Automation” is a risk you don’t want to take.


There are six automation levels for autonomous cars (recapped in a short code sketch after the list):

Level 0: No Automation

You drive it. Acceleration, braking, and steering are all controlled by a human driver at all times, even if the driver is assisted by warning tones or safety intervention systems. If your car has automated emergency braking, for example, it still counts as level zero.

Level 1: Driver Assistance

Hands on the wheel. In certain driving modes, the car can take control of either the steering wheel or the pedals, but never both at once. The best examples of level one automation are adaptive cruise control and park assist.

Level 2: Partial Automation

Hands off the wheel, eyes on the road. A level two vehicle has certain modes in which the car can take over both the pedals AND the wheel, but only under certain conditions, and the driver must maintain ultimate control over the vehicle. This is where Tesla’s Autopilot has sat since 2014.

Level 3: Conditional Automation

Hands off the wheel, eyes off the road – sometimes. In a level 3 vehicle, the car has certain modes that will fully take over the driving responsibilities, under certain conditions, but the driver is expected to retake control when the system asks for it. The car can decide when to change lanes and how to respond to dynamic incidents on the road, but it uses the human driver as the fallback system. These are dangerous waters in terms of liability, and automakers are more or less trying to skip this level and move straight to level four.

Level 4: High Automation

Hands off, eyes off, mind off – sometimes. A level four vehicle can be driven by a human, but it doesn’t ever need to be. It can drive itself full-time under the right circumstances; if it encounters something it can’t handle, it can ask for human assistance, but it will park itself and put its passengers in no danger if human help isn’t forthcoming. At this point, you’re looking at a true self-driving car. This is the level at which Google/Waymo’s test cars have been operating for a number of years now.

Level 5: Full Automation

Steering wheel is optional. The front seats might face backwards to make this a social space, because the car neither needs nor wants your help. Full-time automation of all driving tasks on any road, under any conditions, whether there's a human on board or not.
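The taxonomy above is compact enough to write down. Here is a minimal sketch in Python; the field names and encoding are mine, invented for illustration, not drawn from any standard document or real API:

    from dataclasses import dataclass

    # An illustrative encoding of the six levels described above.
    # Field names are hypothetical, chosen for this sketch only.
    @dataclass(frozen=True)
    class AutomationLevel:
        level: int
        name: str
        system_controls: str  # what the computer may operate at once
        fallback: str         # who takes over when the system gives up
        unrestricted: bool    # any road, any weather, human optional

    LEVELS = [
        AutomationLevel(0, "No Automation",          "nothing",             "human",  False),
        AutomationLevel(1, "Driver Assistance",      "steering or pedals",  "human",  False),
        AutomationLevel(2, "Partial Automation",     "steering and pedals", "human",  False),
        AutomationLevel(3, "Conditional Automation", "steering and pedals", "human",  False),
        AutomationLevel(4, "High Automation",        "steering and pedals", "system", False),
        AutomationLevel(5, "Full Automation",        "steering and pedals", "system", True),
    ]

    # The Level 5 debate hinges on the last column:
    # only one of the six levels drops every restriction.
    for lvl in LEVELS:
        print(f"Level {lvl.level} ({lvl.name}): fallback {lvl.fallback}, "
              f"unrestricted: {lvl.unrestricted}")

Note what separates the last three rows: level 3 still leans on a human fallback, level 4 handles its own failures within a restricted domain, and only level 5 claims to need nothing and no one.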


Level 5 means going from point A to point B with a fraction, say 1/10th, of today’s accident rate. No ifs, no buts, no steering wheel. It’s a great vision, but one that’s not likely to happen any time soon.
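To see what that bar implies, here is a back-of-the-envelope sketch. The baseline of roughly 1.2 US traffic fatalities per 100 million vehicle-miles is my assumption (an approximate mid-2010s figure), not a number from the article:

    # Rough arithmetic on "a fraction, say 1/10th, of today's accident rate".
    # Assumed baseline: ~1.2 US traffic fatalities per 100 million vehicle-miles,
    # an approximate mid-2010s figure (an assumption, not from the article).
    baseline = 1.2              # fatalities per 100 million miles
    target = baseline / 10      # the hypothetical Level 5 bar: 0.12

    miles_per_fatality_today = 100_000_000 / baseline   # ~83 million miles
    miles_per_fatality_target = 100_000_000 / target    # ~833 million miles

    print(f"Today: roughly one fatality per {miles_per_fatality_today:,.0f} miles")
    print(f"Level 5 bar: roughly one per {miles_per_fatality_target:,.0f} miles")

One fatality per hundreds of millions of miles is a rate you can’t even demonstrate without fleets logging miles on that same staggering scale, which is one more reason the timeline keeps stretching.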

Imagine the 1 pm Sunday scene: crowded sidewalks and sticky car traffic. In today’s world, pedestrians and drivers manage a peaceful if hiccupping coexistence. Through eye contact, nods, hand signals, and, yes, courteous restraint, pedestrians sometimes decide to forfeit their right-of-way and let a few cars through. On the whole, drivers are equally patient and polite.

Can we “algorithmicize” eye contact and stuttering restraint? Can an SD car acknowledge a pedestrian’s nod, or negotiate “turning rights” with a conventional vehicle? No, we can’t. And we don’t appear to have a path to overcome such “mundane” challenges.


Chris Urmson, Google’s Director of Self-Driving Cars from 2013 to late 2016, gives a sobering yet helpful vision of the project’s future, summarized by Lee Gomes in IEEE Spectrum:

“Not only might it take much longer to arrive than the company has ever indicated (as long as 30 years, said Urmson), but the early commercial versions might well be limited to certain geographies and weather conditions. Self-driving cars are much easier to engineer for sunny weather and wide-open roads, and Urmson suggested the cars might be sold for those markets first.”


Last week, we saw how the engineers in charge of Tesla’s self-driving technology keep leaving the company, quickly followed out the door by their replacements. Apparently, they disagree with Elon Musk’s overly enthusiastic representation of the future of Tesla’s SD technology. This is more telling than it might seem. Not about Musk’s enthusiasm (it has worked well for him so far), but about the engineers’ views of the SD timeline. A two-to-three-year engineering timeline isn’t unusual; five years is considered long-term. Beyond the five-year horizon? No thanks, I’ll switch to a more spiritually and financially rewarding pursuit. We’ll leave the worthy but nebulous commitments to Carnegie Mellon and Stanford.

In other words: No Level 5 in the foreseeable, bankable future, and no soothing vision of a saloon on wheels on the road tomorrow.

Instead of the pure, straight-to-Level 5 moonshot, we’ll see a progression of incremental improvements: percentages gained, more miles of roads successfully (and unsuccessfully) navigated. And we’ll be treated to vociferous arguments not unlike those we saw, and keep seeing, in PCs, smartphones, and other tech battlefields.

The messy “30-year transition”, the many uncertain steps in sensor and software engineering, and the poorly understood problems of coexistence between conventional and SD cars leave much room for competitors large and small. SD cars are a much more complicated challenge than the PCs Microsoft helped standardize.


Source: Jean-Louis Gassée, 2017.

Wednesday Sep 20, 2017