Apple freezes its self-driving car project

Bloomberg reports that Apple has decided to scale back its self-driving car project. As a result, people from the car team have been reassigned or let go, in addition to those who quit. The goal now is apparently to develop an autonomous driving system that would allow Apple either to partner with existing carmakers or to resume its own automobile initiative at some point in the future. In any case, Apple customers will be better off for it (and probably the company as well). Let the cobbler stick to his last.

Autonomous Vehicles Gain a Big Friend: The U.S. Government

The U.S. government recently came out strongly in favor of autonomous cars. Self-driving vehicles, officials said, would save lives and make commuters' lives less miserable.

To be sure, the government stopped short of issuing new regulations in this rapidly developing market. Still, the 15-point guidelines it issued were specific enough to signal a focus on safety, yet vague enough to avoid restricting further development.

The guidelines deal with four broad issues: safety standards for the design and development of autonomous vehicles; a recommendation that states adopt uniform policies on self-driving cars; how current regulations apply to driverless vehicles; and opening the field to new regulations on the technology.

At Future Imperfect, we have repeatedly addressed the challenges posed by self-driving cars, not always welcoming the new technology. It would take a lot of convincing, and perhaps some more forceful methods of persuasion, for this writer to ride a fast machine with nobody at the wheel.

Yet what we find commendable in the government's attitude is what has often set the United States apart from other countries. In the face of inevitable technological progress, the government decided to embrace it, and thereby gain greater involvement in its development. Hindering the technology would not stop it, and might even imperil passengers and pedestrians by leaving a regulatory vacuum. A farsighted stance, by contrast, is pioneering and best serves the public interest.

Uber’s Pilot Test of Self-Driving Cars and the Point of Automation

A reporter from The Verge tested one of the self-driving vehicles that the ride-sharing company Uber began testing in Pittsburgh. As he describes it in a gripping article, the ride was both thrilling and mundane. It also included a few hair-raising moments, such as a pedestrian appearing out of nowhere in front of the car.

Smart as it is, the computer did order the car to brake behind an SUV that was not moving. Still, the self-navigating system did not understand the other motorist's gestures to drive around his vehicle; human intervention was needed. The automaton would also unexpectedly return the car to human control. Without access to the car's logs, however, it will not be possible to know why.

For its pilot test, Uber placed two trained employees in each car, one behind the wheel and the other in the passenger seat. As the technology is still in its infancy, human supervision is needed. Regardless, this is no less a landmark: for the first time, a taxi service will be offering rides in self-driving cars to passengers who explicitly opt in.

Which brings us to the question we want to address: What is the point of automation if it will require a pilot and a co-pilot? Surely, this is only for the initial period. Still, as The Verge’s writer notes, there are three dimensions to this issue: technology, social acceptance, and regulation.

When it comes to technology, the systems are still immature: they do not interpret human body language, for instance, and may be ill-equipped (to put it mildly) to deal with unexpected behavior that would not surprise a reasonably experienced driver, if only because humans can anticipate the conduct of their kind. Regulation is another major question: what type of insurance will cover robots, and how will the liability issues they raise be resolved?

But the essential barrier is social acceptance. And for good reason too. The point of a self-driving car would be to free up motorists’ time to do other things. Instinctively, however, few people would trust a robotic car to guide itself while they fiddle with their mobile phones or watch a movie during their ride.

We simply do not trust self-driving cars because we do not trust robots to make decisions that involve assessing the risks of human behavior and conscience, beyond calculations and mechanics. Cars are fast-moving vehicles that mostly travel the streets and roads of densely populated areas. There is no doubt that automation will take over vehicular traffic at some point. That, however, will happen only when robots have shown themselves to be reliable partners of humans on the road, able to predict their habits, whether at the wheel or when crossing the street.