Apple freezes its self-driving car project

Bloomberg reports that Apple has decided to scale back its self-driving car project. As a result, members of the car team have been reassigned or let go, on top of those who quit. The goal now is apparently to develop an autonomous driving system that would allow Apple either to partner with existing carmakers or to resume its own automobile initiative at some point in the future. In any case, Apple customers will be better off for it (and probably the company as well). Let the cobbler stick to his last.

Autonomous Vehicles Gain a Big Friend: The U.S. Government


The U.S. government recently came out strongly in favor of autonomous cars. Self-driving vehicles, officials said, would save lives and make commuters’ lives less miserable.

To be sure, the government stopped short of issuing new regulations in the rapidly developing market. Still, the 15-point guidelines it issued were specific enough to signal its focus on safety, yet vague enough not to restrict further development.

The guidelines deal with four broad issues: safety standards for the design and development of autonomous vehicles; a recommendation that states agree on uniform policies for self-driving cars; how current regulations apply to driverless vehicles; and openness to new regulations on the technology.

At Future Imperfect, we have repeatedly addressed the challenges posed by self-driving cars, not always welcoming the new technology. It would take a lot of convincing, and perhaps some more forceful methods of persuasion, to get this writer to ride a fast machine with nobody at the wheel.

Yet what we find commendable in the government’s attitude is what has often set the United States apart from other countries. Faced with inevitable technological progress, the government chose to embrace it, and thereby to take a greater hand in its development. Hindering the technology would not stop it, and might even imperil passengers and pedestrians by leaving a regulatory vacuum. A farsighted stance, by contrast, is pioneering and serves the public interest best.

A Self-Driving Car Causes a Fatality: Who’s Responsible?

A Tesla Model S crashed last May while on autopilot, killing its driver. While the car’s self-driving feature has earned favorable reviews, it does have its glitches. In this case, the autopilot failed to spot a white tractor trailer against “a brightly lit sky.” If past experience is any guide, a long legal fight to determine liability will probably ensue. The novelty here, however, is that one of the subjects in the incident is a piece of software.

As Patrick Lin discusses in his article for Forbes, disclaimers may not exempt Tesla from responsibility in the crash. The self-driving feature is still in beta testing, and drivers agreed to use it at their own discretion while monitoring traffic. Is it wise to activate a feature still in beta during real driving? Is that a sign of inexperience from a very new carmaker such as Tesla? General Motors, after all, has refused to allow an autopilot in beta-testing mode on real roads.

It would not be the first case in which a fatality resulting from product malfunction opens room for debate. Simply put, no matter what the fine print says, the product failed. That is so regardless of the arguments the plaintiff’s and the defendant’s lawyers will put forth, should the case reach court. But a new legal dimension is opening up: who is responsible when a device or system that lacks human consciousness and will makes life-or-death decisions? Would drivers have to configure an “ethics setting” profile that determines the autopilot’s responses to the universe of potential hazards?

Most importantly, the advantages of an autopilot in emergency situations are easy to imagine. But if the autopilot is used to free drivers up for distractions, the peril will fall not only on them but also on other motorists and pedestrians who never signed up for it.