So it finally happened: a self-driving car struck and killed a pedestrian in Arizona. And, of course, the car was an Uber.
(Why Uber? Well, Uber is a taxi firm: lots of short urban and suburban journeys through neighbourhoods where fares cluster. In contrast, once you set aside the hype, Tesla's autopilot is mostly an enhanced version of the existing cruise control systems that Volvo, BMW, and Mercedes have been playing with for years: lane tracking on highways, adaptive cruise control ... in other words, features used on longer, faster journeys, typically on roads such as motorways that don't carry mixed traffic.)
There's going to be a legal case, of course, and the insurers will be taking a keen interest, because it'll set a precedent and case law is big in the US. Who's at fault: the pedestrian, the supervising human driver behind the steering wheel who didn't stop the car in time, or the software developers? (I'll just quote CNN Tech here: "the car was going approximately 40 mph in a 35 mph zone, according to Tempe Police Detective Lily Duran.")
This case, while tragic, isn't really that interesting. I mean, it's Uber, for Cthulhu's sake (corporate motto: "move fast and break things"). That's going to go down really well in front of a jury. Moreover ... the maximum penalty for vehicular homicide in Arizona is a mere three years in jail, which would be laughable if it weren't so enraging. (Rob a bank and shoot a guard: get the death penalty. Run the guard over while they're off-shift: three years, max.) However, because the culprit in this case is a corporation, the worst outcome it will face is a fine, and the soi-disant "engineers" responsible for the autopilot software face no direct personal consequences: a textbook setup for moral hazard.
But there are ramifications.