Six years ago, Walter Huang was driving his Tesla Model X to work. At a junction between two highways near San Francisco, the car drove head-on into a traffic barrier. He later died from his injuries. Lawyers for his estate sued Tesla, claiming its Autopilot system malfunctioned and was the proximate cause of the crash.
On its website, the law firm representing the estate says the Autopilot system installed in Huang’s Model X was defective and caused Huang’s death. The navigation system of Huang’s Tesla misread the lane lines on the roadway, failed to detect the concrete median, and failed to brake the car, but instead accelerated the car into the median.
“Mrs. Huang lost her husband, and two children lost their father because Tesla is beta testing its Autopilot software on live drivers,” said Mark Fong, a partner at Minami Tamaki LLP. “The Huang family wants to help prevent this tragedy from happening to other drivers using Tesla vehicles or any semi-autonomous vehicles.”
The allegations against Tesla include product liability, defective product design, failure to warn, breach of warranty, intentional and negligent misrepresentation, and false advertising. The trial is set to begin on March 18, 2024.
The lawsuit also names the State of California Department of Transportation as a defendant. Huang’s vehicle impacted a concrete highway median that was missing its crash attenuator guard [basically a big cushion that was supposed to prevent cars from hitting the cement barrier at the junction], which Caltrans failed to replace in a timely fashion after an earlier crash at that same location.
The attorneys for Huang’s estate plan to introduce testimony from Tesla witnesses indicating Tesla never studied how quickly and effectively drivers could take control if Autopilot accidentally steered toward an obstacle. According to Reuters, one witness testified that Tesla waited until 2021 to add a system to monitor how attentive drivers were to the road ahead. That technology is designed to track a driver’s movements and alert them if they fail to focus on the road.
A Damning Email
In preparation for trial, the attorneys uncovered a March 25, 2016 email from Jon McNeill, who was president of Tesla at the time, to Sterling Anderson, who headed the Autopilot program. A copy of the email also went to Elon Musk. McNeill said in the email he tried out the Autopilot system and found it performed perfectly, with the smoothness of a human driver. “I got so comfortable under Autopilot, that I ended up blowing by exits because I was immersed in emails or calls (I know, I know, not a recommended use).”
Both McNeill and Anderson have since left Tesla. McNeill is a board member at General Motors and its self-driving subsidiary, Cruise. Anderson is a co-founder of Aurora, a self-driving technology company.
For its part, Tesla intends to offer a “blame the victim” defense. In court filings, it said Huang failed to stay alert and take over driving. “There is no dispute that, had he been paying attention to the road, he would have had the opportunity to avoid this crash,” the company claims.
What Did Tesla Know And When Did It Know It?
The lawyers intend to suggest at trial that Tesla knew drivers wouldn’t use Autopilot as directed and failed to take appropriate steps to address that issue. Experts in autonomous vehicle law tell Reuters the case could pose the stiffest test yet of Tesla’s insistence that Autopilot is safe, provided drivers do their part.
Matthew Wansley, a Cardozo law school associate professor with experience in the automated vehicle industry, said Tesla’s knowledge of likely driver behavior could prove legally pivotal. “If it was reasonably foreseeable to Tesla that someone would misuse the system, Tesla had an obligation to design the system in a way that prevented foreseeable misuse,” he said.
Richard Cupp, a Pepperdine law school professor, said Tesla might be able to undermine the plaintiffs’ strategy by arguing that Huang misused Autopilot intentionally. But if the suit against Tesla is successful, it could provide a blueprint for others suing because of injuries or deaths in which Autopilot was a factor. Tesla faces at least a dozen such suits now, eight of which involve fatalities.
Despite marketing features called Autopilot and Full Self-Driving, Tesla has yet to achieve Musk’s oft-stated ambition of producing autonomous vehicles that require no human intervention. Tesla says Autopilot can match speed to surrounding traffic and navigate within a highway lane. “Enhanced” Autopilot, which costs $6,000, adds automated lane changes, highway ramp navigation, and self-parking features. The $12,000 Full Self-Driving option adds automated features for city streets, such as stop light recognition.
The Handoff Conundrum
We have been round and round this particular mulberry bush many times here at CleanTechnica. Some of us think Autopilot and FSD are the eighth wonder of the modern world. Others think it is one thing for Tesla to make its owners into lab rats, but quite another to involve other drivers in Musk’s fantasies without their knowledge and informed consent. Those people think any car using a beta version of experimental software on public roads should have bright flashing lights and a sign on the roof warning other drivers — “DANGER! Beta testing in progress!”
The issue that Tesla knows about but refuses to address is a common phenomenon in the world of technology known simply as “the handoff.” That is the gap between when a computer says, “Hey, I am in over my head here (metaphorically speaking, of course) and I need you, human person, to take control of the situation,” and the moment when the human operator actually takes control of the car.
An article in Breaking Defense entitled “Artificial Stupidity: Fumbling The Handoff From AI To Human Control” examines how failures in an automated control system allowed Patriot missile batteries to shoot down two friendly aircraft in 2003. The author says many think the combination of AI and human intelligence makes both better, but in fact the human brain and AI sometimes reinforce each other’s failures. “The solution lies in retraining the humans, and redesigning the artificial intelligences, so neither party fumbles the handoff,” he suggests.
Following that tragic incident, Army Maj. Gen. Michael Vane asked, “How do you establish vigilance at the proper time? (It’s) 23 hours and 59 minutes of boredom, followed by one minute of panic.”
In the world of Musk, when Autopilot or FSD is active, drivers are supposed to be like KITT, the self-driving Pontiac Firebird in the TV series Knight Rider whose sensor constantly scanned the road ahead for signs of danger. That’s the theory. The reality is that when those systems are active, people are often digging in the glove box looking for a tissue, turning around to attend to the needs of a fussy child in the back seat, or reading War and Peace on their Kindle. Focusing on the road ahead is often the last thing on their mind.
A study done by researchers at the University of Iowa for NHTSA in 2017 found that humans are challenged when performing under time pressure and that when automation takes over the easy tasks from an operator, difficult tasks may become even more difficult. The researchers highlighted several potential problems that could plague automated vehicles, specifically when drivers must reclaim control from automation. These include over-reliance, misuse, confusion, reliability problems, skills maintenance, error-inducing designs, and shortfalls in expected benefits.
The lack of situational awareness that occurs when a driver has dropped out of the control loop has been studied for some time in several different contexts. Drivers have been shown to have significantly longer reaction times in responding to a critical event when automation was engaged and they were required to intercede than when they were driving manually. More recent data suggest that drivers may take around 15 seconds to regain control from a high level of automation and up to 40 seconds to completely stabilize the vehicle. [For citations, please see the footnotes in the original report.]
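To put those numbers in perspective, here is a back-of-the-envelope sketch of our own (not part of the study) showing how much roadway a car covers during those handoff windows. The 65 mph speed and the helper function are illustrative assumptions, not figures from the report.

```python
# Illustrative only: distance covered at an assumed constant highway speed
# during the 15-second and 40-second handoff windows cited above.

def distance_traveled(speed_mph: float, seconds: float) -> float:
    """Return the distance in meters covered at a constant speed over the given time."""
    meters_per_second = speed_mph * 1609.344 / 3600  # convert mph to m/s
    return meters_per_second * seconds

if __name__ == "__main__":
    highway_speed = 65  # mph, an assumed typical highway speed
    for label, seconds in [("regain control (~15 s)", 15), ("fully stabilize (~40 s)", 40)]:
        meters = distance_traveled(highway_speed, seconds)
        print(f"At {highway_speed} mph, time to {label}: ~{meters:.0f} m traveled")
```

Under those assumptions, a car travels roughly 440 meters before the driver regains control and well over a kilometer before it is fully stabilized — a long way to go with no one truly in charge.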
Are Tesla’s Expectations Realistic?
Lawyers for the estate of Walter Huang are questioning Tesla’s contention that drivers can make split-second transitions back to driving if Autopilot makes a mistake. The email from McNeill shows how drivers can become complacent while using the system and ignore the road, said Bryant Walker Smith, a University of South Carolina professor with expertise in autonomous-vehicle law. The former Tesla president’s message, he said, “corroborates that Tesla recognizes that irresponsible driving behavior and inattentive driving is even more tempting in its vehicles.”
Plaintiffs’ attorneys also cited public comments by Musk while probing what Tesla knew about driver behavior. After a 2016 fatal crash, Musk told a news conference that drivers struggle more with attentiveness after they have used the system extensively. “Autopilot accidents are far more likely for expert users,” he said. “It is not the neophytes.”
A 2017 Tesla safety analysis, a company document that was introduced into evidence in a previous case, made clear that the Tesla autonomous driving system relies on quick driver reactions. Autopilot might make an “unexpected steering input” at high speed, potentially causing the car to make a dangerous move, according to the document, which was cited by plaintiffs in one of the trials Tesla won. Such an error requires that the driver “is ready to take over control and can quickly apply the brake”.
In depositions, a Tesla employee and an expert witness the company hired were unable to identify any research the automaker conducted before the 2018 accident into drivers’ ability to take over when Autopilot fails. “I’m not aware of any research specifically,” said the employee, who was designated by Tesla as the person most qualified to testify about Autopilot.
Asked if he could name any specialists in human interaction with automated systems whom Tesla consulted while designing Autopilot, Christopher Monk, who Tesla presented as an expert, replied “I cannot.” Monk studies driver distraction and previously worked for the NHTSA.
In an investigation of the crash that killed Walter Huang, the National Transportation Safety Board concluded that “Contributing to the crash was the Tesla vehicle’s ineffective monitoring of driver engagement, which facilitated the driver’s complacency and inattentiveness.”
A Tesla employee has testified in another case that the company considered using cameras to monitor drivers’ attentiveness before Huang’s accident, but didn’t introduce such a system until May 2021.
Musk, in public comments, has long resisted calls for more advanced driver-monitoring systems, reasoning that his cars would soon be fully autonomous and safer than human-piloted vehicles. “The system is improving so much, so fast, that this is going to be a moot point very soon,” he said in 2019 on a podcast with artificial-intelligence researcher Lex Fridman. “I’d be shocked if it’s not by next year, at the latest … that having a human intervene will decrease safety.”
Kelly Funkhouser, associate director of vehicle technology at Consumer Reports, told Reuters that even after Tesla’s most recent over-the-air update, road tests of two Tesla vehicles showed the system still failed in myriad ways to address the safety concerns that sparked the recall. “Autopilot usually does a good job,” she said. “It rarely fails, but it does fail.”
The Takeaway
These stories always get a lot of comments. There are some who will defend Elon Musk no matter what he does. There are others who think he has gone over to the dark side. We think neither of those is true. He puts on his pants one leg at a time the same as everyone else. We do think he sometimes plays fast and loose with established norms.
There are trial attorneys all across America who want to be the first to take down Tesla. So far, they have all been unsuccessful. The Huang case could be the first to hold Tesla at least partly responsible. The trial begins next week and we will keep you updated as it progresses. Of course, no matter who wins there will be appeals, so things will remain in legal limbo a while longer.
The upshot is that no one has cracked autonomy; the driver assistance technologies on the road today amount to little more than Level 2+. Apple’s plans to build a car foundered on the rocks of autonomy recently. Elon is as stubborn as a mule and will keep pursuing his dream for as long as he is able to draw a breath — unless the courts or safety regulators tell him he can’t. Stay tuned.