US Court Green Lights Execution by Tesla: Probation for Robotic Manslaughter

In California, a Tesla running Autopilot ignored a red light and killed two people, and its owner received only two years of probation. The penalty for such an incident is remarkably lenient: enable Autopilot, direct it at someone, and the worst apparent consequence is a vehicular manslaughter charge, which reads like a green light for robot attacks.

A 28-year-old man who was behind the wheel of a Tesla Model S on autopilot in 2019 when it ran a red light in Gardena and slammed into a car, killing two people, authorities said, has been sentenced to two years of probation after pleading no contest to two counts of vehicular manslaughter.

This outcome is disheartening because it suggests misuse of Autopilot is now effectively shielded from serious prosecution, even when it is exploited for deliberate, motivated assassinations (including mass casualties) meant to advance social or political causes.

Although most persons would not admit the would-be assassin’s right to judge, condemn, and execute, they might concede the right of an individual to judge, condemn, and order execution, as they did to Harry Truman in his use of the hydrogen bomb.

The widespread release of faulty software into Tesla vehicles, which can be centrally controlled and coordinated, raises a significant concern: fleets of these cars can operate like loitering cluster munitions (call them Autopilot attacks). Without any need for sophisticated intercontinental ballistic missiles (ICBMs), an actor such as a white nationalist group or North Korea could compromise data integrity and send swarms of Teslas to ignore traffic lights or veer onto sidewalks and into buildings, causing harm and chaos.

Furthermore, anyone who manipulates Tesla’s Autopilot could escape severe charges by downplaying intentional acts as simple negligence, rather than being held accountable for deliberate actions that amount to capital murder. This raises serious concerns about the potential misuse and consequences of such technology.

Yoshihiro Umeda

In April 2018, Yoshihiro Umeda was killed in a Model X Autopilot incident. The Tesla’s owner claimed to have been asleep when the vehicle suddenly accelerated and crashed into a group of people standing in front of a large white van, a scene that bore striking resemblance to a terror attack.

Following this tragedy, Umeda’s family filed a wrongful death lawsuit against Tesla, holding Autopilot responsible and claiming that defective vehicles caused his death. In 2022, however, the lawsuit was coldly dismissed by a federal court in California and pushed to Japan, raising suspicions about the reasons behind the decision.

For years, Tesla managed to prolong the proceedings, putting immense pressure on the victim’s family with its arguments. The company kept selling cars in Japan, where its vehicles were reportedly still injuring and even killing people, while arguing that it should be exempt from accountability under US or California law because “Japan is not a party to the Hague Convention.”

“Tesla further argues, persuasively, that access to third party evidence in Japan for a proceeding in the U.S. would be at best extraordinarily cumbersome, time-consuming, expensive and with uncertain results. Dkt. 15-1 (Yamashita Decl.) ¶ 15 (“Japan is not a party to the Hague Convention …”)” Umeda v. Tesla Inc., Case No. 20-cv-02926-SVK, 10 (N.D. Cal. Sep. 23, 2020)

Let’s clarify some misleading information.

First, contrary to what a Tesla lawyer claimed and a possibly confused US judge accepted, Japan is indeed a party to the Hague Convention. The Convention on the Service Abroad of Judicial and Extrajudicial Documents in Civil or Commercial Matters (the Hague Service Convention) was adopted on November 15, 1965, at the Hague Conference on Private International Law. Japan signed the convention on March 12, 1970, ratified it on May 28, 1970, and it entered into force there on July 27, 1970.

American courts have for many years used this Hague Service Convention in cases involving Japanese auto manufacturers. There is no excuse for the misinformation: Japan is, in fact, a party to the Hague Convention.

In a 2008 Hague Conference on Private International Law questionnaire, Japan assessed the general operation of the Service Convention, rating it as follows:

[ ] Excellent
[X] Good
[ ] Satisfactory
[ ] Unsatisfactory

In other words, Japan not only operates under the Service Convention, it rated that operation “Good” at the time.

Second, the Hague Convention is not the sole authority governing court discovery. The European Union (EU) has its own superseding Council Regulation (EC) No. 1393/2007 on the matter, and even before the Hague Convention, Japan handled requests to obtain evidence under its 1963 Consular Convention with the United States.

That 1963 Consular Convention contains an Article 17(1)(e), as mentioned in the link provided, under which a US attorney may only take information a Japanese witness offers voluntarily and cannot compel testimony. In that light, Tesla’s objection reads more like a rejection of victim rights than anything else.

Third, service through the Central Authority under the Hague Service Convention is no easy and fast picnic either. It takes months simply to prepare the documents, given the usual multilingual bureaucracy of an “official language” such as Japanese (Article 5 of the Service Convention).

Tesla deliberately employed stalling and delay as a cruel defense strategy to hinder the litigation. Its repeated efforts to avoid US courts, and English-language proceedings in particular, burdened grieving Japanese families with years of unnecessary hardship, making it very clear who intentionally turned the judicial process into something “extraordinarily cumbersome, time-consuming, expensive, and uncertain.”

Taken as a whole, the situation makes evident that Tesla’s Autopilot was unsafe, defective, and an immediate threat to pedestrians back in 2018, yet the company managed to evade accountability in the court system. Had US courts held Tesla’s robotics accountable for their defects, dozens if not hundreds of deaths might have been prevented.

Elaine Herzberg

As many of us are aware, in March 2019, Arizona prosecutors concluded that Uber was not criminally responsible for their “self-drive” vehicles. However, the “back-up” driver involved in the incident was charged with negligent homicide.

This echoed the Arizona Governor’s Executive Order 2018-04, which invited Uber to operate in the state.

If a failure of the automated driving system occurs that renders that system unable to perform the entire dynamic driving task relevant to its intended operational design domain, the fully autonomous vehicle will achieve a minimal risk condition;

The fully autonomous vehicle is capable of complying with all applicable traffic and motor vehicle safety laws and regulations of the State of Arizona, and the person testing or operating the fully autonomous vehicle may be issued a traffic citation or other applicable penalty in the event the vehicle fails to comply with traffic and/or motor vehicle laws

The charges against the human operator of Uber’s autonomous vehicle rested on the vehicle’s failure to achieve a “minimal risk condition” as specified in the Executive Order (EO). The glaring problem with this liability theory is that Arizona had already defined a “minimal risk condition” as something a robot achieves while operating without any human at all.

MINIMAL RISK CONDITION. A low-risk operating mode in which a fully autonomous vehicle operating without a human person achieves a reasonably safe state, such as bringing the vehicle to a complete stop, upon experiencing a failure of the vehicle’s automated driving system that renders the vehicle unable to perform the entire dynamic driving task.

This definition contains two major issues. First, describing a robot’s “minimal risk” as a “reasonably safe state” is circular: it says safety should be reasoned to be safe, which specifies nothing. The state’s related definition of a “fully autonomous” system as one “designed to function as a level four or five” compounds the danger, because instead of requiring the prevention of harms such as injury or death, it emphasizes a system that can predict when it is unable to predict, which is counterintuitive and confusing.

Second, it allows a temporary, low-skilled human operator to be held liable for a “dynamic” robot even though the definition contemplates full autonomy with no human operator present. That is a concerning accountability loophole for potentially harmful robots.

In essence, the definition is vague about safety and risks holding the wrong parties accountable for the actions of autonomous systems, leaving room for misuse and harm.

The situation went from bad to worse when the Arizona district attorney had to step away from the investigation because of prior involvement in actively promoting Uber’s “self-drive” technology to win public trust, a clear conflict of interest. The district attorney’s campaign had suggested that Uber’s autonomous vehicles reduced risk because they could not drink and drive, implying they were safer than human drivers under the influence of alcohol.

Despite that argument, liability for the incident fell on Uber’s “back-up” driver, whose distraction arguably impaired her perception and judgment more than drugs or alcohol would have. This raises serious questions about the fairness and accuracy of how responsibility was assigned.

…she was looking toward the bottom of the SUV’s center console, where she had placed her cell phone at the start of the trip.

In addition, it was revealed that Uber had deliberately disabled the vehicle’s factory driver-assistance and emergency braking technology, which was specifically designed to protect pedestrians. This raises significant concerns about manufacturers shifting responsibility for safety onto hamstrung operators when the well-being of pedestrians and other road users is at stake.

At 1.3 seconds before impact, the self-driving system determined emergency braking was needed. But Uber said, according to the NTSB, that automatic emergency braking maneuvers in the Volvo XC90 were disabled while the car was under computer control in order to “reduce the potential for erratic vehicle behavior.”

After disabling the automatic braking system, Uber also neglected to implement any mechanism to alert its “back-up” driver that the car had detected the need for emergency braking. In simple terms, Uber’s system failed to recognize a human in its path, alternately misclassifying her as a vehicle and as a bicycle, and it did nothing to prevent the harm even though she was clearly visible at a considerable distance. The blame was then shifted to the driver.
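To make the design flaw concrete, here is a minimal sketch in Python of the decision logic the NTSB findings describe. All names and structure are hypothetical illustrations, since Uber’s actual software is not public; the point is that a detection pipeline which suppresses braking under computer control, with no alert path to the operator, guarantees exactly this outcome.

    # Hypothetical sketch of the failure mode described by the NTSB findings.
    # None of these names come from Uber's actual software.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str               # e.g. "vehicle", "bicycle", "unknown"
        seconds_to_impact: float

    def respond(d: Detection, computer_control: bool) -> list[str]:
        """Return the actions taken for a detected object in the vehicle's path."""
        actions = []
        if d.seconds_to_impact <= 1.3:   # system decides emergency braking is needed
            if computer_control:
                # Factory automatic emergency braking was disabled under computer
                # control "to reduce the potential for erratic vehicle behavior",
                # and no alert was sent to the back-up driver either.
                pass                     # no brake, no warning
            else:
                actions.append("automatic emergency braking")
        return actions

    # At 1.3 seconds before impact the system knew braking was needed, yet:
    print(respond(Detection("bicycle", 1.3), computer_control=True))   # [] (nothing happens)
    print(respond(Detection("bicycle", 1.3), computer_control=False))  # ['automatic emergency braking']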

To its credit, Uber eventually shut down its self-driving operation. Tesla, in contrast, downplayed its involvement in a pedestrian death around the same time in 2018 and continued charging customers a substantial premium to run unsafe autonomous vehicles on public roads, pushing the boundaries of what it could get away with on already proven hazardous technology.

Gilberto Alcazar Lopez and Maria Guadalupe Nieves-Lopez

The two significant incidents above provide important backdrop for a tragic event in 2019, in which a Tesla, as has been observed in several cases, disregarded repeated speed warnings and ran a red light at more than 70 mph. This predictably caused a collision with another car, killing its two occupants.

Los Angeles police Officer Alvin Lee testified that numerous traffic signs warn drivers to slow down as they approach the end of the freeway. […] The case was believed to be the first felony prosecution filed in the U.S. against a driver using widely available partial-autopilot technology.

According to NHTSA, its investigation teams have examined 26 crashes involving Tesla’s Autopilot since 2016, resulting in at least 11 deaths. Data collected from local crash reports on tesladeaths.com, however, shows a far higher tally of 432 Tesla deaths overall, with Autopilot at fault in 38 of them, more than three times NHTSA’s count.
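A quick check of that ratio, using only the figures cited above:

    # Figures as quoted above (NHTSA investigations vs. tesladeaths.com tallies).
    nhtsa_autopilot_deaths = 11   # NHTSA-investigated Autopilot deaths since 2016
    autopilot_at_fault = 38       # tesladeaths.com: deaths with Autopilot at fault
    total_tesla_deaths = 432      # tesladeaths.com: all reported Tesla deaths

    print(autopilot_at_fault / nhtsa_autopilot_deaths)   # ~3.45, i.e. more than three times
    print(total_tesla_deaths / nhtsa_autopilot_deaths)   # ~39x when counting all reported deaths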

Like the family of the 2018 Tesla victim in Japan, the families of the 2019 victims in California have filed lawsuits against Tesla for selling defective vehicles. A joint trial is scheduled for mid-2023, and this time Tesla cannot prevent it from being conducted in English and inside the United States.

One can’t help but wonder if this trial could have been avoided if Yoshihiro Umeda’s case had been given proper attention.

A lenient two-year probation for the intentionally violent use of a potentially dangerous autonomous system raises serious concerns about preventable deaths, especially if such systems are taken up by socially or politically motivated actors on public roads. The implications are significant: global figures like Kim Jong Un, Trump, or Putin may be watching a widespread new destructive threat vector with interest.
