HK Tesla Blows Intersection and Flips Taxi

From the Hong Kong news comes another story of a Tesla failing to navigate an intersection, crashing into and flipping a taxi, and injuring five people.

At about 9 pm yesterday (Apr 27)… at the junction of Yau Tong Road and Lei Yue Mun Road, causing the taxi to be overturned. …the Tesla turned right onto Yau Tong Road and accidentally blocked and collided with the taxi.

Source: Google Maps

Running red lights is an ongoing problem for Tesla.

Related: MI Tesla Blows Red Light and Flips SUV

Self-Driving Cars Could “Create Hell on Earth”

This article originally appeared in 2016.

by Paul Wagenseil, senior editor, security, privacy and gaming.

LAS VEGAS — Autonomous driving systems such as Tesla’s Autopilot or Google’s self-driving cars are far from being safe, security researcher Davi Ottenheimer said last week at the BSides Las Vegas hacker conference here, adding that letting such machines make life-or-death decisions might “create hell on earth.”

The problem, Ottenheimer said, is that we put too much faith in the infallibility of computers and algorithms, especially when “machine learning” and massive amounts of data, just waiting to be processed, seem to promise a boundless future of rapid innovation. Instead, he said, it’s time to slow down, admit that machines will take a long time to catch up, and let humans take the wheel.

“We believe that machines will be better than us,” Ottenheimer said. “But they repeat the same mistakes we do, only faster.”

Ottenheimer said Tesla bears some responsibility for the death of Joshua Brown, the Ohio man who was killed May 7 when his Tesla Model S’s auto-driving system apparently failed to see a tractor-trailer crossing the Florida highway on which Brown was traveling. But Ottenheimer added that Brown was just as culpable for letting the car take over under the assumption that it would be a better driver than he was.

“Computers are increasingly bearing the burden of making our decisions,” Ottenheimer said. “But Brown’s death was a tragedy that didn’t have to happen.”

In the six months before he died, Brown had posted nearly two dozen video clips to YouTube showing the performance of Autopilot in various conditions, and he seemed to understand Autopilot and its limitations well.

Yet Ottenheimer played Brown’s last clip, posted April 5, in which Brown’s Tesla is nearly forced off a highway near Cleveland by a work truck drifting into his lane. It looks as if Brown wasn’t being as attentive as he should have been, which Brown himself acknowledged in the text accompanying the clip.

“I actually wasn’t watching that direction and Tessy (the name of my car) was on duty with autopilot engaged,” Brown wrote. “I became aware of the danger when Tessy alerted me with the ‘immediately take over’ warning chime and the car swerving to the right to avoid the side collision.”

The big white work truck was forward and slightly to the left of Brown’s Tesla, at about 11 o’clock in military parlance. An attentive driver would have noticed the truck merging into an adjoining lane and anticipated that he or she would soon be in the truck’s blind spot. Instead, Ottenheimer contends, Brown’s reliance on Autopilot created a double blind spot, in which neither Brown nor the work truck’s driver was aware of the other.

“Brown shifted the burden of safety to the car,” Ottenheimer said. “He should have reacted much earlier, but he waited 20 seconds for the car to take over.”

Tesla’s Autopilot saved Brown then, but it failed a month later, when Brown had his fatal encounter with the tractor-trailer in Florida. Tesla argued in the wake of Brown’s death that perhaps the car’s camera didn’t see the white trailer against a bright sky.

Ottenheimer had a slightly different theory, based on the truck driver’s recollection that Brown’s car changed lanes at the last moment to aim right for the center of the trailer. (Had the Tesla swerved left, it would have gone into a left-hand turn lane and perhaps been safer.)

“The Tesla may have thought there was separation between the front and rear wheels of the trailer,” Ottenheimer said. “It may have thought there was an open lane.”

Such mistakes are to be expected from machine algorithms, Ottenheimer said. They’re simply not as smart as people.

As examples, he cited Google searches for “professional hair” that returned images of white people, while “unprofessional hair” returned images of mostly black people; facial-recognition programs that classified pro wrestlers in the process of losing as happy; and a robot security guard that ran over a toddler at a Silicon Valley shopping mall in June because it couldn’t predict the whereabouts of small children. (The toddler hurt his foot but was otherwise OK.)

“Was Joshua Brown a victim of innovation?” Ottenheimer asked. “Yes, and so was that toddler.”

Ottenheimer said self-driving cars can barely handle freeway driving, the easiest kind of driving. They won’t be able to deal with city traffic, or navigate a crowded parking lot with vehicles and pedestrians coming from all directions.

To make self-driving cars safer, he suggested that Silicon Valley innovators step back, realize that they’re practicing what Ottenheimer called “authoritomation” — the process of transferring authority to automation — and consider what that means.

“We’re getting out of engineering, and moving into philosophical issues of responsibility and authority,” Ottenheimer said. “Don’t expect an easy help button.”

Facebook Co-Founder Says Tesla is the New Enron

Over a year ago I wrote about how Tesla has become an Enron and gets away with it. It was serious, although I couldn’t help including the sardonic Enron logo.

I’d like to think this at least made a ripple in the news, if not more.

And in 2020 I wrote a little poem about Silicon Valley, which I’m certain nobody paid any attention to.

Woke? That’s accountability.

Hate woke? That’s Enron-level hatred for accountants.

To Jalopnik’s credit, three days ago they came up with something much catchier.

Go Anti-woke? Go broke.

Today I noticed a very small Threads post already making huge waves and headlines, while claiming to “sound crazy”.

Source: Facebook

That is the much-followed account of Dustin Moskovitz, co-founder of Facebook and Asana, saying out loud what I’ve been saying on this blog for years.

Welcome, everyone, to the slow countdown to Tesla’s financial bankruptcy (given that its moral bankruptcy arrived around 2014, which I warned by 2016 was leading to a modern-day Titanic).

Related: Tesla’s bogus safety gimmicks have been killing a lot of people and the fraudulent brand should be banned from public roads.

Source: New Yorker, April 16, 1966

NHTSA Increases Official Report to 29 (out of 44) Deaths From Tesla Autopilot

Officially there have been 29 avoidable deaths, according to a new U.S. government report on Tesla’s fatally flawed Autopilot design.

In total, NHTSA investigated 956 crashes, starting in January 2018 and extending all the way until August 2023. Of those crashes, some of which involved other vehicles striking the Tesla vehicle, 29 people died. There were also 211 crashes in which “the frontal plane of the Tesla struck a vehicle or obstacle in its path.” These crashes, which were often the most severe, resulted in 14 deaths [out of the 29] and 49 injuries.

Why so many? NHTSA cites serious design deficiencies, pushed by Tesla management, that fall well short of industry safety standards.

“A comparison of Tesla’s design choices to those of L2 peers identified Tesla as an industry outlier in its approach to L2 technology by mismatching a weak driver engagement system with Autopilot’s permissive operating capabilities,” the agency said.

Even the brand name “Autopilot” is misleading, NHTSA said, conjuring up the idea that drivers are not in control. While other companies use some version of “assist,” “sense,” or “team,” Tesla’s products lure drivers into thinking they are more capable than they are. California’s attorney general and the state’s Department of Motor Vehicles are both investigating Tesla for misleading branding and marketing.

Not just an outlier, Tesla allegedly is a liar too.

Top management of the infamously toxic brand is suspected of covering up even more deaths to fool Wall Street.

NHTSA acknowledges that its probe may be incomplete based on “gaps” in Tesla’s telemetry data. That could mean there are many more crashes involving Autopilot and FSD than what NHTSA was able to find.

How big a coverup? Over 80% of crash events allegedly are being hidden from authorities by design.

…[due to management choices] Tesla only receives data from certain types of crashes, with the NHTSA claiming the company collects data on around 18 percent of crashes reported to police.

Tesladeaths.com, for example, uses local journalism to document 44 deaths from Autopilot. That’s a huge delta from the 29 reported by Tesla to the NHTSA.
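For perspective, here is a minimal sketch of the arithmetic behind the two claims above, taking at face value the roughly 18 percent collection figure cited by NHTSA and the 44 versus 29 death counts; the figures come from the sources quoted here, while the calculation itself is only illustrative.

```python
# Rough arithmetic behind the underreporting claims above.
# Assumption: NHTSA's ~18% data-collection figure and the
# 44 (Tesladeaths.com) vs. 29 (official report) death counts
# are taken at face value from the sources quoted above.

collected_share = 0.18              # share of police-reported crashes Tesla collects data on
hidden_share = 1 - collected_share  # share with no Tesla telemetry at all
print(f"Crash events with no Tesla data: {hidden_share:.0%}")  # ~82%, i.e. "over 80%"

tracked_deaths = 44   # Autopilot deaths documented through local journalism
official_deaths = 29  # deaths counted in the NHTSA report
delta = tracked_deaths - official_deaths
print(f"Deaths missing from the official count: {delta}")  # 15
print(f"Official count as a share of tracked deaths: {official_deaths / tracked_deaths:.0%}")  # ~66%
```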

We may finally be approaching a point of no return for the brand, where it is rightfully banned for covering up its intentionally terrible design that poses a dangerous threat.

Lawn darts killed fewer.

Ford Pinto killed fewer.

Enron lied less.

The new report highlights that Tesla remains unique: its drivers had more than enough time to prevent a crash, yet mistakenly relied on Autopilot to act for them.

We’ve known this since 2016, and I’ve explained it many times before. Elon Musk has, since the beginning, rashly promoted Autopilot to the stock market as something it is absolutely incapable of delivering, which becomes especially dangerous because he directly contradicts his own product manual.

This is a toxic form of propaganda; gaslighting 101.

If the manual said drivers have to pay attention, and Musk didn’t say anything, it would be safer.

If Musk said drivers don’t have to pay attention, and the manual didn’t say anything, it would be safer.

The problem is the contradictory combination, destroying truth, which is the very definition of gaslighting.

Because Musk invariably and intentionally contradicts safety statements in the product manual, he willfully creates the most unsafe combination possible.

His believers thus operate in the anti-science realm of beliefs, putting society at risk, just like with any cult following anti-patterns doused in snake oil.

This is not about whether technology can improve safety of humans. That’s not even worth discussing. Vaccines are effective and safe. Don’t let that grain of truth get tainted in the fraud campaign.

This is about a negligently designed technology system that discourages drivers from paying attention, creating a huge rise in accidents that easily could have been avoided if a CEO had not gloated about completely fraudulent concepts of safety.

Is there any reason to continue allowing a proven death trap design that sends dozens of people to early graves? How many communities must mourn these avoidable deaths before roads and homes are free from Tesla?

Related: After proving itself more antisemitic than even Ford, Tesla has been trying to copy Edison’s worst ideas. Expect an “88 hate station” near you.