
NHTSA Promotes New “AV STEP” Program to Monitor Road Robots

The acronym STEP has a healthy dose of irony, because the entire reason American “wheelmen” (bicyclists) devoted themselves in the late 1800s to paved roads in America was that they couldn’t go when and where horses could (e.g. stepping through rain and snow, fields and forests).

The exit interview with the head of NHTSA highlights the agency’s new road robot regulation program, an idea launched last June.

We’re also considering a new program called AV STEP. That would combine the opportunity for manufacturers to deploy automated vehicles with a process that would allow NHTSA significant access to information about redundancy and safety systems…

This program also has an Automated Vehicle Transparency and Engagement for Safe Testing Initiative (AV TEST) component described in an ARTS23 keynote speech.

We’re now looking to use Section 30114 to establish a new program, which we’re calling AV STEP, which I’m really excited to describe for you. Data truly is fundamental to our work, and we are continuously looking for new ways to gather ADS data to inform future oversight and rulemaking. One way we do this is through our import program established under Section 30114.

It started with a short list.

The participating companies are Beep, Cruise, Fiat Chrysler Automobiles, Local Motors, Navya, Nuro, Toyota, Uber, and Waymo. The States are California, Florida, Maryland, Michigan, Ohio, Pennsylvania, Texas, and Utah.

The White House occupant in 2016, corrupted by Tesla, not only tried to silence NHTSA but also to declare robot safety a state concern instead of a federal one.

This led to over 30 states allowing known unsafe robots on their roads, with predictable and unnecessary deaths as a result, making US roads so unsafe it has been “like living through a war”.

AV STEP seems intended to return the country to some semblance of safety science, away from the corruption and fraud spread by Tesla. It’s a “step” in the right direction, although most car brands aren’t yet developing mechanical legs.

ChatGPT Generating Fake History and Leaking Passwords

Details on the ongoing ChatGPT security disaster have been posted by Dan Goodin, one of my favorite and most trusted tech reporters.

“I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” Whiteside wrote in an email. “They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).”

As I presented at last year’s RSA conference in SF, using ChatGPT brings with it a critical integrity vulnerability. If your “history” is artificially generated by the software company, how would you prove it wasn’t/isn’t yours?
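As a rough illustration of that provenance problem (my own sketch, not anything OpenAI offers): a user could keep a locally signed ledger of the prompts they actually send, so any conversation that later “appears” in their history without a valid signature is demonstrably not theirs. The key handling and function names below are hypothetical.

# A minimal sketch, assuming a user-held secret key kept outside the chat
# service: sign every prompt you actually send, so any "history" entry
# without a valid signature is provably not from your own ledger.
import hashlib
import hmac
import json
import time

SECRET = b"user-held-key-never-shared"  # hypothetical local secret

def sign_entry(prompt: str) -> dict:
    # Record the prompt with a timestamp and an HMAC only the key holder can produce.
    entry = {"ts": time.time(), "prompt": prompt}
    msg = json.dumps(entry, sort_keys=True).encode()
    entry["sig"] = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return entry

def verify_entry(entry: dict) -> bool:
    # True only if the entry was signed with the local key; injected history fails.
    body = {k: v for k, v in entry.items() if k != "sig"}
    msg = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(entry.get("sig", ""), expected)

ledger = [sign_entry("clever names for colors in a palette")]
print(all(verify_entry(e) for e in ledger))                       # True for your own entries
print(verify_entry({"ts": 0, "prompt": "not mine", "sig": "x"}))  # False for injected history

The point is not the crypto; it is that the integrity of a conversation record has to be anchored somewhere outside the vendor’s database before it can be proved at all.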

In related news, Italy says ChatGPT violates privacy regulations.

Report to GM Board of Directors on Cruise “Sev-0” Oct 2 Crash Into Pedestrian

Reading the full report, I found an investigation table insightful.

Source: REPORT TO THE BOARDS OF DIRECTORS OF CRUISE LLC, GM CRUISE HOLDINGS LLC, AND GENERAL MOTORS HOLDINGS LLC REGARDING THE OCTOBER 2, 2023 ACCIDENT IN SAN FRANCISCO, January 24, 2024

The crux of the complaints relates to GM not transmitting the “dragging” data, which establishes why and how the robot likely hurt this pedestrian after impact far worse than a human driver would have.

Further along in the report, it is made plain that Cruise did not disclose that its robot did the wrong thing, or that it significantly increased harm to the pedestrian.

Communications team members also continued to give reporters the following bullet point on background: “[t]he AV came to a complete stop immediately after impacting the struck pedestrian,” even though by this time Cruise, including senior members of its communications team, knew that the AV moved forward immediately after striking the pedestrian. Cruise communications team members gave this statement to media reporters after the 6:45 a.m. SLT meeting, some of whom published it well into the afternoon of October 3, including Forbes, CNBC, ABC News Digital, Engadget, Jalopnik, and The Register.

That’s not good. But the worst part is when Cruise staff defined harming pedestrians in an urban environment as an “edge” case they weren’t concerned about.

Vogt reportedly characterized the October 2 Accident as an extremely rare event, which he labeled as an “edge case”.

Cold. Cruel. Immoral.

This is a good reminder that American “death corridors” in cities were no accident. And since 2016 I’ve been saying that robots on roads will kill a lot of pedestrians, the exact opposite of an edge case.

OR Tesla Totaled by 24-Year-Old With Five Years of Driving Violations

It makes sense that a violent repeat offender would choose a Tesla to stomp the accelerator straight into a wall.

In 2019, he pleaded guilty to driving under the influence of intoxicants, hit-and-run and fourth-degree assault. In 2022, he pleaded guilty to driving under the influence of intoxicants and recklessly endangering another person.

At 8 a.m. in Bend, Oregon, he was practically engaged in an act of domestic terrorism.

Police reviewed video footage, which they say showed a white Tesla driving more than 60 mph while heading south in the northbound lanes and sidewalk on NE Third Street in Bend. The Tesla crashed at the entrance to the US Foods Chef’Store on Third Street, rolled “multiple times” and stopped at the retaining wall at U.S. Bank.

Driving the wrong way at high speed and on sidewalks? It’s surprising he didn’t kill anyone, as happened in the other tragic Tesla manslaughter case in Oregon involving a known repeat offender.

If you’re in Oregon and see a Tesla, be ready for a disaster.