FL Tesla Kills One Cyclist, Run Over From Behind

Once again a cyclist has been run over from behind by a Tesla and killed. The crash report bears the usual hallmarks of driverless design defects.

According to the Palm Beach County Sheriff’s Office (PBSO) crash report, at about 1:23 a.m., the bicyclist, identified as Sean Norris, was traveling westbound on Indiantown Road in the center of the outside lane. Around the same time, a Tesla was driving west on Indiantown Road in the outside lane and approaching the rear of the bicycle.

The crash report states that the front of the Tesla struck the rear of the bicycle, ejecting Norris, 38, who landed on top of the car.

To put this in proper context:

1) Elon Musk initially promoted the Tesla driverless concept specifically in response to a cyclist being killed. As early as 2013 he promised he would use AI to prevent cars from crashing into cyclists.

2) Tesla has instead been killing cyclists and motorcyclists by running them over from behind.

3) When an Uber test vehicle ran over a pedestrian walking a bicycle in 2018, the company shut down its entire driverless program. Tesla has continued killing, seemingly with abandon.

Tourists Flock to German Art Museum that Harshly Criticizes Their Ignorance

Alongside digital-detox comes the latest trend against lazy comforts: harsh criticism of ignorance.

Tourists apparently can’t get enough of the quick wit and harsh critiques, as Düsseldorf’s Kunstpalast museum “performances” are completely sold out.

In spite of the rudeness, or perhaps because of it, the twice-monthly “Grumpy Guide” tour has been a surprise hit: every session since its launch in May has sold out, and anyone looking to book a spot will have to wait until next year.

“I never insult visitors directly, based on their personality or their appearance, but I insult them as a group,” said Carl Brandi, 33, the performance artist who conceived of the aggressive guide character Langelinck and performs as him. “My contempt is directed at an inferred ignorance that may not even exist. But I try to make them feel as ignorant as possible.”

Germany is offering a glimpse of a future generation of thinkers unafraid of learning from history. The recent American trend toward “know nothingism” (e.g. MAGA) wouldn’t stand a chance.

Militant Drones of Peter Thiel Rated “Disaster”: Can’t Hit Targets

This devastating report sounds a lot like Palantir.

A drone start-up backed by the US tech billionaire Peter Thiel has conducted two trials with British and German armed forces that were branded a “disaster”, raising questions about its bold public claims and its hopes of winning government contracts.

Attack drones produced by Berlin-based Stark failed to hit a single target during four attempts at two separate exercises this month with the British army in Kenya and the German army near the town of Munster, in Lower Saxony, according to four people familiar with the trials.

If history is any guide, this means the Peter Thiel-backed company with drones that fail every test will become highly valued and maybe even a darling of Wall Street.

The U.S. Army warned in 2012 that Palantir didn’t work at all, and it was right. Yet look at the wild success that followed failure, proving the actual customer doesn’t matter, experts be damned.

Thiel’s work with the extremist ACTS 17 preacher network suggests disinformation is an intentional success strategy: cultivating high-level adherents and billionaire investors regardless of product performance or reality.

Thiel companies succeed despite devastating product failures by cultivating a select set of high-powered believers.

The real question is who in Britain and Germany is being targeted by Thiel: which military leaders are in scope to be taken in by this new ACTS 17 (political extremist) operation?

Scientists Prove Social Media Lowers Intelligence, Especially AI

A new study goes a long way toward proving that the wrong type of garbage in means garbage out. Social media is apparently the kind of garbage that lowers intelligence the most.

Wang and his colleagues wanted to see the effects on large language models (LLMs) of training on low-quality data, defined as short, popular social-media posts or content that is superficial or sensationalist. They looked at how these data affected model reasoning, retrieval of information from long inputs, the ethics of responses, and model personality traits.

The team reports that models given low-quality data skip steps in their reasoning process, or don’t use reasoning at all, resulting in the model providing incorrect information about a topic or, when presented with a multiple-choice question, picking the wrong answer. In data sets mixing junk and high-quality data, the negative effect on reasoning increased as the proportion of junk data increased.
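The study’s notion of junk data (short, popular, sensationalist posts) and its varying junk-to-clean proportions can be sketched as a toy filter. This is a minimal illustration of the general idea only: the keyword-and-length heuristic, the marker list, and the `is_junk`/`build_mix` names are my own assumptions, not the authors’ actual classifier or pipeline.

```python
import random

# Hypothetical markers of sensationalist content (illustrative only).
CLICKBAIT_MARKERS = ("you won't believe", "shocking", "wow", "!!!")

def is_junk(post: str, max_words: int = 12) -> bool:
    """Toy heuristic: flag a post as junk if it is both short and
    contains a sensationalist marker, or contains a marker at all."""
    text = post.lower()
    sensational = any(m in text for m in CLICKBAIT_MARKERS)
    short = len(text.split()) <= max_words
    return sensational or (short and sensational)

def build_mix(junk: list, clean: list, junk_ratio: float,
              n: int, seed: int = 0) -> list:
    """Build a training mix of size n with the given junk proportion,
    mirroring the study's mixed-quality data sets."""
    rng = random.Random(seed)
    k = round(n * junk_ratio)
    return rng.choices(junk, k=k) + rng.choices(clean, k=n - k)
```

Sweeping `junk_ratio` from 0 to 1 with `build_mix` and retraining at each step would reproduce the shape of the experiment, where reasoning degrades as the junk proportion rises.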

Most notably, the report describes this as an integrity breach that can’t be fixed. The decline is deemed irreversible because additional instruction tuning or retraining with high-quality data doesn’t restore lost performance: degraded models can’t close a nearly 20% gap compared with versions that avoided the garbage data.

Are there better methods to reverse the decline, or restore intelligence? A new market for integrity controls has officially been born.