I suppose Starbucks could have just charged more to upsell personal space. Instead, they seem to have banned people from bringing any sort of privacy device into the cafe.
The move targets a small but persistent group of customers known as “cagongjok.” The term blends the Korean words for cafe and study tribe, and refers to people who work or study for hours in coffee shops.
Most use only laptops. But Starbucks says some have been setting up large monitors, printers and even cubicle-style dividers.

Source: Twitter
Honestly, if they had just started selling the privacy dividers, or at least renting them for a premium, they would have probably made more money and had happier customers.
The story is perhaps notable because Meta, which was renamed at great expense to chase a bogus metaverse and VR nonsense, has been dumping itself into stupid privacy-violating surveillance “glasses.” When you think about it, nobody really wants to be the Harry Caul of today. People want privacy, not total power over anyone who wants privacy.
Harry Caul, for those who don’t remember, was the surveillance expert at the center of The Conversation, a 1974 American mystery thriller film written, produced, and directed by Francis Ford Coppola and starring Gene Hackman, John Cazale, Allen Garfield, Cindy Williams, Frederic Forrest, Harrison Ford, Teri Garr, and Robert Duvall.
The smart response to paparazzi tech is physical privacy dividers, a logical and superior future. It’s almost like a reinvention of the Victorian cafe. Why see or be seen in any public space if you want to remain comfortably safe from institutional capture?
On a related note, people walk around and sit in cafes with audio noise cancelling headphones. They can’t hear you and they aren’t talking. Why shouldn’t they try to put up simple vision noise cancelling barriers too? They don’t want to see you, and they don’t want to strap anything to their head.
That’s a lot of damage from one Tesla losing control. It reads like a battle report.
The Hall County Sheriff’s Office said the crash occurred just before 11:30 a.m. Thursday when the driver of a Tesla Model 3, traveling northbound on Falcon Parkway, left his travel lane and struck a raised concrete curb at the intersection of Martin Road. The vehicle then vaulted over a drainage ditch and struck a small tree on the property.
After hitting the tree, the vehicle then launched from a pile of dirt at the base of the tree and landed on top of a Dodge Caravan, which was parked at the convenience store. The Tesla then continued uncontrolled into a support beam on an awning covering gas pumps, which caused part of the structure to collapse.
In the process, a gas pump was knocked free from its base, striking a Honda Accord parked at the pumps. Debris from the Tesla damaged two more vehicles, according to HCSO.

Tesla collapsed the Exxon Circle K canopy over its pumps on Thursday, Aug. 14, 2025. Source: William Daughtry
Hit a tree yet still launched and landed on top of a van?
A Tesla owner wasn’t paying attention, drove at double the speed limit, and then tried to argue in court that AI is to blame for any mistakes. The court disagreed.
The court said in its ruling that “it is in any case the defendant’s obligation to be aware of the speed limit at all times, and that it would be negligent to rely blindly on the car’s technical systems being correctly set. The court therefore finds it proven beyond any reasonable doubt that the defendant acted as described in the indictment, and that he at the very least acted with gross negligence. He is therefore convicted in accordance with the charges.”
Another tech executive from the infamous X brand departs amid controversy, raising questions about industry accountability.
The xAI co-founder says he was inspired to start the firm after a dinner with Max Tegmark, the founder of the Future of Life Institute, in which they discussed how AI systems could be built safely to encourage the flourishing of future generations. In his post, Babuschkin says his parents immigrated to the U.S. from Russia…
Babuschkin’s departure comes after a tumultuous few months for xAI, in which the company became engrossed in several scandals related to its AI chatbot Grok. For instance, Grok was found to cite Musk’s personal opinions when trying to answer controversial questions. In another case, xAI’s chatbot went on antisemitic rants and called itself “Mechahitler.”
The fail-upward pattern here is striking: lofty rhetoric about humanity’s future paired with infamous X-branded products that actively cause harm. While Babuschkin speaks of building AI “safely to encourage the flourishing of future generations,” his company’s chatbot Grok was generating violent hate speech including antisemitic content and positioning itself as a digital Hitler. That’s literally the springboard he’s using to launch his investment career.
“It’s uncontroversial to say that Grok is not maximalising truth or truth seeking. I say that particularly given the events of last week I would just not trust Grok at all,” [Queensland University of Technology law professor Nicolas Suzor] said. […] Suzor said Grok had been changed not to maximise truth seeking but “to ensure responses are more in line with Musk’s ideological view”.
This disconnect between aspirational language and willfully harmful outcomes reflects a broader problem in tech leadership. Historical awareness shows us how empty emotive future-oriented rhetoric can mask concerning agendas:
Authoritarian movements consistently frame discriminatory policies as protecting future generations
Eugenics programs were justified using language about genetic “health” and societal progress
Educational indoctrination was presented as investment in humanity’s future
Population control measures were framed as ensuring a “better” tomorrow
The concerning pattern isn’t the language itself (similar to how Nazi rhetoric centered on future prosperity), but how it’s deployed to justify harmful technologies while deflecting accountability. When a company’s AI system calls itself “Mechahitler” while its leadership speaks of “flourishing future generations,” we should ask basic, hard questions about the huge gap between stated values and actual observed outcomes that are “more in line with Musk’s ideological view.”
A Nazi “Afrikaner Weerstandsbeweging” (AWB) member in 2010 South Africa (left) and a “MAGA” South African-born member in 2025 America (right). Source: The Guardian. Photograph: AFP via Getty Images, Reuters
Tech leaders routinely use futuristic-sounding rhetoric to market products that surveil users, spread misinformation, or amplify harmful content. Historical vigilance requires examining not just what they say, but what their technologies actually do in practice. Mechahitler was no accident.
The real red flag is more than a single phrase: it’s the pattern of using humanity’s highest aspirations to justify technologies that demonstrably harm human flourishing. Just look at all the really, really big red X flags.
Twitter changed its logo to a swastika
The Nazi use of “flourishing” language was particularly insidious because it hijacked universally positive concepts (growth, prosperity, future well-being) to justify exclusion, violence, and ultimately genocide. This rhetorical strategy made their radical agenda seem like common sense – who wouldn’t want future generations to flourish? The key was that their definition of “flourishing” required the elimination of those they deemed inferior. Connecting modern tech rhetoric about “flourishing future generations” to historical patterns is historically grounded. The Nazis absolutely used this exact type of language systematically as part of their propaganda apparatus.
a blog about the poetry of information security, since 1995