Category Archives: Food

Used Coffee Grounds Mixed Into Concrete Significantly Increases Strength

Grounds for celebration? Just in case you weren’t already using old coffee grounds as compost or pest management for your garden…

…the team experimented with pyrolyzing the materials at 350 and 500 degrees C, then substituting them for sand at 5, 10, 15 and 20 percent (by volume) in standard concrete mixtures.

The team found that 350 degrees was the ideal temperature, producing a “29.3 percent enhancement in the compressive strength of the composite concrete blended with coffee biochar,” per the team’s study, published in the September issue of the Journal of Cleaner Production. “In addition to reducing emissions and making a stronger concrete, we’re reducing the impact of continuous mining of natural resources like sand,” Dr. Roychand said.
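The replacement arithmetic is simple enough to sketch. Here is a toy Python illustration of the volume bookkeeping; the 30% sand share of the mix is my own assumption for a nominal batch, not a figure from the study:

```python
# Toy sketch (my own illustration, not from the study): volume bookkeeping
# for swapping a fraction of the sand in a nominal 1 m^3 concrete batch
# for pyrolyzed coffee-ground biochar.
SAND_FRACTION = 0.30  # assumed sand share of total mix volume (illustrative)

def biochar_split(batch_m3, replacement_pct):
    """Return (biochar_m3, remaining_sand_m3) for a sand-replacement percentage."""
    sand = batch_m3 * SAND_FRACTION
    biochar = sand * replacement_pct / 100.0
    return biochar, sand - biochar

for pct in (5, 10, 15, 20):  # the replacement levels tested in the study
    biochar, sand = biochar_split(1.0, pct)
    print(f"{pct:>2}% replacement: {biochar:.3f} m^3 biochar, {sand:.3f} m^3 sand")
```

At the 15% level, for example, a cubic meter of this hypothetical mix would take roughly 0.045 m³ of biochar in place of sand, which gives a feel for how quickly a city’s espresso grounds could be absorbed by construction.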

Suddenly cities full of espresso machines have an entirely new construction supply chain model. The scientists claim they were trying to solve for waste and not just hoping to justify drinking 10 cups of coffee per day.

…inspiration for our work was to find an innovative way of using the large amounts of coffee waste…

And so they conclude that 100% of the 75,000 tonnes of waste Australia’s coffee drinkers produce can become a source for structural concrete. Worldwide there are allegedly upwards of 6 million tonnes available. That still leaves plenty of room for innovations like powering public transit or making milk and mushrooms.

ChatGPT is a Fraud, Court Quickly Finds, and Sanctions Lawyer

For months now I have been showing lawyers how ChatGPT lies, and they beg and plead for me to write about it publicly.

“How do people not know more about this problem?” they ask me. Indeed, how is ChatGPT failure not front-page news, given it is the Hindenburg of machine learning?

And then I ask myself how lawyers do not inherently distrust ChatGPT, seeing it as explosive garbage that can ruin their work, given the law has a legendary distrust of humans and a reputation for caring about tiny details.

And then I ask myself why I am the one who has to report publicly on ChatGPT’s massive integrity breaches. How could ChatGPT be built without meaningful safety protections? (Don’t answer that; it has to do with greedy fire-ready-aim models curated by a privileged few at Stanford, a rush to profit from stepping on everybody to summit an artificial hill created for an evil new Pharaoh of technology centralization.)

All kinds of privacy breaches these days result in journalists banging away on keyboards. Everyone writes about them all the time (though only after regulation forced their hand, starting with the breach disclosure laws of 2003).

However, huge integrity breaches seem comparatively ignored even when the harms may be greater.

In fact, when I blogged about the catastrophic ChatGPT outage practically every reporter I spoke with said “I don’t get it”.

Get what?

Are integrity breaches today somehow not as muckrake-worthy as back in The Jungle days?

The lack of journalistic attention to integrity breaches has resulted in an absurd amount of traffic coming to my blog, instead of people reading far better-written stuff at the NYT (public safety paywall) or Wired.

I don’t want or need the traffic/attention here, yet I also don’t want people to be so ignorant of the immediate dangers they never see them before it’s too late. See something, say…

And so here we are again, dear reader.

A lawyer has become a sad casualty of the fraud known as OpenAI’s ChatGPT. An unwitting, unintelligent lawyer lazily and stupidly trusted this ChatGPT product, a huge bullshit generator full of bald-faced lies, to do their work.

The lawyer asked the machine to research and cite court cases, and of course the junk engineering… basically lied.

The court, as you might guess, was very displeased at having to review lies. Note the conclusion above to “never use again” the fraud of ChatGPT.

Harsh but true. Allegedly the lawyer asking ChatGPT for answers decided it was to be trusted because it was asked whether it could be trusted. Hey witness, should I believe you? Ok.

Apparently the court is now sanctioning the laziest lawyer alive, if not worse.

A month ago, when presenting findings like this, I was asked by a professor how to detect ChatGPT. To me this is like asking a food critic how they detect McDonalds. I answered “how do you detect low quality?” because isn’t that the real point? Teachers should focus on quality output, and thus warn students that if they generate garbage (e.g. use ChatGPT) they will fail.

The idea that ChatGPT has some kind of quality to it is the absolute fraud here, because it’s basically operating like a fascist dream machine (pronounced “monopolist” in America): target a market to “flood with shit” and destroy trust, while demanding someone else fix it (never themselves, until they have eliminated everyone else).

Look, I know millions of people willingly will eat something called a McRib and say they find it satisfying, or even a marvel of modern technology.

I know, I know.

But please let us for a minute be honest.

A McRib is disgusting and barely edible garbage, with long term health risks.

Luckily, just one sandwich probably won’t have many permanent effects. If you step on the scale the next day and see a big increase, it’s probably mostly water. The discomfort will likely cease after about 24 hours.

Discomfort. That is what nutrition experts say about eating just one McRib.

If you have never experienced a well-made beef rib with proper BBQ, that does not mean McDonalds has achieved something amazing by fooling you into paying for a harmful lie that causes discomfort before permanent harmful effects.

…nausea, vomiting, ringing in the ears, delirium, a sense of suffocation, and collapse.

This lawyer is lucky to be sanctioned early instead of disboweled later.

Sorry, meant disbarred. Autocorrect. See the problem yet?

Diabetes is a terrible thing to facilitate, as we know from what happened to people who guzzled McDonalds instead of real food and realized too late that their life (and healthcare system) was ruined.

The courts must think big here to quickly stop any and all use of ChatGPT, with a standard of integrity straight out of basic history: stop those avoiding accountability, who think the gross, intentional, harmful lies made for profit by machines (e.g. OpenAI’s) should be prevented or cleaned up by anyone other than themselves.

The FDA, created because of the reporting popularized by The Jungle, didn’t work as well as it should have. But that doesn’t mean the FDA can’t be fixed to reduce cancer in kids, or that another administration can’t be created to block the sad and easily predictable explosion in AI integrity breaches.

FDA Loophole for American Candy Gives Cancer to Kids

A NYT report highlights something I’ve been seeing a lot lately in American generative AI logic.

…many chemicals are approved under a provision known as Generally Recognized As Safe, which states that a food additive can forego review by the F.D.A. if it has been deemed safe by “qualified experts.”

“Qualified experts” is an obviously shady phrase that enables private companies to self-regulate, a political process designed to trade safety for profits.

If a doctor at Stanford will go on the record saying smoking is good for you, in exchange for lavish gifts, American tobacco companies will absolutely use that to deny science. True story.

So too with American candy companies, which seem to use giant safety regulation loopholes to act like cancer isn’t the predictable outcome of known carcinogens they serve children.

One point of contention is that the vast majority of the research on these additives has been done in animals because it is difficult (and unethical) to conduct toxicology research in humans. As a result, “It’s impossible to say that eliminating Red 3 or titanium dioxide from the American diet will reduce the number of people who suffer from cancer by a certain amount with total precision,” Mr. Faber said. “But anything that we can do to reduce our exposure to carcinogens, whether known or suspected carcinogens, is a step in the right direction.”

This is probably a good time to remember that the FDA was created as a reaction to labor abuse complaints in Chicago, as captured in The Jungle. Instead of directly improving rights for workers, the government sought to improve perception of the food quality from places with exposed inhumane working conditions.

At some point these discussions should start to push forward a realization that America often seems to embrace obvious graft and oppose quality, even in cases of children getting cancer.

“One rotten apple spoils the bunch” is a saying that seems to entirely escape the anti-regulation zealots tying the hands of the FDA. And this behavior is having a profound impact on generative AI learning, which parrots inane ideas like “science is evasive and pluralist” because (hypothetically speaking) some candy oligarch sent her kid to medical school to keep them on the family payroll as a dissenting “qualified expert.”

Related: Lege packt aus: Miese Maschen im Snack-Regal (“Lege Spills the Beans: Dirty Tricks on the Snack Shelf”)

How Fixing Howitzers in Ukraine is Like Baking a Cake

“From America with love” is written on a Ukrainian M777 “three axes” howitzer to be fired at Russians.

When I wrote my first book in 2012, I pitched the publisher on cooking recipes for cloud security.

My vision was that one page would describe how to make a historic meal (such as Royal Navy spotted dick) and then the rest of the chapter would be cloud technical steps (such as how to set up secure remote administration).

I even presented a test chapter for the RSA Conference in China on how to grill the perfect hamburger, as a recipe for cloud encryption and key management.

Things didn’t turn out quite like I had expected, as the publisher asked to change the title to virtualization, drop the food recipes, and insert a DVD. It felt like preparing a gourmet vegan dessert and being told to stick to the meat and potatoes.

*Sigh*

Nonetheless in my mind cooking remains a powerful way to convey the relationship between technology and knowledge.

Everybody eats.

Food automation tends to be disgusting, even causing illness, whereas technology augmentation of human cooking, using recipes for quality control and governance, will produce the best possible meal.

Perhaps the canonical example I hear all the time in AI ethics circles… if you brought a robot into your home and told it to prepare you a steak dinner, should you be surprised if later you can’t find the dog?

Hey, I didn’t say the robot was Chinese. Stop thinking so simply.

Microsoft management clearly didn’t understand such basic anthropological tenets of technology use. The big news, hopefully surprising nobody, is that illness has forced the cancellation of a massively funded VR program.

The personnel demoing the tech appear to be using a variant of Microsoft HoloLens. The government recently halted plans to buy more “AR combat goggles” from Microsoft, instead approving $40 million for the company to develop a new version. The reversal came after discovering that the current version caused issues like headaches, eyestrain and nausea.

Such a waste of time and money to find out what is easily predicted.

Soldiers “cited IVAS 1.0’s poor low-light performance, display quality, cumbersomeness, poor reliability, inability to distinguish friend from foe, difficulty shooting, physical impairments and limited peripheral vision as reasons for their dissatisfaction,” per the DOT&E assessment. The Army knows that IVAS 1.0 is something of a lemon [yet] still plans on fielding the 5,000 IVAS 1.0 units it’s currently procuring from Microsoft at $46,000 a pop to training units and Army Recruiting command for a total price tag of $230 million.

It’s like reading that some people got sick and then discovered their taco MRE bag wasn’t really a taco, just sugar and cornmeal drenched in preservatives and artificial taco flavor.

VR from Microsoft sounds like the hardtack (dry “cracker”) of combat goggles. A real bargain at $230 million.

See-through augmentation, measured on efficiency and minimal interference, is a whole different story, as it avoids all the foundational problems of automation (e.g. where flavor, or actually useful nutrition, comes from).

Google Glass really blew it on this point. They could have developed a HUD for highly technical work like repairing machines with both hands.

Of course Google didn’t think like this because their engineers all went straight from elite schools to sitting in a gourmet cafeteria eating free lunches and talking mostly about their exotic vacations.

They’re in a virtual world, the opposite of what’s required for knowledge, let alone innovation. And that’s why their products depend on finding people who really live, who have daily struggles and needs in a real world, to tell them what to engineer.

That’s all background to the main point here that howitzers in Ukraine are proving today what everyone should have been working on for at least the last decade: cooking.

DARPA’s training demos use something more pedestrian: cooking. Dr. Bruce Draper, the program’s manager, describes it as the ideal proxy task. “[Cooking is] a good example of a complex physical task that can be done in many ways. There are lots of different objects, solids, liquids, things change state, so it’s visually quite complex. There is specialized terminology, there are specialized devices, and there’s a lot of different ways it can be accomplished. So it’s a really good practice domain.” The team views PTG as eventually finding uses in medical training, evaluating the competency of medics and other healthcare services.

First you bake a cake together as a team using augmented vision… then you destroy invading armies with it.

Using phones and tablets to communicate in encrypted chatrooms, a rapidly growing group of U.S. and allied troops and contractors is providing real-time maintenance advice — usually speaking through interpreters — to Ukrainian troops on the battlefield. In a quick response, the U.S. team member told the Ukrainian to remove the gun’s breech at the rear of the howitzer and manually prime the firing pin so the gun could fire. He did it and it worked.

Delicious.

I’m not going to claim credit for this obvious future of technology based on ancient wisdom, given there are so many children’s tales saying the same thing.

Ratatouille is probably my favorite, easily digested in movie format.

The real kicker to the howitzer example is that the technical teams spell out, very precisely and in a life-and-death context, where augmentation works best and where it fails (hint: blockchain is a disaster).

As the U.S. and other allies send more and increasingly complex and high-tech weapons to Ukraine, demands are spiking. And since no U.S. or other NATO nations will send troops into the country to provide hands-on assistance — due to worries about being drawn into a direct conflict with Russia — they’ve turned to virtual chatrooms.

I use virtual chatrooms so much I forgot for a minute that they’re virtual.

The Ukrainian troops are often reluctant to send the weapons back out of the country for repairs. They’d rather do it themselves, and in nearly all cases — U.S. officials estimated 99% of the time — the Ukrainians do the repair and continue on. …Ukrainians can now put the split weapon back together. “They couldn’t do titanium welding before, they can do it now,” said the U.S. soldier, adding that “something that was two days ago blown up is now back in play.”

I love this SO MUCH. Right to Repair in a nutshell. Technology dramatically enhances developing markets by sharing knowledge like how to restore that technology in the field.

It’s the awesome Dakar “Malle Moto” (unassisted rally) model of efficiency and sustainability that all technology should be put through, instead of lionizing the teams with the biggest waste.

And now for the main point:

Sometimes video chats aren’t possible. “A lot of times if they’re on the front line, they won’t do a video because sometimes (cell service) is a little spotty,” said a U.S. maintainer. “They’ll take pictures and send it to us through the chats and we sit there and diagnose it.”

Visual diagnosis in real time to bake a highly complicated cake. Including translation for chefs representing 17 nations in a small kitchen.

As they look to the future, they are planning to get some commercial, off-the-shelf translation goggles. That way, when they talk to each other they can skip the interpreters and just see the translation as they speak, making conversations easier and faster.

And I warned you about blockchain.

The expanse of weapons and equipment they’re handling and questions they’re fielding were even too complicated for a digital spreadsheet — forcing the team to go low-tech. One wall in their maintenance office is lined with an array of old-fashioned, color-coded Post-it notes, to help them track the weapons and maintenance needs.

Hope that’s clear. Writing a big blog post about how to share knowledge in the future is hard. Not as hard as a book, obviously, but I definitely could use some augmentation right now.

More than anything it’s clear to me that without government-funded research teams, many tech companies would be utterly and completely lost in expensive dead-end navel gazing.

DARPA is asking for the development of recipes that were really needed a decade ago, based on its assessment of the hunger it sees right now. While it’s fashionable to call this future thinking to avoid blame, in reality it’s being less ignorant about present troubles.

Let the Russians desperate for a Chinese MRE eat cake instead, a delicious one right out of the howitzer.

Or, as I believe Molotov in WWII would have called them, “bread baskets.”

Vyacheslav Molotov claimed in 1939 the Soviet Union was not dropping bombs on Finland, just airlifting food. The Finns thereafter called RRAB-3 cluster bombs “Molotov’s bread basket” (Molotovin leipäkori) and named their improvised incendiary device (used to counter Soviet tanks) a Molotov cocktail — “a drink to go with the food.”