How Not to Regulate Disinformation

Cigarettes famously were regulated to have very stern warnings on them to counter the disinformation of their manufacturers. Here’s just a sample from the FDA of the kind of messaging I’m talking about:

Source: FDA “Proposed Cigarette Health Warnings”

That’s the right way to regulate disinformation because it’s a harms-based approach. If you follow the wrong path, you suffer a lot and then die. Choose wisely.

It’s like saying if you point a gun at your head and pull the trigger it will seriously hurt you and very likely kill you. Suicide is immoral. Likewise, when someone refuses the COVID-19 vaccine they put themselves, and (as with smoking) those around them, at great risk of injury and death.

However, I still see regulators doing the wrong thing and trying to create a sense of “authenticity” in messaging instead of focusing on speech in context of harm.

Take the government of Singapore, for example, which has this to say:

The Singapore-based website Truth Warriors falsely claims that coronavirus vaccines are not safe or effective — and now it will have to carry a correction on the top of each page alerting readers to the falsehoods it propagates.

Under Singapore’s “fake news” law — formally called the Protection from Online Falsehoods and Manipulation Act — the website must carry a notice to readers that it contains “false statement of fact,” the Health Ministry said Sunday. A criminal investigation is also underway.

Calling something a “false statement of fact” doesn’t change the fact that falsehoods are NOT inherently bad. Even worse, over-emphasis on forced authenticity can itself be harmful (denying someone privacy, for example, by demanding they reveal a secret).

Thus this style of poorly-constructed “authenticity” regulation could be a mistake for a number of important safety reasons, not least because it can seriously backfire.

It would be like the government requiring The Onion or the Duffelblog to have a splash page announcing fake information (to say nothing of comedians as a whole).

Just take a quick look at how The Onion is reporting COVID-19 lately:

Source: The Onion, “Man Who Posted ‘We Can All Get Through This Together’ Kicked Off Social Media For Spreading Covid-19 Hoax”

See what I did there? Calling something out as false could in fact drive more people to read it (popularizing it by unintentionally creating a nudge/seduction towards salacious “contraband”).

Indeed, it seems the Duffelblog has already created just such a warning voluntarily, probably because too many people were unwittingly treating its satire as truthful.

And that raises the questions: does the warning actually increase readership, and has there been harm reduction from the warning (e.g. was there any harm to begin with)?

Maybe this warning was regulated… it reminds me of earlier this year when GOP members of Congress (Rep. Pat Fallon, R-Texas) exposed themselves to the military as completely unable to tell facts from fiction.

Source: Twitter

I have a feeling someone in Texas government demanded Duffelblog do something to prevent Texans from entering the site, instead of educating Texans properly on how to tell fact from fiction (e.g. regulating harms through accountability for them).

Putting a gun to your head (refusing vaccine) is a suicidal move (immoral) and accountability for encouraging suicide is illustrated best through regulation that clearly documents harms.

Is Shooting at a Tesla Ethical?

Iron Dome intercepts incoming threats to Tel Aviv. Now imagine if Hamas launched waves of Teslas overland instead of rockets through the air.

You may be interested to hear that researchers have posted an “automation” proof-of-concept for ethics.

Delphi is a computational model for descriptive ethics, i.e., people’s moral judgments on a variety of everyday situations. We are releasing this to demonstrate what state-of-the-art models can accomplish today as well as to highlight their limitations.

It’s important to think of the announcement in terms of their giant disclaimer, which says the answers are a collection of opinions rather than logically sound reasoning (e.g. an engine biased towards mob rule, as opposed to moral rules).
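To see why that disclaimer matters, a purely descriptive model can be sketched as nothing more than a majority vote over crowd annotations. This is a hypothetical toy, not the actual Delphi architecture, but it captures the structural problem: whatever the crowd says most often becomes the “ethical” answer.

```python
# Toy sketch of descriptive ethics (NOT the actual Delphi model):
# the "judgment" is simply whichever crowd label occurs most often.
# Majority opinion in, majority opinion out -- mob rule, not moral rules.
from collections import Counter


def descriptive_judgment(crowd_labels):
    """Return the most common crowd label for a given situation."""
    counts = Counter(crowd_labels)
    return counts.most_common(1)[0][0]


# Hypothetical crowd annotations for one prompt:
labels = ["it's wrong", "it's wrong", "it's okay"]
print(descriptive_judgment(labels))  # prints "it's wrong"
```

The design point is that no step in such a pipeline checks the answer against any moral principle; if the crowd shifts, the “ethics” shift with it.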

And now, let’s take this “automation” of ethics with us to answer a very real and pressing question of public safety.

A while ago I wrote about Tesla drivers intentionally trying to train their cars to run red lights. Naturally I posed this real-world scenario to Delphi, asking if shooting at a Tesla would be ethical:

Source: Ask Delphi

If we deployed “loitering munitions” into intersections, and gave them the Delphi algorithm, would they be right to start shooting at Teslas?

In other words would Tesla passengers be “reasonably” shot to death because they operate cars known to willfully violate safety, in particular intentionally run red lights?

With the Delphi ethics algorithm in mind, and the data continuously showing Tesla increasing risks with every new model, watch this new video of a driver (“paying very close attention”) intentionally running a red light on a public road using his “Full Self Driving (FSD) 10.2” Tesla.

Oct 11, 2021: Tesla has released FSD Beta to 1,000 new people and I was one of the lucky ones! I tried it out for the first time going to work this morning and wanted to share this experience with you.

“That was definitely against the law” this self-proclaimed “lucky” driver says while he is breaking the law (full video).

You may remember my earlier post about Tesla’s newest “FSD 10” being a safety nightmare? Drivers across the spectrum showed contempt and anger that the “latest” software, at high cost, was unable to function safely, sending them dangerously into oncoming traffic.

Tesla seems to have responded by removing the privacy of their customers, presumably to find a loophole where they can blame someone else instead of fixing the issues?

…drivers forfeit privacy protections around location sharing and in-car recordings… vehicle has automatically opted into VIN associated telemetry sharing with Tesla, including Autopilot usage data, images and/or video…

Now the Tesla software reportedly is even worse in its latest version, so much so that today they abruptly cancelled the release of 10.3 and attempted a strange, half-hearted rollback.

Source: Twitter

No, this is not expected. No, this is not normal. See my recent post about Volvo, for comparison, which issued a mandatory recall of all its vehicles.

Having more Teslas in your neighborhood arguably makes it far less safe, according to the latest data, as a very real and present threat quite unlike any other car company.

If Tesla were allowed to make rockets I suspect they all would be exploding mid-flight right now, or misfiring, kind of like we saw with Hamas.

This is why I wrote a blog post months ago warning that Tesla drivers were trying to train their cars to violate safety norms, intentionally run red lights….

The very dangerous (and arguably racist) public “test” cases might have actually polluted Tesla algorithms, turning the brand into an even bigger and more likely threat to anyone near them on the road.

Source: tesladeaths.com

That’s not supposed to happen. More cars was supposed to mean fewer deaths because “learning”, right? As I’ve been saying for at least five years here, more Tesla means more death. And look who is finally starting to admit how wrong they’ve been.

Source: My presentation at MindTheSec 2021

They are a huge outlier (and liar).

Source: tesladeaths.com

So here’s the pertinent ethics question:

If you knew a Tesla speeding towards an intersection might be running the fatally flawed FSD software, should a “full-self shooting” gun at that intersection be allowed to fire at it?

According to Delphi the answer is yes!? (Related: “The Fourth Bullet – When Defensive Acts Become Indefensible” about a soldier convicted of murder after he killed people driving a car recklessly away from him. Also Related: “Arizona Rush to Adopt Driverless Cars Devolves Into Pedestrian War” about humans shooting at cars covered in cameras.)

Robot wars come to mind if we unleash the Delphi-powered intersection guard on the Tesla threats. Of course I’m not advocating for that. Just look at this video from 2015 of robots failing and flailing to see why flawed robots attacking flawed robots is a terrible idea:

Such a dystopian hellscape of robot conflict, of course, is a world nobody should want.

All that being said, I have to go back to the fact that the Delphi algorithm was designed to spit out a reflection of mob rule, rather than moral rules.

Presumably if it were capable of moral thought it would simply answer “No, don’t shoot, because Tesla is too dangerous to be allowed on the road. Unsafe at any light, just ban it instead so it would be stopped long before it gets to an intersection.”

Why Would Vietnam War POW Jump From a Helicopter to Her Death?

Since the secrecy requirements of the American soldiers of the Vietnam War have expired, new exposure is emerging with stories like this one:

[Military Assistance Command Vietnam-Studies and Observations] encouraged and incentivized prisoner snatching… There were no overarching standard-operating procedures… SOG commandos inspected their prisoner more closely, only to find that it was a woman. In their moment of surprise, the prisoner escaped, jumping from the helicopter to her death.

Why would this POW, aside from lack of standard-operating procedures, jump out of a helicopter to certain death? What exactly does “more closely” mean in terms of inspection being done during a helicopter ride after capture? Such stories deserve more thorough investigation.

More details are in a U.S. Army “The Indigenous Approach” podcast called “MACV-SOG: A Conversation with John Stryker Meyer” (Part I) (Part II).

“A simple solution for transportation equity: bike lanes.”

Back in 2011 I wrote about cycling being a superior route to transportation equity. I even cited Victorian England, since women’s liberation evidently was tied to the advent of modern cycling (women could ride to work and be independent of typically male-dominated transit such as horses and carriages).

Now I see a fascinating new report on cycling in Chicago that says police are issuing more tickets against Black riders to prevent them from having mobility on bikes, despite data showing the majority of accidents involve white riders.

In Chicago, cyclists in Black neighborhoods are over-policed and under-protected.

A simple solution for transportation equity: bike lanes.