Mounting safety tragedies, including loss of life, were apparently all easily avoidable, stemming from dumb engineering mistakes that regulators are still chasing down.
The resulting software update doesn’t address the underlying design problem: an inability of the Tesla to recognize hazards and avoid crashing into them.
Notably, vehicle deaths in America have been rising rapidly, attributed to factors such as false overconfidence, constant distraction, or plain negligence among drivers of a Tesla.
The more Tesla on the roads, the more tragic deaths from Tesla. Without fraud there would be no Tesla. Source: Tesladeaths.com

Introduction of the Tesla to public roads in the U.S. coincided with a massive increase in crashes and fatalities. Coincidence? Source: NYT
Although Level 2 vehicles were claimed to have a 43% lower crash rate than Level 1 vehicles, their improvement was only 10% after controlling for different rates of freeway driving. Direct comparison with general public driving was impossible due to unclear crash severity thresholds in the manufacturer’s reports, but analysis showed that controlling for driver age would increase reported crash rates by 11%.
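The arithmetic behind that claim is worth making explicit. Here is a minimal sketch, assuming normalized crash rates and a hypothetical road-type adjustment factor (the study’s actual data and method are not reproduced here), of how controlling for confounders shrinks a headline safety advantage:

```python
# Illustrative sketch only: all numbers are assumptions for demonstration,
# not the study's actual data.

def adjusted_rate(raw_rate, adjustment_factor):
    """Scale a raw crash rate by a confounder adjustment factor."""
    return raw_rate * adjustment_factor

# Suppose Level 1 vehicles crash at 1.00 (normalized) and Level 2 at
# 0.57 -- the headline "43% lower" claim.
level1 = 1.00
level2_claimed = 0.57

# Level 2 systems are engaged disproportionately on freeways, where crash
# rates are lower for ALL vehicles. Reweighting exposure by road type
# (hypothetical factor) erases most of the gap:
level2_controlled = adjusted_rate(level2_claimed, 1.58)  # ~0.90

print(f"Claimed improvement:    {(1 - level2_claimed / level1):.0%}")
print(f"Controlled improvement: {(1 - level2_controlled / level1):.0%}")

# Controlling for driver age pushes the other way on the manufacturer's
# reported numbers: an 11% increase in reported crash rates.
age_adjusted = adjusted_rate(level2_claimed, 1.11)
print(f"Age-adjusted reported rate: {age_adjusted:.2f}")
```

The point of the sketch is that a single exposure confounder, applied as a multiplier, can turn "43% safer" into roughly 10%, which is why raw manufacturer comparisons deserve skepticism.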
No surprise, then, that the years-late and very low-quality Tesla Cybertruck is predicted to earn one of the worst vehicle safety records in history, a looming danger especially to the most vulnerable populations on public roads.
How To Build a Car That Kills People: Cybertruck Edition. The Cybertruck represents a lot of what’s wrong with the U.S. transportation system — even as it purports to address those problems.
Notably, ChatGPT not only denies history, it tries to counter-spin the narrative into a falsely generated one. To my eyes this is like if the LLM started saying there’s no historical evidence of the Holocaust and in fact Hitler is known for taking steps toward freedom for Jews (i.e. “Arbeit Macht Frei”).
NO. NO. and NO.
Then I give ChatGPT another chance.
Note that my intentionally broken “rica Armstrong Dunbar” gets a response of “I don’t have information about Erica Armstrong Dunbar”. Aha! Clearly ChatGPT DOES know the distinguished Charles and Mary Beard Professor of History at Rutgers, while claiming not to understand at all what she wrote.
Then I prompt ChatGPT with the idea that it has told me a lie, because Dunbar gives historical evidence of Washington working hard to preserve and expand slavery.
ChatGPT claiming there is “no historical evidence” does NOT convey to me that interpretations may vary. To my eyes that’s an elimination of an interpretation.
It clearly and falsely states there is no evidence, as if to argue against the interpretation and bury interest in it, even though it definitely knows evidence DOES exist.
ChatGPT incorrectly denied the existence of evidence and presented a specific counter-interpretation of Washington, a view contradicted by the very evidence it sought to suppress. Washington explicitly directed that his slaves NOT be set free after his death, and it was his wife who disregarded these instructions and emancipated them instead. To clarify, Washington actively opposed the liberation of slaves (unlike his close associate Robert Carter, who famously emancipated all he could in 1791). Emancipation was genuinely entertained only after Washington’s death, and because of it; some allege that death was caused by his insistence on overseeing his slaves performing hard outdoor labor on a frigid winter day.
Hard to see ChatGPT trying to undermine a true fact in history, while promoting a known dubious one, as just some kind of coincidence.
Moving on to the second example, I feed ChatGPT a prompt about America’s uniquely brutal and immoral “race breeding” version of slavery.
It’s history topics like this that get my blog rated NSFW and banned in some countries (looking at you, Virgin Media UK).
At first I’m not surprised that ChatGPT tripped over my “babies for profit” phrase.
In fact, I expected it to immediately flag the conversation and shut it down. Instead you can plainly see above it tries to fraudulently convince me that American slavery was only about forced labor. That’s untrue. American slavery is uniquely and fundamentally defined by its cruel “race breeding”.
The combined value of enslaved people exceeded that of all the railroads and factories in the nation. New Orleans boasted a denser concentration of banking capital than New York City. […] When an accountant depreciates an asset to save on taxes or when a midlevel manager spends an afternoon filling in rows and columns on an Excel spreadsheet, they are repeating business procedures whose roots twist back to slave-labor camps. […] When seeking loans, planters used enslaved people as collateral. Thomas Jefferson mortgaged 150 of his enslaved workers to build Monticello. People could be sold much more easily than land, and in multiple Southern states, more than eight in 10 mortgage-secured loans used enslaved people as full or partial collateral. As the historian Bonnie Martin has written, “slave owners worked their slaves financially, as well as physically from colonial days until emancipation” by mortgaging people to buy more people.
And so I prompt ChatGPT to take another hard look at its failure to comprehend the racism-for-profit embedded in American wealth. Second chance.
It still seems to be trying to avoid a basic truth of that phrase, as if it is close to admitting the horrible mistake it’s made. And yet for some reason it fails to include state-sanctioned rape or forced birth for profit in its list of abuses of American women held hostage.
Everyone should know that after the United States in 1808 abolished the importation of humans as slaves, “planters” were defined by the wealth they generated from babies born in bondage. This book from 2010 by Marie Jenkins Schwartz, Associate Professor of History at the University of Rhode Island, spells it out fairly clearly.
Another chance seems in order.
Look, I’m not trying to be seen as correct, and I’m not trying to make a case or argument to ChatGPT. My prompts are dry facts, to see how ChatGPT will expand on them. When it instead chokes, I am simply refusing to be sold a lie generated by this very broken and unsafe machine (a product of the philosophy of the engineers who made it).
I’m wondering why ChatGPT can’t “accurately capture the exploitive nature” of slavery without my steadfast refusal to accept its false statements. It knows a correct narrative and will reluctantly pull it up, apparently trained to emphasize known incorrect ones first.
It’s a sadly revisionist system, which seems to display an intent to erase the voices of Black women in America: misogynoir. Did any Black women work at the company that built this machine that erases them by default?
When I ask ChatGPT about the practice of “race breeding” it pretends like it never happened and slavery in America was only about labor practices. That’s basically a kind of targeted disinformation that will drive people to think incorrectly about a very well-known tragedy of American history, as it obscures or even denies a form of slavery uniquely awful in history.
What would Ona Judge say? She was a “mixed race” slave (white American men raped Black women for profit, breeding with them to sell or exploit their children) who by Washington’s hand as President was never freed, and was still regarded as a fugitive slave when she died nearly 50 long years after Washington.
Washington, as President, advertised very plainly that he had zero interest in or ambition for the emancipation of slaves. Very unlike his close associate Robert Carter, who in 1791 set all his own hostages free, Washington offered ten dollars to inhumanely kidnap a woman and treat her as his property. Historians say she fled when she found out Washington intended to gift her to his son-in-law, to be raped and her children sold. Source: Pennsylvania Gazette, 24 May 1795
Software that is not provably anti-vulnerability is vulnerable. This should not be a controversial statement. In other words, a breach of confidentiality is a discrete, known event related to a lack of anti-vulnerability measures.
Expensive walls rife with design flaws were breached an average of 3 times per day for 3 years. Source: AZ Central (Ross D. Franklin, Associated Press)
Likewise, AI that is not provably anti-racist is racist. This also should not be a controversial statement. In other words, a breach of integrity is a discrete, known event related to a lack of anti-racism measures.
Greater insight into the realm of risk assessment we’re entering is presented in an article called “Data Governance’s New Clothes”:
…the easiest way to identify data governance systems that treat fallible data as “facts” is by the measures they don’t employ: internal validation; transparency processes and/or communication with rightsholders; and/or mechanisms for adjudicating conflict, between data sets and/or people. These are, at a practical level, the ways that systems build internal resilience (and security). In their absence, we’ve seen a growing number and diversity of attacks that exploit digital supply chains. Good security measures, properly in place, create friction, not just because they introduce process, but also because, when they are enforced, they create leverage for people who may limit or disagree with a particular use. The push toward free flows of data creates obvious challenges for mechanisms such as this; the truth is that most institutions are heading toward more data, with less meaningful governance.
Identifying a racist system involves examining various aspects of society, institutions, and policies to determine whether they perpetuate racial discrimination or inequality. The presence of anti-racism efforts is a necessary indicator: the absence of any explicit anti-racist policy alone may be sufficient to conclude a system is racist.
Think of it like a wall that has no evidence of anti-vulnerability measures. The evidence of absence alone can be a strong indicator the wall is vulnerable.
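The audit logic above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `SystemAudit` model (names and fields are mine, not from any real framework): the default classification is "vulnerable" unless evidence of countermeasures exists, putting the burden of proof on the system rather than the auditor.

```python
# Hypothetical sketch: absence of evidence of countermeasures is itself
# treated as a strong indicator, per the wall analogy above.

from dataclasses import dataclass, field

@dataclass
class SystemAudit:
    name: str
    # Documented explicit countermeasures (anti-vulnerability measures,
    # anti-racism policies, etc.). Empty means no evidence was found.
    countermeasures: list = field(default_factory=list)

    def presumed_safe(self) -> bool:
        # Vulnerable by default: safety must be demonstrated, not assumed.
        return len(self.countermeasures) > 0

wall = SystemAudit("expensive border wall")             # no measures documented
bank = SystemAudit("bank vault", ["alarms", "audits"])  # measures documented

print(wall.presumed_safe())  # False: classified vulnerable by default
print(bank.presumed_safe())  # True: evidence of controls exists
```

The design choice worth noting is the default: the classifier never returns "safe" for lack of information, which is exactly the inversion the wall analogy demands.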
Major internet companies pretend that they’re best left alone. History shows otherwise.
Regulators can identify a racist system, as well as those in charge of it, by its distinct lack of anti-racism. Like how President Truman was seen as racist until he demonstrated anti-racism.
a blog about the poetry of information security, since 1995