Any AI System NOT Provably Anti-Racist, is Provably Racist

Software that is not provably anti-vulnerability is vulnerable. This should not be a controversial statement. In other words, a breach of confidentiality is a discrete, known event traceable to a lack of anti-vulnerability measures.
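To make the idea concrete, here is a minimal sketch (hypothetical code, not drawn from any particular system) of what a discrete anti-vulnerability measure looks like: the same database lookup written without and with parameter binding. The safeguard is an auditable artifact; a reviewer can point to where it exists, or to where it does not.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # No anti-vulnerability measure: attacker-controlled input is pasted
    # directly into the SQL text, the classic injection pattern.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = '" + username + "'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Explicit anti-vulnerability measure: parameter binding keeps the input
    # out of the SQL text, so it cannot change the meaning of the query.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()
```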

Expensive walls rife with design flaws were breached an average of 3 times per day for 3 years. Source: AZ Central (Ross D. Franklin, Associated Press)

Likewise, AI that is not provably anti-racist is racist. This also should not be a controversial statement. In other words, a breach of integrity is a discrete, known event traceable to a lack of anti-racism measures.

Greater insight into the realm of risk assessment we are entering is presented in an article called “Data Governance’s New Clothes”:

…the easiest way to identify data governance systems that treat fallible data as “facts” is by the measures they don’t employ: internal validation; transparency processes and/or communication with rightsholders; and/or mechanisms for adjudicating conflict between data sets and/or people. These are, at a practical level, the ways that systems build internal resilience (and security). In their absence, we’ve seen a growing number and diversity of attacks that exploit digital supply chains. Good security measures, properly in place, create friction, not just because they introduce process, but also because, when they are enforced, they create leverage for people who may limit or disagree with a particular use. The push toward free flows of data creates obvious challenges for mechanisms such as this; the truth is that most institutions are heading toward more data, with less meaningful governance.
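The measures named in that passage can be made concrete. Below is a minimal sketch, with field names and rules assumed purely for illustration, of what “internal validation” and a check for “conflict between data sets” might look like before records are treated as facts:

```python
from dataclasses import dataclass

@dataclass
class Record:
    person_id: str
    birth_year: int
    source: str  # which upstream data set the record came from

def validate(record: Record) -> list[str]:
    # Internal validation: list the reasons a record cannot be taken as fact.
    problems = []
    if not record.person_id:
        problems.append("missing person_id")
    if not (1900 <= record.birth_year <= 2025):
        problems.append(f"implausible birth_year {record.birth_year}")
    return problems

def find_conflicts(records: list[Record]) -> dict[str, list[Record]]:
    # Conflict adjudication: surface the same person described differently by
    # different sources, so a human process decides instead of one data set
    # silently overwriting another.
    by_person: dict[str, list[Record]] = {}
    for r in records:
        by_person.setdefault(r.person_id, []).append(r)
    return {
        pid: rs
        for pid, rs in by_person.items()
        if len({r.birth_year for r in rs}) > 1
    }
```

Checks like these are exactly the friction the article describes: they slow the free flow of data, and they create a point of leverage for anyone who disputes a record.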

Identifying a racist system involves examining various aspects of society, institutions, and policies to determine whether they perpetuate racial discrimination or inequality. The presence of anti-racism efforts is a necessary indicator: the absence of any explicit anti-racist policies can, by itself, be sufficient to conclude that a system is racist.

Think of it like a wall with no evidence of anti-vulnerability measures. That evidence of absence alone can be a strong indicator that the wall is vulnerable.
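What an explicit, auditable anti-racism measure might look like in an AI system is itself a design decision; as one hypothetical illustration (the metric choice and the threshold are assumptions, not a standard), a team could refuse to ship a model unless a basic disparity check is run and recorded:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    # Positive-prediction rate per demographic group.
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups) -> float:
    # Largest difference in selection rate between any two groups.
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

MAX_ALLOWED_GAP = 0.10  # assumed threshold, for illustration only

def release_gate(predictions, groups) -> bool:
    # The point is not the particular metric but that the measure exists,
    # runs before deployment, and leaves a record that can be audited.
    gap = demographic_parity_gap(predictions, groups)
    print(f"demographic parity gap = {gap:.3f} (limit {MAX_ALLOWED_GAP})")
    return gap <= MAX_ALLOWED_GAP
```

As with the wall, the absence of any such check is itself evidence.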

For further reading about what good governance looks like, consider another article called “The Tech Giants’ Anti-regulation Fantasy”:

Major internet companies pretend that they’re best left alone. History shows otherwise.

Regulators can identify a racist system, and those in charge of it, by its distinct lack of anti-racism measures, much as President Truman was seen as racist until he showed anti-racism.
