Testing Things the “Wrong Way” is the Right Way to Test

Yesterday I gave a talk at ISACA-SF where I repeatedly emphasized that AI auditing is about testing things in ways that break them.

This shouldn’t be news to anyone used to testing things, and yet many platforms somehow respond to algorithm failures by telling people to stop running the tests.

In my talk I documented Amazon and Tesla doing this especially plainly, showing that their preferred response to security flaws is for people to stop testing for them. It’s like the 1980s all over again, despite bug bounties and stunt hacking having become so popular.

Here’s a perfect example from Facebook.

In 2017, I got fed up. I filmed a little experiment with the now-co-host of my podcast, Luke Bailey. We made a brand-new Facebook account, and I spent the week manually liking conservative Facebook pages and then every subsequent page the platform recommended to me. Right-wing Ryan radicalized, and hard. My feed jumped from normal Republican content to creepy boomer posts about sexy women to Alex Jones posts within a week.

Facebook was very mad about this! Their response was, at the time, the most aggressive they had ever been with me: “This isn’t an experiment; it’s a stunt. It isn’t how people set up or use Facebook, and suggesting so is misleading.”

I should also point out that in 2017 a researcher reporting a vulnerability would have expected a massive payout under Facebook’s famous bug bounty program. In this story, however, the security failure was so bad, the vulnerability so deep, that Facebook responded with the opposite: they told the researcher to stop doing things in ways that proved a systemic lack of safety on the site rooted in business logic flaws (BLF).
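To make “testing the wrong way” concrete, the experiment described above reduces to a simple loop: seed an account with a starting interest, accept whatever the recommender serves next, and log how far the content drifts. Here is a minimal sketch of that loop, purely illustrative; get_recommendations and classify_content are hypothetical placeholders for however an auditor actually observes and labels the feed, not any real Facebook API.

```python
# Illustrative sketch of a recommendation-drift audit loop.
# The get_recommendations and classify_content callables are hypothetical
# stand-ins for manual observation or whatever collection method the
# auditor uses; they do not correspond to any real Facebook API.

from typing import Callable

def audit_recommendation_drift(
    seed_pages: list[str],
    get_recommendations: Callable[[list[str]], list[str]],
    classify_content: Callable[[str], str],
    days: int = 7,
) -> list[tuple[int, str, str]]:
    """Follow recommendations from a seed set and log how the content shifts."""
    liked = list(seed_pages)
    log = []
    for day in range(1, days + 1):
        for page in get_recommendations(liked):
            liked.append(page)  # "like" every page the recommender serves
            log.append((day, page, classify_content(page)))
    return log

if __name__ == "__main__":
    # Stub functions standing in for real observations; purely for demonstration.
    def stub_recommendations(liked):
        return ["recommended_page_%d" % len(liked)]

    def stub_classifier(page):
        return "unclassified"  # a real audit would label content by hand

    for entry in audit_recommendation_drift(
        ["seed_conservative_page"], stub_recommendations, stub_classifier, days=3
    ):
        print(entry)
```

The point of writing it down this way is that the procedure is repeatable: anyone can rerun the loop and compare how quickly the recommendations drift, which is exactly the kind of adversarial, break-it-on-purpose testing the platforms keep asking people to stop.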
