A “Potemkin Village” is a set of fake storefronts built to fraudulently impress a visiting czar and his dignitaries. The “front organization” is then torn down once the visit ends.
Step one (PDF): Facebook sets up special pay-to-play access (competitive advantage) to user data and leaks this privileged (back) door to Russia
(October 8, 2014 email in which Facebook engineer Alberto Tretti notifies Archibong and Papamiltiadis that entities with Russian IP addresses have been using Pinterest’s API access token to pull over 3 billion data points per day through the Ordered Friends API, a private API Facebook offered to certain companies that made extravagant ad purchases, giving them a competitive advantage over all other companies. Tretti sends the email because he is clearly concerned that Russian entities have somehow obtained Pinterest’s access token and are harvesting immense amounts of consumer data. Merely an hour later, after meeting with Facebook’s top security personnel, Tretti retracts his statement, calling it only a “series of unfortunate coincidences” without further explanation. It is highly unlikely that in only an hour Facebook engineers were able to determine definitively that Russia had not engaged in foul play, particularly given Tretti’s clear statement that 3 billion API calls were being made per day on Pinterest’s token and that most of these calls came from Russian IP addresses, when Pinterest maintains no servers or offices in Russia)
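The anomaly described in that email is the kind of thing a routine token-usage check would surface. A minimal sketch of such a check, assuming hypothetical names, thresholds, and data (this is illustrative, not Facebook’s actual monitoring code):

```python
# Hypothetical sketch: flag a partner's API access token when its traffic
# volume or geographic origin doesn't match the partner's declared footprint.
# All names, limits, and figures below are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class TokenUsage:
    partner: str             # company the access token was issued to
    calls_per_day: int       # observed API call volume on that token
    top_caller_country: str  # country producing most of the calls

def flag_token_misuse(usage, declared_countries, daily_limit=100_000_000):
    """Return a list of reasons the token looks compromised (empty if none)."""
    reasons = []
    if usage.calls_per_day > daily_limit:
        reasons.append(f"volume {usage.calls_per_day:,} exceeds daily limit")
    if usage.top_caller_country not in declared_countries:
        reasons.append(f"calls dominated by {usage.top_caller_country}, "
                       "where the partner has no servers or offices")
    return reasons

# The pattern from the email: ~3 billion calls/day on Pinterest's token,
# mostly from Russian IPs, though Pinterest has no presence in Russia.
pinterest = TokenUsage("Pinterest", 3_000_000_000, "RU")
print(flag_token_misuse(pinterest, declared_countries={"US"}))
```

Run against the figures in the email, both checks fire, which is what makes the one-hour retraction hard to credit.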
Step two: Facebook CEO announces his company doesn’t care if information is inauthentic
Most of the attention on Facebook and disinformation in the past week or so has focused on the platform’s decision not to fact-check political advertising, along with the choice of right-wing site Breitbart News as one of the “trusted sources” for Facebook’s News tab. But these two developments are just part of the much larger story about Facebook’s role in distributing disinformation of all kinds, an issue that is becoming more crucial as we get closer to the 2020 presidential election. And according to one recent study, the problem is getting worse instead of better, especially when it comes to news stories about issues related to the election. Avaaz, a site that specializes in raising public awareness about global public-policy issues, says its research shows fake news stories got 86 million views in the past three months, more than three times as many as during the previous three-month period.
Step three: Facebook announces it has used an academic institution led by former staff to measure authenticity of actions (not information)
Working with the Stanford Internet Observatory (SIO) and the Daily Beast, Facebook determined that the shuttered accounts were coordinating to advance pro-Russian agendas through the use of fabricated profiles and accounts of real people from the countries where they operated, including local content providers. The sites were removed not because of the content itself, apparently, but because the accounts promoting the content were engaged in inauthentic and coordinated actions.
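The distinction being drawn is that the signal is content-agnostic: accounts get removed for *how* they act, not *what* they say. A minimal sketch of that kind of coordination signal, with hypothetical function names and thresholds (nothing here reflects SIO’s or Facebook’s actual methods):

```python
# Hypothetical sketch of a content-agnostic coordination signal: flag
# clusters of accounts that post identical content within a short window.
# Thresholds and the input format are assumptions for illustration.
from collections import defaultdict

def coordinated_clusters(posts, window_secs=300, min_accounts=3):
    """posts: iterable of (account, content_hash, unix_timestamp) tuples.
    Returns sets of accounts that shared identical content in the same
    time bucket -- the action pattern matters, not what the content says."""
    buckets = defaultdict(set)
    for account, content_hash, ts in posts:
        # Bucket by (what was posted, when) and collect who posted it.
        buckets[(content_hash, ts // window_secs)].add(account)
    return [accounts for accounts in buckets.values()
            if len(accounts) >= min_accounts]

posts = [("a", "h1", 100), ("b", "h1", 150),
         ("c", "h1", 200), ("d", "h2", 100)]
print(coordinated_clusters(posts))  # one cluster: a, b, c
```

Under this kind of rule, three accounts pushing the same item in the same five minutes get flagged regardless of whether the item is true, false, or toxic, which is exactly the property the rest of this piece takes issue with.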
The Potemkin Village effect here is thus former Facebook staff hiding inside an academic village so they don’t look like they’re working for Facebook, while still working for Facebook on a variation of the very thing Facebook said it would not work on.
For example, hypothetically speaking:
If Facebook were a company in 1915 would they have said they don’t care about inauthentic information in “Birth of a Nation” that encouraged restarting the KKK?
Instead, based on this new SIO model, it seems Facebook of 1915 would partner with a University to announce they will target and block films of pro-KKK rallies on the basis of white sheets and burning crosses being inauthentic coordinated action.
It reads to me like a very strange use of APIs as privacy backdoors, as well as a use of “academic” organizations as legal backdoors; both amount to false self-regulation, an attempt to sidestep the obvious external pressure to regulate harms from speech.
Facebook of 1915 perhaps would have said the KKK is fine calling for genocide and the death of non-whites, as long as the Klansmen known to be pushing such toxic and inauthentic statements don’t put on hoods to conceal their faces while they do it.
It is easy to see the irony in Facebook taking an inauthentic position, with its own staff strategically installed in an academic institution like Stanford, while telling everyone else they have to be authentic in their actions.
Also perhaps this is a good time to remember how a Stanford professor took large payments from tobacco companies to say cigarettes weren’t causing cancer.
[Board-certified otolaryngologist Bill Fees] said he was paid $100,000 to testify in a single case.
Updated November 12 to add latest conclusions of the SIO about Facebook data provided to them.
Considered as a whole, the data provided by Facebook — along with the larger online network of websites and accounts that these Pages are connected to — reveal a large, multifaceted operation set up with the aim of artificially boosting narratives favorable to the Russian state and disparaging Russia’s rivals. Over a period when Russia was engaged in a wide range of geopolitical and cultural conflicts, including Ukraine, MH17, Syria, the Skripal Affair, the Olympics ban, and NATO expansion, the GRU turned to active measures to try to make the narrative playing field more favorable. These active measures included social-media tactics that were repetitively deployed but seldom successful when executed by the GRU. When the tactics were successful, it was typically because they exploited mainstream media outlets; leveraged purportedly independent alternative media that acts, at best, as an uncritical recipient of contributed pieces; and used fake authors and fake grassroots amplifiers to articulate and distribute the state’s point of view. Given that many of these tactics are analogs of those used in Cold-War influence operations, it seems certain that they will continue to be refined and updated for the internet era, and are likely to be used to greater effect.
One thing you haven’t seen, and probably will never see, is the SIO saying Facebook is a threat, or that privately-held publishing/advertising companies are a danger to society (the way tobacco companies or oil companies are a danger).