Time to patch (Intel released new firmware) and go on with life. Keys in secure hardware reportedly can be exposed in as little as a few minutes:
…timing leakage on Intel firmware-based TPM (fTPM) as well as in STMicroelectronics’ TPM chip. Both exhibit secret-dependent execution times during cryptographic signature generation. While the key should remain safely inside the TPM hardware, we show how this information allows an attacker to recover 256-bit private keys…
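The mechanics of that recovery are worth spelling out: signing time varies with the bit length of the secret per-signature nonce, so the fastest signatures correspond to nonces with leading zero bits, which is exactly the partial information a lattice (hidden number problem) solver needs to reconstruct the private key. A minimal simulation of the leak itself, with all timing constants invented for illustration:

```python
import random

random.seed(1)  # deterministic run, for illustration only

def simulate_sign(bits=256, per_bit_ns=50, noise_ns=200):
    """Model a signature whose duration depends on the bit length of
    the secret per-signature nonce k (the TPM-Fail style leak).
    The nanosecond constants here are made up for the simulation."""
    k = random.getrandbits(bits)
    duration = k.bit_length() * per_bit_ns + random.uniform(0, noise_ns)
    return k, duration

# Collect many (nonce, time) samples and keep only the fastest few.
samples = sorted((simulate_sign() for _ in range(20000)), key=lambda s: s[1])
fastest = [k for k, _ in samples[:100]]

# The fastest signatures cluster on short nonces: known-zero leading
# bits, which a lattice attack can turn into the full private key.
avg_bits = sum(k.bit_length() for k in fastest) / len(fastest)
```

In the published attack the timing differences came from the fTPM's ECDSA implementation, and a few thousand measured signatures were reportedly enough; the sketch above only demonstrates why timing alone is a nonce oracle.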
In 2013 I was flying around speaking on big data security controls, and wastewater analysis was one of my go-to examples of privacy and integrity risks.
The charts I showed sometimes were the most popular drugs detected in each city’s wastewater site (e.g. cocaine in Oregon) and I would joke that we could write a guide-book to the world based on what “logs” were found.
Scientists at that time claimed the ability to look at city-wide water treatment plants and backtrack outputs to city-block locality. In the near future, they said, it would be possible to backtrack to a specific house or building.
For example, you get a prescription for a drug and the insurance company buys your wastewater metadata, because it shows you’re taking the generic version of the drug while submitting brand-label receipts in claim forms. Or someone runs a five-year analysis of the drugs you’re on, based on sewer data science, to estimate your insurance rates.
This wasn’t entirely novel for me. As a kid I was fascinated by an archaeologist who specialized in digs of the Old West. Everything in a frontier town might be thrown down the hole (e.g. destroy evidence of “edge” behavior), so she would write narratives about real life based on the bottles, pistols, clothes, etc found in and around where an outhouse once stood.
…Auggi, a gut-health startup that’s building an app for people to track gastrointestinal issues, and Seed Health, which works on applying microbes to human health and sells probiotics — are soliciting poop photos from anyone who wants to send them. The companies began collecting the photos online on Monday via a campaign cheekily called “Give a S–t”…
It’s a novel approach in that you aren’t pinned to the toilet in your home and can go outside and take pictures of poop on a sidewalk to upload.
This could be a game-changer given how many rideshare drivers are relieving themselves in cities like San Francisco.
Here’s the sort of chart we need right now, and not just because it looks like ride-share companies giving us the finger.
Uber’s army of 45,000 people suddenly driving from faraway places into a tiny 7-mile-by-7-mile peninsula, with zero plans for their healthcare needs, infamously drove up rates of feces deposited all over public places.
…anecdotal complaints have gotten the attention of San Francisco City Attorney Dennis Herrera. Last week, his office released information for the first time about the number of Uber and Lyft drivers estimated to be working in the city: 45,000. To compare, 1,500 taxi medallions were given out [in 2016], according to the city’s Treasurer & Tax Collector. For perspective, Bruce Schaller, an urban transportation expert, said there are about 55,000 Uber, Lyft and other ride-sharing drivers in New York City, a metropolis of 8 million people, eight times the size of San Francisco.
I’ll just say it again: a rise in human waste on the streets correlates pretty heavily with a rise in ride-share drivers from far away needing a convenient place to relieve themselves (especially as many ended up sleeping in their cars).
In a conversation I had with a man in 2016 who had jumped out of his car to start peeing on the sidewalk in front of my house (despite surveillance cameras pointed right at him), he told me his plight:
Uber driver: I plan to quit as soon as I got my $700 bonus for 100 rides
Me: Because you just needed that quick money?
Uber driver: No, man there are no restrooms. I’m tired of taking a shit on sidewalks and peeing in newspaper boxes. It’s degrading
There definitely was a spike in 2016, which perhaps could have been correlated to gig economy workers seeing that $700 bonus and wandering into the city.
Sightings of human feces on the sidewalks are now a regular occurrence; over the past 10 years, complaints about human waste have increased 400%. People now call the city 65 times a day to report poop, and there have been 14,597 calls in 2018 alone. Last year, software engineer Jenn Wong even created a poop map of San Francisco, showing the concentration of incidents across the city. New mayor London Breed said: “There is more feces on the sidewalks than I’ve ever seen growing up here.” In a revolting recent incident, a 20lb bag of fecal waste showed up on a street in the city’s Tenderloin district.
Do you know what also became a regular occurrence over the past 10 years? Ride share vehicles with drivers needing to poop and no time or place to go.
Many people mistakenly attribute the dirty truth about ride-share driver behavior to homelessness, despite curious facts like “there aren’t actually more homeless people than there have been in the past”.
People also ignore the fact that being homeless and living on the street doesn’t mean people don’t care about their living environment. Homeless people are actually known to clean and sweep, whereas a driver is far more likely to poop at whatever spot they can get away with and then scoot.
I’m not sure why it is so hard for people to admit that a massive rise in ride-share drivers, with no public restrooms for them, is an obvious contributor to waste problems.
In one case I even saw an Uber SUV stop in the middle of a street, a passenger with a dog jumped out and peed directly uphill from a small restaurant with sidewalk seating…the Uber crew then jumped back in and sped away as those eating watched helplessly while rivers of hot dog urine flowed under their dining tables.
That kind of scenario is common sense bad, no? Just look at ride-sharing booms in the 1800s for cities like London, which led to special huts being built for driver care and control.
By 1898 newspapers around the world reported “40 shelters in London, accommodating 3500 cabmen, and there was a fund, provided mostly by subscription, for the maintenance of them.”
An app uploading photos for analysis, or even doing checks within the app itself, would be both a privacy threat to all the ride-share drivers hoping to get away with their dirty business on the streets, and a source of knowledge proving that a city’s most vulnerable (homeless) populations aren’t always to blame.
It’s a highly political topic, such that a “wasteland” interactive map with 2014 data turned into a crazy right-wing propaganda campaign to generate fear about San Francisco sanitation.
No mention is ever made in these political fights of unregulated ride-share drivers, despite the obvious impact of at least 40,000 people driving into the city and around in circles all day, every day, generating pollution, noise, and congestion, all while desperate for places to poop.
Waste analysis sensors could change all that and the real cost of Uber, Lyft etc could lead to sanitation fees (maintenance funds) for a modern-day Rideshare Shelter, which of course would have sensors on toilets.
However, there’s already a security issue in these startups’ plan. Their data collection requires people to upload photos and manually classify them, which sounds to me like an integrity disaster. A recipe for shitty data, if you will.
[Jack Gilbert, a professor of pediatrics at the University of California San Diego School of Medicine and cofounder of the American Gut Project, a science project that solicits fecal samples from people] said that people are asked to rate their stool on the Bristol stool chart in pretty much every clinical trial he conducts, and automating this process would reduce bias and variation in data collection. “Human beings are just not very good at recording things,” he said.
Hopefully the startups will transition to the automated app and then traditional San Francisco residents who still walk on sidewalks, instead of calling a car to drive them three blocks, can use AI to efficiently report the prevalence of Uber poops.
Joshua Maddux tweeted easily reproducible evidence that the Facebook app turns on your iPhone camera without notifying you and at times you weren’t expecting. TNW picked up the story:
By now, everyone should be well aware that any iOS app that has been granted access to your camera can secretly record you. Back in 2017, researcher Felix Krause spoke to TNW about the same issue.
At the time, the researcher noted one way to deal with this privacy concern is to revoke camera access (though that arguably doesn’t make for a smooth software experience). Another thing he suggested is covering up your camera — like former FBI director James Comey and Facebook‘s own emperor Mark Zuckerberg do.
Before saying that everyone should expect “emperor Zuckerberg” to spy on you when you allow him access to your camera, however, the author backs down and says it’s unclear whether iPhone users should expect Facebook to secretly take video.
It remains unclear if this is expected behavior or simply a bug in the software for iOS (we all know what Facebook will say; spoiler: “Muh, duh, guh, it’s a bug. We sorry.”). For what it’s worth, we’ve been unable to reproduce the issue on Android (version 10, used on Google Pixel 4).
See my earlier post on neo-absolutist card indexes for a historic reference of what life was like for those who couldn’t quit Facebook of the 1800s.
One reason Facebook could repeatedly issue blanket denials (“we don’t use your sensors for ads”) could be that they shovel metadata into analytic engines and sell that to affiliates. Those other companies pay for the metadata. Someone else advertises to you, through this tortured logic.
Would that enable Facebook to claim they don’t consider themselves to be using the data for advertising? We’d have to do a deeper line of auditing to find out for sure. Looking at transfer of data is not enough anymore, as analytics increasingly can be done onboard mobile devices including drones collecting massive amounts of sensor data.
This also means Facebook could claim they have no evidence of photos, videos, etc. being transmitted to them, while transmitting rich metadata about users based on sensor capture.
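That distinction can be made concrete. A hypothetical on-device pipeline (all names, labels, and features below are invented for illustration) reduces a raw capture to a tiny derived record, so “we never transmit your photos or audio” stays literally true while the uplink still carries targeting-grade metadata:

```python
def on_device_profile(samples):
    """Toy on-device analyzer: collapses raw sensor samples into a
    small metadata record. Features and labels are illustrative only,
    not any vendor's actual pipeline."""
    energy = sum(s * s for s in samples) / max(len(samples), 1)
    label = "active_audio" if energy > 0.5 else "quiet"
    return {"label": label, "energy_bucket": round(energy, 1)}

# Only the derived record would ever leave the device; the raw
# samples themselves are never transmitted anywhere.
payload = on_device_profile([0.9, -0.8, 0.7, -0.9])
```

Auditing for a transfer of photos or audio would find nothing here, which is why looking only at what data crosses the wire is no longer enough.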
See this example thread, which claims Spotify was the one who decided to target ads.
The most direct question: is Facebook able to use listening to sell data to companies like Spotify as profile/targeting metadata, without revealing to Spotify or anyone else that a microphone or camera actually was used?
Police have embraced an emerging tactic that may be giving paws to cyber criminals.
…English springer spaniels who can detect hidden electronic devices. They follow the scent of a chemical coating used in manufacturing just as police dogs can sniff out blood, explosives and narcotics.
This dogmatic approach is not far removed from SIM sniffers used in prisons.
The dogs can do this because cell phones have a smell. The psychologist Stanley Coren once wrote that he left a collection of cell phone parts in boxes for ten days and opened them to find “a sweet metallic smell that I might fantasize that a newly built robot would have, with perhaps a faint ozone-like overtone.”
Also in the news lately is an infamous Russian “seabed warfare” ship that suddenly appeared in Caribbean waters.
She can deploy deep-diving submarines and has two different remote-operated vehicle (ROV) systems. And they can reach almost any undersea cable on the planet, even in deep water where conventional wisdom says that a cable should be safe.
In the same news story, the author speculates that ship is engaged right now in undersea cable attacks.
…search patterns are different from when she is near Internet cables. So we can infer that she is doing something different, and using different systems.
So has she been searching for something on this trip? The journey from her base in the Arctic to the Caribbean is approximately 5,800 miles. With her cruising speed of 14.5 knots it should have taken her about two weeks. Instead it has taken her over a month. So it does appear likely.
Maps of the Caribbean waters illustrate the relevance of any ship’s position to Internet cables and seabed warfare.
A Russian ship on the northwest coast of Trinidad means it’s either inspecting or even tapping into the new DeepBlue cable, listed as going online in 2020. Trinidad is in the lower right corner of the above map. Here’s a zoomed-in look at the area to compare with the ship-position map above:
And the DeepBlue cable specs give a pretty good idea of why a Russian seabed warfare ship would be hovering about in those specific waters…
Spanning approximately 12,000 km and initially landing in 14 markets, the Deep Blue Cable will meet an urgent demand for advanced telecom services across the Caribbean. This resilient state-of-the-art cable has up to 8 fibre pairs with an initial capacity of 6Tbps and ultimate capacity of approximately 20Tbps per fibre pair. It is designed to be fully looped maximizing system resiliency. With more than 40 planned landings, Deep Blue Cable will bring 28 island nations closer to each other and better connected to the world.
By the next day, he had a response and he was even more unhappy. “THE ALGORITHM”, described similarly to Kafka’s 1915 novel “The Trial“, became the focus of his complaint:
She spoke to two Apple reps. Both very nice, courteous people representing an utterly broken and reprehensible system. The first person was like “I don’t know why, but I swear we’re not discriminating, IT’S JUST THE ALGORITHM”. I shit you not. “IT’S JUST THE ALGORITHM!”. […] So nobody understands THE ALGORITHM. Nobody has the power to examine or check THE ALGORITHM. Yet everyone we’ve talked to from both Apple and GS are SO SURE that THE ALGORITHM isn’t biased and discriminating in any way. That’s some grade-A management of cognitive dissonance.
And the following day he appeals to regulators for a transparency regulation:
It should be the law that credit assessments produce an accessible dossier detailing the inputs into the algorithm, provide a fair chance to correct faulty inputs, and explain plainly why differences apply. We need transparency and fairness. What do you think @ewarren?
Transparency is a reasonable request. Another reasonable request in the thread was evidence of diversity within the team that developed the AppleCard product. These solutions are neither hard nor hidden.
What algorithms are doing, time and again, is accelerating and spreading historic wrongs. The question is fast becoming whether centuries of social debt, in the form of discrimination against women and minorities, is what technology companies are prepared for when “THE ALGORITHM” exposes the political science of inequality and links it to them.
Woz, co-founder of Apple, correctly states that only the government can correct these imbalances. Companies are too powerful for any individual to keep the market functioning with any degree of fairness.
And the woman named in the original tweet also correctly states that her privileged status, achieving a correction for her own account, is no guarantee of a social system of fairness for anyone else.
I care about justice for all. It’s why, when the AppleCard manager told me she was aware of David’s tweets and that my credit limit would be raised to meet his, without any real explanation, I felt the weight and guilt of my ridiculous privilege. So many women (and men) have responded to David’s twitter thread with their own stories of credit injustices. This is not merely a story about sexism and credit algorithm blackboxes, but about how rich people nearly always get their way. Justice for another rich white woman is not justice at all.
Again these are not revolutionary concepts. We’re seeing the impact from a disconnect between history, social science of resource management, and the application of technology. Fixing technology means applying social science theory in the context of history. Transparency and diversity work only when applied in that manner.
In my recent presentation to auditors at the annual ISACA-SF conference, I conclude with a list and several examples of how AI auditing will perform most effectively.
One of the problems we’re going to run into with auditing Apple products for transparency is that Apple has long been waging a war against any transparency in technology, from denying our right to repair hardware to forcing “store”-bought software.
Apple’s subtle, anti-competitive practices don’t look terrible in isolation, but together they form a clear strategy.
The closed-minded Apple model of business is also dangerous as it directly inspires others to repeat the mistakes.
A good analogy I give to our customers is, what we used to do [with industrial technology] was like a Nokia phone. It was a phone. Supposed to talk. Or you can do text. That’s all our systems are. They’re supposed to do energy management. They do it. They’re supposed to protect against fire. They do it. Right? Now our systems are more like Apple. It’s a platform. You can load any app. It works. But you can also talk, and you can also text. But you can also listen to the music. Possibilities emerge based upon what you want.
That closing concept of possibilities can be a very dangerous prospect if “what you want” comes from a privileged position of power with no accountability. In other words do you want to live in a building run by a criminal brain?
When an African American showed up to rent an apartment owned by a young real-estate scion named Donald Trump and his family, the building superintendent did what he claimed he’d been told to do. He allegedly attached a separate sheet of paper to the application, marked with the letter “C.” “C” for “Colored.” According to the Department of Justice, that was the crude code that ensured the rental would be denied.
Somehow THE ALGORITHM in that case ended up in the White House. And let us not forget that building was given such a peculiar name by Americans trying to appease white supremacists and stop blacks from entering even as guests of the President.
…Mississippi senator suggesting that after the dinner [allowing a black man to attend] the Executive Mansion was “so saturated with the odour of the nigger that the rats have taken refuge in the stable”. […] Roosevelt’s staff went into damage control, first denying the dinner had taken place and later pretending it was actually a quick bite over lunch, at which no women were in attendance.
A recent commentary about fixing closed minds, closed markets, and bias within the technology industry perhaps explained it best:
The burden to fix this is upon white people in the tech industry. It is incumbent on the white women in the “women in tech” movement to course correct, because people who occupy less than 1% of executive positions cannot be expected to change the direction of the ship. The white women involved need to recognize when their narrative is the dominant voice and dismantle it. It is incumbent on white women to recognize when they have a seat at the table (even if they are the only woman at the table) and use it to make change. And we need to stop praising one another—and of course, white men—for taking small steps towards a journey of “wokeness” and instead push one another to do more.
Those sailing the ship need to course correct it. We shouldn’t expect people outside the bridge to drive necessary changes. The exception is the governance group that licenses ship captains and thus holds them accountable when they act like AppleCard did.
An author claims to have distilled down the AI risks of greatest importance, which they refer to as “misaligned objectives”:
The point is: nobody ever intends for robots that look like Arnold Schwarzenegger to murder everyone. It all starts off innocent enough – Google’s AI can now schedule your appointments over the phone – then, before you know it, we’ve accidentally created a superintelligent machine and humans are an endangered species.
Could this happen for real? There’s a handful of world-renowned AI and computer experts who think so. Oxford philosopher Nick Bostrom‘s Paperclip Maximizer uses the arbitrary example of an AI whose purpose is to optimize the process of manufacturing paperclips. Eventually the AI turns the entire planet into a paperclip factory in its quest to optimize its processes.
It seems to me this article entirely misses the point: objectives can be aligned yet implemented in unexpected or sloppy ways that people are reluctant to revise and clarify.
It reminds me of criticisms in economics of using poor productivity measurements, which in Soviet Russia was a constant problem (e.g. window factories spitting out panes that nobody could use). Someone is benefiting from massive paperclip production, but who retains authorization over the output?
If we’re meant to be saying that a centrally-planned, centrally-controlled system of paperclip production is disastrous for everyone but the dear leader (in this case an algorithm), we might as well be talking about market theory texts from the 1980s.
Let’s move away from these theoretical depictions of AI as future Communism and instead consider a market based application today of automation that kills.
Cement trucks have automation in them. They repeatedly run over and kill cyclists because they operate with too wide a margin of error in a society with vague accountability, not because of misaligned objectives.
Take for example how San Francisco has just declared a state of emergency over pedestrians and cyclists dying from automated killing machines roaming city streets.
As of August 31, the 2019 death toll from traffic fatalities in San Francisco was 22 people — but that number doesn’t include those who were killed in September and October, including Pilsoo Seong, 69, who died in the Mission last week after being hit by a truck. On Tuesday, the Board of Supervisors responded to public outcry over the issue by passing a resolution to declare a state of emergency for traffic safety in San Francisco.
Everyone has the same or similar objectives of moving about on the street; it’s just that some are allowed to operate at such low levels of quality that they can indiscriminately murder others and say it’s within their expected operations.
I can give hundreds of similar examples. Jaywalking is an excellent example, as machines already have interpreted that racist law (human objective to criminalize non-white populations) as license to kill pedestrians without accountability.
During several presentations and many meetings this past week I’ve had to discuss an accumulation of insider threat stories related to cloud service providers.
It has gotten so bad that our industry should stop saying “evil maid” and say “evil SRE” instead. Anyway, has anyone made a list? I haven’t seen a good one yet so here’s a handy reference for those asking.
First, before I get into the list, don’t forget that engineers are expected to plan for people doing things incorrectly. It is not ok for any cloud provider to say they accept no responsibility for their own staff causing harm.
Do not accept the line that victims should be entirely responsible. Amazon has been forced to back down on this before.
After watching customer after customer screw up their AWS S3 security and expose highly sensitive files publicly to the internet, Amazon has responded. With a dashboard warning indicator…
And in the case of an ex-AWS engineer attacking a customer, the careless attitude of Amazon has become especially relevant.
“The impact of SSRF is being worsened by the offering of public clouds, and the major players like AWS are not doing anything to fix it,” said Cloudflare’s Evan Johnson. Now senators Ron Wyden and Elizabeth Warren have penned an open letter to the FTC, asking it to investigate if “Amazon’s failure to secure the servers it rented to Capital One may have violated federal law.” It noted that while Google and Microsoft have both taken steps to protect customers from SSRF attacks, “Amazon continues to sell defective cloud computing services to businesses, government agencies and to the general public.”
Second, a more complicated point regularly put forth by cloud companies, including Amazon, is that “at the time” means they can escape liability.
When an Uber driver brutally killed a little girl in a crosswalk, Uber tried to claim their driver “at the time” wasn’t their responsibility.
Attorneys for Uber said the ride-sharing company was not liable … because the driver was an independent contractor and had no reason to be actively engaged with the app at the time. […] “It’s their technology. They need to make it safe,” [family attorney] Dolan said in January, suggesting that a hands-free mode would bring the business into compliance with distracted-driving laws.
Uber paid the family of the dead girl an undisclosed sum a year later “to avoid a trial about its responsibility for drivers who serve its customers”.
Lyft ran into a similar situation recently when “rideshare rapists” around America operated “at the time” as private individuals who just happened to have Lyft signage in their car when they raped women. The women were intoxicated or otherwise unable to detect that drivers “at the time” were abusing intimate knowledge of gaps in a cloud service trust model (including weak background checks).
Imagine a bank telling you they have no responsibility for money lost from your account because “at the time” the person stealing from you was no longer their employee. It’s not quite so simple; otherwise companies would simply fire criminals mid-act and instantly wash their hands without stopping the actual harm.
Let’s not blindly walk into this thinking insiders, such as SRE, are off the hook when someone throws up pithy “at the time” arguments.
The FBI has already arrested a suspect in the case: A former engineer at Amazon Web Services (AWS), Paige Thompson, after she boasted about the data theft on GitHub.
Remember how I said “at the time” arguments should not easily get people off the hook? The former engineer at AWS, who worked there around 2015-2016, used exploits known since 2012 on servers in 2019. “At the time” gets fuzzy here.
Let the timeline sink in, and then take note that customers were making mistakes with Amazon for many years, while their staff had front-row seats to the vulnerabilities. Other companies treated the same situation very differently. “Little Man in My Head” put it like this:
The three biggest cloud providers are Amazon AWS, Microsoft Azure, and Google Cloud Platform (GCP). All three have very dangerous instance metadata endpoints, yet Azure and GCP applications seem to never get hit by this dangerous SSRF vulnerability. On the other hand, AWS applications continue to get hit over and over and over again. Does that tell you something?
Yes, it tells us that an inside engineer working for Amazon any time after 2012 would have seen a lot of vulnerabilities sitting ripe for the picking, unlike engineers at other cloud providers. Boasting about the theft is what got the former engineer caught, which should make us wonder about the engineers who never boasted.
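For context on why AWS applications keep getting hit: IMDSv1 answers any plain GET from inside the instance, so any server-side URL fetcher an attacker can point at 169.254.169.254 becomes a credential reader. A minimal sketch of the kind of egress check application code can add (a sketch only; real deployments also need DNS pinning, and IMDSv2’s session-token requirement blocks GET-only SSRF at the platform level):

```python
import ipaddress
from urllib.parse import urlparse

def is_ssrf_risky(url):
    """Refuse to fetch URLs whose host is a literal IP in link-local,
    loopback, or private space -- link-local is where cloud instance
    metadata (169.254.169.254) lives and, under IMDSv1, serves live
    IAM credentials to any GET request."""
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than a literal IP: real code must resolve it
        # and re-check the resulting address (DNS rebinding defense).
        return False
    return addr.is_link_local or addr.is_loopback or addr.is_private

blocked = is_ssrf_risky(
    "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
)
```

This is exactly the class of request the Capital One attacker relayed through a misconfigured proxy, and exactly the fetch an application-layer filter should have refused.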
In early August 2019, Trend Micro became aware that some of our consumer customers running our home security solution had been receiving scam calls by criminals impersonating Trend Micro support personnel. The information that the criminals reportedly possessed in these scam calls led us to suspect a coordinated attack. Although we immediately launched a thorough investigation, it was not until the end of October 2019 that we were able to definitively conclude that it was an insider threat. A Trend Micro employee used fraudulent means to gain access to a customer support database that contained names, email addresses, Trend Micro support ticket numbers, and in some instances telephone numbers.
Google Site Reliability Engineer (SRE) David Barksdale was recently fired for stalking and spying on teenagers through various Google services…
And in case you were wondering how easily a Google SRE could get away with this kind of obviously bad stalking behavior over the last ten years: in 2018 the company said it was starting to fire people in management for crimes, and nearly 50 were out the door:
…an increasingly hard line on inappropriate conduct by people in positions of authority: in the last two years, 48 people have been terminated for sexual harassment, including 13 who were senior managers and above.
Do you realize how huge the investigations team has to be to collect evidence let alone fire 48 people over two years for sexual harassment?
Uber’s lack of security regarding its customer data was resulting in Uber employees being able to track high profile politicians, celebrities, and even personal acquaintances of Uber employees, including ex-boyfriends/girlfriends, and ex-spouses…
Ahmad Abouammo and Ali Alzabarah each worked for the company from 2013 to 2015. The complaint alleges that Alzabarah, a site reliability engineer, improperly accessed the data of more than 6,000 Twitter users. […] Even after leaving the company, Abouammo allegedly contacted friends at Twitter to facilitate Saudi government requests, such as for account verification and to shutter accounts that had violated the terms of service.
…employees were able to use Lyft’s back-end software to “see pretty much everything including feedback, and yes, pick up and drop off coordinates.” Another anonymous employee posted on the workplace app Blind that access to clients’ private information was abused. While staffers warned one another that the data insights tool tracks all usage, there seemed to be little to no enforcement, giving employees free rein over it. They used that access to spy on exes, spouses and fellow Lyft passengers they found attractive. They even looked up celebrity phone numbers, with one employee boasting that he had Zuckerberg’s digits. One source admitted to looking up their significant other’s Lyft destinations: “It was addictive. People were definitely doing what I was.”
Airbnb actually teaches classes in SQL to employees so everyone can learn to query the data warehouses it maintains, and it has also created a tool called Airpal to make it easier to design SQL queries and dispatch them to the Presto layer of the data warehouse. (This tool has also been open sourced.) Airpal was launched internally at Airbnb in the spring of 2014, and within the first year, over a third of all employees at the company had launched an SQL query against the data warehouse.
Childs made national headlines by refusing to hand over administrative control to the City of San Francisco’s FiberWAN network, [a cloud service] which he had spent years helping to create.
One of the most peculiar aspects of the San Francisco case was how the network was sending logs (and perhaps even more, given a tap on core infrastructure) to a cluster of encrypted linux servers, and only Terry Childs had keys. He installed those servers in a metal cabinet with wood reinforcements and padlocks outside the datacenter and behind his desk. Holes were drilled in the cabinet for cable runs back into the datacenter, where holes also were drilled to allow entry.
The details of his case are so bizarre I’m surprised nobody made a movie about it. Maybe if one had been made, these insider risks to cloud would have garnered more attention than the book we published in 2012 managed to.
So who would be the equivalent today of Tony Curtis?
…shortfall on leaflet dispersal capability will jeopardize Air Force Central Command information operations,” said Earl Johnson, B-52 PDU-5/B project manager. The “Buff” can carry 16 PDU-5s under the wings, making it able to distribute 900,000 leaflets in a single sortie.
That’s a lot of paper.
Discussing leaflet drops with a pilot the other night set me straight on the latest methods; he emphasized recent capability for computerized, micro-targeted leaflet fall paths. It sounded too good to be true. I still imagine the stuff floating everywhere randomly, like snowflakes.
Then again, he might have been pulling my leg, given that we were talking about psyops. Or maybe the person who told him was.
Step one: Facebook sets up privileged access (competitive advantage) to user data and leaks this privileged (back) door to Russia
(October 8, 2014 email in which Facebook engineer Alberto Tretti emails Archibong and Papamiltiadis notifying them that entities with Russian IP addresses have been using the Pinterest API access token to pull over 3 billion data points per day through the Ordered Friends API, a private API offered by Facebook to certain companies who made extravagant ads purchases to give them a competitive advantage against all other companies. Tretti sends the email because he is clearly concerned that Russian entities have somehow obtained Pinterest’s access token to obtain immense amounts of consumer data. Merely an hour later Tretti, after meeting with Facebook’s top security personnel, retracts his statement without explanation, calling it only a “series of unfortunate coincidences” without further explanation. It is highly unlikely that in only an hour Facebook engineers were able to determine definitively that Russia had not engaged in foul play, particularly in light of Tretti’s clear statement that 3 billion API calls were made per day from Pinterest and that most of these calls were made from Russian IP addresses when Pinterest does not maintain servers or offices in Russia)
Step two: Facebook CEO announces his company doesn’t care if information is inauthentic
Most of the attention on Facebook and disinformation in the past week or so has focused on the platform’s decision not to fact-check political advertising, along with the choice of right-wing site Breitbart News as one of the “trusted sources” for Facebook’s News tab. But these two developments are just part of the much larger story about Facebook’s role in distributing disinformation of all kinds, an issue that is becoming more crucial as we get closer to the 2020 presidential election. And according to one recent study, the problem is getting worse instead of better, especially when it comes to news stories about issues related to the election. Avaaz, a site that specializes in raising public awareness about global public-policy issues, says its research shows fake news stories got 86 million views in the past three months, more than three times as many as during the previous three-month period.
Step three: Facebook announces it has used an academic institution led by former staff to measure authenticity of information
Working with the Stanford Internet Observatory (SIO) and the Daily Beast, Facebook determined that the shuttered accounts were coordinating to advance pro-Russian agendas through the use of fabricated profiles and accounts of real people from the countries where they operated, including local content providers. The sites were removed not because of the content itself, apparently, but because the accounts promoting the content were engaged in inauthentic and coordinated actions.
Interesting to see former staff of Facebook hiding inside an academic context to work for Facebook on the thing that Facebook says it’s not working on.
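Going back to step one: billions of calls per day against a partner’s token from IPs in a country where that partner has no servers is exactly the pattern even a crude token-abuse monitor would flag. A toy sketch, with every name and threshold invented for illustration:

```python
def token_looks_abused(calls_by_region, home_regions, daily_budget):
    """Toy detector: flag a partner API token when call volume from
    regions outside the partner's known footprint exceeds a budget.
    Region names, budgets, and figures are illustrative only."""
    foreign_calls = sum(
        count for region, count in calls_by_region.items()
        if region not in home_regions
    )
    return foreign_calls > daily_budget

# Roughly the situation described in the quoted email: a partner's
# token driven from Russian IPs at billions of calls per day, while
# the partner has no servers or offices in Russia.
alert = token_looks_abused(
    calls_by_region={"US": 40_000_000, "RU": 3_000_000_000},
    home_regions={"US"},
    daily_budget=100_000_000,
)
```

A signal this loud firing and then being dismissed within an hour as a “series of unfortunate coincidences” is the part of the email that deserves scrutiny.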