Africa Foreshadowed Abandonment of Allies in Syria: U.S. Weakened Self to Open Doors for Russian Military Expansion

The latest analysis of the Syria crisis increasingly reveals it as a Russian plan that the White House has swallowed hook, line and sinker. Withdrawal clearly harms U.S. interests in both the short and long term (the UN Security Council now compares it to Bosnia, warning of regional destabilization), yet America allows it to proceed.

Some may recall that just a few months ago a similar withdrawal story was brewing in Africa, one that probably should have been a starker warning of what was to come.

Gen Waldhauser said the troops will be redeployed to missions the US sees as high-priority.

“We all realise, you know, Africa, with regards to the prioritisation of our national interests … there’s no doubt about the fact that that it’s, you know, it’s not number one on the list,” Gen Waldhauser was quoted as saying.

The Trump administration views preparation for potential conflicts with China or Russia to be of higher priority than combating terrorism in Africa.

Now that the White House is flying a white flag and abandoning its Kurdish allies in Syria, with Russia rolling in soon after, there may be a clearer explanation for the abandonment of African forces.

The drawdown in Africa seems to have been the opposite of preparing elsewhere for potential conflicts with China or Russia. Turning tail and intentionally opening the door to expanded Russian military sales is manifest in a brand-new announcement that Russia is now pushing into African allegiances:

While Moscow is focused primarily on other regions, it regards Africa as an attractive venue to evade international sanctions imposed by Western nations and deepen ties with old and new partners while scoring points at the expense of the United States.

Part of Russia’s engagement in Africa is military in nature. The Russian military and Russian private military contractors linked to the Kremlin have expanded their global military footprint in Africa, seeking basing rights in a half dozen countries and inking military cooperation agreements with 27 African governments.

U.S. Fighting Disinformation? Look at the Presidential Election of 1932

Regulation and targeted response strategies to fight disinformation worked after FDR won the 1932 election, and they are likely to work again today once someone musters the national trust of residents ready to take action. Without that kind of popular support, and with conciliation toward technology companies instead, it’s unlikely we’ll see any progress today.

DefenseOne writes that there has been a necessary shift in security from a focus entirely on confidentiality toward more integrity, and proposes three steps to get there.

First is better, faster understanding by the U.S. government of what disinformation American adversaries are spreading—or, ideally, anticipation of that spread before it actually happens. […]
Second is, in appropriate circumstances, the swift, clear, and direct intervention of U.S. government spokespersons to expose falsities and provide the truth. […]
Third is an expanded set of U.S. government partnerships with technology companies to help them identify disinformation poised to spread across their platforms so that they can craft appropriate responses.

Point one sounds like a call for more surveillance, which will obviously run into massive resistance before it even gets off the ground, so there’s a tactical and political headwind. Points two and three are unlikely to work at all. The most effective government spokesperson in the past typically was the President; that’s not possible today for obvious reasons. In the past, partnerships with technology companies (radio, newspapers) weren’t possible, and they’re similarly not possible today. Facebook’s CEO has repeatedly said he will continue to push disinformation for profit.

I’ve been openly writing and presenting on this modern topic since 2012 (e.g. BSidesLV presentation on using data integrity attacks on mobile devices to foment political coups), with research going back to my undergraduate and graduate degrees in the mid-1990s. What this article misses entirely is what has worked in the past. Unless they address why that wouldn’t work today, I’m skeptical of their suggestions to try something new and untested.

What worked in the past? Look at the timeline from the 1932 Presidential election to 1940, which directly addressed Nazi military disinformation campaigns (e.g. America First) promoting fascism. 1) Breakup of the organizations disseminating disinformation (regulation). 2) Election of a President who can speak truth to power, and who aligns a government with values that block attempts to profit on disinformation/harms (regulation). 3) Rapid dissemination of antidotes domestically, and active response abroad with strong countermeasures.

Roosevelt defeats Nazis at the ballot box: “By 1932, Hearst was publishing articles by Adolf Hitler, whom Hearst admired for keeping Germany out of, as Hitler put it in a Hearst paper, “the beckoning arms of Bolshevism.” Hitler instead promoted a transcendent idea of nationalism—putting Germany first—and, by organizing devoted nationalist followers to threaten and beat up leftists, Hitler would soon destroy class-based politics in his country. Increasingly, Hearst wanted to see something similar happen in the United States.”

The question today thus should not be about cooperating with those who have been poisoning the waters. The question should be whether regulation is possible in an environment of get-rich-quick, fake-it-til-you-make-it, greedy anti-regulatory values.

Take the Flint, Michigan water disaster as an example, let alone Facebook/Google/YouTube/Wells Fargo.

After officials repeatedly dismissed claims that Flint’s water was making people sick, residents took action.

America has a history of bottom-up (populist) approaches to governance solving top-down exploitation (It’s the “United” part of USA fighting the King for independence). A bottom-up approach isn’t likely to come from the DefenseOne strategy of partnerships between big government and big technology companies.

I’m not saying it will be easy to rotate to populist solutions. It will definitely be hard to take on broad swaths of corrupt powerful leaders who repeatedly profit from poisoning large populations for personal gains. Yet that’s the fork in our road, and even outside entities know they can’t thrive if Americans choose to be united again in their take-down of selfish profiteers who now brazenly argue for their right to unregulated harms in vulnerable populations.

If Zuckerberg were CEO of Juul… right now he’d be trying to excite investors by saying ten new fruity tobacco flavors are coming next quarter for freedom-loving children.

The boss of e-cigarette maker Juul stepped down on Wednesday in the face of a regulatory backlash and a surge in mysterious illnesses linked to vaping products.

I wrote in 2012 about the immediate need for regulation of vaping. Seven years later that regulation finally is happening, sadly only after dozens have died suddenly and without explanation. A partnership with tobacco companies was never on the table.

Bottom line: if you ever wonder why a Republican party today would undermine FCC and CIA authority, look at the FDR-era creation of the FCC (and of the CIA’s predecessor, the OSS) to understand how and why they were designed to block and tackle foreign fascist military disinformation campaigns.

‘Poem to Get Rid of Fear’

Fear Poem, or I Give You Back

by Joy Harjo, the current poet laureate of the U.S.

“Because of the fear monster infecting this country, I have been asked for this poem, this song. Feel free to use it, record it, and share. Please give credit. This poem came when I absolutely needed it. I was young and nearly destroyed by fear. I almost didn’t make it to twenty-three. This poem was given to me to share.” — Joy Harjo

I release you, my beautiful and terrible
fear. I release you. You were my beloved
and hated twin, but now, I don’t know you
as myself. I release you with all the
pain I would know at the death of
my children.
You are not my blood anymore.
I give you back to the soldiers
who burned down my home, beheaded my children,
raped and sodomized my brothers and sisters.
I give you back to those who stole the
food from our plates when we were starving.
I release you, fear, because you hold
these scenes in front of me and I was born
with eyes that can never close.
I release you
I release you
I release you
I release you
I am not afraid to be angry.
I am not afraid to rejoice.
I am not afraid to be black.
I am not afraid to be white.
I am not afraid to be hungry.
I am not afraid to be full.
I am not afraid to be hated.
I am not afraid to be loved.
to be loved, to be loved, fear.
Oh, you have choked me, but I gave you the leash.
You have gutted me but I gave you the knife.
You have devoured me, but I laid myself across the fire.
I take myself back, fear.
You are not my shadow any longer.
I won’t hold you in my hands.
You can’t live in my eyes, my ears, my voice
my belly, or in my heart my heart
my heart my heart
But come here, fear
I am alive and you are so afraid
of dying.

Massive Biometric Data Breach Traced to 2014 Yahoo

A massive breach of privacy in June 2014 happened several months after Yahoo had hired a CSO who publicly boasted that he personally was the reason users could trust the security of the service. He quietly left in disgrace, failing to reveal the breaches, to become the CSO at Facebook, where he repeated the story and again left in disgrace.

Ira Kemelmacher-Shlizerman is a “Science-Entrepreneur” on a Google moonshot project, after previously serving two years at Facebook. Her MegaFace “science” project to collect human faces for surveillance technology was done without user consent and is alleged to violate biometric privacy law.

March 2014 saw the following PR campaign:

Watch out, Google. The rumors are true. Yahoo has officially stepped up its security A-game. It’s called Alex Stamos.

Yahoo announced yesterday that it hired the world-renowned cybersecurity expert and vocal NSA critic to command its team of “Paranoids” in bulletproofing all of its platforms and products from threats that will surely come.

Bulletproofing. Who says that? Someone who doesn’t understand the role of CSO. “Vocal NSA critic” is a reference to Stamos parroting anti-government talking points (he stood in front of the head of the NSA and said the US is morally equivalent to Russia, China, Saudi Arabia…).

What these PR campaigns by Stamos failed to include was the fact that he had no prior experience as a CSO, let alone experience leading security operations for a public company, let alone management experience to handle a large complex organization.

His lack of experience very soon after manifested in some of the largest privacy breaches in history, as revealed by those who ended up involved in his catastrophic tenures.

For example, look at June 2014, just three months after those Yahoo “bulletproofing” boasts attempted to juice stocks, when an unprecedented breach of privacy happened, violating American biometric data protection law:

In June 2014, seeking to advance the cause of computer vision, Yahoo unveiled what it called “the largest public multimedia collection that has ever been released,” featuring 100 million photos and videos. Yahoo got the images — all of which had Creative Commons or commercial use licenses — from Flickr, a subsidiary.

…researchers who accessed the database simply downloaded versions of the images and then redistributed them, including a team from the University of Washington. In 2015, two of the school’s computer science professors — Ira Kemelmacher-Shlizerman and Steve Seitz — and their graduate students used the Flickr data to create MegaFace.

That breach method should sound familiar. Anyone looking at the Cambridge Analytica incident at Facebook would recognize it.

Stamos abruptly and quietly left Yahoo in June 2015 to join Facebook as their CSO. Then a month later a report surfaced that said American billionaires were actively using data mining in centralized data repositories to drive political coups.

Cambridge Analytica is connected to a British firm called SCL Group, which provides governments, political groups and companies around the world with services ranging from military disinformation campaigns to social media branding and voter targeting.

So far, SCL’s political work has been mostly in the developing world — where it has boasted of its ability to help foment coups.

By December 2015, despite these warnings, the Guardian breaks a story on researchers taking data from Facebook without user consent.

Documents seen by the Guardian have uncovered longstanding ethical and privacy issues about the way academics hoovered up personal data by accessing a vast set of US Facebook profiles, in order to build sophisticated models of users’ personalities without their knowledge.

The FBI has released 2015 internal email threads from Facebook (PDF) in which staff were discussing Cambridge Analytica.

Sept 30, 2015. 12:17PM. To set expectations we can’t certify/approve apps for compliance, and it’s very likely these companies are not in violation of any of our terms. …if we had more resources we could discuss a call with the companies to get a better understanding, but we should only explore that path if we do see red flags.

This reflects leadership saying the team would only look for danger in this area if they got more funds, and even then would only look for fire after they already saw smoke, as provided to them by the arsonists.

It reads basically like someone was running a self-funding plan to deliver the absolute least amount of security services possible. Such a mindset is common for a CTO building a minimum viable product (MVP), yet is entirely inverted from the ethical models of normal CSO operations.

While Facebook has repeatedly stated after the fact that Cambridge Analytica was a “clear lapse” by its security team, we increasingly see evidence these security lapses may also have been present a year earlier, under the same CSO at a different company.

In somewhat related news, Facebook still is on the hook for a $35 billion class-action lawsuit filed the year Stamos joined as CSO.

The suit alleges that Illinois citizens didn’t consent to having their uploaded photos scanned with facial recognition and weren’t informed of how long the data would be saved when the mapping started in 2011. […] Filed in 2015, Facebook has done everything to try to block the class action case, from objecting to definitions of tons of words in the suit to lobbying against the underlying Biometric Information Privacy Act. The class action poses an even greater penalty than the record-breaking $5 billion settlement Facebook agreed to over violations of its FTC consent decree. Though that payment amounts to a fraction of the $55 billion in revenue Facebook earned last year, it’s also been saddled with tons of new data privacy and transparency requirements. The $35 billion threat coming into focus contributed to a 2.25% share price drop for Facebook today.

There’s a good chance this case cannot survive a Supreme Court test. As politics would have it, the Facebook DC office is run by Joel Kaplan, the guy who infamously sat next to Kavanaugh while allegations of sexual assault were denied. Kaplan serves extreme-right nationalist publications like Breitbart and the Daily Caller by linking them to Facebook management. That also is why right now we’re about as likely to see Stamos held accountable for his disasters as anyone who failed upward into the White House.

But the real lesson here is that Americans are overly fixated on singular individuals as saviors, despite society taking a huge risk by following their unexpected jumps. Whether it be Stamos, Snowden or Assange, there increasingly is a toxic exhaust from their meteoric failures that a Canadian marketing journal recently described best:

We have become fascinated with strong individuals: Ninjas, rockstars and 30 under 30s. We hail unicorns and disruptors, and we mock those on the decline as dinosaurs or people who couldn’t see the writing on the wall. Celebrating individual achievements is fine, but when we forget about the importance of community, I believe we all suffer…

New York SHIELD Act (S.5575B/A.5635) Deadline Oct 23

The arrival of colonists from both the Netherlands and England in the mid-1600s marked a tragic end to Native Americans living in New York despite their SHIELDS.

In New York political circles it’s called the Stop Hacks and Improve Electronic Data Security (SHIELD) Act. Unless I’m reading that sentence wrong, it should be called the SHIELDS Act.

Leaving off the last S is kind of ironic, when you think about an act meant to prevent people from leaving off security.

In any case, S.5575B/A.5635 was meant to impose stronger regulations forcing notification of security breaches of any New York resident’s data. Passed July 25th this year (three days after the NY government announced a $19.2 million settlement with Equifax over their data breach), its breach notification provisions become effective one week from today (90 days after passage) on October 23, 2019.

Notable changes:

  • Broader definition of a breach: unauthorized access to private information
  • Broader definition of private information: includes bank account and payment data, biometric information and email addresses with any corresponding passwords or recovery flow (security questions and answers)
  • Broader definition of whose information is protected: any NY resident no matter where their data is stored (not just business operations in NY)
  • New state government notification requirements (deadline for data protection programs is March 21, 2020, but data breaches must be recorded starting October 23, 2019)
  • “Tailored” data security requirements based on “size of a business”

The inclusion of biometric information in state data protection legislation is a huge deal in America. This recently came to light when people realized their rights were being egregiously violated by technology companies, given Illinois regulations that are already more than ten years old:

As residents of Illinois, they are protected by one of the strictest state privacy laws on the books: the Biometric Information Privacy Act, a 2008 measure that imposes financial penalties for using an Illinoisan’s fingerprints or face scans without consent. Those who used the [unprecedentedly huge facial-recognition database called MegaFace] — companies including Google, Amazon, Mitsubishi Electric, Tencent and SenseTime — appear to have been unaware of the law, and as a result may have huge financial liability, according to several lawyers and law professors familiar with the legislation.

Turkey: “This is Not a Ceasefire”

After a series of tragic missteps by the White House, which led to tens of thousands of Kurds being killed and hundreds of thousands of refugees, American officials tried to claim they had created a cease-fire.

On the face of it their claim doesn’t make any sense, as America turned tail and abruptly left its allies. A cease-fire would be with whom? They can’t say the Kurds, or that would mean Turkey formally acknowledging a Kurdish authority.

The U.S. departure from its position was so sudden it had to fly in jets to bomb its own supply depots and structures instead of following normal exit procedures. Russians were said to be inhabiting American-built military structures within hours.

The American story-line, in this context of running away, attempted to claim a cease-fire was negotiated, giving the impression of Turkey recognizing an authority.

However Turkey officially has said the opposite, as The Economist correspondent in Turkey tweeted:

Turkish FM Çavuşoğlu just now: “We will suspend the Peace Spring operation for 120 hours for the PKK/YPG to withdraw. This is not a ceasefire.”

The joint Turkish-US statement confirms this, calling it a unilateral pause for a Turkish operation.

Authentication is Hard

Cisco announced that its wireless access points have an authentication bypass.

The most crucial one is CVE-2019-15260, which could be exploited by attackers by requesting specific URLs from an affected AP and allow them to gain access to the device with elevated privileges.

Kubernetes announced that anyone can be admin by tampering with headers.

…attackers could exploit the bug to authenticate as any user by crafting an invalid header that would go through to the server.

Palo Alto provided an example: “An attacker may send the following request to the proxy: ‘X-Remote-User : admin.’ If the proxy is designed to filter X-Remote-User headers but doesn’t recognize the header because it’s invalid and forwards it to the Kubernetes API server [anyway], the attacker would successfully pass the API request with the roles of the ‘admin’ user.”
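The class of bug Palo Alto describes can be sketched in a few lines (an illustrative toy, not the actual Kubernetes or proxy code; the function names and parsing rules here are my own): a proxy filters headers by exact name, while the backend parses malformed header lines more leniently, so a header the proxy fails to recognize is still understood downstream.

```python
# Toy model of a header-filtering bypass: the proxy blocks headers whose
# name matches exactly, but an "invalid" variant (extra whitespace before
# the colon) slips through and is normalized by a lenient backend parser.

def proxy_filter(raw_headers):
    """Drop any header whose name is exactly 'X-Remote-User'."""
    kept = []
    for line in raw_headers:
        name, _, _ = line.partition(":")
        if name == "X-Remote-User":  # exact-match check only
            continue                 # blocked
        kept.append(line)            # unrecognized lines pass through
    return kept

def lenient_backend(raw_headers):
    """A backend parser that strips whitespace around header names."""
    parsed = {}
    for line in raw_headers:
        name, _, value = line.partition(":")
        parsed[name.strip()] = value.strip()  # 'X-Remote-User ' -> 'X-Remote-User'
    return parsed

# Attacker sends an invalid header line: note the space before the colon.
request = ["Host: api.example.internal", "X-Remote-User : admin"]
forwarded = proxy_filter(request)   # proxy doesn't recognize the header
identity = lenient_backend(forwarded)
print(identity.get("X-Remote-User"))  # prints "admin"
```

The general lesson is that a filter and the service behind it must parse requests identically; a safer proxy design rejects anything it cannot parse rather than forwarding it.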

Google announced its phones have a facial recognition bypass (you don’t have to be awake).

Google has confirmed the Pixel 4 smartphone’s Face Unlock system can allow access to a person’s device even if they have their eyes closed.

Samsung announced that its phones have a fingerprint reader bypass.

The issue was spotted by a British woman whose husband was able to unlock her phone with his thumbprint just by adding a cheap screen protector.

When the S10 was launched, in March, Samsung described the fingerprint authentication system as “revolutionary”.

…and if anyone remembers 2002 security mailing lists, biometric failure such as Samsung’s was framed as having an important moral.

Matsumoto tried these attacks against eleven commercially available fingerprint biometric systems, and was able to reliably fool all of them. The results are enough to scrap the systems completely, and to send the various fingerprint biometric companies packing. Impressive is an understatement.

There’s both a specific and a general moral to take away from this result. Matsumoto is not a professional fake-finger scientist; he’s a mathematician. He didn’t use expensive equipment or a specialized laboratory. He used $10 of ingredients you could buy, and whipped up his gummy fingers in the equivalent of a home kitchen. And he defeated eleven different commercial fingerprint readers, with both optical and capacitive sensors, and some with “live finger detection” features. (Moistening the gummy finger helps defeat sensors that measure moisture or electrical resistance; it takes some practice to get it right.) If he could do this, then any semi-professional can almost certainly do much much more.

Look at how far we’ve come in 17 years.

Drones With Lasers Reveal Human Secrets

Archaeologists are only a few steps removed from forensic scientists looking for crime scenes, if you know what I mean.

Rewriting history is now easier than ever because drones can speed the discovery of buried things:

…airborne laser scan of the area has found 900 previously unknown archaeological sites on Arran, promising to rewrite the 6,000-year human history of the island…

Given how much can be revealed and how fast, the next technology shift may have to be artificially intelligent archaeologists that can keep up with laser workloads:

Francisco Estrada-Belli, another member of the archaeological team, told National Geographic: “The fortified structures and large causeways reveal modifications to the natural landscape made by the Maya on a previously unimaginable scale.

“Lidar is revolutionising archaeology the way the Hubble Space Telescope revolutionised astronomy.

“We’ll need 100 years to go through all the data and really understand what we’re seeing.”

One group that isn’t wasting any time before jumping (pun not intended) to conclusions are the operators on military missions.

The operators use a tablet and special software to designate an area of interest, dispatch a drone to scan it, and then – in a matter of hours – automatically compile the sensor readings into a 3D map so detailed you can even distinguish different species of trees.

I guess you could say operators are seeking hiding places that others could use as much as they would themselves.

The next step from 3D maps is to attach photo-realistic data. Nearly five years ago AutoDesk boasted of its ability to 3D map anything in its cloud using drone photography. Earlier this year engineers at the mapping company Here said pushing photographic-level detail to operators at city-wide scale is hitting performance bottlenecks, yet already is being done.

This opens up huge new ethical issues, including adversarial response and countermeasures to seeing and being seen, as the geospatial experts in the defense industry already have been flagging:

Efforts to correct mistakes, respond to disasters, or map poverty warm the heart. But other aspects of geospatial intelligence are rife with ethical challenges, from potential invasions of privacy to the violation of the confidentiality of individuals who agree to provide income or other demographic information. “Don’t expect lawyers to catch up,” warned Schwartz. “There are going to be guidelines that need to be created by those who are doing the work.”

[…]

“The reason we exist is to give advantage to our country,” said Munsell, “and as director [Robert] Cardillo used to say, ‘to never allow a fair fight.’”

Is Facebook the Wells Fargo of Social Media?

Headlines report “Wells Fargo takes $1.6 billion hit linked to fake-account scandal” and its CEOs are being punished for lack of ethics.

For the second time in 2½ years, a chief executive of Wells Fargo & Co. resigned abruptly on Thursday as the scandal-ridden bank took another stab at putting its problems behind it.

Tim Sloan became chief executive in October 2016, succeeding John Stumpf after the company was hit with the first of what would become billions of dollars in penalties for having opened millions of bank accounts without customers’ authorization.

That’s a far cry from Facebook, though, where a CEO continues to run the company despite ethics demonstrably worse than Wells Fargo.

CNN compared them like this:

One company said goodbye to its CEO and other top executives, clawed back tens of millions of dollars in pay, installed a new chair and hired a law firm to find out what went wrong.

The other company hired thousands of new security workers, conducted opposition research on George Soros and insisted senior executives are here to stay.

In recent polling Wells Fargo and Facebook rank side by side near the bottom of consumer perceptions, barely ahead of the bottom-dwelling Trump Organization.

The Race to Regulate AI

Several standards are emerging in parallel around the world to make AI safer. There’s so much regulatory activity going on that one almost needs AI to keep up with what the humans are doing. Some recent examples:
