Spoiler alert. Inarticulate Grief is a poem by Richard Aldington about WWI that is still relevant today.
Let the sea beat its thin torn hands
In anguish against the shore,
Let it moan
Between headland and cliff;
Let the sea shriek out its agony
Across waste sands and marshes,
And clutch great ships,
Tearing them plate from steel plate
In reckless anger;
Let it break the white bulwarks
Of harbour and city;
Let it sob and scream and laugh
In a sharp fury,
With white salt tears
Wet on its writhen face;
Ah! let the sea still be mad
And crash in madness among the shaking rocks —
For the sea is the cry of our sorrow
Now read Inarticulate Grief, by Sean Patrick Hughes, a beautiful piece of prose about America’s endless Bush-Cheney wars.
No deployment I had was hard enough to make me deal with the pain it caused. Someone always had it harder. No loss suffered; no trauma absorbed was bad enough to acknowledge. Someone always had it tougher. Acknowledging it, in some way, dishonored them.
In a bizarre screed about their view of technology futures, a16z literally advocates the least ethical data practices as their “best” strategy to profit from AI. They refer to your data being difficult to remove from a castle as a “defensive moat,” which doesn’t make it any easier to justify what amounts to unlawful incarceration:
Great software companies are built around strong defensive moats. Some of the best moats are strong forces like network effects, high switching costs, and economies of scale.
That investment moat definition is basically a way of rationalizing insiders being prevented from leaving, which historically is the opposite of what the “best moats” were actually engineered to do: protect the residents sheltering inside against outsiders who would harm them.
“Lock all your users’ data up and throw away the key” seems to be the thinking behind this “best moat” definition for AI, although I’m sure people will argue that locking everyone inside a moat so they can’t leave when they want is somehow a rational defensive mindset for AI castle leadership.
Abolition of unjust moats sounds like a good response to investor posts calling for forced incarceration of your body of data for their AI machine profits. Crossing a moat to leave should be your right, not something denied by a castle that wants your body of data to pay for its moats and jails.
Choose liberty for your data, and walk away from for-profit prisons by giant “defensive moat” development barons.
The IAPP has a good article on privacy considerations for things in cars and the data streams associated with them. Here’s a notable section:
On appeal, the California Appellate Court agreed, noting that “a person has no reasonable expectation of privacy in speed on a public highway” because speed is easily observable by the public, through radar detectors or estimation by a trained police expert. Thus, Diaz had no Fourth Amendment expectation of privacy; “technology merely captured information defendant knowingly exposed to the public.”
That doesn’t fairly reflect the actual court documents, which speak directly to the brake indicators being designed for public view.
…a person has no reasonable expectation of privacy in speed on a public highway because speed may readily be observed and measured through, for example, radar devices (e.g., People v. Singh (2001) 92 Cal.App.4th Supp. 13, 15 [112 Cal.Rptr.2d 74]), pacing the vehicle (e.g., People v. Lowe (2002) 105 Cal.App.4th Supp. 1, 5 [130 Cal.Rptr.2d 249]), or estimation by a trained expert (e.g., People v. Zunis (2005) 134 Cal.App.4th Supp. 1, 6 [36 Cal.Rptr.3d 489]). Similarly, a person has no reasonable expectation of privacy in use of a vehicle’s brakes because statutorily required brake lights (Veh. Code, § 24603) announce that use to the public.
The article makes many good points about secrecy of the logs collected, as well as manufacturer defects in devices that challenge the integrity of any observation (e.g. broken brake light). Above all, it’s interesting to see how courts ruled on something designed to publicly disclose, when the user claims its use was a protected secret.
I’ve been involved in the counter-drone market for many years now, including time spent in government offices discussing the “latest” technology advances with operators. Not everyone seems excited to hear details from this area of security research.
One thing that regularly has come up is whether venerable laser weapons are effective yet. I say venerable because the US Air Force itself will tell you they’ve been experimenting with lasers shooting down drones since the early 1970s (according to AFD-070404-025).
…1972 when technicians fired a ground based 100 kilowatt CO2 laser that propagated at 10.6 microns against a variety of stationary targets. The tests went so well the project elevated to firing the laser at a moving airborne target. On November 13, 1973, the laser was used against a 12 foot long Northrop MQM33B
radio controlled aerial target, a drone, in an attempt to knock it out of the air. Indeed, the drone did drop, but not precisely as planned.
Unfortunately the story was written around “a simple promotional video for Rafael’s Drone Dome, an anti-drone laser weapon”, making it a bit of PR extending the PR released by the manufacturer themselves.
Instead of taking the video at face value, better analysis is in order.
Here are a few thoughts on why perhaps it’s not such a bright idea for journalists to uncritically post a laser vendor’s demonstration.
1) Light reflection. Mirrors are a simple and logical countermeasure. As Dr. Seuss might put it, any chrome drone would bounce a drone dome. The dissipation of energy, to be fair, isn’t child’s play, so mirrors have problems to tackle. But the Office of Naval Research is definitely proving the point with its work on Counter Directed Energy Weapons. More to the point, the Air Force says the latest reflective anti-heat technology developed for energy-efficient buildings (windows and roofs) could be applied to all of its weapons systems.
2) Dissipation of energy. In a famous case in Mexico, a liquid-cooled door greatly slowed police battering rams. The point here is to push energy into heat sinks or disposable parts to slow absorption. Again, energy-efficient buildings are developing things like phase-change materials to absorb energy, and these easily could be applied to drones. Slowing the laser’s effect on each drone could mean a moderately sized swarm might easily overwhelm or avoid laser weapons.
3) Obfuscation. Both of the above technologies have very useful civilian applications, and thus are likely to improve faster than any expensive laser weapon can innovate. There’s also a more traditional countermeasure, which is to foul the environment a laser has to pass through. Drones could generate a synthetic cloud or fog. A swarm of drones could even create a blanket or corridor that renders laser weapons ineffective. NASA described a version of this working a couple of years ago:
10 canisters about the size of a soft drink can will be deployed in the air, 6 to 12 miles away from the 670-pound main payload. The canisters will deploy between 4 and 5.5 minutes after launch forming blue-green and red artificial clouds.
Again slowing down the laser weapon is all that is needed. As one counter-counter-drone researcher put it to me “the glitter bomb is a zero cost defense”.
4) Counterattack. Lasers depend on being able to see, and be seen, so drones can fire lasers back at the source in order to blind the tracking systems or disrupt the light waves.
Those are four devastating examples, and more probably exist. In every one it comes down to economics: inexpensive, rapidly iterating countermeasures bypass the extremely expensive and slow-developing laser weapons.
Let me be clear: laser weapons are effective against operations that are not explicitly trying to build countermeasures to laser weapons, and there is still a need for them. However, journalists do us no favors by promoting vendor PR and repeating nonsense like “100% effective,” given we have nearly 50 years of evidence of how and why laser weapons fail.
FEW-View™ is an online educational tool that helps U.S. residents and community leaders visualize their supply chains with an emphasis on food, energy, and water. This tool lets you see the hidden connections and benchmark your supply chain’s sustainability, security, and resilience.
FEW-View™ is developed by scientists at Northern Arizona University and at the Decision Theater® at Arizona State University. FEW-View™ is an initiative of the FEWSION™ project, a collaboration between scientists at over a dozen universities (https://fewsion.us/team/).
FEWSION™ was founded in 2016 by a grant from the INFEWS basic research program of the National Science Foundation (NSF) and the U.S. Department of Agriculture (USDA). The opinions expressed are those of the researchers, and not necessarily the funding agencies.
However, there are two problems I see already with the map. First, it doesn’t go backward in time. The illustrations would be far more useful if I could pivot through 1880 to 1980. Second, the interactive maps allow you to break out a booze category but I have yet to find a way to filter on bananas and pineapples let alone ingredients for three flavors of ice cream.
First a recent DARPA video shows how a swarm of drones would be carrying out an urban exercise:
Second, special operations describes their “future fights” training as assessing trustworthiness of partners in the field:
…instructors hear a gunshot echo in the woods. An extrajudicial killing ‘is obviously not ideal,’ one Special Forces instructor said.
Add these two together and you get special operators dropping into urban areas to identify and ultimately eliminate untrustworthy partners, which obviously means drones in the near future.
That pretty much sounds like the thesis of Blade Runner, which is finding the presence of machines that lack empathy and then eliminating them. The tough question, as the instructor implied: is an assessment of imminent harm judicial or scientific enough to warrant hitting the off button?
For example, CVE-2019-0708 (Remote Desktop Services Remote Code Execution Vulnerability, May 14, 2019) has an EPSS score of 95.2%, the estimated probability of exploitation in the next 12 months, alongside a CVSS score of 9.8 (Critical).
That might be an obvious outcome, but it hopefully illustrates some of the importance in adding threat data to the vulnerability remediation timeline.
The real trick is finding CVEs with a low CVSS score but a high EPSS score, because that combination indicates a risk-perception imbalance that can quickly lead to disaster.
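As a minimal sketch of that triage idea, assuming you already pulled scores into simple records (the CVE identifiers marked HYPO, the sample scores, and the threshold values here are all invented for illustration, not real EPSS feed data):

```python
# Flag vulnerabilities likely to be deprioritized by severity alone (low CVSS)
# yet predicted to be exploited soon (high EPSS) -- the "imbalance" cases.
vulns = [
    {"cve": "CVE-2019-0708", "cvss": 9.8, "epss": 0.952},  # high on both, obvious
    {"cve": "CVE-HYPO-0001", "cvss": 4.3, "epss": 0.900},  # the dangerous mismatch
    {"cve": "CVE-HYPO-0002", "cvss": 7.5, "epss": 0.020},  # severe but rarely exploited
]

def perception_imbalance(vulns, cvss_max=5.0, epss_min=0.5):
    """Return entries scored below cvss_max but above epss_min."""
    return [v for v in vulns if v["cvss"] < cvss_max and v["epss"] > epss_min]

for v in perception_imbalance(vulns):
    print(v["cve"])
```

The thresholds are a policy choice, not a standard; the point is simply that sorting a patch queue by CVSS alone hides the middle row entirely.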
On top of this advancement, consider also the recently released riskquant tool, which does the basic likelihood/severity mapping that probably has been debated in every disaster recovery planning audit meeting for the last 20 years, not to mention NIST SP 800-30.
…annualized loss is the mean magnitude averaged over the expected interval between events, which is roughly the inverse of the frequency (e.g. a frequency of 0.1 implies an event about every 10 years)…
Both tools are meant to help move from point scores of severity to trends of probabilistic likelihood, and both deserve a look in the near future.
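The quoted annualized-loss relationship reduces to a simple point estimate (riskquant itself models frequency and magnitude as distributions rather than single numbers; the dollar figures below are invented for illustration):

```python
def annualized_loss(frequency_per_year, mean_magnitude):
    """Expected annual loss: events per year times mean loss per event.

    A frequency of 0.1 implies an event roughly every 10 years, so one
    event's cost gets spread across that interval.
    """
    return frequency_per_year * mean_magnitude

# An incident expected about once a decade, costing $2M on average:
print(annualized_loss(0.1, 2_000_000))  # 200000.0
```

Even this back-of-envelope version shows why a frequent cheap event and a rare expensive one can land on the same line item in a budget.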
Some new analysis from the Alliance for Securing Democracy shows how this all works. Their “Hamilton Dashboard” highlights two important findings in a post titled “Why the Jeffrey Epstein saga was the Russian government-funded media’s top story of 2019”:
…few topics dominated the Russian government-funded media landscape quite like the arrest and subsequent suicide of billionaire financier and serial sex offender Jeffrey Epstein. In its year-end review, RT named the Epstein saga “2019’s major scandal,” and RT UK media personality George Galloway listed it as his number one “truth bomb” of the year (ahead of all the aforementioned events). Given the lack of any notable connection between Epstein and Russian interests, the focus on Epstein highlights the Kremlin’s clear prioritization of content meant to paint a negative image of the West rather than a positive image of Russia.
The first finding is a somewhat obvious one that Russia actively uses seeds that are meant to destroy positive imagery of the West (i.e. reverse the “Hope” campaigns that had resulted in President Obama). Epstein falls into this category.
The second finding is more subtle and implicit. Russia fails miserably to generate any positive image of itself. Every analysis I have read suggests Putin is both desperate and incompetent at forming a national identity, despite ruthlessly positioning himself as a long-term dictator with total control of all resources.
To put it in some context, Putin is a trained assassin, with little to no evidence he can develop a sense of national interest or convey any leadership story about belonging. In fact, the two positions may be contradictory (an inherent weakness of being an assassin), given that anyone forming a greater identity and purpose would be assassinated; a rise of identity could be seen as a potential threat to a man with an artificially inflated sense of self-worth above everyone else.
Anyway the graphic for the Hamilton Dashboard of the securing democracy site really caught my eye as a beautifully done rendition of the classic Soviet propaganda art that Putin seems incapable of achieving (a bit like doing the work for him):
For comparison, here’s some actual Soviet propaganda celebrating the creation of a powerful aviation industry (a suspicious claim, given the staggering death tolls in their airline: in 1973 alone, Soviet aviation had 27 incidents in which 780 people were killed):
This genre of “positive” spin poster of prosperity was backed by a complete suppression of any and all “unfavorable” communication that would challenge a progressive narrative (e.g. propaganda seeds of despair pushed by running a story about Epstein). Especially suppressed by the Russians were news of crimes against humanity (massacres, famines and energy/environmental disasters on Russian soil).
In other words, two diametrically opposed threads can be tracked in Cold War propaganda, posters of hope by the Soviets and counter-posters of despair by the CIA (the subject of Putin’s study while in the KGB).
Example of a Soviet poster pushing a positive narrative of prosperity from labor:
Contrarian example of a CIA poster pushing negative narratives (indirectly via Italian media platforms) of demoralizing labor brutality:
In the modern context, Putin is the typical self-promoting KGB agent trained in the art of copying everything the CIA did and using it for his own gain, and the Hamilton Dashboard gives clear evidence that he is pushing a despair campaign on today’s social media platforms. He doesn’t, however, seem able to come up with any positive sense of identity for his own nation.
And I have to say, despite me being a student of these communication methods (even having a degree related to their usage) my attempts at art in this domain simply pale in comparison to what the Hamilton Dashboard has come up with.
Hats off to them… although really I would expect some despair in their graphic if they wanted to play this game right. I mean, it seems a bit counterproductive to gift the enemy with banner-level positive glorification imagery that everyone sees when they come to study the enemy.
The same mistake probably should be said for me, in retrospect, as here’s my 2017 image that used to show up in many of my presentations:
It was a refresh of the 2016 rendition that was even more snarky about the U.S. being way ahead in kinetic yet woefully behind in the more pressing cyber domain…
Granted that the group is defined by a quantitative measure, it is not clear how qualitative measures (type of data) would change the discussion.
Applying qualitative measures doesn’t explain, for example, why three of the biggest breaches of all time (on the relatively new “best in business” identity platforms containing all information about a person) saw a CSO treated so incredibly lightly compared to the breach of the antique Equifax.
When you look for a correlation of CSO to massive breaches (both quantity and quality of data), all of the following track back to a single person who never did the job before (or even a similar job at a public or large organization) and arguably never should be allowed to attempt it again:
Yahoo 2013 (undisclosed until 2016) 3 billion breached
Facebook 2017-2019 over 600 million breached
Yahoo 2014 500 million breached
And yet nothing like the following seems to exist for Yahoo or Facebook…
We need to seriously consider whether an Equifax CSO was treated by social media pundits as an outlier and pilloried because she is a woman.
Why wasn’t the Yahoo/Facebook CSO scrutinized in a similar fashion, given his documented and obvious lack of qualifications in organizational leadership, let alone all the other CSOs within the “100-150 million tier” of breached companies?
On top of the massive confidentiality breaches under the Facebook CSO, his legacy also includes some of the biggest data integrity failures in history (50 million accounts breached, failure to block unfiltered harmful content, and alleged facilitation of political destabilization and atrocity crimes).
The bottom line is that one person attempted to be CSO twice, with no prior experience, and now has a track record of nearly 4 billion accounts compromised along with highly questionable disclosure practices. Yet this man seems to have escaped all the scrutiny applied to a woman.
Update Feb 3, 2020: Vice reports “penalties for data breaches and lax security are often too pathetic to drive meaningful change”.
Update Feb 10, 2020: While Facebook pivoted its CSO role into an external academic appointment at Stanford and continues to be embroiled in breaches, Equifax went the other direction and has stayed above board.
This morning, the DOJ identified the perpetrators who attacked Equifax in 2017. With breaches, identification of the attackers (or “attribution”) can be incredibly difficult—even impossible. Being able to share this information is the result of an enormous amount of work by authorities. We cannot thank the U.S. Department of Justice, Federal Bureau of Investigation (FBI), and so many others enough for their tireless efforts to achieve this result.
In parallel, Equifax has been transforming our security program—embedding security into our DNA by driving cultural change, implementing advanced controls tailored to the specific threats we face, achieving relevant certifications, and—just as importantly—sharing what we’ve learned with our customers, partners, and authorities.
Equifax partnered with authorities right from the beginning, and two-way information sharing remains a key part of our security program. The importance of partnering with authorities cannot be overstated. If your security team doesn’t know who to contact at the FBI and the Secret Service, change that today.
At Equifax, we are doing our best to make sure that this never happens again and to support others who want to learn from our experience.
Nothing even close to that for Facebook has appeared, only more breaches.
In multi-factor authentication systems, you typically are dealing with three data categories to establish uniqueness: something you know, something you have, or something you are.
While you can create knowledge, or create a thing to hold, it is the third category of “being” that often raises concern. There’s an inherent contradiction in treating a thing you expose everywhere, and that in theory never changes, as some kind of unique secret that can’t be replayed by someone else. The state of “being” tends to be inherently observable, or else you cease to exist.
For example you’ll be hard pressed to avoid leaving your fingerprints all over the place.
On top of the exposure contradiction of biometric secrecy, there also is a complexity and cost consideration in the biometric business: vendors lower challenge quality (matching a couple of spots instead of every detail across thousands of points) to protect profit margins, which is usually why we see decades of simple bypasses.
Nonetheless, despite the contradictions and bypasses, stark warnings about biometrics do appear. Consider the “lasting damage” claimed in an analysis of Digital ID applications:
In Zimbabwe, we spoke to people who did not know why the government was transitioning from the old metal ID to a biometric ID. There were theories about the ID system’s connection to national security and surveillance but little knowledge of the government’s intentions or the purpose of collecting biometric data (i.e., unique physical measurements such as fingerprints and iris scans)–which isn’t essential for providing legal identity. This type of data is forever associated with a person’s body, meaning that these systems can lead to privacy violations that cause lasting damage.
Meanwhile in RPI research news, we see the march of science challenging our sense of reality:
Scientists have created 3D-printed skin complete with blood vessels, in an advancement which they hope could one day prevent the body rejecting grafted tissue. The team of researchers at Rensselaer Polytechnic Institute in New York and Yale School of Medicine combined cells found in human blood vessels with other ingredients including animal collagen, and printed a skin-like material. After a few weeks, the cells started to form into vasculature. The skin was then grafted onto a mouse, and was found to connect with the animal’s vessels.
“We can sew pouches, create tubes, valves and perforated membranes,” says Nicholas L’Heureux, who led the work at the French National Institute of Health and Medical Research in Bordeaux. “With the yarn, any textile approach is feasible: knitting, braiding, weaving, even crocheting.”
This suggests we are entering an entirely new level of impersonation possibilities, both bad (unwanted) and good (wanted). You could knit a new set of fingerprints that even have blood flowing in them.
Somehow I doubt the scientists considered the impact of bypassing authentication systems as part of their research, yet we’re clearly approaching a time when you can really do an about face and give the finger to biometric authentication vendors.
It all raises the ancient philosophical question of whether quaint notions of authenticity are really something to hold a hard line on (e.g. authorizing authenticity policing), or whether we should instead focus on harms and virtue ethics. For a simple quiz: would you sooner criminalize actors for modern voice impersonations or for appearance impersonations?