Amazon’s About Face on GovCloud: “Physical Location Has No Bearing”

Amazon never seemed very happy about building a dedicated physical space (kind of the opposite of cloud) to comply with the security requirements of the US federal government.

AWS provides customers with the option to store their data in AWS GovCloud (US) managed solely by US Persons on US soil. AWS GovCloud (US) is Amazon’s isolated cloud region where accounts are only granted to US Persons working for US organizations.

That’s a very matter-of-fact statement, suggesting Amazon was doing what it had been told was necessary rather than what it wanted: to dismiss national security requirements as antiquated while it augurs a post-national, corporate-led system of control.

While that might have seemed speculative before now, Amazon management just released a whitepaper showing its true hand.

Alongside “Physical Location Has No Bearing,” the other two “realities” it claims are “Most Threats are Exploited Remotely” and “Manual Processes Present Risk of Human Error”…

I want you all to sit down, take a deep breath, and think about the logic of someone arguing physical location has no bearing on threats being exploited remotely.

First, vulnerabilities are exploited; threats exploit those vulnerabilities. Threats aren’t usually the ones being exploited via connectivity to the Internet (as much as we talk about hack back), vulnerabilities are. A minor thing, I know, yet it speaks to the author’s familiarity with the subject.

Second, if physical location truly had no bearing, the author of this paper would not have bothered with any “remotely” modifier. They would say vulnerabilities are being exploited. Full stop. Saying exploits come from remote locations is an admission that physical location is significant. Walls being vulnerable to cannonballs does not mean a cannon fired from 1,000 miles away is the same as one fired from 1 mile.

Third, and this is where it truly gets stupid, “Insider Threats Prevail as a Significant Risk” again uses a physical metaphor of “insider”. What does insider mean if not someone inside a space delimited by controls? That validates physical location having bearing on risk, again.

Fourth, this nonsense continues throughout the document. Page six advises, without any sense of irony, that “systems should be designed to limit the ‘blast radius’ of any intrusion so that one compromised node has minimal impact on any other node in the enterprise”. You read that right: a paper arguing that physical location has no bearing just told you that blast RADIUS is a critical component of safety from harm.

Come on.

This paper is full of amateur security mistakes made by someone with a distinctly political argument against government-based controls. In other words, Amazon’s anti-government paper is an extremist free-market missive targeting US-based ITAR and undermining national security, though it probably thought it was merely knocking down laws written in another physical location.

Something tells me the blast radius of this paper was seriously miscalculated before it was dropped. Little surprise, given how weak their grasp of safety controls is and how strong their desire to destroy barriers to Amazon’s entry.

SHA-1 versus SHA-2 performance tests

Moving to SHA256 has become an increasingly common topic ever since SHA-1 went through the bad news cycle of collisions being found faster than brute force. Even in cases where collisions are not relevant, such as authentication mechanisms (SCRAM), it feels like regulators will soon push a SHA-2 family hash as a minimum requirement. For most people that means moving to a 256-bit digest (SHA256) sooner rather than later.

Will SHA256 cause a performance issue when replacing SCRAM-SHA-1? It’s hard to say, given the many variables involved in testing, yet generally we expect around a 50% drop in throughput moving from SHA-1’s 160-bit digest to SHA-2’s 256-bit digest.

Assuming proper construction, a larger digest size means more possible outputs, which means strength through slowing down brute-force attempts. A cryptographic hashing algorithm is only as good as its ability to make truly unique, non-guessable hashes. So here’s a way for you to compare speeds:

ubuntu17:~$ openssl speed -multi 2 sha1 sha256

The 'numbers' are in 1000s of bytes per second processed
sha1            176979.32k   479049.54k  1017926.06k  1451719.34k  1652667.73k
sha256          144534.98k   302692.57k   576607.91k   697034.07k   740136.28k
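If you’d rather measure from inside an application than shell out to openssl, a rough sketch of the same comparison can be done with Python’s standard hashlib. The `throughput` function name and the buffer/round sizes here are my own choices, not anything from the openssl output above; results will vary by machine and OpenSSL build.

```python
import hashlib
import time

def throughput(name, size=1 << 20, rounds=20):
    """Rough throughput in bytes/second for hashing `rounds` buffers of `size` bytes."""
    data = b"\x00" * size
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(name, data).digest()
    elapsed = time.perf_counter() - start
    return rounds * size / elapsed

for name in ("sha1", "sha256"):
    print(f"{name}: {throughput(name) / 1e6:.1f} MB/s")
```

On most hardware without SHA extensions you should see SHA-256 land noticeably below SHA-1, broadly consistent with the openssl numbers above.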

Are Self-Organizing Maps Just an Exercise in Relativism?

The key to unlocking the power of a self-organizing map seems to be in this phrase by Diego Vicente:

…instead of a grid we declare a circular array of neurons, each node will only be conscious of the neurons in front of and behind it…

He offers the example of Uruguay:

traversing 734 Uruguay cities only 7.5% longer than the optimal in less than 25 seconds

In other words, each node should dispense with attempts to measure on an absolutist grid and instead calculate its own position relative to other nodes in the immediate vicinity. Like nomadism, but nodadism. Also like the difference between racing single-track on a mountain bike (stay ahead of the person behind, get in front of the person ahead) and racing road bikes on a highway (pre-calculate best times of pursuit, rest and attack).

Diego refers to a node’s immediate vicinity as “moderate exploitation of the local minima of each part” of a larger grid. That makes perfect sense for anyone familiar with navigating by asking around. Ask a local which way to the closest next town, if you can find a trusted local. Don’t bother asking them for a way to towns they never see, and be able to recognize the difference.
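The circular-array idea can be sketched in a few lines. This is my own minimal illustration of one update step of a 1-D ring SOM for TSP, not Vicente’s actual code: the function names (`som_step`, `closest_neuron`) and the learning-rate/radius defaults are assumptions for demonstration. Each sampled city pulls its winning neuron, and the neurons near the winner along the ring, toward it.

```python
import math
import random

def closest_neuron(neurons, city):
    """Index of the neuron nearest to the city (the 'winner')."""
    return min(range(len(neurons)),
               key=lambda i: (neurons[i][0] - city[0]) ** 2 +
                             (neurons[i][1] - city[1]) ** 2)

def som_step(cities, neurons, lr=0.8, radius=3):
    """One SOM update: sample a city, move the winner and its ring neighbors toward it."""
    cx, cy = random.choice(cities)
    w = closest_neuron(neurons, (cx, cy))
    n = len(neurons)
    for i, (x, y) in enumerate(neurons):
        d = abs(i - w)
        ring = min(d, n - d)  # distance along the circular array, not the plane
        h = math.exp(-ring * ring / (2.0 * radius * radius))
        neurons[i] = (x + lr * h * (cx - x), y + lr * h * (cy - y))
    return neurons
```

Note the neighborhood is computed on the ring index, not in the plane: each neuron only "cares" about neurons in front of and behind it, which is exactly the relativism being described.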

The more I research flaws in AI security, the more the world bifurcates around the grey, ill-defined transition between relative and absolute models of authentication and authorization. In that transition there are many exploits to be found.

The problem set here is what mathematicians call the National Travelling Salesman. Of course in security terms we should think of this as drone routes to destroy privacy (gather knowledge, if you prefer that angle) or as an estimate of the resources needed for a comprehensive integrity attack plan (defense, if you prefer that angle).

2018 AppSec California: “Unpoisoned Fruit: Seeding Trust into a Growing World of Algorithmic Warfare”

My latest presentation on securing big data was at the 2018 AppSec California conference:

When: Wednesday, January 31, 3:00pm – 3:50pm
Where: Santa Monica
Event Link: Unpoisoned Fruit: Seeding Trust into a Growing World of Algorithmic Warfare

Artificial Intelligence, or even just Machine Learning for those who prefer organic, is influencing nearly all aspects of modern digital life. Whether it be financial, health, education, energy, transit…emphasis on performance gains and cost reduction has driven the delegation of human tasks to non-human agents. Yet who in infosec today can prove agents worthy of trust? Unbridled technology advances, as we have repeatedly learned in history, bring very serious risks of accelerated and expanded humanitarian disasters. The infosec industry has been slow to address social inequalities and conflict that escalates on the technical platforms under their watch; we must stop those who would ply vulnerabilities in big data systems, those who strive for quick political (arguably non-humanitarian) power wins. It is in this context that algorithm security increasingly becomes synonymous with security professionals working to avert, or as necessary helping win, kinetic conflicts instigated by digital exploits. This presentation therefore takes the audience through technical details of defensive concepts in algorithmic warfare based on an illuminating history of international relations. It aims to show how and why to seed security now into big data technology rather than wait to unpoison its fruit.

Copy of presentation slides: UnpoisonedFruit_Export.pdf