
Mining and Visualizing YouTube Metadata for Threat Models

For several years I’ve been working on ways to pull metadata about online video viewers into threat models. For early-warning systems or general trend analysis, such metadata may be a useful input on what people are learning and thinking about.

Here’s a recent example of a relationship model between viewers that I just noticed:

A 3D map (from a company so clever they have managed to present software advertisements as legitimate TED talks) indicates that self-reporting young viewers care more about sewage and energy than they care about food or recycling.

The graph also suggests video viewers who self-identify as women watch videos on food rather than energy and sewage. Put young viewers and women viewers together and you have a viewing group that cares very little about energy technology.

I recommend you watch the video. However, I ask that you please first set up an account with a false gender to poison their data. No don’t do that. Yes, do…no don’t.

Actually what the TED talk reveals, if you will allow me to get meta for a minute, is that TED talks are often about a narrow band of topics despite claiming to host a variety of presenters. Agenda? There seem to be extremely few outliers or innovative subjects, according to the visualization. Perhaps this is a result of how the visual was created; the categories of talks were a little too broad. For example, if you present a TED talk on password management and sharks and I present on reversing hardware and sharks, that’s both just interest in nature, right?

The visualization obscures many of the assumptions made by those who painted it. And because it is a TED talk we give up 7 minutes of our lives yet never get details below the surface. Nonetheless, this type of analysis and visualization is where we are all headed. Below is an example from one of my past presentations, where I discussed capturing and showing high-level video metadata on attack types and specific vulnerabilities/tools. If you are not doing it already, you may want to think about this type of input when discussing threat models.

Here I show the highest concentrations of people in the world who are watching video tutorials on how to use SQL injection:
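The aggregation behind a map like that can be sketched in a few lines. This is a hypothetical illustration only: the record layout, topic labels, and country codes below are invented for the example, not taken from any real dataset.

```python
from collections import Counter

# Hypothetical video-metadata records: (video_topic, viewer_country).
# Field values are illustrative, not from a real feed.
views = [
    ("sql-injection", "US"), ("sql-injection", "IN"),
    ("sql-injection", "IN"), ("xss", "US"),
    ("sql-injection", "BR"), ("xss", "IN"),
]

def concentrations(records, topic):
    """Count views of one topic per country, highest first."""
    counts = Counter(country for t, country in records if t == topic)
    return counts.most_common()

print(concentrations(views, "sql-injection"))
# [('IN', 2), ('US', 1), ('BR', 1)]
```

In practice the interesting work is in collecting and normalizing the metadata; once that is done, ranking concentrations by country or topic is a simple group-and-count.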

#HeavyD and the Evil Hostess Principle

At this year’s ISACA-SF conference I will present how to stop malicious attacks against data mining and machine learning.

First, the title of the talk uses the tag #HeavyD. Let me explain why I think this is more than just a reference to the hip-hop artist or nuclear physics.

HeavyD
The Late Great Heavy D

Credit for the term goes to @RSnake and @joshcorman. It came up as we were standing on a boat and bantering about the need for better terms than “Big Data”. At first it was a joke and then I realized we had come upon a more fun way to describe the weight of big data security.

What is weight?

Way back in 2006 Gill gave me a very tiny and light racing life-jacket. I noted it was not USCG Type III certified (65+ newtons). It seemed odd to get race equipment that wasn’t certified, since USCG certification is required to race in US Sailing events. Then I found out the Europeans believe survival of sailors requires about 5 fewer newtons than the US authorities do.

Gill Buoyancy Aid
Awesome Race Equipment, but Not USCG Approved

That’s a tangent, but perhaps it helps frame a new discussion. We often think about controls to protect data sets of a certain size, which implies a measure at rest. Collecting every DB we can and putting it into a central Hadoop cluster: that’s large.

If we think about protecting large amounts of data relative to movement then newton units come to mind. Think of measuring “large” in terms of a control or countermeasure — the force required to accelerate one kilogram of mass at a rate of one meter per second squared:

Newtons

Hold onto that thought for a minute.

Second, I will present on areas of security research related to improving data quality. I hinted at this on Jul 15 when I tweeted about a quote I saw in Dark Reading.

argh! no, no, no. GIGO… security researcher claims “the more data that you throw at [data security], the better”.

After a brief discussion with that researcher, @alexcpsec, he suggested instead of calling it a “Twinkies flaw” (my first reaction) we could call it the Hostess Principle. Great idea! I updated it to the Evil Hostess Principle — the more bad ingredients you throw at your stomach, the worse. You are prone to “bad failure” if you don’t watch what you eat.
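A toy illustration of the Evil Hostess Principle (mine, not from the talk): more data helps only if it is healthy. Below, a handful of poisoned records drags a naive mean far from the truth, while a more robust statistic, the median, barely moves. The numbers are invented for the demonstration.

```python
from statistics import mean, median

clean = [10.0, 11.0, 9.0, 10.5, 9.5]          # true signal near 10
poisoned = clean + [1000.0, 1000.0, 1000.0]    # "more data", all junk

print(mean(clean), mean(poisoned))      # 10.0 vs 381.25
print(median(clean), median(poisoned))  # 10.0 vs 10.75
```

The naive estimator got worse as ingestion grew; eating more did not make it healthier.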

I said “bad failure” because failure is not always bad. It is vital to understand the difference between a plain “more” approach versus a “healthy” approach to ingestion. Most “secrets of success” stories mention that reaction speed to failure is what differentiates winners from losers. That means our failures can actually have very positive results.

Professional athletes, for example, are said to be the quickest at recovery. They learn and react far faster to failure than average. This Honda video interviews people about failure and they say things like: “I like to see the improvement and with racing it is very obvious…you can fail 100 times if you can succeed 1”

So (a) it is important to know the acceptable measure of failure. How much bad data are we able to ingest before we aren’t learning anymore — when do we stop floating? Why is 100:1 the right number?

And (b) an important consideration is how we define “improvement” versus just change. Adding ever more bad data (more weight), as we try to go faster and be lighter, could just be a recipe for disaster.

Given these two, #HeavyD is a presentation meant to explain and explore the many ways attackers are able to defeat highly-scalable systems that were designed to improve. It is a technical look at how we might set up positive failure paths (fail-safe countermeasures) if we intend to dig meaning out of data with untrusted origin.

Who do you trust?

Fast analysis of data can be hampered by slow processes to prepare that data, and using bad data can render the analysis useless. Projects I’ve seen lately have added weeks to get source material ready for ingestion: decreasing duplication, increasing completeness, and working toward some ground rule of accuracy and present value. Already I’m seeing entire practices and consultancies built around data normalization and cleaning.

Not only is this a losing proposition (e.g. we learned this already with SIEM), the very definition of big data makes this type of cleaning effort a curious goal. Access to unbounded volumes with unknown variety at increasing velocity…do you want to budget to “clean” it? Big data and the promise of ingesting raw source material seems antithetical to someone charging for complicated ground-rule routines and large cleaning projects.
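For a sense of what those “ground rule” routines look like, here is a minimal, hypothetical sketch of pre-ingestion cleaning: drop duplicates and drop incomplete records. The field names (`source`, `timestamp`, `value`) are illustrative, not from any real pipeline.

```python
def clean_records(records, required=("source", "timestamp", "value")):
    """Deduplicate records and keep only those with all required fields set."""
    seen, kept, dropped = set(), [], 0
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key in seen:                 # exact duplicate
            dropped += 1
            continue
        seen.add(key)
        if all(rec.get(f) is not None for f in required):
            kept.append(rec)            # complete record
        else:
            dropped += 1                # incomplete record
    return kept, dropped

raw = [
    {"source": "a", "timestamp": 1, "value": 5},
    {"source": "a", "timestamp": 1, "value": 5},    # duplicate
    {"source": "b", "timestamp": 2, "value": None}, # incomplete
]
kept, dropped = clean_records(raw)
print(len(kept), dropped)  # 1 2
```

Even this trivial version shows the problem: every rule is a judgment call someone must maintain, and at big-data volume and velocity the cleaning budget grows without bound.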

So we are searching for a new approach. Better risk management perhaps should be based on finding a measure of data linked to improvement, like Newtons required for a life-jacket or healthy ingredients required from Hostess.

Look forward to seeing you there.

Rosasolis

by Penguin Café Orchestra

In 1972 I was in the south of France. I had eaten some bad fish and was in consequence rather ill. As I lay in bed I had a strange recurring vision, there, before me, was a concrete building like a hotel or council block. I could see into the rooms, each of which was continually scanned by an electronic eye. In the rooms were people, every one of them preoccupied. In one room a person was looking into a mirror and in another a couple were making love but lovelessly, in a third a composer was listening to music through earphones. Around him there were banks of electronic equipment. But all was silence. Like everyone in his place he had been neutralized, made gray and anonymous. The scene was for me one of ordered desolation. It was as if I were looking into a place which had no heart. Next day when I felt better, I went to the beach. As I sat there a poem came to me. It began ‘I am the proprietor of the Penguin Cafe. I will tell you things at random.’

Does your company actually need a security department?

Gunnar Peterson prompted us yesterday in Dark Reading with this provocative question:

Does your company actually need a security department? If you are doing CYA instead of CIA, the answer is probably no

It’s easy to agree with Gunnar when you read his analysis. But his argument rests on a false dichotomy.

By standing up a choice between only awful, pointless policy wonks in management and brilliant diamonds found in engineering, he makes it easy to choose the way he wants you to. Choose diamonds, duh.

However, he does not explain why we should see security management as any more of a bureaucratic roadblock than any/all management, including the CEO. Review has value. Strategy has value. Sometimes.

The issue he really raises is one of business management. Reviewers have to listen to staff and work together with builders to make themselves (and therefore the overall product/output) valuable. This is not a simple decision, let alone a binary one, and Gunnar doesn’t explain how to get the best of both worlds.

A similar line of thinking can be found by looking across all lines of management. I found a recent discussion of the JAL recovery, for example, which addresses such issues, very insightful.

Note the title of the BBC article: “Beer with boss Kazuo Inamori helps Japan Airlines revival”

My simple philosophy is to make all the staff happy….not to make shareholders happy

Imagine grabbing a six-pack of beer, sitting down with engineering and talking about security strategy, performing a review together to make engineers happy. That probably would solve Gunnar’s concerns, right? Mix diamonds with beer and imagine the possibilities…

Inamori had interesting things to say about management’s hand in the financial crisis and risk failures in 2009, before he started the turnaround of JAL:

Top executives should manage their companies by earning reasonable profits through modesty, not arrogance, and taking care of employees, customers, business partners and all other stakeholders with a caring heart. I think it’s time for corporate CEOs of the capitalist society to be seriously questioned on whether they have these necessary qualities of leadership.

Gunnar says hold infosec managers accountable. Inamori says hold all managers accountable.

Only a few years later, JAL under Inamori’s leadership surged ahead in profit and is now close to leading the airline industry. What did Inamori build? He reviewed, nay audited, everything in order to help others build a better company.

An interesting tangent to this issue is a shift in IT management practices precipitated by cloud. Infrastructure as a Service (IaaS) options will force some to question whether they really need administrators within their IT department. Software as a Service (SaaS) may make some ask the same of developers. Once administrators and developers are gone, where is security?

Those who choose a public cloud model, and transition away from in-house resources, now also face a question of whether they should pursue a similar option for their security department. Technical staff often wear multiple hats but that option diminishes as cloud grows in influence.

In fact, once admin and dev technical staff are augmented or supplanted by cloud, the need for a security department to manage trust may be more necessary than ever. This is how the discrete need for a security department could in fact increase where none was perceived before — security as a service is becoming an interesting new development in cloud.

Bottom line: if you care about trust, whether you use shared staff or dedicated services, dedicated staff or shared services, you most likely need security. At the same time I agree with Gunnar that bad management is bad, so perhaps a simple solution is to build the budget to allow for a “beer” method of good security management.

I recommend an Audit Ale:

This style had all but disappeared by the 1970s, but originated in the 1400s to be consumed when grades were handed out at Oxford and Cambridge universities…. At 8 percent ABV, it has helped celebrate many a good “audit” or soften the blow of a bad one.