

2017 BSidesLV: Hidden Hot Battle Lessons of Cold War

My presentation on machine learning security opened the Ground Truth track at the 2017 BSidesLV conference:

When: Tuesday, July 25, 11:00 – 11:30
Where: Tuscany, Las Vegas
Cost: Free (as always!)
Event Link: Hidden Hot Battle Lessons of Cold War: All Learning Models Have Flaws, Some Have Casualties

In pursuit of realistic expectations for learning models, can we better prepare for adversarial environments by examining failures in the field?

All models have flaws, given the usual menu of problems with learning; it is the rapidly increasing risk of catastrophic failure that makes data robustness a far more immediate concern.

This talk pulls forward surprising and obscured learning errors from the Cold War to give context to modern machine learning successes, and to show how quickly things may fall apart in evolving domains of cyber conflict.

Copy of Presentation Slides: 2017BSidesLV.daviottenheimer.pdf (4 MB)

Full Presentation Video:

Prior BSides Presentations

Posted in History, Security.


This Day in History — 1886 Haymarket Affair

On this day in 1886 a Civil War veteran from Texas, Albert Richard Parsons, was accused along with several others of a conspiracy to murder in Chicago, Illinois.

Albert volunteered as a 13-year-old to serve in the Civil War under Texas units led by his brothers: first as infantry under his brother, a Confederate captain, next as a cannoneer, and finally as cavalry under the Confederate colonel William Henry Parsons.

After the American “slave-holders’ rebellion” was defeated, Albert studied in college and became a member of the “Radical Republicans” working in Central Texas on suffrage for Freedmen; he helped register blacks to vote despite threats of violence and exploitation by white supremacists. After marrying Lucy Parsons he traveled through the Midwest and settled in Chicago in 1873. In his “auto-biography” published by his wife he wrote…

I incurred thereby the hate and contumely of many of my former army comrades, neighbors, and the Ku Klux Klan. My political career was full of excitement and danger. I took the stump to vindicate my convictions.

In April 1886 Chicago saw dozens of protests calling for an eight-hour workday. Similar to his suffrage work for the Freedmen, Albert spoke and wrote about industrial labor conditions as a cause of voter disenfranchisement.

On the 1st of May tens of thousands walked off their jobs for better working conditions. After more protests on May 3rd, police responded to a large group by shooting wildly at protesters they called violent, killing at least one and injuring many others.

The following day, May 4th, Albert spoke at a meeting in Haymarket Square and left. Although Mayor Harrison had instructed the police to stay away, by the end of the day hundreds of armed officers marched in and demanded the protesters disperse. A bomb exploded and police again opened fire wildly into the crowds. Many were killed (seven police, a few protesters) and injured (sixty police, an unknown number of protesters).

Prominent speakers and writers such as Albert, despite not being present, were then charged with murder on the grounds that the protests could turn violent. After an unfair trial most of the accused were sentenced to death. One died violently in prison, judged a suicide. Then Albert and three others were hanged in 1887. He stood on the gallows and asked “Will I be allowed to speak, O men of America? Let me speak…” as the Sheriff opened the trap door to kill him.

Two other men had asked for and received commuted sentences. Six years later, in 1893, Illinois Governor Altgeld, known for his “patriotic love of liberty,” pardoned those convicted in the Haymarket Affair and called the unfair trial methods used a “menace to the Republic”.

Altgeld feared that when the law was bent to deprive immigrants of their civil liberties, it would later be bent to deprive native sons and daughters of theirs as well.

The City of Chicago Haymarket Memorial describes these events as “A Tragedy of International Significance”:

…those who organized and spoke at the meeting—and others who held unpopular political viewpoints—were arrested, unfairly tried and, in some cases, sentenced to death even though none could be tied to the bombing itself.

Police targeted and killed the leaders of a group who advocated for a better quality of life and voting rights in poor and immigrant homes. Although Albert had survived holding unpopular views in Texas, opposing the Ku Klux Klan after the Civil War, in Chicago he couldn’t escape being falsely accused of violence and sentenced to death for becoming popular.

Posted in History, Security.


Where does the expression 101 come from?

Lately I’ve been reading about Mission 101, where just a few thousand men in an Allied expeditionary force were sent into Ethiopia to defeat an Italian occupation force 100 times their size. It’s mentioned in books like “Fire in the Night: Wingate of Burma, Ethiopia, and Zion,” or the one with the more obvious title of…wait for it… “Mission 101”.

In late 1940 a group of five young Australian soldiers set out on a secret mission. Leading a small force of Ethiopian freedom fighters on an epic trek across the harsh African bush from the Sudan, the small incursion force entered Italian-occupied Ethiopia and began waging a guerilla war against the 250,000-strong Italian army. One of these men, Ken Burke, was Duncan McNab’s uncle.

The mission wasn’t actually about five Aussies, and I’ll get to that in a minute. The name seemed strange to me, in modern context, because we use 101 to imply some kind of basic level. Someone saying “Mission 101” today sounds almost exactly opposite to the task of taking a few soldiers into unknown territory against massive odds. That sounds really hard, right?

The top-ranked answer on a search engine is a Slate post called “101 101” that tells us the expression has, since the late 1920s, meant a starter course for beginning students:

Many freshmen will kick off their college careers with courses like Psychology 101, English 101, or History 101. When did introductory classes get their special number? In the late 1920s. The Oxford English Dictionary finds the first use of “101” as an introductory course number in a 1929 University of Buffalo course catalog. Colleges and universities began to switch to a three-digit course-numbering system around this time.

This is a wholly unsatisfying answer. It takes for granted that in the transition to a three-digit numbering system someone would only use 101, and not 100, 010, 001, 000 or any other possible combination.

Why 101?

I needed more. And the search engines were doing little to help, not least because searching for anything plus “101” gives you an introduction to that topic rather than the expression itself.

Instead I dug into the details mentioned in Fire in the Night. It says Mission 101 had a leader named Lt Col Dan Sandford, who served as an artillery officer during WWI and then as British Consul to Abyssinia before retiring there at the end of his term in office.

The key to this story is that British artillery in WWI commonly used a fuse numbered 101 on its shells. So why would Sandford name his mission after the fuse of an explosive shell? This is where the story gets interesting.

Britain had used its presence in Sudan to train Ethiopians as guerrilla forces after 1935, due to the war with Italy that year. When Italy invaded, Sandford was forced to escape back to England. In October 1939, as Britain saw war with Italy fast approaching, Sandford was sent into Khartoum to gather Ethiopian exiles, round up military supplies, and trigger a popular uprising inside Ethiopia to push the Italians out.

There you have it. Using a small fuse to trigger or initiate a much larger explosion makes sense, given the 101 fuse history. The best guess from a war museum file (found at © IWM MUN 2582) is that the term simply went from military use to more common public use in the 1920s as slang for “starting” or “initiating”.


The No 101 percussion fuze was introduced in 1916 and represented an attempt to overcome the No 100 fuze’s weak points. The 101 did away with the ‘cocked pellet’, used a fixed needle, and placed the detonator in the graze pellet. The needle was originally pressed in, but loose needles often caused premature explosions and a screwed-in needle was used after the Mark I. The 101 ran to five ‘Marks’ and was declared obsolete in 1921.

A good example of this public use and general knowledge comes from the Encyclopedia Britannica of 1922, which explained in detail on page 130 that the 101 is an example of the graze fuze for artillery shells. It even uses the phrase “this class”.

Posted in History, Security.


“My Lost Youth” by Longfellow

A curious thing about writing a poem is how it can suggest to the reader a topic while subtly communicating a tangent. Recently I was being peppered by questions of attribution in security that reminded me of Henry Wadsworth Longfellow’s poem:


		My Lost Youth

Often I think of the beautiful town
  That is seated by the sea;
Often in thought go up and down
The pleasant streets of that dear old town,
  And my youth comes back to me.
    And a verse of a Lapland song
    Is haunting my memory still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I can see the shadowy lines of its trees,
  And catch, in sudden gleams,
The sheen of the far-surrounding seas,
And islands that were the Hesperides
  Of all my boyish dreams.
    And the burden of that old song,
    It murmurs and whispers still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I remember the black wharves and the slips,
  And the sea-tides tossing free;
And Spanish sailors with bearded lips,
And the beauty and mystery of the ships,
  And the magic of the sea.
    And the voice of that wayward song
    Is singing and saying still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I remember the bulwarks by the shore,
  And the fort upon the hill;
The sunrise gun with its hollow roar,
The drum-beat repeated o'er and o'er,
  And the bugle wild and shrill.
    And the music of that old song
    Throbs in my memory still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I remember the sea-fight far away,
  How it thunder'd o'er the tide!
And the dead sea-captains, as they lay
In their graves o'erlooking the tranquil bay
  Where they in battle died.
    And the sound of that mournful song
    Goes through me with a thrill:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I can see the breezy dome of groves,
  The shadows of Deering's woods;
And the friendships old and the early loves
Come back with a Sabbath sound, as of doves
  In quiet neighbourhoods.
    And the verse of that sweet old song,
    It flutters and murmurs still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I remember the gleams and glooms that dart
  Across the schoolboy's brain;
The song and the silence in the heart,
That in part are prophecies, and in part
  Are longings wild and vain.
    And the voice of that fitful song
    Sings on, and is never still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

There are things of which I may not speak;
  There are dreams that cannot die;
There are thoughts that make the strong heart weak,
And bring a pallor into the cheek,
  And a mist before the eye.
    And the words of that fatal song
    Come over me like a chill:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

Strange to me now are the forms I meet
  When I visit the dear old town;
But the native air is pure and sweet,
And the trees that o'ershadow each well-known street,
  As they balance up and down,
    Are singing the beautiful song,
    Are sighing and whispering still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

And Deering's woods are fresh and fair,
  And with joy that is almost pain
My heart goes back to wander there,
And among the dreams of the days that were
  I find my lost youth again.
    And the strange and beautiful song,
    The groves are repeating it still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

This could happen anywhere, despite being about a specific place. Supposedly in 1855 he set out to describe an idyllic life in Portland, Maine. And yet what “beautiful town that is seated by the sea” does not have “pleasant streets” with “shadowy lines of its trees”? Is anyone surprised to hear of an old American shipping town with “black wharves and the slips” below “the fort upon the hill”?

Even more to the point, after a long vague description leaving the reader without any unique Portlandish details, the writer admits “there are things of which I may not speak”. Vague by design?

Ok, then, decoding the poem suggests a series of fleeting (pun not intended) feelings that defy direct attribution to a particular city. Action words give away bundles of emotion from a young boy excited by a generalized theory of adventure. No real location is meant, which leaves instead the importance of each stanza’s action line (the 7th); they seem to unlock a message about generic youthful repetitions: haunting, murmurs, singing, throbs, goes, flutters, sings, come, sighing, repeating. “Lost youth” indeed….

Posted in Poetry, Security.


Could truck drivers lose their jobs to robots?

Next time you bang on a vending machine for a bottle that refuses to fall into your hands, ask yourself if restaurants soon will have only robots serving you meals.

Maybe it’s true there is no future for humans in service industries. Go ahead, list them all in your head. Maybe problems robots have with simple tasks like dropping a drink into your hands are the rare exceptions and the few successes will become the norm instead.

One can see why it’s tempting to warn humans not to plan on expertise in “simple” tasks like serving meals or tending a bar: take the smallest machine successes and extrapolate them into grand future theories of massive gains, with no execution flaws or economics gone awry.

Just look at cleaning, sewing and cooking for examples of what will be, how entire fields have been completely automated with humans eliminated…oops, scratch that, I am receiving word from my urban neighbors they all seem to still have humans involved and providing some degree of advanced differentiation.

Maybe we should instead look at darling new startup Blue Apron, turning its back on automation, as it lures millions in investments to hire thousands of humans to generate food boxes. This is such a strange concept of progress and modernity to anyone familiar with TV dinners of the 1960s and the reasons they petered out.

Blue Apron’s meal kit service has had worker safety problems

Is it just me, or is anyone else suddenly nostalgic for that idyllic future of food automation (everything containerized, nothing blended) as suggested in the 1968 movie “2001”…we’re 16 years late now and I still get no straw for my fish container?

2001 prediction of food

I don’t even know what that box on the top right is supposed to represent. Maybe 2001 predicted chia seed health drinks.

Speaking of cleaning, sewing and cooking with robots…someone must ask at some point why so much of automation has focused on archetypal roles for women in American culture. Could driverless tech be targeting the “soccer-mom” concept along similar lines; could it arguably “liberate” women from a service expected of them under patriarchal roles?

Hold that thought, because instead right now I hear more discussion about a threat from robots replacing men in the over-romanticized male-dominated group of long-haul truckers. (Protip: women are now fast joining this industry)

Whether measuring accidents, inspections or compliance issues, women drivers are outperforming males, according to Werner Enterprises Inc. Chief Operating Officer Derek Leathers. He expects women to make up about 10 percent of the freight hauler’s 9,000 drivers by year’s end. That’s almost twice the national average.

The question is whether America’s daily drivers, many of whom are professionals in trucks, face machines making them completely redundant, just like vending machines eliminating bartenders.

It is very, very tempting to peer inside any industry and make overarching forecasts of how jobs simply could be lost to robots. Driving a truck on the open road, between straight lines, sounds so robotic already to those who don’t sit in the driver’s seat. Why has this not already been automated? That is the question we should be answering, rather than how soon it will happen.

Only at face value does driving present a bar so low (pun not intended) that machines easily could take it over today. Otto, of 1980 movie “Airplane!” fame, comes to mind for everyone I’m sure, sitting ready to be, um, “inflated” and take over any truck anywhere to deliver delicious TV dinners.

Otto smokes a cig

Yet when scratching at the barriers, maybe we find trucking is more complicated than this. Maybe there is more to human processes, something really intelligent, than meets the eye of a robotics advocate with no industry experience.

Systems that have to learn, true robots of the future, need to understand the totality of the environment they will operate within. And this raises the question of “knowledge” about all the tasks being replaced, not simply the ones we know of from watching Hollywood interpretations of the job. A common mistake is to underestimate knowledge and predict its replacement with an incomplete checklist of tasks believed to point in the general direction of success.

Once the environmental underestimation mistake is made, another mistake is to forecast cost improvements by accelerating checklists towards a goal of immediate decision capabilities. We have seen this with bank ATMs, which actually cost a lot of money to build and maintain and never achieved replacement of teller decision-trees; even more security risks and fraud were introduced, which required humans to develop checklists and perform menial tasks to maintain ATMs, which still haven’t achieved full capability. This arguably means new role creation is the outcome we should expect, mixed with a modest or even slow decline of jobs (less than 10% over 10 years).

Automation struggles to eliminate humans completely because of the above two problems (the need for common sense and foundations, and the need for immediate decision capabilities based on those foundations), and that’s before we even get to the need for memory, feedback loops and strategic thinking. The latter two are essential for robots replacing human drivers. Translation to automation brings out nuances in knowledge that humans excel at, as well as long-term thinking both forwards and backwards.

Machines are supposed to move beyond limited data sets and push minimum viable objectives above human performance, yet this presupposes success at understanding context. Complex streets and dangerous traffic situations are a very high bar, so high they may never be reached without principled human oversight (e.g. ethics). Without deep knowledge of trucking in its most delicate moments, the reality of driver replacement becomes augmentation at best. Unless the definition of “driver” changes, the goal posts are moved, and expectations for machines are measured far below full human capability and environmental possibility, we remain a long way from replacement.

Take, for example, the amount of time it takes to figure out the risk of killing someone on an urban street full of construction, school and loading zones. A human is not operating within a window 10 seconds from impact, because they typically aim to identify risks far earlier, avoiding catastrophes born from leaving decisions to the last seconds.

I’m not simply talking about control of the vehicle, incidentally (no pun intended); I also mean decisions about insurance policies and whether to stay and wait for law enforcement to show up. Any driver with rich experience behind the wheel could tell you this, and yet some automation advocates still haven’t figured that out, as they emphasize that the sub-second speed of their machines is all they need or want for making decisions, with no intention to obey human-imposed laws (hit-and-run incidents increased more than 40% after Uber was introduced to London, causing 11 deaths and 5,000 injuries per year).

For those interested in history, we’re revisiting many of the dilemmas posed the first time robotic idealism (automobiles) brought new threat models to our transit systems. Read a 10 Nov 1832 report on deaths caused by ride-share services, for example.

The Inquest Jury found a verdict of manslaughter against the driver,—a boy under fifteen years of age, and who appeared to have erred more from incapacity than evil design; and gave a deodand of 50l. against the horse and cabriolet, to mark their sense of the gross impropriety of the owner in having intrusted the vehicle to so young and inexperienced a person.

Young and inexperienced is exactly what even the best “learning” machines are today. Sadly, for most of 19th-century London, authorities showed remarkably little interest in shared-ride driving ability. Tests to protect the public from weak, incapacitated or illogical drivers of “public carriages” started only around 1896.

Finding the balance between insider expertise based on experience and the views of outsider novice learners is the dialogue playing out behind the latest NHTSA automation scale meant to help regulate safety on our roads. People already are asking whether the costs to develop autonomous vehicle systems that can go higher than “level three” (cede control under certain conditions and environments) are justified. That third level of automation is what outsiders typically argue to be the end of the road for truck drivers (as well as soccer moms).

The easy answer for the third level is no, it still appears to be years before we can SAFELY move above level three and remove humans in common environments (not least because hit-and-run murder economics heavily favor driverless fleets). Cost reductions today through automation make far more sense at the lower ends of the scale, where human driver augmentation brings sizable returns and far fewer chances of disaster or backlash. The real cost, error that takes human life, escalates quickly when we push into the full range of even the basic skills necessary to be a safe driver in every environment or on any street.

There also is a more complicated answer. By 2013 we saw Canadian trucks linking up on Alberta’s open roads and using simple caravan techniques. Repeating methods known for thousands of years, caravan theory significantly reduced driver fatigue and energy costs. Like a camel watching the tail of the one in front through a sandstorm…. In very limited private environments (e.g. competitions, ranches, mines, amusement parks) the cost of automation is lower and the benefits are realized early.

I say the answer is complicated because a level three autonomous vehicle still must have a human at the controls to take over, and I mean always. The NHTSA has not yet provided any real guidance on what that means in reality. How quickly a human must take over leaves a giant loophole in defining human presence. Could the driver be sleeping at the controls, watching a movie, or even reposing in the back seat?

The Interstate system in America has some very long-haul segments with traffic flowing at similar speeds and infrequent risk of sudden stops or obstructions. Tesla, in its typically dismissive-of-safety fashion despite (or maybe because of) its cars repeatedly failing and crashing, called major obstructions on highways a “UFO” frequency event.

Cruise control and lane-assist in pre-approved and externally monitored safe zones in theory could allow drivers to sleep as they operate, significantly reducing travel times. This is a car automation model actually proposed in the 1950s by GM and RCA, which predicted drivers would be replaced by 1974. What would the safe zone look like? Perhaps one human taking over the responsibility by using technology to link others, like a service or delegation of decision authority, similar to air traffic control (ATC) for planes. Tesla is doing this privately, for those in the know.

Ideally, if we care about freedom and privacy, let alone ethics, what we should be talking about for our future is a driver and a co-pilot taking seats in the front truck of a large truck caravan. Instead of six drivers for six trucks, for example, you could find two drivers “at the controls” of six trucks connected by automation technology. This is powerful augmentation for huge cost savings, without losing essential control of nuanced, expert decisions in myriad local environments.

This has three major benefits. First, it helps with the shortage of needed drivers, mentioned above as being filled by women. Second, it allows robot proponents to gather real-world data from safe open-road operations. Third, it opens the possibility of job expansion and transitions for truckers into drone operations.

On the other end of the spectrum from boring unobstructed open roads, in terms of driverless risks, are the suburban and urban hubs (warehouses and loading docks) that manage complicated truck transactions. Real human brain power still is needed for picking up and delivering the final miles unless we re-architect the supply chain. In a two-driver, six-truck scenario this means that after arriving at a hub, trucks return to a one-driver, one-truck relationship, like airplanes reaching an airport. Those trucks lacking human drivers at the controls would sit idle in a queue or…wait for it…be “remotely” controlled by the locally present human driver. The volume of trucks (read: percentage of “drones”) could increase significantly while the number of drivers needed might decline only slightly, a trade-off sketched below.
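To make the arithmetic of that two-driver, six-truck scenario concrete, here is a minimal back-of-envelope sketch in Python. Every number and function name is a hypothetical illustration of the caravan and hub hand-off described above, not a real staffing model.

    # Hypothetical back-of-envelope model of the caravan scenario above.
    # Assumptions (illustrative only): a platoon shares a small human crew on the
    # open road, while at a hub every truck in motion reverts to one human driver.
    from math import ceil

    def open_road_drivers(trucks, platoon_size, crew_per_platoon=2):
        """Drivers needed to move a fleet in platoons on uncongested highway."""
        return ceil(trucks / platoon_size) * crew_per_platoon

    def trucks_moving_at_hub(trucks, local_drivers):
        """At a hub each moving truck needs a human; the rest queue as 'drones'."""
        return min(trucks, local_drivers)

    fleet = 6
    print(open_road_drivers(fleet, platoon_size=6))      # 2 drivers haul 6 trucks
    print(trucks_moving_at_hub(fleet, local_drivers=2))  # 2 trucks move, 4 wait in queue

Even in this toy model the savings all come from the open road; at the hub the fleet still waits on human drivers, which is the augmentation point made above.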

Other situations still requiring human control tend to be bad weather or roads lacking clear lines and markings. Again this would simply mean humans at the controls of a lead vehicle in a caravan. Look at boats or planes again for comparison. Both have had autopilots far longer, at least for decades, and human oversight has yet to be cost-effectively eliminated.

Could autopilot be improved to avoid scenarios that lead to disaster, killing human passengers? Absolutely. Will someone pay for autopilots to avoid any such scenario? Hard to predict. For that question it seems planes are where we have the most data to review, because we treat their failures (likely due to the concentrated loss of life) with such care and concern.

There’s an old saw about Allied bombers of WWII being riddled with bullet holes yet still making it back to base. After much study the Air Force put together a presentation and told a crowded room that armor would be added to all the planes where concentrations of holes were found. A voice in the back of the crowd was heard asking “but shouldn’t you put the armor where the holes aren’t? Where are the holes on the planes that didn’t come back?”

It is time to focus our investments on collecting and understanding failures to improve the “driving algorithms” of humans, by enhancing the role of drivers. The truck driver already sits on a massively complex array of automation (engines and networks), so adding more doesn’t equate to removing the human completely. Humans still are better at complex situations such as power loss or reversion to manual controls during failures. Automation can make both the flat open straight lines into the sunset and the blizzard over a frozen surface more enjoyable, but only given no surprises.

Really we need to be talking about enhancing drivers: hauling more over longer distances with fewer interruptions. Beyond reduced fatigue and increased alertness with less strain, until systems move above level three the best-case use of automation is still augmentation.

Drivers could use machines to make ethical improvements to their complex logistics of delivery (lower emissions, increased fuel efficiency, reduced strain on the environment). If we eliminate drivers in our haste to replace them, we could see fewer benefits and achieve only the lowest forms of automation, the ones outsiders would be pleased with while those who know better roll their eyes in disappointment.

Or maybe Joe West & the Sinners put it best in their classic trucker tune “$2000 Navajo Rug”:

I’ve got my own chakra machine, darlin’,
made out of oil and steel.
And it gives me good karma,
when I’m there behind the wheel

Posted in History, Poetry, Sailing, Security.


Today in History: 1863 The Emancipation Proclamation

The American Civil War, initiated in April 1861 by those resorting to violence to prevent the abolition of slavery, was approaching its third year when President Lincoln made his famous Emancipation Proclamation:

…all persons held as slaves [within the rebellious states] are, and henceforward shall be free.

The exact phrasing, developed and revised over many prior months, targeted only the “rebellious” states; those that quit the Union with intent to preserve slavery (as detailed by their secession papers, which clearly listed keeping slaves as a most pressing concern). Thus a notable exception was granted for other areas, such as those already won by Union forces and no longer in the rebellion. His Proclamation applied neither to them nor to those states remaining loyal to the Union, even ones bordering Confederate territory. In other words, slaves were proclaimed free in those states still in rebellion, areas intent on dissolving a Union to preserve slavery.

The relevance of Lincoln’s words to states in rebellion obviously pivoted on the ability of the Union to reassert its authority within them. This essentially changed the tone of the conflict, as a mission of liberty from terror was proclaimed. After January 1, 1863 any Union recapture of territory meant Northern troops were said to be bringing freedom to Americans, ending a Southern reign of violence against those who had dared speak about abolition.

Moreover, this Proclamation literally allowed slaves in liberated territory to join the Union in its fight against the rebellion. Nearly two hundred thousand black men, freed from the injustices put upon them by Confederates, signed up to serve in federal Army and Navy forces to end the white police-state terror that secession had intended to preserve.

Posted in History.


Kiwicon X: Pwning ML for Fun and Profit

I presented “Pwning ML for Fun and Profit” at Kiwicon X:

When: Friday, Nov 18th, 2016 at 14:15
Where: Michael Fowler Centre, Wellington

Everyone is talking ML this and AI that as if they expect some kind of Utopian beast to be waiting just behind the next door, ready to whisk us all away to a technological paradise. It would seem the dire warnings of every Sci-Fi book and movie ever haven’t been enough to dissuade people from cooking statistics and math into a techno-optimist soup of dubious origin and expecting us to swallow it. Obviously security can’t just sit here and watch the catastrophes unfold. I aim to lay out some of the most awful yet still amusing examples of how and why we can and will break things. This presentation attempts to offer the audience a refreshingly realistic look at the terrible flaws in ML, the ease of altering outcomes and the dangers ahead.

Copy of Presentation: kiwiconX.daviottenheimer.pdf (5 MB)

Posted in History, Security.


“Using Behavioral Economics to Inform Policy” – Dr. Adam Oliver

Here is a copy for convenience of the 2014 presentation by Dr. Adam Oliver, a London School of Economics (LSE) Reader in Health Economics and Policy:

nudge.oliver
(PDF 2.1 MB)

Dr. Oliver is published in the areas of health equity, economic evaluation, risk and uncertainty, and the economics and policy of health care reform. The interface between economics and political science in health care policy analysis motivates his current research.

Since 2001 he has worked at the LSE, where currently he is Lecturer in Health Economics and Policy in the Department of Social Policy, and Senior Research Fellow and Deputy Director of LSE Health and Social Care, one of the largest research institutes in the health-related social sciences in Europe. A 2005-06 Commonwealth Fund Harkness Fellow in Health Care Policy, Dr. Oliver holds a doctorate in economics from the University of Newcastle and an MSc in health economics from the University of York. He was a 1995-97 Japanese Ministry of Education (Monbusho) Research Scholar, and is Founding Chair of the Health Equity Network, Founding Coordinator of the Preference Elicitation Group, and a former Coordinator of the European Health Policy Group. He also is Founding Co-Editor of the journal Health Economics, Policy and Law.

See also the forthcoming “Behavioural Public Policy“, an interdisciplinary and international peer-reviewed journal devoted to behavioural research and its relevance to public policy.

Posted in Security.


I should blog more again, I know

Thanks to everyone recently telling me they miss my blog posts. To be honest I have a queue of written posts unreleased because I went through one of those writing phases where erratic Tweets seemed like an easier public legacy than slogging through full paragraphs and illustrations. For six years I wrote a post every day, come rain or shine. Now it’s down to a post every harvest moon if that.

Of course in private I write for a living, typing up analysis and trying to expose fun facts of history for the many corporations building security teams and products. As the private load increased, my public writing necessarily changed to keep some distance. Balance wasn’t really expected.

Recent posts I have been asked in particular to release include a defense of backdoors, surveillance camera economics, and models for patching IoT…with a little elbow grease and a hammer applied to this rusty site, they may soon be appearing.

Posted in Security.


2016 BSidesLV Ground Truth Keynote: Great Disasters of Machine Learning

I presented the Ground Truth Keynote at the 2016 BSidesLV conference:

Great Disasters of Machine Learning: Predicting Titanic Events in Our Oceans of Math

When: Wednesday, August 3, 10:00 – 10:30
Where: Tuscany, Las Vegas
Cost: Free (as always!)
Event Link: ground-truth-keynote-great-disasters-of-machine-learning

This presentation sifts through the carnage of history and offers an unvarnished look at some spectacular past machine learning failures to help predict what catastrophes may lie ahead if we don’t step in. You’ve probably heard about a Tesla autopilot that killed a man…

Humans are great at failing. We fail all the time. Some might even say intelligence is so hard won and infrequent that we should dump as much data as possible into our “machines” and have them fail even faster on our behalf, at lower cost or to free us. What could possibly go wrong?

Looking at past examples, learning from failures, is meant to ensure we avoid their repetition. Yet it turns out that when we focus our machines narrowly, and ignore safety decision controls or similar values, we simply repeat avoidable disasters instead of achieving faster innovations. They say hindsight is 20/20, but you have to wonder if even our best machines need corrective lenses. At the end of the presentation you may find yourself thinking how easily we could have saved a Tesla owner’s life.

Copy of Presentation Slides: 2016BSidesLV.daviottenheimer.pdf (8 MB)

Full Presentation Video:

Some of my other BSides presentations:

Posted in History, Sailing, Security.