Category Archives: Poetry

A Time to Break Silence…Together

Dr. Martin Luther King’s speech of April 4, 1967: A Time to Break Silence

We must rapidly begin the shift from a “thing-oriented” society to a “person-oriented” society. When machines and computers, profit motives and property rights are considered more important than people, the giant triplets of racism, materialism, and militarism are incapable of being conquered.

And a remix by Nordic Giants

DARPA’s Heraclitus Drone

Heraclitus of Ephesus (c. 535-475 BCE) famously wrote about the ephemeral nature of knowledge, let alone existence:

“It is impossible to step into the same river twice.”

“We both step and do not step in the same rivers. We are and are not.”

“Those who step into the same rivers, different and different waters flow.”

His poetry is considered a powerful influence on philosophers for many centuries afterward.

Today DARPA is sewing these old philosophical threads into physical designs for its Fast Lightweight Autonomy (FLA) program, as Kelsey Atherton writes in C4ISRNET:

Every map is an outdated map. Buildings change, people relocate, and what was accurate a decade ago may mean nothing to someone on patrol today.

One quote in Kelsey’s article that stood out to me is from the FLA program manager, who says he sees cost deflation as the real driver for autonomy.

We don’t want to deploy a world-class FPV racer with every search and rescue team

This brings to mind a story from this past January, only recently published by the sensationalist tabloid Daily Star. It describes the high cost of a British-led assassination plan: during a raid the targets retreated into a cave network, and a highly trained SAS soldier went in to finish the mission.

“It was a brutal fight to the death. The SAS sergeant emerged from the tunnel half an hour later covered in blood, both his own and those of the men he had killed.”

The soldier was unable to speak for at least an hour because he was so traumatised.

He later said the air was so thin it was almost impossible to breathe.

The SAS man, an Iraq veteran, later said that the 30 minutes he spent in the tunnels was the hardest of his entire military career.

Deploying world-class talent has prohibitive cost, which is exactly why targets retreat into tunnels that force world-class talent to be deployed. Drones that can inexpensively map high-risk topography clearly change the equation in favor of those in pursuit of targets, whether for rescue or the opposite.

There are two big wrinkles, however, in the development of any sort of Heraclitus drone to keep humans abreast of the latest changes in the environments being stepped into.

First, communications are imperfect in availability. A recent TeamWerx “challenge” to develop an RF amplifier-repeater highlights the opportunity to improve ad hoc networks so drones can operate through difficult and closed terrain.

SOF operators have a need for rapidly deployable, interconnected repeaters that can transmit and receive a 1775-2250 MHz range of RF energy that may include near-real time video, audio, and modulated digital data messages. The system of interconnected repeaters should be easily extendable by inserting additional repeaters.
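A back-of-envelope sketch of why a chain of repeaters is needed at all: free-space path loss grows with distance and frequency, so even in ideal conditions (and tunnels or jungle are far worse than ideal) a single radio hop runs out of link budget. The transmit power and receiver sensitivity below are my own illustrative assumptions, not figures from the challenge.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB (idealized; tunnels and jungle are far worse)."""
    c = 299_792_458.0
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Hypothetical link budget: 30 dBm transmit power, -90 dBm receiver
# sensitivity, so each hop tolerates about 120 dB of loss before the
# chain needs another repeater inserted.
budget_db = 30 - (-90)
for d in (100, 1_000, 10_000, 20_000):
    loss = fspl_db(d, 2000e6)  # mid-band of the 1775-2250 MHz range
    status = "OK" if loss < budget_db else "insert repeater"
    print(f"{d:>6} m: {loss:6.1f} dB path loss -> {status}")
```

The “easily extendable” requirement falls out of this arithmetic: each inserted repeater resets the budget for another hop.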

I can imagine here is where the DARPA folks would say we don’t want to deploy a world-class radio technician with every search and rescue team.

Second, communications are imperfect in integrity. Attackers, or even just natural interference, can degrade a signal to levels that perhaps should not be trusted. Yet who knows when that point is crossed, and will they know soon enough? Unlike availability, where degradation shows up as outright loss, subtle quality changes are a more difficult metric to monitor.
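A minimal sketch of what such an integrity metric might look like: smooth the raw link quality and flag the moment it crosses a trust floor. The floor and smoothing constant here are arbitrary illustration values, not any fielded standard, and the sketch shows exactly the problem described above: the smoothed metric flags distrust well after the raw signal has already degraded.

```python
def trust_level(snr_samples, floor_db=10.0, alpha=0.3):
    """Exponentially weighted moving average of link quality (SNR in dB).

    Returns the index of the first sample where the smoothed signal
    drops below the trust floor, or None if it never does. Both the
    floor and the smoothing constant are arbitrary illustration values.
    """
    ewma = None
    for i, snr in enumerate(snr_samples):
        ewma = snr if ewma is None else alpha * snr + (1 - alpha) * ewma
        if ewma < floor_db:
            return i  # the point where integrity should no longer be assumed
    return None

# A link that quietly degrades: raw samples cross the floor at index 2,
# but the smoothed metric only flags distrust a few samples later.
print(trust_level([20, 20, 5, 5, 5, 5]))  # -> 5
```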

A green beret recently related a story to me from his training in the 1960s, where two teams walked through nearly impenetrable jungle. They proceeded in separate columns, with extreme caution, one led by a “local” guide.

Despite all the training and signals, the column without a guide in front tripped a mock trigger for mines. When asked why he didn’t warn the second column, the guide apparently replied “why should I?”

The green beret told me “from that point forward we had a different trust.” Here is where I add the modern modifier: he would have a different trust in the quality of information from commodity drones, which takes us back to the old concept of “we both step and do not step in the same rivers.”

The Psychology of “Talking Paper”

Sometime in the late 1980s I managed to push a fake “bomb” screen to Macintosh users in networked computer labs. It looked something like this:

There wasn’t anything wrong with the system. I simply wanted the users in a remote room to restart because I had pushed an “extension” to their system that allowed me remote control of their speaker (and microphone). They always pushed the restart button. Why wouldn’t they?

Once they restarted I was able to speak to them from my microphone. In those days it was mostly burps and jokes, mischievous stuff, because it was fun to surprise users and listen to their reactions.

A few years later, as I was burrowing around in the dusty archives of the University of London (a room which sadly no longer exists, having been replaced by computer labs, though Duke University holds a huge collection), I found vivid color leaflets that had been dropped by the RAF into occupied Ethiopia during WWII.

There in my hand was the actual leaflet credited with psychological operations “101”, and so a color copy soon became a page in my graduate degree thesis. In my mind these two experiences were never far apart.

For years afterwards when I would receive a greeting card with a tiny speaker and silly voice or song, of course I would take it apart and look for ways to re-purpose or modify its message. Eventually I had a drawer full of these tiny “talking paper” devices, ready to deploy, and sometimes they would end up in a friend’s book or bag as a surprise.

One of my favorite “talking” devices had a tiny plastic box that upon sensing light would yodel “YAHOOOOOO!” I tended to leave it near my bed so I could be awakened by yodeling, to set the tone of the new day. Of course when anyone else walked into the room and turned on the light their eyes would grow wide and I’d hear the invariable “WTF WAS THAT?”

Fast forward to today and I’m pleased to hear that “talking paper” has become a real security market, getting thinner, lighter and more durable. In areas of the world Facebook doesn’t reach, military researchers still believe psychological manipulation requires deploying their own small remote platforms. Thus talking paper is as much a thing as it was in the 1940s or before, and we’re seeing cool mergers of physical and digital formats, which I tried to suggest in my presentation slides from recent years:

While some tell us the market shift from printed leaflets to devices that speak is a matter of literacy, we all can see clearly in this DefenseOne story how sounds can be worth a thousand words.

Over time, the operation had the desired effect, culminating in the defection of Michael Omono, Kony’s radio telephone operator and a key intelligence source. Army Col. Bethany C. Aragon described the operation from the perspective of Omono.

“You are working for a leader who is clearly unhinged and not inspired by the original motivations that people join the Lord’s Resistance Army for. [Omono] is susceptible. Then, as he’s walking through the jungle, he hears [a recording of] his mother’s voice and her message begging him to come home. He sees leaflets with his daughter’s picture begging him to come home, from his uncle that raised him and was a father to him.”

Is anyone else wondering if Omono had been a typewriter operator instead of radio telephone whether the US Army could have convinced him via print alone?

Much of the story about the “new” talking paper technology is market speculation, like allowing recipients to be targeted by biometrics. Of course if you want a message to spread widely and quickly via sound (“as he’s walking through the jungle”), using biometric authenticators to prevent it from spreading at all makes basically no sense.

On the other hand (pun not intended) if a written page will speak only when a targeted person touches it, that sounds like a great way to evolve the envelope/letter boundary concepts. On the paper is the address of the recipient, which everyone and anyone can see, much like how an email address or phone number sits exposed on encrypted messaging. Only when the recipient touches it or looks at it, and their biometrics are verified, does it let out the secret “YAHOOOO!”
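A toy sketch of that envelope/letter gate, under loud assumptions: real biometric matching is fuzzy template comparison, not exact-byte hashing, and every name here is hypothetical. The point is only the structure: the address (a hash tag) is visible to everyone, while the message fires only for the addressed recipient.

```python
import hashlib

def make_card(recipient_biometric: bytes, message: str):
    """A 'talking paper' gate: the stored tag is public, the message is not.

    Exact-byte hashing is a stand-in for real (fuzzy) biometric matching;
    it only illustrates the envelope/letter boundary logic.
    """
    tag = hashlib.sha256(recipient_biometric).hexdigest()  # the visible "address"

    def on_touch(presented: bytes):
        if hashlib.sha256(presented).hexdigest() == tag:
            return message  # speaker fires only for the addressed recipient
        return None         # the card stays silent for everyone else

    return on_touch

card = make_card(b"recipient-thumbprint", "YAHOOOO!")
print(card(b"recipient-thumbprint"))  # -> YAHOOOO!
print(card(b"someone-else"))          # -> None
```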

Holding Facebook Executives Responsible for Crimes

Interesting write-up on Vox about the political science of Facebook, and how it has been designed to avoid governance and accountability:

…Zuckerberg claims that precisely because he’s not responsible to shareholders, he is able instead to answer his higher responsibility to “the community.”

And he’s very clear, as he says in interview after interview and hearing after hearing, that he takes this responsibility very seriously and is very sorry for having violated it. Just as he’s been sorry ever since he was a first-year college student. But he’s never actually been held responsible.

I touched on this in my RSA presentation about driverless cars several years ago. My take was that Facebook management is a regression of many centuries (pre-Magna Carta). Their primitive risk control concepts, and their executive team’s opposition to modern governance, put us all on a path toward global catastrophe from automation systems, akin to the Cuban Missile Crisis.

I called it “Dar-Win or Lose: The Anthropology of Security Evolution”.

It is not one of my most watched videos, that’s for certain.

It seems the talks over the years where I frame code as poetry, with AI security failures as an ugly performance, garner far more attention. If the language all programmers know best is profanity, who will teach their machines manners?

Meanwhile, my references to human behavior science to describe machine learning security, such as this one about anthropology, fly below radar (pun intended).

Amazon’s About Face on GovCloud: “Physical Location Has No Bearing”

Amazon never seemed very happy about building a dedicated physical space, kind of the opposite of cloud, to achieve compliance with security requirements of the US federal government.

AWS provides customers with the option to store their data in AWS GovCloud (US) managed solely by US Persons on US soil. AWS GovCloud (US) is Amazon’s isolated cloud region where accounts are only granted to US Persons working for US organizations.

That’s a very matter-of-fact statement, suggesting Amazon was doing what it had been told was necessary rather than what it wanted (to dismiss national security requirements as antiquated while boring toward a post-national, corporate-led system of control).

While that might have seemed speculative before now, Amazon management just released a whitepaper showing its true hand.

The other two “realities” are “Most Threats are Exploited Remotely” and “Manual Processes Present Risk of Human Error”…

I want you all to sit down, take a deep breath, and think about the logic of someone arguing physical location has no bearing on threats being exploited remotely.

First, vulnerabilities are exploited. Threats exploit those vulnerabilities. Threats aren’t usually the ones being exploited via connectivity to the Internet (as much as we talk about hack-back); vulnerabilities are. A minor thing, I know, yet it speaks to the author’s familiarity with the subject.

Second, if physical location truly had no bearing, the author of this paper would not have bothered with any “remotely” modifier. They would say vulnerabilities are being exploited. Full stop. To say exploits come from remote locations is to admit the significance of physical location. Walls being vulnerable to cannonballs does not mean cannons fired from 1,000 miles away are the same as from 1 mile.

Third, and this is where it truly gets stupid, “Insider Threats Prevail as a Significant Risk” again uses a physical metaphor of “insider”. What does insider mean if not someone inside a space delimited by controls? That validates physical location having bearing on risk, again.

Fourth, this nonsense continues throughout the document. Page six advises, without any sense of irony, that “systems should be designed to limit the ‘blast radius’ of any intrusion so that one compromised node has minimal impact on any other node in the enterprise”. You read that right: a paper arguing that physical location has no bearing just told you that blast RADIUS is a critical component of safety from harm.

Come on.

This paper is full of amateur security mistakes made by someone with a distinctly political argument to make against government-based controls. In other words, Amazon’s anti-government paper is an extremist free-market missive targeting US-based ITAR and undermining national security, although it probably thought it was knocking down laws written in another physical location.

Something tells me the blast radius of this paper was seriously miscalculated before it was dropped. Little surprise, given how weak their grasp of safety controls is and how strong their desire to destroy barriers to Amazon’s entry.

The Chaos

by Dr. Gerard Nolst Trenité
(Netherlands, 1870-1946)

Dearest creature in creation,
Study English pronunciation.
I will teach you in my verse
Sounds like corpse, corps, horse, and worse.
I will keep you, Suzy, busy,
Make your head with heat grow dizzy.
Tear in eye, your dress will tear.
So shall I! Oh hear my prayer.
Pray, console your loving poet,
Make my coat look new, dear, sew it!

Just compare heart, beard, and heard,
Dies and diet, lord and word,
Sword and sward, retain and Britain.
(Mind the latter, how it’s written.)
Now I surely will not plague you
With such words as plaque and ague.
But be careful how you speak:
Say break and steak, but bleak and streak;
Cloven, oven, how and low,
Script, receipt, show, poem, and toe.

Hear me say, devoid of trickery,
Daughter, laughter, and Terpsichore,
Typhoid, measles, topsails, aisles,
Exiles, similes, and reviles;
Scholar, vicar, and cigar,
Solar, mica, war and far;
One, anemone, Balmoral,
Kitchen, lichen, laundry, laurel;
Gertrude, German, wind and mind,
Scene, Melpomene, mankind.

Billet does not rhyme with ballet,
Bouquet, wallet, mallet, chalet.
Blood and flood are not like food,
Nor is mould like should and would.
Viscous, viscount, load and broad,
Toward, to forward, to reward.
And your pronunciation’s OK
When you correctly say croquet,
Rounded, wounded, grieve and sieve,
Friend and fiend, alive and live.

Ivy, privy, famous; clamour
And enamour rhyme with hammer.
River, rival, tomb, bomb, comb,
Doll and roll and some and home.
Stranger does not rhyme with anger,
Neither does devour with clangour.
Souls but foul, haunt but aunt,
Font, front, wont, want, grand, and grant,
Shoes, goes, does. Now first say finger,
And then singer, ginger, linger,
Real, zeal, mauve, gauze, gouge and gauge,
Marriage, foliage, mirage, and age.

Query does not rhyme with very,
Nor does fury sound like bury.
Dost, lost, post and doth, cloth, loth.
Job, nob, bosom, transom, oath.
Though the differences seem little,
We say actual but victual.
Refer does not rhyme with deafer.
Foeffer does, and zephyr, heifer.
Mint, pint, senate and sedate;
Dull, bull, and George ate late.
Scenic, Arabic, Pacific,
Science, conscience, scientific.

Liberty, library, heave and heaven,
Rachel, ache, moustache, eleven.
We say hallowed, but allowed,
People, leopard, towed, but vowed.
Mark the differences, moreover,
Between mover, cover, clover;
Leeches, breeches, wise, precise,
Chalice, but police and lice;
Camel, constable, unstable,
Principle, disciple, label.

Petal, panel, and canal,
Wait, surprise, plait, promise, pal.
Worm and storm, chaise, chaos, chair,
Senator, spectator, mayor.
Tour, but our and succour, four.
Gas, alas, and Arkansas.
Sea, idea, Korea, area,
Psalm, Maria, but malaria.
Youth, south, southern, cleanse and clean.
Doctrine, turpentine, marine.

Compare alien with Italian,
Dandelion and battalion.
Sally with ally, yea, ye,
Eye, I, ay, aye, whey, and key.
Say aver, but ever, fever,
Neither, leisure, skein, deceiver.
Heron, granary, canary.
Crevice and device and aerie.

Face, but preface, not efface.
Phlegm, phlegmatic, ass, glass, bass.
Large, but target, gin, give, verging,
Ought, out, joust and scour, scourging.
Ear, but earn and wear and tear
Do not rhyme with here but ere.
Seven is right, but so is even,
Hyphen, roughen, nephew Stephen,
Monkey, donkey, Turk and jerk,
Ask, grasp, wasp, and cork and work.

Pronunciation — think of Psyche!
Is a paling stout and spikey?
Won’t it make you lose your wits,
Writing groats and saying grits?
It’s a dark abyss or tunnel:
Strewn with stones, stowed, solace, gunwale,
Islington and Isle of Wight,
Housewife, verdict and indict.

Finally, which rhymes with enough —
Though, through, plough, or dough, or cough?
Hiccough has the sound of cup.
My advice is to give up!!!

Originally transcribed by Pete Zakel.

“My Lost Youth” by Longfellow

A curious thing about writing a poem is how it can suggest to the reader a topic while subtly communicating a tangent. Recently I was being peppered by questions of attribution in security that reminded me of Henry Wadsworth Longfellow’s poem:

		My Lost Youth

Often I think of the beautiful town
  That is seated by the sea;
Often in thought go up and down
The pleasant streets of that dear old town,
  And my youth comes back to me.
    And a verse of a Lapland song
    Is haunting my memory still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I can see the shadowy lines of its trees,
  And catch, in sudden gleams,
The sheen of the far-surrounding seas,
And islands that were the Hesperides
  Of all my boyish dreams.
    And the burden of that old song,
    It murmurs and whispers still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I remember the black wharves and the slips,
  And the sea-tides tossing free;
And Spanish sailors with bearded lips,
And the beauty and mystery of the ships,
  And the magic of the sea.
    And the voice of that wayward song
    Is singing and saying still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I remember the bulwarks by the shore,
  And the fort upon the hill;
The sunrise gun with its hollow roar,
The drum-beat repeated o'er and o'er,
  And the bugle wild and shrill.
    And the music of that old song
    Throbs in my memory still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I remember the sea-fight far away,
  How it thunder'd o'er the tide!
And the dead sea-captains, as they lay
In their graves o'erlooking the tranquil bay
  Where they in battle died.
    And the sound of that mournful song
    Goes through me with a thrill:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I can see the breezy dome of groves,
  The shadows of Deering's woods;
And the friendships old and the early loves
Come back with a Sabbath sound, as of doves
  In quiet neighbourhoods.
    And the verse of that sweet old song,
    It flutters and murmurs still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

I remember the gleams and glooms that dart
  Across the schoolboy's brain;
The song and the silence in the heart,
That in part are prophecies, and in part
  Are longings wild and vain.
    And the voice of that fitful song
    Sings on, and is never still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

There are things of which I may not speak;
  There are dreams that cannot die;
There are thoughts that make the strong heart weak,
And bring a pallor into the cheek,
  And a mist before the eye.
    And the words of that fatal song
    Come over me like a chill:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

Strange to me now are the forms I meet
  When I visit the dear old town;
But the native air is pure and sweet,
And the trees that o'ershadow each well-known street,
  As they balance up and down,
    Are singing the beautiful song,
    Are sighing and whispering still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

And Deering's woods are fresh and fair,
  And with joy that is almost pain
My heart goes back to wander there,
And among the dreams of the days that were
  I find my lost youth again.
    And the strange and beautiful song,
    The groves are repeating it still:
    'A boy's will is the wind's will,
And the thoughts of youth are long, long thoughts.'

This could happen anywhere, despite being about a specific place. Supposedly in 1855 he set out to describe an idyllic life in Portland, Maine. And yet what “beautiful town that is seated by the sea” does not have “pleasant streets” with “shadowy lines of its trees”? Is anyone surprised to hear of an old American shipping town with “black wharves and the slips” below “the fort upon the hill”?

Even more to the point, after a long vague description leaving the reader without any unique Portlandish details, the writer admits “there are things of which I may not speak”. Vague by design?

Ok, then, decoding the poem suggests a series of fleeting (pun not intended) feelings that defy direct attribution to a particular city. Action words give away bundles of emotion from a young boy excited by a generalized theory of adventure. No real location is meant, which leaves instead the importance of each stanza’s action line (the seventh); these seem to unlock a message about generic youthful rotations: haunting, murmurs, singing, throbs, goes, flutters, sings, come, sighing, repeating. “Lost youth” indeed….

Could truck drivers lose their jobs to robots?

Next time you bang on a vending machine for a bottle that refuses to fall into your hands, ask yourself if restaurants soon will have only robots serving you meals.

Maybe it’s true there is no future for humans in service industries. Go ahead, list them all in your head. Maybe problems robots have with simple tasks like dropping a drink into your hands are the rare exceptions and the few successes will become the norm instead.

One can see why it’s tempting to warn humans not to plan on expertise in “simple” tasks like serving meals or tending a bar…take the smallest machine successes and extrapolate into great future theories of massive gains and no execution flaws or economics gone awry.

Just look at cleaning, sewing and cooking for examples of what will be, how entire fields have been completely automated with humans eliminated…oops, scratch that, I am receiving word from my urban neighbors they all seem to still have humans involved and providing some degree of advanced differentiation.

Maybe we should instead look at darling new startup Blue Apron, turning its back on automation, as it lures millions in investments to hire thousands of humans to generate food boxes. This is such a strange concept of progress and modernity to anyone familiar with TV dinners of the 1960s and the reasons they petered out.

Blue Apron’s meal kit service has had worker safety problems

Just me, or is anyone else suddenly nostalgic for that idyllic future of food automation (everything containerized, nothing blended) as suggested in a 1968 movie called “2001”? We’re 16 years late now and I still get no straw for my fish container.

2001 prediction of food

I don’t even know what that box on the top right is supposed to represent. Maybe 2001 predicted chia seed health drinks.

Speaking of cleaning, sewing and cooking with robots…someone must ask at some point why so much of automation has focused on archetypal roles for women in American culture. Could driverless tech be targeting the “soccer-mom” concept along similar lines; could it arguably “liberate” women from a service demanded by patriarchal roles?

Hold that thought, because instead right now I hear more discussion about a threat from robots replacing men in the over-romanticized male-dominated group of long-haul truckers. (Protip: women are now fast joining this industry)

Whether measuring accidents, inspections or compliance issues, women drivers are outperforming males, according to Werner Enterprises Inc. Chief Operating Officer Derek Leathers. He expects women to make up about 10 percent of the freight hauler’s 9,000 drivers by year’s end. That’s almost twice the national average.

The question is whether America’s daily drivers, many of them professionals in trucks, face machines making them completely redundant, just as vending machines were supposed to eliminate bartenders.

It is very, very tempting to peer inside any industry and make overarching forecasts of how jobs simply could be lost to robots. Driving a truck on the open road, between straight lines, sounds so robotic already to those who don’t sit in the driver’s seat. Why has this not already been automated is the question we should be answering, rather than how soon it will happen.

Only at face value does driving present a bar so low (pun not intended) that machines could easily take it over today. Otto, of 1980 movie “Airplane!” fame, comes to mind for everyone I’m sure, sitting ready to be, um, “inflated” and take over any truck anywhere to deliver delicious TV dinners.

Otto smokes a cig

Yet when scratching at the barriers, maybe we find trucking is more complicated than this. Maybe there is more to the human processes, something really intelligent, than meets a non-industry-specific robotic advocate’s eye?

Systems that have to learn, true robots of the future, need to understand the totality of the environment they will operate within. And this raises the question of “knowledge” about all the tasks being replaced, not simply the ones we know from watching Hollywood interpretations of the job. A common mistake is to underestimate knowledge and predict its replacement with an incomplete checklist of tasks believed to point in the general direction of success.

Once the environmental underestimation mistake is made, another mistake is to forecast cost improvements by accelerating checklists toward a goal of immediate decision capabilities. We have seen this with bank ATMs, which actually cost a lot of money to build and maintain and never achieved replacement of teller decision-trees; even more security risks and fraud were introduced, requiring humans to develop checklists and perform menial tasks to maintain ATMs, which still haven’t achieved full capability. This arguably means new role creation is the outcome we should expect, mixed with modest or even slow decline of jobs (less than 10% over 10 years).

Automation struggles to eliminate humans completely because of the above two problems (the need for common sense and foundations, and the need for immediate decision capabilities based on those foundations), and that’s before we even get to the need for memory, feedback loops and strategic thinking. The latter two are essential for robots replacing human drivers. Translation to automation brings out nuances in knowledge that humans excel at, as well as long-term thinking both forwards and backwards.

Machines are supposed to move beyond limited data sets and be able to raise minimum viable objectives above human performance, yet this presupposes success at understanding context. Complex streets and dangerous traffic situations are a very high bar to clear, so high they may never be reached without principled human oversight (e.g. ethics). Without deep knowledge of trucking in its most delicate moments, the reality of driver replacement becomes augmentation at best. Unless the “driver” definition changes, goal posts are moved, and expectations for machines are measured far below full human capability and environmental possibility, we remain a long way from replacement.

Take for example the amount of time it takes to figure out the risk of killing someone in an urban street full of construction, school and loading zones. A human is not operating within a window 10 seconds from impact, because they typically aim to identify risks far earlier, avoiding catastrophes born from leaving decisions to the last seconds.
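A minimal sketch of that window arithmetic, with illustrative numbers of my own choosing: time-to-collision is just gap divided by closing speed, and at highway speeds even a generous-looking gap collapses into seconds.

```python
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither party changes behavior."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing, so no collision course
    return gap_m / closing_speed_mps

# Illustrative numbers: a truck closing at 25 m/s (~56 mph) on a stopped
# vehicle 200 m ahead is already inside an 8-second window, and heavy-truck
# stopping distance alone can consume most of it.
print(time_to_collision(200, 25))  # -> 8.0
```

Which is the point: a driver who only starts deciding inside that window has already failed; the real skill is reading the construction zone or school zone long before the gap exists.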

I’m not simply talking about control of the vehicle, incidentally (no pun intended); I also mean decisions about insurance policies and whether to stay and wait for law enforcement to show up. Any driver with rich experience behind the wheel could tell you this, and yet some automation advocates still haven’t figured it out, as they emphasize the sub-second speed of their machines as all they need/want for making decisions, with no intention to obey human-imposed laws (hit-and-run incidents increased more than 40% after Uber was introduced to London, causing 11 deaths and 5,000 injuries per year).

For those interested in history, we’re revisiting many of the dilemmas posed the first time robotic idealism (automobiles) brought new threat models to our transit systems. Read a 10 Nov 1832 report on deaths caused by ride-share services, for example.

The Inquest Jury found a verdict of manslaughter against the driver,—a boy under fifteen years of age, and who appeared to have erred more from incapacity than evil design; and gave a deodand of 50l. against the horse and cabriolet, to mark their sense of the gross impropriety of the owner in having intrusted the vehicle to so young and inexperienced a person.

1896 London Public Carriages

Young and inexperienced is exactly what even the best “learning” machines are today. Sadly, for most of the 19th Century, London authorities showed remarkably little interest in shared-ride driving ability. Tests to protect the public from weak, incapacitated or illogical drivers of “public carriages” started only around 1896.

Finding the balance between insider expertise based on experience and outsider novice views is the dialogue playing out behind the latest NHTSA automation scale, meant to help regulate safety on our roads. People already are asking whether the costs to develop systems above “level three” (cede control under certain conditions and environments) autonomous vehicles are justified. That third level of automation is what outsiders typically argue to be the end of the road for truck drivers (as well as soccer moms).

The easy answer to the third level is no; it still appears to be years before we can SAFELY move above level three and remove humans from common environments (not least of all because hit-and-run murder economics heavily favor driverless fleets). Cost reductions today through automation make far more sense at the lower ends of the scale, where human driver augmentation brings sizable returns and far fewer chances of disaster or backlash. The real cost, human life error, escalates quickly when we push into the full range of even the basic skills necessary to be a safe driver in every environment or any street.

There also is a more complicated answer. By 2013 we saw Canadian trucks linking up on Alberta’s open roads using simple caravan techniques. Repeating methods known for thousands of years, driver fatigue and energy costs were significantly reduced through caravan theory, like a camel watching the tail of the one in front through a sandstorm. In very limited private environments (e.g. competitions, ranches, mines, amusement parks) the cost of automation is lower and the benefits are realized early.
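A rough sketch of why caravans pay off, with illustrative assumptions rather than measured fleet data: if aerodynamic drag accounts for roughly half of fuel burn, and trailing trucks draft in the leader’s wake, even modest drag reductions add up across the fleet.

```python
def caravan_fuel_saving(drag_share=0.5, reductions=(0.0, 0.10, 0.15)):
    """Average fleet fuel saving from caravanning (platooning).

    Assumes aerodynamic drag is about half of fuel burn (drag_share) and
    each position in the caravan sees the given drag reduction (lead
    truck saves nothing). All figures are illustrative, not fleet data.
    """
    per_truck = [drag_share * r for r in reductions]
    return sum(per_truck) / len(per_truck)

print(f"{caravan_fuel_saving():.1%}")  # -> 4.2%
```

Small per-truck percentages like this are exactly why energy costs, not just fatigue, drove the early caravan experiments: multiplied across thousands of long-haul miles they are real money.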

I say the answer is complicated because a level three autonomous vehicle still must have a human at the controls to take over, and I mean always. The NHTSA has not yet provided any real guidance on what that means in practice. How quickly a human must take over leaves a giant loophole in defining human presence. Could the driver be sleeping at the controls, watching a movie, or even reposing in the back seat?

The Interstate system in America has some very long-haul segments with traffic flowing at similar speeds and infrequent risk of sudden stops or obstructions. Tesla, in their typically dismissive-of-safety fashion despite (or maybe because of) their cars repeatedly failing and crashing, called major obstructions on highways a “UFO”-frequency event.

Cruise control and lane-assist in pre-approved and externally monitored safe-zones in theory could allow drivers to sleep as they operate, significantly reducing travel times. This is a car automation model actually proposed in the 1950s by GM and RCA, predicted to replace drivers by 1974. What would the safe-zone look like? Perhaps one human taking over the responsibility by using technology to link others, like a service or delegation of decision authority, similar to air traffic control (ATC) for planes. Tesla is doing this privately, for those in the know.

Ideally if we care about freedom and privacy, let alone ethics, what we should be talking about for our future is a driver and a co-pilot taking seats in the front truck of a large truck caravan. Instead of six drivers for six trucks, for example, you could find two drivers “at the controls” for six trucks connected by automation technology. This is powerful augmentation for huge cost savings, without losing essential control of nuanced/expert decisions in myriad local environments.

This has three major benefits. First, it helps with the shortage of needed drivers, mentioned above as being filled by women. Second, it allows robot proponents to gather real-world data through safe open-road operations. Third, it opens the possibility of job expansion and transition for truckers into drone operations.

On the other end of the spectrum from boring unobstructed open roads, in terms of driverless risks, are the suburban and urban hubs (warehouses and loading docks) that manage complicated truck transactions. Real human brain power still is needed for picking up and delivering the final miles unless we re-architect the supply chain. In a two-driver, six-truck scenario this means that after arriving at a hub, trucks return to a one-driver, one-truck relationship, like airplanes reaching an airport. Trucks lacking human drivers at the controls would sit idle in a queue or…wait for it…be “remotely” controlled by the locally present human driver. The volume of trucks (read: percentage of “drones”) would increase significantly while the number of drivers needed might decline only slightly.

Other situations still requiring human control tend to involve bad weather or roads lacking clear lines and markings. Again this would simply mean humans at the controls of a lead vehicle in a caravan. Look at boats or planes again for comparison. Both have had autopilots for decades at least, and human oversight has yet to be cost-effectively eliminated.

Could autopilot be improved to avoid scenarios that lead to disaster, killing their human passengers? Absolutely. Will someone pay for autopilots to avoid any such scenarios? Hard to predict. For that question it seems planes are where we have the most data to review because we treat their failures (likely due to concentrated loss of life) with such care and concern.

There’s an old saw about Allied bombers of WWII being riddled with bullet holes yet still making it back to base. After much study the Air Force put together a presentation and told a crowded room that armor would be added to all the planes where concentrations of holes were found. A voice in the back of the crowd was heard asking, “But shouldn’t you put the armor where the holes aren’t? Where are the holes on the planes that didn’t come back?”

It is time to focus our investments on collecting and understanding failures to improve the driving algorithms of humans, by enhancing the role of drivers. The truck driver already sits on a massively complex array of automation (engines and networks), so adding more doesn’t equate to removing the human completely. Humans still are better at complex situations such as power loss or reversion to manual controls during failures. Automation can make the flat, open, straight line into the sunset more enjoyable, as well as the blizzard and the frozen surface, but only given no surprises.

Really we need to be talking about enhancing drivers: hauling more over longer distances with fewer interruptions. Beyond reduced fatigue and increased alertness with less strain, until systems move above level three the best-case use of automation is still augmentation.

Drivers could use machines to make ethical improvements to their complex delivery logistics (fewer emissions, increased fuel efficiency, reduced strain on the environment). If we eliminate drivers in our haste to replace them, we could see fewer benefits and achieve only the lowest forms of automation, the ones outsiders would be pleased with while those who know better roll their eyes in disappointment.

Or maybe Joe West & the Sinners put it best in their classic trucker tune “$2000 Navajo Rug”:

I’ve got my own chakra machine, darlin’,
made out of oil and steel.
And it gives me good karma,
when I’m there behind the wheel

“T V E S L E”: The Poetry of Encryption in the 1080s AD

While reading about the French use of encryption during the 16th century I ran into a reference that said French kings borrowed cryptography concepts from the Arabs. A little more digging and I found an example by Hervé Lehning in “L’Univers des codes secrets: De l’antiquité à Internet”.

He writes that Muhammad ibn Abbad al-Mu’tamid (المعتمد بن عباد), King of Seville from 1069-1092, used birds in poetry for secret correspondence. For example:

La tourterelle du matin craint le vautour,
Qui pourtant préfère les nuées d’étourneaux,
Ou au moins les sarcelles et les loriots
Qui plus que tout craignent les éperviers.

Matching the names of the birds to their first letters we get “t v e s l e”, which Lehning contends is the message “tues-le”: kill him.

My translation:

The morning dove fears the vulture,
yet who prefers swarms of starlings,
or at least teal and orioles,
who most of all fear the hawk.
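The decoding scheme itself is mechanical, so a few lines of Python can sketch it. This is purely illustrative; the bird names are simply taken from the French verses above, in the order they appear, and accents are stripped so “étourneaux” yields a plain “e”:

```python
import unicodedata

def first_letter(word):
    # Strip accents (NFD splits "é" into "e" + combining mark)
    # so the first plain ASCII letter of each bird name is returned.
    normalized = unicodedata.normalize("NFD", word)
    return next(c for c in normalized if c.isascii() and c.isalpha())

# Birds in the order they appear in the poem
birds = ["tourterelle", "vautour", "étourneaux",
         "sarcelles", "loriots", "éperviers"]

print(" ".join(first_letter(b) for b in birds))  # t v e s l e
```

The same acrostic principle works with any ordered list of coded nouns; the poem is just a carrier that gives the letter sequence a plausible surface meaning.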

Would love to find the original imagery as I imagine the King’s poetry to be highly calligraphic or even a form of pictorial encoding.

Our Digital Right to Die

With so many, so many, blog posts about Apple and the FBI, I have yet to see one get to the core issue.

Do we have a digital right to die? After we are dead, in other words, who controls the destiny of our data and what authority do we have over them?

Having been in the security industry for more than two decades I have worked extensively on this problem, not only because of digital forensics. Over the past five years we’ve developed some of the best technical solutions yet to help kill your data, forever, at massive scale.

The market has not seemed ready. Knowledge in this area has been for specialists.

Although I could bring up many cases and examples, most people do not run into them because discussion usually centers on how to preserve things. Digital death is seen as an edge case or outlying situation (regulatory/legal compliance, a dead soldier’s email, a hiker’s cell phone, a famous literary artist’s archives).

It feels like this is about to change, finally.

Everyone seems now to be talking about whether the FBI should be allowed to compel a manufacturer to disable a cell phone’s dead-man switch, for lack of a better term. A dead-man switch (or dead man’s, or kill switch) is able to operate automatically if the person who set it becomes incapacitated.

Dead-man switches can have sophisticated logic. Some are very simple. In the current news the cell phone uses a simple count. After several failed attempts to guess a PIN for a phone, the key needed to access data on that phone is erased.

Philosophically this situation presents a very difficult ethical question: Under what circumstances should law enforcement be able to disarm a dead-man switch to save data from deletion?

In this particular case we have a simple, known trigger in the dead-man switch. Bypassing it in principle is easy because you turn off the counter. Without a count the owner can try forever until they guess the PIN.
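The counter logic described above can be sketched in a few lines. To be clear, this is a toy model of the concept, not any vendor’s real implementation; the class name, attempt limit, and key string are all invented for illustration:

```python
class DeadManSwitch:
    """Toy model of a PIN-guess counter that erases a key after
    too many failed attempts. Illustrative only."""

    def __init__(self, pin, max_attempts=10):
        self._pin = pin
        self._key = "secret-encryption-key"  # stand-in for the data key
        self.max_attempts = max_attempts
        self.failed_attempts = 0

    def try_pin(self, guess):
        if self._key is None:
            return None  # key already erased; data is unrecoverable
        if guess == self._pin:
            self.failed_attempts = 0
            return self._key
        self.failed_attempts += 1
        if self.failed_attempts >= self.max_attempts:
            self._key = None  # the dead-man switch fires
        return None
```

“Bypassing” the switch, in these terms, amounts to disabling the counter: with `max_attempts` effectively infinite, an attacker can simply guess until the PIN falls.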

Complicating the case is that the vendor in question sells proprietary devices. They, by design, want to be the only shop with capability to modify their devices. They do not allow anyone to modify a device without their approval.

If there is any burden or effort here, arguably it is from such a business model to lock away knowledge needed to make the simple configuration change (stop the counter) to a complex device. Some see the change as a massive engineering effort, others say it is a trivial bit flip on existing code, yet no one is actually testing these theories because by design no one but the manufacturer is allowed to.

Further complicating the case is that the person using the device is dead, and technically the device is owned by someone else. Are we right to honor the intentions, unknown, of a dead person who set the dead-man switch over the living owner of the device who wants the switch disabled?

Let me put it this way. Your daughter dies suddenly. You forget the PIN to unlock the phone you gave her to communicate with you. You ask the vendor to please help disable the control that will kill your daughter’s data. Is it your data, because your device, or your daughter’s data?

If the vendor refuses to assist and you go to court, proving that you own the phone and the data is yours, do you have a case to compel the vendor to disable the control so that your data will not die?

What if the vendor says a change to the phone is a burden too great? What if they claim it would take an entirely new version of the iPhone operating system for them to make one trusted yet simple change to disable the dead-man counter? How would you respond to self-serving arguments that your need undermines their model?

It is not an easy problem to solve. This is not about two simple sides to choose from. Really it is about building better solutions for our digital right to die, which can be hard to do right, if you believe such a thing exists at all.

Updated to add reference to “kill switch” regulation:

Apple introduced Activation Lock in iOS 7. The feature “locks” iOS devices with the owner’s iCloud account credentials, and requires that they be authenticated with Apple before the device can be erased and set up again.

Activation Lock was the first commercially available “kill switch” for mobile operating systems, and similar features have since been implemented by Google and Samsung. California passed a law last August requiring that all smartphones sold in the state implement kill switches by July 2015, and an FCC panel in December recommended that the commission establish a similar nationwide framework, citing Activation Lock as a model deterrent.