Category Archives: Security

CHANGEMAKERS: Data Ethics and How to Save the Web

Honored to be a part of the Inrupt mission as profiled by Andrew Sears on All Tech is Human (ATIH) CHANGEMAKERS. Here’s a full text version of my interview:

Davi Ottenheimer is the Vice President of Trust and Digital Ethics at Inrupt, a company striving to restore the balance of power on the web through data decentralization technology. For over 25 years, he has worked to apply security models to preserve human rights and freedoms. He is the co-author of Securing the Virtual Environment: How to Defend the Enterprise Against Attack (2012) and the author of The Realities of Securing Big Data, which is due to release this year. Davi spoke with ATIH about where the web went wrong and how decentralization technology can get things back on track.

ATIH: Tell us about your journey to your present role at Inrupt. How did you first become interested in digital ethics work?

My interest in digital ethics goes back at least to the early 1980s. The 414s seemed to me a foreshadowing of where the world was headed, at least in terms of society defining shades of lawful and unlawful data access. Their story felt very normal, not at all exceptional, because at that time it was similar to what I was experiencing in school, and it was on the cover of big publications like Newsweek.

My family also exposed me very early to authorization concepts in both digital and analog tech. I basically grew up seeing computers as a natural next step, the way a tractor replaces the ox; no one really would want to be without one. That gave me a fluid understanding of ethics across a wide technology spectrum. For example, as a child in very rural Kansas we had only a shared “party line” for telephone; my parents would of course tell me it was wrong to listen to our neighbors’ calls. I was fascinated by it all, and by the time I was in college studying philosophy I was running my own copper from taps on building wires to connect dorm rooms, shifting underutilized resources to community service by taking over computer labs, and all kinds of typical mischief. At the same time I was playfully exploring, I also ended up helping investigate or clean up some clever abuses of the lines by others (e.g. toll fraud, illegal reselling).

More to the point, in college I always tried to turn in digital versions of assignments, including a HyperCard stack (a precursor to websites) on the ethics of linguistic regulation of Internet hate speech. That felt more exceptional, a substantial entry into digital ethics, because my teachers sometimes bristled at being handed a floppy instead of the usual paper. I was deep in a world that many professors at the time had access to yet had barely seen. I still figured at that point that since I could dive into it, anyone could and soon would. It was around 1990 that I excitedly showed a political science professor a 30 second video clip that I had spent 12 hours downloading and reconstituting. I had been studying information warfare and told him dissemination and manipulation were entering a whole new domain with Internet video… he told me “just do your damn homework” (some typical assignment on Middle East peace options) and walked away shaking his head. I felt at that moment I wasn’t giving up or going back; digital ethics had become my thing.

After college I applied to do political research at LSE and they countered with an offer in the history course. I accepted and explored far more historic cases of ethics in intervention (information warfare by Orde Wingate, and power dynamics in taking over large-scale systems while not really owning them, as in the 1940 British invasion of Ethiopia). My history advisor was truly amazing. He encouraged me to go professional with technology work and even told me it wouldn’t be a bad idea to pursue as a career.

It was great advice and I went straight into working for a DEC reseller in California pushing decentralization with PCs and TCP/IP. Getting paid to take hardware and software completely apart to fix it was like heaven for me. From those first phases of interest we can fast forward through twenty-five years of hands-on security work in industries of all shapes and sizes around the world. My journey has always been about convincing people from field to board-level that unsafe technology alters power dynamics, and that we protect liberties by bringing safety principles into engineering as well as policy.

A few years ago a very small database company reached out for help fixing their widely publicized product security flaws. Literally millions of people were being harmed, and they told me they weren’t finding people willing or able to help. I agreed to jump in on the condition they let me drive field-level end-to-end encryption into their product as a feature, while I also cleaned up management practices. It was after we released that end-to-end field-level encryption feature, and after I guided them through an IPO and massive growth to a much safer and more proper course including external oversight, that Bruce Schneier strongly suggested I consider the new Inrupt mission to bring Solid to life. I was thrilled to be given the opportunity to join such an important and challenging role.

ATIH: Inrupt is advancing the development of Solid, an open source platform designed to remake the web. What’s wrong with the web that we have today?

Solid presents a powerful yet very simple concept to remake the web: your data lives in a pod controlled by you. Any data generated by you or your things (e.g. watch, TV, car, computer, phone, thermometer, thermostat) goes to your pod. You then control access at a meaningful level, where consent has real power. I call it the need for an off button and a reset button for big data. Don’t want your data used anymore by who knows who? You have that choice. Want to be the authoritative source on data about you? Also your choice. If your doctor wants to look at your fitness tracker data, you grant that. When a family wants to share moments in photos, they grant that. Want your machines to talk with each other, not the manufacturer, and only at certain times? Up to you, through your pod controls.

We expect this to evolve toward completely distributed models; that may sound idealistic, but such models are necessary and thus not out of the question. At the same time, efficiencies of scale and basic economics tell us many people will have pod service providers instead of going with homegrown or localized varieties. As a long-time self-repair and build-your-own-kernel Linux advocate I see no conflict in innovating toward both off-grid, piecemeal installations and abstract, monolithic cloud services. You win because you have a lot more flexibility in a world where you can flow seamlessly between different worlds of control that suit you.

Sir Tim Berners-Lee calls the Solid project of decentralization pro-human, as opposed to what he calls the current anti-human web platforms. For me perhaps the best way to explain the current problem with the web is aggressive centralization, which historians used to say about the neo-absolutist surveillance state of 1850s Austria. I find it useful to reference historical events to explain the socio-economics we see today with Google.

Another aspect of the problem, which I have been giving presentations about recently, is how our “digital bodies” being owned by large proprietary platforms becomes a form of human trafficking.

Unfortunately 1850s American slave plantations seem to be an appropriate history reference as well. It actually explains Facebook data management and expansionist habits. I don’t say this lightly, especially as Memorial Day was created in 1868 specifically to honor those who made the ultimate sacrifice to win the war that was supposed to end slavery in America.

In my presentations on big data security, for example, I literally ask people to consider how the cotton gin was invented by a woman to end slavery, yet instead it led to state sanctioned rape of American women and forced births to rapidly expand human trafficking. That’s not the kind of history anyone really hears in school, and those are the actual facts. The web was invented to bring freedom, to end our digital selves being locked away, yet it has led to state sanctioned collection methods with vastly expanded proprietary control over almost our entire lives.

ATIH: How did these problems with the web come about?

That’s a great question. There has been massive pressure for data centralization from so many camps that have failed, it’s almost a wonder that any succeeded in cornering the web. I’d like to think the problems are the exception (e.g. like nationalization of the telephone under President Woodrow Wilson, or invasive inspection and destruction of U.S. mail under President Andrew Jackson) and we’re course-correcting to get back on track.

Cable television and AOL dial-up services both, believe it or not, were considered threats at some point to the success of a decentralized web. Microsoft too, although it obviously found itself in US government regulatory trouble when it aggressively tried to center the web around its browser and operating system. Some might point to RFC2109 but I find the socio-economics to be more important than this technical specification that helped build statefulness.

Perhaps the real turning point that set back decentralization came soon after the web was being panned as just a fad that would never rebound after the dot-com disaster. We witnessed in a time of crisis the giant transfer from small businesses to conglomerates, which might feel familiar to economists looking at today’s pandemic.

The optimism of the hugely diverse commercialization efforts by startups, which in large part led to the crash, generated a kind of popular herd momentum that was picked up by the few dominant remaining technology firms. They in fact roared out of the dot-com crash with far more influence, far more human connectivity, and the market rewarded a kind of fast monopolistic growth as it escaped financial downturn. The web’s standardization and ease of use, once the transformation to it was popular, made it a perfect vehicle for massive-scale growth.

The next market crash, from the mortgage crisis, then served as another accelerator of the trend toward centralization, coupled with more powerful devices becoming less expensive and connected by default to the standards-based web. The technology sector became seen as a stable financial engine and attracted business innovators who believed user generated content had the best potential value, and they set out to build systems that flipped the web on its head: systems that would keep users connected by making it difficult to exit.

What’s notable about this history is the financial conditions and technological shifts that may never again materialize in quite the same way. That’s why I see dangerous centralization as a form of regression, an error that requires applied humanitarian correction. It’s like firing a CISO who steals, or countering the rise of extremist anti-science cults that typically form in response to recent scientific breakthroughs. I don’t believe in an inherent centralization need, or natural monopoly, in this context. In fact I see more of the opposite: that we should think about Facebook in terms of why abolition of slavery never should even have been a debate. Had there not been the stress effects that led to over-centralization as a form of wealth preservation (arguably an incorrect response fueled by other unfortunate market conditions) the web could have continued to evolve in the more naturally pro-human model.

ATIH: Inrupt’s Co-Founder, Sir Tim Berners-Lee, calls Solid a project “to restore the power and agency of individuals on the web.” How does Solid accomplish this?

Giving users consent controls over their data, making the technology that represents a human actually human-centric, is the path forward. The pod is a representation of one’s self that should be liberating, and that is manifestly difficult to achieve without Solid. Many people now seem to agree we can and need to fundamentally alter the balance of power, whether they call it Solid or something else. Given that everywhere you are, something is a computer generating data that represents you, taking control of your digital body has become essential.

In terms of projects at massive scale I’ve lived through early days on multiple data sharing standard initiatives (DICOM, TCP/IP, HTTP, HTML, KMIP) and worked deep inside the security teams for the biggest data platform companies (EMC, Yahoo, ArcSight, etc.), so much of the scope and ambition feels familiar. I mean when you really think about it, the Solid protocol could be even bigger and more exciting, like a 1968 Carterfone moment or even the 1862 Emancipation Proclamation, in terms of how small decisions restore power to individuals at massive scale.

The market either will keep building costly bilateral bridges and proprietary hubs people are trapped on, like we’re seeing with Apple and Google’s attempts at a Covid-19 alerting API, or standard languages and protocols will emerge to lower barriers to human-centric innovation and expand the web again by decentralizing it and shifting to Solid. If it’s the latter we’ll see a boom in more generalized global prosperity. Trust is an essential step towards making it all work, just like always, yet user security awareness levels are higher than I’ve ever seen before. Doing things the right way for humans to preserve their agency, to help everyone avoid becoming victims of digital trafficking, brings up all kinds of granular authorization discussions. We are quite literally reimagining technology so that the Solid protocols better augment and protect positive human conditions, complex communities, cultures and all.

ATIH: What does your role as VP of Trust and Digital Ethics look like day-to-day?

I get up and wonder if I can configure my toothbrush to start streaming its data to my pod. Pour a cup of tea and think about a digital assistant reading data in my pod to know exactly what leaf and temperature “Earl Grey, Hot” means, and how accurate automated speech recognition would become when designed from the start to be highly localized on pod data and trained for my particular accent within my community or even my family.

But seriously, every day runs the gamut of working on trust concepts for the specification and protocols being designed, including security for the related products being engineered and the services offered to support both at a global production-ready scale.

People are coming to us with very real problems to solve across every sector and industry. It’s a great feeling to be helping them, working with such a passionate team that carries a real vision of a better future. Lately there has been a lot of discussion with both large and small organizations that have similar values and find Solid and Inrupt deliver a valuable piece in their own human-centered technology development projects.

Almost every day I am reminded of the important lessons and legacy of John James Ingalls, such as his application of “ad astra per aspera” (to the stars through difficulties), which became the Kansas state motto in 1861.

Bletchley Park Codebreaker Obituary: Ann Mitchell

The death of Ann Mitchell, aged 97, was just announced in Edinburgh.

One of only 5 women accepted to read mathematics at Oxford in 1940, she finished her degree a year early and went on to play a key role in Hut 6 “Machine Room” at Bletchley Park.

Hut 6 dealt with the high priority German army and air force codes, the most important of which was the “Red” code of the Luftwaffe. They wrote out some of the jumbled nonsense which had been received and underneath wrote a “crib” of the probable German text. Ann’s key role was the next step in breaking the code: composing a menu that showed links between the letters in the text received and the crib, with the more compact the menu, the better.

As every code for every unit of the German forces was changed at midnight, each day the work began all over again to identify clues to the new day’s codes. It was an intense intellectual process, working against the clock, and the urgency provided a constant challenge. Ann and her colleagues in Hut 6, most of whom had degrees in economics, law or maths, worked around the clock in shifts, with one free day each week.

As the war came to a close, the number of messages declined until there were no more. “I did go up to London for VE Day on 8 May 1945 but I remember very little about the celebrations,” she said. The codebreakers returned to normal life and, having signed the Official Secrets Act and sworn not to divulge any information about her work, Ann never told anyone, not even her husband, about her wartime role.

She led a life of great service delivered quietly — her groundbreaking WWII work in mathematics was not officially recognized until 2009.

Women, whose stories have been told far less widely than the men they worked with, reportedly made up three-quarters of the workforce at Bletchley Park.

Whatever the reason these remarkable women codebreakers are so rarely mentioned while their male colleagues were profiled, historians lately have been trying to update and correct the record.

Food for thought when you consider the origins of cyber security had such a high percentage of women, and yet in the latest surveys “women accounted for 10% of the cybersecurity workforce in the Asia-Pacific region, 9% in Africa, 8% in Latin America, 7% in Europe and 5% in the Middle East.”

Like many veterans after the war Ann contributed to other areas. She researched social impacts of divorce and made significant contributions to Scots family law, “which ensured that the needs of children were properly taken into account in a divorce settlement”.

The BBC also has details of her life.

NRA Supports Governor’s Capitol Building Gun Ban

I’ve read so many articles about the gun-toting American protesters entering a state capitol building that I’ve lost track of the number. It’s a hot news item for sure. What to do?

However, only very rarely have I seen any mention that the NRA position on this issue has been to ban guns. They backed Governor Ronald Reagan when he said it was a necessary law.

The display so frightened politicians—including California governor Ronald Reagan—that it helped to pass the Mulford Act, a state bill prohibiting the open carry of loaded firearms, along with an addendum prohibiting loaded firearms in the state Capitol. The 1967 bill took California down the path to having some of the strictest gun laws in America and helped jumpstart a surge of national gun control restrictions.

To be fair, Ronald Reagan was a bit of a racist exaggerator, so here’s the Snopes perspective on his rush to ban guns.

“The Black Panthers had invaded the legislative chambers in the Capitol with loaded shotguns and held these gentlemen under the muzzles of those guns for a couple of hours. Immediately after they left, Don Mulford introduced a bill to make it unlawful to bring a loaded gun into the Capitol Building. That’s the bill I signed. It was hardly restrictive gun control.”

[This account by Ronald Reagan] wasn’t true, however: the Black Panthers had not held legislators “under the muzzles of guns” for hours. They were disarmed by the capitol police soon after entering the building, and, according to most contemporaneous accounts (including that of the Associated Press), were escorted out of the chambers 30 minutes later.

Source: Sacramento Bee

Of course the NRA we know today, as I’ve written elsewhere, remains very much the same organization with the same values it had in this period, when it pushed for a ban on guns.

GDPR Fine Print: 725,000 Euro Penalty for Collecting Biometrics

Fine issued for misuse of fingerprints.

The logic of this huge enforcement action was simple: biometric data was collected disproportionate to need.

Employees of a company had to have their fingerprints scanned for attendance and time registration. After investigation, the Dutch Data Protection Authority concluded that the company should not have processed employee fingerprints. The company cannot rely on an exception ground for the processing of special personal data. The company will be fined 725,000 euros for this.

Humans were put at risk because privacy wasn’t being properly minded. Attendance and time authentication were not reasonable use-cases, as effective ID options exist that do not require collecting biometrics.

An exception for collection would be made if fingerprints were an appropriate control mechanism, such as in a system protecting the user’s data by verifying them with something they are.

Over 20K Dead in NYC: 30 Days of COVID-19

The numbers are expected to go even higher, but for now the NYT has said it’s reasonable to assume 20,000 people in NYC were killed in just 30 days.

Part of the reason for the revised count has been COVID-19 visualizations that compare current death rates against historic ones.

Source: NYT

The Empire State Building-looking shape on the right is the rate of death in NYC during COVID-19 relative to the historic rates to the left.

The Financial Times did a similar analysis globally.

Global coronavirus death toll could be 60% higher than reported: Mortality statistics show 122,000 deaths in excess of normal levels across 14 countries analysed by the FT

Source: FT

This kind of comparison of current deaths against historic averages seems an extremely wise way to estimate severity of COVID-19 right now for several reasons:

First: COVID-19 autopsies have confirmed the early signs that the virus kills people in novel ways.

We already heard that EMT crews couldn’t keep defibrillator batteries charged during a single shift because cardiac arrest calls during the pandemic suddenly tripled or more.

EMTs also reported unexpected trauma from low survival rates.

The cardiac arrests are the hardest calls right now. More than once, we have been present at the moment of capture and yet were unable to save the patient. In the past, if a patient goes into cardiac arrest and we witness it or are there within three minutes, we can often save them. We use a defibrillator to shock them and restart their hearts. But for COVID-19 patients, this is not happening. We are not getting any of them back — and now the Department of Health doesn’t want us to bring dead patients to the hospital, so we are pronouncing them dead in the field and turning the bodies over to the police who have to wait for a coroner.

Second: the CDC has started to release reports that even early February deaths in California homes were from COVID-19.

Officials say they originally thought that the first COVID-19 death in the [Santa Clara] county was on March 9. Autopsies were performed on two people who died on February 6 and February 17. The CDC received tissue samples from the coroner and were able to confirm that both cases were positive for SARS-CoV-2. The third individual who died on March 6 was also confirmed to have been positive for COVID-19.

That death at the start of February was a woman who had a “burst heart”, further proving the point above about the novel ways COVID-19 kills people.

County health officials have said if they knew at the time the woman had coronavirus, they might have issued shelter in place orders earlier. […] “There’s an indication the heart was weakened.” [Dr. Judy Melinek, a Bay Area forensic pathologist who reviewed the autopsy report] said “The immune system was attacking the virus and in attacking the virus it damaged the heart and then the heart basically burst.” Dowd’s husband, citing his wife’s strong exercise habits and overall good health before falling ill, had requested an autopsy.

NYC still maintains that March 11 was the date of its first confirmed death, which obviously will need to be changed.

Given how a healthy American abruptly died February 6th from the virus, consider also how the White House was operating at that time. Just Security provides a detailed timeline:

February 10-March 2, 2020 … five rallies across the United States, each attracting thousands of attendees in confined spaces. The rallies take place in New Hampshire (2/10), Arizona (2/19), Colorado (2/20), Nevada (2/21), South Carolina (2/28), and North Carolina (3/2).

These rallies, like a death cult gathering, will most certainly be a cause of fatalities in America.

Third: Low numbers are controversial. I’ve been tracking death rates from the NYC Department of Health since the first cases reported (tragically unreported in the JHU dashboard, as I wrote about March 3rd).

When I posted the following chart the other day, for example, I immediately heard backlash from people with family in NYC. They complained deaths were known to be very high, so there was no possible way my graph could show such low numbers, let alone a decline or tapering off.

Red is death, Grey is hospitalization.

It’s true: while the actual death rate is high, it likely is even higher than what these official NYC Department of Health numbers show. Confirmed COVID-19 test results increasingly look like a subset of deaths far above normal trends when compared to death rates of prior years.

I’m not saying the low count graph I made is wrong in terms of a trend. That trend is real and does reflect the case load on NYC services. The numbers definitely are in decline and pressure is considerably lower on EMTs.

What’s surely low confidence is the daily count. When I can find the data and the time, I will add in a low/high estimate to show actual deaths daily and not just the shape of the pandemic curve.
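For the curious, the arithmetic behind these excess-death charts is simple enough to sketch in a few lines of Python. The numbers below are placeholders I made up purely to show the method, not actual NYC data:

# Sketch of the excess-mortality arithmetic behind the NYT/FT style charts.
# All figures are hypothetical placeholders, not actual NYC data.
weekly_deaths = [1100, 1350, 2400, 4100, 5200]  # observed deaths, week by week
baseline = 1050                                 # historic average for those weeks

excess = [observed - baseline for observed in weekly_deaths]
print(sum(excess))  # total deaths above the historic norm
# Confirmed COVID-19 deaths are a subset of this excess, which is why
# official counts can understate the true toll.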

One final thought. Often when I post a visualization of deaths some middle-aged white man invariably will come forward and say a per capita rate is the only thing that matters. Imagine a close relative dying and some random guy saying to you “don’t worry, your sister’s death per capita is insignificant, given how siblings overall in this region are doing just fine.”

Having to care about others drives some people to minimize human life through “per capita” models; presenting the raw counts without a per capita calculation tends to reveal which viewers default to callous and selfish thinking.

Per capita still has a place. Experts are good at finding ways to make different population numbers relevant (to measure likelihood or severity) yet that shouldn’t be turned by just anyone into a license to dismiss every human life as a percentages game.

The better model is Vision Zero, which says 40,000 American traffic deaths per year is 40,000 too many.

Replace Your Door Peephole With a DIY Thermal Camera

Source: not FLIR

In November 2019 a DIY article was posted on how to build and train an inexpensive RPi thermal camera.

Even with more complex network architectures, the optical model wouldn’t score above a 91% accuracy in detecting the presence of people, while the thermal model would achieve around 99% accuracy within a single training phase of a simpler neural network.

Despite the high potential, there’s not much out there in the market — there’s been some research work on the topic (if you google “people detection thermal camera” you’ll mostly find research papers) and a few high-end and expensive products for professional surveillance. In lack of ready-to-go solutions for my house, I decided to take on my duty and build my own solution — making sure that it can easily be replicated by anyone.

Now you can easily make one to mount on your door and give thermal readings for guests as well as announce known visitors.
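To give a feel for the capture side of such a build, here is a minimal Python sketch for grabbing frames from an inexpensive thermal sensor on a Raspberry Pi. I am assuming an MLX90640 sensor with the Adafruit CircuitPython driver, and the hot-pixel threshold is purely illustrative, a crude stand-in for the article’s trained neural network:

# Hypothetical frame-grab loop for an MLX90640 thermal sensor on a Raspberry Pi.
# Assumes the adafruit-circuitpython-mlx90640 driver is installed.
import time
import board, busio
import adafruit_mlx90640

i2c = busio.I2C(board.SCL, board.SDA, frequency=800_000)
mlx = adafruit_mlx90640.MLX90640(i2c)
mlx.refresh_rate = adafruit_mlx90640.RefreshRate.REFRESH_2_HZ

frame = [0.0] * 768  # 32x24 grid of pixel temperatures, in Celsius
while True:
    mlx.getFrame(frame)  # fills the list in place with one thermal frame
    hot = sum(1 for t in frame if t > 30.0)  # crude warm-body heuristic
    if hot > 40:  # tune for your doorway, distance and climate
        print(f"Warm visitor detected ({hot} hot pixels)")
    time.sleep(0.5)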

The commercial FLIR thermal camera site gives this image as proof of its utility, although I expect they soon will update to reflect pandemic uses as well.

Source: FLIR

Not surprisingly, despite the long list of reasons to use thermal imaging (e.g. higher integrity of signal, more resilient to environmental interference) the EFF makes a very tone-deaf argument against its future use:

Terrorism is one thing — because it’s an ongoing problem. But there’s no reason why this kind of technology would need to stick around after the COVID-19 crisis is over.

That reads to me as if the EFF believes that after the COVID-19 crisis is over there will no longer be any other threats, let alone a need for higher integrity in visual signals (e.g. authentication).

This Day in History: 1812 Luddites Attack a Zoom Mill

“Luddites confined their attacks to manufacturers who used machines in what they called ‘a fraudulent and deceitful manner’ to get around standard labor practices. ‘They just wanted machines that made high-quality goods and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.’ The British authorities responded by deploying armed soldiers to crush the protests.”

On this day in 1812 a group of a hundred or more (some say thousands) Luddites near Manchester attempted to enter Burton’s Mill in protest. Armed guards of the mill as well as British soldiers fired live rounds into the crowd, killing up to a dozen people.

So why were these Luddites protesting and why were they murdered for it?

There’s a common misconception among those who say Luddites were an anti-technology group, which the Smithsonian fortunately has tried to dispel.

The label now has many meanings, but when the group protested 200 years ago, technology wasn’t really the enemy.

Let me put it like this. To say Luddites were anti-technology is like saying Robin Hood was anti-technology.

Does anyone say “that Robin Hood really hated the bow and arrow”? No. That makes no sense. His story was about the moral use of bow and arrow (disruptive technology of his day, as proven in the 1415 Battle of Agincourt).

Robin Hood was a folk hero who popularly protested the misuse of technology by elites.

Much like the legend of Robin Hood, a powerful Ludd character rose out of the Sherwood Forest area of Nottingham to fight for morality as a crucial factor in the use of technology; Luddites then demanded quality and expertise in tech be valued above exploitation.

The Luddites therefore were experts at using technology who disliked owners using machinery in ways known to increase death and suffering.

Think of these heavily armed mill owners in the 1800s, targeted by Luddites, as the Sheriff of Nottingham from 400 years earlier. Then ask who really was on the side of the Sheriff in Robin Hood’s time?

The Sheriff of Nottingham, known for being “completely unsympathetic to the poverty of the town’s people, using immoral ways to collect taxes”

Or in today’s terms, think of this like people protesting Zoom’s immoral practices. Those (including myself) calling for Zoom usage to be ended immediately until their ethics show signs of improvement… we are not rejecting technology by holding it to a higher bar!

Luddites today would be the ones calling for an end to Zoom’s obviously deceitful and harmful business practices, to make technology safer for everyone.

Those who have been taught that Luddites didn’t like technology have been misled; don’t forget the entire point of a group who righteously protested against technology used immorally (wielded selfishly by owners and with obvious harms).

Even more tragically, people often leave out the fact that Luddites were ruthlessly murdered by factory gunmen and hanged for daring to defend society under a concept of greater good.

In truth, they inflicted less violence than they encountered. In one of the bloodiest incidents, in April 1812, some 2,000 protesters mobbed a mill near Manchester. The owner ordered his men to fire into the crowd, killing at least 3 and wounding 18. Soldiers killed at least 5 more the next day.

Earlier that month, a crowd of about 150 protesters had exchanged gunfire with the defenders of a mill in Yorkshire, and two Luddites died. Soon, Luddites there retaliated by killing a mill owner, who in the thick of the protests had supposedly boasted that he would ride up to his britches in Luddite blood. Three Luddites were hanged for the murder; other courts, often under political pressure, sent many more to the gallows or to exile in Australia before the last such disturbance, in 1816.

At least 8 killed in just one protest. Some estimates are double. But in all cases the government was using overwhelming force.

To be fair, Luddites reportedly also did commit violent acts against people, even though it ran counter to their overall goals of social good.

Some claims were made that Luddites intimidated local populations into sheltering and feeding them, similar to charges against Robin Hood. That seems like dubious government propaganda, however, as Luddites were a populist movement and “melting away” was again a sign of popular support rather than violent intimidation tactics.

Indeed, more often there were accounts of Luddites sneaking into factories at night and cleverly taking soldiers’ guns away to destroy only the machines as a form of protest. People were set free and unharmed.

An exception was the case above, where a mill owner “boasted” of murdering Luddites while arming guards and calling in the military… escalation unfortunately was set on a path where Luddites stepped up their defense and retaliation.

Don’t forget 1812 was a very violent time overall for the British, with tensions rising around inequality (food shortages) and protracted European war (1803–1815), including rising tangles with America over its relations with France.

Prime Minister Spencer Perceval, who vehemently opposed the Luddites, was assassinated May 11, 1812 by a merchant named John Bellingham.

Bellingham walked up and shot Perceval point-blank, then calmly sat down on a bench nearby to await his arrest. Conspiracy theories soon circulated, suggesting American merchants and British banks were conspiring to end trade blockades with France.

A month after the May assassination was when the War of 1812 began with America.

All that being said, if you want to ensure technology improves, and doesn’t just exploit unsuspecting consumers to benefit a privileged few, read more about the populist Luddite as well as Robin Hood stories from Nottingham.

These legends represent disadvantaged groups appealing for justice against a tyranny of elites.

Also, consider how “General Ludd” was another fictional character of Sherwood Forest by design. Here’s a quick Ludd rhyme that was turned into a ticket of entry for meetings.

“This simple stamped ticket with its message showing support for General Ludd would have allowed entrance to one of the local meetings.”

It was his (and Robin Hood’s) inauthenticity, as the face of a very real populist cause, that made them impossible to kill.

The legend of Ludd kept “his” cause of justice alive despite overwhelming opposing military forces. Allegedly British authorities invoked “posse comitatus” (it’s a thing Sheriffs are known to do) and deployed more soldiers domestically to stop Luddites than were deployed in the war against Napoleon.

Nottingham took on the appearance of a wartime garrison… authorities estimated the number of rioters at 3,000, but at any one time, no more than 30 would gather…

In American history we have similar heroes, such as the inauthentic yet also real General Tubman. She fought plantation owners in the same sense that Ludd fought mill owners; targeting the immoral use of machinery.

Surely slave owners would have called Tubman an anti-technology radical at war with their manufacturing if they could have made such absurd accusations stick (instead of her being remembered rightly as an American patriot, veteran, abolitionist and human rights champion).

Sadly people incorrectly brand Luddites as anti-technology, when in fact they very much were in favor of proper and skilled use of technology. Hopefully someday soon this chapter in history will stand corrected.

Anonymous Donates Rare Film to Bletchley Park

Codebreakers seen working at Bletchley Park in rare old film reel recently revealed.

Big news from the Park itself:

Dr. David Kenyon, Research Historian at Bletchley Park highlights the rarity of this find: “No other film footage of a site intimately connected with Bletchley Park exists. We don’t know who filmed it and the footage doesn’t give away any state secrets or any clues about the work the people in it are doing. If it fell into the wrong hands, it would have given little away, but for us today, it is an astonishing discovery and important record of one of the most secret and valuable aspects of Bletchley Park’s work.”

The reel of wartime footage, preserved in its original canister, has been donated to Bletchley Park by a donor who wishes to remain anonymous.

A 5-minute documentary about the new film already has been posted to YouTube.

White House Reveals Secret Head of COVID-19 Policy

In a breathtaking move of transparency, the White House has come forward to reveal the head of its COVID-19 policy and response coordination all along.

Behind the scenes, held as a tight secret until now, was the highly decorated and very well known General Buck Turgidson. The General formerly had led efforts to drive the world towards global cyber annihilation.

The American government plan to delay its COVID-19 response is now said to have been intentional, pushing the country to the highest death rates in the world within just one month.

Source: Johns Hopkins

Mass casualties were estimated to have the effect of positioning the US government as the most helpless victim, setting up vicious attack campaigns and angry missives against China and the WHO.

Proud of the delays and confusion of the American people, which led so quickly to tens of thousands killed, the White House also posted the following example of detailed analysis from the General:

Mr. President. I’m not saying we wouldn’t get our hair mussed. But I do say no more than 10 to 20 million killed, tops.

General Buck Turgidson in mid-January, advising White House to wait on #covid19 response and then launch attack campaigns against China and WHO

Simple Illustration of Zoom Encryption Failure

Zoom engineering management practices have been exposed as far below industry standards of safety and product security. They have been doing a terrible job, and it is easy now to explain how and why. Just look at their encryption.

Citizen Lab’s April 3, 2020 report broke the news of Zoom practicing deception with weak encryption, and gave this top-level finding:

Zoom documentation claims that the app uses “AES-256” encryption for meetings where possible. However, we find that in each Zoom meeting, a single AES-128 key is used in ECB mode by all participants to encrypt and decrypt audio and video. The use of ECB mode is not recommended because patterns present in the plaintext are preserved during encryption.

It’s a long report with excellent details, definitely worth reading if you have the time. It even includes the famous electronic codebook (ECB) mode penguin, which illustrates why ECB is considered so broken for confidentiality that nobody should be using it.

Tux

I say famous here because anyone thinking about writing software to use AES surely knows of or has seen this image. It’s from an early 2000s education campaign meant to prevent ECB mode selection.

There’s even an ECB Penguin bot on Twitter that encrypts images with AES-128-ECB that you send it so you can quickly visualize how it fails.

The problem is simply that using ECB means identical plaintext blocks generate identical ciphertext blocks, which preserves recognizable patterns. This also means when you decipher one block you see the contents of all of the identical blocks. So it is manifestly the wrong choice for large streams of data intended to be confidential.
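If you want to see that for yourself, here is a minimal sketch in Python (assuming the third-party cryptography package) that encrypts two identical plaintext blocks in ECB mode and then, for contrast, in CTR mode:

# Demonstration: ECB leaks repeated plaintext blocks; CTR does not.
# Assumes: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                 # AES-128, the key size Zoom actually used
plaintext = b"ATTACK AT DAWN!!" * 2  # two identical 16-byte blocks

ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ct = ecb.update(plaintext) + ecb.finalize()
print(ct[:16] == ct[16:32])  # True: identical blocks leak right through

ctr = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16))).encryptor()
ct2 = ctr.update(plaintext) + ctr.finalize()
print(ct2[:16] == ct2[16:32])  # False: a proper mode hides the repetition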

However, while Citizen Lab included the core image to illustrate this failure, they also left out a crucial third frame on the right that can drive home what industry norms are compared to Zoom’s unfortunate decision.

The main reason this Linux penguin image became famous in encryption circles is that it shows the huge weakness faster than any explanation of ECB cracking could. It makes obvious why Zoom really screwed up.

Now, just for fun, I’ll still try to explain here the old-fashioned way.

Advanced Encryption Standard (AES) is a U.S. National Institute of Standards and Technology (NIST) algorithm for encryption.

Here’s our confidential message that nobody should see:

zoom

Here’s our secret (passphrase/password) we will use to generate a key:

whywouldyouuseECB

Conversion of the password from ASCII to hex could simply give us a 128 bit block (the first 16 bytes of ASCII as 32 hex characters):

77 68 79 77 6f 75 6c 64 79 6f 75 75 73 65 45 43

Yet we want to generate a SHA256 hash from our passphrase to get ourselves a “strong” key (used here just as another example of poor decision risks, since PBKDF2 is a far safer choice to generate an AES key):

cbc406369f3d59ca1cc1115e726cac59d646f7fada1805db44dfc0a684b235c4

We then take our plaintext “zoom”, pad each letter out to a full block of its own, and use our key to generate the following ciphertext blocks (the AES block size is always 128 bits, i.e. 32 hex characters, even when longer keys are used, such as AES-256 with its 256 bit keys):

a53d9e8a03c9627d2f0e1c88922b7f3f
ad850495b0fc5e2f0c7b0bf06fdf5aad
ad850495b0fc5e2f0c7b0bf06fdf5aad
b3a9589b68698d4718236c4bf3658412

I’ve kept the 128 bit blocks separate above and highlighted the middle two because you can see exactly how the repeated “o” in our plaintext is reflected by two identical ciphertext blocks.

It’s not as obvious as the penguin, but you still kind of see the point, right?

If we string these blocks together, as if sending over a network, to the human eye it is deceptively random-looking, like this:

a53d9e8a03c9627d2f0e1c88922b7f3fad850495b0fc5e2f0c7b0bf06fdf5aadad850495b0fc5e2f0c7b0bf06fdf5aadb3a9589b68698d4718236c4bf3658412

And back to the key: if we run decryption on our stream, we see our confidential content padded out in uniformly sized blocks:

z***************o***************o***************m
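If you want to reproduce the walkthrough above, here is a Python sketch, again assuming the cryptography package. One assumption on my part, working backwards from the decrypted output: each letter gets padded with “*” to fill its own 16-byte block.

# Hedged reconstruction of the "zoom" walkthrough above.
from hashlib import sha256
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

passphrase = b"whywouldyouuseECB"
key = sha256(passphrase).digest()  # 32 bytes -> AES-256 key; a weak KDF, as noted
                                   # (PBKDF2 with a random salt is the safer choice)

blocks = [ch.encode().ljust(16, b"*") for ch in "zoom"]  # one padded block per letter

enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = b"".join(enc.update(b) for b in blocks) + enc.finalize()
for i in range(0, len(ciphertext), 16):
    print(ciphertext[i:i + 16].hex())  # middle two lines match: both encrypt "o***..."

dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
print(dec.update(ciphertext) + dec.finalize())  # z***...o***...o***...m*** (the
                                                # display above trims padding after "m")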

You also probably noticed at this point that if anyone grabs our string they can replay it. So using ECB also brings an obvious simple copy-and-paste risk.

A key takeaway, pun intended of course, is that Zoom used known weak and undesirable protection by choosing AES-128 ECB. That’s bad.

It is made worse because they told customers it was AES-256; they’re not disclosing their actual protection level and calling it something it’s not. That’s misleading customers who may run away when they hear AES-128 ECB (as they probably should).

Maybe run away is too strong, but I can tell you all the cloud providers treat AES-256 as a minimum target (I’ve spent decades eliminating weak cryptography from platforms; nobody today wants to hear AES-128). At least two “academic” attacks have been published for AES-128: “key leak and retrieval in cache” and “finding the key four times faster”.

And the NSA published a revealing doc in 2015 saying AES-256 was their minimum guidance all the way up to top secret information.

On top of all that, the keys for Zoom were being generated in China even for users in America not communicating with anyone in China.

Insert conspiracy theory here: AES-128 was deemed unsafe by NSA in 2015 and ECB has been deemed unsafe for streams by everyone since forever… and then Zoom just oops “accidentally” generates AES-128 ECB keys on Chinese servers for American meetings? Uhhhh.

It’s all a huge mess and part of a larger mismanagement pattern, pun intended of course. Weak confidentiality protections are pervasive in Zoom engineering.

Here are some more examples to round out why I consider it pervasive mismanagement.

Zoom used no authentication for their “record to cloud” feature, so customers were unwittingly posting private videos onto a publicly accessible service with no password. Zoom named recorded calls with a default scheme, and users stored them in insecure, open Amazon S3 “buckets” that could be easily discovered.

Do you know what encrypted video that needs no password is called? Decrypted.

If someone chose to add authentication to protect their recorded video, the Zoom cloud only allowed a 10 character password (protip: NIST recommends long passwords. 10 is short) and Zoom had no brute force protections for these short passwords.
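Some back-of-envelope arithmetic (my own illustration, assuming a 62-symbol alphanumeric alphabet) shows why a 10 character cap with no throttling is reckless:

# Rough size of the best-case 10-character password space.
import math

alphabet = 26 + 26 + 10    # lowercase + uppercase + digits
keyspace = alphabet ** 10  # every possible 10-character password
print(f"{keyspace:.2e} guesses, about {math.log2(keyspace):.0f} bits")
# ~8.39e17 guesses, roughly 60 bits, and that is the best case: real human
# passwords are far weaker, and with no brute-force protection guessing
# becomes practical. NIST SP 800-63B favors allowing much longer passwords.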

They also used no randomness in their meeting ID, kept it a short number and left it exposed permanently on the user interface.

Again all of this means that Zoom fundamentally didn’t put the basic work in to keep secrets safe; didn’t apply well-known industry-standard methods that are decades old. Or to put it another way, it doesn’t even matter that Zoom chose broken unsafe encryption routed through China and lied about it when they also basically defaulted to public access for the encrypted content!

Zoom sold you an unsafe barn AND forgot to put doors on. Any reasonable person should be very surprised to find horses inside.

It would be very nice, preferred really, if there were some way to say these engineering decisions were naive or even accidental.

However, there are now two major factors prohibiting that comfortable conclusion.

  1. The first is set in stone: the Zoom CEO is the former VP of engineering at WebEx, who after it was acquired by Cisco tried to publicly shame Cisco for using his “buggy code”. He was well aware of both safe coding practices and the damage to reputation from bugs, since he tried to use that as a competitive weapon in direct competition with his former employer.
  2. The second is an entirely new development that validates why and how Zoom ended up where they are today: the CEO announced he will bring on board the ex-CSO of Facebook (now working at Stanford, arguably still for Facebook) to lead a group of CSOs. The last thing Zoom needs (or anyone for that matter) is twelve CSOs doing steak dinners and golf trips while chatting at the 30,000 foot level about being safe (basically a government lobby group). The CEO needs expert product security managers with their ears to the ground, digging through tickets and seeing detailed customer complaints, integrated deep into the engineering organization. Instead he has announced an appeal-to-authority fallacy (a list of names and associations) with a very political agenda, just like when tobacco companies hired Stanford doctors to tell everyone smoking is safe.

Here’s the garbage post that Zoom made about their future of security, which is little more than boasting about political circles, authority and accolades.

…Chief Security Officer of Facebook, where he led a team charged with understanding and mitigating information security risks for the company’s 2.5 billion users… a contributor to Harvard’s Defending Digital Democracy Project and an advisor to Stanford’s Cybersecurity Policy Program and UC Berkeley’s Center for Long-Term Cybersecurity. He is also a member of the Aspen Institute’s Cyber Security Task Force, the Bay Area CSO Council, and the Council on Foreign Relations. And, he serves on the advisory board to NATO’s Collective Cybersecurity Center of Excellence.

We are thrilled to have Alex on board. He is a fan of our platform…

None of that, not one sentence, is a positive sign for customers. It’s no different, as I said above in point two, from tobacco companies running a PR campaign about the Stanford or Harvard doctor they’ve put on the payroll to tell kids to smoke.

Even worse is that the CEO admits he can’t be advised on privacy or security by anyone below a C-level:

…we are establishing an Advisory Board that will include a subset of CISOs who will act as advisors to me personally. This group will enable me to be a more effective and thoughtful leader…

If that doesn’t say he doesn’t know how to manage security at all, I’m not sure what does. He’s neither announcing promotion of anyone inside the organization, nor is he announcing a hire of someone to lead engineering who he will entrust with day-to-day transformation… the PR is all about him improving his own skills and reputation and armoring against critics by buying a herd to hide inside.

This is not about patching or a quick fix. It really is about organizational culture and management theory. Who would choose ECB mode for encryption, would so poorly manage the weak secrets making bad encryption worse, and after all that… be thrilled to bring on board the least successful CSO in history? Their new security advisor infamously pre-announced big projects (e.g. encryption at Yahoo in 2014) that went absolutely nowhere (never even launched a prototype), is accused of facilitating atrocities while facing government prosecution for crimes, and demonstrably failed to protect customers from massive harms.

Zoom just hired the ECB of CSOs, so I’m wondering how and when everyone will see that fact as clearly as with the penguin image. Perhaps it might look something like this.


Update April 12: Jitsi has posted a nice blog entry called “This is what end-to-end encryption should look like!” These guys really get it, so if you ask me for better solutions, they’re giving a great example: superb transparency and a low-key, modest approach. Don’t be surprised instead when Zoom rolls out some basic config change like AES-256-GCM by default and wants to throw itself a ticker-tape parade for mission accomplished. Again, the issue isn’t a single flaw or a config, it’s the culture.

Update April 13: a third-party (cyber-itl.org) security assessment of the Zoom linux client finds many serious and fundamental flaws, once again showing how terrible general Zoom engineering management practices have been, willfully violating industry standards of safety and product security.

It lacks so many base security mitigations it would not be allowed as a target in many Capture The Flag contests. Linux Zoom would be considered too easy to exploit! Perhaps Zoom using a 5 year out of date development environment helps (2015). It’s not hard to find vulnerable coding in the product either. There are plenty of secure-coding-101 flaws here.

These are really rookie, 101-level flaws that any reasonable engineering management organization would have done something about years ago. It is easy to predict how this form of negligence turns out, so ask why Zoom believed they could get away with it.