X.org evdev segfault

Aside from the emerging exposure issues in user displays (e.g. a physical or virtual graphics card using main memory), I’m noticing stack buffer overruns in the X Window System. Bug 973297 for X.org evdev describes another one related to device names.

[Test Case]
Plug in the headset and see if X crashes. Alternatively, use utouch-evemu to create a virtual headset using the attached Logitech_Wireless_Headset.prop file:

$ sudo utouch-evemu device Logitech_Wireless_Headset.prop

[Regression Potential]
The fix touches code that affects how input device axes are labeled. These labels are used primarily by GIMP and a few other drawing tools when working with a tablet device. It is possible that a regression could occur, causing the axes to be labeled incorrectly.

…or it is possible someone could create a virtual device with a malicious label.

Fighting Big (Data) Brother

When I was a full-time student of Cold War history I had to study how the constant watch by an unknown yet omnipresent force affected people. In American classrooms that meant asking questions about the legacy of the 1968 Prague Spring or trying to prove whether Marshal Josip Broz Tito was really as popular as reported publicly.

If you clicked on the links above you will see that I believe some of the best sources for an answer could be from literary and artistic writers. There are classics like The Trial by Kafka and 1984 by Orwell, but we also gain excellent insights from modern work such as The Lives of Others by Donnersmarck.

The real story, however, is not just about the situation in someone else’s backyard. Surveillance society is a risk everywhere and is inextricably linked to advances in communication. It thus seems inevitable to find warnings at home of a “double-edge” to technology. A good example is the historic criticism by U.S. Senator Frank Church of an American-based system meant to spy on the Soviets.

“…this capability at any time could be turned around on the American people” he said in 1975, “and no American would have any privacy left, such is the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide.”

He added that if a dictator ever took over, the N.S.A. “could enable it to impose total tyranny, and there would be no way to fight back.”

He was wrong. A dictator is not necessary to achieve what many would consider tyranny, at least if you take into account definitions put forward by the likes of Kafka and Orwell. And the N.S.A.’s giant spy center is not the only one to consider.

He also was right. The capability has officially been turned around on the American people.

CNET has learned that the FBI has formed a Domestic Communications Assistance Center, which is tasked with developing new electronic surveillance technologies, including intercepting Internet, wireless, and VoIP communications.

And that brings me back to the study of surveillance in the Cold War. The British classroom forced me to expand the scope of discussion. We had to spend many hours debating and trying to make sense of policies all over the world where analysts collected data on citizens to enforce laws and inform leaders. One of the things that stood out to me was how citizen behavior altered in some versions of surveillance, but not others. The difference appeared to me linked to a sense of value and opportunity.

If collection of information is in any way perceived by an individual as a threat to their success then countermeasures are a natural reaction. It was only when risk from surveillance was not perceived (e.g. “I have nothing to hide”), or a greater alternate threat was proposed (e.g. surveillance will save you from a worse fate) that someone might be expected to comply without question.

What countermeasures? We destroy a trail or hide it. In security terms, people use integrity and confidentiality controls to fight surveillance. The tricky part is that we enjoy and want social approval; it gives us a broader, more circumspect view of ourselves. On the other hand, we do not want to feel monitored to the point where we are boxed in by our every decision (i.e. the dilemma of whether to lead a heavy or a light existence, as expressed in The Unbearable Lightness of Being).

Countermeasures might not always be the right term. I am reminded of Manuel Castells’ 1996 book The Rise of the Network Society. He emphasized that a globally interconnected communication system was unlikely to make a work force go completely mobile. He showed that people prefer to keep local social attachment (e.g. owning a house, living near family and friends) intact. However, other social structures such as labor relationships were not as stable. They could evolve because more opportunities would be available and it would make part-time and temporary work the norm.

Thus, rather than call them countermeasures, we tend to keep only trails intact where we perceive value and limited risk. Cycling through connections when more connections are offered at low cost becomes a new norm, which can cause problems for those interested in data collection and analysis.

With that in mind, I noted recently big data researchers saying that their subjects are fighting back.

Ms. Boyd has made a specialty of studying young people’s behavior on the Internet. She says they are now often seeking power over their environment through misdirection, such as continually making and destroying Facebook accounts, or steganography, a cryptographic term for hiding things in plain sight by obscuring their true meaning. “Someone writes, ‘I’m sick and tired of all this,’ and it gets ‘liked’ by 32 people,” she said. “When I started doing my fieldwork I could tell you what people were talking about. Now I can’t.”

Now. Just like in history. The behavior she describes sounds like exactly what could be expected in a surveillance society that has a low cost to connection cycles. What does this mean in terms of future behavior? It’s not clear who the best artistic writers will be yet but there may be much more lightness ahead.

For example, in the past there was a desire in America to make a phone number portable to maintain continuity across providers, but the new trend appears to be more like what Castells predicted and numbers have short-term or temporary use. Another example is demand for peer-based key management, instead of server, for mobile devices. A third example is demand for sandboxes and hypervisors to create safe havens of communication. Hiding or destroying the trail of an application, a machine or even an entire data center, is more possible than ever through virtualization.

Ok, enough with the MD5 already

Today in a meeting I referenced a 2007 paper by Arjen Lenstra and Benne de Weger on how to break MD5 to abuse vendor software updates.

Here it is again for convenience:

Given the recent insights into the weaknesses of MD5, the bottomline of our work is: MD5 should no longer be used as a hash function for software integrity or code signing purposes. By now, everyone should be aware of this.

The paper also explained why their collisions were different from before.

In December 2004 Dan Kaminsky and Ondrej Mikle, and later Peter Selinger, published similar attacks, based on the Wang-type collisions that require two binary files that differ only in the colliding blocks. To create such files from two executables with different behaviour that yet collide under MD5, each of the two files has to contain both executables in full, somehow using the collision to switch on the one and hide the other.

Our colliding files are based on chosen-prefix collisions. This means that we only have to append a few thousand carefully chosen bytes to each file to reach an MD5 collision. Each file by itself contains only one of the two executables. This is less suspicious.

And we also knew in 2007, in terms of real-world data, that Google could easily show collisions were far more common than one might expect.

Fast forward to today and it is like some people completely missed five years of warnings.

…the collision attacks observed in Flame could have been prevented if Microsoft had stopped employing MD5 sooner.

Whether or not you buy into the big compute-power argument or the attack-sophistication argument for Flame (neither of which is well quantified), the message since 2007 has been to stop using MD5 for trust.
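Dropping MD5 for integrity checks is a one-line change in most environments. A minimal Python sketch (function name and chunk size are illustrative, not from any particular vendor’s code) of streaming a file through a modern hash instead:

```python
import hashlib

def file_digest(path, algo="sha256"):
    """Hash a file in chunks; defaults to SHA-256 rather than MD5."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        # Read in fixed-size chunks so large installers don't load into memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

The same function can still compute an MD5 digest (`algo="md5"`) for comparing against legacy published checksums, but anything signed or trusted should use the SHA-2 family.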

CVE-2012-2118: X.org input device format string

The xorg-x11-server has a format string flaw in how it builds certain log messages. An attacker would have to give a device a specially crafted name and then get the local Xorg server instance to use it; the flaw could then cause the server to abort or malfunction.

IBM rated this vulnerability high risk, yet gave it a base score of only 4.4:

X.org could allow a local attacker to execute arbitrary code on the system, caused by a format string error when adding an input device with a malicious name. By persuading a victim to open a specially-crafted file containing malicious format specifiers, a local attacker could exploit this vulnerability to execute arbitrary code on the system or cause the application to crash.

NIST gives it a perfect 10, however.

Access Vector: Network exploitable
Access Complexity: Low
Authentication: Not required to exploit
Impact Type: Allows unauthorized disclosure of information; Allows unauthorized modification; Allows disruption of service

The flaw is explained on Patchwork:

The culprit here was not the user-supplied format string but rather xf86IDrvMsg using the device’s name, assembling a format string in the form of

 [driver]: [name]: message[/name][/driver]

i.e. “evdev: %n%n%n%n: Device \”%s\”\n”
and using that as format string to LogVWrite.
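The bug class itself is easy to demonstrate outside of C. Here is a minimal Python sketch (function names are hypothetical, for illustration only) of the same pattern: untrusted data concatenated into the format string rather than passed as a format argument.

```python
def log_unsafe(driver, name, args):
    # BUG pattern: the device name becomes part of the format string,
    # so any %-specifiers the attacker put in the name get interpreted.
    fmt = driver + ": " + name + ": Device \"%s\"\n"
    return fmt % args

def log_safe(driver, name, device):
    # Fix: the device name only ever travels as a %s argument.
    return "%s: %s: Device \"%s\"\n" % (driver, name, device)

device_name = 'Evil %s%s%s%s'  # attacker-chosen device name

print(log_safe("evdev", device_name, "event5"))

try:
    log_unsafe("evdev", device_name, ("event5",))
except TypeError as err:
    # The rogue %s specifiers consumed arguments that were never supplied.
    print("format string interpreted attacker data:", err)
```

In Python the damage stops at an exception or garbled output; in C, a `%n` in the device name lets the attacker write to memory through the `printf` family, which is why the network-exploitable scoring from NIST is so severe.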

I also mentioned a serious vulnerability related to evdev in an earlier post.

…crash is related to changing the driver for input devices using evdev (xserver-xorg-input-evdev) — the kernel event delivery mechanism that handles multiple keyboards and mice as separate input devices