Category Archives: Security

Firewalls for Android: WhisperMonitor

If we accept the premise that the perimeter model of security is eroding and systems are becoming a loosely federated collection of compute power and storage…then will firewalls even exist? Yes, and I don’t see the perimeter going away.

Case in point, Whisper Systems’ new product for Android:

Dynamic egress filtering.

When enabled, WhisperMonitor will monitor all outbound network traffic and issue dynamic prompts in order to determine egress filter rules.

Excellent feature. ZoneAlarm was famous for this. Knowing who your device is communicating with seems like an obvious requirement for security controls, yet far too many administrators spend all their time blocking inbound traffic. Filtering outbound traffic is just as important.
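WhisperMonitor’s internals are not published, but the prompt-and-remember pattern it describes is easy to sketch. Here is a minimal, hypothetical illustration (in Python, with made-up app and host names): any outbound connection without a matching rule triggers a prompt, and the answer becomes the rule.

```python
# A minimal sketch of the dynamic egress-prompt pattern -- hypothetical,
# not WhisperMonitor code. Unknown outbound connections trigger a prompt,
# and the answer is stored as a persistent rule.

ALLOW, DENY = "allow", "deny"

class EgressFilter:
    def __init__(self):
        # (app, host, port) -> ALLOW/DENY, built up from user decisions
        self.rules = {}

    def decide(self, app, host, port):
        key = (app, host, port)
        if key not in self.rules:
            answer = input(f"{app} wants {host}:{port} -- allow? [y/N] ")
            self.rules[key] = ALLOW if answer.strip().lower() == "y" else DENY
        return self.rules[key]

fw = EgressFilter()
# The first attempt prompts; repeat attempts reuse the stored rule.
print(fw.decide("com.example.app", "203.0.113.10", 443))
print(fw.decide("com.example.app", "203.0.113.10", 443))
```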

Of course that raises the question of monitoring:

Connection history.

WhisperMonitor optionally records the connection history of the software installed on your device, giving you insight into where it is connecting and how often.

What I can’t find in WhisperMonitor is the ability to set up zones or profiles, a standard firewall feature. It would be excellent to be able to switch between a work mode with egress to one set of systems, and a personal or home mode with different egress rules.
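For what it’s worth, a profile layer would be a small addition on top of a rule table. A hypothetical sketch, with invented profile names and destinations:

```python
# Hypothetical profile ("zone") layer -- the feature wished for above,
# not something WhisperMonitor documents. Each profile carries its own
# egress allowlist; switching profiles swaps the active rule set.

PROFILES = {
    "work": {("mail.example-corp.com", 993), ("intranet.example-corp.com", 443)},
    "home": {("imap.example.net", 993), ("media.example.net", 443)},
}
active = "work"

def egress_allowed(host, port):
    """Allow only destinations listed in the currently active profile."""
    return (host, port) in PROFILES[active]

print(egress_allowed("mail.example-corp.com", 993))  # True under "work"
active = "home"  # e.g. triggered by leaving the office network
print(egress_allowed("mail.example-corp.com", 993))  # False under "home"
```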

That might be something more likely to be found in Juniper Pulse, which allows egress filtering for Symbian S60 devices based on configuration policies (not yet for Android).

Speaking of Pulse, imagine if you could tunnel all traffic from the mobile device back to your home router and then filter it there. That could be handy for those who want to manage and monitor a policy for all their phones — a single shared egress point with a perimeter for all mobile users in a family or group.

Under the Covers with CVE-2011-0654

NIST gave CVE-2011-0654 the highest severity rank (10.0) on February 16th: complete loss of confidentiality, integrity and availability from a trivial remote exploit. They assigned it the following Common Weakness Enumeration:

CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer

Microsoft apparently failed to follow their own design specification. Section 2.2.3, “RequestElection Browser Frame”, of the Common Internet File System (CIFS) Browser Protocol has the following requirement:

ServerName (variable): MUST be a null-terminated ASCII server name and MUST be less than or equal to 16 bytes in length, including the null terminator.

The requirement seems unusually clear, with heavy emphasis: it “MUST” be less than or equal to 16 bytes. Is that meant to be a hint or a challenge?

It obviously is not difficult to craft a ServerName value that violates the 16-byte null-terminated ASCII rule in an SMB response, and it is not hard to push a user into accessing a malicious server. So the attack difficulty is rated as low. Section 3.3.5.8, “Receiving a RequestElection Frame from CIFS”, also makes a good point about elections and forcing a response.

The RequestElection frame (as specified in section 2.2.3) MUST be sent whenever a browser client or server is unable to retrieve information that is maintained by the local master browser server. It also MUST be issued when a local master browser server receives a frame that indicates that another machine on the machine group also believes it is a local master browser server.

In other words, there are plenty of vectors for sending an oversized ServerName that violates section 2.2.3 of the CIFS Browser Protocol and compromising a system.
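The required check is simple, which is what makes the failure notable. Here is a sketch of conformant parsing of the ServerName field, written in Python for brevity (the real validation lives in Microsoft’s CIFS browser code, in C):

```python
# Sketch of the section 2.2.3 check -- illustrative Python, not the
# Windows CIFS browser code. ServerName MUST be null-terminated ASCII
# and MUST be <= 16 bytes including the terminator; reject anything
# else instead of copying it into a fixed-size buffer.

MAX_SERVERNAME = 16  # bytes, including the null terminator

def parse_server_name(field: bytes) -> str:
    end = field.find(b"\x00")
    if end == -1:
        raise ValueError("ServerName is not null-terminated")
    if end + 1 > MAX_SERVERNAME:
        raise ValueError(f"ServerName is {end + 1} bytes; the limit is {MAX_SERVERNAME}")
    name = field[:end]
    if any(b > 0x7F for b in name):
        raise ValueError("ServerName is not ASCII")
    return name.decode("ascii")

print(parse_server_name(b"FILESERVER01\x00"))  # 13 bytes including terminator: OK
try:
    parse_server_name(b"A" * 32 + b"\x00")     # 33 bytes: the CVE-2011-0654 shape
except ValueError as err:
    print("rejected:", err)
```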

Microsoft responded to this boundary error on April 12th, almost two months after disclosure:

Security Bulletin MS11-019 – Critical
Vulnerabilities in SMB Client Could Allow Remote Code Execution (2511455)

…these vulnerabilities could allow remote code execution if an attacker sent a specially crafted SMB response to a client-initiated SMB request. To exploit the vulnerability, an attacker must convince the user to initiate an SMB connection to a specially crafted SMB server.

Their patch is nothing fancy. It basically follows the section 2.2.3 requirement.

The security update addresses the vulnerabilities by correcting the manner in which the CIFS Browser handles specially crafted Browser messages, and correcting the manner in which the SMB client validates specially crafted SMB responses

That could be the start and end of the story, except for the fact that Microsoft is not really new to this game.

Microsoft Security Bulletin MS10-020 – Critical
Vulnerabilities in SMB Client Could Allow Remote Code Execution (980232)

Microsoft Security Bulletin MS10-006 – Critical
Vulnerabilities in SMB Client Could Allow Remote Code Execution (978251)

Microsoft Security Bulletin MS09-001 – Critical
Vulnerabilities in SMB Could Allow Remote Code Execution (958687)

Microsoft Security Bulletin MS08-068 – Important
Vulnerability in SMB Could Allow Remote Code Execution (957097)

Microsoft Security Bulletin MS06-030
Vulnerability in Server Message Block Could Allow Elevation of Privilege (914389)

Microsoft Security Bulletin MS05-027
Vulnerability in Server Message Block Could Allow Remote Code Execution (896422)

Microsoft Security Bulletin MS05-011
Vulnerability in Server Message Block Could Allow Remote Code Execution (885250)

Note the exploit details from February 8, 2005:

The Server Message Block (SMB) implementation for Windows NT 4.0, 2000, XP, and Server 2003 does not properly validate certain SMB packets, which allows remote attackers to execute arbitrary code via Transaction responses containing (1) Trans or (2) Trans2 commands, aka the “Server Message Block Vulnerability,” and as demonstrated using Trans2 FIND_FIRST2 responses with large file name length fields

How strangely familiar that sounds, despite being six years ago…and last but not least (I’ve omitted many others, such as MS02-070):

Microsoft Security Bulletin MS03-024
Buffer Overrun in Windows Could Lead to Data Corruption (817606)

I am reminded of a January 1997 paper by Hobbit (hobbit@avian.org) called “CIFS: Common Insecurities Fail Scrutiny”:

By now the reader may be thinking twice before replacing all those Unix servers with NT, and considering the significant risks in yielding to all that marketing rah-rah. In general we now see, in what is hoped to be a clearer way than previously, both how and why to check networks for these additional vulnerabilities. Unix may have its own problems, but overall it is still easier to secure and verify for correctness, and is largely free with all sources included. There are many good people out there proactively finding and fixing Unix problems on a daily basis. And as detailed in this document, Unix still has plenty of fight in it to help kick the NT monster in the ass.

Ah, look how far we have come.

The Filter Bubble in Reverse

Eli Pariser, of MoveOn.org, gave a TED talk on why some people walk through a city looking for the same thing over and over again (the familiar comfort of a Starbucks) while others seek out new and different tastes and ideas. In his presentation, the former represents an opportunity for personalised service while the latter does not.

Ask yourself (especially if you are sipping on the Frappa-Mocha-Latte-Lowfat-Vente made just the way you like it) how a restaurant you have never seen before could give personalised service. Is it even possible? It might seem like a legitimate question today, but it used to be the other way around. Restaurants used to have trained staff to guide you to a meal selection. People used to ask how a restaurant that serves people over and over again (at least in name — the franchise) from an industrialised menu could ever feel personal.

To be fair, he is speaking only about search-engine results, not the food examples I give above, so the presentation is billed as a new issue with information technology. However, I do not see why his warning would not apply to any service industry.

Some people seek out a genuinely personal experience despite the risk while others actually want to be identified and served in a consistent and predictable manner. If you want something that is not “highly tailored”, you actually do still have options. You can choose not to use Facebook just like you choose not to eat at KFC.

An example he gives of online personalisation is Netflix, so here is my view of that specific site: I struggled so much with its suggestion system that it made me quit. Not only did it guess my preferences incorrectly, it annoyed me to the point where I ended up researching why some discs arrived quickly while others took weeks.

What I figured out over a few months of opening different accounts was that if I sent movies back too quickly (more than four a month), an algorithm started to “throttle” my next selections. They were pushing suggestions to me not only on an “interest” level but also on a demand model of profit: slow accounts (i.e. higher-profit ones) were given priority on popular titles, while fast accounts received suggestions of unpopular ones. Pariser does not bring up this profit element of personalisation at all, although you might guess it from his Egypt news example.
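To make the suspected model concrete, here is a toy simulation of the allocation rule I seemed to be on the wrong side of. This is entirely my own reconstruction, not anything Netflix has published:

```python
# Toy reconstruction of the suspected allocation rule -- my inference
# from months of account experiments, not anything Netflix published.
accounts = [
    {"name": "slow account", "discs_this_month": 2},  # higher margin
    {"name": "fast account", "discs_this_month": 9},  # lower margin
]
inventory = ["popular new release", "back-catalog title"]

# Low-volume (higher-profit) accounts pick first, so heavy renters
# tend to receive the unpopular leftovers.
for account in sorted(accounts, key=lambda a: a["discs_this_month"]):
    print(account["name"], "receives:", inventory.pop(0))
```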

Also omitted from his presentation is the fact that preferences shift over time. The first generation to experience the latest ideas in personalisation may desire it highly, yet the next generation could easily reject it as a boring, lame or lazy habit and seek out the opposite: non-personal is cool again. “Check me out, I’m anonymous.”

The adoption of personalisation therefore should not be assumed to be a constant, linear risk. We are no more likely to approach a “web of one” than we are to see all grandparents and grandchildren singing the same tune.

Consistency, too, could fall out of fashion as people seek out sources of information that push challenging or contradictory viewpoints.

So while I sympathise with Pariser’s lament and warning about search engines, I don’t think he explains the risk in its historical or broader social context.

Finally, although his Google example mentions 47 data points for user identification, there is no mention of why the number grew so high. Google obviously was not satisfied with fewer data points as users became more mobile, more virtual, more multi-user, multi-platform and harder to recognise. The figure is meant to be a shocking example of how a search engine really knows it’s you, but it could also be read as a hopeful statistic. Identification is hard now, and there are many ways it could become even harder for a site to figure out who is really who: he could have suggested we build a reverse filter bubble.

Visa Releases Mobile Best Practices

A new best practices document is available from Visa. It is meant to address questions related to mobile phones accepting card payments.

…a set of mobile acceptance best practices for merchants, software developers and device manufacturers who are using consumer mobile devices, such as smartphones and tablet computing platforms to facilitate the acceptance of card payments. Visa best practices call for important security considerations such as encryption and tokenization of cardholder data and are designed to foster a better understanding of the merchant and service provider responsibilities related to securing cardholder data when a mobile phone is used as an acceptance device instead of a traditional terminal.

The emphasis on encryption and tokenization is a long time coming. Will this soon be extended to every POS? With the infrastructure in place for mobile, extending it to POS seems very likely in the near term.
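Visa’s document stays at the level of principles, but the tokenization idea itself is easy to illustrate. A minimal sketch (my own illustration, not Visa’s scheme): the acceptance device keeps only an opaque token, and the token-to-PAN mapping lives in a separate vault.

```python
# Minimal tokenization sketch -- an illustration of the principle, not
# Visa's scheme. The PAN is swapped for a random token at acceptance
# time; only the vault can map the token back.

import secrets

class TokenVault:
    def __init__(self):
        self._pan_by_token = {}  # token -> PAN, held only by the vault

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(16)  # random, carries no card data
        self._pan_by_token[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._pan_by_token[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print("merchant stores:", token)                    # safe to keep on the device
print("vault resolves:", vault.detokenize(token))   # only possible inside the vault
```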

It also raises the question of whether strong authentication, the entire emphasis of chip-based payment cards, will now garner less attention from Visa (non-chip transactions under 30% used to mean Visa did not force the PCI requirements).

Perhaps most interesting is Visa’s re-emphasis of a standards role for the industry that clearly is independent of the PCI SSC.

…Eduardo Perez, head of global payment system risk, Visa Inc. “As a payment technology leader, Visa is well positioned to provide the industry security guidance for emerging acceptance solutions.”