Category Archives: Security

Employee Use of Social Media

A few of the tracks at RSA discussed employee use of social media, the security risks it may cause, and employees’ rights and advice for employers. A recent settlement highlights some of the issues.

A Connecticut American Medical Response (AMR) employee was fired for making negative comments on her Facebook account about her boss. According to the National Labor Relations Board (NLRB), the complaint alleged that the discharge violated federal labor law because the employee was engaged in protected, concerted activity when she posted the comments about her supervisor and responded to further comments from her co-workers. Additionally, the company maintained overly broad rules in its employee handbook regarding blogging, Internet posting, and communications between employees.

Under the National Labor Relations Act, employees may discuss the terms and conditions of their employment with co-workers and others. “Concerted activity” includes any activity by individual employees united in pursuit of a common goal. The activity must be in concert with or on the authority of other employees, and not solely by and on behalf of the employee himself. (Meyers Industries, 281 NLRB 882 (1986)).

There are also potential First Amendment issues when an employer attempts to limit employees’ speech on social media. Because social media can pose significant security risks for employers, consider including in any company policy a provision on the use of social media by employees. Any policy should be clear, concise, and understood by employees. It is also highly recommended that an attorney review any policy.

Security and the Politics of Humanitarian Aid

My undergraduate honors thesis was on the ethics of US humanitarian assistance to Somalia in 1992. It tried to examine the political influences that determined whom to assist, and how much, in a conflict zone. I just noticed a warning by Oxfam about this exact issue in terms of today’s American international security interests:

If you look at allocations in the course of this past decade to Iraq and Afghanistan, it’s much higher on a per capita basis than the aid given to the Democratic Republic of Congo, which is one of the worst places to live on Earth, and has been for a couple of decades. I think aid per capita at one point in Iraq was 18 times higher than the aid per capita in Congo, even though Iraq – despite the violence at the time – was considerably less badly off than the Congo.

The Deutsche Welle uses some highly charged language as an introduction to the issues:

Billions of dollars are being used for “unsustainable, expensive and sometimes dangerous aid projects” supporting short-term foreign policy and security objectives, while countries in desperate need are being overlooked, according to Oxfam.

Oxfam argues two points against letting security policy be tied to aid.

  1. Military-related assistance can be perceived as tainted and a target of resistance
  2. The military does a poor job identifying and managing assistance areas

They give an area just to the south of Somalia as an example:

If you look at the assistance that the US has given in northern Kenya, which is an area of security interest for the Americans, the US Army has built schools there and then forgotten about them or not ensured that there are teachers and materials for that school to be sustainable. We’ve seen that happen in many parts of Afghanistan as well. The assistance is badly done.

I recently wrote about a little-known US military project in a small African country, very likely to be related to security but portrayed as entirely humanitarian.

SMS protest language censored by phone companies in Uganda

Reuters reveals an interesting African development related to protests in the Middle East and mobile communication:

Uganda has ordered phone companies to intercept text messages with words or phrases including “Egypt”, “bullet,” and “people power” ahead of Friday’s elections that some fear may turn violent.

“Messages containing such words, when encountered by the network or facility owner or operator, should be scrutinised and, if deemed to be controversial or advanced to incite the public, should be stopped or blocked,” he said.

[…]

The other English words or phrases on the list are: “Tunisia”, “Mubarak”, “dictator”, “teargas”, “army”, “police”, “gun”, “Ben Ali” and “UPDF”.

Bad idea. It will not work, not least because the blacklist can be leaked; staying abreast of the slang and permutations already typical of SMS is an impossible goal.

Who would type dictator when they can say tator, or tater, or tot? Who uses police when they can put cops, 5-0 or bobs? Wikipedia provides a list of euphemisms for police that covers every letter in the alphabet. I would use gas, or mace, or lach (short for lachrymatory), or pep(per), or RCA (riot control agent) instead of teargas.

I mean the obvious and historic defense is encoded language: the words gas and pepper have many meanings, and thus are hard to ban. This is a form of substitution; the key to deciphering their correct (intended) meaning is message context or metadata. That easily defeats word-list censorship. How cool is that? Or should I say how radical? I’ve mentioned this before in terms of songs and poems like Kumbaya.
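The weakness is easy to demonstrate. Here is a minimal, illustrative sketch (the word list and messages are my own examples, not Uganda’s actual filter) of a naive blacklist filter and how trivial slang substitutions slip straight past it:

```python
# Illustrative sketch: why word-list censorship fails against substitution.
# A naive filter blocks any message containing a blacklisted term.

BLACKLIST = {"dictator", "police", "teargas", "egypt", "people power"}

def is_blocked(message: str) -> bool:
    """Naive word-list filter: block if any blacklisted term appears."""
    text = message.lower()
    return any(term in text for term in BLACKLIST)

# The literal words are caught...
assert is_blocked("The dictator sent police with teargas")
# ...but everyday substitutions defeat the list entirely,
# because only the sender and recipient share the context
# needed to decode "tater", "5-0" and "pepper".
assert not is_blocked("The tater sent the 5-0 with pepper")
```

The censor is left chasing an ever-growing list of euphemisms, while the intended meaning travels in context the filter cannot see.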

EFF: Higher Stakes Bring Worse Security

A blog post by the EFF has a curious phrase towards the end:

…the higher the stakes, the worse the security…

Sample size? The author clarifies that “worse” means “easily resolved”. This seems to assert a shade of negligence: a decision not to fix security even when it is easy. He tries to explain why this would happen:

I suspect the reason for this pattern is that organizations that handle life, health, and money do not think of themselves as software engineering organizations, and so seek to minimize engineering costs. Additionally, engineering-driven companies tend to be disruptive newbies who have not yet made a big enough impact on the market to control much important information.

I find his analysis lacking for at least a couple of reasons:

  1. Organizations that handle life, health and money do in fact think of themselves as innovators, not to mention software engineering organizations. Investment firms, for example, or research hospitals often have talented staff dedicated to inventing and building software and hardware.
  2. Engineering-driven companies are not all “newbies”. They have been around for decades and they too have grown old.

My personal experience does not resonate with the EFF. Perhaps what the author is trying to say is more in line with what President Clinton described in his keynote speech at the RSA Conference today: “it is easier to think about solutions in developing nations than developed ones.” I resonate more with that.

Higher stakes (higher asset value) do not automatically bring worse security, in my experience. I have found environments that embrace change are easier to secure because they welcome regulation and will pay for innovative solutions. Conversely, those that resist regulation and fear change fall behind on some forms of security fixes. They tend to demand extensive risk-based analysis and cost predictions before they are willing to agree to apply even “easy” security fixes.

A developed environment, if I can borrow the term, is unlikely to allow a Windows XP-based system to be upgraded to Windows 7 when the system is critical (to life, health, money, etc.). The FDA may not allow any “easy” resolution of a security issue unless they have thoroughly tested for other potential harm. That is why “better security” is not typically measured only on whether a problem is “easily resolved” — resolution can introduce other unforeseen and greater problems that threaten the valuable assets.

The delays may drive an IT security professional mad because it seems incredibly slow compared to an easy fix for the problem they see. Yet, this is an opportunity in security to reflect upon greater principles and exercise caution: will an expedient change always bring “better security”?