NTSB Aircraft Accident Report

New public reports of recent aircraft accidents in America have been posted by the NTSB.

The Members of the National Transportation Safety Board meet in public session, generally held on Tuesdays, under the provisions of the Government in the Sunshine Act, to discuss and adopt accident reports, special investigation reports, safety studies, and other Board products.

One report focuses on the need for better medical examination procedures for pilots to anticipate the risk of brain haemorrhage, as well as ways to reduce flight recorder failures.

The other discusses the catastrophic impact of poor risk management and incident response:

Contributing to the accident were an organizational culture that prioritized mission execution over aviation safety and the pilot’s fatigue, self-imposed pressure to conduct the flight, and situational stress. Also contributing to the accident were deficiencies in the NMSP aviation section’s safety-related policies, including lack of a requirement for a risk assessment at any point during the mission; inadequate pilot staffing; lack of an effective fatigue management program for pilots; and inadequate procedures and equipment to ensure effective communication between airborne and ground personnel during search and rescue missions.

Bicycle Speed Limit on Golden Gate Bridge

An uninspired solution to bicycling risk has been proposed for San Francisco’s Golden Gate Bridge:

  1. On a busy day, as many as 6,000 bicyclists and 10,000 pedestrians use the sidewalks
  2. Over the last 10 years, there have been 164 reported bicycle-involved accidents that produced 178 injuries; 119 of those injuries were serious enough to require transport by ambulance
  3. The most common type of accident on the Bridge sidewalks is the solo bike accident
  4. There are 5 times as many solo bicycle accidents as bicycle-pedestrian accidents
  5. The most common accident location is the west sidewalk, where pedestrians are prohibited
  6. Speed was identified as a contributing factor in 39% of all bike-related accidents.

I hope I’m not the first person to point this out but there are an average of 30 fatalities a year from the bridge — people who jump off. My guess is a consultant is already working on a Golden Gate Bridge jump speed limit. A fall at a slower rate would dramatically reduce fatalities. Other options have already been ruled out.

The original architect called for a higher rail, but the builder was a short man and insisted on lowering the bar so his view of the bay would not be obstructed.

To put this in perspective, the consultants set their speed limit recommendation at half of the measured current mean. A 10 mph limit is very slow even for a beginner cyclist on smooth pavement with a natural descent, let alone for a commuter cyclist who is fit from riding every day.

If the consultants had followed the California 85th percentile rule (the speed at or below which 85% of those on the road travel) they would have set the limit above 20 mph. A limit set so far below the natural flow and current conditions is destined for failure and controversy. Asking the California Highway Patrol to commit resources to enforcing this scenario while cutting back resources elsewhere, for example, seems entirely misguided.
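For anyone curious how that percentile figure is derived, the calculation is trivial. A minimal sketch in Python, using made-up speed measurements rather than the consultants’ data:

```python
import statistics

# Hypothetical spot-speed sample (mph) from a sidewalk speed count;
# these numbers are illustrative, not from the consultants' study.
speeds = [12, 14, 15, 16, 17, 18, 18, 19, 20, 20, 21, 22, 23, 25, 28]

# 85th percentile: the speed at or below which 85% of riders travel.
p85 = statistics.quantiles(speeds, n=100)[84]
mean = statistics.mean(speeds)

print(f"mean speed: {mean:.1f} mph, 85th percentile: {p85:.1f} mph")
```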

The recommendation also raises the question of why such a large drop in speed is justified when the report on the Golden Gate Bridge shows less than one solo bicycle accident (a rider crashing without striking anyone else) per 80,000 miles ridden.
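A rough sanity check of that rate is easy to do. In the sketch below, only the 164 accidents over 10 years comes from the figures above; the average daily rider count and crossing length are my own assumptions:

```python
# Back-of-the-envelope accident rate. Assumptions are hypothetical:
# the report cites 6,000 riders on a busy day, so assume ~3,000 on an
# average day, and a sidewalk crossing of roughly 1.7 miles.
accidents = 164            # reported bicycle-involved accidents (10 years)
years = 10
riders_per_day = 3_000     # assumed average, not from the report
miles_per_crossing = 1.7   # approximate bridge length

miles_ridden = riders_per_day * miles_per_crossing * 365 * years
print(f"~1 accident per {miles_ridden / accidents:,.0f} miles ridden")
# With these assumptions the rate works out to roughly one accident per
# 100,000+ miles, consistent with "less than one per 80,000 miles".
```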

A more thorough analysis might have admitted that speed above 10 mph is not the problem; difference in speed is the problem. The authorities could just as easily ticket bicyclists who are travelling too slowly and causing a hazard, and require all cyclists to stay above 15 mph. After all, insurance companies who look at the statistics know that over 80% of accidents are caused by drivers going too slow, not too fast; the more people travel at a similar speed, the lower the risk of pulling out to pass and ending up in a head-on collision.
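The arithmetic behind the speed-differential argument is simple enough to sketch; the speeds below are illustrative:

```python
# Closing speed between two riders in the same lane, and in a head-on
# pass, for uniform versus mixed speeds. Values are illustrative.
def closing_speed(a_mph: float, b_mph: float, head_on: bool = False) -> float:
    return a_mph + b_mph if head_on else abs(a_mph - b_mph)

print(closing_speed(18, 17))                # 1 mph: uniform flow, easy to manage
print(closing_speed(20, 10))                # 10 mph: fast rider overtaking a slow one
print(closing_speed(20, 20, head_on=True))  # 40 mph: swinging into oncoming traffic
```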

Rural two-lane roadways are statistically the most dangerous because of a high incidence of deadly head-on collisions and the difficulty impatient drivers face while overtaking slower vehicles.

As long as I’m talking about creative interpretation of data, here’s an idea that I think many would find a lot more interesting: return a lane to the bridge’s original design for public transportation.

San Francisco was a city without much surplus land to use for roads and depended on its cable cars and the Key System, which operated 230 electric trolleys and trains. Immediately after acquiring controlling interest in the parent company of the Key System, National City Lines announced its plans to replace the entire system with a fleet of—you guessed it—General Motors buses. The Key System owned rights of way across the Golden Gate Bridge; these rights of way were paved over to make way for cars and buses.

This approach would reduce the number of people who have to commute by car, increase the number of tourists, and add revenue for the bridge to offset the cost of maintenance. And if bicyclists share the trolley lane, the bridge also gains a high-speed, non-distracted corridor for cyclists to cross at a reasonable speed.

Photo from BikeCal: Bill Oetinger

US to Require Automotive Black Boxes

Wired provides a detailed look at what’s down the road for drivers in America:

Next month, the National Highway Traffic Safety Administration is expected to declare that all vehicles must contain an event data recorder, known more commonly as a “black box.” The device, similar to those found in aircraft, records vehicle inputs and, in the event of a crash, provides a snapshot of the final moments before impact.

That snapshot could be viewed by law enforcement, insurance companies and automakers. The device cannot be turned off, and you’ll probably know little more about it than the legal disclosure you’ll find in the owner’s manual.

What is missing from their report is that these black boxes send data wirelessly in real time. This has already been tested in several cities.

The wireless link is not just for incident response. Communication through existing vehicle monitoring infrastructure is under review as a way to enable traffic control (e.g. intersection light timings, such as yellow signal intervals).
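It is easy to imagine what such a broadcast would carry. The structure below is purely hypothetical (no NHTSA rule or vendor format is implied); it simply shows the kind of snapshot an event recorder could push to roadside infrastructure such as a signal controller:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical event-recorder broadcast; field names are illustrative,
# not taken from any regulation or vendor specification.
@dataclass
class EdrSnapshot:
    vehicle_id: str
    timestamp: float
    speed_mph: float
    throttle_pct: float
    brake_applied: bool
    heading_deg: float

snapshot = EdrSnapshot("veh-1234", time.time(), 42.0, 15.0, False, 270.0)
payload = json.dumps(asdict(snapshot))
print(payload)  # what a roadside receiver might ingest in real time
```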

Of course, a wireless interface to the event recorders on the cars around you, and to traffic signals, opens up the possibility of losing data integrity and confidentiality. What are the chances that the system will be designed to be formally correct?
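To illustrate just the integrity half of that concern: an unauthenticated broadcast can be forged by anyone in radio range, while even a basic keyed MAC lets a receiver reject tampered payloads. A minimal sketch (key management, replay protection, and confidentiality are deliberately ignored):

```python
import hmac
import hashlib

SHARED_KEY = b"demo-key-only"   # hypothetical; a real system needs key management

def sign(payload: bytes) -> bytes:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(payload), tag)

msg = b'{"vehicle_id": "veh-1234", "speed_mph": 42.0}'
tag = sign(msg)
print(verify(msg, tag))                             # True: payload accepted
print(verify(msg.replace(b"42.0", b"90.0"), tag))   # False: tampering detected
```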

L4.verified: A Formally Correct Operating System Kernel

The L4.verified project has a beautifully written introduction. They eloquently argue (a good sign for their code) that it is possible to eliminate risk from specific areas of development.

Imagine your company is commissioning a new vending software. Imagine you write down in a contract precisely what the software is supposed to do. And then — it does. Always. And the developers can prove it to you — with an actual mathematical machine-checked proof.

Sounds to me like they’re making the case for compliance. It’s not just a check-list; it’s proof of something.

I have presented this in terms of cloud at conferences for the past few years and tried to make it clear, but I have to give kudos to the L4.verified authors: their explanation is tight.

Here’s my spin on things:

  • When someone says to themselves they are secure, they are done.
  • When someone says to someone else that they are secure, they then have to prove their statement and show the confidence intervals (e.g. tests and error rates or deviation).

This is the difference between security and compliance. The latter requires proof with peer review. L4.verified says they can prove security through an automated system: compliance by design.

…the issue of software security and reliability is bigger than just the software itself and involves more than developers making implementation mistakes. In the contract, you might have said something you didn’t mean (if you are in a relationship, you might have come across that problem). Or you might have meant something you didn’t say and the proof is therefore based on assumptions that don’t apply to your situation. Or you haven’t thought of everything you need (ever went shopping?). In these cases, there will still be problems, but at least you know where the problem is not: with the developers. Eliminating the whole issue of implementation mistakes would be a huge step towards more reliable and more secure systems.

Sounds like science fiction?

The L4.verified project demonstrates that such contracts and proofs can be done for real-world software.

It looks something like this:
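To give a flavor of what such a machine-checked contract and proof can look like, here is a toy sketch in Lean (L4.verified itself works in Isabelle/HOL, and this vending-machine example is hypothetical, not their code):

```lean
-- Hypothetical toy contract, sketched in Lean. The machine dispenses
-- one item while stock remains, otherwise it does nothing.
def vend (stock sold : Nat) : Nat :=
  if sold < stock then sold + 1 else sold

-- The "contract": however often vend runs, items sold never exceed stock.
-- The proof assistant checks every step; there is no test-coverage gap.
theorem vend_le_stock (stock sold : Nat) (h : sold ≤ stock) :
    vend stock sold ≤ stock := by
  unfold vend
  split
  · rename_i hlt
    exact hlt          -- sold < stock is exactly sold + 1 ≤ stock on Nat
  · exact h            -- nothing dispensed, so the bound is unchanged
```

The scale is obviously different (the seL4 kernel is thousands of lines of C), but the principle is the same: the specification is a theorem, and the code is not accepted until the machine confirms the proof.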

The goal of L4.verified apparently is to build a system of proof that a machine can handle on its own.

If this reminds you of “The number 42” or “I’m sorry Dave, I’m afraid I can’t do that”, then you obviously have been reading too much science fiction.

The machines will have to be able to handle these three tasks to be successful:

  1. Pose a correct audit question
  2. Answer within a reasonable time
  3. Prove that the answer is reliable

This translates directly into the future of audits, especially in cloud. Simplification and atomisation coupled with verification is a great model for security, but even better for compliance.

I will discuss this in more detail tonight at Cloud Camp, Silicon Valley at the IBM Innovation Center.