MythBusters’ Accident: The Breach Apology

I wrote just a few days ago about the cannonball that got away. Then I forgot about the story until I happened to notice the MythBusters hosts being quoted with contradictory statements in two different papers.

Note the sincerity reported in the San Francisco Chronicle:

After assuring Shetty, his two children, his wife and her parents that they would never again blast a home with heavy ordnance, Hyneman and Savage said the incident was the worst thing that had happened during thousands of experiments over eight years on the Discovery Channel show.

They also promised they wouldn’t air the footage they had filmed of the near-catastrophic cannon shot.

Nice breach response. A personal visit and a promise. Small problem, however: here is a completely different response in the Los Angeles Times:

Savage said that despite the mishap, the cannon episode will still air, most likely in the spring.

The accident is “by far the most serious event that has occurred” on the show, Hyneman added. But he and his partner are taking it seriously. “It’s one of the reasons we have such a good safety record overall,” he said.

People who bring up “we have a good safety record overall” right after a breach aren’t thinking about the victims. People who promise not to air an episode but then say it will be seen in the spring…

Maybe this contradiction is a matter of sloppy reporting and misquotes? Or is it just a fine example of how people in northern and southern California view risk differently (reporter bias)?

An amusing example of how breach responses need to be formed carefully and consistently.

Peoples Gas Breach not Infinite

According to the latest reports from the Chicago area, a contractor who breached an energy company was unable to steal infinity.

It’s still bad news for the “finite” number of records he did access.

Peoples Gas and sister utility North Shore Gas have notified an undisclosed number of customers of the possible theft and potential use of personal information about them by a contract worker.

[…]

They said, though, that the number is “finite and very small.” The companies said they had no information to indicate that the number of customers affected by the possible identity theft would grow.

The contracted employee has been fired and is “subject to criminal investigation and prosecution,” the companies said.

Never mind the X-Men. A cartoon comes to mind with an evil character who has the amazing ability to steal an infinite amount of data. Oh no! It’s…it’s…SAN Man! Egress Man?

Another U.S. Spy Plane Crash

The news today of another unmanned aerial vehicle (UAV) crash makes me think I should have been more explicit in my last blog post about the risk from UAVs. The latest crash was a Predator stationed in the Seychelles to monitor the Indian Ocean and nearby countries for al Qaeda/al Shabaab activity. Clearly, UAVs are very likely to crash, as hinted by a quote in my earlier post:

…a 2008 report by the Congressional Research Service, the nonpartisan analytical arm of Congress, found UAVs have an accident rate 100 percent higher than manned aircraft.

Why do they crash so often? Some like to speculate about the risk of malware. Oh, malware. If only you were not just a symptom. Let’s review some of the many possible factors contributing to a high risk of crashes.

1) Perhaps it is because no human is on board; the asset value used in risk calculations makes safety procedures less thorough. It is costly to lose a UAV, but not costly enough to force better risk management.

2) Perhaps it is caused by emerging/unfamiliar technology. Although planes have been around for a while, remote control seems to have a few quirks, especially when communication is interrupted: planes crash instead of entering a fail-safe return-to-base mode (something smarter than a “predetermined autonomous flightpath”), as sketched below.
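
To make that fail-safe idea concrete, here is a minimal sketch of a lost-link failsafe in Python. The autopilot interface, mode names, and five-second timeout are all my own hypothetical illustration, not any real UAV software:

```python
# Minimal sketch of a lost-link failsafe; all names and the timeout
# value are hypothetical, not taken from any real autopilot.
import time
from enum import Enum, auto


class Mode(Enum):
    REMOTE_CONTROL = auto()   # normal: commands arrive from the ground station
    RETURN_TO_BASE = auto()   # failsafe: fly home autonomously
    PREPLANNED_PATH = auto()  # the riskier "predetermined autonomous flightpath"


LINK_TIMEOUT_S = 5.0  # assumed threshold; a real system would tune this


class Autopilot:
    def __init__(self):
        self.mode = Mode.REMOTE_CONTROL
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        """Called whenever a valid ground-station packet arrives."""
        self.last_heartbeat = time.monotonic()
        if self.mode is Mode.RETURN_TO_BASE:
            self.mode = Mode.REMOTE_CONTROL  # link restored; hand control back

    def tick(self):
        """Called periodically by the flight loop."""
        link_down = time.monotonic() - self.last_heartbeat > LINK_TIMEOUT_S
        if link_down and self.mode is Mode.REMOTE_CONTROL:
            # Fail safe: head home rather than blindly continuing a
            # preplanned path through whatever caused the link loss.
            self.mode = Mode.RETURN_TO_BASE
```

The point is the default behavior: when the link drops, degrade to the safest mode instead of pressing on.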

As an aside, we can’t call these aircraft “drones” because they crash when they lose human guidance. Yet that’s the popular term for them. I try hard not to use it but I still catch myself calling them drones all the time. Given the high rate and sources of error, the military probably wishes they were dealing with drones by now. But I digress…

The real story here is a combination of human risk management errors that has made the crashes more likely, as explained last year in the LA Times.

Pentagon accident reports reveal that the pilotless aircraft suffer from frequent system failures, computer glitches and human error.

Design and system problems were never fully addressed in the haste to push the fragile plane into combat over Afghanistan shortly after the Sept. 11 attacks more than eight years ago. Air Force investigators continue to cite pilot mistakes, coordination snafus, software failures, outdated technology and inadequate flight manuals.

Flight manuals. That says “checklist” to me. Note the details of the federal investigation labeled NTSB Identification: CHI06MA121, covering the 2006 Predator B crash in Arizona.

The investigation revealed a series of computer lockups had occurred since the [U.S. border on a Customs and Border Protection (CBP) Unmanned Aircraft (UA)] began operating. Nine lockups occurred in a 3-month period before the accident, including 2 on the day of the accident before takeoff and another on April 19, 2006, 6 days before the accident. Troubleshooting before and after the accident did not determine the cause of the lockups. Neither the CBP nor its contractors had a documented maintenance program that ensured that maintenance tasks were performed correctly and that comprehensive root-cause analyses and corrective action procedures were required when failures, such as console lockups, occurred repeatedly.

[…]

The pilot’s failure to use checklist procedures when switching operational control from PPO-1 to PPO-2, which resulted in the fuel valve inadvertently being shut off and the subsequent total loss of engine power, and lack of a flight instructor in the [ground control station], as required by the CBP’s approval to allow the pilot to fly the Predator B. Factors associated with the accident were repeated and unresolved console lockups, inadequate maintenance procedures performed by the manufacturer, and the operator’s inadequate surveillance of the UAS program.
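
The fuel-valve detail is telling: a handoff checklist can be enforced in software instead of left to a flight manual. Here is a minimal sketch of what that might look like; the step names are my own illustration, not taken from the NTSB report or the actual Predator B procedures:

```python
# Minimal sketch of a software-enforced handoff checklist; the steps
# are illustrative, not actual Predator B procedures.
HANDOFF_CHECKLIST = [
    "match_lever_positions",  # e.g., confirm the fuel valve matches on both consoles
    "confirm_flight_mode",
    "verify_target_console_responsive",
    "confirm_instructor_present",
]


def transfer_control(completed_steps, source="PPO-1", target="PPO-2"):
    """Refuse the handoff unless every checklist step is confirmed."""
    missing = [step for step in HANDOFF_CHECKLIST if step not in completed_steps]
    if missing:
        raise RuntimeError(
            f"Handoff {source} -> {target} blocked; incomplete checklist: {missing}"
        )
    print(f"Control transferred from {source} to {target}")


# Usage: a handoff attempt that skips matching lever positions is blocked.
try:
    transfer_control({"confirm_flight_mode", "confirm_instructor_present"})
except RuntimeError as err:
    print(err)
```

A handoff that hard-stops on an incomplete checklist turns a paper procedure into an actual control.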

I am reminded of a quote from the 1940s in another recent post about aviation risks.

Life Begins With a Checklist…and it May End if You Don’t Use It

50km Wireless Link for the Farallon Islands

I thought I wrote about this before but it doesn’t seem to show up anywhere. Tim Pozar gave an excellent presentation on how he and Matt Peterson built a wireless link from San Francisco to the Farallon Islands.

WMV and PDF available from NANOG49

The presentation will cover the requirements for a very limited budget and power consumption, issues of remote deployments, long-distance microwave links over the ocean, and sensitivity to the largest seabird breeding colony in the contiguous United States.

Additional network topics will be the requirement to support various services on the island via VLANs, fiber deployment to overcome distance and lightning, RF path calculations, “tuning” of the radio modulation schemes to provide the best up-time, and remote support of a location that may only be accessible once a month.
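
As a rough illustration of what those RF path calculations involve for a 50km over-water shot, here is a back-of-the-envelope sketch of free-space path loss and first Fresnel zone clearance; the 5.8 GHz frequency is my assumption, not a figure from the talk:

```python
import math

# Back-of-the-envelope RF path math for a long over-water link.
# The 5.8 GHz frequency is assumed for illustration, not from the talk.
DISTANCE_KM = 50.0
FREQ_GHZ = 5.8

# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
fspl_db = 20 * math.log10(DISTANCE_KM) + 20 * math.log10(FREQ_GHZ * 1000) + 32.44

# First Fresnel zone radius at midpath: r(m) = 17.32 * sqrt(d_km / (4 * f_GHz))
fresnel_m = 17.32 * math.sqrt(DISTANCE_KM / (4 * FREQ_GHZ))

print(f"Free-space path loss over {DISTANCE_KM:.0f} km: {fspl_db:.1f} dB")
print(f"First Fresnel zone radius at midpath: {fresnel_m:.1f} m")
# Rule of thumb: keep about 60% of the first Fresnel zone clear of
# obstructions (including the ocean surface), which drives antenna
# height at both ends of the link.
```

At 5.8 GHz over 50km that works out to roughly 142 dB of path loss and a midpath Fresnel radius of about 25 meters, which is why the link budget and antenna height get so much attention on a shot like this.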


Sailing around the Farallon Islands: Photo by me