AWS Object Expiration

Amazon has announced that you can now schedule the deletion of objects in Simple Storage Service (S3). It also warns that there can be a delay between a scheduled expiration and the actual deletion, and that an expiration rule with an empty prefix will expire every object in the bucket.

Some objects that you store in an S3 bucket might have a well-defined lifetime. For example, you might be uploading periodic logs to your bucket. After a period of time, you might not need to retain those log objects. In the past, you were responsible for deleting such objects when you no longer needed them. Now you can use Object Expiration to specify a lifetime for objects in your bucket.

With Object Expiration, when an object reaches the end of its lifetime, Amazon S3 queues it for removal and removes it asynchronously. There may be a small lag between the expiration date and the date at which Amazon S3 removes an object. You are not charged for storage time associated with an object that has expired.
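As a sketch of what such a rule looks like (the rule ID and the `logs/` prefix here are hypothetical examples, not from the announcement), a lifecycle configuration expiring log objects 30 days after creation might be written as:

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>expire-old-logs</ID>
    <!-- Scope of the rule: an empty <Prefix/> would match, and
         eventually expire, every object in the bucket -->
    <Prefix>logs/</Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>30</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```

The Prefix element is what scopes the rule, which is why the warning above about leaving it empty matters: an empty prefix applies the expiration to the whole bucket.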

This raises the question of whether an expiration action is reversible, since the object technically has not yet been deleted. That matters not just from a forensics point of view but also for sneakily avoiding charges.

Final ATSB Report on 2008 Qantas Flight QF72

The Australian Transport Safety Bureau has released a comprehensive set of reports with an impressive amount of detail on the “In-flight upset – Airbus A330-303, VH-QPA, 154 km west of Learmonth, WA, 7 October 2008”.

Although the FCPC [flight control primary computer] algorithm for processing AOA [angle of attack] data was generally very effective, it could not manage a scenario where there were multiple spikes in AOA from one ADIRU [air data inertial reference unit] that were 1.2 seconds apart. The occurrence was the only known example where this design limitation led to a pitch-down command in over 28 million flight hours on A330/A340 aircraft, and the aircraft manufacturer subsequently redesigned the AOA algorithm to prevent the same type of accident from occurring again.

Each of the intermittent data spikes was probably generated when the LTN-101 ADIRU’s central processor unit (CPU) module combined the data value from one parameter with the label for another parameter. The failure mode was probably initiated by a single, rare type of internal or external trigger event combined with a marginal susceptibility to that type of event within a hardware component. There were only three known occasions of the failure mode in over 128 million hours of unit operation. At the aircraft manufacturer’s request, the ADIRU manufacturer has modified the LTN-101 ADIRU to improve its ability to detect data transmission failures.
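The quoted failure mode, a data value paired with the wrong parameter's label, can be sketched in terms of the ARINC 429 word format used by such avionics buses. The label numbers, field widths, and values below are simplified illustrative assumptions, not the actual LTN-101 internals:

```python
# Illustrative sketch of an ARINC-429-style word: an 8-bit label
# identifies the parameter, and a data field carries its value.
# Label numbers and the altitude value here are made up for the example.

LABELS = {0o221: "angle_of_attack", 0o203: "pressure_altitude"}

def encode(label, data):
    """Pack an 8-bit label and a 19-bit data field into one word."""
    return (data & 0x7FFFF) << 8 | (label & 0xFF)

def decode(word):
    """Recover (parameter name, data field) from a word."""
    label = word & 0xFF
    data = (word >> 8) & 0x7FFFF
    return LABELS.get(label, "unknown"), data

# Normal transmission: altitude data under the altitude label.
altitude_word = encode(0o203, 37000)

# Failure mode: the CPU pairs the altitude data field with the AOA
# label, so downstream computers decode an enormous AOA spike.
corrupted_word = encode(0o221, 37000)

print(decode(altitude_word))   # ('pressure_altitude', 37000)
print(decode(corrupted_word))  # ('angle_of_attack', 37000)
```

The receiver has no way to tell the corrupted word from a legitimate one; the label is the only thing that says what the data means, which is why a label mixup surfaces as a plausible-looking but wildly wrong parameter value.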

The failure caused the aircraft to make two abrupt pitch-down movements, as depicted in a report video (5MB AVI) below. The sudden dive injured over 100 passengers and 75% of the crew; 39 people were treated in hospital.

The simple explanation is that frequent data spikes had not been anticipated in the ADIRU design. Now that the FCPC algorithms for processing AOA have been updated to handle this rare issue, the report is classified as complete.
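A simplified sketch of the scenario the report describes: a monitor that rejects a spike by holding the last good value for 1.2 seconds, but then takes the next sample at face value once the hold expires, will pass a second spike timed to land exactly at that moment. The threshold, sample rate, and hold logic below are illustrative assumptions, not Airbus's actual FCPC implementation:

```python
# Sketch of a spike-rejection filter with a fixed memorization period.
# All parameters are illustrative, not the real FCPC values.

HOLD_SECONDS = 1.2      # memorization period after a detected spike
SPIKE_THRESHOLD = 10.0  # deviation (degrees) treated as a spike

def filter_aoa(samples):
    """samples: list of (time_s, aoa_deg) pairs. Returns filtered AOA."""
    out = []
    last_good = samples[0][1]
    hold_until = None
    for t, aoa in samples:
        if hold_until is not None:
            if t < hold_until:
                out.append(last_good)   # within the hold: use memorized value
                continue
            # The flaw: once the hold expires, the next sample is taken
            # at face value, with no fresh spike check.
            hold_until = None
            last_good = aoa
            out.append(aoa)
            continue
        if abs(aoa - last_good) > SPIKE_THRESHOLD:
            hold_until = t + HOLD_SECONDS  # spike detected: start the hold
            out.append(last_good)
        else:
            last_good = aoa
            out.append(aoa)
    return out

# 10 Hz samples with spikes at t = 0.1 s and t = 1.3 s, i.e. 1.2 s apart.
times = [round(i * 0.1, 1) for i in range(15)]
samples = [(t, 50.0 if t in (0.1, 1.3) else 2.0) for t in times]
filtered = dict(zip(times, filter_aoa(samples)))

print(filtered[0.1])  # 2.0, the first spike is rejected
print(filtered[1.3])  # 50.0, the second spike is accepted as the hold expires
```

A single spike, or two spikes at any other separation, are handled fine; only the precise 1.2-second spacing defeats the filter, which fits with this being the sole occurrence in over 28 million flight hours.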

However, the report leaves open the possibility of a “rare type of…trigger event”. It does not, for example, completely rule out speculation about interference from Naval Communication Station Harold E. Holt — “the most powerful [very low frequency] transmission station in the Southern Hemisphere” — near Learmonth Airport. Note the locations of the three similar spike events.

The closest the report comes to a final word on a trigger event is to say the possibilities were “found to be unlikely”:

…the failure mode was probably initiated by a single, rare type of trigger event combined with a marginal susceptibility to that type of event within the CPU module’s hardware. The key components of the two affected units were very similar, and overall it was considered likely that only a small number of units exhibited a similar susceptibility.

Some of the potential triggering events examined by the investigation included a software ‘bug’, software corruption, a hardware fault, physical environment factors (such as temperature or vibration), and electromagnetic interference (EMI) from other aircraft systems, other on-board sources, or external sources (such as a naval communication station located near Learmonth). Each of these possibilities was found to be unlikely based on multiple sources of evidence. The other potential triggering event was a single event effect (SEE) resulting from a high-energy atmospheric particle striking one of the integrated circuits within the CPU module. There was insufficient evidence available to determine if an SEE was involved, but the investigation identified SEE as an ongoing risk for airborne equipment.

Although the trigger may continue to be unknown, the ATSB report is complete based on confidence that the vulnerability has been resolved by changes to the FCPC algorithms. It’s a good counter-example to my occasional point that to be secure is to be vulnerable. Here is a case where the risk from a vulnerability is so high, and the cost of a fix so low, that it had to be patched.

BSI Study: Vblock Threats and Countermeasures

Germany’s Bundesamt für Sicherheit in der Informationstechnik (BSI) has announced the release of a new cloud security study:

Unter Mitwirkung des Bundesamts für Sicherheit in der Informationstechnik (BSI) hat die VCE-Koalition (Virtual Computing Environment Coalition, gebildet von Cisco und EMC mit Investitionen von VMware und Intel) eine Studie zum Thema “Gefährdungen und Gegenmaßnahmen beim Einsatz von VCE Vblock” erstellt. Die Studie beschreibt ausführlich die Gefährdungen, die sich aus Betrieb und Nutzung eines VCE Vblocks ergeben und zeigt in Anlehnung an die IT-Grundschutz-Kataloge des BSI Maßnahmen zum sicheren Betrieb eines Vblocks auf. Hierbei wurde der Fokus auf Cloud-spezifische Aspekte gelegt. Gefährdungen und Maßnahmen, die bereits heute in den IT-Grundschutz-Katalogen aufgeführt sind, werden in der vorliegenden Studie nicht betrachtet. Der VCE Vblock ist ein Infrastrukturpaket, in dem Blade-Server, Virtualisierung, Netzwerk- und Speichertechnologien, Sicherheitskomponenten sowie Funktionalitäten zum Management der IT-Infrastruktur in einer Komplettlösung vereint sind.

Auf Basis der im Mai 2011 veröffentlichten “Sicherheitsempfehlungen für Cloud Computing Anbieter” des BSI ist dies die erste einer Reihe von Studien zum Thema Private Cloud Computing, an denen das BSI zusammen mit verschiedenen Technologieanbietern arbeitet. Ziel der Studien ist es, die Sicherheitsempfehlungen des BSI um detailliertere und tiefergehende Sicherheitsanalysen von Cloud Computing Systemen mit besonderem Fokus auf Private Clouds zu erweitern. Zielgruppe der Studien sind in erster Linie IT-Verantwortliche in Unternehmen, Behörden und Institutionen, Administratoren sowie IT-Architekten für Virtualisierung und Informationssicherheit.

I couldn’t find a translation, so here’s mine:

The German Federal Office for Information Security has published a study on “threats and countermeasures in the use of VCE Vblock”. The VCE (Virtual Computing Environment) coalition was formed by Cisco and EMC with investments from VMware and Intel. The study describes in detail the risks arising from the operation and use of a VCE Vblock; it shows, based on the BSI IT Baseline Protection catalogs, the appropriate measures for secure operation of a Vblock. The study focuses on cloud-specific aspects; risks and controls already listed in the IT Baseline Protection catalogs are not covered. The VCE Vblock is an infrastructure package that combines blade servers, virtualization, networking and storage technologies, security components, and IT infrastructure management functionality in one complete solution.

Based on the BSI “Security Recommendations for Cloud Computing Providers” published in May 2011, this is the first in a series of studies on private cloud computing that the BSI is producing with various technology providers. The studies aim to extend the BSI security recommendations with more detailed and in-depth security analyses of cloud computing systems, with particular emphasis on private clouds. The target audience is primarily IT managers in companies, government agencies, and institutions, as well as administrators and IT architects for virtualization and information security.

Although the document text is in German, many of the diagrams are still in English. A few use both languages and real-world examples, such as this one, which shows the risk of an Ost VLAN invading a West VLAN. I’m kidding. Not really.

The When and How of Static Code Analysis

Excellent blog post by John Carmack on performing assessments relative to risk management — how to find benefit from static code analysis:

It is important to say right up front that quality isn’t everything, and acknowledging it isn’t some sort of moral failing. Value is what you are trying to produce, and quality is only one aspect of it, intermixed with cost, features, and other factors. There have been plenty of hugely successful and highly regarded titles that were filled with bugs and crashed a lot; pursuing a Space Shuttle style code development process for game development would be idiotic. Still, quality does matter.

[…]

I probably would have talked myself into paying Coverity eventually, but while I was still debating it, Microsoft preempted the debate by incorporating their /analyze functionality into the 360 SDK. /Analyze was previously available as part of the top-end, ridiculously expensive version of Visual Studio, but it was now available to every 360 developer at no extra charge. I read into this that Microsoft feels that game quality on the 360 impacts them more than application quality on Windows does. :-)