L4.verified: A Formally Correct Operating System Kernel

The L4.verified project has a beautifully written introduction. Its authors argue eloquently (which I take as a good sign for their code) that it is possible to eliminate risk from specific areas of development.

Imagine your company is commissioning a new piece of vending software. Imagine you write down in a contract precisely what the software is supposed to do. And then — it does. Always. And the developers can prove it to you — with an actual mathematical, machine-checked proof.
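
To make the idea concrete, here is a minimal sketch in Lean 4 of what a “contract plus machine-checked proof” can look like. This is my own toy illustration, not code from the L4.verified project (whose proofs about the seL4 kernel are written in Isabelle/HOL), and the names changeOwed and change_le_credit are invented for the example:

    -- The implementation: change owed by a toy vending machine, in cents.
    def changeOwed (credit price : Nat) : Nat :=
      credit - price

    -- The contract, written down precisely: the machine never returns more
    -- change than the credit that was inserted.
    theorem change_le_credit (credit price : Nat) :
        changeOwed credit price ≤ credit :=
      -- This proof term is checked mechanically by Lean’s kernel;
      -- nobody has to take the claim on faith.
      Nat.sub_le credit price

If the implementation violated the contract, the proof would simply not go through; that is the guarantee being described.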

Sounds to me like they’re making the case for compliance. It’s not just a checklist; it’s proof of something.

I have presented this idea in cloud terms at conferences for the past few years and tried to make it clear, but I have to give kudos to the L4.verified authors: their explanation is tight.

Here’s my spin on things:

  • When someone tells themselves they are secure, they are done.
  • When someone tells someone else they are secure, they then have to prove that statement and show their confidence intervals (e.g. tests and error rates or deviation).

This is the difference between security and compliance. The latter requires proof with peer review. L4.verified says it can prove security through an automated proof system: compliance by design.

…the issue of software security and reliability is bigger than just the software itself and involves more than developers making implementation mistakes. In the contract, you might have said something you didn’t mean (if you are in a relationship, you might have come across that problem). Or you might have meant something you didn’t say and the proof is therefore based on assumptions that don’t apply to your situation. Or you haven’t thought of everything you need (ever went shopping?). In these cases, there will still be problems, but at least you know where the problem is not: with the developers. Eliminating the whole issue of implementation mistakes would be a huge step towards more reliable and more secure systems.

Sounds like science fiction?

The L4.verified project demonstrates that such contracts and proofs can be done for real-world software.

It looks something like this: the goal of L4.verified, apparently, is to build a system of proof that a machine can handle on its own.

If this reminds you of “The number 42” or “I’m sorry Dave, I’m afraid I can’t do that”, then you obviously have been reading too much science fiction.

The machines will have to be able to handle these three tasks to be successful:

  1. Pose a correct audit question
  2. Answer within a reasonable time
  3. Prove that the answer is reliable
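
Here is what those three tasks can look like in miniature, again as a toy Lean 4 sketch I made up for illustration (the port allow-list, the names allowedPorts and auditPasses, and the question itself are all hypothetical; this is not L4.verified’s tooling): the audit question is posed precisely, automation answers it in negligible time, and the answer comes with a proof object that Lean’s kernel re-checks.

    -- 1. Pose a precise audit question: is every port on this (made-up)
    --    allow-list below 1024?
    def allowedPorts : List Nat := [22, 80, 443]

    def auditPasses : Bool :=
      allowedPorts.all fun p => Nat.blt p 1024

    -- 2. Let automation answer the question, and
    -- 3. keep the resulting proof object, which the kernel re-checks, so the
    --    answer does not rest on trust in the tactic that produced it.
    theorem audit_answer_checked : auditPasses = true := by
      decide

The point is the shape of the artefact: the answer is not an assertion but a certificate that a small, independent checker can verify.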

This translates directly into the future of audits, especially in the cloud. Simplification and atomisation, coupled with verification, make a great model for security, and an even better one for compliance.

I will discuss this in more detail tonight at Cloud Camp Silicon Valley, at the IBM Innovation Center.
