[Haskell] Re: Formal verification of high-level language
implementation of critical software?
David von Oheimb
David.von.Oheimb at siemens.com
Fri Mar 20 14:51:22 EDT 2009
Hello,
thanks again to all who have responded to the request I sent on Feb 9.
We got a couple of really good pointers and inspiring discussions. As
requested by Bruce Watson, below I reproduce them for the benefit of all.
Cheers,
David
Prof.Dr. Bruce W. Watson wrote:
> Hi David, I'm sorry to say that I don't have the answers for you,
> but coming from Eindhoven, I'm also hoping that someone will have a good
> response for you. If they do, will you summarize it for this group?
> Thanks and best regards to you, Bruce.
***********************************************************************
Jan Jürjens wrote:
> Hi again,
>
> you should have a look at Andy's work using F-sharp if you don't already know it.
>
> Jan
Andy Gordon (MSR) wrote:
> Hi David,
>
> The work Jan is referring to involved a suite of verification tools for F#
Inserted by DvO:
http://research.microsoft.com/en-us/um/cambridge/projects/fsharp/default.aspx
> programs called the CVK (Crypto Verification Kit).
> See http://research.microsoft.com/cvk for an overview.
> We have done formal verification of self-written F# programs for the CardSpace and TLS protocols, for example.
> The paper on CardSpace is here:
> http://research.microsoft.com/en-us/um/people/adg/Publications/details.htm#wcf-cardspace
> The TLS implementation is here:
> http://www.msr-inria.inria.fr/projects/sec/fs2cv/tlsimpl.pdf
>
> Regards,
>
> Andy
***********************************************************************
John Matthews wrote:
> Hi David,
>
> You posted a question a month back on the Haskell mailing list about
> whether any software components have been written in a high level
> language that have also been formally verified. I can think of a few:
>
> * There have been lots of software programs written in ACL2's first
> order logic, which also happens to be a subset of Common Lisp.
>
> * NICTA's L4.Verified project uses Haskell to write an executable high
> level model of their L4 microkernel. But they end up writing the actual
> application in C and proving refinement, so that may not be what you are
> looking for.
>
> * At Galois we have developed a domain specific functional language
> called Cryptol for specifying cryptographic algorithms like DES and AES.
> Cryptol is a purely functional language with compilers for VHDL, C, and
> Haskell. Cryptol also comes with formal methods tools, including a
> MiniSat-based equivalence checker, a translator to Berkeley's ABC
> equivalence checker, and a prototype translator to Isabelle. You can
> find out more about Cryptol here:
>
> http://www.cryptol.net
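For flavour, here is a rough Haskell analogue (not actual Cryptol syntax; the toy Feistel cipher and round function below are invented for illustration) of the kind of purely functional specification such equivalence checkers compare against C or VHDL implementations:

```haskell
import Data.Bits (xor, shiftL)
import Data.Word (Word32)

-- A toy, invented round function (NOT a real cryptographic primitive).
f :: Word32 -> Word32 -> Word32
f k x = (x + k) `xor` (x `shiftL` 3)

-- One Feistel round: mix one half with the round function, then swap.
feistelRound :: Word32 -> (Word32, Word32) -> (Word32, Word32)
feistelRound k (l, r) = (r, l `xor` f k r)

-- Its inverse: recompute the same f-value and undo the swap.
invRound :: Word32 -> (Word32, Word32) -> (Word32, Word32)
invRound k (l', r') = (r' `xor` f k l', l')

-- Encryption folds the rounds over the key schedule; decryption
-- applies the inverse rounds in reverse key order. Both are ordinary
-- equational definitions, which is what makes this specification
-- style amenable to equivalence checking.
encrypt, decrypt :: [Word32] -> (Word32, Word32) -> (Word32, Word32)
encrypt keys block = foldl (flip feistelRound) block keys
decrypt keys block = foldl (flip invRound) block (reverse keys)
```

The round-trip property `decrypt ks (encrypt ks b) == b` holds by construction for any key schedule, since each `invRound k` inverts `feistelRound k`.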
>
> * We also developed a small monadic imperative language in Isabelle/HOL
> for writing a security-critical cross-domain disk block access
> controller. We proved safety and data separation properties about the
> controller, and in fact the data separation proof method was inspired in
> part by your ESORICS 2004 paper!
Inserted by DvO: http://david.von-oheimb.de/cs/papers/Noninfluence.html
> We then wrote a simple translator from our monadic Isabelle/HOL subset to C.
>
> My colleague Paul Graunke actually carried out the work on this project,
> I mainly acted as an Isabelle/HOL consultant. Paul published an experience
> report at the System Software Verification workshop (SSV'08), titled
> "Verified Safety and Information Flow of a Block Device".
>
>
> * Levent Erkok and I wrote a joint paper with Alex Krauss, Florian
> Haftmann, and Lukas Bulwahn in TPHOLs 2008 titled "Imperative Functional
> Programming in Isabelle/HOL",
Inserted by DvO: http://web.cecs.pdx.edu/~jmatthew/papers/TPHOLs08.pdf
> where we showed how to model a
> Haskell-style state monad with first-class polymorphic heap references
> and mutable arrays. Isabelle can then generate efficient imperative
> Haskell, SML, and OCaml code for programs written in this monad. The
> case studies include an efficient SAT checker and an imperative bytecode
> verifier for Jinja, both of which were verified.
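The imperative style that the paper's heap monad models can be sketched in plain GHC Haskell with the ST monad (a hypothetical example for illustration, not one of the paper's case studies):

```haskell
import Control.Monad (forM_)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newArray, readArray, writeArray)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Imperative-style sum of squares using a mutable array and a mutable
-- accumulator, all encapsulated in the ST monad: first-class heap
-- references and mutable arrays, but a pure overall result.
sumSquares :: Int -> Int
sumSquares n = runST $ do
  arr <- newArray (1, n) 0 :: ST s (STUArray s Int Int)
  forM_ [1 .. n] $ \i -> writeArray arr i (i * i)
  acc <- newSTRef 0
  forM_ [1 .. n] $ \i -> do
    v <- readArray arr i
    modifySTRef' acc (+ v)
  readSTRef acc
```

Because `runST` seals off the mutation, `sumSquares` is an ordinary pure function, which is roughly why such a monad can be given a logical model inside a theorem prover.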
[...]
>
> All the best,
> -john
***********************************************************************
Eric Verhulst wrote:
>
> Dear Mr. Oheimb,
>
> I am not really surprised about this observation. Higher level languages
> provide more abstraction and most programming errors are due to the
> nitty-gritty details of the implementation. When using a language like C,
> one should be aware that the language is basically a form of a higher level
> assembler exposing the hardware to the program. In addition, it has a
> "dirty" syntax resulting in about 140 MISRA rules + 50 non-documented ones.
> Why not change the language instead?
>
> But performance is often the issue and hence C is often used. Small code
> size also means less power and less dead code, the latter also being a safety
> and security issue. In my personal view, it is hence a fallacy to say that
> higher level languages are the solution. HLLs contribute, but a clean and
> hence simple semantic model and a clean syntax are probably of higher
> interest than a high level language that carries too much complexity and
> overhead.
>
> Our own experience developing a network centric RTOS using formal modeling
> also proved that the architecture is an important factor that is often
> overlooked. While the initial set-up was to use formal modeling mainly for
> verifying the software, it was used from the beginning for developing the
> architecture and the functionality. The result was a big surprise. While the
> team had more than ten years experience developing a similar RTOS in the
> traditional way, the resulting architecture was much more straightforward
> and clean, resulting in a code size that was 5 to 10 times smaller. The
> architecture was also much safer (no risk of buffer overflow) and much more
> scalable. The initial design phase took about 18 months but we had running
> code in just 2 weeks. Later on, most issues were related to hardware issues
> e.g. when developing drivers.
>
> My personal view is that for software like an RTOS, high level languages are
> not really suitable. But for applications I believe that code writing should
> be avoided at all cost by using code generators that start from user defined
> models, eventually complemented by formal provers (although it is early days
> for such tools). An example is Matlab/simulink, at least in the domain of
> e.g. algorithms. Another promising approach is to use languages (like B)
> that allow incremental refinement with formal model checking in parallel.
> Once such a design is complete, the risk of introducing errors even when
> recoding into C is much lower than when writing by hand.
> Hence, we should see these modeling tools like HLL. Nevertheless, for
> educational purposes, such HLLs should be used a lot more often as they help
> to learn to think in an abstract way (while most software engineers are
> immediately influenced by their prior implementations and patterns.) Once
> they master this, they will also write better programs in C.
>
> For more information about the project see www.altreonic.com and
> www.openlicensesociety.org. The product is called OpenComRTOS.
>
> Best regards,
>
> Eric Verhulst
***********************************************************************
Gerwin Klein wrote [translated]:
> our seL4 kernel of course! It is written in Haskell and verified,
> then re-implemented in C, optimized, and verified (the latter is
> not yet finished, but just 6-7 weeks to go).
I responded:
Cool! Having a look at the picture and description at
http://ertos.org/research/l4.verified/ it seems that the verification
relation between the Haskell prototype and the actual code is not the
usual one, where one would have (security/access control) objectives,
against which one verifies the HLD, against which one verifies the
LLD which is the Haskell implementation, against which one verifies
the optimized C implementation, right? Why did you use at the LLD level
an automatically translated C version of the Haskell prototype?
Gerwin Klein responded:
> Greg Kimberly wrote:
>> L4.verified does look interesting.
>>
>> Is the reason that the Haskell version is not part of the proof chain because
>> the "Automatic Translation" from Haskell to C is problematic from a proof
>> perspective?
>
> The translation from Haskell to C in this case is not automatic, the C code is
> completely hand-written. It follows the structure of the Haskell code closely,
> but it is hand-optimised and very close to the bare metal. It is a real,
> high-performance microkernel, competitive in performance with L4::Pistachio
> and the current commercial OKL4 releases on ARM by Open Kernel Labs.
> These are the fastest microkernels in the world.
>
> Greg Kimberly wrote:
>> If that's the case, what would it take to fix that?
>
> Having completed about 60% of the C proofs, we think we might have been able to generate "boring" parts of the C kernel from Haskell, but for this extremely performance-critical application, there will always be a lot of manual tweaking involved, so it is unclear whether we could usefully get rid of this step.
>
> Some parts of the current kernel (packed bitfield data structures in tagged unions) are automatically generated, with automatically generated implementation proofs, but not from Haskell directly. The data structures have their own specification language specifying hardware layout etc. This technique is used in other L4 kernels as well (without the proofs), because implementers don't trust the compiler to get packed bitfields right.
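To illustrate (with an invented layout, not seL4's actual one), the generated accessors for such a packed word have roughly this shape; the kernel's generator additionally emits Isabelle proofs of properties such as a set followed by a get being the identity:

```haskell
import Data.Bits (complement, shiftL, shiftR, (.&.), (.|.))
import Data.Word (Word32)

-- Hypothetical packed layout: a 32-bit word holding a 4-bit tag in
-- bits 0..3 and a 20-bit "frame" field in bits 12..31. A bitfield
-- generator would emit get/set accessor pairs like these, so that
-- implementers never hand-write the masks and shifts.
getTag :: Word32 -> Word32
getTag w = w .&. 0xF

setTag :: Word32 -> Word32 -> Word32
setTag w t = (w .&. complement 0xF) .|. (t .&. 0xF)

getFrame :: Word32 -> Word32
getFrame w = w `shiftR` 12

setFrame :: Word32 -> Word32 -> Word32
setFrame w f = (w .&. 0xFFF) .|. (f `shiftL` 12)
```

The point of generating these (and their proofs) mechanically is exactly the one made above: hand-written masks, and compiler-interpreted C bitfields, are both easy to get subtly wrong.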
>
> It turns out, though, that the Haskell <-> C proof is the smallest of the big proof steps in this project. We were careful to shift all deep semantic reasoning to the upper level so that really only syntactic reasoning and optimisations remain.
>
> If the application is not a microkernel, but an application on top of a microkernel, the picture might look quite different. Depending on how large a trusted computing base you are willing to live with, you could run Haskell directly on seL4 (we have a project going on that ports Haskell to bare seL4). You would still have the Haskell compiler and runtime in the TCB, but we think that Haskell is good for application level programming. That means you would only be proving things on the Haskell level, not on C. Much more convenient and efficient.
>
> I believe the guys from Galois Inc in the US (John Matthews et al) might have similar views on this.
Gerwin Klein responded:
> (see also other email). The C version is not automatically translated, it is carefully hand-crafted. In this project, we see the Haskell prototype as an executable specification. The performance difference is enormous (we never intended the Haskell prototype to be fast, so no thought whatsoever went into optimising it).
>
> David von Oheimb wrote:
>> Good to hear that at least for the application level you believe a
>> relatively easily verifiable HLL implementation is efficient enough.
>>
>> No doubt that the translation to the lowest level (down from the LLD) is
>> manual for efficiency reasons. Sorry that I put my question slightly
>> wrong. http://ertos.org/research/l4.verified/ states:
>>
>>> The next level up, the low-level design, is a detailed, executable
>>> specification of the intended behaviour of the C implementation.
>>> This executable specification is derived automatically from a prototype
>>> of the kernel, developed in the seL4 project. The prototype was written
>>> in the high-level, functional programming language Haskell
>>
>> So the LLD is not written in C (as I wrongly assumed above) but in some
>> other executable language (Isabelle/HOL), as mentioned in section 4.7 of
>> http://ertos.org/publications/papers/Klein_08.pdf. My question should
>> have read: Why did you not use the Haskell prototype itself at the LLD
>> level, but an automatically translated version of it?
>
> Yes, I should have been clearer on that. We implemented a prototype in Haskell and then translated it automatically into Isabelle/HOL. This is necessary firstly just because Haskell is not any kind of direct input format of Isabelle. Also, the translation forced us to stay within a subset of Haskell that maps nicely to HOL (Haskell, like any real language, has too many features).
>
>
>
>>> In this project, we see the Haskell
>>> prototype as an executable specification. The performance difference is
>>> enormous (we never intended the Haskell prototype to be fast, so no
>>> thought whatsoever went into optimising it).
>>
>> Still I do not understand which was the main part of my question, namely
>> why according to http://ertos.org/research/l4.verified/ your Haskell
>> reference implementation is not directly in the usual verification
>> chain.
>
> It cannot be, you can always only verify translations of programs unless you write them directly in the theorem prover (Konrad Slind et al have some nice work on compiling HOL programs directly into assembly). We make a qualitative distinction between the translation Isabelle <-> Haskell and Isabelle <-> C, because we did not have to care about the correctness of the former, but we do care about the correctness of the latter. We therefore kept Isabelle <-> C absolutely minimal and as syntactic as possible. It is really more parsing C into Isabelle than doing any translation. Formally, both of them are translations, though.
>
> There is no reason that Isabelle <-> Haskell has to be a big translation step. In fact TUM and NICTA have some student projects on precisely a better, sound translation mechanism for Haskell into Isabelle.
>
>
>> I am actually confused by what you wrote in the other email:
>>
>>> It turns out, though, that the Haskell <-> C proof is the smallest of the big
>>> proof steps in this project.
>>
>> which seems to contradict http://ertos.org/research/l4.verified/ as it
>> suggests that the Haskell version itself is in the verification chain.
>
> Sorry, I tend to shorten "the Isabelle translation of the Haskell prototype" to just "Haskell" too often. I can see that it was confusing. Did it now become clear, though?
>
>
>> So as stated above I wonder why you use an independent executable LLD
>> rather than the Prototype written in HLL (here Haskell).
>> (You could also have used the Haskell prototype in between the HLD and
>> LLD or in between the LLD and the hand-written C code, but this would
>> add an extra (presumably superfluous) node in the verification chain.)
>
> No, it's actually just not possible. Isabelle does not understand Haskell (there is no theorem prover that does), it's always a translation (possibly a very small translation step). Our LLD is not independent, though. It's the product of the automatic translation. It's Haskell parsed into Isabelle if you want, although in this case there was quite a bit of transformation going on.
I wrote:
Maybe you have also seen on the Haskell mailing list the response of
Eric Verhulst <eric.verhulst at openlicensesociety.org>:
[see above]
Gerwin Klein responded:
>> Eric Verhulst wrote:
>>> I am not really surprised about this observation. Higher level languages
>>> provide more abstraction and most programming errors are due to the
>>> nitty-gritty details of the implementation. When using a language like C,
>>> one should be aware that the language is basically a form of a higher level
>>> assembler exposing the hardware to the program.
>
> I completely agree, that's why we're using it as the kernel implementation language. For most things on top of the kernel it might not be as good a choice, though. Much nicer to work in something like Haskell directly.
>
>> Eric Verhulst wrote:
>>> In addition, it has a "dirty" syntax resulting in about 140 MISRA rules + 50 non-documented ones.
>
> We solve this by accepting only a true subset of C into the theorem prover. This can be made large enough to do kernel programming comfortably. It still has a lot of dirty semantic constructs that kernel programmers like to use, but that's the whole point of using it.
>
> David von Oheimb wrote:
>> I guess you mean not only as input to the theorem prover, but also for
>> the actual kernel code. This is certainly very helpful for the cleanness
>> of the code and lowers the verification burden.
>
> Yes, absolutely. Although the kernel programmers did not necessarily like all parts of the restriction, they tend to agree that it produced cleaner code.
>
> David von Oheimb wrote:
>> I agree one cannot use HLL for very performance critical systems.
>> But how about using an HLL as a form of (executable) formal model / LLD?
>
> Modulo a (possibly very small) translation step into the logic of the theorem prover, that's what we are advocating.
>
> David von Oheimb wrote:
>> This actually includes part of the answer to my above question: the LLD
>> is an Isabelle/HOL model (which is isomorphic to the Haskell prototype).
>
> Yes!
>
> David von Oheimb wrote:
>> In this way you avoid having the semantics of Haskell directly in the
>> verification chain, but you have it indirectly (and not really formally)
>> because of the automatic translation between the Haskell prototype and
>> the Isabelle/HOL version of it. Therefore, the Haskell prototype is,
>> strictly speaking, not an authoritative specification but the technical
>> inspiration for the LLD in Isabelle/HOL.
>
> Exactly. If you want to build more directly on the HLL, the translation step becomes part of your trusted tool chain and you have to invest some work into making sure that it is sound. This is clearly worthwhile, though, and would be a once-for-all effort.
>
> David von Oheimb wrote:
>> I presume the HLD is also in Isabelle/HOL. Despite your general
>> bottom-up process, wouldn't it also make sense to have the HLD influence
>> the LLD (and still get something useful, but hopefully more elegant)?
>
> That did indeed happen. We started with the LLD, but we developed the HLD concurrently (lagging slightly behind, but giving input into new iterations). Some generalisations/restrictions become more obvious when you abstract from implementation data structures.
>
>
>> Eric Verhulst wrote:
>>> But performance is often the issue and hence C is often used. Small code
>>> size also means less power and less dead code, the latter also being a safety
>>> and security issue. In my personal view, it is hence a fallacy to say that
>>> higher level languages are the solution. HLLs contribute, but a clean and
>>> hence simple semantic model and a clean syntax are probably of higher
>>> interest than a high level language that carries too much complexity and
>>> overhead.
>
> This depends on the abstraction level of the application, I'd say. For the RTOS/microkernel level, I agree. Too much overhead is involved in using a language like Haskell directly. We used it to describe the semantics, not the actual implementation.
>
>> Eric Verhulst wrote:
>>> Our own experience developing a network centric RTOS using formal modeling
>>> also proved that the architecture is an important factor that is often
>>> overlooked. While the initial set-up was to use formal modeling mainly for
>>> verifying the software, it was used from the beginning for developing the
>>> architecture and the functionality. The result was a big surprise. While the
>>> team had more than ten years experience developing a similar RTOS in the
>>> traditional way, the resulting architecture was much more straightforward
>>> and clean, resulting in a code size that was 5 to 10 times smaller. The
>>> architecture was also much safer (no risk of buffer overflow) and much more
>>> scalable. The initial design phase took about 18 months but we had running
>>> code in just 2 weeks. Later on, most issues were related to hardware issues
>>> e.g. when developing drivers.
>
> Absolutely the same experience here. We used Haskell as the formal modelling language, because the kernel design team was more comfortable with it than with Isabelle/HOL directly, but the difference is almost syntactic only. We have had a very rapid, iterative design cycle with new features often being implemented in less than a day.
>
>> Eric Verhulst wrote:
>>> My personal view is that for software like an RTOS, high level languages are
>>> not really suitable. But for applications I believe that code writing should
>>> be avoided at all cost by using code generators that start from user defined
>>> models, eventually complemented by formal provers (although it is early days
>>> for such tools). An example is Matlab/simulink, at least in the domain of
>>> e.g. algorithms. Another promising approach is to use languages (like B)
>>> that allow incremental refinement with formal model checking in parallel.
>>> Once such a design is complete, the risk of introducing errors even when
>>> recoding into C is much lower than when writing by hand.
>>> Hence, we should see these modeling tools like HLL. Nevertheless, for
>>> educational purposes, such HLLs should be used a lot more often as they help
>>> to learn to think in an abstract way (while most software engineers are
>>> immediately influenced by their prior implementations and patterns.) Once
>>> they master this, they will also write better programs in C.
>
> I'd even go further and say that there are HLLs now that are directly suitable for application use. We tend to use Haskell, because there is a big research group at UNSW doing work on it, but I'm sure there are more.
>
> David von Oheimb wrote:
>> Do you second his view about the benefits of a formal HLD, in his case
>> TLA+/TLC according to http://www.openlicensesociety.org/drupal55/node/9
>
> A formal HLD is definitely beneficial in my view. It all depends a bit on the development process you are using. Our kernel design team was used to a bottom-up process and there is no reason this cannot be supported by formal verification -- we started with the LLD and developed the HLD in parallel. I would guess that the most important aspect of all this was that we found a formal language (here Haskell) that the *kernel design team* was comfortable with. *They* need to design the kernel, they were experienced kernel designers and programmers and new to formal modelling. They had help from us formal guys, but they did most of the LLD modelling. If we (the formal guys) had designed the kernel it would be much more elegant and much more useless.
>
> David von Oheimb wrote:
>> Unfortunately they did not verify the C code against that model. In
>> order to do that, it seems that an implementation in a HLL like Haskell
>> would be a good intermediate step, wouldn't it?
>
> I think so. If you already have the model, it doesn't need to be a programming language. It could be logic directly if the team is comfortable with using it.
>
> David von Oheimb wrote:
>> Yes, and this would avoid bringing the semantics of the HLL into play.
>
> True.
I wrote:
Taking into account that future hardware architectures are multicore
machines which are hard to program manually - Greg Kimberly writes:
> The idea that C is a good representation of what's really going down at the hardware
> level is obviously very compelling when you care about performance. But I wonder how
> the coming world of high-number multi-core changes that? Are we entering a world where
> the cost of maintaining cache coherence becomes more important than a close correspondence
> to a register machine? Functional languages, with their lack of shared state, might be
> a better model then. Might C ironically become considered "too far" from the hardware
> to be a good choice? :-)
Do you see a chance that on multicore systems, future compilers for HLLs
will generate code that is (on average, for large chunks of code) faster
than hand-crafted C/assembly code, such that they can be used even for
performance critical code close to the system level, maybe even for OS
implementation?
Gerwin Klein responded:
> I would agree that they will be much smarter than C compilers, but at the lowest level of software like the kernel, you still need to know precisely what's going on. Too much of the safety and correctness depends on it and there smarter is not better. It's bad enough if the kernel implementers are smart (almost always means harder proofs..). HLLs also tend to rely on OS services to implement their primitives. My current feeling is still that you want to keep the kernel close to bare metal with something like C, but I would expect even more benefit from HLLs on the application and OS component level for multi core.
>
> All this is not to say that C is in any way a perfect language, not even for OS implementation for which it was basically developed. In my view an ideal OS implementation language would have a modern, safe type system possibly with a controlled way of "breaking" it for hardware/performance (but breaking it in a way that produces clear and simple proof obligations), no or extremely minimal required run-time, language constructs for mapping data structures to hardware layouts precisely (even C sucks in this respect), and a very clean semantics. And it shouldn't have all these "features" from the 60s like goto etc, of course..
>
Greg continued writing:
> And to give some idea of why you might expect a greater performance-driven interest in FL's
> in the presence of very-multi-core hardware, check out the recent article in Queue about
> the experience of an IBM team building a general purpose STM (soft trans memory) system:
> http://queue.acm.org/detail.cfm?id=1454466
>
> Basically the cost of read/write barriers are a big problem. You can see the cost
> of the read barriers in particular in the detailed perf analyses. From the summary:
> "Based on our results, we believe that the road ahead for STM is quite challenging.
> Lowering the overhead of STM to a point where it is generally appealing is a difficult
> task, and significantly better results have to be demonstrated. If we could stress
> a single direction for further research, it is the elimination of dynamically
> unnecessary read and write barriers-possibly the single most powerful lever toward
> further reduction of STM overheads. Given the difficulty of similar problems explored
> by the research community such as alias analysis, escape analysis, and so on, this
> may be an uphill battle. Because the argument for TM hinges upon its simplicity and
> productivity benefits, we are deeply skeptical of any proposed solutions to
> performance problems that require extra work by the programmer"
>
> So when they call out "elimination of dynamically unnecessary read and write
> barriers-possibly the single most powerful lever toward further reduction of STM
> overheads" -- it leads one to think about that being precisely the sort of thing
> a powerful type system and pure functional approach would seem to be ideally
> suited to helping. Perhaps functional programming comes under the "extra work
> by the programmer" category :-)
> (Note the comment below the article specifically wondering about Haskell's STM.)
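For reference, Haskell's STM (GHC's stm library) looks like this; the account-transfer sketch is the usual textbook illustration, not an example from the article:

```haskell
import Control.Concurrent.STM

-- An atomic transfer between two accounts. Because Haskell's type
-- system confines mutation inside STM to TVars, the runtime knows
-- statically which reads and writes need transactional barriers --
-- the kind of barrier elimination the Queue article calls for.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)      -- block/retry until funds suffice
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  readTVarIO a >>= print  -- 70
  readTVarIO b >>= print  -- 30
```

Note that `transfer` composes: two transfers wrapped in a single `atomically` commit or retry as one transaction, with no locks visible to the programmer.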
Unfortunately I do not have the time to look at the above in detail;
are there others who might do so, or who already have an opinion/insight
into this issue?
Greg continued writing:
> So if folks can find a way to make writing correct programs more verifiable in
> FL's the performance issue might very well be a non-issue in a few years....
Wouldn't this be great!
Greg also wrote:
> Still, I clearly need to look at current practice in code generators (from models).
> Though I imagine they still involve even larger space/time compromises than using
> HLL's relative to coding in C. So I'm a bit surprised at Mr. Verhulst's enthusiasm
> for them as compared to HLL's. It might be a case of the grass being greener on the
> other side of the fence, I think.
It seems to me that Mr. Verhulst would also be fine with HLLs as a
formal model, and that for very performance critical code he is just
dissatisfied with the efficiency of compilers and code generators, and
therefore prefers (any) formal model that is manually translated to C
for efficiency, right?
Gerwin Klein responded:
> Sounds like it. I might add, that for us it's not just about the efficiency (although that is a big factor), but also about knowing precisely how things map to the hardware.
Gerwin Klein wrote:
> All this is not to say that C is in any way a perfect language, not even
> for OS implementation for which it was basically developed. In my view
> an ideal OS implementation language would have a modern, safe type
> system possibly with a controlled way of "breaking" it for
> hardware/performance (but breaking it in a way that produces clear and
> simple proof obligations), no or extremely minimal required run-time,
> language constructs for mapping data structures to hardware layouts
> precisely (even C sucks in this respect), and a very clean semantics.
> And it shouldn't have all these "features" from the 60s like goto etc,
> of course..
This is obviously why you, and others, use some strict subset of C.
Just a pity that none of them has become a widely accepted coding
standard, or, even better, evolved into a new, clean version of C.
Do you happen to know Scala, and if so, what do you think about it?
http://www.scala-lang.org/node/25
Gerwin Klein responded:
> I haven't used it extensively, but I like it as a language. Makarius is using it for Isabelle quite a bit these days, he might have more insight.
***********************************************************************
Grit Denker wrote:
> David,
> One thing that comes to mind is the work from Mark-Oliver Stehr and
> Carolyn Talcott.
> In particular, I think they formalized in Maude a secure group protocol
> and did some verification on it.
>
> See http://formal.cs.uiuc.edu/stehr/cliques_eng.html
> If you need more info, contact Mark-Oliver Stehr at stehr at csl.sri.com
>
> John Rushby should have a lot more stories about successful use of
> FM. Maybe there is something on http://fm.csl.sri.com/
>
> grit
***********************************************************************
Lars-Henrik Eriksson wrote:
> Hi,
>
> I carried out a formal verification of a Prolog program 10 years ago
> while I was working at Industrilogik L4i AB, a Swedish formal methods
> consultancy which was acquired by Prover Technology (www.prover.com)
> some years ago. The program was a front end to a SAT solver and
> converted a formula in a first order language with finite domains to a
> propositional logic formula. The conversion was slightly tricky since it
> included several simplification steps. The program was just over 500
> lines in size (excluding comments and blank lines).
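The core reduction such a front end performs can be sketched in a few lines of Haskell (a hypothetical miniature, not the verified Prolog program): a quantifier over a finite domain expands into a finite conjunction or disjunction.

```haskell
-- Propositional formulas over named atoms.
data Prop
  = Atom String
  | Not Prop
  | And [Prop]
  | Or [Prop]
  deriving (Eq, Show)

-- A first-order formula over finite domains reduces to propositional
-- logic by expansion: "forall x in D. P(x)" becomes the conjunction
-- of P(d) for each d in D, and "exists x in D. P(x)" the disjunction.
-- (The real program additionally performed several simplification
-- steps, which made its verification non-trivial.)
forallD :: [String] -> (String -> Prop) -> Prop
forallD dom p = And (map p dom)

existsD :: [String] -> (String -> Prop) -> Prop
existsD dom p = Or (map p dom)
```

For example, `existsD ["red", "green"] (\c -> Atom ("light_" ++ c))` expands to `Or [Atom "light_red", Atom "light_green"]`, which a SAT solver can then handle directly.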
>
> The verified program was going to be used as part of a commercial formal
> verification tool and the client (now Bombardier Transportation Rail
> Control Solutions) wanted some assurance that the tool worked correctly.
>
> I used Robert Stärk's Logic Program Theorem Prover
> (http://www.inf.ethz.ch/personal/staerk/lptp.html). As this was a
> commercial project, there was no report written on the verification work
> as such. As far as I remember, Stärk's LPTP system worked quite well.
>
> Best regards,
>
> Lars-Henrik Eriksson, PhD, Senior Lecturer
> Computing Science, Dept. of Information Technology, Uppsala University, Sweden
***********************************************************************
Amine Chaieb wrote:
> Hi,
>
> Maybe not that "big" components, but we verified formally (in
> Isabelle/HOL) some decision procedure implementations (mainly quantifier
> elimination for linear real, integer arithmetic and their combination).
> Using Isabelle's push-button code generation you can transform the formal
> Isabelle/HOL implementation into SML, OCaml, or Haskell.
>
> Here are some pointers:
>
> http://www.cl.cam.ac.uk/~ac638/pubs/pdf/mir.pdf
> http://www.cl.cam.ac.uk/~ac638/pubs/pdf/frpar.pdf
> http://www.cl.cam.ac.uk/~ac638/pubs/pdf/linear.pdf
>
> Best wishes,
> Amine.
***********************************************************************
Niklas Holsti wrote:
> Hello David,
>
> I don't know if by "such a language" you mean specifically
> functional/logic languages, but in case not: You probably know about the
> Tokeneer software,
> http://www.adacore.com/home/gnatpro/tokeneer/
> which is implemented in SPARK Ada. Not a functional language, but one in
> which data flow is controlled and verified, using the SPARK Examiner
> toolset.
>
> Regards,
>
> Niklas Holsti
> Tidorum Ltd
***********************************************************************
David von Oheimb wrote:
> Dear all,
>
> at Siemens Corporate Technology we are currently doing a study with
> Boeing on how to enhance assurance of open-source security-critical
> software components like OpenSSL. Methods considered range from standard
> static analysis tools to formal verification.
>
> One observation is that the higher-level the programming language used,
> the less likely programming mistakes are, the easier a formal model
> can be obtained, and the more likely it is to be faithful and verifiable.
> In this sense, implementations e.g. in a functional/logic programming
> language would be ideal.
>
> Are you aware of any (security critical or other) software component
> that has been implemented in such a high-level language and has been
> formally verified? Any quick pointers etc. are appreciated.
>
> Thanks,
> David v.Oheimb
>
> +------------------------------------------------------------------<><-+
> | Dr. David von Oheimb Senior Research Scientist |
> | Siemens AG, CT IC 3 Phone : +49 89 636 41173 |
> | Otto-Hahn-Ring 6 Fax : +49 89 636 48000 |
> | 81730 München Mobile: +49 171 8677 885 |
> | Germany E-Mail: David.von.Oheimb at siemens.com |
> | http://scd.siemens.de/db4/lookUp?tcgid=Z000ECRO http://ddvo.net/ |
> +----------------------------------------------------------------------+
>
>
_______________________________________________
fortia mailing list
fortia at fmeurope.org
http://fmeurope.hosting.west.nl/mailman/listinfo/fortia