[Haskell-cafe] Verifying Haskell programs vs. generating verified Haskell programs
Dominic Mulligan
dominic.p.mulligan at googlemail.com
Wed Jun 22 17:11:41 CEST 2011
> I'm no expert with compcert, but as I understand it their approach is
> to only do semantic preserving changes to the program at each step in
> the translation from C source to binary. I'm not sure what they used
> as their specification of C and it seems like the correctness of their
> approach would still depend on a spec.
The CompCert C compiler's verified pathway starts at C-Light, passes
through a number of intermediate languages, and ends at ARM/PowerPC
assembly code. The existence of a reliable assembler for PowerPC or
ARM is assumed, as is the correctness of the CIL tool which transforms
C code into C-Light. Blazy and Leroy produced an operational semantics
of C-Light in [1]. Naturally, you can then ask how faithful a model of
a "stripped-down" C the C-Light language really is. I believe Regehr's
failure to find any bugs in the verified "middle-end" of the CompCert
compiler goes some way towards clearing that up (he did find bugs in
the unverified front-end, though) [2]!
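As a rough illustration of the "semantics-preserving pass" idea, here
is a toy Haskell sketch, with made-up names, of a single pass over a
tiny expression language together with the extensional preservation
property one would state for it. This has nothing to do with
CompCert's actual intermediate languages; in CompCert the analogous
statement is a theorem proved in Coq for every pass, not merely a
checkable property.

-- Toy source language: arithmetic expressions (illustrative only).
data Expr = Lit Int
          | Add Expr Expr
          | Mul Expr Expr
  deriving Show

-- The extensional meaning of a program: the value it computes.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- One transformation step: constant folding.
constFold :: Expr -> Expr
constFold (Add a b) = case (constFold a, constFold b) of
  (Lit x, Lit y) -> Lit (x + y)
  (a', b')       -> Add a' b'
constFold (Mul a b) = case (constFold a, constFold b) of
  (Lit x, Lit y) -> Lit (x * y)
  (a', b')       -> Mul a' b'
constFold e = e

-- Per-pass correctness statement: the transformation does not change
-- the meaning of the program.
prop_constFold_preserves :: Expr -> Bool
prop_constFold_preserves e = eval (constFold e) == eval e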
Note that, with regard to CompCert, what you mean by "semantics
preserving" is actually preservation of a program's extensional
semantics. There is no guarantee that the CompCert C compiler will not
transform an efficient piece of C code into some computational
atrocity, albeit one with the same extensional "meaning", at the
assembly level. We consider this problem in CerCo [3], a project
related to CompCert, which uses essentially the same intermediate
languages as CompCert (including starting at C-Light), but also proves
that each transformation step preserves, or improves, the concrete
complexity of the code under transformation.
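To make the distinction concrete, here is a minimal Haskell sketch
(the names fibSlow and fibFast are mine, purely illustrative). The two
functions agree on every input, so a compiler correct only up to
extensional semantics would, in principle, be free to turn one into
the other in either direction; ruling out that kind of change in cost
is exactly CerCo's extra proof obligation.

-- Extensionally equal, computationally very different (illustration).
fibSlow :: Integer -> Integer    -- exponential time
fibSlow 0 = 0
fibSlow 1 = 1
fibSlow n = fibSlow (n - 1) + fibSlow (n - 2)

fibFast :: Integer -> Integer    -- linear time
fibFast n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)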
> There's a second (haha) approach, which I use basically every day.
> Use the typing language fragment from a strongly typed programming
> language to express a specification, and then rely on "free"
> functions/theorems and the Howard-Curry isomorphism theorem to guarantee
> correctness of implementation relative to the specification.
How does this count as a distinct approach to the problem? It's
essentially what happens when you verify a program in Coq.
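For concreteness, here is a minimal Haskell sketch of the quoted
approach, using illustrative names of my own rather than anything from
a particular library: a GADT of length-indexed vectors, where part of
the specification of vmap (that it preserves length) is stated in its
type, so every well-typed total implementation satisfies it.
Parametricity supplies further "free theorems" of the same flavour.
Scaling this up to full functional correctness is precisely what the
richer type theories of Coq or Matita provide, which is why it is hard
to see it as a genuinely separate approach.

{-# LANGUAGE GADTs, EmptyDataDecls #-}

-- Type-level natural numbers (illustrative encoding).
data Z
data S n

-- Length-indexed vectors: the length is part of the type.
data Vec n a where
  VNil  :: Vec Z a
  VCons :: a -> Vec n a -> Vec (S n) a

-- The type states part of the specification: the output has exactly
-- the same length as the input.
vmap :: (a -> b) -> Vec n a -> Vec n b
vmap _ VNil         = VNil
vmap f (VCons x xs) = VCons (f x) (vmap f xs)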
Further, there's much too sharp a distinction in the OP's mind between
constructing a verified program in a proof assistant and verifying an
existing program. Every large scale verification effort starts out with
a prototype written in a high-level language, as far as I can tell.
seL4's verification started with a Haskell prototype, IIRC, and CerCo's
compiler started as an O'Caml prototype, before we began translating it
into Matita's internal language [4]. CompCert, AFAIK, did exactly the
same thing as we did, using an O'Caml prototype. It is *much* cheaper
to make large-scale design changes in a prototype written in a
high-level programming language than it is in the internal language of
a proof assistant.
[1]: http://gallium.inria.fr/~xleroy/publi/Clight.pdf
[2]: http://www.cs.utah.edu/~regehr/papers/pldi11-preprint.pdf
[3]: http://cerco.cs.unibo.it
[4]: http://matita.cs.unibo.it