[Haskell-cafe] Pure extremism (was: Re: Why?)
Luke Palmer
lrpalmer at gmail.com
Sat Dec 12 03:03:31 EST 2009
On Fri, Dec 11, 2009 at 7:07 PM, Jason Dusek <jason.dusek at gmail.com> wrote:
> 2009/12/11 Luke Palmer <lrpalmer at gmail.com>:
>> The idea being that any code that is pure could be evaluated
>> anywhere with a very simple interpreter. If you have pure
>> code, you can trace it back and evaluate it in a sandbox where
>> you don't need a C runtime, a linker, or really anything but
>> the simplest substitution engine. *All* effects bubble their
>> way up to the top level, so that we know from the type
>> signature of a value the machinery we will need to run it.
>
> This sounds good but there is something puzzling about it.
> Where do we draw the line between "machinery" and "packages"?
> The types don't tell us what libraries we need.
I don't follow this. Do you mean external libraries, such as "the
library used to execute an IO type"? That makes sense to me, though
in that case the types do tell. Or do you mean which *Haskell*
libraries a piece of code depends on? To address that, note that
I consider a statement like:
import Control.Monad.State
as a form of impurity, in a sense (it is hard to say exactly what I
consider it, but I think it inhibits reasoning and verification).
What version are you importing? From where are you importing it;
where is it defined? These things depend on the compiler environment,
which means that the meaning of a piece of code depends on who is
compiling it.
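To make that concrete (I'm recalling the details from memory, so take
them with salt): some versions of mtl define State as its own newtype,
while the transformers-based packages make it a synonym for
StateT s Identity. So something like

    import Control.Monad.State

    tick :: State Int Int
    tick = do
      n <- get
      put (n + 1)
      return n

    main :: IO ()
    main = print (runState tick 0)  -- prints (0,1)

happens to work either way, since it only touches the class interface;
but code that pattern-matches on a State constructor, say, is valid in
one environment and a type error in the other. Same text, different
meaning, depending on who compiles it.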
I have brainstormed solutions to this problem while thinking about my
(currently on hold or dead) Udon project:
http://lukepalmer.wordpress.com/2008/11/19/udon-sketch-2/. But it is
just a brainstorm, nothing I would sign in blood. I'd love it if more
people were thinking about these kinds of problems (any papers you
well-read folks know of?).
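To give the flavor of what I brainstormed there (this is a made-up
toy, not actual Udon code): identify a definition by a hash of its
body rather than by a name resolved against a compiler environment,
so a reference pins down exactly one definition no matter who
compiles it:

    import qualified Data.Map as Map

    type Hash = String  -- stand-in for a real cryptographic digest

    -- A tiny expression language where references are by content hash.
    data Expr = Lit Int | Add Expr Expr | Ref Hash

    type Store = Map.Map Hash Expr

    eval :: Store -> Expr -> Int
    eval _ (Lit n)   = n
    eval s (Add x y) = eval s x + eval s y
    eval s (Ref h)   = case Map.lookup h s of
                         Just e  -> eval s e
                         Nothing -> error ("unknown hash " ++ h)

    -- Ref "h41" means the same thing in every environment that can
    -- supply the definition with that hash.
    example :: Int
    example = eval (Map.fromList [("h41", Lit 41)])
                   (Add (Ref "h41") (Lit 1))   -- 42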
> They don't tell us how much RAM/CPU we need, either.
And those, too!
My ideas for these are now firmly beyond Haskell and way out in the
ill-defined vaporware space. I discuss, in the abstract, the idea of
distinguishing the performance semantics of a piece of code from its
denotation here:
http://lukepalmer.wordpress.com/2009/06/12/the-role-of-a-core-calculus/.
The brainstorm disclaimer applies, and I want to see more research!
(Papers?)
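As a small illustration of the distinction (my example here, not from
the post): these two functions have the same denotation -- they are
equal as mathematical functions on Int -- but very different
performance semantics:

    import Data.List (foldl')

    -- The lazy left fold builds a chain of thunks proportional to n,
    -- while the strict one runs in constant space.
    sumLazy, sumStrict :: Int -> Int
    sumLazy   n = foldl  (+) 0 [1..n]
    sumStrict n = foldl' (+) 0 [1..n]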
>
>> Pure functional code as the minimal essence of pure
>> computation -- everything else an EDSL.
>
> Partial or total code?
When the state of the art catches up, total for sure. Why reason
with DCPOs (the domain-theoretic models that partiality forces on
you) when you can reason with the mathematical objects themselves?
But when you require functions to be total, you end up needing to
prove a lot more about them just to convince the compiler that they
terminate. Functional proof assistants are a wonderful area of
research, but they are still young, and it is frickin' hard to
prove things about code. My belief is that time is all that is
necessary here -- the research will continue and it will become easier
and easier to prove properties about code (probably never completely
automatic, but someday I hope to "handwave the essence" of a proof and
have the computer fill in the bullshit details).
But once it's easy enough, partiality is definitely an "effect" to
be modeled as well. It has the advantage, unlike most effects, that
given a good model you can prove your way out of it: write a
function with a partiality effect, then elsewhere prove that it
actually is total so you can drop the effect. I seem to recall a
proof-of-concept of this idea for Coq.
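A cheap way to fake the idea in Haskell today (my own sketch, not the
Coq development I'm thinking of) is to make possible divergence
visible in the type by threading a fuel budget; proving totality
would then amount to showing that some finite fuel always suffices,
at which point you could drop the Maybe:

    -- Partiality as an effect: Nothing means "ran out of fuel",
    -- i.e. possible divergence.
    collatzSteps :: Int -> Integer -> Maybe Int
    collatzSteps 0    _ = Nothing
    collatzSteps _    1 = Just 0
    collatzSteps fuel n
      | even n    = fmap succ (collatzSteps (fuel - 1) (n `div` 2))
      | otherwise = fmap succ (collatzSteps (fuel - 1) (3 * n + 1))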
There are many levels of purity, and I think as research progresses
we will gradually be able to move to the stricter levels while remaining
productive. It gives me comfort to see a long road ahead -- I don't
like to see perfect languages, because that's just me seeing my own
ignorance :-)
Luke