[Haskell-cafe] Clean Dynamics and serializing code to disk
jed at 59A2.org
Wed Dec 5 08:47:28 EST 2007
On 5 Dec 2007, gwern0 at gmail.com wrote:
> Since from my Lisp days I know that code is data, it strikes me that
> one could probably somehow smuggle Haskell expressions via this route
> although I am not sure this is a good way to go or even how one would
> do it (to turn, say, a list of the chosen ADT back into real
> functions, you need the 'eval' function, but apparently eval can only
> produce functions of the same type - so you'd need to either create as
> many adts and instances as there are varieties of type signatures in
> Haskell '98 and the libraries, I guess, or somehow encode in a lambda
> calculus). Is that a route worth pursuing?
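The monomorphism gwern mentions is visible directly in Data.Dynamic: fromDynamic only succeeds at the single concrete type the value was wrapped at, so a heterogeneous collection of recovered functions needs some first-order encoding. A quick illustration (the values here are mine, just to show the behaviour):

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- Two values of different types, erased into Dynamic.
dynInt, dynFun :: Dynamic
dynInt = toDyn (42 :: Int)
dynFun = toDyn ((+ 1) :: Int -> Int)

-- Recovery only succeeds at the exact monomorphic type used at toDyn:
--   fromDynamic dynInt :: Maybe Int                        =>  Just 42
--   fromDynamic dynInt :: Maybe Bool                       =>  Nothing
--   ($ 41) <$> (fromDynamic dynFun :: Maybe (Int -> Int))  =>  Just 42
```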
I too am interested in serializing functions, but for a different
purpose: distributed computing without emulating global shared memory
like GdH. The hard part, as I understand it, is tracking down all the
references in a function. Once they are identified, we can wrap the
whole thing up (sort of lambda lifting at runtime) and send that. I
believe this is what the GUM runtime does internally. I am unaware of any
way to get at this information without modifying the runtime.
If the function we want to serialize is available at compile time, the
compiler should be able to do the lambda lifting and give us a binary
object that we can serialize. I don't know if this is possible now or
if it would need a compiler modification.
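For a function that is known at compile time, the lifting can even be done by hand: each free variable becomes an explicit parameter, leaving a top-level name plus a first-order environment, and that pair is ordinary serializable data. A toy illustration (the names are mine, not output from any compiler):

```haskell
-- Before lifting: 'step' captures the free variables 'lo' and 'hi'.
clampAll :: Int -> Int -> [Int] -> [Int]
clampAll lo hi = map step
  where step x = max lo (min hi x)

-- After lifting: the free variables are explicit parameters, so the
-- "closure" is just the top-level name 'stepLifted' plus two Ints.
stepLifted :: Int -> Int -> Int -> Int
stepLifted lo hi x = max lo (min hi x)

clampAll' :: Int -> Int -> [Int] -> [Int]
clampAll' lo hi = map (stepLifted lo hi)
```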
Perhaps the Mobile Haskell approach is a good idea---serializing
byte-code generated by GHCi. In one paper, they reference new functions
packV :: a -> IO CString
unpackV :: CString -> IO a
although I'm skeptical of these type signatures. At least, they are
only valid for byte-code, so they don't tell the whole story. This
byte-code is dynamically linked on the receiving end so the same
libraries must be compiled there, but compiled code is never serialized.
From the article:
Packing, or serializing, arbitrary graph structures is not a trivial
task and care must be taken to preserve sharing and cycles. As in
GpH, GdH and Eden, packing is done breadth-first, closure by closure
and when the closure is packed its address is recorded in a temporary
table that is checked for each new closure to be packed to preserve
sharing and cycles. We proceed packing until every reachable graph
has been serialised.
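To make the quoted scheme concrete, here is a sketch over a toy heap of explicit addresses (the real runtime walks actual closures; the Addr keys and the Map are my stand-ins for heap pointers):

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

-- A toy "heap": each closure has an address and points at other closures.
type Addr = Int
type Heap = M.Map Addr [Addr]

-- Breadth-first packing: emit each reachable closure exactly once, in the
-- order first encountered, consulting a table of already-packed addresses
-- so that sharing and cycles are preserved instead of looping forever.
pack :: Heap -> Addr -> [(Addr, [Addr])]
pack heap root = go (S.singleton root) [root]
  where
    go _    []       = []
    go seen (a:rest) =
      let kids         = M.findWithDefault [] a heap
          (seen', new) = foldl visit (seen, []) kids
      in (a, kids) : go seen' (rest ++ new)
    -- Check the table before enqueueing a closure for packing.
    visit (s, ns) k
      | S.member k s = (s, ns)
      | otherwise    = (S.insert k s, ns ++ [k])
```

On a heap with both sharing (two paths to node 3) and a cycle (2 points back to 1), each address still appears exactly once in the output.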
As long as the real work takes place in compiled code, sending the
byte-code might not be a bad idea, and it has the added benefit of being
platform-independent. However, I haven't been able to find specifics
about the implementation of packV/unpackV, and I would think the runtime
is better positioned to do this analysis itself.
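As for gwern's ADT route: for a closed set of functions it works today, with no eval and no runtime support at all, by defunctionalizing by hand and interpreting. A toy sketch (the Fun type and its constructors are invented for illustration):

```haskell
-- A closed ADT standing in for the functions we want to ship around;
-- deriving Show/Read gives us serialization for free.
data Fun
  = Add Int
  | Mul Int
  | Compose Fun Fun
  deriving (Show, Read)

-- The 'eval' that turns the data back into a real function. Its result
-- type is fixed, which is exactly the limitation gwern describes.
eval :: Fun -> Int -> Int
eval (Add n)       = (+ n)
eval (Mul n)       = (* n)
eval (Compose f g) = eval f . eval g

-- Serialize to a String and back, then apply.
roundTrip :: Fun -> Int -> Int
roundTrip f = eval (read (show f))
```

The cost is the one gwern notes: each distinct type signature needs its own ADT and interpreter, so this scales poorly beyond a known, first-order vocabulary of functions.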
Perhaps someone on this list who knows a thing or two about the
internals can offer some insight. :-)