[Haskell-cafe] Serialization of (a -> b) and IO a

Jesse Schalken jesseschalken at gmail.com
Thu Nov 11 08:15:49 EST 2010


2010/11/11 Gábor Lehel <illissius at gmail.com>

> Obviously there are questions here with regards to the functions which
> the to-be-serialized function makes use of -- should they be
> serialized along with it? Required to be present when it is
> deserialized? Is it OK for the function to do something different when
> it is loaded compared to when it was stored if its environment is
> different, or not OK?


I would have to say Yes, No, No. At the moment, when you serialise a data
structure A which references data structure B, which in turn references data
structure C, using Data.Binary for example, the whole lot (A, B, and C) gets
serialised, so that the deserialised A is denotationally equivalent to the
original regardless of the environment. I don't see why this shouldn't also
be the case for functions.
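
For concreteness, here is a minimal sketch of what I mean (the A/B/C types
are made up, and the Binary instances are written out by hand for clarity):

    import Data.Binary (Binary (..), decode, encode)

    data C = C Int      deriving Show
    data B = B C String deriving Show
    data A = A B        deriving Show

    instance Binary C where
      put (C n)   = put n
      get         = fmap C get

    instance Binary B where
      put (B c s) = put c >> put s
      get         = do { c <- get; s <- get; return (B c s) }

    instance Binary A where
      put (A b)   = put b
      get         = fmap A get

    main :: IO ()
    main = do
      let a     = A (B (C 42) "hello")
          bytes = encode a          -- A and everything it references
          a'    = decode bytes :: A -- (B, C) is written out and rebuilt
      print a'                      -- with no reference to the original
                                    -- environment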

So a serialized function should include all its direct and indirect callees.
This might mean that apparently simple functions end up enormous when
serialized, simply because their call graph, including all the libraries they
use and those libraries' dependencies, is that size, but such is the price of
pure function serialization.
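
Purely to illustrate the idea (this is a made-up representation, not a
proposal for an actual format), a serialized function could be thought of
as an expression bundled with every definition it transitively depends on:

    -- Hypothetical sketch only: a tiny expression type, plus a "closed"
    -- package that carries a function together with all of its direct
    -- and indirect callees, keyed by name.
    data Expr
      = Var String        -- reference to a named definition
      | Lam String Expr   -- lambda abstraction
      | App Expr Expr     -- application
      | Lit Int           -- literal
      deriving Show

    data ClosedFunction = ClosedFunction
      { entryPoint :: Expr              -- the function being serialized
      , callees    :: [(String, Expr)]  -- its transitive call graph
      } deriving Show

    -- Deserialization would then need nothing from the environment:
    -- every Var in entryPoint (and in the callees themselves) is
    -- resolvable within the callees list.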

This raises the question of what exactly gets serialized. The assembled
machine code? For the architecture of the serializer or of the deserializer?
LLVM IR, for architecture independence? C--? Core? I don't know, but it would
be awesome for the serialized representation to be both low-level and
architecture independent, and then to have it JIT-compiled when it is
deserialized. To me that means a virtual machine, which I guess is what you
need when you want fast mobile code, but I'm just musing here, as I know
little about programming language implementation.