[Haskell-cafe] evaluating CAFs at compile time

Carter Schonwald carter.schonwald at gmail.com
Sun Jan 19 02:57:22 UTC 2014

Point being, I think you're pointing to an idea other people are
(also) interested in exploring for GHC, and that there are some interesting
subtleties to it.
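[Editor's note: the "read-only lookup array fixed before run time" idea discussed below can already be approximated today with Template Haskell's `lift`. A minimal single-file sketch follows; the popcount table is an illustrative choice, not an example from the thread. The trick is that the splice mentions only imported names (`popCount`) and literals, which sidesteps GHC's stage restriction.]

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Main where

import Data.Bits (popCount)
import Language.Haskell.TH.Syntax (lift)

-- A table of popcounts for all byte values, computed at compile time.
-- 'lift' fully evaluates the list and splices it in as a literal, so
-- the compiled module contains the data, not the computation.
popCountTable :: [Int]
popCountTable = $(lift (map popCount [0 .. 255 :: Int]))

main :: IO ()
main = print (popCountTable !! 255)  -- prints 8
```

For real bit-fiddling code one would use an unboxed or storable array rather than a list, but the compile-time-evaluation part is the same: splice, then convert.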

On Saturday, January 18, 2014, Carter Schonwald <carter.schonwald at gmail.com> wrote:

> You ask for something that GHC doesn't have yet, but perhaps could have at
> some point (if I'm reading you right).  Currently GHC doesn't have a way
> of doing what you want! E.g., I don't think there's even really support as
> yet for that sort of notion in the context of just boxed/unboxed/storable
> arrays.
> There are definitely a few example pieces of code where it'd be nice to
> express a read-only lookup array that's fixed before run time, for various
> bit-fiddling algorithms and the like.
> On Saturday, January 18, 2014, Evan Laforge <qdunkan at gmail.com> wrote:
>> On Sat, Jan 18, 2014 at 4:56 PM, adam vogt <vogt.adam at gmail.com> wrote:
>> > Check out <https://hackage.haskell.org/package/th-lift>. Also, there
>> > is a version of zeroTH here: https://github.com/mgsloan/zeroth, which
>> > works with haskell-src-exts < 1.14.
>> Thanks, I'll take a look.  Though since I have my faster-but-uglier
>> solution, at this point I'm mostly only theoretically interested, and
>> hoping to learn something about compilers and optimization :)
>> > I'm not sure what benefit you'd get from a new mechanism (beside TH)
>> > to calculate things at compile-time. Won't it have to solve the same
>> > problems which are solved by TH already? How can those problems
>> > (generating haskell code, stage restriction) be solved without ending
>> > up with the same kind of complexity ("TH dependency gunk")?
>> Well, TH is much more powerful in that it can generate any expression
>> at compile time.  But in exchange, it slows down compilation a lot,
>> introduces an order dependency in the source file, and causes
>> complications for the build system (I don't remember exactly, but it
>> came down to needing to find the .o files at compile time).  I would
>> think, in the handwaviest kind of way, that the compiler could compile
>> a CAF, and then just evaluate it on the spot by just following all the
>> code thunk pointers (similar to a deepseq), and then emit the raw data
>> structure that comes out.  Of course that assumes there is such a
>> thing as "raw" data, which is why I got all sidetracked wondering
>> about compile time optimization in general.  I expect it's not like C
>> where you would wind up with a nested bunch of structs you could just
>> write directly to the .text section of the binary and then mmap into
>> place when the binary is run.  Even in C you'd need to go fix up
>> pointers.  At which point it sounds like a dynamic loader :)
>> _______________________________________________
>> Haskell-Cafe mailing list
>> Haskell-Cafe at haskell.org
>> http://www.haskell.org/mailman/listinfo/haskell-cafe
