Templates in FPL?
Juan Carlos Arevalo Baeza
jcab@roningames.com
Fri, 25 May 2001 10:45:59 -0700
At 04:34 PM 5/25/2001 +0200, Ketil Malde wrote:
> > This is close to my personal vision. That is, a language that handles both
> > run-time and compile-time computations seamlessly. Thus, the compiler would
> > evaluate as much as it can, and then what's left is put in the
> > executable.
>
>I must be missing something crucial here - I thought that was exactly
>what GHC did?
I don't presume to know what GHC does. After all, Haskell was
completely unknown to me three months ago. O:-)
I did hope that was the case, though: "I assume that's something
optimizing compilers for Haskell already do, at least in part".
> > Problem is that, ideally, the language should allow the programmer to
> > be explicit about things that should be done at compile-time and things
> > that
> > should be done at run-time, which Haskell doesn't quite have.
>
>Why/when do you need this? I would go for -O implying optimizing away
>as much as possible (i.e all constant expressions), and else no
>optimization (for compilation speed).
Oh, there are several reasons for the programmer to make her wishes
clear to the compiler, especially when having something NOT calculated at
compile time is simply not acceptable. In that case, you want the compiler
to report an error instead of quietly emitting code to do the calculation
at run time.
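To make that concrete, here is a minimal sketch of the kind of thing
I mean, using GHC's Template Haskell extension (assuming a GHC that has
it; this is only my own illustration). The $(...) splice is an explicit
request to do the work while compiling, so the executable only ever
contains the finished result:

{-# LANGUAGE TemplateHaskell #-}
module Main where

import Language.Haskell.TH (integerL, litE)

-- The product is computed by the compiler; if it couldn't be, compilation
-- would fail (or never finish) instead of emitting run-time code for it.
main :: IO ()
main = print $(litE (integerL (product [1..10])))  -- prints 3628800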
A silly example of this that comes to mind is non-converging
calculations. Say we have a wrong factorial function:
tcaf 0 = 1
tcaf n = n * tcaf (n+1)
Pass this program through GHC:
main = print (tcaf 3)
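(For completeness, the whole test file would look something like the
following; the Integer signature is just my choice to pin things down:)

module Main where

tcaf :: Integer -> Integer
tcaf 0 = 1
tcaf n = n * tcaf (n + 1)

main :: IO ()
main = print (tcaf 3)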
Will it evaluate "tcaf 3" at compile-time? Will it error-out at
compile-time? I don't have the compiler here (I'm writing from work, but
I'm trying Haskell out at home).
My guess would be that GHC will evaluate as much as it can, and leave
the rest for the run-time program. Specifically, the best I'd expect it to
do is to lessen the recursion by expanding several levels of it. For
example, for two levels, it'd be reduced to:
tcaf = \n -> case n of { 0 -> 1; -1 -> -1; _ -> n * (n+1) * tcaf (n+2) }
Of course, I might be completely wrong here. I don't think it'd be
terribly hard for the compiler to deduce that a function like this, called
with a positive argument, is never going to finish evaluating. But in more
complex cases, it might not be possible for the compiler to know.
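To illustrate what I mean by "more complex cases" (this is just a sketch
of mine): whether the following ever stops for every positive argument is
the Collatz conjecture, which nobody has settled, so no compiler can be
expected to decide it before trying to fold such a call away:

collatz :: Integer -> Integer
collatz 1 = 0
collatz n
  | even n    = 1 + collatz (n `div` 2)
  | otherwise = 1 + collatz (3 * n + 1)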
Then again, GHC might actually be evaluating EVERYTHING that doesn't
depend on IO actions... without exception... This brings me to a question.
What would a good optimizing compiler (GHC) do in this case:
main = do
    num <- return 3
    print (tcaf num)
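If I desugar the do-block by hand (assuming the simplifier takes advantage
of the monad laws, which I'd hope it does), both programs ought to boil
down to the very same thing:

main  =  return 3 >>= \num -> print (tcaf num)  -- what the do-block means
      =  print (tcaf 3)                         -- left identity: return a >>= f == f a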
I'll be trying all these things tonight.
Salutations,
JCAB
---------------------------------------------------------------------
Juan Carlos "JCAB" Arevalo Baeza | http://www.roningames.com
Senior Technology Programmer | mailto:jcab@roningames.com
Ronin Entertainment | ICQ: 10913692
(my opinions are only mine)
JCAB's Rumblings: http://www.metro.net/jcab/Rumblings/html/index.html