[jhc] what to work on next?

Isaac Dupree ml at isaac.cedarswampstudios.org
Fri Jun 26 13:01:37 EDT 2009


Interesting, interesting, interesting!  Your points make more sense to 
me now -- I'll comment on a couple of them.

John Meacham wrote:
> Though, you bring up another interesting issue when it comes to
> developing the libraries. JHC and GHC actually have a very different
> philosophy here, in that in jhc I rely on the underlying operating
> system's services whenever possible. For instance, I don't try to roll
> my own unicode or buffered file support; I rely on 'iconv' and 'FILE'
> and friends when they are available. In general, the thinking is: a
> whole lot of people have worked very hard already at optimizing FILE,
> and the OS may even do things like use zero-copy mmapped buffers to
> back it. So, I use existing resources whenever possible.
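
To make "rely on FILE" concrete: a binding along these lines is, I 
assume, what's meant.  This is only my minimal sketch -- the foreign 
imports are real libc functions, but the module and wrapper names are 
invented for illustration, and real code would check fopen for NULL 
and the other calls for error returns.

    {-# LANGUAGE ForeignFunctionInterface #-}
    -- Hypothetical module/wrapper names; the foreign imports are the
    -- real libc functions.
    module FilePutDemo where

    import Foreign.C.String (CString, withCString)
    import Foreign.C.Types  (CInt (..))
    import Foreign.Ptr      (Ptr)

    -- Opaque stand-in for C's FILE; we only ever pass pointers to it.
    data CFile

    foreign import ccall unsafe "stdio.h fopen"
        c_fopen :: CString -> CString -> IO (Ptr CFile)

    foreign import ccall unsafe "stdio.h fputs"
        c_fputs :: CString -> Ptr CFile -> IO CInt

    foreign import ccall unsafe "stdio.h fclose"
        c_fclose :: Ptr CFile -> IO CInt

    -- Write a string to a file, letting libc do all the buffering.
    -- (No error handling here, to keep the sketch short.)
    cPutStr :: FilePath -> String -> IO ()
    cPutStr path s =
        withCString path $ \cpath ->
        withCString "w"  $ \cmode -> do
            h <- c_fopen cpath cmode
            _ <- withCString s (\cs -> c_fputs cs h)
            _ <- c_fclose h
            return ()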

Ah, re-use outside the Haskell world!  A worthy idea, and one that 
encourages the existence of good C libraries, etc.  The two weaknesses 
I can think of are:
- C libraries often don't provide features that would be obvious in 
Haskell, like duplicating state for backtracking, which forces the 
Haskell version to be unnecessarily imperative.  (The C libraries 
should then be improved where possible!  Sometimes they have an awful 
lot of momentum, though, like iconv.)  See the sketch after this list.
- the Haskell compiler can't optimize as much when large parts of the 
code are in C :-)
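
Here's a toy of what I mean by duplicating state (all the names below 
are mine, not any real library's): a pure parser state is just an 
ordinary value, so "saving" it for backtracking costs nothing, whereas 
a mutable C handle like iconv_t would have to be copied or replayed to 
get the same effect.

    import Data.Char (isDigit)

    -- A pure parser state: just a value.  (All names invented.)
    data ParseState = ParseState
        { remaining :: String  -- input not yet consumed
        , consumed  :: Int     -- how many characters we've read
        } deriving Show

    -- Consume one character, returning the new state.
    step :: ParseState -> Maybe (Char, ParseState)
    step (ParseState [] _)     = Nothing
    step (ParseState (c:cs) n) = Just (c, ParseState cs (n + 1))

    -- Try to read a digit; on failure, return the *original* state,
    -- which still exists untouched because nothing was mutated.
    digitOrBacktrack :: ParseState -> (Maybe Char, ParseState)
    digitOrBacktrack st =
        case step st of
            Just (c, st') | isDigit c -> (Just c, st')
            _                         -> (Nothing, st)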

Still, it's an idea.  It's like when someone found that replacing 
GHC's complicated (-threaded) thread system with basically 1 Haskell 
thread = 1 OS thread was just as efficient as the original on Linux 
(Windows and maybe OS X were slower, though).  I wonder if someday 
someone will take on a project of minimizing the GHC RTS code (the GHC 
devs like the idea in the abstract but are in no great hurry to make 
it happen)...  The garbage collector seemed to be the biggest feature 
that has to stay in the RTS (though I wonder whether JHC could detect 
automatically whether a program would benefit from having a garbage 
collector?).

> Now Cabal's dependency model is not only awkward (having to specify
> dependencies without knowing what you are compiling on), but it
> actively inhibits the one thing build systems should do: adapt to
> unknown environments. It is so concerned about not attempting to
> compile on something for which there is a slight chance the compile
> will fail that it pretty much blocks out any ability to adapt to
> unknown systems. I mean, if a compile was going to fail, then it is
> going to fail anyway, so really the only thing the strictness does is
> cut out platforms where it would have worked.

Well, actually the strictness also allows/helps Cabal to choose the 
package versions with which the compile *will* succeed.  But API 
breakages are usually genuinely in the interface, rather than silently 
changed semantics, so JHC could probably search all the APIs for 
compatibility automatically (though I'm not sure what kind of indexing 
you'd need to make that efficient across thousands of known 
package-versions).
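
A hedged sketch of the kind of check I have in mind (every identifier 
here is invented, and a real tool would compare actual interface files 
rather than rendered type strings):

    import qualified Data.Map as M

    -- A package version's API, flattened to: exported name -> type.
    type Api = M.Map String String

    -- A version is usable if it provides every required name at
    -- exactly the required type.
    compatible :: Api -> Api -> Bool
    compatible required provided = required `M.isSubmapOf` provided

    -- Keep every version of a package whose API satisfies the program.
    usableVersions :: Api -> [(String, Api)] -> [String]
    usableVersions required versions =
        [ v | (v, api) <- versions, compatible required api ]

The hard part is the indexing across thousands of versions, not the 
check itself.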

If you get to the point of being able to compile even a moderate 
fraction of Hackage, then you'll have a lot of versions and APIs to 
play with, to see how your approach compares numerically to Cabal's 
:-) (e.g. success rate and asymptotic speed).

-Isaac


