cabal design
Frederik Eaton
frederik at a5.repetae.net
Mon Aug 22 03:49:26 EDT 2005
> (snip)
> >> I caution you, though, on spending too much time making plans and
> >> designs and not much time writing code; the problem is usually that we
> >> have plenty of ideas about what needs to get done and not enough
> >> coders. The code base is pretty manageable, so if you want to help
> >> out, just pick something you think needs to get done, or ask me what
> >> is most important and start working on it :) I usually ask for help on
> >> this list when something comes up.
> >
> > I caution you on the opposite. Cabal is very format and
> > protocol-heavy. I imagine that much of the work has gone into
> > documentation and mindshare, in which case it becomes very important
> > to keep people informed and working together. Suggesting that
> > individuals need to contribute patches unilaterally before anything
> > will happen seems counterproductive to me, at least when it comes to
> > some of the larger parts.
>
> I'd be happy to try the wiki idea. I haven't done something like this
> recently because I haven't identified such a need; if I had a wiki
> page, I think I'd be the only one who reads it and the only one who
> modifies it. That's the case with the TODO list, which is under
> version control, so theoretically anyone could modify it, but no one
> does.
Well, a wiki page is a bit more accessible. For example, it could
serve to alert potential users of Cabal, who haven't yet downloaded
the package, to open bugs.
Besides, is there a way for an arbitrary person to change the TODO
list without checking with you first? You have to remember that when
people come up with publicly useful ideas, they want to publish them,
get them out of the way and move on, without adding several layers of
communication latency overhead. Even if it were just a matter of
sending you a patch that you would apply within the next few hours,
it's not the same as a wiki. They can't fire and forget. Human context
switching is expensive.
> I usually avoid adding "process" until a need arises. If we were ever
> in a situation where we need more analysis than code, then I would
> spend more time writing documents and wikis (which we used to do much
> more; you'll find old pages of discussion on the wiki, but not many
> people besides me contributed; it was mostly copying and pasting from
> mailing lists). We spent a lot of time on discussion and
> documentation early on in the project (probably a year), and the
> original proposal was the product of that work. I think that at some
> point, we will want to commit to a "cabal version two" where we will
> go into an analysis phase again. I just don't think we're at that
> point yet. I still think we just need to get code out there for
> people to try and see what works, what doesn't, and what the needs are
> that we hadn't anticipated. Cabal is still a very young project,
> believe it or not, and we have spent a great deal of its lifetime
> writing documentation and having discussions. I hope you won't
> begrudge me a few more months of writing code ;)
If there have been a lot of discussions and decisions, then I don't
think mailing list archives, or wherever else the analysis currently
lives, are a good repository for design documents. I believe (and I'm
not saying you disagree at this point) that anything which is planned,
and which we want people to potentially help out with, should go on
the wiki, along with its rationale.
> > On a meta-tangent, if we could be discussing things on a wiki, rather
> > than a mailing list, then I think discussions could go a lot faster.
> (snip for instance)
>
> I am willing to try doing more on the wiki, but in the spirit of wiki,
> I think you should just go ahead and start it :)
Later this week, if everything goes well.
> > I've attached a patch.
>
> Thanks.
>
> (snip)
> >> In a way, the problem isn't "lack of support" but a different model of
> >> finding packages... It's not like a compiler extension that one system
> >> supports and one doesn't; this flag breaks abstraction between
> >> compilers in a way that --in-place does not. I haven't heard any use
> >> cases where --in-place won't work.
> >
> > Well, we're talking about two different features here.
>
> I'm talking about use cases, not features :)
We're talking about two different use cases.
> > I want to be able to specify an arbitrary location. You want to be
> > able to specify the current working directory. Not the same thing,
> > is it? I think it should be possible to specify an arbitrary
> > location, at least for ghc.
>
> Since this breaks abstraction, I would prefer to avoid this until I
> see a convincing use case.
I thought I already gave one: http://toastball.net/toast/
Here's another: http://www.wigwam-framework.org/doc/overview.html
But let's think about what you're saying. To avoid breaking some
abstractions inside your implementation, you're breaking the
abstraction that you're supposed to be providing to your users: you're
removing the ability to virtualize the most basic aspect of the
package management process, the package database. A bit myopic,
methinks.
We need to be able to deal with packages getting installed anywhere
and everywhere and every which way. Everything about the installation
of a package needs to be virtualizable. The package database should
not be tied to the user or the system or the compiler version. If some
compilers are not capable of providing Cabal with the interface it
needs to implement a suitably virtualizable installation environment,
then Cabal should simply refuse to continue in those situations; those
compilers can catch up when they are able to.
Seriously, why is it that functional programmers profess to care so
much about modularity on a small scale, yet are so quick to
defenestrate it when confronted by larger systems? :)
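To make this concrete, here is a minimal sketch of what I mean by
"virtualizable" (purely illustrative Haskell, not Cabal's actual API;
every type and function name here is made up): all locations,
including the package database, are explicit parameters supplied by
the caller, and a compiler that can't cope with that is refused
rather than quietly falling back to a global location.

    -- Hypothetical sketch only; none of these names come from Cabal.
    module InstallSketch where

    import System.Exit (exitFailure)

    -- Every aspect of an installation is an explicit parameter.
    data InstallLocations = InstallLocations
      { installPrefix :: FilePath  -- where built artifacts are copied
      , packageDB     :: FilePath  -- arbitrary, caller-chosen package database
      } deriving Show

    -- What a compiler backend would have to support for this to work.
    newtype CompilerCaps = CompilerCaps { supportsArbitraryDB :: Bool }

    -- Refuse to continue rather than silently using a global database.
    registerPackage :: CompilerCaps -> InstallLocations -> IO ()
    registerPackage caps locs
      | supportsArbitraryDB caps =
          putStrLn ("registering into " ++ packageDB locs)
      | otherwise = do
          putStrLn "compiler cannot register into an arbitrary package database"
          exitFailure

A tool like toast or wigwam could then hand Cabal whatever locations
it likes.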
> (snip)
> >
> >> >> > 6) I think it should be easy to use 'cabal' for development; however,
> >> >> > when I am building a package with multiple executables, every 'build'
> >> >> > seems to re-link each executable, regardless of whether it needs
> >> >> > re-linking or not, which is quite slow.
> >> >>
> >> >> I'm not sure why ghc --make re-links executables every time.
> >> >>
> >> >> For libraries, I think we could use support from ghc to tell whether
> >> >> we need to re-link the library; ghc goes through and skips stuff that
> >> >> doesn't need to get built, and then we link everything in the library;
> >> >> if ghc could somehow let us know that nothing needed to get built,
> >> >> that would be very helpful; otherwise, someone has to write the logic
> >> >> to go through and check it all just like ghc does. This code should
> >> >> be out there somewhere.
> >> >
> >> > Isn't there a way to get ghc to emit Makefile fragments which solves
> >> > this problem? Not that solving it in ghc wouldn't be good as well.
> >>
> >> Relying on Make isn't any good for us for the simple build
> >> infrastructure; it needs to be more portable than that.
> >
> > Really? Surely every platform has some basic 'make' installed.
>
> I don't think it's the case that every platform has make installed.
> Windows, for instance. In fact, one convincing argument for cabal is
> that it doesn't require make.
>
> > And just parsing out the rules can't be that hard.
>
> Interpreting a makefile is probably pretty hard. I don't want to do
> it, personally. But the rules that ghc produces are probably a subset
> of the entire make language... still, I'd think that it would be far
> more productive to write a dependency analyzer that doesn't rely on
> ghc than to write a makefile interpreter that does.
Makefiles are a pretty standard way for compilers to output dependency
information. Dependency analysis would be nice, but I imagine that
there are all sorts of compiler options you'd have to know about to
properly locate things.
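For instance, the rules that come out of GHC's -M mode are basically
just "target : prerequisites" lines, so pulling a dependency graph out
of them doesn't require interpreting make at all. A rough sketch
(illustrative only, not a proposal for Cabal's actual code; it ignores
continuation lines and the rest of the make language):

    -- Hypothetical sketch: extract "target : prerequisites" pairs from
    -- Makefile-style dependency output such as ghc -M produces.
    module DepSketch where

    import Data.List  (isPrefixOf)
    import Data.Maybe (mapMaybe)

    type Dep = (FilePath, [FilePath])  -- a target and its prerequisites

    parseDeps :: String -> [Dep]
    parseDeps = mapMaybe parseLine . lines
      where
        parseLine raw
          | null l || "#" `isPrefixOf` l = Nothing  -- skip blanks, comments
          | (target, ':':rest) <- break (== ':') l =
              Just (trimEnd target, words rest)
          | otherwise = Nothing
          where l = dropWhile (== ' ') raw
        trimEnd = reverse . dropWhile (== ' ') . reverse

So a line like "Main.o : Main.hs" comes back as ("Main.o",
["Main.hs"]), which is all the build step needs. The option-handling
problem is the part I'd worry about if we did the analysis from
scratch instead.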
Frederik