Build system idea
Duncan Coutts
duncan.coutts at worc.ox.ac.uk
Wed Aug 27 17:18:59 EDT 2008
On Wed, 2008-08-27 at 06:13 -0700, John Meacham wrote:
> The problem with the way cabal wants to mix with make/autoconf is that
> it is the wrong way round. make is very good at managing pre-processors,
> dependency tracking and calling external programs in the right order, in
> parallel, and as needed. cabal is generally good at building a single
> library or executable given relatively straightforward haskell source.
> (I know it _can_ do more, but this is mainly what it is good at).
>
> The way this should work is that make determines what haskell libraries
> need to be built, and what haskell files need to be generated to allow
> cabal to run and calls cabal to build just the ones needed. cabal as a
> build tool that make calls is much more flexible and in tune with each
> tool's capabilities.
I'd say if you're using make for all that, then use it to build the
haskell modules too. That gives the advantage of incremental and
parallel builds, which Cabal does not do yet (though we've got a GSoC
project just coming to an end which does this).
> The other issue is with cabal files themselves which are somewhat
> conflicted in purpose. on one hand, you have declarative stuff about a
> package. name, version, etc... information you want before you start to
> build something. but then you have build-depends, which is something
> that you cannot know until after your configuration manager (whatever it
> may be, autoconf being a popular one) is run.
Ah, but that's where the autoconf and Cabal models part ways.
> What packages you depend on are going to depend on things like what
> compiler you have installed, your configuration options, which
> packages are installed, what operating system you are running on,
> which kernel version you are running, which C libraries you have
> installed, etc.: things that cannot be predicted before the
> configuration is actually run.
So Cabal takes the view that the relationship between features and
dependencies should be declarative. autoconf is essentially a function
from a platform environment to maybe a configuration. That's a very
flexible approach: the function is opaque and can do whatever feature
tests it likes. The downside is that it is not possible to work out what
the dependencies are. It might be possible if autoconf explained the
result of its decisions, but even then, it's not possible to work out
what dependencies are required to get a particular feature enabled. With
the Cabal approach these things are explicit.
The conditionals in a .cabal file can be read in either direction, so it
is possible for a package manager to automatically work out what deps
would be needed for that optional libcurl feature, or that optional GUI.
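For example, a hypothetical .cabal fragment (flag and package names made
up) declaring an optional libcurl feature might look like this:

    flag curl
      description: enable the libcurl download backend
      default:     True

    library
      build-depends:   base
      if flag(curl)
        build-depends: curl
        cpp-options:   -DUSE_CURL

A tool can read the conditional forwards (a flag choice determines the
deps) or backwards (wanting to avoid the curl dependency determines the
flag setting).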
The other principle is that the packager (that is, the environment) is
in control of what the package 'sees'. With autoconf, the script can
take into account anything it likes, even if you'd rather it did not.
E.g. it's
important to be able to build a package that does not have that optional
dependency, even though the C lib is indeed installed on the build
machine, because I may be configuring it for a machine without the C
lib. Sure, some good packages allow those automagic decisions to be
overridden, but many don't, and of course there is no easy way to tell if
it's picking up deps it should not. So one of the principles in Cabal
configuration is that all decisions about how to configure the package
are transparent to the packager and can be overridden.
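For example (the exact option syntax varies between Cabal versions), a
packager targeting a machine without libcurl could force the
hypothetical curl flag from above off, regardless of what happens to be
installed on the build machine:

    $ runghc Setup.hs configure --flags="-curl"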
Currently, Cabal has only a partial implementation of the concept
because when it tries to find a configuration that works in the current
environment (which it only does if the configuration is not already
fully specified by the packager) it only considers dependencies on
haskell packages. Obviously there are a range of other dependencies
specified in the .cabal file and it should use them all, in particular
external C libs.
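To illustrate, a .cabal file can already express several kinds of
non-haskell dependencies that the configure step could in principle
solve for too; a sketch (field availability depends on your Cabal
version, pkgconfig-depends in particular is a recent addition):

    library
      build-depends:     base, network
      build-tools:       happy
      extra-libraries:   curl
      pkgconfig-depends: gtk+-2.0 >= 2.8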
So I accept that we do not yet cover the range of configuration choices
that are needed by the more complex packages (cf. darcs), but I think
that we can and that the approach is basically sound. The fact that we
can automatically generate distro packages for hundreds of packages is
not insignificant. This is just not possible with the autoconf approach.
> Then you have cabal as a packaging system (or perhaps hackage/cabal
> considered together), which has its own warts. If it is meant to live
> in the niche of package managers such as rpm or deb, where, for one
> example, are the 'release' version numbers that rpms and debs have? If
> it is meant to be a tarball-like format, where is the distinction between
> 'distribution' and 'source' tarballs?
Right, it's supposed to be the upstream release format, tarballs. Distro
packages obviously have their additional revision numbers.
> For instance, jhc from darcs for developers requires
> perl, ghc, DrIFT, pandoc, autotools, and happy. However, the jhc
> tarball requires _only_ ghc, nothing else. This is because the make
> dist target is more interesting than just tarring up the source. (and
> posthooks/prehooks don't really help. they are sort of equivalent to
> saying 'write your own build system'.)
Right. Cabal does that too (or strictly speaking, the Simple build
system can do this). For pre-processors that are platform independent
(like alex, happy etc) it puts the pre-processed source into the release
tarball. It's also possible to make tarballs without the pre-generated
files if it's important.
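Concretely, that happens at sdist time, which runs the
platform-independent pre-processors before packing up the tarball:

    $ runghc Setup.hs sdist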
> One of the biggest sources of conflict arises from using cabal as a
> configuration manager. A configuration manager's entire purpose is to
> examine the system and figure out how to adapt your programs build to
> the system.
Well, that's the autoconf view. It's not the only way of looking at it
as I explained above (perhaps not very clearly). I'd say a configuration
manager should negotiate between the package and the
packager/user/environment to find a configuration that is satisfactory
to all (which requires information flow in both directions).
> this is completely 100% at odds with the idea of users
> having to 'upgrade' cabal. Figuring out how to adapt your build to
> whatever cabal is installed or failing gracefully if you can't is
> exactly the job of the configuration manager. something like autoconf.
> This is why _users_ need not install autoconf, just developers:
> autoconf generates a portable script, so users are never told to
> upgrade their autoconf. If a developer wants to use new features, he
> gets the new autoconf and reruns 'autoreconf'. The user is never
> asked to update anything that isn't actually needed for the project
> itself. This distinction is key for a configuration manager and really
> conflicts with cabal wanting to also be a build system and package
> manager. It is also what is needed for forwards and backwards
> compatibility.
I suppose in principle it'd be possible to ship the build system in
every package like autoconf/automake does. Perhaps we should allow that
as an option. It's doable since the Setup.hs can import local modules.
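A minimal sketch of what that could look like (MyBuildSystem and
buildMain are made-up names for a module shipped in the package's own
source tree):

    -- Setup.hs
    import MyBuildSystem (buildMain)  -- local module, distributed with the package

    main :: IO ()
    main = buildMain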
> All in all, I think these conflicting goals of cabal make it hard to use
> in projects and have led to very odd design choices. I think external
> tools should not be the exception but rather the rule. Not that cabal
> shouldn't come with a full set of said tools. But as long as they are
> integrated, I don't see cabal's design problems being fixed, merely
> augmented with various work-arounds.
One issue with a pick-and-mix approach is: what is the top-level
interface that users/package managers use? The current choice (which I'm
not at all sure is the right one) is a Setup.hs file that imports its
build system from a library that's already on the system (or a custom
one implemented locally). So a system that uses make underneath still
has to present the Setup.hs interface so that package managers can use
it in a uniform way. You mention at the top that you think the
make/cabal relationship is the wrong way round, but the Cabal/Setup.hs
interface has to be the top-level one (at least at the moment), so you'd
have Setup.hs call make and make call it back again to build various
bits like libs etc?
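For instance, a package could present the standard Setup.hs interface
while handing the real work to make; a sketch (hook signature per
Cabal's Distribution.Simple, error handling omitted):

    import Distribution.Simple
    import System.Cmd (system)

    main :: IO ()
    main = defaultMainWithHooks simpleUserHooks
      { buildHook = \_pkg _lbi _hooks _flags -> do
          -- delegate compilation to make; make can in turn call
          -- Setup.hs back to build individual libs
          _ <- system "make"
          return () }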
Do you think that separating the Simple build system from the
declarative part of Cabal would help? It'd make it more obvious that the
build system part really is replaceable, which currently is not so
obvious since they're in the same package. I'm not averse to splitting
them if it'd help. They're already completely partitioned internally.
Duncan