Platform policy question: API compatibility in minor releases

Duncan Coutts duncan.coutts at worc.ox.ac.uk
Sat May 9 20:20:07 EDT 2009


On Sat, 2009-05-09 at 19:50 +0100, Duncan Coutts wrote:

> The question is this:
> 
>         Should we allow compatible API additions to a library in a minor
>         release of the platform?
>         
>         The choice is between allowing only bug fixes in minor releases,
>         or also allowing new features that add APIs but do not change
>         any existing APIs.


To add my personal opinion, I think we should have only bug fixes in
minor releases and save new APIs for major releases.

I would argue that the distinction between major and minor releases is
an important one. For us as a developer community, major releases are
what synchronise our sets of dependencies. It is that periodic
synchronisation that ensures that the programs we distribute have the
greatest portability.

Look at it this way: currently we have a 12 month major release cycle.
That's how often major versions of GHC come out, which is what has
defined the platform up until now. This is what developers test against.
Historically GHC has been pretty firm about not allowing new APIs in the
base lib and other core libs in minor GHC releases. Indeed, the CPP
symbol that GHC defines containing its version number deliberately does
not contain the GHC minor version.
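
To make that concrete, here is a minimal sketch (the Compat module and
the ghcSeries function are invented for illustration) of the only kind
of version test that symbol supports. __GLASGOW_HASKELL__ encodes just
the major series, e.g. 610 for every 6.10.x release, so code cannot
distinguish one minor release from another:

    {-# LANGUAGE CPP #-}
    -- Hypothetical compat module: the test below can only distinguish
    -- major series, because __GLASGOW_HASKELL__ carries no patchlevel.
    module Compat (ghcSeries) where

    ghcSeries :: String
    #if __GLASGOW_HASKELL__ >= 610
    -- Taken for every 6.10.x release alike; 6.10.1 and 6.10.2 look
    -- identical from here.
    ghcSeries = "GHC 6.10 series or later"
    #else
    ghcSeries = "an older GHC"
    #endif

The symbol deliberately offers no finer granularity: within a major
series the API is supposed to be identical.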

So by arguing for new features in minor releases we're effectively
arguing for a 4-6 week feature cycle, since that is roughly how often
minor releases would come out. New library features every 6 weeks.

This makes life harder for distributions and users. Instead of just
picking up the first version of a major release that works ok for them,
they have to track minor releases, because minor releases will include
new features that new programs will start depending on. It means
everything goes out of date that much quicker.
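
To sketch what that tracking looks like in code: a program that wants
to keep building against both the older and the newer library version
ends up guarding itself with the MIN_VERSION_* macros that Cabal
generates for each dependency (they are typically only defined when the
module is built through Cabal). The example below uses sortOn from
Data.List (available in base 4.8 and later) purely to have a real
function to guard; the version numbers and the scenario are
illustrative, but the mechanics are the same for any added API:

    {-# LANGUAGE CPP #-}
    -- Sketch of the compatibility guards downstream code accumulates
    -- when an API addition lands in a release it cannot rely on
    -- everyone having.
    module Main (main) where

    import Data.List (sortBy)
    import Data.Ord  (comparing)
    #if MIN_VERSION_base(4,8,0)
    import Data.List (sortOn)   -- only exists from base 4.8 onwards
    #endif

    -- Sort strings by length: use the newer helper when available,
    -- otherwise fall back to writing it out by hand.
    sortByLength :: [String] -> [String]
    #if MIN_VERSION_base(4,8,0)
    sortByLength = sortOn length
    #else
    sortByLength = sortBy (comparing length)
    #endif

    main :: IO ()
    main = mapM_ putStrLn (sortByLength ["platform", "ghc", "base"])

The only alternative is to require the newer version outright, which is
exactly the "only works with the latest minor release" situation
described further down.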

Now perhaps 12 months is too long (I think it is). Perhaps even 6
months is too long (though many other groups have settled on 6 months).
But surely 6 weeks is far too short: that means essentially no
synchronisation effect at all.

If the majority opinion is that 6 months is too long to wait for new
features, then how about 4 months, or even 3? Do people see the point
I'm trying to make about the value of synchronisation?

It's the ability to say to platform users "this is what you need to
build all the Haskell code!". (Strictly speaking it's not everything
you need, but you can be pretty sure most other packages will have been
tested against this set of versions.) But if the developers writing
that code are constantly depending on the release from 6 weeks ago then
we have not achieved that. Then we are not synchronising package
versions between devs, users and distros for a reasonable period of
time.

We do actually want to compress changes into nice well-labelled,
synchronised jumps where we all change versions at once. We don't want
to tell users "oh! the problem is your distro packages HP 2000.4.1 but
this program only works with 2000.4.2". If that's the situation we end
up in then I think we've failed.

There is a balance to be struck in how long the periods of stability
are. Too long and we stagnate; but do people also see that there is a
problem with the other extreme? We initially picked 6 months because
that's what many other projects have found to be a good balance (e.g.
GNOME and many Linux distros). 6 months is not carved in stone though.
Perhaps initially we would be better served by 4 months. Experience
should tell in the end.

I accept that, from the point of view of package maintainers, it is
less work not to have to maintain more than one branch and to make
releases that mix both fixes and new features. Of course we should also
consider the perspective of users of the platform.

Core libraries that have been released with GHC have of course always
had stable branches. This hasn't proved to be too big a burden on the
maintainers. Darcs (and probably other systems) makes this fairly
easy.

Duncan


