[Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends

Chris Dornan chris at chrisdornan.com
Sun Aug 19 23:22:17 CEST 2012


I agree with Bryan's proposal unreservedly. 

However, I think there might be a way to resolve the tension between:

  * maintaining and publishing a definite dependent-package configuration
that is known to work and

  * having a package unencumbered with arbitrary restrictions on which
future versions of its dependent packages it will work with.

Can't we have both? As Bryan suggests, the cabal file would eliminate only
those package versions that are known not to work, while the versions of the
dependent packages that the package has been tested with -- the 'reference
configuration' -- would be recorded separately. A separate build tool could
take the reference configuration and direct cabal to rebuild the reference
instance, specifying the fully qualified packages to use.
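
As a rough sketch of that second step (the versions here are purely
illustrative, though cabal-install's --constraint flag is real), the build
tool could turn each entry in the reference configuration into an
exact-version constraint:

    # rebuild the reference instance from the pinned versions
    cabal install \
      --constraint='text ==0.11.2.0' \
      --constraint='parsec ==3.1.3' \
      --constraint='mtl ==2.1.2'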

I have a set of tools for doing this (see http://justhub.org -- if you have
root access to an RPM-based Linux then the chances are you can try it out;
otherwise the sources are available).

For each project or package, separate from the cabal file recording the hard
dependencies, I record the current reference configuration in a file which
lists the base installation (generally a Haskell Platform, but it can be a
bare compiler) and the list of fully-qualified dependent packages. Like the
cabal file, the reference configuration would get checked into the VCS
and/or included in the Hackage tarball.
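
For illustration only -- this is not the actual justhub file format -- such
a reference configuration might look like:

    # reference.config (hypothetical layout)
    base:     haskell-platform-2012.2.0.0
    packages: text-0.11.2.0
              parsec-3.1.3
              mtl-2.1.2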

A normal build process first builds the environment, ensuring that the
correct platform is selected and the exact package dependencies are
installed -- this is usually just a checking step unless the package
environment has been disturbed or the reference configuration has been
revised. Once the environment has been checked, the program build proceeds
as normal; that is where the real work generally happens.
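
The checking step itself needs nothing exotic; assuming the hypothetical
reference.config sketched above, it boils down to comparing the pinned list
against what ghc-pkg reports:

    # verify the base installation
    ghc --version                   # must match the reference compiler
    # list installed packages as name-version pairs
    ghc-pkg list --simple-output    # diff against reference.config
    # install any pinned package that is missing, at its exact version
    cabal install --constraint='parsec ==3.1.3' parsec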

When the project is checked out on another system (or a package is installed
anew), the build step would actually build all of the dependent packages.

For this to really work, a sandbox mechanism is needed -- merely trying out
a package/project shouldn't trash your only development environment!

If a library package is to be integrated into a live project, the reference
environment probably won't be the one you need, but I find it useful to be
able to build it anyway in a clean environment and then incrementally
up/downgrade the packages, searching out a compatible configuration. (Being
able to easily push and recover sandboxes is helpful here too.)
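
In practice that search is just a series of constrained rebuilds, stepping
one dependency at a time (versions illustrative again):

    # try successive candidate versions of one dependency and rebuild
    cabal install --constraint='text ==0.11.2.1'
    cabal install --constraint='text ==0.11.3.0'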

Would this way of working resolve the tension we are seeing here? I am so
used to working this way that it is difficult for me to say.

(As others have said here and elsewhere, functional packaging is really
cool.)

Chris


-----Original Message-----
From: haskell-cafe-bounces at haskell.org
[mailto:haskell-cafe-bounces at haskell.org] On Behalf Of MightyByte
Sent: 16 August 2012 04:02
To: Ivan Lazar Miljenovic
Cc: Haskell Cafe
Subject: Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not
our friends

On Wed, Aug 15, 2012 at 9:19 PM, Ivan Lazar Miljenovic
<ivan.miljenovic at gmail.com> wrote:
> On 16 August 2012 08:55, Brandon Allbery <allbery.b at gmail.com> wrote:
>> Indeed.  But the ghc release that split up base broke cabalised
>> packages with no warning to users until they failed to compile.
>> Upper bounds were put in place to avoid that kind of breakage in the
>> future.
>
> I like having upper bounds on version numbers... right up until people 
> abuse them.

I also tend to favor having upper bounds.  Obviously they impose a cost, but
it's not clear to me at all that getting rid of them is a better tradeoff.
I've had projects that I put aside for a while, only to come back and
discover that they would no longer build because I hadn't put upper bounds
on all my package dependencies.  With no upper bounds, a package might not
be very likely to break for incremental version bumps, but eventually it
*will* break.  And when it does, it's a huge pain to get it building again.
If I have put effort into making a specific version of my package work
properly today, I want it to always work properly in the future (assuming
that everyone obeys the PVP).  I don't think it's unreasonable that some
activation energy be required to allow one's project to work with a new
version of some upstream dependency.
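
Concretely, the PVP treats the first two version components as the major
version, so the protection I'm describing comes from bounds like these (the
packages and versions are invented for the example):

    build-depends: base   >= 4.5  && < 4.6,
                   text   >= 0.11 && < 0.12,
                   binary >= 0.5  && < 0.6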

Is that activation energy too high right now?  Almost definitely.  But
that's a tool problem, not a problem with the existence of upper bounds
themselves.  One tool-based way to help with this problem would be to add a
flag to Cabal/cabal-install that would cause it to ignore upper bounds.
(Frankly, I think it would also be great if Cabal/cabal-install enforced
upper version bounds automatically if none were specified.)  Another
approach that has been discussed is detecting dependencies that are only
used internally[1], and I'm sure there are many other possibilities.  In
short, I think we should be moving more towards purely functional builds
that reduce the chance that external factors will break things, and it seems
like removing upper version bounds is a step in the other direction.
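
To sketch what I mean (the flag name here is invented -- no such option
exists today):

    # hypothetical flag: attempt the build as if upper bounds were absent
    cabal install --ignore-upper-bounds some-package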

[1] http://cdsmith.wordpress.com/2011/01/21/a-recap-about-cabal-and-haskell-libraries/
