[Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends
dave at zednenem.com
Wed Aug 22 18:35:26 CEST 2012
As I see it, there are four possibilities for a given version of a dependency:
1. The version DOES work. The author (or some delegate) has compiled
the package against this version and the resulting code is considered
good.
2. The version SHOULD work. No one has tested against this version,
but the versioning policy promises not to break anything.
3. The version MIGHT NOT work. No one has tested against this version,
and the versioning policy allows breaking changes.
4. The version DOES NOT work. This has been tested and the resulting
code (if any) is considered not good.
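The four cases can be sketched as a small Haskell classifier. Everything here (the Status type, classify, and representing versions as lists of ints) is an illustrative assumption for this post, not anything Cabal actually exposes:

```haskell
import qualified Data.Set as Set

-- Hypothetical types for illustration only.
data Status = DoesWork | ShouldWork | MightNotWork | DoesNotWork
  deriving (Show, Eq)

type Version = [Int]

-- Given the versions tested good, the versions tested bad, and the range
-- within which the versioning policy promises compatibility, place a
-- candidate version into one of the four cases above.
classify :: Set.Set Version -> Set.Set Version -> (Version -> Bool) -> Version -> Status
classify good bad inPolicyRange v
  | v `Set.member` good = DoesWork      -- case 1: tested and known good
  | v `Set.member` bad  = DoesNotWork   -- case 4: tested and known bad
  | inPolicyRange v     = ShouldWork    -- case 2: untested, policy forbids breakage
  | otherwise           = MightNotWork  -- case 3: untested, breakage allowed
```

Note that the two tested cases take priority over the policy range, which is exactly why a blanket rule about bounds can't capture all four cases.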
Obviously, cases 1 and 4 can only apply to previously released
versions. The PVP requires setting upper bounds in order to
distinguish cases 2 and 3 for the sake of future compatibility.
Leaving off upper bounds except when incompatibility is known
essentially combines cases 2 and 3.
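Concretely, a PVP-following .cabal file draws the line between cases 2 and 3 with its upper bound. With hypothetical packages foo and bar:

```
-- in foo.cabal: versions of bar in [1.0, 1.1) are case 2 (SHOULD work);
-- versions >= 1.1 are case 3 (MIGHT NOT work) and are excluded.
build-depends: base >= 4 && < 5,
               bar  >= 1.0 && < 1.1
```

Dropping the `< 1.1` bound would admit case-3 versions on the same footing as case-2 versions, which is the "combining" described above.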
So there are two failure modes:
I. A version which DOES work is outside the bounds (that is, in case
3). I think eliminating case 3 is too extreme. I like the idea of
temporarily overriding upper bounds with a command-line option. The
danger here is that we might actually be in case 4, in which case we
don't want to override the bounds, but requiring an explicit override
gives users a chance to determine whether a particular version is
disallowed because it is untested or because it is known to be broken.
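For what it's worth, an override of this shape did later land in cabal-install (after this post was written, so take the exact spelling as approximate):

```
$ cabal build --allow-newer=bar    # lift bar's upper bounds for this build only
```

This keeps the declared bounds as the default while letting a user knowingly step into case 3.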
II. A version which DOES NOT work is inside the bounds (that is, in
case 2). This happens when a package does not follow its own version
policy. For example, during the base-4 transition, a version of
base-3.0 was released which introduced a few breaking changes (e.g.,
it split the Arrow class). Alternately, a particular version might be
buggy. This can already be handled by adding constraints on the
command line, but it's better to release a new version of the package
with more restrictive constraints.
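The command-line workaround mentioned here looks like the following (package name and version are hypothetical):

```
$ cabal install foo --constraint='bar < 1.0.1'    # blacklist the bad bar release
```

The drawback is that every user has to know to pass it, which is why releasing a new version with tighter bounds is preferable.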
(This might not be enough, though. If I release foo-1.0.0 which
depends on bar-1.0.*, and then bar-1.0.1 is released with a bug
or breaking change, I can release foo-1.0.0.1 which disallows
bar-1.0.1. But we need some way of preventing cabal from using
foo-1.0.0. Can Hackage deprecate specific versions?)
On Wed, Aug 22, 2012 at 9:18 AM, Leon Smith <leon.p.smith at gmail.com> wrote:
> I think we actually agree more than we disagree; I do think distinguishing
> hard and soft upper bounds (no matter what they are called) would help,
> and I'm just trying to justify them to some of the more dismissive attitudes
> towards the idea.
> The only thing I think we (might) disagree on is the relative importance of
> distinguishing hard and soft bounds versus being able to change bounds
> easily after the fact (and *without* changing the version number associated
> with the package.)
> And on that count, given the choice, I pick being able to change bounds
> after the fact, hands down. I believe this is more likely to significantly
> improve the current situation than distinguishing the two types of bound
> alone. However, being able to specify both (and change both) after the
> fact may prove to be even better.
> On Sat, Aug 18, 2012 at 11:52 PM, wren ng thornton <wren at freegeek.org> wrote:
>> On 8/17/12 11:28 AM, Leon Smith wrote:
>>> And the
>>> difference between reactionary and proactive approaches I think is a
>>> potential justification for the "hard" and "soft" upper bounds; perhaps we
>>> should instead call them "reactionary" and "proactive" upper bounds.
>> I disagree. A hard constraint says "this package *will* break if you
>> violate me". A soft constraint says "this package *may* break if you violate
>> me". These are vastly different notions of boundary conditions, and they
>> have nothing to do with a proactive vs reactionary stance towards specifying
>> constraints (of either type).
>> The current problems of always giving (hard) upper bounds, and the
>> previous problems of never giving (soft) upper bounds--- both stem from a
>> failure to distinguish hard from soft! The current/proactive approach fails
>> because the given constraints are interpreted by Cabal as hard constraints,
>> when in truth they are almost always soft constraints. The
>> previous/reactionary approach fails because when the future breaks, no one
>> bothered to write down when the last time things were known to work.
>> To evade both problems, one must distinguish these vastly different
>> notions of boundary conditions. Hard constraints are necessary for
>> blacklisting known-bad versions; soft constraints are necessary for
>> whitelisting known-good versions. Having a constraint at all shows where the
>> grey areas are, but it fails to indicate whether that grey is most likely to
>> be black or white.
>> Live well,
>> Haskell-Cafe mailing list
>> Haskell-Cafe at haskell.org
Dave Menendez <dave at zednenem.com>