Proposal: Changes to the PVP

MightyByte mightybyte at gmail.com
Thu Apr 10 06:24:26 UTC 2014


On Wed, Apr 9, 2014 at 5:13 PM, Michael Snoyman <michael at snoyman.com> wrote:
>
>
> On Wed, Apr 9, 2014 at 10:57 PM, MightyByte <mightybyte at gmail.com> wrote:
>>
>> Here's my view.
>>
>> 1a. I don't know how to answer this question because I have no idea
>> what "good enough job" means.
>> 1b. Unequivocally yes, we should support this expectation to the best
>> of our ability.  However, we can't state it as strongly as "no matter
>> what happens".  I think we can say "as long as
>> all of his dependencies are well-behaved, the package should be pretty
>> likely to build..."
>
>
> And this is where I think the PVP is doing a disservice with its current
> wording. Users have this expectation, but it doesn't actually hold up in
> reality. Reasons why it may fail include:

The PVP states: "What is missing from this picture is a policy that
tells the library developer how to set their version numbers, and
tells a client how to write a dependency that means their package will
not try to compile against an incompatible dependency."  That's pretty
clear to me.  The disservice is insisting that a complex gray issue is
black and white by using phrases like "no matter what happens".  It's
also clear that this is Haskell we're talking about.  We value purity,
referential transparency, and controlling side effects.  When I
translate those ideas to the package world, I conclude that if I've
gone to the trouble to get code working today, I want to maximize the
probability that it will work at any point in the future.

> * Typeclass instance leaking from transitive dependencies.
> * Module reexports leaking from transitive dependencies.
> * Someone made a mistake in an upload to Hackage (yes, that really does
> happen, and it's not that uncommon).
> * The package you depend on doesn't itself follow the PVP, or so on down the
> stack.
>
> So my point is: even though the *goal* of the PVP is to provide this
> guarantee, it *doesn't* provide this guarantee. Since we have a clear
> alternative that does provide this guarantee (version freezing), I think we
> should make it clear that the PVP does not solve all problems, and version
> freezing should be used.

Nowhere does the PVP state a goal of guaranteeing anything, so this is
a straw man and a completely invalid point in this discussion.  In
fact, the wording makes it pretty clear that there are no guarantees.
In a perfect world the PVP would ensure that code that builds today
will build for all time, while allowing for enough variation in build
plans that small patches and bug fixes can be picked up by your
already-working code.  Alas, we don't live in that world.  But that
doesn't mean that we shouldn't try to get as close to that state of
affairs as we can.

As others have mentioned, version freezing is a completely orthogonal
issue; otherwise we wouldn't even have the notion of bounds to begin
with.  We would simply have cabal files that specify a single version
for each dependency and be done with it.  The whole point of the PVP is
to NOT lock down to a single version: you want simple
backwards-compatible bug fixes in your dependencies to work with your
code automatically.
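
To make the distinction concrete, here's a rough sketch of the two
styles in a .cabal file, using a made-up dependency called foo:

    -- Freezing: exactly one acceptable version of foo, one build plan
    build-depends: foo == 1.2.3

    -- PVP-style range: any backwards-compatible 1.2.x release of foo
    -- is acceptable, so bug-fix releases are picked up automatically
    build-depends: foo >= 1.2 && < 1.3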

>>
>> 1c. If "always run in the same way" means that it will always be built
>> with the same set of transitive dependencies, then no.
>> 2a is a bit stickier.  I want to say yes, but right now I'll leave
>> myself open to convincing.  My biggest concern in this whole debate is
>> that users of the foo package should specify some upper bound, because
>> as Greg has said, if you don't, the probability that the package
>> builds goes to ZERO (not epsilon) as t goes to infinity.  Personally I
>> like the safe and conservative upper bound of <1.3 because I think
>> it's difficult to make more granular contracts work in practice and
>> it follows the PVP's clear meaning.  If you're committing
>> to support the same API up to 2.0, why can't you just commit to
>> supporting that API up to 1.3?  The author of foo still has the
>> flexibility to jump to 2.0 to signal something to the users, and when
>> that happens, the users can change their bounds appropriately.
>>
>
> This sounds more like a personal opinion response than an interpretation
> of the current PVP. I find it troubling that we're holding up the PVP as the
> standard that all packages should adhere to, and yet it's hard to get an
> answer on something like this.

Here's what the PVP says:

"When publishing a Cabal package, you should ensure that your
dependencies in the build-depends field are accurate. This means
specifying not only lower bounds, but also upper bounds on every
dependency.
At some point in the future, Hackage may refuse to accept packages
that do not follow this convention."

I'm pretty sure this is why Johan answered "no" to your question of
whether it violates the PVP.  So, OK, by the letter of the PVP I guess
this isn't a violation.  But it depends on what point in time we're
talking about.  If 1.2 is the most recent version, then the PVP points
out that having an upper bound of < 1.3 carries a little risk with it.
Using an upper bound of < 2.0 at that point in time carries a lot more
risk, because the PVP (the closest thing we have to a standard) allows
for breakages within that bound, and the fact is that there's no
guarantee the package will ever get to 2.0.  If, however, the currently
released version is 2.0 and you have discovered that it breaks your
package, then that version bound is fine.

So at one point in time `foo >= 1.2 && < 2` is quite risky, but at
another it's the right thing.  For a while now I've been advocating a
new feature that lets us specify a bound of `<! 2` when you know your
package breaks at that version, and `< 1.3` when a later version is
simply untested and might actually work.  This kind of solution adds
information to the whole system, unlike your approach of throwing out
upper bounds, which throws information away and makes an already hard
problem hopeless.
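
Purely as a sketch of syntax that doesn't exist in cabal today, the
idea is to let a build-depends field distinguish the two cases, with
made-up packages bar and baz:

    build-depends:
      -- bar 2.0 has been tried and is known to break this package
      bar >= 1.2 && <! 2,
      -- baz 1.3 is simply untested; the PVP merely allows it to break
      baz >= 1.2 && < 1.3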

Alternatively, if foo is your package and you have set up some scheme
where you agree that breaking changes to some subset of your API will
come with a bump to 2.0 and breaking changes to the rest will come
with a bump to 1.3, then I don't have a problem with you setting your
bound to < 2.0 if you only use those more stable functions.  But I
don't think that, as a general rule, other people should use the < 2.0
bound, because it's subjective, difficult to negotiate, and cannot be
automatically checked.  I think clear and simple wins the day here.
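
As a purely hypothetical illustration with made-up module names:
suppose foo's author documents that Foo.Core only breaks at 2.0 while
Foo.Internal may break at 1.3.  A consumer that imports nothing but
Foo.Core might then write:

    -- Relies on foo's informal promise that Foo.Core is stable up to
    -- 2.0; anything importing Foo.Internal would still need < 1.3.
    build-depends: foo >= 1.2 && < 2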

> The point of giving a guarantee to 2.0 is that it involves less package
> churn, which is a maintenance burden for developers, and removes extra
> delays waiting for maintainers to bump version bounds, which can lead to
> Hackage bifurcation.

Less package churn?  Package churn is determined solely by the number
of backwards-incompatible changes per unit time, nothing else.  I
don't care what version numbers you're using or whether you're
following the PVP or not.  If you're breaking backwards compatibility,
then you're churning, pure and simple.  The PVP already gives a
guarantee that those functions should stay the same up to 1.3.  So a
contract with your users that those functions won't change until 2.0
is effectively no different from having a contract that they won't
change until 1.3.  It's just an arbitrary number.  If you haven't
changed the API, then don't bump to 1.3.  The only difference comes if
you want to change some functions, but not others.  And there's simply
no good way to draw that distinction, barring MUCH better static
analysis tools that do the full API check for you.

