qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Michael Snoyman michael at snoyman.com
Wed Feb 26 05:39:09 UTC 2014

On Tue, Feb 25, 2014 at 11:52 PM, Gregory Collins
<greg at gregorycollins.net> wrote:

> On Tue, Feb 25, 2014 at 12:38 PM, Michael Snoyman <michael at snoyman.com> wrote:
>> On Tue, Feb 25, 2014 at 9:23 PM, Gregory Collins <greg at gregorycollins.net> wrote:
>>> Like Ed said, this is pretty cut and dried: we have a policy, you're
>>> choosing not to follow it, you're not in compliance, you're breaking stuff.
>>> We can have a discussion about changing the policy (and this has definitely
>>> been discussed to death before), but I don't think your side has the
>>> required consensus/votes needed to change the policy. As such, I really
>>> wish that you would reconsider your stance here.
>> I really don't like this appeal to authority. I don't know who the "royal
>> we" is that you are referring to here, and I don't accept the premise that
>> the rest of us must simply adhere to a policy because "it was decided." "My
>> side" as you refer to it is giving concrete negative consequences to the
>> PVP. I'd expect "your side" to respond in kind, not simply assert that
>> we're "breaking Hackage" and other such hyperbole.
> This is not an appeal to authority, it's an appeal to consensus. The
> community comes together to work on lots of different projects like Hackage
> and the platform and we have established procedures and policies (like the
> PVP and the Hackage platform process) to manage this. I think the following
> facts are uncontroversial:
>    - a Hackage package versioning policy exists and has been published in
>    a known location
>    - we don't have another one
>    - you're violating it
> Now you're right to argue that the PVP as currently constituted causes
> problems, i.e. "I can't upgrade to new-shiny-2.0 quickly enough" and "I
> manage 200 packages and you're driving me insane". And new major base
> versions cause a month of churn before everything goes green again.
> Everyone understands this. But the solution is either to vote to change the
> policy or to write tooling to make your life less insane, not just to
> ignore it, because the situation this creates (programs bitrot and become
> unbuildable over time at 100% probability) is really disappointing.
You talk about voting on the policy as if that's the natural thing to do.
When did we vote to accept the policy in the first place? I don't remember
ever putting my name down as "I agree, this makes sense." Talking about
voting, violating, and complying in a completely open system like
Hackage makes no sense, and is why your comments come off as an appeal to
authority.

If you want to have more rigid rules on what packages can be included,
start a downstream, PVP-only Hackage, and don't allow in violating
packages. If it takes off, and users have demonstrated that they care very
much about PVP compliance, then us PVP naysayers will have hard evidence
that our beliefs were mistaken. Right now, it's just a few people
constantly accusing us of violations and insisting we spend a lot more work
on a policy we believe to be flawed.

>  Now, I think I understand what you're alluding to. Assuming I understand
>> you correctly, I think you're advocating irresponsible development. I have
>> codebases which I maintain and which use older versions of packages. I know
>> others who do the same. The rule for this is simple: if your development
>> process only works by assuming third parties to adhere to some rules you've
>> established, you're in for a world of hurt. You're correct: if everyone
>> rigidly followed the PVP, *and* no one ever made any mistakes, *and* the
>> PVP solved all concerns, then you could get away with the development
>> practices you're talking about.
> There's a strawman in there -- in an ideal world PVP violations would be
> rare and would be considered bugs.

Then you're missing my point completely. You're advocating making package
management policy based on developer practices of not pinning down deep
dependencies. My point is that *bugs happen*. And as I keep saying, it's
not just build-time bugs: runtime bugs are possible and far worse. I see no
reason that package authors should go through lots of effort to encourage
bad practice.

> Also, if it were up to me we'd be machine-checking PVP compliance. I don't
> know what you're talking about re: "irresponsible development". In the
> scenario I'm talking about, my program depends on "foo-1.2", "foo-1.2"
> depends on any version of "bar", and then when "bar-2.0" is released
> "foo-1.2" stops building and there's no way to fix this besides trial and
> error because the solver doesn't have enough information to do its work
> (and it's been lied to!!!). The only practical solutions right now are to:
>    - commit to maintaining every program you've ever written on the
>    hackage upgrade treadmill forever, or
>    - write down the exact versions of all of the libraries you need in
>    the transitive closure of the dependency graph.
> #2 is best practice for repeatable builds anyways and you're right that
> cabal freeze will help here, but it doesn't help much for all the programs
> written before "cabal freeze" comes out.
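The scenario Gregory describes can be made concrete with two .cabal fragments (package names and version numbers here are illustrative, not taken from the thread):

```
-- foo-1.2 as published, with no upper bound on bar: the solver is
-- free to pick bar-2.0 the day it is released, and foo-1.2 silently
-- stops building for everyone downstream.
build-depends: base >= 4 && < 5, bar

-- The PVP-compliant declaration: the solver will not try bar-2.0
-- until the foo maintainer has tested against it and relaxed the bound.
build-depends: base >= 4 && < 5, bar >= 1.4 && < 1.5
```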
Playing the time machine game is silly. Older programs are broken. End of
story. If we all agree to start using the PVP now, it won't fix broken
programs. If we release "cabal freeze" now, it won't fix broken programs.
But releasing "cabal freeze" *will* prevent this problem from happening in
the future.
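For concreteness, `cabal freeze` records the solver's chosen versions as exact constraints in a cabal.config file next to the .cabal file, roughly like this (package names and versions are illustrative):

```
-- cabal.config, as generated by `cabal freeze`
constraints: aeson ==0.7.0.3,
             bytestring ==0.10.4.0,
             text ==1.1.0.0
```

Subsequent builds in that directory then resolve to exactly these versions, regardless of what has since appeared on Hackage.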

>  But that's not the real world. In the real world:
>> * The PVP itself does *not* guarantee reliable builds in all cases. If a
>> transitive dependency introduces new exports, or provides new typeclass
>> instances, a fully PVP-compliant stack can be broken. (If anyone doubts
>> this claim, let me know, I can spell out the details. This has come up in
>> practice.)
> Of course. But compute the probability of this occurring (rare) vs the
> probability of breakage given no upper bounds (100% as t -> ∞). Think about
> what you're saying semantically when you say you depend only on "foo > 3":
> "foo version 4.0 *or any later version*". You can't own up to this
> contract.
That's because you're defining the build-depends to mean "I guarantee this
to be the case." I could just as easily argue that `foo < 4` is also a lie:
how do you know that it *won't* build? This argument has been had many
times, please stop trying to make it seem like a clear-cut argument.
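The new-exports hazard raised earlier in the thread is also why its subject mentions qualified imports: under the PVP a dependency may add exports in a minor release, so a bare open import can become ambiguous even with compliant bounds. A minimal sketch of the two import styles that are immune to this, using only modules shipped with GHC:

```haskell
-- Qualified and explicit-list imports cannot clash with names that a
-- future minor release of the imported module starts exporting; a bare
-- "import Data.Map" or "import Data.List" could.
import qualified Data.Map as Map
import Data.List (sortBy)
import Data.Ord (comparing)

main :: IO ()
main = do
  -- Map.fromList keeps the last binding per key; toList yields ascending keys.
  print (Map.toList (Map.fromList [(2 :: Int, "b"), (1, "a")]))
  -- comparing negate sorts in descending order.
  print (sortBy (comparing negate) [1, 3, 2 :: Int])
```
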

>  * Just because your code *builds*, doesn't mean your code *works*.
>> Semantics can change: bugs can be introduced, bugs that you depended upon
>> can be resolved, performance characteristics can change in breaking ways,
>> etc.
> I think you're making my point for me -- given that this paragraph you
> wrote is 100% correct, it makes sense for cabal not to try to build against
> the new version of a dependency until the package maintainer has checked
> that things still work and given the solver the go-ahead by bumping the
> package upper bound.
Again, you're missing it. If there's a point release, PVP-based code will
automatically start using that new point release. That's simply not good
practice for a production system.
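Concretely, a standard PVP-style bound admits every future patch release automatically; only an exact pin reproduces the build that was actually tested (versions illustrative):

```
-- PVP-style bound: foo-1.0.2, published after you tested 1.0.1,
-- is silently accepted into your next hotfix build.
build-depends: foo >= 1.0.1 && < 1.1

-- Exact pin: only the version you tested is ever used.
build-depends: foo == 1.0.1
```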

>  This is where we apparently fundamentally disagree. cabal freeze IMO is
>> not at all a kludge. It's the only sane approach to reliable builds. If I
>> ran my test suite against foo version 1.0.1, performed manual testing on
>> 1.0.1, did my load balancing against 1.0.1, I don't want some hotfix build
>> to automatically get upgraded to version 1.0.2, based on the assumption
>> that foo's author didn't break anything.
> This wouldn't be an assumption, Michael -- the tool should run the build
> and the test suites. We'd bump version on green tests.
Maybe you write perfect code every time. But I've seen this process many
times in the past:

* Work on version 2 of an application.
* Create a staging build of version 2.
* Run automated tests on version 2.
* QA manually tests version 2.
* Release version 2.
* Three weeks later, discover a bug.
* Write a hotfix, deploy to staging, run automated tests, QA the changed
code, and ship.

In these circumstances, it would be terrible if my build system
automatically accepted a new point release of a package on Hackage because
the PVP says it's OK. Yes, in an ideal world every release would ship with
automated tests covering all functionality of the product at 100% coverage.
But we all know that's not the real world. Letting a build system throw new
variables into the equation is asking for trouble.
