qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Michael Snoyman michael at snoyman.com
Wed Feb 26 09:56:58 UTC 2014


On Wed, Feb 26, 2014 at 10:36 AM, John Lato <jwlato at gmail.com> wrote:

> On Tue, Feb 25, 2014 at 11:11 PM, Michael Snoyman <michael at snoyman.com> wrote:
>
>>
>>
>>
>> On Wed, Feb 26, 2014 at 8:03 AM, John Lato <jwlato at gmail.com> wrote:
>>
>>> On Tue, Feb 25, 2014 at 9:25 PM, Michael Snoyman <michael at snoyman.com> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Feb 26, 2014 at 1:28 AM, MightyByte <mightybyte at gmail.com> wrote:
>>>>
>>>>> On Tue, Feb 25, 2014 at 4:51 PM, Vincent Hanquez <tab at snarc.org>
>>>>> wrote:
>>>>> >
>>>>> > I'm not saying this is not painful, but i've done it in the past,
>>>>> and using
>>>>> > dichotomy and educated guesses (for example not using libraries
>>>>> released
>>>>> > after a certain date), you converge pretty quickly on a solution.
>>>>> >
>>>>> > But the bottom line is that it's not the common use case. I rarely
>>>>> have to
>>>>> > dig old unused code.
>>>>>
>>>>> And I have code that I would like to have working today, but it's too
>>>>> expensive to go through this process.  The code has significant value
>>>>> to me and other people, but not enough to justify the large cost of
>>>>> getting it working again.
>>>>>
>>>>>
>>>>
>>>> I think we need to make these cases more concrete to have a meaningful
>>>> discussion. Between Doug and Gregory, I'm understanding two different use
>>>> cases:
>>>>
>>>> 1. Existing, legacy code, built against some historical version of
>>>> Hackage, without information on the exact versions of all deep dependencies.
>>>> 2. Someone starting a new project who wants to use an older version of
>>>> a package on Hackage.
>>>>
>>>> If I've missed a use case, please describe it.
>>>>
>>>> For (1), let's start with the time machine game: *if* everyone had been
>>>> using the PVP, then theoretically this wouldn't have happened. And *if* the
>>>> developers had followed proper practice and documented their complete build
>>>> environment, then PVP compliance would be irrelevant. So if we could go
>>>> back in time and twist people's arms, no problems would exist. Hurray,
>>>> we've established that 20/20 hindsight is very nice :).
>>>>
>>>> But what can be done today? Actually, I think the solution is a very
>>>> simple tool, and I'll be happy to write it if people want:
>>>> cabal-timemachine. It takes a timestamp, and then deletes all cabal files
>>>> from our 00-index.tar file that represent packages uploaded after that
>>>> date. Assuming you know the last date of a successful build, it should be
>>>> trivial to get a build going again. And if you *don't* know the date, you
>>>> can bisect until you get a working build. (For that matter, the tool could
>>>> even *include* a bisector in it.) Can anyone picture a scenario where this
>>>> wouldn't solve the problem even better than PVP compliance?
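>>>>
>>>> The core of it would be something like this (an untested sketch using
>>>> the tar and bytestring packages; the hard-coded cutoff and file names
>>>> are placeholders, and the bisection driver and real error handling are
>>>> left out):
>>>>
>>>>     import qualified Codec.Archive.Tar as Tar
>>>>     import Codec.Archive.Tar.Entry (EpochTime, entryTime)
>>>>     import qualified Data.ByteString.Lazy as BL
>>>>
>>>>     -- Keep only index entries uploaded at or before the cutoff,
>>>>     -- preserving their original order.
>>>>     filterIndex :: EpochTime -> BL.ByteString -> BL.ByteString
>>>>     filterIndex cutoff =
>>>>           Tar.write
>>>>         . Tar.foldEntries keep [] (\err -> error (show err))
>>>>         . Tar.read
>>>>       where
>>>>         keep e rest
>>>>           | entryTime e <= cutoff = e : rest
>>>>           | otherwise             = rest
>>>>
>>>>     main :: IO ()
>>>>     main = do
>>>>       idx <- BL.readFile "00-index.tar"
>>>>       -- 1393372800 is 2014-02-26T00:00:00Z; the real tool would take
>>>>       -- the cutoff (and a bisection mode) from the command line.
>>>>       BL.writeFile "00-index.filtered.tar" (filterIndex 1393372800 idx)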
>>>>
>>>
>>> This approach is never better than PVP compliance.  First of all, the
>>> user may want some packages that are newer than the timestamp, which this
>>> wouldn't support.  As people have already mentioned, it's entirely possible
>>> for valid install graphs to exist that cabal will fail to find if it
>>> doesn't have upper bound information available, because it finds other
>>> *invalid* graphs.
>>>
>>> And even aside from that issue, this would push the work of making sure
>>> that a library is compatible with its dependencies onto the library
>>> *users*, instead of the developer, where it rightfully belongs (and your
>>> proposal ends up pushing even more work onto users!).
>>>
>>> Why do you think it's acceptable for users to do the testing to make
>>> sure that your code works with other packages that your code requires?
>>>
>>
>>  You're not at all addressing the case I described. The case was a legacy
>> project that someone is trying to rebuild. I'm not talking about any other
>> case in this scenario. To repeat myself:
>>
>> > 1. Existing, legacy code, built against some historical version of
>> > Hackage, without information on the exact versions of all deep dependencies.
>>
>> In *that specific case*, why wouldn't having a tool to go back in time
>> and build against a historical version of Hackage be *exactly* what you'd
>> need to rebuild the project?
>>
>
> I had understood people talking about "legacy projects" to mean something
> other than how you read it.  In which case, I would suggest that there is a
> third use case, which IMHO is more important than either of the use cases
> you have identified.  Here's an example:
>
> 1.  package foo-0.1 appears on hackage
> 2.  package bar-0.1 appears on hackage with a dependency on foo >= 0.1
> 3.  awesomeApp-0.1 appears on hackage, which depends on bar-0.1 and
> text >= 1.0
> 4.  users install awesomeApp
> 5.  package foo-0.2 appears on hackage, with lots of breaking changes
> 6.  awesomeApp users notice that it sometimes breaks with Hungarian
> characters, and the problem is traced to an error in text
> 7.  text-1.0.0.1 is released with some bug fixes
> 8.  awesomeApp users attempt to do cabal update; cabal install, which
> fails inscrutably (because it tries to mix foo-0.2 with bar-0.1; made
> concrete just below)
>
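> To make step 8 concrete: bar-0.1's cabal file presumably says nothing
> stronger than (a hypothetical fragment)
>
>     build-depends: base >= 4, foo >= 0.1
>
> so once foo-0.2 is on hackage the solver is entitled to pick it to
> satisfy "foo >= 0.1", even though bar-0.1 only compiles against foo-0.1.
> A "foo >= 0.1 && < 0.2" bound would have forced it back onto the plan
> that actually builds.
>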
> There's nothing in this situation that requires that any of these packages
> be unmaintained.  The problem is that, rather than wanting to reproduce a
> fixed set of package versions (which cabal already allows for if that's
> really desired), sometimes it's desirable that updates be held back in
> active code bases.  Replace "foo" with "QuickCheck", for example (for a
> long time users stayed with QuickCheck 2 because version 3 had major
> performance regressions in certain use cases).
>
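> (By "already allows for" I mean pinning versions by hand, e.g.
>
>     cabal install --constraint='foo == 0.1'
>
> repeated for each package that needs holding back; but expecting end
> users to work out those constraints themselves is exactly the problem.)
>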
> This sort of conflict used to happen *all the time*, and it's very
> frustrating to users (something worked before, now it doesn't, and they're
> generally not in a good position to know why).  It's also annoying to
> reproduce, because the install graph cabal produces depends in part on the
> user's installed packages.  So just because something builds on a
> developer's box doesn't mean it will build on a user's box, and it might
> work for some users but not others (sandboxing has at least helped with
> that problem).
>
>
IIUC, this is *exactly* the case of an unmaintained package. I'm not
advocating leaving a package like bar-0.1 on Hackage without an upper bound
on foo if it's known to break against foo-0.2. For the package to be
properly maintained, the maintainer would have to either (1) make bar work
with foo-0.2, or (2) add an upper bound. So to me, this falls squarely into
the category of unmaintained.
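
For bar, option (2) is a one-line change to bar.cabal (a hypothetical
fragment, reusing John's example names):

    build-depends: base >= 4, foo >= 0.1 && < 0.2

and option (1) is a new bar release that actually compiles against
foo-0.2. Either way, the move belongs to the maintainer, not the user.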

Let me relax my position just a bit. If package maintainers are not going
to be responsive to updates in the Hackage ecosystem, then I agree that
they should use the PVP. I also think they should advertise their packages
as not being actively maintained, and people should try to avoid using them
if possible. But if an author is giving quick updates to packages, I don't
see a huge benefit to the PVP for users, and instead see some downsides
(inability to test against newer dependencies), not to mention the much
higher maintenance burden for library authors.


>>
>>>
>>>> For (2), talking about older versions of a package is not relevant. I
>>>> actively maintain a number of my older package releases, as I'm sure others
>>>> do as well. The issue isn't about *age* of a package, but about
>>>> *maintenance* of a package. And we simply shouldn't be encouraging users to
>>>> start off with an unmaintained version of a package. This is a completely
>>>> separate discussion from the legacy code base case, where, despite the
>>>> valid security and bug concerns Vincent raised, it's likely not worth
>>>> updating to the latest and greatest.
>>>>
>>>
>>> Usually the case is not that somebody *wants* to use an older version of
>>> package 'foo'; it's that they're using some package 'bar' which hasn't yet
>>> been updated to be compatible with the latest 'foo'.  There are all sorts
>>> of reasons this may happen, including big API shifts (e.g. parsec2/parsec3,
>>> OpenGL), poor timing in a maintenance cycle, and the usual worldly
>>> distractions.  But if packages have upper bounds, the user can 'cabal
>>> install', get a coherent package graph, and begin working.  At the very
>>> worst, cabal will give them a clear lead as to what needs to be updated/who
>>> to ping.  This is much better than the situation with no upper bounds,
>>> where a 'cabal install' may fail miserably or even put together code that
>>> produces garbage.
>>>
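>>> (For reference, a PVP-style dependency on a package whose current
>>> version is foo-1.2.3 looks like
>>>
>>>     build-depends: foo >= 1.2.3 && < 1.3
>>>
>>> i.e. anything up to, but excluding, the next major version.)
>>>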
>>> And again, it's the library *user* who ends up having to deal with these
>>> problems.  Upper bounds lead to a better user experience.
>>>
>>
>> I disagree with that assertion. I get plenty of complaints from users
>> about trying to install packages and getting "confusing error messages"
>> about cabal plan mismatches. I don't disagree that the PVP does make the
>> user experience better in some cases. What I disagree with is the
>> implication that it makes the user experience better in *all* cases. This
>> is simply not a black-and-white issue.
>>
>
> That's a straw man; I don't think anyone has argued that they make the
> user experience better in *all* cases.  The PVP helps significantly: it
> avoids especially problematic situations like the one above, and in
> particular it's quite easy for the developer to fix the simple cases,
> unlike the 2006 status quo, when problems required manually solving the
> dependency graph.
>

You said:

> Upper bounds lead to a better user experience.

That's what I'm disagreeing with. I do not believe that, overall, the PVP
is giving users a better experience. I've had a huge downturn in reported
errors with Yesod since I stopped strictly following the PVP. It's
anecdotal, but everything in this thread is really anecdotal.

Michael


More information about the Libraries mailing list