qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Alain O'Dea alain.odea at gmail.com
Wed Feb 26 11:47:32 UTC 2014


> On Feb 26, 2014, at 7:11, Michael Snoyman <michael at snoyman.com> wrote:
> 
>> On Wed, Feb 26, 2014 at 8:03 AM, John Lato <jwlato at gmail.com> wrote:
>>> On Tue, Feb 25, 2014 at 9:25 PM, Michael Snoyman <michael at snoyman.com> wrote:
>>> 
>>> 
>>>> On Wed, Feb 26, 2014 at 1:28 AM, MightyByte <mightybyte at gmail.com> wrote:
>>>> On Tue, Feb 25, 2014 at 4:51 PM, Vincent Hanquez <tab at snarc.org> wrote:
>>>> >
>>>> > I'm not saying this is not painful, but I've done it in the past, and using
>>>> > bisection and educated guesses (for example, not using libraries released
>>>> > after a certain date), you converge pretty quickly on a solution.
>>>> >
>>>> > But the bottom line is that it's not the common use case. I rarely have to
>>>> > dig old unused code.
>>>> 
>>>> And I have code that I would like to have working today, but it's too
>>>> expensive to go through this process.  The code has significant value
>>>> to me and other people, but not enough to justify the large cost of
>>>> getting it working again.
>>> 
>>> 
>>> I think we need to make these cases more concrete to have a meaningful discussion. Between Doug and Gregory, I'm understanding two different use cases:
>>> 
>>> 1. Existing, legacy code, built against some historical version of Hackage, without information on the exact versions of all deep dependencies.
>>> 2. Someone starting a new project who wants to use an older version of a package on Hackage.
>>> 
>>> If I've missed a use case, please describe it.
>>> 
>>> For (1), let's start with the time machine game: *if* everyone had been using the PVP, then theoretically this wouldn't have happened. And *if* the developers had followed proper practice and documented their complete build environment, then PVP compliance would be irrelevant. So if we could go back in time and twist people's arms, no problems would exist. Hurray, we've established that 20/20 hindsight is very nice :).
>>> 
>>> But what can be done today? Actually, I think the solution is a very simple tool, and I'll be happy to write it if people want: cabal-timemachine. It takes a timestamp, and then deletes all cabal files from our 00-index.tar file that represent packages uploaded after that date. Assuming you know the last date of a successful build, it should be trivial to get a build going again. And if you *don't* know the date, you can bisect until you get a working build. (For that matter, the tool could even *include* a bisector in it.) Can anyone picture a scenario where this wouldn't solve the problem even better than PVP compliance?
>> 
>> This scenario is never better than PVP compliance.  First of all, the user may want some packages that are newer than the timestamp, which this wouldn't support.  As people have already mentioned, it's entirely possible for valid install graphs to exist that cabal will fail to find if it doesn't have upper bound information available, because it finds other *invalid* graphs.
>> 
>> And even aside from that issue, this would push the work of making sure that a library is compatible with its dependencies onto the library *users*, instead of the developer, where it rightfully belongs (and your proposal ends up pushing even more work onto users!).
>> 
>> Why do you think it's acceptable for users to do the testing to make sure that your code works with other packages that your code requires?
> 
> You're not at all addressing the case I described. The case was a legacy project that someone is trying to rebuild. I'm not talking about any other case in this scenario. To repeat myself:
> 
> > 1. Existing, legacy code, built against some historical version of Hackage, without information on the exact versions of all deep dependencies.
> 
> In *that specific case*, why wouldn't having a tool to go back in time and build against a historical version of Hackage be *exactly* what you'd need to rebuild the project?
>  
>>> 
>>> For (2), talking about older versions of a package is not relevant. I actively maintain a number of my older package releases, as I'm sure others do as well. The issue isn't about *age* of a package, but about *maintenance* of a package. And we simply shouldn't be encouraging users to start off with an unmaintained version of a package. This is a completely separate discussion from the legacy code base, where, despite the valid security and bug concerns Vincent raised, it's likely not worth updating to the latest and greatest.
>> 
>> Usually the case is not that somebody *wants* to use an older version of package 'foo', it's that they're using some package 'bar' which hasn't yet been updated to be compatible with the latest 'foo'.  There are all sorts of reasons this may happen, including big API shifts (e.g. parsec2/parsec3, openGL), poor timing in a maintenance cycle, and the usual worldly distractions.  But if packages have upper bounds, the user can 'cabal install', get a coherent package graph, and begin working.  At the very worst, cabal will give them a clear lead as to what needs to be updated/who to ping.  This is much better than the situation with no upper bounds, where a 'cabal install' may fail miserably or even put together code that produces garbage.
>> 
>> And again, it's the library *user* who ends up having to deal with these problems.  Upper bounds lead to a better user experience.
> 
> I disagree with that assertion. I get plenty of complaints from users about trying to install packages and getting "confusing error messages" about cabal plan mismatches. I don't disagree that the PVP does make the user experience better in some cases. What I disagree with is the implication that it makes the user experience better in *all* cases. This is simply not a black-and-white issue.
> 
> Michael

This is not a new problem.

Java users faced it with Maven and it was solved by curation of Maven Central and the ability to add outside repositories as needed.

Node.js users faced it with NPM and solved it with dependency freezing.

Ruby users faced it with Gem and solved it with dependency freezing.

I imagine there is a world of different solutions to this problem.  The PVP isn't a complete solution, but I consider it a sensible baseline (like code style conventions and warning-free builds), and it appears to me to be in line with best practices from the packaging systems of many other languages.
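To make that baseline concrete, here is what PVP-style bounds look like in a package's build-depends (the package names and version numbers are illustrative, not taken from any real package):

```
-- Illustrative only: under the PVP, each dependency gets a lower bound
-- at the oldest version known to work and an upper bound at the next
-- major version, since a major bump is allowed to break the API.
build-depends: base       >= 4.6  && < 4.7
             , bytestring >= 0.10 && < 0.11
             , text       >= 0.11 && < 0.12
```

With bounds like these, cabal can refuse install plans that pair a library with a major version of a dependency it was never tested against.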

What follows is my opinion, and it comes from a position of relative inexperience with Haskell and considerable experience operating on other language communities.

I feel that the PVP should be encouraged and that violations should be treated as bugs, which users and concerned community members report to maintainers.  I support having a curated stable set of packages, since it serves an immediate need, possibly alongside an alternative repository with PVP-only/gated curation.  I believe Hackage should continue to exist as is (without gated curation) to keep new libraries easy to publish and share.  And I think standard dependency-freezing metadata and tools should be defined, with users encouraged to employ them.
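One possible shape for such freezing metadata, sketched by analogy with a Gemfile.lock or npm shrinkwrap file (the syntax mirrors cabal's cabal.config constraints; the versions are invented):

```
-- Hypothetical freeze file: pin every dependency, transitive ones
-- included, to the exact versions of a known-good build, so the same
-- install plan can be reproduced later on another machine.
constraints: base       ==4.6.0.1,
             bytestring ==0.10.0.2,
             text       ==0.11.3.1
```

A file like this complements, rather than replaces, PVP bounds: bounds keep the solver honest for fresh installs, while the freeze file makes a specific historical build reproducible.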

One way or another -- as a user of Haskell -- I would benefit significantly from standard answers to these problems and good examples from the community leadership to follow.
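As one concrete example of such tooling, Michael's cabal-timemachine suggestion upthread could be roughly sketched as filtering the package index by date. Everything below is illustrative: the directory layout stands in for an unpacked 00-index.tar, the dates and package names are made up, and GNU find/touch/tar are assumed.

```shell
# Rough sketch of the "cabal-timemachine" idea: keep only the .cabal
# files uploaded on or before a cutoff date, then repack the index.
CUTOFF="2014-01-01"

# Stand-in for an unpacked 00-index.tar: one old and one new release.
mkdir -p index/foo/1.0 index/foo/1.1
printf 'name: foo\nversion: 1.0\n' > index/foo/1.0/foo.cabal
printf 'name: foo\nversion: 1.1\n' > index/foo/1.1/foo.cabal
touch -d '2013-06-01' index/foo/1.0/foo.cabal   # "uploaded" before the cutoff
touch -d '2014-02-15' index/foo/1.1/foo.cabal   # "uploaded" after the cutoff

# The time machine: delete every .cabal file newer than the cutoff,
# then repack the trimmed index for cabal to consume.
find index -name '*.cabal' -newermt "$CUTOFF" -delete
tar -cf 00-index.tar index
```

A real tool would operate on the actual Hackage index and probably wrap this in the bisection loop Michael describes, but the core operation is no more than a date filter over index entries.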

Best,
Alain