Why upper bound version numbers?
Alan & Kim Zimmerman
alan.zimm at gmail.com
Thu Jun 9 08:07:53 UTC 2016
I think "hard" upper bounds would come about in situations where a new
version of a dependency is released that breaks things in a package, so
until the breakage is fixed a hard upper bound is required. Likewise for
hard lower bounds.
And arguments that "it shouldn't happen with the PVP" don't hold, because
it does happen: following the PVP is a matter of human judgement.
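As a concrete sketch, a .cabal file might pin a dependency below a known-breaking release. The package name "foo" and all versions here are invented for illustration:

```cabal
build-depends:
    base >= 4.8 && < 4.10
    -- "hard" bound: foo-1.3 is known to break this package,
    -- so stay below it until the breakage is fixed
  , foo  >= 1.2 && < 1.3
```

Today nothing in the .cabal syntax distinguishes this from a speculative ("soft") PVP bound, which is exactly the ambiguity under discussion.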
On Thu, Jun 9, 2016 at 10:01 AM, Erik Hesselink <hesselink at gmail.com> wrote:
> What do you expect will be the distribution of 'soft' and 'hard' upper
> bounds? In my experience, all upper bounds currently are 'soft' upper
> bounds. They might become 'hard' upper bounds for a short while after
> e.g. a GHC release, but in general, if a package maintainer knows that
> a package fails to work with a certain version of a dependency, they
> fix it.
> So it seems to me that this is not so much a choice between 'soft' and
> 'hard' upper bounds, but a choice on what to do when you can't resolve
> dependencies in the presence of the current (upper) bounds. Currently,
> as you say, we give pretty bad error messages. The alternative you
> propose (just try) currently often gives the same result in my
> experience: bad error messages, in this case not from the solver, but
> unintelligible compiler errors in an unknown package. So it seems the
> solution might just be one of messaging: make the initial resolver
> error much friendlier, and give a suggestion to use e.g.
> --allow-newer=foo. The opposite might also be interesting to explore:
> if installing a dependency (so not something you're developing or
> explicitly asking for) fails to install and doesn't have an upper
> bound, suggest something like --constraint=foo<x.y.
> Do you have different experiences regarding the number of 'hard' upper
> bounds that exist?
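The two suggestions above correspond to real cabal-install flags. A hedged sketch of what those friendlier hints might point users at (the package name "foo" and the version are placeholders):

```shell
# Relax an over-tight upper bound on a single dependency:
cabal install --allow-newer=foo

# Pin an unbounded dependency below a breaking release:
cabal install --constraint='foo < 1.2'
```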
> On 8 June 2016 at 22:01, Michael Sloan <mgsloan at gmail.com> wrote:
> > Right, part of the issue with having dependency solving at the core of
> > the workflow is that you never really know who's to blame. When running
> > into this circumstance, either:
> > 1) Some maintainer made a mistake.
> > 2) Some maintainer did not have perfect knowledge of the future and has
> > not yet updated some upper bounds. Or, upper bounds didn't get
> > retroactively bumped (the usual case).
> > 3) You're asking cabal to do something that can't be done.
> > 4) There's a bug in the solver.
> > So the only thing to do is to say "something went wrong". In a way it is
> > similar to type inference: it is difficult to give specific, concrete
> > messages without making some arbitrary choices about which constraints
> > got pushed around.
> > I think upper bounds could potentially be made viable by having both hard
> > and soft constraints. Until then, people are putting 2 meanings into one
> > thing. By having the distinction, I think cabal-install could provide
> > better errors than it does currently. This has come up before, but I'm
> > not sure what came of those discussions. My thoughts on how this would
> > work:
> > * The dependency solver would prioritize hard constraints, and tell you
> > which soft constraints need to be lifted. I believe the solver even
> > has support for this. Stack's integration with the solver will actually
> > first try to get a plan that doesn't override any snapshot versions, by
> > specifying them as hard constraints. If that doesn't work, it tries
> > again with soft constraints.
> > * "--allow-soft" or something would ignore soft constraints. Ideally
> > would be selective on a per package / upper vs lower.
> > * It may be worth having the default be "--allow-soft" and being noisy
> > about which constraints got ignored. Then, you could have a flag that
> > forces following soft bounds.
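The two-pass behaviour sketched in the bullets above can be illustrated with a toy model. This is not cabal-install's actual solver: versions are plain Ints, there is a single flat list of bounds, and the first acceptable version wins (a real solver would prefer the newest and walk a whole dependency graph). All names are invented:

```haskell
import Data.List (find)

-- A bound is an inclusive version range tagged hard or soft.
data Strictness = Hard | Soft deriving (Eq, Show)
data Bound = Bound { strictness :: Strictness, lowV, highV :: Int }
  deriving (Eq, Show)

inRange :: Int -> Bound -> Bool
inRange v b = lowV b <= v && v <= highV b

-- Two-pass solve: first honour every bound; if nothing fits, retry
-- honouring only the hard bounds, and report which soft bounds had
-- to be lifted so the error message can name them.
solve :: [Int] -> [Bound] -> Maybe (Int, [Bound])
solve versions bounds =
  case pick (const True) of
    Just v  -> Just (v, [])        -- a plan with no lifting needed
    Nothing ->
      case pick ((== Hard) . strictness) of
        Just v  -> Just (v, [ b | b <- bounds
                                , strictness b == Soft
                                , not (v `inRange` b) ])
        Nothing -> Nothing         -- even the hard bounds conflict
  where
    pick keep = find (\v -> all (inRange v) (filter keep bounds)) versions
```

In the lifted case the caller gets back exactly the soft bounds that were ignored, which is the information a friendlier resolver error (or a noisy "--allow-soft" default) would need to print.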
> > I could get behind upper bounds if they allowed maintainers to actually
> > communicate their intention, and if we had good automation for their
> > maintenance. As is, putting upper bounds on everything seems to cause
> > more problems than it solves.
> > -Michael
> > On Wed, Jun 8, 2016 at 1:31 AM, Ben Lippmeier <benl at ouroborus.net> wrote:
> >> On 8 Jun 2016, at 6:19 pm, Reid Barton <rwbarton at gmail.com> wrote:
> >>> Suppose you maintain a library that is used by a lot of first year uni
> >>> students (like gloss). Suppose the next GHC version comes around and
> >>> your library hasn't been updated yet because you're waiting on some
> >>> dependencies to get fixed before you can release your own. Do you want
> >>> your students to get a "cannot install on this version" error, or some
> >>> confusing build failure which they don't understand?
> >> This is a popular but ultimately silly argument. First, cabal dependency
> >> solver error messages are terrible; there's no way a new user would
> >> figure out from a bunch of solver output about things like
> >> "base-4.9.0.0" and "Dependency tree exhaustively searched" that the
> >> solution is to build with an older version of GHC.
> >> :-) At least “Dependency tree exhaustively searched” sounds like it’s
> >> the maintainer’s problem. I prefer the complaints to say “can you please
> >> bump the bounds on this package” rather than “your package is broken”.
> >> Ben.
> >> _______________________________________________
> >> ghc-devs mailing list
> >> ghc-devs at haskell.org
> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs