Abstract FilePath Proposal

Sven Panne svenpanne at gmail.com
Sat Jul 4 19:26:31 UTC 2015


2015-07-04 4:28 GMT+02:00 Carter Schonwald <carter.schonwald at gmail.com>:

> [...] What fraction of currently buildable Hackage breaks with such an
> API change, and how complex will fixing those breaks be?  [...]
>

I think how complex fixing the breakage is is largely irrelevant: it will
probably almost always be trivial, but that's not the point. Think e.g. of
a package which hasn't needed any update for a few years, whose maintainer
is inactive (there was nothing to do recently, so that's OK), and which is
a transitive dependency of a number of other packages. This will
effectively mean lots of broken packages for weeks or even longer. Fixing
the breakage from the AMP or FTP proposals was trivial, too, but
nevertheless a bit painful.
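(To illustrate what "trivial" means here — a minimal sketch of a typical
AMP fix, with a hypothetical Ident type standing in for whatever type a
package happens to define:)

    newtype Ident a = Ident a   -- hypothetical example type

    -- Pre-AMP, this Monad instance alone was enough:
    instance Monad Ident where
      return = Ident
      Ident x >>= f = f x

    -- The AMP made Applicative a superclass of Monad, so packages
    -- had to add these two instances as well:
    instance Functor Ident where
      fmap f (Ident x) = Ident (f x)

    instance Applicative Ident where
      pure = Ident
      Ident f <*> Ident x = Ident (f x)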

> This should be evaluated.  And to what extent can the appropriate
> migrations be mechanically assisted?
> Would some of this breakage be mitigated by changing ++ to be a monoid
> or semigroup merge?
>
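(For reference, a sketch of what generalizing (++) could look like — this
assumes the Semigroup class as shipped in the semigroups package / base >=
4.9, and is an illustration rather than a concrete proposal:)

    import Prelude hiding ((++))
    import Data.Semigroup (Semigroup((<>)))

    -- (++) as a synonym for the Semigroup merge instead of the
    -- list-specific append:
    (++) :: Semigroup a => a -> a -> a
    (++) = (<>)

    strings :: String
    strings = "foo" ++ "bar"         -- lists/Strings work as before

    maybes :: Maybe [Int]
    maybes = Just [1] ++ Just [2]    -- now any Semigroup works
                                     -- (result: Just [1,2])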

To me the fundamental question, which should be answered before any
question of detail, is: should we go on continuously breaking minor things
(i.e. basically give up any stability guarantees), or should we first
collect a bunch of changes (leaving vital things untouched in the
meantime) and release all those changes together, at longer intervals?
That's IMHO a tough question which we have somehow avoided answering up to
now. I would like to see a broader discussion like this first; both
approaches have their pros and cons, and whatever we do, there should be
some kind of consensus behind it.

Cheers,
   S.

P.S.: Just for the record: I'm leaning towards the
"lots-of-changes-after-a-longer-time" approach; otherwise I see a flood of
#ifdefs and tons of failing builds coming our way... :-P
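(A sketch of the kind of CPP guard meant here, using the MIN_VERSION_base
macro that Cabal defines during a build:)

    {-# LANGUAGE CPP #-}
    module Compat (pairUp) where

    #if !MIN_VERSION_base(4,8,0)
    -- Before base 4.8 (GHC 7.10), Applicative was not in the Prelude.
    import Control.Applicative (Applicative, (<$>), (<*>))
    #endif

    -- The same code then builds on both sides of the AMP boundary.
    pairUp :: Applicative f => f a -> f b -> f (a, b)
    pairUp x y = (,) <$> x <*> y

Multiply that by every affected module and every breaking base release,
and the "flood" becomes clear.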