[Haskell-beginners] coming to grips with hackage
tam at hiddenrock.com
Wed Feb 5 03:29:56 UTC 2014
Hi there --
I've groveled through the Hackage site, I've read a bunch of cabal
documentation, and I've read far more blog entries, mailing list posts, and
other things turned up by Google than I can count. Please forgive me if my
confusions have been addressed elsewhere; if they have, I would appreciate a
pointer.
I'm having trouble wrapping my head around how to work with Hackage in the
context of a Linux distribution that already has its own package management
situation. Between distribution-provided packages containing Haskell programs
and/or libraries, system-wide cabal installs, user-specific cabal installs,
and cabal sandboxes, I can't seem to work out how the Hackage/cabal architects
intend people (i.e., me) to juggle things.
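For concreteness, these are the layers I mean, as ghc-pkg sees them (the paths
are typical defaults, not guarantees):

```shell
ghc-pkg list --global   # distro-provided libraries, typically under /usr/lib/ghc-*/
ghc-pkg list --user     # cabal-installed libraries, registered under ~/.ghc
```

and on top of those, each sandbox carries its own private package database.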
Here's a potential scenario: I want to use pandoc, for which my distro
provides packages, so I install it along with its various and sundry
dependencies using the package manager. Then I want to use another program
for which my distro *doesn't* provide a package, so I use cabal-install to
install it along with its various and sundry dependencies in ~/.cabal. Then I
have a crazy idea I want to play around with in code. Initially, I've got all
the libraries I need already installed, but eventually I discover there's a
feature in foo-1.1.3.4 that my idea just can't live without. Unfortunately,
foo-1.0.5.12 is required by pandoc.
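To make the conflict concrete, here's a sketch of what my little idea's cabal
file might look like (my-idea and foo are placeholder names from the scenario
above):

```cabal
-- my-idea.cabal (hypothetical fragment)
name:           my-idea
version:        0.1.0.0
build-type:     Simple

executable my-idea
  main-is:       Main.hs
  -- I need the new feature from 1.1.3.4, but pandoc's install has
  -- already pinned foo-1.0.5.12 in the user package database
  build-depends: base, foo >= 1.1.3.4
```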
At this point, it would seem like cabalizing my little idea and letting it
play by itself in a cabal sandbox is the preferred method. This seems
reasonable, I say to myself, and proceed to recompile every single library
upon which my little idea (recursively) depends. But then pandoc gets
updated, as well as many of the libraries upon which it depends, which results
in a wholesale upgrade of most of the system-wide libraries and now the stuff
I've installed in my home directory has all manner of broken dependencies.
There's probably a cabal-install command I can run at this point to clean
things up (though presumably I'd have to run it for every source tree that
uses a library that has been upgraded) but that's beside the point.
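For reference, the sandbox dance I mean is roughly this (assuming cabal-install
1.18 or later, where sandboxes landed):

```shell
cd ~/src/my-idea
cabal sandbox init                 # creates .cabal-sandbox/ with its own package db
cabal install --only-dependencies  # rebuilds every recursive dependency locally
cabal build
```

It's that install step that recompiles, and duplicates, the whole dependency
tree per sandbox.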
My confusion is how this scales. Enough little ideas that depend on
non-trivial libraries and I'm going to have eleventy-billion extremely similar
copies of Data.Text littered about my home directory. Is this really the
intention? Did I miss something? Are there better ``best practices'' than
what I've described? Any guidance is most appreciated! Thank you.
pete
PS. Bonus question, which seems intimately related: if A depends on B and a
new version of B appears, why does A necessarily need to be recompiled? Isn't
one of the big wins of shared objects avoiding this very necessity?
Presumably this is a consequence of how the GHC runtime does its linking, but
I couldn't find helpful documentation on this count.