What's next?
Simon Peyton-Jones
simonpj at microsoft.com
Fri Sep 6 09:36:27 CEST 2013
| Let me start by saying that I'm in principle for moving to a Nix-style
| package DB (i.e. multiple instances of the same package and version).
...
| The problem is at least harder to solve than sandboxing. The reason I
| had Mikhail implement sandboxes during GSoC is that I was skeptical
| that we could make the Nix-solution work during a GSoC project, not
| because the student didn't do a good job but because it's hard and
| requires lots of expertise.
This is the bit I still find hard to get my head round. The basic idea is so simple that, if someone (not me, not Johan) had the time to think about it carefully, I bet there would be a good path.
But I could be wrong, and I have thought about it less than Johan.
Simon
| If we magically could find time for Duncan and
| Simon M to work on it, I think we could do it. I personally don't have
| time to do any cabal work except for managing the project.
|
| The main downside with sandboxes compared to Nix-style package DBs is
| that every new sandbox will induce one long build when all
| dependencies are built. This takes a couple of minutes. After that
| build, sandboxes and Nix packages should behave about the same
| (correct me if I'm wrong). At that point incremental build times
| matter more, as they make the edit-compile-test cycle longer. Avoiding
| extra work (unnecessary linking) and doing things faster (more
| parallelism) help here.
|
| Cheers,
| Johan
|
| On Thu, Sep 5, 2013 at 12:15 AM, Simon Peyton-Jones
| <simonpj at microsoft.com> wrote:
| > Can I ask what the Cabal team's position is with respect to the
| > question of allowing the same package to be installed several times,
| > each compiled against a different collection of dependencies?
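| >
| > (For concreteness: "several times" means the package DB could hold,
| > side by side, e.g.
| >
| >     foo-1.0 compiled against bar-1.0
| >     foo-1.0 compiled against bar-1.1
| >
| > with each instance identified by something richer than just
| > name-plus-version, such as a hash of its dependencies. The names here
| > are made up, purely for illustration.)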
| >
| > This is the problem that you built sandboxes to work around; the "What
| > are sandboxes and why are they needed" section of
| > http://coldwa.st/e/blog/2013-08-20-Cabal-sandbox.html describes the
| > problem well.
| >
| > It would be possible for Cabal to continue to work around deficiencies
| > in GHC, but wouldn't it be better for us to work together to fix the
| > underlying problem once and for all?
| >
| > Simon wrote a wiki page about this. I think this is it
| >
| > http://ghc.haskell.org/trac/ghc/wiki/Commentary/Packages/MultiInstances
| > though it's in a bit less detail than I expected, so there may be
| > something else.
| >
| > In my limited understanding, this single change would do more to
| > alleviate "cabal hell" so extensively written about than anything
| > else we can do. It would require changes in both Cabal and GHC. It
| > can't be that hard... shall we just do it? Isn't it more important,
| > for our users, than the other things Johan lists below?
| >
| > Simon
| >
| >
| >
| > | -----Original Message-----
| > | From: cabal-devel [mailto:cabal-devel-bounces at haskell.org] On
| > | Behalf Of Johan Tibell
| > | Sent: 05 September 2013 05:14
| > | To: cabal-devel at haskell.org
| > | Subject: What's next?
| > |
| > | Hi all,
| > |
| > | With 1.18 out the door it's time to look towards the future. Here
| > | are the major themes I'd like to see us work on next:
| > |
| > | ## Faster builds
| > |
| > | There are several interesting things we could do and are doing here.
| > |
| > | * Avoid relinking if possible. This reduces incremental build times
| > | (i.e. when you run cabal build after making some change) by avoiding
| > | relinking e.g. all the test suites and/or benchmarks. See
| > | https://github.com/haskell/cabal/pull/1410 for some in-progress work.
| > |
| > | * Build components, and different ways (e.g. profiling), in parallel.
| > | We could build both profiling and non-profiling versions in parallel.
| > | We could also build e.g. all test suites in parallel. The key
| > | challenge here is to coordinate all parallel jobs so we don't spawn
| > | too many. See https://github.com/haskell/cabal/pull/1413
| > |
| > | * Build modules in parallel. This fine granularity would let us
| > | make building a single package faster, which is the most common case
| > | after all. There has been some GSoC work here e.g.
| > | http://hackage.haskell.org/trac/ghc/ticket/910 and some work by
| > | Mikhail.
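| > |
| > | (That ticket is about giving ghc --make a -j flag, so that the
| > | modules of a single package can be compiled on several cores:
| > |
| > | $ ghc --make -j4 Main.hs
| > |
| > | Once that lands, cabal build could pass the flag through.)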
| > |
| > | ## Do the right thing automatically
| > |
| > | The focus here should be on avoiding manual steps that cabal could do
| > | for the user.
| > |
| > | * Automatically install dependencies when needed. When `cabal build`
| > | would fail due to a missing dependency, just install this dependency
| > | instead of bugging the user to do it. This will probably have to be
| > | limited to sandboxes, where we can't break the user's system.
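| > |
| > | (In effect, cabal build would do behind the scenes roughly what
| > | users type by hand today:
| > |
| > | $ cabal install --only-dependencies
| > | $ cabal build
| > |
| > | just without the first, manual step.)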
| > |
| > | * GHCi support could be improved by rebinding :reload to rerun e.g.
| > | preprocessors automatically. This would enable users to develop
| > | completely from within ghci (i.e. a faster edit-save-type-error
| > | cycle). We have most of what we need here (i.e. GHC macro support),
| > | but someone needs to make the final change to generate a .ghci file
| > | to pass to the ghci invocation.
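| > |
| > | For illustration only, the generated .ghci could be as small as a
| > | redefinition of :reload (:def and the :: escape to the built-in
| > | command already exist in ghci; running a full `cabal build` to rerun
| > | the preprocessors is just a stand-in here):
| > |
| > | -- hypothetical generated .ghci
| > | :def reload \_ -> return ":!cabal build\n::reload"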
| > |
| > | ## Support large projects
| > |
| > | We need to better support projects with tens or hundreds of packages.
| > | As projects grow it's natural to split them up into a bunch of
| > | packages (libraries and executables). A single change to the project
| > | might touch several of those packages (e.g. a supporting library and
| > | an executable), and developers need to be able to
| > |
| > | * build several packages that have local changes, and
| > | * share changes to packages with other developers without making a
| > | release to some local Hackage server every time.
| > |
| > | Both can be done by having a single source repo with all the
| > | packages and using `cabal sandbox add-source` to make sure they get
| > | built when needed. However, that method only scales up to a handful
| > | of packages.
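| > |
| > | For concreteness, that per-package workflow looks roughly like this
| > | today, run from inside one of the packages (names as in the example
| > | further down):
| > |
| > | $ cd exe-pkg1
| > | $ cabal sandbox init
| > | $ cabal sandbox add-source ../lib-pkg1
| > | $ cabal install --only-dependencies
| > | $ cabal build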
| > |
| > | I think we want to think about moving away from a world where all
| > | cabal commands are run in the context of some "current" package
| > | (i.e. in the current working directory). I think we want to be able
| > | to stand in the root directory of some source repo and build things
| > | from there.
| > | Example:
| > |
| > | $ git clone git://my-company/repo my-project
| > | $ cd my-project
| > | $ ls
| > | lib-pkg1 lib-pkg2 exe-pkg1 exe-pkg2 ...
| > | $ cabal sandbox init # or something similar
| > | $ edit lib-pkg1/Lib.hs
| > | $ edit exe-pkg1/Exe.hs
| > | $ cabal build exe-pkg1 # picks up changes to lib-pkg1
| > |
| > | This has implications for many things, e.g. where the .cabal-sandbox
| > | and the dist directories are kept. Perhaps dist would have a
| > | subdirectory per package (right now we do something similar for
| > | sandbox dependencies).
| > |
| > | I imagine that the syntax for e.g. cabal build would have to be
| > | extended to
| > |
| > | cabal build [[DIR':']COMPONENT]
| > |
| > | Example:
| > |
| > | cabal build lib-pkg1:some-component
| > |
| > | Similar for `cabal test` and `cabal bench`.
| > |
| > | Cheers,
| > | Johan
| > |
| > | _______________________________________________
| > | cabal-devel mailing list
| > | cabal-devel at haskell.org
| > | http://www.haskell.org/mailman/listinfo/cabal-devel