Specifying dependencies on Haskell code
Thomas Schilling
nominolo at googlemail.com
Thu May 1 18:28:52 EDT 2008
On 20 Apr 2008, at 22:22, Duncan Coutts wrote:
> All,
>
> In the initial discussions on a common architecture for building
> applications and libraries one of the goals was to reduce or eliminate
> untracked dependencies. The aim being that you could reliably deploy a
> package from one machine to another.
>
> We settled on a fairly traditional model, where one specifies the names
> and versions of packages of Haskell code.
>
> An obvious alternative model is embodied in ghc --make and in autoconf
> style systems where you look in the environment not for packages but
> rather for specific modules or functions.
>
> Both models have passionate advocates. There are of course advantages
> and disadvantages to each. Both models seem to get implemented as
> reactions to having the other model inflicted on the author. For example
> the current Cabal model of package names and versions was a reaction to
> the perceived problem of untracked dependencies with the ghc --make
> system. One could see implementations such as searchpath and franchise
> as reactions in the opposite direction.
>
> The advantages and disadvantages of specifying dependencies on module
> names vs package names and versions are mostly inverses. Module name
> clashes between packages are problematic with one system and not a
> problem with the other. Moving modules between packages is not a problem
> for one system and a massive pain for the other.
>
> The fact is that both module name and package name + version are being
> used as proxies to represent some vague combination of required Haskell
> interface and implementation thereof. Sometimes people intend only to
> specify an interface and sometimes people really want to specify
> (partial) semantics (e.g. to require a version of something including some
> bug fix / semantic change). In this situation the package version is
> being used to specify an implementation as a proxy for semantics.
>
> Neither are very good ways of identifying an interface or
> implementation/semantics. Modules do move from one package to another
> without fundamentally changing. Modules do change interface and
> semantics without changing name. There is no guarantee about the
> relationship between a package's version and its interface or semantics,
> though there are some conventions.
>
> Another view would be to try and identify the requirements about
> dependent code more accurately. For example to view modules as functors
> and look at what interface they require of the modules they import. Then
> we can say that they depend on any module that provides a superset of
> that interface. It doesn't help with semantics of course. Dependencies
> like these are not so compact and easy to write down.
>
> I don't have any point here exactly, except that there is no obvious
> solution. I guess I'd like to provoke a bit of a discussion on this,
> though hopefully not just rehashing known issues. In particular if
> people have any ideas about how we could improve either model to address
> their weak points then that'd be well worth discussing.
>
> For example the package versioning policy attempts to tighten the
> relationship between a package version and changes in its interface and
> semantics. It still does not help at all with modules moving between
> packages.
>
> Duncan
>
[Replying so late as I only saw this today.]
I believe that using tight version constraints in conjunction with
the PVP is a good solution. For now.
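For concreteness, a tight-bounds dependency in a .cabal file might
look something like this (package names and version numbers are
invented for illustration):

```cabal
build-depends:
  base       >= 3.0 && < 3.1,
  containers >= 0.1 && < 0.2
```

Under the PVP convention, the exclusive upper bound excludes the next
version that is allowed to change the interface.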
I don't quite know how Searchpath works (the website is rather
taciturn), but I think that we should strive for a better
approximation to real dependencies, specifically the name, interface,
and semantics of imported functions. As I see it, what's missing is
proper tool support to make this practical for both library authors
and users.
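To sketch what depending on an interface (rather than a package name)
could mean in plain Haskell: a module can be written against a record
of the functions it requires, and any package providing a superset of
that interface can satisfy the dependency. This is purely
illustrative; none of these names exist in any real package.

```haskell
-- Illustrative only: a dependency expressed as the interface a module
-- requires, rather than as a package name and version.

-- The interface our code needs from its "imported module".
data StringLike s = StringLike
  { emptyS  :: s
  , appendS :: s -> s -> s
  , lengthS :: s -> Int
  }

-- Client code depends only on the interface, not on a concrete module.
doubledLength :: StringLike s -> s -> Int
doubledLength impl x = lengthS impl (appendS impl x x)

-- Any provider of (a superset of) the interface will do; here, lists.
listStringLike :: StringLike [a]
listStringLike = StringLike [] (++) length
```

The same client code would work unchanged if the implementation moved
to another package, which is exactly the case that breaks package-name
dependencies.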
Library users really shouldn't need to do anything except run a
tool to determine all dependencies of a given package. Library
authors should be able to run a tool that determines what's new and
what might have changed. The package author then merely decides
whether the semantics have changed and, if so, in what way (i.e.,
compatible with previous semantics or not). Packages will still carry
versions, but they are only used to mark changes. Semantic
information is provided via a "change database" which contains enough
information to determine whether a version of a package contains
appropriate implementations of the functions (or, more generally,
entities) used in a dependent package.
For example, suppose we write a program that uses the function 'Foo.foo'
from package 'foo', and we happen to have used 'foo-0.42' when
testing our program. Then, given the knowledge that 'Foo.foo' was
introduced in 'foo-0.23' and changed semantics in 'foo-2.0', we
know that 'foo >= 0.23 && < 2.0' is the correct and complete
dependency description.
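As a sketch of how such a change database might drive the range
computation (all names here, FunHistory and friends, are invented for
illustration; no such tool or API exists):

```haskell
-- A sketch of one "change database" entry and the version-range
-- computation it would enable.
import Data.List (sort)

type Version = [Int]               -- e.g. [0,23] stands for 0.23

data FunHistory = FunHistory
  { introducedIn    :: Version     -- first version providing the function
  , semanticChanges :: [Version]   -- versions that changed its semantics
  }

-- Given the version we tested against and a function's history, compute
-- the widest range with known-compatible semantics: a lower bound and
-- an optional exclusive upper bound.
compatibleRange :: Version -> FunHistory -> (Version, Maybe Version)
compatibleRange tested (FunHistory intro changes) = (lower, upper)
  where
    -- first semantic change strictly after the tested version, if any
    upper = case sort [v | v <- changes, v > tested] of
              []      -> Nothing
              (v : _) -> Just v
    -- most recent semantic change at or before the tested version,
    -- falling back to the version that introduced the function
    lower = case sort [v | v <- changes, v <= tested] of
              [] -> intro
              vs -> last vs
```

With 'foo' tested at 0.42, 'Foo.foo' introduced at 0.23 and
semantically changed at 2.0, this yields exactly the range
'foo >= 0.23 && < 2.0' from the example above.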
That's the ideal; maybe we can work towards it?
Or does this sound crazy?
/ Thomas
--
"Today a young man on acid realized that all matter is merely energy
condensed to a slow vibration, that we are all one consciousness
experiencing itself subjectively, there is no such thing as death,
life is only a dream, and we are the imagination of ourselves." --
Bill Hicks