[Haskell-cafe] decoupling type classes

Dominique Devriese dominique.devriese at cs.kuleuven.be
Thu Jan 12 09:53:23 CET 2012


Yin,

2012/1/12 Yin Wang <yinwang0 at gmail.com>:
> I have an idea about type classes that I have been experimenting
> with. It appears to be a generalization of Haskell's type classes
> and seems to be doable. It seems to relate three ideas: type
> classes, implicit parameters, and (typed) dynamic scoping. But I
> don't know whether it is good or not. I hope to get some opinions
> before going further.

I find your ideas interesting. You may be interested in a related
design which I recently implemented for Agda [2], and an ICFP 2011
paper that presents it [1].

Also, you don't seem to have thought about the question of parametric
instances: do you allow them or not, and if you do, how much
computational power do they get, etc.?
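
To be concrete about what I mean by a parametric instance, here is a
sketch in current Haskell, using a hypothetical Pretty class:

    class Pretty a where
      pretty :: a -> String

    instance Pretty Int where
      pretty = show

    -- A parametric instance: resolving Pretty [a] requires
    -- recursively resolving Pretty a. Allowing these is where the
    -- question of computational power arises.
    instance Pretty a => Pretty [a] where
      pretty xs = "[" ++ concatMap pretty xs ++ "]"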

> I have an experimental system which “decouples” the dictionary.
> Instead of passing a dictionary around, it passes individual
> “implicit parameters” around. Those implicit parameters are
> type-inferred, and they can contain type parameters just like
> methods in a type class. Similarly, they are resolved by their
> types in the call site's scope.

I'm surprised that you propose passing all "type class" methods
separately. It seems to me that for many type classes, you want to
impose a certain correspondence between the types of the different
methods in a type class (for example, for the Monad class, you would
expect return to be of type (a -> m a) if (>>=) is of type (m a -> (a
-> m b) -> m b)). Wouldn't inferring these relations in each function
that uses any of the methods lead to overly general inferred types,
and to the need for more guidance to the type inferencer?
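
To illustrate with a rough sketch (simulating separate implicit
parameters with ordinary explicit arguments), plain type inference
has nothing that ties the two methods to the same monad:

    -- inferred: weird :: (a -> b) -> (t -> (a -> b) -> c) -> t -> c
    -- Nothing forces ret and bind to talk about the same type
    -- constructor, so this is far more general than the intended
    -- Monad m => m a -> m a.
    weird ret bind m = bind m (\x -> ret x)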

By separating the methods, you would also lose the laws that associate
methods in a type class, right?

An alternative to what you suggest is the approach I recommend for
using instance arguments: wrap all the methods in a standard data
type (i.e. define the dictionary explicitly), and pass this around as
an implicit argument.
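
In Haskell terms, a minimal sketch of that alternative would be
(using RankNTypes; in Agda, instance arguments play the role of the
implicit argument):

    {-# LANGUAGE RankNTypes #-}

    -- The dictionary as an ordinary record; note that the field
    -- types share the parameter m, which is exactly the
    -- correspondence between methods mentioned above.
    data MonadDict m = MonadDict
      { ret  :: forall a. a -> m a
      , bind :: forall a b. m a -> (a -> m b) -> m b
      }

    -- One dictionary value, for lists:
    listMonad :: MonadDict []
    listMonad = MonadDict { ret = \x -> [x], bind = flip concatMap }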

> The convenience of this approach compared to Haskell’s type classes is
> that we no longer require a user of a type class to define ALL the
> methods in a type class. For example, a user could just define a
> method + without defining other methods in the Num class: -, *, … He
> can use the method + independently. For example, if + is defined on
> the String type to be concatenation, we can use + in another function:
>
> weirdConcat x y = x + y + y
>
> This is useful, because none of the methods in the Num class “make
> sense” for Strings except +, yet the current type class design
> requires us to define them all. Note that weirdConcat will not have
> the type (Num a) => a -> a -> a, since we no longer have the Num
> class; it has been decoupled into separate methods.

For this example, one might also argue that the problem is in fact
that the Num type class is too narrow, and that + should instead be
defined in a parent type class (Monoid comes to mind), together with
0 (which also makes sense for strings, by the way).
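
Indeed, with Monoid the example already works today, no decoupling
needed:

    import Data.Monoid (Monoid, mappend)

    weirdConcat :: Monoid a => a -> a -> a
    weirdConcat x y = x `mappend` (y `mappend` y)

    -- weirdConcat "ab" "cd" == "abcdcd"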

> There is another benefit of this decoupling: it can subsume the
> functionality of multi-parameter type classes (MPTCs). Because the
> methods are no longer grouped, there is no “common” type parameter
> shared by the methods. Thus we can easily have more than one type
> parameter in the individual methods and conveniently use them as
> MPTC methods.

Could you explain this a bit further?
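
For reference, the MPTC formulation that this would subsume looks
like the following in current Haskell (with a hypothetical Convert
class):

    {-# LANGUAGE MultiParamTypeClasses #-}

    -- The method's type mentions two class parameters, which the
    -- class groups together.
    class Convert a b where
      convert :: a -> b

    instance Convert Int Integer where
      convert = fromIntegral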

> Here g is explicitly declared as “overloaded”, although my
> experimental system doesn't need this. Any undefined variable
> inside a function body automatically becomes overloaded. This may
> cause unintended overloading, and it catches bugs late. That's why
> we need the “overload” declarations.

I would definitely argue against treating undefined variables as
overloaded automatically. It seems this will lead to strange errors
if you make typos, for example.

> But the automatic overloading of undefined variables may be useful
> in certain situations. For example, suppose we are going to use
> Haskell as a shell language. Every “command” must be evaluated as
> we type it. If we have mutually recursive definitions, the shell
> will report “undefined variables” whichever way we order the
> functions. The automatic overloading may solve this problem. The
> undefined variables will temporarily exist as automatically
> overloaded functions. Once we actually define a function with the
> same name AND a type satisfying the constraints, they become
> implicit parameters to the function we defined before. If we call a
> function whose implicit parameters are not associated, the shell
> reports an error very similar to Haskell's “type a is not of class
> Num …”

The design you suggest seems to differ from Haskell's current
treatment, where functions can refer to other functions defined
further down in the file but still have them resolved statically?
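
For example, the following is accepted today, and the call to isOdd
is resolved statically even though isOdd is defined further down:

    -- mutually recursive definitions, in either order
    -- (assumes non-negative input)
    isEven :: Int -> Bool
    isEven 0 = True
    isEven n = isOdd (n - 1)

    isOdd :: Int -> Bool
    isOdd 0 = False
    isOdd n = isEven (n - 1)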

> RELATIONSHIP TO DYNAMIC SCOPING
>
> It seems to be helpful to think of the “method calls” as references
> to dynamically scoped variables. They are dispatched depending on
> the bindings we have in the call site's scope (and not the scope
> where the method is defined!). So it is very similar to the
> much-hated dynamic scoping, but the dispatching is more
> disciplined: it doesn't just match the name; it must match both the
> name and the inferred principal type.
>
> This intuition also explains why local instances shouldn't be
> allowed: if we capture the variables at the definition site, the
> method call becomes “statically scoped”.

I don't understand this argument: do you mean that if I write a
function that computes "(4 :: Int) + 4", you don't want to statically
resolve this to a certain (local or not) instance of + for Int that
is in scope at this call site?
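
Concretely, with today's semantics:

    f :: Int
    f = (4 :: Int) + 4
    -- (+) is resolved once, statically, to the Num Int instance in
    -- scope at f's definition site.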

Dominique


Footnotes:
[1]  https://lirias.kuleuven.be/handle/123456789/304985
[2]  http://wiki.portal.chalmers.se/agda/pmwiki.php?n=ReferenceManual.InstanceArguments


