Show, Eq not necessary for Num [Was: Revamping the numeric classes]

Brian Boutel brian@boutel.co.nz
Sat, 10 Feb 2001 14:09:59 +1300


Ketil Malde wrote:
> 
> Brian Boutel <brian@boutel.co.nz> writes:
> 
> > - Having a class hierarchy at all (or making any design decision)
> > implies compromise.
> 
> I think the argument is that we should move Eq and Show *out* of the
> Num hierarchy.  Less hierarchy - less compromise.


Can you demonstrate a revised hierarchy without Eq? What would happen
to Ord, and to the numeric classes that require Eq because they need
signum?
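
For reference, the relevant fragment of the Haskell 98 Prelude is:

    class (Eq a, Show a) => Num a where
        (+), (-), (*) :: a -> a -> a
        negate        :: a -> a
        abs, signum   :: a -> a
        fromInteger   :: Integer -> a

    -- Even where there is no ordering, signum needs to test for zero.
    -- The library's Complex instance, for example, is written
    --     signum 0        = 0
    --     signum z@(x:+y) = x/r :+ y/r  where r = magnitude z
    -- and the match against the literal 0 is an equality test.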


> 
> > - The current hierarchy (and its predecessors) represent a reasonable
> > compromise that meets most needs.
> 
> Obviously a lot of people seem to think we could find compromises that
> are more reasonable.

I would put this differently. "A particular group of people want to
change the language to make it more convenient for their special
interests."

> 
> > - Users have a choice: either work within the class hierarchy and
> > accept the pain of having to define things you don't need in order
> > to get the things that come for free,
> 
> Isn't it a good idea to reduce the amount of pain?

Not always.

> 
> > or omit the instance declarations and work outside the hierarchy. In
> > that case you will not be able to use the overloaded operator
> > symbols of the class, but that is just a matter of concrete syntax,
> > and ultimately unimportant.
> 
> I don't think syntax is unimportant.
>

I wrote that *concrete* syntax is ultimately unimportant, not *syntax*.
There is a big difference. In particular, *lexical syntax*, the choice
of marks on paper used to represent a language element, is not
important, although it does give rise to arguments, as do all matters
of taste and style.

There are not enough usable operator symbols to go round, so they get
overloaded. Mathematicians have overloaded common symbols like (+) and
(*) for concepts that may have some affinity with addition and
multiplication in arithmetic, but which are actually quite different.
That's fine, because, in context, expert human readers can distinguish
what is meant. From a software engineering point of view, though, such
free overloading is dangerous, because readers may assume, incorrectly,
that an operator has properties that are typically associated with
operators using that symbol. This may not matter in a private world
where the program writer is the only person who will see and use the
code, and no mission-critical decisions depend on the results, but it
should not be the fate of Haskell to be confined to such use.

Haskell could have allowed free ad hoc overloading, but one of the first
major decisions made by the Haskell Committee in 1988 was not to do so.
Instead, it adopted John Hughes' proposal to introduce type classes to
control overloading. A symbol could only be overloaded if the whole of a
group of related symbols (the Class) was overloaded with it, and the
class hierarchy provided an even stronger constraint by restricting
overloading of the class operators to cases where other classes,
intended to be closely related, were also overloaded. This tended to
ensure that the new type at which the classes were overloaded had strong
resemblances to the standard types. Simplifying the hierarchy weakens
these constraints and so should be approached with extreme caution. Of
course, the details of the classes and the hierarchy have changed over
the years - there is, always has been and always will be pressure to
make changes to meet particular needs - but the essence is still there,
and the essence is of a general-purpose language, not a domain-specific
language for some branches of mathematics.
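
To see what the mechanism buys, consider overloading (+) at some new
type T (T is an invented example, not anything from the Prelude). The
whole class must come with it, and the superclass constraints pull in
Eq and Show as well:

    data T = T Integer

    instance Num T where            -- must supply the whole group
        T a + T b    = T (a + b)
        T a * T b    = T (a * b)
        negate (T a) = T (negate a)
        abs (T a)    = T (abs a)
        signum (T a) = T (signum a)
        fromInteger  = T

    -- Without these, the Num instance is rejected:
    instance Eq T where
        T a == T b = a == b

    instance Show T where
        show (T a) = "T " ++ show a

Leave out either supporting instance and the program fails to compile.
That is exactly the constraint the hierarchy imposes.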

A consequence of this is that certain uses of overloaded symbols are
inconvenient, because they are too far from the mainstream intended
meaning. If you have such a use, and you want to write in Haskell, you
have to choose other lexical symbols to represent your operators. You
make your choice.
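
For instance (a hypothetical case, with invented names): a type of
intervals supports a componentwise addition, but has no sensible signum
(an interval may contain both signs), so rather than force a partial
Num instance one writes

    data Interval = Interval Double Double

    (|+|) :: Interval -> Interval -> Interval
    Interval a b |+| Interval c d = Interval (a + c) (b + d)

The operation is unchanged; only the concrete symbol differs, which is
the point made above.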

--brian