conor at strictlypositive.org
Fri Sep 2 14:56:01 CEST 2011
On 2 Sep 2011, at 10:55, Jonas Almström Duregård wrote:
> On 31 August 2011 12:22, Conor McBride <conor at strictlypositive.org>
>> I become perplexed very easily. I think we should warn whenever
>> pre-emptive (rather than explicit) hiding is used to suppress an
>> instance, because it is bad --- it makes the meaning of an instance
>> declaration rather more context dependent. Perhaps a design principle
>> should be that to understand an instance declaration, you need only
>> know (in addition) the tower of class declarations above it: it is
>> subtle and worrying to make the meaning of one instance declaration
>> depend on the presence or absence of others.
> Those are all good arguments, and you've convinced me that always
> warning is better.
The question then comes down to whether that warning should ever be
strengthened to an error.
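For concreteness, the kind of declaration at issue might look something like this in the proposed syntax (a sketch only; no released GHC accepts it, and Applicative/Functor are chosen just as a familiar pair):

```haskell
class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
  -- proposed: a default instance for the superclass,
  -- generated for each Applicative instance unless hidden
  instance Functor f where
    fmap g x = pure g <*> x
```

Pre-emptive hiding would mean that a free-standing Functor instance elsewhere silently suppresses this default; explicit hiding would require the client to opt out with a visible annotation at the instance site.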
> First of all, I think the design goal is quite clear: "a class C can
> be re-factored into a class C with a superclass S, without disturbing
> any clients". Requiring clients of C to opt out from the default
> implementation of S is a clear violation of the design goal. So I
> disagree that option 1 can be compatible with the design goal, but
> like you say the design goal might be at fault.
Design goal 1 does not explicitly distinguish the scenarios where S
is pre-existing or being introduced afresh. If the former, it's
inaccurate to describe what's happening as refactoring C, for S is
experiencing some fall-out, too. We should clearly seek more precision,
one way or another.
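To see the cost the design goal is protecting against, here is the refactoring scenario in today's Haskell (S, C, and Wrap are invented names for illustration). Once C acquires the superclass S, the client module below fails to compile until its author adds the S instance by hand:

```haskell
-- Invented library classes: C is being refactored to gain superclass S.
class S a where
  sOp :: a -> a

class S a => C a where
  cOp :: a -> a

-- A pre-existing client instance of C. After the superclass constraint
-- is added above, the client must also supply the S instance, or this
-- module stops compiling.
newtype Wrap = Wrap Int deriving (Show, Eq)

instance S Wrap where
  sOp (Wrap n) = Wrap (n + 1)

instance C Wrap where
  cOp = sOp . sOp
```

A superclass default would let the library supply the S instance once, in the class declaration, instead of asking every client to write it.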
> Also, if I understand you correctly, you say the current situation is
> exceptional, and suggest option 2 as a temporary solution to it. You
> seem convinced that these kinds of situations will not appear in the
> future, but I'm not as optimistic about that.
> Even when superclass defaults are implemented, people will
> occasionally implement classes without realizing that there is a
> suitable intrinsic superclass (or add the superclass but not the
> default instance). People will start using the new class and give
> separate instances for the superclass, and eventually someone will
> point out that there should be a default instance for the
> superclass. Now if option 1 is implemented, the library maintainers
> will be reluctant to add the superclass instance because it will break
> a lot of client code.
I agree that such a scenario is possible. The present situation gives
no choice but to do things badly, but things often get done badly the
first time around anyway. Perhaps I'm just grumpy, but I think we
should aim to make bad practice erroneous where practicable. Once
the mistake is no longer forced upon us, it becomes a mistake that
deserves its penalty in labour. Silent pre-emption is bad practice and
code which relies on it should be fixed: it's not good to misconstrue
an instance declaration because you don't know which instance
declarations are somewhere else. Nonmonotonic reasoning is always a
From a library design perspective, we should certainly try to get these
hierarchical choices right when we add classes. I accept that it should
be cheap to fix mistakes (especially when the mistake is lack of
foresight). Sticking with the warning rather than the error reduces the
price of this particular legacy fix at the cost of tolerating misleading
code. I agree that the balance of this trade-off is with the warning,
for the moment, but I expect it to shift over time towards the error.
But if it's clear what the issue is, then we can at least keep it under
control.
> Will there be a solution to this dilemma that I have missed? Should
> the client code be allowed to opt out from the superclass preemptively
> before it is given a default? Won't that cause a similar perplexity?
I don't know what you mean by this. Perhaps you could expand on it?
All the best
More information about the Glasgow-haskell-users mailing list