FDs and confluence
Tom.Schrijvers at cs.kuleuven.be
Wed Apr 12 14:13:56 EDT 2006
> On Mon, Apr 10, 2006 at 02:39:18PM +0100, Claus Reinke wrote:
>>> class B a b | a -> b
>>> class C a b c | a -> b
>>> instance B a b => C [a] b Bool
>>> Starting from a constraint set C [a] b Bool, C [a] c d, [...]
>> B a b <=> infer_B a b, memo_B a b.
>> memo_B a b1, memo_B a b2 ==> b1=b2.
>> C a b c <=> infer_C a b c, memo_C a b c.
>> memo_C a b1 c1, memo_C a b2 c2 ==> b1=b2.
>> infer_C [a] b Bool <=> B a b.
>> memo_C [a] b' c' ==> b'=b.
> OK, so you want to modify the usual instance reduction rule by recording
> functional dependencies from the head. That is, you get confluence by
> remembering all the choices you had along the way, so you can still take
> them later.
> My problem with this scheme is that it makes FDs an essential part of
> the semantics, rather than just a shortcut, as they are in the original
> system. With the old system, one can understand instances using a
> Herbrand model, determined by the instance declarations alone. If the
> instances respect FDs, types can be made more specific using improvement
> rules justified by that semantics. With your system, there's only the
> operational semantics.
The logical semantics of the original CHR program and of Claus's memoized
version are really the same, I'm quite sure. The justifiable improvements
are exactly the same. The main difference is that the memoization makes
the rules confluent in more cases, i.e. the difference is in the
operational semantics. If anything, the operational semantics of the
confluent program captures the logical semantics, and the justifiable
improvements, more completely.
> To understand types, programmers must operate
> the rewriting system.
Not at all. The confluent program is really more intuitive, because more
of the logically justifiable improvements are actually performed.
> And the constraint store they must work with will
> be quite cluttered with all these memoized FDs.
This is more of an implementation issue than a conceptual one, isn't it?
I can already think of a number of optimizations to reduce the overhead.
In the above example, the third argument is not involved in the FD, so it
does not have to be remembered, yielding:
memo_C a b1, memo_C a b2 ==> b1=b2.
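To see the optimized rule at work on the running example, here is a small
Haskell sketch (the representation and the names `Memo` and `fdEquations`
are mine, purely illustrative, not any real type-checker machinery):
starting from C [a] b Bool and C [a] c d, the two memo_C constraints share
their first argument, so the rule justifies the equation b = c.

```haskell
-- Toy model of the memo store: each memo_C keeps only the arguments
-- involved in the FD (the determining argument and the determined one).
type Memo = (String, String)

-- memo_C a b1, memo_C a b2 ==> b1 = b2
-- Collect the equations this propagation rule justifies.
fdEquations :: [Memo] -> [(String, String)]
fdEquations memos =
  [ (b1, b2) | (a1, b1) <- memos, (a2, b2) <- memos, a1 == a2, b1 < b2 ]

main :: IO ()
main =
  -- Store from the example, after C [a] b Bool and C [a] c d are split
  -- into their infer_C and memo_C parts:
  print (fdEquations [("[a]", "b"), ("[a]", "c")])  -- prints [("b","c")]
```

Note that the equation is recovered even if one of the two C constraints
has already been reduced via the instance, since its memo_C part stays in
the store; that is exactly the confluence point of the thread.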
Secondly, you only need to keep one copy of a set of identical
constraints. CHR has what is called a simpagation rule:
memo_C a b1 \ memo_C a b2 <=> b1=b2.
which removes the constraint after the backslash. That is valid here
because, once b1=b2 is applied, the removed constraint becomes identical
to the retained one.
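Continuing the toy sketch from above (again, the names `applyEq` and
`simpagate` are mine and purely illustrative): once the equation b = c is
applied, the second memo_C becomes a literal copy of the first, and the
simpagation rule licenses dropping it.

```haskell
import Data.List (nub)

-- Same toy memo store as before: (determining arg, determined arg).
type Memo = (String, String)

-- Apply the equation y := x to one memo constraint.
applyEq :: (String, String) -> Memo -> Memo
applyEq (x, y) (a, b) = (a, if b == y then x else b)

-- memo_C a b1 \ memo_C a b2 <=> b1 = b2
-- After the equation is applied, duplicates are removed (simpagation).
simpagate :: (String, String) -> [Memo] -> [Memo]
simpagate eq = nub . map (applyEq eq)

main :: IO ()
main =
  -- The store from the example shrinks back to a single memo:
  print (simpagate ("b", "c") [("[a]", "b"), ("[a]", "c")])
  -- prints [("[a]","b")]
```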
Moreover, very few people will actually have to look at the constraint
store, I think.
Department of Computer Science
tel: +32 16 327544
e-mail: tom.schrijvers at cs.kuleuven.be