FDs and confluence
Claus Reinke
claus.reinke at talk21.com
Wed Apr 12 19:36:02 EDT 2006
Thanks, Tom!
Indeed, for that old thread on an alternative translation, which forked from
an even older thread, I put aside my whole wishlist and focussed simply on
removing what I considered anomalies in the operationalisation of type
classes with FDs via CHR in the old translation. The CHR resulting from
the alternative translation should not allow any new constraints to be
proven; it should only make sure that any FD-based information is fully
used in any results, completing any incomplete deductions in the old
system.
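For readers who don't have the earlier thread at hand, here is roughly
the shape of the original translation for a single-FD class, in the
usual FDs-via-CHRs style (a simplified sketch, not the full general
translation):

    class C a b | a -> b
    instance C Int Bool

    -- original translation, roughly (CHR rules as comments):
    --   C a b, C a c ==> b = c     -- class FD rule: equal domains
    --                              --   force equal ranges
    --   C Int Bool  <=> true       -- instance simplification (reduction)
    --   C Int b     ==> b = Bool   -- instance improvement from the head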
I found it highly irregular that whether or not an FD was respected by the
CHR system resulting from the original translation could depend on the
reduction strategy. The only way for real implementations to work around
this shortcoming would be to iterate improvement before continuing
reduction, which puts additional constraints on the implementer in an
already complex system; and the only way for a formal system based on this
incomplete translation to be safe was to try to cover all the possible
holes with a set of restrictions that backfire by complicating the
language design and making the whole idea of FDs look more tricky than it
needs to be.
There are interesting problems in FDs, but it seems that the confluence
problems were merely problems of the old translation, not anything
inherent in FDs! I had really hoped we had put that phantom to rest.
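To make the "recording" idea quoted below concrete, here is one plausible
shape for a memo-based variant, continuing the example above (a sketch of
the idea only - the Memo_C rules are my guess at the form, not necessarily
the exact rules from the alternative-translation thread):

    -- instead of dropping a reduced constraint, leave a memo of its head:
    --   C Int Bool <=> Memo_C Int Bool    -- reduction records the head
    --   Memo_C a b, C a c ==> b = c       -- FD rule still sees consumed heads
    --   Memo_C a b \ Memo_C a b <=> true  -- drop duplicate memos (simpagation)
    -- the class FD rule C a b, C a c ==> b = c stays as before.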
(Btw, in a real implementation, I wouldn't expect the memo constraints
to enter the constraint store at all - they are just a CHR-based way to
express a memo table, hence the name; as, in fact, I explained before
even starting that alternative-translation thread.)
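As a minimal illustration of that implementation view, the memo table
could be an ordinary finite map from FD-domain types to range types,
consulted during improvement (hypothetical names and a toy Type
representation - nothing here is taken from an actual implementation):

    import qualified Data.Map as Map

    -- toy type representation, just enough for the example
    data Type = TVar String | TCon String | TApp Type Type
      deriving (Eq, Ord, Show)

    -- one table per class and FD: domain type |-> range type
    type Memo = Map.Map Type Type

    -- reducing with an instance head C t1 t2 records t1 |-> t2
    record :: Type -> Type -> Memo -> Memo
    record dom rng = Map.insert dom rng

    -- improvement consults the table instead of a Memo_C store constraint
    improve :: Memo -> Type -> Maybe Type
    improve memo dom = Map.lookup dom memo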
cheers,
claus
>> OK, so you want to modify the usual instance reduction rule by recording
>> functional dependencies from the head. That is, you get confluence by
>> remembering all the choices you had along the way, so you can still take
>> them later.
>>
>> My problem with this scheme is that it makes FDs an essential part of
>> the semantics, rather than just a shortcut, as they are in the original
>> system. With the old system, one can understand instances using a
>> Herbrand model, determined by the instance declarations alone. If the
>> instances respect FDs, types can be made more specific using improvement
>> rules justified by that semantics. With your system, there's only the
>> operational semantics.
>
> The logical semantics of the original CHR program and that of Claus's
> version with the memoization are really the same, I'm sure. The
> justifiable improvements are exactly the same. The main difference is
> that the memoization makes the rules confluent in more cases, i.e. the
> difference is in the operational semantics. For that matter, the
> operational semantics of the confluent program can be considered to
> capture the logical semantics and justifiable improvements more
> completely.
>
>> To understand types, programmers must operate
>> the rewriting system.
>
> Not at all. The confluent program is really more intuitive, because more
> of the logically justifiable improvements are done.
>
>> And the constraint store they must work with will
>> be quite cluttered with all these memoized FDs.
>
> This is more of an implementation issue than a conceptual one, isn't it?
> I can think of a number of optimizations already to reduce the overhead.