[Haskell-cafe] memoizing with a default
Edward Z. Yang
ezyang at mit.edu
Tue Dec 13 06:26:58 UTC 2016
I've been looking for an elegant, functional implementation of DFA
minimization and I have not found it. If you discover a way to do
it please share.
However, I have seen some papers which might be relevant to the
problem (although not necessarily good for writing a Haskell
implementation).
The CoCaml paper <https://www.cs.cornell.edu/~kozen/papers/cocaml.pdf>
discusses how to write recursive functions on infinite objects, which
would ordinarily not terminate, but in their language are instead
treated as a set of recursive equations and solved by some extralinguistic
constraint solver. This is similar in spirit to your desire to "detect
divergence" when you loop around the memo table.
The Datafun paper <https://people.mpi-sws.org/~neelk/datafun.pdf>
recently reminded me that Datalog-style programming is all about
fixpoints. I wonder if DFA minimization can be done in a Datalog-like
language.
I have no idea if these are actually relevant, and would be interested
to hear if they are.
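Incidentally, the "marker in the memo table" idea from your message below
can be sketched directly. This is only my own sketch, not an established
API: the Entry type, the Memo state monad, the toy DFA, and the monadic
recursive callback (a simplification of the pure signature you wrote) are
all mine, and it does a single pass rather than iterating to convergence:

```haskell
module Main where

import Control.Monad.State
import qualified Data.Map.Strict as Map

-- An entry is either a finished result or a marker saying "this key's
-- computation has started but not finished" -- hitting the marker again
-- means the recursion has looped.
data Entry b = InProgress | Done b

type Memo a b = State (Map.Map a (Entry b))

-- Like memoFix, but a recursive call that would loop returns (dflt a)
-- instead of diverging.  The open-recursive function takes a monadic
-- callback so the memo table can be threaded through.
memoFixWithDefault :: Ord a
                   => (a -> b)
                   -> ((a -> Memo a b b) -> a -> Memo a b b)
                   -> a -> b
memoFixWithDefault dflt f a0 = evalState (go a0) Map.empty
  where
    go a = do
      entry <- gets (Map.lookup a)
      case entry of
        Just (Done v)   -> return v
        Just InProgress -> return (dflt a)       -- looped: use the default
        Nothing         -> do
          modify (Map.insert a InProgress)
          v <- f go a
          modify (Map.insert a (Done v))
          return v

-- A toy two-letter DFA (my own example): states 0 and 1 step to each
-- other and are both non-final, so they should be equivalent; state 2
-- is final, so it differs from both.
chars :: [Char]
chars = "ab"

isFinal :: Int -> Bool
isFinal s = s == 2

step :: Int -> Char -> Int
step 0 _ = 1
step 1 _ = 0
step _ _ = 2

-- Two states differ if exactly one is final, or some character leads
-- them to differing states; a loop defaults to False ("assume same").
different :: Int -> Int -> Bool
different a b = memoFixWithDefault (const False) diff (a, b)
  where
    diff rec (x, y) = do
      subs <- mapM (\c -> rec (step x c, step y c)) chars
      return (isFinal x /= isFinal y || or subs)

main :: IO ()
main = do
  print (different 0 1)  -- False: the states are equivalent
  print (different 0 2)  -- True
```

One caveat: caching a value that was computed via the default is only
safe when the default agrees with the fixpoint you want. Here, defaulting
to "same" matches the standard minimization algorithm's starting
assumption (a greatest-fixpoint reading); for other f's you might need to
iterate rather than cache on the first pass.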
Excerpts from Michael George's message of 2016-12-10 11:50:03 -0500:
> I've come across a slight variant of memoization and I'm curious
> whether there is a better way to think about it (I have yet to reach
> Haskell Enlightenment), or whether it's been thought about before.
> I'm writing a program that does DFA minimization, as described here:
> For me, the simplest way to understand the algorithm is that we form
> an equivalence relation, where two states are different if either one
> is final and the other isn't, or if there are transitions from them to
> states that are different:
> different a b =
>      isFinal a && not (isFinal b)
>   || isFinal b && not (isFinal a)
>   || exists chars (\c -> different (step a c) (step b c))
> This definition diverges; it never decides that two states are the same.
> The standard algorithm builds a table, starting with the assumption
> that all states are the same, and iteratively applying this definition
> until it converges.
> I'm thinking of that algorithm as a memoized version of the
> implementation of different above. But it's not quite:
> different = memoFix2 different' where
>   different' different'' a b =
>        isFinal a && not (isFinal b)
>     || isFinal b && not (isFinal a)
>     || exists chars (\c -> different'' (step a c) (step b c))
> for the same reason; it diverges. But with memoization I can detect the
> divergence by putting a marker in the memo table. If the computation
> makes a recursive call on an input that is started but not computed,
> then (in this case) I want to return false.
> In general, I think what I want is
> memoFixWithDefault :: (a -> b) -> ((a -> b) -> (a -> b)) -> (a -> b)
> Where (memoFixWithDefault default f) is like (memoFix f) except that if
> (memoFix f a) would loop, then (memoFixWithDefault default f a) = default a.
> I suspect there's a nice denotational way of describing this, but my
> denotational semantics is rusty (maybe it's the least fixed point of f
> that is greater than g or something, but that's not it). I suspect that
> there needs to be a nice relationship between f and the default g for
> memoFixWithDefault g f to be sensible. I also suspect that there's a
> cute 3-line implementation like all the other implementations of memoize
> out there, but I don't see it. I also suspect that depth-first search
> has a cute implementation using this combinator.
> So I was wondering if other people had thoughts or references before I
> went off and thought about it.