[Haskell-cafe] Parser left recursion
wren ng thornton
wren at freegeek.org
Sun Feb 24 23:52:22 CET 2013
On 2/24/13 7:56 AM, Kim-Ee Yeoh wrote:
> On Sun, Feb 24, 2013 at 7:47 PM, Roman Cheplyaka <roma at ro-che.info> wrote:
>
>> Or perhaps you meant that the production itself, when interpreted as a
>> definition, is corecursive?
>
> I was merely thrown off by your mention of "well-founded" and the assertion
> that you're left with a "strictly smaller" input. I don't see any of this.
But the problem here really is an issue of well-foundedness rather than
productivity. Consider the grammar:

    A ::= epsilon | A B
    B ::= ...whatever...
In order to use this grammar as-is, when processing a string we must be
able to magically determine how many times to recurse in order to get our
epsilon. This is magic because the correct number of times to recurse is
defined by the rest of the string--- which we haven't looked at!
Proof-theoretically speaking, this is the same as magically knowing
when to use the inductive hypothesis vs. when to bottom out at a base
case. Using this
grammar as-is, the recursive descent parser always decides to use the
inductive hypothesis. Hence, infinite loop.
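To make the loop concrete, here is a minimal sketch using a toy
list-of-successes parser type (the names Parser, pB, and pA are
hypothetical, not from any particular library, and B is fixed to the
single character 'b' purely for illustration):

    import Control.Applicative (Alternative (..))

    -- A parser returns every (result, remaining input) pair it finds.
    newtype Parser a = Parser { runParser :: String -> [(a, String)] }

    instance Functor Parser where
      fmap f p = Parser $ \s -> [ (f x, s') | (x, s') <- runParser p s ]

    instance Applicative Parser where
      pure x    = Parser $ \s -> [(x, s)]
      pf <*> px = Parser $ \s ->
        [ (f x, s'') | (f, s')  <- runParser pf s
                     , (x, s'') <- runParser px s' ]

    instance Alternative Parser where
      empty   = Parser $ const []
      p <|> q = Parser $ \s -> runParser p s ++ runParser q s

    -- B ::= 'b' (any concrete B would do).
    pB :: Parser Char
    pB = Parser $ \s -> case s of
      ('b':rest) -> [('b', rest)]
      _          -> []

    -- A ::= A B | epsilon, transcribed literally; counts the Bs.
    pA :: Parser Int
    pA = ((\n _ -> n + 1) <$> pA <*> pB) <|> pure 0

Running runParser pA "bbb" never returns: the first thing the left
alternative does is run pA again on the same, un-shrunk input.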
It should be apparent that this isn't an issue with right-recursion
(when reading the string from left to right), because on
right-recursion (e.g., A ::= B A) we always consume some of the string
before recursing, and therefore the input string is getting smaller on
each recursive call, and therefore it is guaranteed that we will
eventually run out of string, at which point we can backtrack as
necessary. The problem with left-recursion is that we never know when
to stop recursing. If we stop at some arbitrary point, well, maybe the
parse will end up failing when it could've succeeded if only we had
recursed a bit further. But recursive descent doesn't allow the kind
of backtracking that would be required to exhaust the search space for
left-recursion.
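For completeness, one standard way out is to eliminate the left
recursion: A ::= A B | epsilon derives exactly a sequence of Bs, so it
can be re-expressed right-recursively as A ::= B A | epsilon, or
parsed directly by iteration. Continuing the sketch above:

    -- Zero or more occurrences of p, consuming input as we go.
    many' :: Parser a -> Parser [a]
    many' p = ((:) <$> p <*> many' p) <|> pure []

    -- A ::= A B | epsilon, without the left recursion.
    pA' :: Parser Int
    pA' = length <$> many' pB

Now runParser pA' "bbb" terminates: every recursive call to many'
happens only after p has succeeded, and pB only succeeds by consuming
a character, so the remaining input shrinks and the recursion is
well-founded.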
--
Live well,
~wren