[Haskell-cafe] Re: MonadFix

Joost Behrends webmaster at h-labahn.de
Thu Dec 20 19:52:58 EST 2007


apfelmus <apfelmus <at> quantentunnel.de> writes:

> How about separating the candidate prime numbers from the recursion
> 
>    factorize :: Integer -> [Integer]
>    factorize = f primes'
>       where
>       primes' = 2:[3,5..]
>       f (p:ps) n
>          | r == 0    = p : f (p:ps) q
>          | p*p > n   = [n]
>          | otherwise = f ps n
>          where
>          (q,r) = n `divMod` p

Providing an efficient primes' for that is simply impossible when we are
talking about really big numbers, as I did in my post (besides: the test
p < intsqrt n must stay, otherwise you pay for the expensive p*p at every
step). Firstly, there are no fast generators that iterate through the primes
only, and secondly such lists get far too big (for 2^120 you cannot even
begin to build a list of the primes up to 2^x for any appropriate x).

What can be done is to iterate through candidate divisors in a way that
skips as many composites as possible. We could do this:

iterdivisors x | x == 0    = 3
               | x == 1    = 5
               | otherwise = iterdivisors (x-1) + ((cycle [2,4]) !! x)

This gives 7,11,13,17,19,23,25,29,31,35,37,41,43,47,49,53,55,59,61,65,67 ...

i.e. exactly the primes plus those odd numbers whose prime factors are all
greater than 3. We can improve that cycle by also avoiding the multiples of 5:

 ... | otherwise = iterdivisors (x-1) + ((cycle [2,4,2,4,2,4,6,2,6]) !! x)

and we can do better still by also avoiding the multiples of 7 and so on
(the length of these difference lists grows fast - it gets multiplied
by p-1 for every newly avoided prime p - but we could generate those
lists programmatically). We must also make sure that cycle
doesn't eat up memory on each new pass through the list,
and we should use a more efficient representation
for the list of summands than a plain list.
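
Roughly, generating such a cycle together with its starting point could look
like the sketch below (smallPrimes, modulus, start, wheel and candidates are
just names made up for this sketch, nothing from my program):

smallPrimes :: [Integer]
smallPrimes = [2,3,5,7]

modulus :: Integer
modulus = product smallPrimes          -- 210 for [2,3,5,7]

-- first candidate not divisible by any of the small primes (11 here)
start :: Integer
start = head [ k | k <- [2..], all (\p -> k `mod` p /= 0) smallPrimes ]

-- one full period of differences between consecutive numbers that are
-- coprime to all the small primes
wheel :: [Integer]
wheel = zipWith (-) (tail cs) cs
  where cs = [ k | k <- [start .. start + modulus]
                 , all (\p -> k `mod` p /= 0) smallPrimes ]

-- infinite list of candidate divisors: the small primes themselves,
-- then every number coprime to them
candidates :: [Integer]
candidates = smallPrimes ++ scanl (+) start (cycle wheel)

A list like candidates could be fed to the f above in place of primes'. It
still contains composites like 121, of course, but no multiples of 2, 3, 5
or 7, and cycle should not grow with each pass since it builds a circular
list.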

This is the first front.

But the title of my post, and the much more interesting topic
for learning Haskell, is how to avoid memory exhaustion through recursion.
THIS was my intention and the reason why I erroneously brought MonadFix
into the game. The recursion I described as follows

> divisions = do
>    y <- get
>    if divisor y <= bound y then do
>        put ( divstep y )
>        divisions
>        else return y

makes a DESTRUCTIVE UPDATE of the DivIters (by put), and this kind of recursion
does not seem to "remember" itself (as I understand it, that is what
"tail recursion" achieves). I just didn't like turning DivIters into States.
It is kind of lying code.
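
For reference, the loop above operates on something roughly like the
reconstruction below (only divisor, bound and divstep appear in the quote;
the DivIter record with dividend and results, and the crude isqrt, are just
my guesses at the rest):

import Control.Monad.State

data DivIter = DivIter
    { dividend :: Integer     -- the number still to be factored
    , divisor  :: Integer     -- current (odd) candidate divisor
    , bound    :: Integer     -- integer square root of the dividend
    , results  :: [Integer]   -- factors found so far
    }

isqrt :: Integer -> Integer
isqrt = floor . sqrt . fromInteger    -- crude, inexact for huge numbers

divstep :: DivIter -> DivIter
divstep y
    | r == 0    = y { dividend = q, bound = isqrt q,
                      results = divisor y : results y }
    | otherwise = y { divisor = divisor y + 2 }
    where (q, r) = dividend y `divMod` divisor y

divisions :: State DivIter DivIter
divisions = do
    y <- get
    if divisor y <= bound y
        then do put (divstep y)
                divisions
        else return y

-- run e.g. as  execState divisions (DivIter n 3 (isqrt n) [])  after the
-- factors of 2 are divided out; a remaining dividend > 1 is itself prime
-- and has to be appended by the caller.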

However, it worked and improved performance by a factor of around 10
(or infinity - perhaps a plain recursion exhausts 512 MB of memory for 2^120+1,
as it already does for much smaller Integers if they are prime),
not to mention the memory footprint. Compiled to run standalone,
it took 17 minutes; an equivalent Python script took 2 hours.
This factor of about 7 is not fully satisfactory.

There are more workarounds besides the State monad for getting
destructive updates in Haskell: there are IORefs, there are mutable arrays
and more. THAT is my question: is this (at least in its basic approach)
the most efficient way to get them here?
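
To make concrete what I mean by the IORef flavour, a minimal sketch (all
names hypothetical; the list of found factors is left out to keep it short):

import Data.IORef

-- one mutable cell holds (dividend, divisor, bound) and is overwritten
-- on every step instead of threading a DivIter through the State monad
divisionsRef :: IORef (Integer, Integer, Integer) -> IO (Integer, Integer, Integer)
divisionsRef ref = do
    (n, d, b) <- readIORef ref
    if d <= b
        then do let (q, r) = n `divMod` d
                if r == 0
                    then writeIORef ref (q, d, floor (sqrt (fromInteger q)))
                    else writeIORef ref (n, d + 2, b)
                divisionsRef ref
        else readIORef ref

The cell would be created once with newIORef and the loop then only
overwrites it.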

Or is there still a way to get a tail-recursive Haskell function
for iterating through the DivIters (outside any monads)??
I would not be struck dead by surprise if yes, but I couldn't find one.
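
The kind of function I am thinking of would look roughly like this sketch
(hypothetical names; the DivIter fields become ordinary arguments, with bang
patterns so they stay evaluated between iterations):

{-# LANGUAGE BangPatterns #-}

divloop :: Integer -> Integer -> Integer -> [Integer] -> (Integer, [Integer])
divloop !n !d !b facs
    | d > b     = (n, reverse facs)                 -- leftover n is 1 or prime
    | r == 0    = divloop q d (isqrt q) (d : facs)  -- found a factor, keep d
    | otherwise = divloop n (d + 2) b facs          -- next odd candidate
    where
      (q, r) = n `divMod` d
      isqrt  = floor . sqrt . fromInteger           -- crude, sketch only

Compiled with optimizations this should become an ordinary loop running in
constant stack space; whether it behaves as well as the State version for
2^120+1 I have not measured, it is only meant to show the shape.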

Cheers, Joost


