[Haskell-cafe] Re: FASTER primes

Daniel Fischer daniel.is.fischer at web.de
Tue Dec 29 14:16:59 EST 2009


On Tuesday 29 December 2009 14:34:03, Will Ness wrote:
> Daniel Fischer <daniel.is.fischer <at> web.de> writes:
> > On Tuesday 29 December 2009 04:38:21, Will Ness wrote:
> > > Now _this_, when tested as interpreted code in GHCi, runs about 2.5x
> > > times faster than Priority Queue based code from Melissa O'Neill's ZIP
> > > package mentioned at the haskellwiki/Prime_Numbers page, with
> > > about half used memory reported, in producing 10,000 to 300,000 primes.
> > >
> > > It is faster than BayerPrimes.hs from the ZIP package too, in the
> > > tested range, at about 35 lines of code in total.
> >
> > That's nice. However, the important criterion is how compiled code (-O2)
>
> fares. Do the relations continue to hold? How does it compare to a
> bitsieve?
>
>
> Haven't gotten to that part yet. :)
>
> But why is it more important?

I thought the uppercase FASTER in the subject meant you were really interested in speed.
If you're only interested in asymptotics, interpreted code may be appropriate.
However, it is possible that optimisation changes the perceived asymptotics of an
algorithm (strictness analysis eliminating thunks, for example).

While I haven't detected that with the primes code, I find that in my ghci your code is
approximately 2.5 times faster than O'Neill's or Bayer's when interpreted (no difference in
scaling observed). When compiled with -O2, however, O'Neill's is approximately three times
as fast as your code and twice as fast as Bayer's as an executable, and about twice as fast
as your code and slightly slower than Bayer's in ghci.
And I have huge memory problems in ghci with your code.
That may be due to my implementation of merge and minus, though. You wrote 'standard' and
I coded the straightforward methods.
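For reference, this is what I mean by the straightforward methods (a sketch of how I coded `merge` and `minus` directly from their descriptions; both assume strictly increasing input lists):

```haskell
-- 'minus xs ys' removes from xs every element that occurs in ys;
-- 'merge xs ys' unites both lists, dropping duplicates.
-- Both walk the two increasing lists in lockstep.
minus :: Ord a => [a] -> [a] -> [a]
minus xs@(x:xt) ys@(y:yt) = case compare x y of
    LT -> x : minus xt ys
    EQ ->     minus xt yt
    GT ->     minus xs yt
minus xs _ = xs

merge :: Ord a => [a] -> [a] -> [a]
merge xs@(x:xt) ys@(y:yt) = case compare x y of
    LT -> x : merge xt ys
    EQ -> x : merge xt yt
    GT -> y : merge xs yt
merge xs ys = xs ++ ys
```

Nothing clever here; if your 'standard' versions differ (e.g. in strictness), that could explain the memory behaviour.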

> Would that not tell us more about the compiler performance than the code itself?

Unless you write machine code or assembly, don't all performance tests tell us more about 
the compiler/interpreter performance than the code itself?
That is, of course, with respect to algorithms with the same scaling behaviour.

>
> This code is just an endpoint (so far) in a short procession of natural
> stepwise development of the famous classic Turner's sieve,

That was

sieve (x:xs) = x:sieve (filter ((/= 0) . (`mod` x)) xs)

, wasn't it?

> through the
> "postponed filters", through to Euler's sieve, the merging sieve (i.e.
> Richard Bird's) and on to the tree-fold merging, with wheel. I just wanted
> to see where the simple "normal" (i.e. _beginner_-friendly) functional code
> can get, in a natural way.

Good.

>
> It's not about writing the fastest code in _advanced_ Haskell. It's about
> having clear and simple code that can be understood at a glance - i.e.
> contributes to our understanding of a problem - faithfully reflecting its
> essential elements, and because of _that_, fast. It's kind of like _not_
> using mutable arrays in a quicksort.

What's wrong with mutable arrays? There are a lot of algorithms which can be easily and 
efficiently implemented using mutable unboxed arrays while a comparably efficient 
implementation without mutable arrays is hard. For those, I consider STUArrays the natural 
choice. Sieving primes falls into that category.
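To illustrate, a minimal sketch of what I have in mind (a plain Eratosthenes sieve over a mutable unboxed Bool array in ST; the name `primesUpTo` is my own):

```haskell
import Control.Monad (forM_, when)
import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray, assocs)

-- Sieve of Eratosthenes on an STUArray: mark every candidate prime,
-- then cross off the multiples of each surviving p, starting at p*p.
primesUpTo :: Int -> [Int]
primesUpTo n = [i | (i, True) <- assocs sieved]
  where
    sieved :: UArray Int Bool
    sieved = runSTUArray $ do
        arr <- newArray (2, n) True            -- all candidates start as prime
        forM_ [2 .. isqrt n] $ \p -> do
            isPrime <- readArray arr p
            when isPrime $                     -- cross off multiples from p*p on
                forM_ [p*p, p*p + p .. n] $ \m ->
                    writeArray arr m False
        return arr
    isqrt = floor . sqrt . (fromIntegral :: Int -> Double)
```

Simple, close to the textbook description, and the mutation stays safely encapsulated behind `runSTUArray`.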

>
> Seeing claims that it's _either_ Turner's _or_ the PQ-based code didn't
> feel right to me somehow,

I fully agree.

> especially the claim that going by primes squares
> is "a pleasing but minor optimization",

Which it is not. It is a major optimisation: it reduces the algorithmic complexity *and*
reduces the constant factors significantly.

> what with the postponed filters
> (which serves as the framework for all the other variants) achieving the
> orders of magnitude speedup and cutting the Turner's O(n^2) right down to
> O(n^1.5) just by doing that squares optimization (with the final version
> hovering around 1.24..1.17 in the tested range). The Euler's sieve being a
> special case of Eratosthenes's, too, doesn't lend credence to claims that
> only the PQ version is somehow uniquely authentic and "faithful" to it.

I never found that claim convincing either.

>
> Turner's sieve should have been always looked at as just a specification,
> not a code, anyway, and actually running it is ridiculous. Postponed
> filters version, is the one to be used as a reference point of the basic
> _code_, precisely because it _does_ use the primes squares optimization,
> which _is_ essential to any basic sieve.
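For reference, the postponed filters version can be sketched like this (my own rendering of the usual formulation, where each prime's filter is postponed until its square is reached):

```haskell
-- Postponed filters: emit candidates below p*p unfiltered, then
-- start p's filter; only primes up to the square root of the
-- current candidate carry an active filter at any point.
primes :: [Int]
primes = 2 : sieve primes [3 ..]
  where
    sieve (p:ps) xs = h ++ sieve ps [x | x <- t, x `mod` p /= 0]
      where (h, t) = span (< p * p) xs
```

Compared with Turner's version above, each filter is set up only when it can actually remove something, which is exactly the squares optimisation under discussion.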
