[Haskell-cafe] Re: A suggestion for the next high profile Haskell project

ls-haskell-developer-2006 at m-e-leypold.de ls-haskell-developer-2006 at m-e-leypold.de
Tue Dec 19 13:14:09 EST 2006

Tomasz Zielonka <tomasz.zielonka at gmail.com> writes:

> Anyway, don't concentrate on this particular example. All I say is that:
> - sometimes I get efficient programs in Haskell right away (in my case
>   quite often, but YMMV)
> - sometimes efficiency doesn't matter
> I don't think it is contradictory, especially because the two
> "sometimes" can have a non-empty symmetric difference.

I'd like to add that speed/efficiency is rather less important in
"real-world programming" than most people realize. Assume I write a
program for some clients with a total install base of N machines.

Assume I don't optimize.

If the program runs fast enough but could run faster, I'm wasting my
time optimizing. I'd have to charge the licensees / clients more money
without actually delivering a better experience.

Assume now the unoptimized program doesn't run fast enough -- it's
slow enough to seriously affect the user experience and the clients
start grumbling.

Now, something has to be done: either the customers have to upgrade
their machines (I call that the MS experience ;-) or I have to
optimize. The latter also incurs a cost, which finally has to be
factored into the licensing / service fees for the program (if not,
I'd get no revenue from the program and would have to close my
shop :-).

Now let's ask: When is it better to optimize instead of urging the
customer to upgrade?

Obviously when

  Cost(Upgrade) > Cost(Optimisation)

for the customer. Those costs are

  Cost(Upgrade)      = Price of more memory or a new PC that is fast enough.

  Cost(Optimisation) = Hours_spent * Fee_per_Hour / N

(remember: N was the number of licensed Nodes).

So finally for optimisation to be the better decision, we must have:

  Price_New_PC > Hours_spent * Fee_per_Hour / N
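Since we are on a Haskell list: the break-even calculation above can be
written down directly. A back-of-the-envelope sketch (all function names
and the numbers in main are made up for illustration):

```haskell
-- Break-even check for the inequality
--   Price_New_PC > Hours_spent * Fee_per_Hour / N
-- i.e. optimisation pays when the per-customer share of developer
-- time is cheaper than a hardware upgrade.

-- | Per-customer cost of optimisation: hours spent times hourly fee,
-- spread over an install base of n licensed nodes.
optimisationCostPerNode :: Double -> Double -> Double -> Double
optimisationCostPerNode hours fee n = hours * fee / n

-- | True when optimising is the better decision for the customer.
optimisationPays :: Double -> Double -> Double -> Double -> Bool
optimisationPays newPcPrice hours fee n =
  newPcPrice > optimisationCostPerNode hours fee n

main :: IO ()
main = do
  -- 40 hours at 100/hour over 10 customers: 400 per node,
  -- cheaper than a 500 replacement PC, so optimising pays.
  print (optimisationPays 500 40 100 10)
  -- The same job over only 2 customers: 2000 per node, it doesn't.
  print (optimisationPays 500 40 100 2)
```

Note how strongly N dominates: the same 40 hours of work flips from
profitable to unprofitable just by shrinking the install base.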

With hardware prices as cheap as they are and hourly rates not cheap
(thanks to that :-), the only optimizations that pay are those that
are either easy (done quickly) or spread over a large number of
customers / clients.

The same of course applies to using C instead of a type-safe and/or
garbage-collected language: you have to weigh the win against the
increased cost in development time and debugging. It seldom pays. And
when it does, using the FFI from Haskell and optimizing only the hot
spots would perhaps have paid even better.

Generally users are more likely to like smart programs (that take a
bit longer to run but do everything right without requiring constant
interaction) than fast programs that are not so smart. Haskell (FP in
general) is good for writing smart programs; C isn't.

The call for efficiency in constant factors ("C is n to k times faster
than <type-safe garbage-collected language>") is in my opinion almost
always pure snake oil. I consider myself a rather experienced and good
C programmer (though not only C) and have fallen for it often enough. I
regretted it almost every time.

Regards -- Markus

PS: Talking about smart programs: Is there a library anywhere that
    could be used to implement expert systems in Haskell or to
    evaluate Horn clauses in Prolog style?
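[For a sense of what such Prolog-style evaluation could look like in
plain Haskell, here is a toy propositional forward-chainer over Horn
clauses. This is only a sketch of the idea, not a pointer to an
existing library; the Clause type and names are made up:]

```haskell
import Data.List (nub)

-- A propositional Horn clause: the head holds if every atom
-- in the body holds. A fact is a clause with an empty body.
data Clause = Clause { hd :: String, body :: [String] }

-- | Forward chaining to a fixed point: repeatedly fire every
-- clause whose body is already known, until nothing new appears.
consequences :: [Clause] -> [String]
consequences cs = go []
  where
    go known =
      let new = nub [ hd c | c <- cs
                           , all (`elem` known) (body c)
                           , hd c `notElem` known ]
      in if null new then known else go (known ++ new)

main :: IO ()
main = print (consequences
  [ Clause "mortal" ["human"]   -- mortal :- human.
  , Clause "human"  ["greek"]   -- human  :- greek.
  , Clause "greek"  []          -- greek.
  ])
  -- prints ["greek","human","mortal"]
```

Real Prolog-style evaluation is of course backward chaining with
unification over terms with variables, but even a fixed-point loop
like this covers simple rule-base / expert-system experiments.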
