can a lazy language give fast code?
D. Tweed
tweed@compsci.bristol.ac.uk
Wed, 31 Jul 2002 09:59:31 +0100 (BST)
On Wed, 31 Jul 2002, Andrew J Bromage wrote:
> Let me clarify what I meant by that and see if you still disagree.
>
> Realistically, _most_ new software installations today (I deliberately
> ignore legacy systems etc) are not overloaded, in that there are more
> "computrons" available than are required to perform the task required
> of them. Of course this is partly because when you do a new
> installation, you add more than you need because you expect to grow.
I don't disagree at all with your conclusion that there are many factors
other than throughput that a programmer wants to know about and trade off
when choosing a language. What I take issue with is justifying this by
saying `almost all' processes are bound by things other than throughput.
That may be true in the average sense, but I don't think every programmer
has almost all of their tasks dominated by something other than raw
throughput; rather, there are sets of programmers whose tasks are all
dominated by the need for something else (robustness, say) and others
whose tasks are all dominated by the need for raw throughput. To an extent
I'm being pedantic, but I do think it's important when re-thinking
benchmarks to recognise that it's a diverse world of programming out
there, and ideally we want programmers to be able to compare languages
using the criteria that matter to them (and some may validly value
throughput), rather than to switch from measuring only one variable
(throughput) to measuring a different variable but still only one.
> Secondly, most non-embedded CPUs in the world are not overloaded
> either. Chances are for a given desktop machine, it spends most of
> its waking hours waiting for the next keystroke or mouse movement.
> Web developers in particular know this: For the typical case, your
> application server runs at the speed of the network.
This is a perfect example of where using an average is pretty misleading:
my desktop machine spends maybe half of its time doing essentially
nothing, since my thinking time as I write programs and papers is long
enough that the text editor, etc, spends most of its time waiting on
input. The other half of the time it's running image processing code which
is essentially CPU bound, so it's running at close to 100% processor
utilisation. But the average (even by one of the robust-statistics
definitions) would say my machine is using about half the processor power
at any given instant. Clearly that isn't what's happening: there are
actually two regimes of operation which it switches between.
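To make that concrete, here's a minimal Haskell sketch (the 50/50 split
and the 0.02/0.98 utilisation figures are made-up illustrative numbers,
not measurements from my machine). The mean of the trace comes out near
0.5, yet not a single sample is anywhere near it:

  -- Hypothetical per-minute CPU-utilisation samples: half the trace is a
  -- nearly idle editing session, the other half CPU-bound image processing.
  samples :: [Double]
  samples = replicate 30 0.02 ++ replicate 30 0.98

  mean :: [Double] -> Double
  mean xs = sum xs / fromIntegral (length xs)

  main :: IO ()
  main = do
    let m = mean samples
    putStrLn ("mean utilisation: " ++ show m)          -- prints 0.5
    -- Count samples within 0.25 of the mean: none, because every
    -- observation sits in one of the two regimes, not in the middle.
    let near = length (filter (\x -> abs (x - m) < 0.25) samples)
    putStrLn ("samples near the mean: " ++ show near)  -- prints 0

So the single summary number describes a state the machine is never
actually in.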
You make very good points in what I've snipped below; again, it's just the
use of `most' in a way that implies (to me) taking an average as
representative of what everyone has to deal with that I `disagree with'.
[snip]
> > Of more
> > concern to me is, when's the last time you actually got a well specified
> > computational problem and a reasonable amount of time to write a carefully
> > crafted program to solve it, (particularly when you had some reassurance
> > that the very specification of what to solve wouldn't change after the
> > first time you ran the code :-) )?
>
> Perhaps the ICFP contests are actually fairer as benchmarks than as
> competitions?
Interesting thought, particularly if the judges announced changes to the
problem to be solved half-way through :-)
___cheers,_dave_________________________________________________________
www.cs.bris.ac.uk/~tweed/ | `It's no good going home to practise
email:tweed@cs.bris.ac.uk | a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250 | Sung In The Snow' -- Winnie the Pooh