[Haskell-cafe] Re: Great language shootout: reloaded

Simon Marlow simonmarhaskell at gmail.com
Tue Nov 14 08:31:27 EST 2006


Sebastian Sylvan wrote:
> On 11/10/06, Henk-Jan van Tuyl <hjgtuyl at chello.nl> wrote:
> 
>>
>> On Fri, 10 Nov 2006 01:44:15 +0100, Donald Bruce Stewart
>> <dons at cse.unsw.edu.au> wrote:
>>
>> > So back in January we had lots of fun tuning up Haskell code for the
>> > Great Language Shootout[1]. We did quite well at the time, at one point
>> > ranking overall first[2]. [...]
>>
>> Haskell suddenly dropped several places in the overall score when the
>> size measurement changed from line count to number of bytes after
>> gzipping. Maybe it's worth studying why this is; Haskell programs are
>> often much more compact than programs in other languages, but after
>> gzipping, other languages do much better. One reason I can think of is
>> that for very short programs, the import statements weigh heavily.
> 
> 
> I think the main factor is that languages with large syntactic
> redundancy get that compressed away. I.e. if you write:
> 
> MyVeryLongAndConvolutedClassName myVeryLargeAndConvolutedObject = new
>     MyVeryLongAndConvolutedClassName( someOtherLongVariableName );
> 
> or something like that, it makes the code clumsy and difficult to
> read, but it won't affect the gzipped byte count very much.
> Their current way of measuring is pretty much pointless, since the
> main thing the gzipping does is remove the impact of clunky syntax.
> Measuring lines of code is certainly not perfect, but IMO it's a lot
> more useful as a metric than gzipped bytes.
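
As a quick sanity check of that claim, one can gzip a syntactically 
redundant snippet and a terse one and compare the byte counts before and 
after compression.  A minimal sketch, assuming the zlib package is 
installed; the example strings below are invented for illustration:

    import qualified Codec.Compression.GZip as GZip
    import qualified Data.ByteString.Lazy.Char8 as L

    -- Invented examples: a syntactically redundant snippet and a terse one.
    verbose, terse :: String
    verbose = concat (replicate 10
      "MyVeryLongAndConvolutedClassName obj = new MyVeryLongAndConvolutedClassName(someOtherLongVariableName);\n")
    terse   = concat (replicate 10 "x = f y\n")

    -- Raw and gzipped byte counts for a string.
    sizes :: String -> (Int, Int)
    sizes s = (fromIntegral (L.length bs), fromIntegral (L.length (GZip.compress bs)))
      where bs = L.pack s

    main :: IO ()
    main = do
      print (sizes verbose)   -- large raw size, but the redundancy gzips away
      print (sizes terse)     -- small either way

The verbose version's raw size dwarfs the terse one's, but after gzipping 
the repeated long identifiers compress almost entirely, which is exactly 
the effect being described.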

Sure, since gzip is the metric, we can optimise for that.  For example, 
instead of writing a higher-order function, just copy it out N times 
instantiating the higher-order argument differently each time.  There should be 
no gzipped-code-size penalty for doing that, and it'll be faster :-)
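
To make that concrete in Haskell, a rough sketch with invented names: a 
higher-order function next to its copied-out specialisations.  The copies 
are near-identical text, so gzip compresses them almost for free.

    -- The higher-order original:
    sumBy :: (Int -> Int) -> [Int] -> Int
    sumBy f = foldr (\x acc -> f x + acc) 0

    -- The "optimised for gzip" version: one near-identical copy per
    -- instantiation of the higher-order argument.
    sumSquares, sumDoubles, sumIds :: [Int] -> Int
    sumSquares = foldr (\x acc -> x * x + acc) 0
    sumDoubles = foldr (\x acc -> x * 2 + acc) 0
    sumIds     = foldr (\x acc -> x     + acc) 0

    main :: IO ()
    main = print (sumBy negate [1..10], sumSquares [1..10], sumDoubles [1..10], sumIds [1..10])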

Cheers,
	Simon

