[Haskell-cafe] Toy compression algorithms [was: A very edgy language]
Bulat Ziganshin
bulat.ziganshin at gmail.com
Wed Jul 11 05:34:13 EDT 2007
Hello ajb,
Wednesday, July 11, 2007, 7:55:22 AM, you wrote:
> Not really. LZW is basically PPM with a static (and flat!) frequency
> prediction model. The contexts are built in a very similar way.
What do you mean by "flat" and "static" as applied to PPM? Static PPM
models do exist - they carry the probabilities as a separate table,
much like static Huffman encoding. Is "flat" the same as order-0?
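As a rough sketch of the distinction being asked about (the function names are mine, not from the thread): an order-0 model conditions on no context at all, i.e. it is a single frequency table over the whole alphabet. A *static* model fixes that table in advance of coding, and a *flat* model goes further and gives every symbol equal weight, which is effectively the assumption LZW makes.

```haskell
import qualified Data.Map as Map

-- Static order-0 model: one fixed table counting each symbol's
-- occurrences, with no conditioning context.
order0Counts :: String -> Map.Map Char Int
order0Counts = foldr (\c m -> Map.insertWith (+) c 1 m) Map.empty

-- Flat model: every symbol of the alphabet gets the same weight,
-- regardless of the input.
flatCounts :: [Char] -> Map.Map Char Int
flatCounts alphabet = Map.fromList [(c, 1) | c <- alphabet]
```

A flat table needs no transmission at all, which is one reason LZW carries no explicit model.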
>> the devil is in the details. Just imagine the size of a Huffman
>> table with 64k entries :)
> If you're allowed to pick your code values, you can do this extremely
> compactly. Some compression schemes optimised for English separate
> the words from the intervening spaces and assign a Huffman code to
> each word. The tables aren't that big (though the space to hold the
> words might be).
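The splitting step ajb describes could be sketched like this (a minimal illustration under my own assumptions, not any particular scheme's implementation): break the text into alternating runs of word characters and separator characters, so that the word stream and the spacing stream can each be given their own Huffman table.

```haskell
import Data.Char (isAlpha)
import Data.List (groupBy)

-- Split text into alternating word / separator tokens, grouping
-- adjacent characters that fall in the same class (alphabetic or
-- not).  Each token stream could then be Huffman-coded separately.
tokenize :: String -> [String]
tokenize = groupBy (\a b -> isAlpha a == isAlpha b)
```

For ordinary English prose the separator stream is dominated by a single space, so it compresses to almost nothing, while the word stream has the large-but-skewed distribution the Huffman table exploits.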
Can you give a link? I've never heard of such an algorithm.
--
Best regards,
Bulat mailto:Bulat.Ziganshin at gmail.com