[Haskell-cafe] Toy compression algorithms [was: A very edgy language]
Bulat Ziganshin
bulat.ziganshin at gmail.com
Thu Jul 12 06:49:10 EDT 2007
Hello ajb,
Thursday, July 12, 2007, 12:16:22 PM, you wrote:
>> well, ppm differs from simple order-0 coder in that it uses previous
>> symbols to calculate probability. lzw differs from order-0 coder with
>> flat static probabilities in that it encodes a whole word each time.
Oh, I just recalled that a flat distribution is conventionally called an
order -1 model. So, starting from the incompressible order -1 model, LZW
and PPM go in opposite directions:
* PPM uses several previous chars to predict one next character
* LZ77/78 predicts (still within the order -1 model) several next chars at once
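To make the contrast concrete, here is a toy Python sketch of the two directions. The function names `ppm_predict` and `lz78_parse` are made up for illustration, not taken from any real codec; a real PPM coder would also do escape handling and arithmetic coding, which are omitted here:

```python
from collections import Counter, defaultdict

def ppm_predict(text, k=2):
    """The PPM direction: use k previous chars (the context) to
    predict ONE next character, by counting which single character
    followed each k-char context so far."""
    ctx = defaultdict(Counter)
    for i in range(k, len(text)):
        ctx[text[i - k:i]][text[i]] += 1
    return ctx

def lz78_parse(text):
    """The LZ direction: greedily grow a dictionary of phrases, so
    each output token covers SEVERAL next chars at once.  Tokens are
    (index of longest known prefix, new character)."""
    dictionary = {"": 0}
    phrase, out = "", []
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch                      # phrase still known, keep growing
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                                # flush a trailing known phrase
        out.append((dictionary[phrase], ""))
    return out

text = "abracadabra abracadabra"
model = ppm_predict(text, k=2)
print(model["ab"].most_common(1))   # after "ab", 'r' has always followed
print(lz78_parse(text))             # tokens cover longer phrases over time
```

Note how the LZ78 tokens get "wider" as the dictionary grows, while the PPM model gets sharper as the context counts accumulate: the same redundancy, exploited along opposite axes.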
If you're interested, there are many algorithms combining both
approaches: LZRW5/ROLZ, LZP, LZCB, LZMA.
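Of these, LZP is perhaps the simplest to sketch: the context predicts *where* to look (the last position this context occurred), and then an LZ-style match grabs several chars at once from there. This is a toy sketch of the idea only, not Bloom's actual LZP (no hashing, no flag coding):

```python
def lzp_encode(data, k=3):
    """Toy LZP-style encoder: order-k context -> last position seen,
    then extend an LZ match from that predicted position."""
    table = {}                       # context string -> last position it occurred
    out, i = [], 0
    while i < len(data):
        ctx = data[max(0, i - k):i]
        pos = table.get(ctx)
        table[ctx] = i
        if pos is not None:
            n = 0                    # extend the predicted match greedily
            while i + n < len(data) and data[pos + n] == data[i + n]:
                n += 1
            if n > 0:
                out.append(("match", n))   # no offset needed: context implies it
                i += n
                continue
        out.append(("lit", data[i]))
        i += 1
    return out

def lzp_decode(tokens, k=3):
    """Decoder replays the same context table, so match positions
    never have to be transmitted."""
    table, out = {}, []
    for tok in tokens:
        i = len(out)
        ctx = "".join(out[max(0, i - k):i])
        pos = table.get(ctx)
        table[ctx] = i
        if tok[0] == "match":
            for j in range(tok[1]):
                out.append(out[pos + j])
        else:
            out.append(tok[1])
    return "".join(out)
```

The interesting property, visible even in this sketch, is that a match token carries only a length and no offset: the order-k context already tells both sides where the match starts, which is exactly the PPM idea grafted onto LZ.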
--
Best regards,
Bulat mailto:Bulat.Ziganshin at gmail.com