Penguin

A brief history of LZ Compression

Back in the 1970s, the amount of data that people wanted to store started outstripping the media they had to store it on.

Two computer scientists, A. Lempel and J. Ziv, published a paper called "A Universal Algorithm for Sequential Data Compression" in the IEEE Transactions on Information Theory, May 1977. It documented an algorithm subsequently referred to as LZ77.

The basis for the LZ series of algorithms is the building of a dictionary of common phrases in a file. Instead of outputting characters, phrase numbers in this dictionary can be output, achieving compression when you can output the number 1832 (11 bits) instead of the string "foobarbaz" (72 bits).

The decompressor needs nothing more than this sequence of output and a knowledge of how the dictionary is built to be able to build the dictionary itself, and therefore decompress the file.

LZ77 is a window-based scheme: for each string it compresses, it outputs an offset back into the input (go back N characters), a match length (copy that many characters), and then the first mismatching character.
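The (offset, length, next character) output described above can be sketched in Python. This is a naive, illustrative search, not how a real compressor scans the window, and the unbounded triple format is an assumption for clarity:

```python
def lz77_compress(data: bytes, window: int = 4096):
    """Naive LZ77 sketch: emit (distance, length, next_char) triples,
    where distance says how far back the match starts, length how many
    bytes to copy, and next_char is the first mismatching byte."""
    i, out = 0, []
    while i < len(data):
        best_len, best_dist = 0, 0
        # Search back through the window for the longest match.
        for j in range(max(0, i - window), i):
            length = 0
            # Stop one byte early so a mismatching character always exists.
            while (i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_dist = length, i - j
        out.append((best_dist, best_len, data[i + best_len]))
        i += best_len + 1
    return out

# "ABAB": two literals, then "copy 1 byte from 2 back, then B"
print(lz77_compress(b"ABAB"))  # [(0, 0, 65), (0, 0, 66), (2, 1, 66)]
```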

Then, in 1978, they came up with a better algorithm (dubbed LZ78) that only required the compressor to output a phrase number and a mismatching character. The phrase extended with the mismatching character would then be added to the dictionary.

LZ77/78 encoding remained the de-facto standard for data compression (along with some cool stuff like HuffmanCoding; that's how PKZIP works) until 1984, when Terry Welch published "A Technique for High-Performance Data Compression" in Computer magazine. He realised that with a dictionary that already contained all the possible characters, you wouldn't have to output the mismatching character as in LZ78; you could output the phrase number that matched it. This algorithm is called LZW encoding (Lempel, Ziv, Welch).

How it works

The algorithm is really quite simple, and will be described here.

Compression

  • Start with a dictionary that contains all possible characters.
  • P refers to the previous phrase (initially empty).
  • Read a character, C.
  • Is the string P+C in the dictionary? (It is - P is empty so P+C = C)
  • Output the code number for string C (C.num) to the LZW file
  • Set the previous string P = C.
  • While there are still characters remaining

    • Read a character, C
    • Is the string P+C present in the dictionary?

      • If yes, set P to P+C and keep going (to make a longer string of matches)
      • If no,

        1. output the code number for P (P.num) to the LZW file
        2. add the string P+C to the dictionary
        3. Set P = C
  • (we have run out of characters) output the code word for the string P (P.num) to the LZW file and end.

You'll note that the first stage can be built into the while loop; I put it at the top to make it absolutely clear what to do.
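The compression steps above can be sketched in Python. The 256 single-byte "root" entries and the integer code list are assumptions for clarity; a real LZW file packs the codes into variable-width bit fields:

```python
def lzw_compress(data: bytes) -> list[int]:
    """LZW compression following the steps above. Codes 0-255 are the
    single-byte 'root' entries; new phrases are numbered from 256."""
    dictionary = {bytes([i]): i for i in range(256)}  # all possible characters
    next_code = 256
    p = b""   # previous phrase P, initially empty
    out = []
    for byte in data:
        c = bytes([byte])
        if p + c in dictionary:
            p = p + c                      # keep extending the match
        else:
            out.append(dictionary[p])      # output the code number for P
            dictionary[p + c] = next_code  # add P+C to the dictionary
            next_code += 1
            p = c                          # set P = C
    if p:
        out.append(dictionary[p])          # out of characters: flush P
    return out

print(lzw_compress(b"ABABABA"))  # [65, 66, 256, 258]
```

Note how the repeated "AB" and "ABA" phrases come out as the single codes 256 and 258.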

Decompression

  • Add all the possible characters to the dictionary as in the compressor.
  • Read a code C.num from the LZW file. As this is the first character that was compressed, it is one of the 'root' characters that were added to the dictionary in the first step.
  • Output the character C.
  • Set P (previous string) to C.
  • While there are codes remaining in the LZW file

    • Read a code C.num
    • Is there a string C matching C.num in the dictionary?

      • If yes:

        1. Output the string C
        2. Add the string P + the first character of C to the dictionary
        3. Set P = C
      • If no:

        1. Add the string P + the first character of P to the dictionary
        2. Output the string you just added
        3. Set P to that string

The second case looks a bit strange; it is basically the case where you have a repeating phrase in your input file, so the code the compressor output is the newest code it added to the dictionary. Therefore, if you receive a code you haven't got yet, the string it stands for must be the previous string extended with its own first character.
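The decompression loop, including that special case, can also be sketched in Python (assuming the dictionary was seeded with the same 256 single-byte root entries as on the compression side):

```python
def lzw_decompress(codes: list[int]) -> bytes:
    """LZW decompression following the steps above."""
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    it = iter(codes)
    p = dictionary[next(it)]   # first code is always a root character
    out = [p]
    for code in it:
        if code in dictionary:
            c = dictionary[code]           # normal case
        else:
            c = p + p[:1]                  # code not yet known: P + first char of P
        out.append(c)
        dictionary[next_code] = p + c[:1]  # add P + first char of C
        next_code += 1
        p = c
    return b"".join(out)

print(lzw_decompress([65, 66, 256, 258]))  # b'ABABABA'
```

Code 258 hits the special case: it isn't in the dictionary when it arrives, so the decompressor reconstructs it as "AB" + "A".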

The easiest way to understand LZW is by looking at some examples. I won't reproduce them here, but you can find some (and a really good description of the algorithm) at http://www.rasip.fer.hr/research/compress/algorithms/fund/lz/lzw.html.


Part of CategoryAlgorithm