
Lossless compression in small blocks with precomputed dictionary

I have an application where I am reading and writing small blocks of data (a few hundred bytes) hundreds of millions of times. I'd like to generate a compression dictionary based on an example data file and use that dictionary forever as I read and write the small blocks. I'm leaning toward the LZW compression algorithm. The Wikipedia page (http://en.wikipedia.org/wiki/Lempel-Ziv-Welch) lists pseudocode for compression and decompression. It looks fairly straightforward to modify it such that the dictionary creation is a separate block of code. So I have two questions:

  1. Am I on the right track or is there a better way?
  2. Why does the LZW algorithm add to the dictionary during the decompression step? Can I omit that, or would I lose efficiency in my dictionary?

Thanks.

Update: Now I'm thinking the ideal case would be to find a library that lets me store the dictionary separately from the compressed data. Does anything like that exist?

Update: I ended up taking the code at http://www.enusbaum.com/blog/2009/05/22/example-huffman-compression-routine-in-c and adapting it. I am Chris in the comments on that page. I emailed my modifications back to the blog author, but I haven't heard back yet. The compression rates I'm seeing with that code are not at all impressive. Maybe that is due to the 8-bit tree size.

Update: I converted it to 16 bits and the compression is better. It's also much faster than the original code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace Book.Core
{
  public class Huffman16
  {
    private readonly double log2 = Math.Log(2);

    private List<Node> HuffmanTree = new List<Node>();

    internal class Node
    {
      public long Frequency { get; set; }
      public byte Uncoded0 { get; set; }
      public byte Uncoded1 { get; set; }
      public uint Coded { get; set; }
      public int CodeLength { get; set; }
      public Node Left { get; set; }
      public Node Right { get; set; }

      public bool IsLeaf
      {
        get { return Left == null; }
      }

      public override string ToString()
      {
        var coded = "00000000" + Convert.ToString(Coded, 2);
        return string.Format("Uncoded={0}, Coded={1}, Frequency={2}", (Uncoded1 << 8) | Uncoded0, coded.Substring(coded.Length - CodeLength), Frequency);
      }
    }

    public Huffman16(long[] frequencies)
    {
      if (frequencies.Length != ushort.MaxValue + 1)
      {
        throw new ArgumentException("frequencies.Length must equal " + ushort.MaxValue + 1);
      }
      BuildTree(frequencies);
      EncodeTree(HuffmanTree[HuffmanTree.Count - 1], 0, 0);
    }

    public static long[] GetFrequencies(byte[] sampleData, bool safe)
    {
      if (sampleData.Length % 2 != 0)
      {
        throw new ArgumentException("sampleData.Length must be a multiple of 2.");
      }
      var histogram = new long[ushort.MaxValue + 1];
      if (safe)
      {
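        //Seed every possible 16-bit symbol with a count of 1 so values that never appear in the sample data can still be encoded.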
        for (int i = 0; i <= ushort.MaxValue; i++)
        {
          histogram[i] = 1;
        }
      }
      for (int i = 0; i < sampleData.Length; i += 2)
      {
        histogram[(sampleData[i] << 8) | sampleData[i + 1]] += 1000;
      }
      return histogram;
    }

    public byte[] Encode(byte[] plainData)
    {
      if (plainData.Length % 2 != 0)
      {
        throw new ArgumentException("plainData.Length must be a multiple of 2.");
      }

      Int64 iBuffer = 0;
      int iBufferCount = 0;

      using (MemoryStream msEncodedOutput = new MemoryStream())
      {
        //Write Final Output Size 1st
        msEncodedOutput.Write(BitConverter.GetBytes(plainData.Length), 0, 4);

        //Begin Writing Encoded Data Stream
        iBuffer = 0;
        iBufferCount = 0;
        for (int i = 0; i < plainData.Length; i += 2)
        {
          Node FoundLeaf = HuffmanTree[(plainData[i] << 8) | plainData[i + 1]];

          //How many bits are we adding?
          iBufferCount += FoundLeaf.CodeLength;

          //Shift the buffer
          iBuffer = (iBuffer << FoundLeaf.CodeLength) | FoundLeaf.Coded;

          //Are there at least 8 bits in the buffer?
          while (iBufferCount > 7)
          {
            //Write to output
            int iBufferOutput = (int)(iBuffer >> (iBufferCount - 8));
            msEncodedOutput.WriteByte((byte)iBufferOutput);
            iBufferCount = iBufferCount - 8;
            iBufferOutput <<= iBufferCount;
            iBuffer ^= iBufferOutput;
          }
        }

        //Write remaining bits in buffer
        if (iBufferCount > 0)
        {
          iBuffer = iBuffer << (8 - iBufferCount);
          msEncodedOutput.WriteByte((byte)iBuffer);
        }
        return msEncodedOutput.ToArray();
      }
    }

    public byte[] Decode(byte[] bInput)
    {
      long iInputBuffer = 0;
      int iBytesWritten = 0;

      //Establish Output Buffer to write unencoded data to
      byte[] bDecodedOutput = new byte[BitConverter.ToInt32(bInput, 0)];

      var current = HuffmanTree[HuffmanTree.Count - 1];

      //Begin Looping through Input and Decoding
      iInputBuffer = 0;
      for (int i = 4; i < bInput.Length; i++)
      {
        iInputBuffer = bInput[i];

        for (int bit = 0; bit < 8; bit++)
        {
          if ((iInputBuffer & 128) == 0)
          {
            current = current.Left;
          }
          else
          {
            current = current.Right;
          }
          if (current.IsLeaf)
          {
            bDecodedOutput[iBytesWritten++] = current.Uncoded1;
            bDecodedOutput[iBytesWritten++] = current.Uncoded0;
            if (iBytesWritten == bDecodedOutput.Length)
            {
              return bDecodedOutput;
            }
            current = HuffmanTree[HuffmanTree.Count - 1];
          }
          iInputBuffer <<= 1;
        }
      }
      throw new Exception("Ran out of input before decoding the expected number of bytes.");
    }

    private static void EncodeTree(Node node, int depth, uint value)
    {
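      //Walk the tree, giving each leaf a code equal to its path from the root (left = 0, right = 1) and a code length equal to its depth.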
      if (node != null)
      {
        if (node.IsLeaf)
        {
          node.CodeLength = depth;
          node.Coded = value;
        }
        else
        {
          depth++;
          value <<= 1;
          EncodeTree(node.Left, depth, value);
          EncodeTree(node.Right, depth, value | 1);
        }
      }
    }

    private void BuildTree(long[] frequencies)
    {
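      //Standard Huffman construction: repeatedly join the two lowest-frequency subtrees until a single root remains.
      //The small 'fraction' added to each key keeps the SortedDictionary keys unique when frequencies collide.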
      var tiny = 0.1 / ushort.MaxValue;
      var fraction = 0.0;

      SortedDictionary<double, Node> trees = new SortedDictionary<double, Node>();
      for (int i = 0; i <= ushort.MaxValue; i++)
      {
        var leaf = new Node()
        {
          Uncoded1 = (byte)(i >> 8),
          Uncoded0 = (byte)(i & 255),
          Frequency = frequencies[i]
        };
        HuffmanTree.Add(leaf);
        if (leaf.Frequency > 0)
        {
          trees.Add(leaf.Frequency + (fraction += tiny), leaf);
        }
      }

      while (trees.Count > 1)
      {
        var e = trees.GetEnumerator();
        e.MoveNext();
        var first = e.Current;
        e.MoveNext();
        var second = e.Current;

        //Join smallest two nodes
        var NewParent = new Node();
        NewParent.Frequency = first.Value.Frequency + second.Value.Frequency;
        NewParent.Left = first.Value;
        NewParent.Right = second.Value;

        HuffmanTree.Add(NewParent);

        //Remove the two that just got joined into one
        trees.Remove(first.Key);
        trees.Remove(second.Key);

        trees.Add(NewParent.Frequency + (fraction += tiny), NewParent);
      }
    }

  }

}

Usage examples:

To create the dictionary from sample data:

var freqs = Huffman16.GetFrequencies(File.ReadAllBytes(@"D:\nodes"), true);

To initialize an encoder with a given dictionary:

var huff = new Huffman16(freqs);

And to do some compression:

var encoded = huff.Encode(raw);

And decompression:

var raw = huff.Decode(encoded);
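
For what it's worth, a quick round-trip sanity check of the class above (the sample path is the same hypothetical one as before, the input length must be a multiple of 2, and SequenceEqual needs using System.Linq):

var sample = File.ReadAllBytes(@"D:\nodes");
var freqs = Huffman16.GetFrequencies(sample, true);
var huff = new Huffman16(freqs);
var encoded = huff.Encode(sample);
var decoded = huff.Decode(encoded);
Console.WriteLine("{0} -> {1} bytes", sample.Length, encoded.Length);
Console.WriteLine(decoded.SequenceEqual(sample)); //should print True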

asked Jul 02 '09 by Fantius

2 Answers

The hard part, in my mind, is how you build your static dictionary. You don't want to use the LZW dictionary built from your sample data. LZW wastes a bunch of time learning, since it can't build the dictionary faster than the decompressor can (a token will only be used the second time the compressor sees it, so the decompressor can add it to its dictionary the first time it's seen). The flip side of this is that it adds things to the dictionary that may never get used, just in case the string shows up again (e.g., to have a token for 'stackoverflow' you'll also have entries for 'ac', 'ko', 've', 'rf', etc.).

However, looking at the raw token stream from an LZ77 algorithm could work well. You'll only see tokens for strings seen at least twice. You can then build a list of the most common tokens/strings to include in your dictionary.
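
If you just want something quick to experiment with before writing an LZ77 pass, a brute-force substring count over the sample data gives a rough candidate list (purely illustrative, not the LZ77-token approach above; it assumes the same usings as the code in the question, and maps bytes to chars so substrings can be used as dictionary keys):

//Count how often each short substring occurs in the sample and keep the most common ones.
//minLen, maxLen and keep are arbitrary illustrative parameters.
static List<byte[]> CandidateStrings(byte[] sample, int minLen, int maxLen, int keep)
{
    //Treat the sample as a string of 8-bit chars so substrings can be dictionary keys.
    string text = new string(sample.Select(b => (char)b).ToArray());
    var counts = new Dictionary<string, int>();
    for (int i = 0; i < text.Length; i++)
    {
        for (int len = minLen; len <= maxLen && i + len <= text.Length; len++)
        {
            string key = text.Substring(i, len);
            int count;
            counts.TryGetValue(key, out count);
            counts[key] = count + 1;
        }
    }
    return counts.Where(kv => kv.Value > 1)                               //seen at least twice
                 .OrderByDescending(kv => (long)kv.Value * kv.Key.Length) //rough payoff estimate
                 .Take(keep)
                 .Select(kv => kv.Key.Select(c => (byte)c).ToArray())
                 .ToList();
}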

Once you have a static dictionary, using LZW sans the dictionary update seems like an easy implementation, but to get the best compression I'd consider a static Huffman table instead of the traditional 12-bit fixed-size token (as George Phillips suggested). An LZW dictionary will burn tokens for all the sub-strings you may never actually encode (e.g., if you can encode 'stackoverflow', there will be tokens for 'st', 'sta', 'stac', 'stack', 'stacko', etc.).

At this point it really isn't LZW - what makes LZW clever is how the decompressor can build the same dictionary the compressor used, seeing only the compressed data stream - which is something you won't be using. But all LZW implementations have a state where the dictionary is full and no longer updated; this is how you'd use it with your static dictionary.


answered by Tony Lee


LZW adds to the dictionary during decompression to ensure it has the same dictionary state as the compressor. Otherwise the decoding would not function properly.

However, if you were in a state where the dictionary was fixed, then yes, you would not need to add new codes.
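
To make that concrete, here is a minimal sketch of an LZW-style coder with the dictionary frozen (illustrative only; it assumes the dictionary already contains every single byte as an entry, uses the byte-as-char trick for keys, and needs the usual System.Collections.Generic / System.Linq / System.Text usings):

//dict maps a phrase (bytes stored as chars) to its output code; it is never modified here.
static List<int> EncodeStatic(byte[] data, Dictionary<string, int> dict)
{
    var output = new List<int>();
    string w = "";
    foreach (byte b in data)
    {
        string wb = w + (char)b;
        if (dict.ContainsKey(wb))
        {
            w = wb;                      //keep extending the current match
        }
        else
        {
            output.Add(dict[w]);         //emit the longest match found
            //Classic LZW would add wb to the dictionary here; with a frozen
            //dictionary that step is simply skipped, on both sides.
            w = ((char)b).ToString();
        }
    }
    if (w.Length > 0)
        output.Add(dict[w]);
    return output;
}

//Decoding is trivial because the table never changes: just concatenate the entries.
static byte[] DecodeStatic(IEnumerable<int> codes, string[] table)
{
    var sb = new StringBuilder();
    foreach (int code in codes)
        sb.Append(table[code]);
    return sb.ToString().Select(c => (byte)c).ToArray();
}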

Your approach will work reasonably well, and it's easy to use existing tools to prototype and measure the results. That is, compress the example file alone, and then compress the example and test data together. The size of the latter less the former will be the expected compressed size of a block.
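
For example, a quick way to run that measurement with the framework's deflate support (just a sketch for an estimate; it needs System.IO, System.IO.Compression and System.Linq, and exampleData and block are assumed to be byte[] you already have in memory):

static long DeflatedLength(byte[] data)
{
    using (var ms = new MemoryStream())
    {
        //leaveOpen: true so the MemoryStream can still be read after the
        //DeflateStream is closed (closing it flushes the final bytes)
        using (var ds = new DeflateStream(ms, CompressionMode.Compress, true))
        {
            ds.Write(data, 0, data.Length);
        }
        return ms.Length;
    }
}

//Approximate cost of one block when the example data is shared knowledge:
long exampleOnly = DeflatedLength(exampleData);
long exampleAndBlock = DeflatedLength(exampleData.Concat(block).ToArray());
long estimatedBlockSize = exampleAndBlock - exampleOnly;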

LZW is a clever way to build up a dictionary on the fly and gives decent results. But a more thorough analysis of your typical data blocks is likely to generate a more efficient dictionary.

There's also room for improvement in how LZW represents compressed data. For instance, each dictionary reference could be Huffman encoded to a length closer to optimal, based on the expected frequency of its use. To be truly optimal, the codes should be arithmetic encoded.


answered by George Phillips