 

C# Compressing a lot of data blocks fast/efficiently

I have around 270k data block pairs; each pair consists of one 32KiB and one 16KiB block.

When I save them to one file I of course get a very large file. But the data is easily compressed.
After compressing the 5.48GiB file with WinRAR, with strong compression, the resulting file size is 37.4MiB.

But I need random access to each individual block, so I can only compress the blocks individually.
For that I used the DeflateStream class provided by .NET, which reduced the file size to 382MiB (which I could live with).
But the speed is not good enough.

A lot of the speed loss is probably due to always creating a new MemoryStream and DeflateStream instance for each block, but it seems they aren't designed to be reused.
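For reference, the per-block approach described above looks roughly like this (a minimal sketch; DeflateStream and MemoryStream are the standard .NET types, while the class and method names are just illustrative):

    using System.IO;
    using System.IO.Compression;

    static class BlockCodec
    {
        // Compress a single block; a fresh MemoryStream/DeflateStream pair
        // is created per call, which is the per-block overhead mentioned above.
        public static byte[] Compress(byte[] block)
        {
            using (var output = new MemoryStream())
            {
                using (var deflate = new DeflateStream(output, CompressionMode.Compress, true /* leaveOpen */))
                {
                    deflate.Write(block, 0, block.Length);
                }
                return output.ToArray();
            }
        }

        // Decompress back into a buffer of known, fixed size (32KiB or 16KiB here).
        public static byte[] Decompress(byte[] compressed, int originalSize)
        {
            var result = new byte[originalSize];
            using (var input = new MemoryStream(compressed))
            using (var deflate = new DeflateStream(input, CompressionMode.Decompress))
            {
                int offset = 0, read;
                while (offset < originalSize &&
                       (read = deflate.Read(result, offset, originalSize - offset)) > 0)
                {
                    offset += read;
                }
            }
            return result;
        }
    }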

And I guess (much?) better compression could be achieved if a "global" dictionary were used instead of having one for each block.

Is there an implementation of a compression algorithm (preferably in C#) which is suited for that task?

The following link contains the percentage with which each byte value occurs, divided into three block types (32KiB blocks only). The first and third block types each occur 37.5% of the time, the second 25%. Block type percentages

Long file, short story:

Type 1 consists mostly of ones.
Type 2 consists mostly of zeros and ones.
Type 3 consists mostly of zeros.
Values greater than 128 do not occur (yet).

The 16KiB blocks consist almost entirely of zeros.

asked Nov 19 '11 by Arokh


2 Answers

If you want to try a different compression scheme you can start with RLE, which should be suitable for your data - http://en.wikipedia.org/wiki/Run-length_encoding - it will be blazingly fast even in the simplest implementation. The related http://en.wikipedia.org/wiki/Category:Lossless_compression_algorithms contains more links to other algorithms to start from if you want to roll your own or find someone's implementation.
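As a minimal sketch of the idea (not a tuned implementation; it simply encodes runs as (value, length) byte pairs, which doubles the size in the worst case of no runs but collapses the long runs of zeros and ones described in the question):

    using System.Collections.Generic;

    static class Rle
    {
        // Encode as (value, runLength) byte pairs; run length is capped at 255
        // so it fits in a single byte.
        public static byte[] Encode(byte[] input)
        {
            var output = new List<byte>();
            int i = 0;
            while (i < input.Length)
            {
                byte value = input[i];
                int run = 1;
                while (i + run < input.Length && input[i + run] == value && run < 255)
                    run++;
                output.Add(value);
                output.Add((byte)run);
                i += run;
            }
            return output.ToArray();
        }

        // Expand the (value, runLength) pairs back to the original bytes.
        public static byte[] Decode(byte[] input)
        {
            var output = new List<byte>();
            for (int i = 0; i + 1 < input.Length; i += 2)
            {
                for (int j = 0; j < input[i + 1]; j++)
                    output.Add(input[i]);
            }
            return output.ToArray();
        }
    }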

Random comment: "...A lot of the speed loss is probably..." is not the way to solve a performance problem. Measure and see whether it really is.
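Measuring is cheap with System.Diagnostics.Stopwatch. A rough, self-contained sketch (the synthetic mostly-zero blocks are only a stand-in for the real data):

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.IO.Compression;

    static class CompressionBenchmark
    {
        static void Main()
        {
            // Synthetic stand-in for the real data: mostly-zero 32KiB blocks
            // with a few non-zero bytes scattered in.
            var rng = new Random(0);
            var blocks = new byte[1000][];
            for (int i = 0; i < blocks.Length; i++)
            {
                blocks[i] = new byte[32 * 1024];
                for (int j = 0; j < 64; j++)
                    blocks[i][rng.Next(blocks[i].Length)] = 1;
            }

            var sw = Stopwatch.StartNew();
            long inputBytes = 0, outputBytes = 0;
            foreach (var block in blocks)
            {
                using (var ms = new MemoryStream())
                {
                    // Compress one block; swap in whichever compressor is under test.
                    using (var deflate = new DeflateStream(ms, CompressionMode.Compress, true))
                        deflate.Write(block, 0, block.Length);
                    outputBytes += ms.Length;
                }
                inputBytes += block.Length;
            }
            sw.Stop();

            Console.WriteLine("{0:F1} MB/s, ratio {1:P1}",
                inputBytes / 1048576.0 / sw.Elapsed.TotalSeconds,
                (double)outputBytes / inputBytes);
        }
    }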

answered Sep 28 '22 by Alexei Levenkov


Gzip is known to be "fine", meaning its compression ratio is okay and its speed is good. If you want more compression, other alternatives exist, such as 7z.

If you want more speed, which seems to be your objective, a faster alternative will provide a significant speed advantage at the cost of some compression efficiency. "Significant" here means many times faster, such as 5x-10x. Such algorithms are favored for "in-memory" compression scenarios such as yours, since they make accessing the compressed blocks almost painless.

As an example, Clayton Stangeland just released LZ4 for C#. The source code is available under a BSD license: https://github.com/stangelandcl/LZ4Sharp

There are some comparison metrics against gzip on the project homepage, such as:

i5 memcpy 1658 MB/s
i5 Lz4 Compression 270 MB/s Decompression 1184 MB/s  
i5 LZ4C# Compression 207 MB/s Decompression 758 MB/s 49%
i5 LZ4C# whole corpus Compression 267 MB/s Decompression 838 MB/s Ratio 47%
i5 gzip whole corpus Compression 48 MB/s Decompression 266 MB/s Ratio 33%
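Whichever compressor you end up with (DeflateStream, the RLE idea from the other answer, or an LZ4 port such as LZ4Sharp), the random-access requirement boils down to keeping an offset/length index next to the compressed blocks. A minimal sketch, with the compressor abstracted as a pair of delegates so it can be swapped out; all names here are illustrative and not taken from LZ4Sharp's API:

    using System;
    using System.Collections.Generic;
    using System.IO;

    // Writes compressed blocks back-to-back and records (offset, length) per
    // block so any block can be located and decompressed on its own.
    class CompressedBlockStore
    {
        private readonly List<long> offsets = new List<long>();
        private readonly List<int> lengths = new List<int>();
        private readonly Stream data;
        private readonly Func<byte[], byte[]> compress;
        private readonly Func<byte[], byte[]> decompress;

        public CompressedBlockStore(Stream data,
            Func<byte[], byte[]> compress, Func<byte[], byte[]> decompress)
        {
            this.data = data;
            this.compress = compress;
            this.decompress = decompress;
        }

        // Appends one compressed block and returns its index.
        public int Append(byte[] block)
        {
            byte[] packed = compress(block);
            data.Seek(0, SeekOrigin.End);
            offsets.Add(data.Position);
            lengths.Add(packed.Length);
            data.Write(packed, 0, packed.Length);
            return offsets.Count - 1;
        }

        // Reads back and decompresses the block at the given index.
        public byte[] Read(int index)
        {
            var packed = new byte[lengths[index]];
            data.Seek(offsets[index], SeekOrigin.Begin);
            int total = 0, read;
            while (total < packed.Length &&
                   (read = data.Read(packed, total, packed.Length - total)) > 0)
                total += read;
            return decompress(packed);
        }
    }

The index itself is only a handful of bytes per block and would need to be persisted alongside the data file; the delegates can then point at whichever compressor wins the speed/ratio trade-off for you.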

Hope this helps.

answered Sep 28 '22 by Cyan