
Compression library using Nvidia's CUDA [closed]

Does anyone know a project which implements standard compression methods (like Zip, GZip, BZip2, LZMA,...) using NVIDIA's CUDA library?

I was wondering whether algorithms that can make use of a lot of parallel tasks (like compression) wouldn't run much faster on a graphics card than on a dual- or quad-core CPU.

What do you think about the pros and cons of such an approach?

Asked by Xn0vv3r, Jan 19 '09


2 Answers

We have finished the first phase of our research into increasing the performance of lossless data compression algorithms. Bzip2 was chosen for the prototype; our team optimized only one operation, the Burrows–Wheeler transform, and got a 2x-4x speed-up on well-compressible files. The code runs faster on all our tests.

We are going to complete the bzip2 work and then support Deflate and LZMA for some real-life tasks, such as HTTP traffic and backup compression.

blog link: http://www.wave-access.com/public_en/blog/2011/april/22/breakthrough-in-cuda-data-compression.aspx
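
For readers unfamiliar with the transform being accelerated, here is a minimal host-side sketch of the forward Burrows–Wheeler transform: the naive rotation-sorting version, purely illustrative, and not the blog's GPU code (which would use a parallel suffix sort instead):

    // Naive forward Burrows-Wheeler transform: sort all rotations of the
    // input and emit the last column. Real implementations avoid
    // materializing rotations by suffix sorting, but the output is the same.
    #include <algorithm>
    #include <numeric>
    #include <string>
    #include <vector>

    std::string bwt_naive(const std::string& s) {
        const size_t n = s.size();
        std::vector<size_t> rot(n);
        std::iota(rot.begin(), rot.end(), 0);          // rotation start offsets

        // Order the rotations lexicographically, comparing characters lazily.
        std::sort(rot.begin(), rot.end(), [&](size_t a, size_t b) {
            for (size_t i = 0; i < n; ++i) {
                char ca = s[(a + i) % n], cb = s[(b + i) % n];
                if (ca != cb) return ca < cb;
            }
            return false;
        });

        std::string last(n, '\0');
        for (size_t i = 0; i < n; ++i)
            last[i] = s[(rot[i] + n - 1) % n];         // last column of each rotation
        return last;
    }

    // Example: bwt_naive("banana") == "nnbaaa" -- equal characters cluster
    // together, which is what makes bzip2's later move-to-front and
    // Huffman stages effective.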

Answered by Alexander Azarov, Nov 07 '22


I'm not aware of anyone having done that and made it public. Just IMHO, it doesn't sound very promising.

As Martinus points out, some compression algorithms are highly serial. Block compression algorithms like LZW can be parallelized by coding each block independently. Zipping a large tree of files can be parallelized at the file level.
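
As a sketch of that coarse-grained parallelism (a hedged example assuming zlib is available; the 1 MB chunk size and one-thread-per-chunk layout are arbitrary choices for illustration):

    // Deflate independent chunks of a buffer in parallel on CPU threads.
    // The chunks share no state, which is what makes this trivially parallel;
    // error handling and stitching the compressed chunks back together are
    // omitted for brevity.
    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>
    #include <zlib.h>

    struct Chunk {
        std::vector<unsigned char> out;  // compressed bytes
        uLongf outLen = 0;               // actual compressed size
    };

    std::vector<Chunk> compress_chunks(const unsigned char* data, size_t size,
                                       size_t chunkSize = 1 << 20) {
        size_t nChunks = (size + chunkSize - 1) / chunkSize;
        std::vector<Chunk> chunks(nChunks);
        std::vector<std::thread> workers;

        for (size_t i = 0; i < nChunks; ++i) {
            workers.emplace_back([&, i] {
                size_t off = i * chunkSize;
                uLong inLen = static_cast<uLong>(std::min(chunkSize, size - off));
                chunks[i].outLen = compressBound(inLen);
                chunks[i].out.resize(chunks[i].outLen);
                // Each compress2 call is self-contained, so no locking is needed.
                compress2(chunks[i].out.data(), &chunks[i].outLen,
                          data + off, inLen, Z_DEFAULT_COMPRESSION);
                chunks[i].out.resize(chunks[i].outLen);
            });
        }
        for (auto& t : workers) t.join();
        return chunks;
    }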

However, none of these is really SIMD-style parallelism (Single Instruction Multiple Data), and they're not massively parallel.

GPUs are basically vector processors, where you can be doing hundreds or thousands of ADD instructions all in lock step, and executing programs where there are very few data-dependent branches.
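
The textbook example of that lock-step style is a plain vector add: every thread runs the same instruction on a different element, and the only branch is a uniform bounds check.

    // SIMD-style data parallelism in CUDA: thousands of threads execute the
    // same ADD on different elements, with no data-dependent branching.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                   // uniform bounds check, not data-dependent
            c[i] = a[i] + b[i];
    }

    // Launch with one thread per element, e.g. for device pointers d_a, d_b, d_c:
    //   int threads = 256;
    //   int blocks  = (n + threads - 1) / threads;
    //   vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);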

Compression algorithms in general sound more like an SPMD (Single Program Multiple Data) or MIMD (Multiple Instruction Multiple Data) programming model, which is better suited to multicore CPUs.

Video compression algorithms can be accelerated by GPGPU processing like CUDA only to the extent that a very large number of pixel blocks are being cosine-transformed or convolved (for motion detection) in parallel, and the IDCT or convolution subroutines can be expressed with branchless code.
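
As an illustrative sketch of that pattern (a naive kernel, not any particular codec's implementation), an 8x8 forward DCT can map one thread block to each pixel tile and one thread to each output coefficient, with fixed-bound, branchless inner loops:

    // One thread block per 8x8 pixel tile, one thread per DCT coefficient.
    // Assumes width and height are multiples of 8; the loops have fixed
    // bounds and contain no data-dependent branches.
    #define PI_F 3.14159265358979323846f

    __global__ void dct8x8(const float* image, float* coeffs, int width) {
        __shared__ float tile[8][8];

        int u = threadIdx.x, v = threadIdx.y;                // output frequency
        int tileX = blockIdx.x * 8, tileY = blockIdx.y * 8;  // tile origin

        // Stage the tile in shared memory: each thread loads one pixel.
        tile[v][u] = image[(tileY + v) * width + (tileX + u)];
        __syncthreads();

        float cu = (u == 0) ? 0.70710678f : 1.0f;            // 1/sqrt(2)
        float cv = (v == 0) ? 0.70710678f : 1.0f;

        float sum = 0.0f;
        for (int x = 0; x < 8; ++x)
            for (int y = 0; y < 8; ++y)
                sum += tile[y][x]
                     * __cosf((2 * x + 1) * u * PI_F / 16.0f)
                     * __cosf((2 * y + 1) * v * PI_F / 16.0f);

        coeffs[(tileY + v) * width + (tileX + u)] = 0.25f * cu * cv * sum;
    }

    // Launch: dim3 grid(width / 8, height / 8), block(8, 8);
    //   dct8x8<<<grid, block>>>(d_image, d_coeffs, width);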

GPUs also like algorithms that have high numeric intensity (the ratio of math operations to memory accesses). Algorithms with low numeric intensity (like adding two vectors) can be massively parallel and SIMD, but still run slower on the GPU than on the CPU because they're memory bound.
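
A back-of-envelope calculation for the vector-add case makes this concrete (the bandwidth figure is an assumed round number, not a specific GPU's spec):

    Intensity of c[i] = a[i] + b[i] with 32-bit floats:
      bytes moved per element : 4 (load a) + 4 (load b) + 4 (store c) = 12
      math ops per element    : 1 add
      intensity               : 1 / 12 ≈ 0.08 flops per byte
    With an assumed ~500 GB/s of memory bandwidth, the kernel tops out around
    500 / 12 ≈ 40 Gflop/s, far below the card's arithmetic peak, so it stays
    memory bound no matter how many threads are launched.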

Answered by Die in Sente, Nov 07 '22