 

Using GPU to speed up BigInteger calculations

I am almost done with an algorithm that processes some very large integers (on the order of 2 raised to the power 100,000,000). It takes a couple of hours of highly parallel code on a 16-core server with more than adequate memory, since the algorithm is not memory-intensive. I make use of the BigInteger class in .NET 4.

The specifics of the algorithm are not important but for context, following is a pretty exhaustive list of operations performed on these integers and some salient features of the algorithm:

  • Addition / Subtraction.
  • Multiplication of large numbers by small numbers.
  • Division of large numbers by very small numbers (e.g. 2).
  • Base 2 Log.
  • Base 2 Power.
  • Comparison of two or more large numbers (Min/Max).
  • No involvement whatsoever of prime numbers.
  • The algorithm is specifically designed not to be memory intensive since the performance hit of memory access is more than that of some smart on-the-fly calculations. Nevertheless, if memory access were to improve, the algorithm could benefit reasonably.

I have optimized the code as much as possible and profiling now shows only two bottlenecks:

  • Calculating base 2 Log for such large numbers.
  • Checking for pre-defined patterns of binary digits in these numbers. This is because the only way to access a BigInteger's underlying data is via ToByteArray; there are no in-place operations. Also, operating on byte-sized chunks does not help performance.

Considering memory access and Log operations, I started thinking about GPUs and whether I could offload some of the work effectively. I know very little about GPUs except that they are optimized for floating point operations.

My question is, using a library like GPU .NET, how can I process such large numbers on the GPU? Can I somehow make use of the floating point optimizations to calculate Log for such large numbers?

Looking for a starting point to form a strategy.

asked Aug 17 '12 by Raheel Khan

2 Answers

I am looking around for GPU work in C# and am considering Tidepowerd.com GPU.NET and CUDAfy.NET. Both are Nvidia-specific, and CUDAfy did not (yet) support Mono when I last checked. But they both allow reasonably normal-looking C# code that runs on the GPU.

Also, did you consider using a third-party library? There are several very good BigInteger libraries, including open-source ones. GMP is very good and free: http://gmplib.org/. There is at least one C# wrapper (which I have no experience with): http://www.emilstefanov.net/Projects/GnuMpDotNet/

The BigInteger class in .NET is immutable, and in my experience that is not handy. If you have two integers of your size (around 100MB each), the Add operation results in a third 100MB BigInteger. It could be done much faster if you could, for instance, modify one of the two originals in place.

C = A + B means allocating 100MB for C (this is what BigInteger does)
A = A + B means you no longer have the original A, but a much faster calculation
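To illustrate the in-place idea, here is a minimal sketch of addition performed directly on the little-endian byte arrays that ToByteArray returns. The helper name and the assumption that the target buffer is large enough to absorb the final carry are mine, not part of BigInteger or any library:

```csharp
using System;

static class InPlaceMath
{
    // Adds b into a. Both arrays are little-endian magnitudes, as
    // BigInteger.ToByteArray produces for non-negative values. The result
    // overwrites a, so no third 100MB buffer is allocated. Assumes a is
    // long enough that no carry remains after the last byte.
    public static void AddInPlace(byte[] a, byte[] b)
    {
        int carry = 0;
        for (int i = 0; i < a.Length; i++)
        {
            int sum = a[i] + (i < b.Length ? b[i] : 0) + carry;
            a[i] = (byte)(sum & 0xFF);
            carry = sum >> 8;
        }
        if (carry != 0)
            throw new OverflowException("a is too short to hold the result.");
    }
}
```

This is essentially what GMP does when you pass the same variable as destination and operand, which is why a wrapped GMP can beat the immutable .NET class at this size.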
answered Oct 20 '22 by IvoTops


If anyone finds it helpful, here is a log base 2 implementation for BigInteger that is much faster than using the built-in functions.

private static BigInteger LogBase2(BigInteger num) {
    if (num <= BigInteger.Zero)
        return BigInteger.MinusOne; // does not support zero or negative values
    BigInteger i = BigInteger.Zero;
    // Count how many times num can be halved: the floor of log2(num).
    while (!(num >>= 1).IsZero)
        i++;
    return i;
}
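Note that each shift in that loop allocates a fresh BigInteger, so the loop as a whole is quadratic in the number's size. For numbers with 100 million bits, a single pass over the bytes from ToByteArray gives the same floor-of-log2 answer in linear time. A sketch (the class and method names are mine):

```csharp
using System;
using System.Numerics;

static class BigIntLog
{
    // Floor of log2(num): locate the most significant non-zero byte,
    // then the highest set bit within it. One pass over the byte array,
    // versus one full-copy shift per bit in the loop version.
    public static BigInteger LogBase2(BigInteger num)
    {
        if (num <= BigInteger.Zero)
            return BigInteger.MinusOne; // zero/negative not supported

        byte[] bytes = num.ToByteArray(); // little-endian
        int msb = bytes.Length - 1;
        while (bytes[msb] == 0)  // skip a zero byte padded on to keep the sign positive
            msb--;

        int bit = 7;
        while ((bytes[msb] & (1 << bit)) == 0)
            bit--;

        return msb * 8 + bit;
    }
}
```

On recent .NET versions, BigInteger also exposes a bit-length query that makes this a one-liner, but on .NET 4 the byte-array scan above is about as direct as it gets.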
answered Oct 20 '22 by MathuSum Mut