Can I use a 128-bit integer in MSVC++? [duplicate]

I'm coding a C++/MFC application using Visual Studio 2010, and I need to maintain a cumulative running total that will be used to calculate an average transfer rate, like so:

//Let's assume that INT128 is a 128-bit integer type
static INT128 iRunningTotal = 0;
static INT128 iCounter = 0;

LONGLONG iteration_get_current_average(LONGLONG iRate)
{
    //May be called repeatedly...
    iRunningTotal += iRate;
    iCounter++;

    //Calculate the current average
    return iRunningTotal / iCounter;
}

I searched for "C++ 128-bit integer" and pretty much everywhere people suggest using the Boost library. Well, that's a possibility, but I'm not familiar with it and don't use it anywhere else in my project.

So Boost aside, I'm curious, is there a way to do this with pure C/C++?

asked May 21 '14 by c00000fd

2 Answers

I shall leave aside the question of whether it's a good idea, or whether the physical quantity you're measuring could even in theory ever exceed a value of 2^63, or 10^19 or thereabouts. I'm sure you have your reasons. So what are your options in pure C/C++?

The answer is: not many.

  • 128-bit integers are not part of any standard, nor are they supported by the compilers I know of.
  • A 64-bit double will give you the dynamic range (10^308 or so). An excellent choice if you don't need exact answers. Unfortunately, once the total is large enough, adding one to it won't change it (see the sketch after this list).
  • An 80-bit double is natively supported by the floating-point processor, which gives you a 63-bit mantissa together with the extended dynamic range.

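To make the precision caveat concrete, here is a minimal sketch (my own illustration, not part of the answer): a 64-bit double has a 53-bit significand, so once the running total reaches 2^53, adding one to it is silently lost.

#include <cstdio>

int main()
{
    //2^53 is the largest integer a double can count past in steps of one
    double total = 9007199254740992.0;
    double bumped = total + 1.0; //rounds straight back to 2^53

    std::printf("unchanged: %s\n", (total == bumped) ? "yes" : "no"); //prints "yes"
    return 0;
}
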
So, how about roll-your-own 128 bit integer arithmetic? You would really have to be a masochist. It's easy enough to do addition and subtraction (mind your carries), and with a bit of thought it's not too hard to do multiplication. Division is another thing entirely. That is seriously hard, and the likely outcome is bugs similar to the Pentium bug of the 1990s.
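If you do decide to roll your own, the addition side really is manageable; here is a rough sketch of the "mind your carries" part (my own, assuming an unsigned running total is acceptable):

#include <cstdint>

struct UInt128
{
    std::uint64_t lo;
    std::uint64_t hi;
};

//Add a 64-bit value into the 128-bit accumulator, propagating the carry
inline void add64(UInt128 &acc, std::uint64_t value)
{
    std::uint64_t oldLo = acc.lo;
    acc.lo += value;
    if (acc.lo < oldLo) //unsigned wrap-around means the low word overflowed
        ++acc.hi;
}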

You could probably accumulate your counters in two (or more) 64 bit integers without much difficulty. Then convert them into doubles for the calculations at the end. That shouldn't be too hard.
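For instance (again my own sketch, not the answer's code), if the total lives in two 64-bit words where the high word counts how often the low word wrapped, the average at the end is just a couple of double conversions:

#include <cstdint>

//Assumes counter > 0; totalHi counts wrap-arounds of totalLo
double average_from_split_total(std::uint64_t totalHi,
                                std::uint64_t totalLo,
                                std::uint64_t counter)
{
    const double two64 = 18446744073709551616.0; //2^64 as a double
    double total = static_cast<double>(totalHi) * two64
                 + static_cast<double>(totalLo);
    return total / static_cast<double>(counter);
}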

After that I'm afraid it's off to library shopping. Boost you mentioned, but there are much more specialised libraries around, such as cpp-bigint.
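For what it's worth, if you do end up pulling in Boost, Boost.Multiprecision ships a fixed-width 128-bit type that drops straight into the question's code. A minimal sketch, assuming Boost is installed and on the include path:

#include <boost/multiprecision/cpp_int.hpp>

using boost::multiprecision::int128_t;

static int128_t iRunningTotal = 0;
static int128_t iCounter = 0;

long long iteration_get_current_average(long long iRate)
{
    iRunningTotal += iRate;
    ++iCounter;

    int128_t average = iRunningTotal / iCounter;
    return average.convert_to<long long>();
}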

Not surprisingly, this question has been asked before and has a very good answer: Representing 128-bit numbers in C++.

answered Oct 07 '22 by david.pfx

Let's try to compute the point at which your numbers could become large enough to overflow 64-bit numbers.

Let's assume you're doing your measurements once per microsecond. At a rate of 1 million increments per second, it'll take 2^64 / 1,000,000 seconds for a 64-bit number to overflow. That works out to over half a million years of counting. Even if you increase the rate to once per nanosecond, it would still take well over 500 years.

For the running total, you could (theoretically) run out a little sooner. If, for example, you had 100 Gigabit Ethernet, and had it running at maximum theoretical bandwidth all the time, you'd run out in (a little) less than 47 years.

If you limit yourself to technologies most of us can actually afford, about the fastest transfer rates most of us deal with are to/from SSDs. Assuming you had drives that could handle it, the most recent SATA Express specification supports transfers at up to 16 Gb/s. You'd need to saturate that 24/7 for well over 200 years before you used up the full range of a 64-bit integer.
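The arithmetic behind those estimates is easy to check for yourself; here is a quick sketch (my own, using the rates from the paragraphs above) that prints the years until a 64-bit counter overflows:

#include <cstdio>

int main()
{
    const double two64 = 18446744073709551616.0; //2^64
    const double secondsPerYear = 365.25 * 24.0 * 3600.0;

    const double rates[] = {
        1.0e6,  //one count per microsecond
        12.5e9, //100 Gb/s Ethernet, counting bytes
        2.0e9   //16 Gb/s SATA Express, counting bytes
    };

    for (int i = 0; i < 3; ++i)
        std::printf("%12g per second -> %10.1f years to overflow\n",
                    rates[i], two64 / rates[i] / secondsPerYear);
    return 0;
}
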

Hmm...maybe we should look at main memory. Let's assume 4 channels of the fastest DDR4 memory yet specified, and (as always) the grossly unrealistic assumption that you'll keep it operating at maximum theoretical bandwidth 24/7. With this, you could still count all transfers to and from memory for over 4 years at a time before you'd be in any danger of a 64-bit integer overflowing.

Of course, you could try to overclock the CPU and RAM to get there a little faster, but that would probably be a losing game--anything more than a very modest overclock will probably reduce the life expectancy of the parts, so the machine would probably die before the 64-bit integer overflowed.

Bottom line: Your need for 128-bit integers seems questionable at best.

answered Oct 07 '22 by Jerry Coffin