Basic math (128 / 8 = 16) says otherwise. I'm kinda disappointed and want some answers, since in my experience that naming convention (type_num_of_bits_t) describes not just the amount of data you can put into the variable, but also a fixed, cross-platform size, and the latter is IMHO even more important. What am I doing wrong?
#include "boost/multiprecision/cpp_int.hpp"
using boost::multiprecision::uint128_t;
...
qDebug() << sizeof(uint128_t);
Output: 24.
I'm using a standard x86-64 CPU and compiling with VS2013 on Windows.
UPDATE: the Boost version is 1.61.
From the cpp_int documentation (Boost 1.61):
When used at fixed precision, the size of this type is always one machine word larger than you would expect for an N-bit integer: the extra word stores both the sign, and how many machine words in the integer are actually in use. The latter is an optimisation for larger fixed precision integers, so that a 1024-bit integer has almost the same performance characteristics as a 128-bit integer, rather than being 4 times slower for addition and 16 times slower for multiplication (assuming the values involved would always fit in 128 bits). Typically this means you can use an integer type wide enough for the "worst case scenario" with only minor performance degradation even if most of the time the arithmetic could in fact be done with a narrower type.
The extra machine word (8 bytes on x86/64) makes the size 24 instead of the expected 16.
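To see that pattern across several fixed widths, here is a minimal sketch (not from the original question; exact values can vary with platform and Boost version). It assumes a typical 64-bit build where a machine word is 8 bytes, so each of Boost.Multiprecision's predefined fixed-width types should report roughly N/8 + 8 bytes:

#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main()
{
    using namespace boost::multiprecision;

    // Each fixed-width cpp_int type is expected to be one machine word
    // (8 bytes on a 64-bit build) larger than the raw bit width suggests.
    std::cout << sizeof(uint128_t)  << "\n";  // expected 128/8  + 8 = 24
    std::cout << sizeof(uint256_t)  << "\n";  // expected 256/8  + 8 = 40
    std::cout << sizeof(uint512_t)  << "\n";  // expected 512/8  + 8 = 72
    std::cout << sizeof(uint1024_t) << "\n";  // expected 1024/8 + 8 = 136
}

On the asker's VS2013/x64 setup the first line corresponds to the 24 observed above.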