Programmatically determining max value of a signed integer type

Tags: c, signed, overflow

This related question is about determining the max value of a signed type at compile-time:

C question: off_t (and other signed integer types) minimum and maximum values

However, I've since realized that determining the max value of a signed type (e.g. time_t or off_t) at runtime seems to be a very difficult task.

The closest thing to a solution I can think of is:

    uintmax_t x = (uintmax_t)1 << (CHAR_BIT * sizeof(type) - 2);
    while ((type)x <= 0) x >>= 1;

This avoids any looping as long as type has no padding bits: x then starts within range, so the loop body never runs, and x is already the largest power of two representable in type, making the maximum value 2*x-1. But if type does have padding bits, the cast invokes implementation-defined behavior, which could be a signal or a nonsensical implementation-defined conversion (e.g. stripping the sign bit).
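For what it's worth, here is that probe wrapped into a complete program and applied to time_t. This assumes time_t is a signed integer type (which the C standard itself does not guarantee), and it is subject to the same implementation-defined caveat if time_t has padding bits:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* Probe for the largest power of two representable in time_t. */
        uintmax_t x = (uintmax_t)1 << (CHAR_BIT * sizeof(time_t) - 2);
        while ((time_t)x <= 0) x >>= 1;
        /* The maximum value is then one less than twice that power. */
        printf("max time_t: %ju\n", 2 * x - 1);
        return 0;
    }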

I'm beginning to think the problem is unsolvable, which is a bit unsettling and would be a defect in the C standard, in my opinion. Any ideas for proving me wrong?

asked Jan 27 '11 by R.. GitHub STOP HELPING ICE

1 Answer

Let's first see how C defines "integer types". Taken from ISO/IEC 9899, §6.2.6.2:

6.2.6.2 Integer types
1
For unsigned integer types other than unsigned char, the bits of the object representation shall be divided into two groups: value bits and padding bits (there need not be any of the latter). If there are N value bits, each bit shall represent a different power of 2 between 1 and 2^(N−1), so that objects of that type shall be capable of representing values from 0 to 2^N − 1 using a pure binary representation; this shall be known as the value representation. The values of any padding bits are unspecified.44)
2
For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways:

— the corresponding value with sign bit 0 is negated (sign and magnitude);
— the sign bit has the value −(2^N) (two’s complement);
— the sign bit has the value −(2^N − 1) (ones’ complement).

Which of these applies is implementation-defined, as is whether the value with sign bit 1 and all value bits zero (for the first two), or with sign bit and all value bits 1 (for ones’ complement), is a trap representation or a normal value. In the case of sign and magnitude and ones’ complement, if this representation is a normal value it is called a negative zero.
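To make the three rules concrete, take an 8-bit signed type with no padding bits, 7 value bits, and the sign bit in the most significant position (all of which is just one possible layout). The object representation 10000001 (sign bit 1, value bits 0000001) then denotes:

  • −1 under sign and magnitude (the value 1, negated);
  • −(2^7) + 1 = −127 under two’s complement;
  • −(2^7 − 1) + 1 = −126 under ones’ complement.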

Hence we can conclude the following:

  • ~(int)0 may be a trap representation, i.e. setting all bits to 1 is a bad idea
  • There might be padding bits in an int that have no effect on its value
  • The order of the bits actually representing powers of two is unspecified; so is the position of the sign bit, if it exists.

The good news is that:

  • there's only a single sign bit
  • there's only a single bit that represents the value 1


With that in mind, there's a simple technique to find the maximum value of an int. Find the sign bit, then set it to 0 and set all other bits to 1.

How do we find the sign bit? Consider int n = 1; it is strictly positive and guaranteed to have only the ones-bit (and possibly some padding bits) set to 1. Then, for every other bit i that is currently 0, set it to 1 and check whether the resulting value is negative. If it isn't, revert the bit to 0 and move on; if it is, we've found the sign bit. A sketch of this probing loop follows.
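Here is a minimal sketch of that probing loop, poking at the object representation through an unsigned char pointer. It assumes that none of the intermediate patterns is a trap representation, which, as the EDIT below concedes, is exactly where the approach falls apart:

    #include <limits.h>
    #include <stddef.h>

    /* Probe each zero bit of n's object representation; the one that
     * makes the value negative is the sign bit. Returns its position
     * (counted from the lowest-addressed byte), or -1 if none found.
     * Assumes no probed pattern is a trap representation. */
    static int find_sign_bit(void)
    {
        int n = 1;
        unsigned char *p = (unsigned char *)&n;

        for (size_t i = 0; i < sizeof n * CHAR_BIT; i++) {
            unsigned char mask = (unsigned char)(1u << (i % CHAR_BIT));
            if (p[i / CHAR_BIT] & mask)
                continue;                /* already 1: the ones-bit */
            p[i / CHAR_BIT] ^= mask;     /* tentatively set bit i */
            if (n < 0)
                return (int)i;           /* value went negative: sign bit */
            p[i / CHAR_BIT] ^= mask;     /* value or padding bit: revert */
        }
        return -1;
    }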

Now that we know the position of the sign bit, we take our int n, set the sign bit to zero and all other bits to 1, and tadaa, we have the maximum possible int value.
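Continuing the sketch, with the sign bit's position in hand (and with the same hedges about trap representations and about padding bits tolerating being set):

    #include <limits.h>
    #include <string.h>

    /* Set every bit of the representation to 1, clear the sign bit,
     * and read the value back: the candidate maximum. */
    static int probe_int_max(int sign_bit)
    {
        int n;
        unsigned char *p = (unsigned char *)&n;

        memset(p, 0xFF, sizeof n);
        p[sign_bit / CHAR_BIT] &= (unsigned char)~(1u << (sign_bit % CHAR_BIT));
        return n;
    }

On a typical two's-complement implementation with no padding bits, probe_int_max(find_sign_bit()) does return INT_MAX; the point of the EDIT below is that the standard never promises the intermediate representations are safe to read.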

Determining the int minimum is slightly more complicated and left as an exercise to the reader.



Note that the C standard humorously doesn't require two different ints to behave the same. If I'm not mistaken, there may be two distinct int objects that have e.g. their respective sign bits at different positions.



EDIT: while discussing this approach with R.. (see comments below), I have become convinced that it is flawed in several ways and, more generally, that there is no solution at all. I can't see a way to fix this posting (except by deleting it), so I'm leaving it unchanged so that the comments below still make sense.

answered Oct 18 '22 by Philip