I don't understand why C# treats the literal 0xFFFFFFFF as a uint when its bit pattern also represents -1 for an int.
The following code was entered into the Immediate Window, shown here with its output:
int i = -1;
-1
string s = i.ToString("x");
"ffffffff"
int j = Convert.ToInt32(s, 16);
-1
int k = 0xFFFFFFFF;
Cannot implicitly convert type 'uint' to 'int'. An explicit conversion exists (are you missing a cast?)
int l = Convert.ToInt32(0xFFFFFFFF);
OverflowException was unhandled: Value was either too large or too small for an Int32.
Why can the hex string be converted without problems, while the literal can only be converted using unchecked?
Why is 0xFFFFFFFF a uint when it represents -1?
Because you're not writing the bit pattern when you write i = 0xFFFFFFFF; you're writing a number by C#'s rules for integer literals. With C#'s integer literals, to write a negative number we write a - followed by the magnitude of the number (e.g., -1), not the bit pattern for what we want. It's really good that we aren't expected to write the bit pattern; it would make it really awkward to write negative numbers. When I want -3, I don't want to have to write 0xFFFFFFFD. :-) And I really don't want to have to vary the number of leading Fs based on the size of the type (0xFFFFFFFFFFFFFFFD for a long -3).
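To make that concrete, here's a minimal sketch (my own demo, not part of the original answer) showing that writing a bit pattern requires reinterpreting it with an unchecked cast, and that the pattern grows with the width of the type:

using System;

class BitPatternDemo
{
    static void Main()
    {
        int a = -3;                                   // sign + magnitude: the normal way
        int b = unchecked((int)0xFFFFFFFD);           // reinterpret the 32-bit pattern
        long c = unchecked((long)0xFFFFFFFFFFFFFFFD); // the 64-bit pattern for a long -3

        Console.WriteLine(a == b); // True
        Console.WriteLine(c);      // -3
    }
}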
The rule for choosing the type of the literal is covered by the C# language specification, which says:
If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
0xFFFFFFFF doesn't fit in an int, which has a maximum positive value of 0x7FFFFFFF, so the next type in the list, uint, is the first it does fit in.
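You can check the rule yourself; a quick sketch (assuming a plain console program) that prints the inferred type of each unsuffixed literal:

using System;

class LiteralTypeDemo
{
    static void Main()
    {
        var a = 0x7FFFFFFF;         // fits in int      -> System.Int32
        var b = 0xFFFFFFFF;         // too big for int  -> System.UInt32
        var c = 0x1FFFFFFFF;        // too big for uint -> System.Int64
        var d = 0xFFFFFFFFFFFFFFFF; // too big for long -> System.UInt64

        Console.WriteLine(a.GetType());
        Console.WriteLine(b.GetType());
        Console.WriteLine(c.GetType());
        Console.WriteLine(d.GetType());
    }
}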
0xffffffff is 4294967295, a UInt32 that just happens to have the same bit pattern as the Int32 -1 due to the way negative numbers are represented on computers (two's complement). Just because they share a bit pattern doesn't mean 4294967295 = -1; they're completely different numbers, so of course you can't trivially convert between the two. You can force a reinterpretation of the bit pattern with an explicit cast to int, which for a constant must be wrapped in unchecked: unchecked((int)0xffffffff).
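A minimal sketch of that cast in context (my own demo); note that the plain cast is rejected for a compile-time constant because constant expressions are evaluated in a checked context:

using System;

class CastDemo
{
    static void Main()
    {
        // int k = (int)0xFFFFFFFF;         // compile error: checked constant conversion
        int k = unchecked((int)0xFFFFFFFF); // reinterprets the bit pattern
        Console.WriteLine(k);               // -1
    }
}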
The C# docs say that the compiler will fit the number you provide into the smallest of those types that can represent it. That doc is a bit old, but it still applies. It always reads an unsuffixed literal as a positive number.
As a fallback you can always coerce the type.
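That also explains the asymmetry in the question: the string overload Convert.ToInt32(s, 16) is documented to treat a base-16 string with the high-order bit set as a two's-complement bit pattern (so "ffffffff" parses to -1), while Convert.ToInt32(uint) range-checks the numeric value 4294967295 and throws. A short sketch (my own demo) contrasting the two, plus the cast fallback:

using System;

class ConvertDemo
{
    static void Main()
    {
        // Hex string: parsed as a 32-bit two's-complement pattern.
        Console.WriteLine(Convert.ToInt32("ffffffff", 16)); // -1

        // uint overload: checks the numeric value, which exceeds Int32.MaxValue.
        try { Console.WriteLine(Convert.ToInt32(0xFFFFFFFF)); }
        catch (OverflowException) { Console.WriteLine("OverflowException"); }

        // Fallback coercion: reinterpret the bits.
        Console.WriteLine(unchecked((int)0xFFFFFFFF)); // -1
    }
}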