In my C# source code I may have declared integers as:
int i = 5;
or
Int32 i = 5;
In the currently prevalent 32-bit world they are equivalent. However, as we move into a 64-bit world, am I correct in saying that the following will become the same?
int i = 5;
Int64 i = 5;
The long long data type makes handling 64-bit integers easy. In C, an unsigned 32-bit number cannot exceed the value 4,294,967,295. If you are required to handle larger numbers, they need to be stored in 64 bits.
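For illustration, here is a minimal C# sketch (my own example, not part of the original question) showing where the 32-bit unsigned ceiling sits and why a 64-bit type is needed beyond it:

using System;

Console.WriteLine(uint.MaxValue);  // 4294967295, the largest 32-bit unsigned value
long population = 8_000_000_000;   // too large for uint, fits comfortably in a 64-bit long
// uint tooBig = 8_000_000_000;    // would not compile: the constant exceeds the uint range
Console.WriteLine(population);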
Int64 is an immutable value type that represents signed integers with values that range from negative 9,223,372,036,854,775,808 (which is represented by the Int64.MinValue constant) through positive 9,223,372,036,854,775,807 (which is represented by the Int64.MaxValue constant).
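Both limits are exposed as constants in C#; a quick sketch using only standard .NET members:

using System;

Console.WriteLine(long.MinValue);                   // -9223372036854775808
Console.WriteLine(long.MaxValue);                   //  9223372036854775807
Console.WriteLine(long.MaxValue == Int64.MaxValue); // True: long is the C# alias for Int64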
Microsoft C/C++ features support for sized integer types. You can declare 8-, 16-, 32-, or 64-bit integer variables by using the __intN type specifier, where N is 8, 16, 32, or 64.
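C# itself does not use the __intN spelling, but it offers exact analogues whose sizes are fixed by the language specification rather than by the platform. A sketch of the mapping (my own illustration, not from the quoted documentation):

using System;

sbyte i8  = sbyte.MinValue; // 8-bit signed  (System.SByte), like __int8
short i16 = short.MinValue; // 16-bit signed (System.Int16), like __int16
int   i32 = int.MinValue;   // 32-bit signed (System.Int32), like __int32
long  i64 = long.MinValue;  // 64-bit signed (System.Int64), like __int64
Console.WriteLine($"{i8} {i16} {i32} {i64}");
Console.WriteLine($"{sizeof(sbyte)} {sizeof(short)} {sizeof(int)} {sizeof(long)} bytes"); // 1 2 4 8 bytes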
No. The C# specification rigidly defines that int is an alias for System.Int32 with exactly 32 bits. Changing this would be a major breaking change.

The int keyword in C# is defined as an alias for the System.Int32 type and this is (judging by the name) meant to be a 32-bit integer. From the specifications:

CLI specification section 8.2.2 (Built-in value and reference types) has a table with the following: System.Int32 - Signed 32-bit integer

C# specification section 8.2.1 (Predefined types) has a similar table: int - 32-bit signed integral type

This guarantees that both System.Int32 in the CLR and int in C# will always be 32-bit.
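You can verify this yourself. A small sketch (assuming a standard .NET runtime) showing that int stays 32-bit everywhere, while only the pointer size tracks the process bitness:

using System;

Console.WriteLine(sizeof(int));                  // 4 bytes, on 32-bit and 64-bit platforms alike
Console.WriteLine(typeof(int) == typeof(Int32)); // True: two names for the same type
Console.WriteLine(IntPtr.Size);                  // 4 in a 32-bit process, 8 in a 64-bit process

If you want a 64-bit integer, use long (the alias for System.Int64) explicitly; int will never silently widen.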