Since C# supports SByte, Int16, Int32 and Int64, why did the designers of the language choose to define int as an alias for Int32 instead of allowing it to vary depending on what the native architecture considers to be a word?
I have not had any specific need for int to behave differently than it does; I am asking purely out of encyclopedic interest.
I would think that a 64-bit RISC architecture could conceivably exist that most efficiently supports only 64-bit quantities, and on which manipulations of 32-bit quantities require extra operations. Such an architecture would be at a disadvantage in a world where programs insist on using 32-bit integers, which is another way of saying that C#, becoming the language of the future and all, essentially prevents hardware designers from ever coming up with such an architecture.
StackOverflow does not encourage speculative answers, so please answer only if your information comes from a dependable source. I have noticed that some members of SO are Microsoft insiders, so I was hoping that they might be able to enlighten us on this subject.
Note 1: I did in fact read all the answers and comments on SO: Is it safe to assume an int will always be 32 bits in C#? but did not find any hint as to the why that I am asking about in this question.
Note 2: the viability of this question on SO is (inconclusively) discussed here: Meta: Can I ask a “why did they do it this way” type of question?
I believe that their main reason was portability of programs targeting the CLR. If a type as basic as int were allowed to be platform-dependent, writing portable programs for the CLR would become a lot more difficult. The proliferation of typedef-ed integral types in platform-neutral C/C++ code, written to paper over the variability of the built-in int, is an indirect hint as to why the designers of the CLR decided on making the built-in types platform-independent. Discrepancies like that are a big inhibitor to the "write once, run anywhere" goal of execution systems based on VMs.
Edit: More often than not, the size of an int plays into your code implicitly through bit operations rather than through arithmetic (after all, what could possibly go wrong with i++, right?). The resulting errors are usually subtle. Consider the example below:
const int MaxItem = 20;
var item = new MyItem[MaxItem];
for (int mask = 1; mask != (1 << MaxItem); mask++)
{
    var combination = new HashSet<MyItem>();
    for (int i = 0; i != MaxItem; i++)
    {
        if ((mask & (1 << i)) != 0)
        {
            combination.Add(item[i]);
        }
    }
    ProcessCombination(combination);
}
This code computes and processes all combinations of 20 items. As you can tell, the code fails miserably on a system with a 16-bit int, but works fine with ints of 32 or 64 bits.
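If int could be as narrow as 16 bits, the mask would have to be held in an explicitly sized type instead. Below is a minimal sketch (not part of the original answer) of the same loop written against a 64-bit mask, reusing the placeholder MyItem and ProcessCombination from the example above:

const int MaxItem = 20;
var item = new MyItem[MaxItem];
// Use an explicitly 64-bit mask so the shift cannot overflow a narrower int.
for (long mask = 1; mask != (1L << MaxItem); mask++)
{
    var combination = new HashSet<MyItem>();
    for (int i = 0; i != MaxItem; i++)
    {
        if ((mask & (1L << i)) != 0)
        {
            combination.Add(item[i]);
        }
    }
    ProcessCombination(combination);
}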
Unsafe code provides another source of headaches: when int is fixed at some size (say, 32 bits), code that allocates 4 times as many bytes as the number of ints it needs to marshal will work, even though it is technically incorrect to use 4 in place of sizeof(int). Moreover, this technically incorrect code remains portable!
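As an illustration of that point (the helper name below is made up for the example), compare the two ways of sizing an unmanaged buffer for marshaling:

using System;
using System.Runtime.InteropServices;

static class NativeBuffer
{
    // Allocates unmanaged memory for 'count' ints to be marshaled to native code.
    public static IntPtr Allocate(int count)
    {
        // Correct: let the compiler supply the size of int.
        return Marshal.AllocHGlobal(count * sizeof(int));

        // Technically incorrect, but behaves identically because int is fixed at 32 bits:
        // return Marshal.AllocHGlobal(count * 4);
    }
}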
Ultimately, small things like that play heavily into the perception of a platform as "good" or "bad". Users of .NET programs do not care whether a program crashes because its programmer made a non-portable mistake or because the CLR is buggy. This is similar to the way early versions of Windows were widely perceived as unstable due to the poor quality of drivers. To most users, a crash is just another .NET program crash, not a programmer's issue. Therefore it is good for the perception of the ".NET ecosystem" to make the standard as forgiving as possible.
Many programmers have a tendency to write code for the platform they use. This includes assumptions about the size of a type. There are many C programs around which will fail if the size of an int were changed to 16 or 64 bits, because they were written under the assumption that an int is 32 bits. The choice made for C# avoids that problem by simply defining it as such. If you define int as variable depending on the platform, you buy back into that same problem. Although you could argue that it's the programmer's fault for making wrong assumptions, it makes the language a bit more robust (IMO). And for desktop platforms, a 32-bit int is probably the most common occurrence. Besides, it makes porting native C code to C# a bit easier.
Edit: I think you write code which makes (implicit) assumptions about the size of a type more often than you realize. Basically anything which involves serialization (like .NET remoting, WCF, serializing data to disk, etc.) will get you in trouble if you allow variable sizes for int, unless the programmer takes care of it by using a specifically sized type like Int32. And then you end up with "we'll always use Int32 just in case" and you have gained nothing.
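For instance, BinaryWriter bakes the width of int directly into the on-disk format; if int could vary between platforms, a writer and a reader would silently disagree about the layout. A small sketch (the file name and values are arbitrary):

using System;
using System.IO;

class SerializationSketch
{
    static void Main()
    {
        // BinaryWriter.Write(int) always emits exactly 4 bytes, because int is always Int32.
        using (var writer = new BinaryWriter(File.Create("items.bin")))
        {
            writer.Write(12345); // 4 bytes
            writer.Write(67890); // 4 bytes
        }

        // A reader on any other platform can rely on the same 4-byte layout.
        using (var reader = new BinaryReader(File.OpenRead("items.bin")))
        {
            Console.WriteLine(reader.ReadInt32()); // 12345
            Console.WriteLine(reader.ReadInt32()); // 67890
        }
    }
}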