If I have two bytes a and b, how come:

byte c = a & b;

produces a compiler error about casting byte to int? It does this even if I put an explicit cast in front of a and b.

Also, I know about this question, but I don't really know how it applies here. This seems like it's a question of the return type of operator &(byte operand, byte operand2), which the compiler should be able to sort out just like any other operator.
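For concreteness, here is a minimal repro of what I'm seeing (the exact compiler message may differ slightly by version):

byte a = 0x0f;
byte b = 0x11;
byte c = a & b;               // error: cannot implicitly convert type 'int' to 'byte'
byte d = (byte)a & (byte)b;   // same error, even with explicit casts on the operands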
Why do C#'s bitwise operators always return int regardless of the format of their inputs?
I disagree with always. This works, and the result of a & b is of type long:

long a = 0xffffffffffff;
long b = 0xffffffffffff;
long x = a & b;

The return type is not int if one or both of the arguments are long, ulong or uint.
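A short sketch of the same point for the other wide types (the values are arbitrary):

uint u1 = 0xff00ff00, u2 = 0x0ff00ff0;
uint u3 = u1 & u2;                        // uint & uint yields uint, no cast needed
ulong v1 = 0xffffffffffff, v2 = 0x0000ffffffff;
ulong v3 = v1 & v2;                       // ulong & ulong yields ulong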
Why do C#'s bitwise operators return int if their inputs are bytes?
The result of byte & byte is an int because there is no & operator defined on byte. (Source)

An & operator exists for int, and there is also an implicit cast from byte to int, so when you write byte1 & byte2 this is effectively the same as writing ((int)byte1) & ((int)byte2), and the result of this is an int.
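In practice this means you either store the result in an int or cast it back down yourself. A minimal sketch (the variable names are mine):

byte b1 = 0x0f;
byte b2 = 0xf3;
int i = b1 & b2;            // fine: the & is performed on ints, so the result is int
byte b3 = (byte)(b1 & b2);  // cast the int result back down to byte explicitly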
This behavior is a consequence of the design of IL, the intermediate language generated by all .NET compilers. While it supports the short integer types (byte, sbyte, short, ushort), it has only a very limited number of operations on them: load, store, convert, create array, and that's all. This is not an accident; those are the kinds of operations you could execute efficiently on a 32-bit processor, back when IL was designed and RISC was the future.
The binary comparison and branch operations only work on int32, int64, native int, native floating point, object and managed reference. These operands are 32 or 64 bits wide on any current CPU core, ensuring the JIT compiler can generate efficient machine code.
You can read more about it in ECMA-335, Partition I, chapter 12.1 and Partition III, chapter 1.5.
I wrote a more extensive post about this over here.
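A small illustration of the same point, assuming nothing beyond standard C#: the promotion to int happens for all of the short integer types, not just byte, which is consistent with IL only offering 32-bit and 64-bit arithmetic:

short s1 = 100, s2 = 200;
// short s3 = s1 + s2;      // error: the + is computed on ints, so the result is int
int sum = s1 + s2;          // fine: int result assigned to int
object boxed = s1 & s2;     // the boxed value is a System.Int32, not a System.Int16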