Can anyone explain in a simple way the code below?
    public unsafe static float sample()
    {
        int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);
        return *(float*)(&result); // don't know what for... please explain
    }
Note: the above code uses an unsafe method.
For the above code, I'm having a hard time because I don't understand how its return value differs from the return value of:

    return (float)(result);

Is it necessary to use an unsafe function if you're returning *(float*)(&result)?
" " C is a computer programming language. That means that you can use C to create lists of instructions for a computer to follow. C is one of thousands of programming languages currently in use.
In the real sense it has no meaning or full form. It was developed by Dennis Ritchie and Ken Thompson at AT&T bell Lab. First, they used to call it as B language then later they made some improvement into it and renamed it as C and its superscript as C++ which was invented by Dr.
C is a powerful general-purpose programming language. It can be used to develop software like operating systems, databases, compilers, and so on. C programming is an excellent language to learn to program for beginners. Our C tutorials will guide you to learn C programming one step at a time.
On .NET a float is represented as an IEEE binary32 single precision floating point number stored in 32 bits. Apparently the code constructs this number by assembling the bits into an int and then casts it to a float using unsafe. The cast is what in C++ terms is called a reinterpret_cast, where no conversion is done when the cast is performed - the bits are just reinterpreted as a new type.
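To see the bit pattern the shifts produce, here is a minimal sketch (the class and variable names are mine, not from the question) that assembles the integer and prints it in hexadecimal:

    using System;

    class BitAssemblyDemo
    {
        static void Main()
        {
            // Each byte is shifted into its position within the 32-bit integer:
            // 154 (0x9A) -> bits 0-7, 153 (0x99) -> bits 8-15,
            // 25 (0x19) -> bits 16-23, 64 (0x40) -> bits 24-31.
            int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);
            Console.WriteLine(result.ToString("X8")); // prints 4019999A
        }
    }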
The number assembled is 4019999A in hexadecimal, or 01000000 00011001 10011001 10011010 in binary:

- The sign bit is 0 (the number is positive).
- The exponent bits are 10000000 (or 128), resulting in the exponent 128 - 127 = 1 (the fraction is multiplied by 2^1 = 2).
- The fraction bits are 00110011001100110011010 which, if nothing else, almost have a recognizable pattern of zeros and ones.

The float returned has the exact same bits as 2.4 converted to floating point, and the entire function can simply be replaced by the literal 2.4f.

The final zero that sort of "breaks the bit pattern" of the fraction is there perhaps to make the float match something that can be written using a floating point literal?
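That last zero is really just the result of rounding: the fraction of 1.2 is the repeating pattern 0011, and rounding it to 23 bits ends in ...010. You can verify that the assembled bits match those of the literal 2.4f with a small sketch (assuming BitConverter.SingleToInt32Bits is available, i.e. .NET Core 2.0+ or .NET 5+; on older frameworks go through BitConverter.GetBytes and BitConverter.ToInt32 instead):

    using System;

    class BitsOfTwoPointFour
    {
        static void Main()
        {
            int assembled = 154 + (153 << 8) + (25 << 16) + (64 << 24);
            int literalBits = BitConverter.SingleToInt32Bits(2.4f);

            Console.WriteLine(assembled.ToString("X8"));   // 4019999A
            Console.WriteLine(literalBits.ToString("X8")); // 4019999A as well
        }
    }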
So what is the difference between a regular cast and this weird "unsafe cast"?
Assume the following code:
    int result = 0x4019999A;             // 1075419546
    float normalCast = (float)result;
    float unsafeCast = *(float*)&result; // Only possible in an unsafe context
The first cast takes the integer 1075419546 and converts it to its floating point representation, e.g. 1075419546f. This involves computing the sign, exponent and fraction bits required to represent the original integer as a floating point number. This is a non-trivial computation that has to be done.
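A quick sketch of what the ordinary cast yields (the printed text is approximate and depends on runtime and culture; the point is that it is a large number near 1.07E+09, not 2.4):

    using System;

    class NormalCastDemo
    {
        static void Main()
        {
            int result = 0x4019999A;          // 1075419546
            float normalCast = (float)result; // numeric conversion, rounded to the nearest float
            Console.WriteLine(normalCast);    // roughly 1.075E+09
        }
    }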
The second cast is more sinister (and can only be performed in an unsafe context). The &result takes the address of result, returning a pointer to the location where the integer 1075419546 is stored. The pointer dereferencing operator * can then be used to retrieve the value pointed to by the pointer. Using *&result would retrieve the integer stored at that location; however, by first casting the pointer to a float* (a pointer to a float), a float is instead retrieved from the memory location, resulting in the float 2.4f being assigned to unsafeCast. So the narrative of *(float*)&result is: give me a pointer to result, assume the pointer is a pointer to a float, and retrieve the value pointed to by the pointer.
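That narrative can be spelled out with intermediate variables (a sketch with variable names of my own choosing; compile with unsafe code enabled, e.g. the AllowUnsafeBlocks project setting):

    using System;

    class PointerNarrativeDemo
    {
        unsafe static void Main()
        {
            int result = 0x4019999A;

            int* addressOfResult = &result;                  // give me a pointer to result
            float* asFloatPointer = (float*)addressOfResult; // assume it points to a float
            float value = *asFloatPointer;                   // retrieve the value pointed to

            Console.WriteLine(value); // 2.4
        }
    }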
As opposed to the first cast, the second cast doesn't require any computation. It just shoves the 32 bits stored in result into unsafeCast (which fortunately is also 32 bits). In general, performing a cast like that can fail in many ways, but by using unsafe you are telling the compiler that you know what you are doing.
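If you would rather not opt into unsafe at all, the same reinterpretation can be done with safe library calls. A sketch using BitConverter (on .NET Core 2.0+ / .NET 5+ the single call BitConverter.Int32BitsToSingle(result) does the same thing):

    using System;

    class SafeReinterpret
    {
        static float Sample()
        {
            int result = 154 + (153 << 8) + (25 << 16) + (64 << 24);

            // Copy the four bytes of the int and read them back as a float;
            // no unsafe context is required.
            byte[] bytes = BitConverter.GetBytes(result);
            return BitConverter.ToSingle(bytes, 0);
        }

        static void Main() => Console.WriteLine(Sample()); // 2.4
    }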