
Interpreting a 32-bit unsigned long as a Single-Precision IEEE-754 Float in C

I am using the XC32 compiler from Microchip, which is based on the standard C compiler.

I am reading a 32-bit value from a device on an RS485 network and storing it in an unsigned long that I have typedef'ed as DWORD.

i.e.

typedef unsigned long DWORD;

As it stands, when I typecast this value to a float, the value I get is basically the floating-point version of its integer representation, not the proper IEEE-754 interpretation.

i.e.

DWORD dword_value = readValueOnRS485();
float temp = (float)dword_value;

Here, dword_value comes through in hex as 0x4366C0C4, which as a decimal integer is 1130807492, so typecasting it to a float simply gives me 1.130807492*10^9, or 1130807492.0, which is not what I want.

I want the single-precision IEEE-754 interpretation, which would give me a float value of 230.75299072265625.
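For reference, breaking 0x4366C0C4 into its IEEE-754 fields shows where that value comes from (a quick sketch of my own, reusing the DWORD typedef above):

DWORD bits = 0x4366C0C4;
unsigned sign     = bits >> 31;           /* 0 (positive)                       */
unsigned exponent = (bits >> 23) & 0xFF;  /* 0x86 = 134, so scale = 2^(134-127) */
unsigned mantissa = bits & 0x7FFFFF;      /* 0x66C0C4 = 6734020                 */
/* value = (1 + 6734020/8388608) * 2^7 = 230.75299072265625 */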

So clearly typecasting to float doesn't work for me; I need a method that can do this conversion for me. I have looked all over the XC32 library and I cannot find anything.

Does anyone know of a predefined method that does this interpretation properly for me? Or is there a suggested way I could write one? I am trying to avoid writing my own code for this specific task, as I am worried I will not find an efficient solution if C already has a function for it.

The interesting thing is, if I sprintf the value into a char*, it is represented there correctly as 230.75:

sprintf(random_char_pointer, "%.2f", dword_value);

Here, printing random_char_pointer to the screen gives me 230.75, so sprintf must be handling the interpretation correctly. Therefore I am assuming there is already something in C for this. Can anyone assist?

Asked Nov 10 '15 by Dino Alves


1 Answer

The recommended way to do stuff like this is to use a union:

union {
    DWORD w;
    float f;
} wordfloat;

wordfloat.w = dword_value;
temp = wordfloat.f;

This does what you want as per ISO 9899:2011 §6.5.2.3 ¶3 footnote 95:

A postfix expression followed by the . operator and an identifier designates a member of a structure or union object. The value is that of the named member,95) and is an lvalue if the first expression is an lvalue. If the first expression has qualified type, the result has the so-qualified version of the type of the designated member.

95) If the member used to read the contents of a union object is not the same as the member last used to store a value in the object, the appropriate part of the object representation of the value is reinterpreted as an object representation in the new type as described in 6.2.6 (a process sometimes called “type punning”). This might be a trap representation.
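For completeness, here is a minimal self-contained sketch of that union approach, adapted to the names used in the question (the dword_to_float helper and the printf check are additions for illustration; this assumes DWORD is 32 bits wide, as it is under XC32):

#include <stdio.h>

typedef unsigned long DWORD;   /* 32 bits on XC32 */

/* Reinterpret the raw 32-bit word as an IEEE-754 single-precision float. */
static float dword_to_float(DWORD w)
{
    union {
        DWORD w;
        float f;
    } u;

    u.w = w;
    return u.f;
}

int main(void)
{
    DWORD dword_value = 0x4366C0C4;  /* example raw value from the question */
    float temp = dword_to_float(dword_value);

    printf("%.14f\n", temp);         /* expected output: 230.75299072265625 */
    return 0;
}

An equivalent, also well-defined alternative (given a 32-bit DWORD) is memcpy(&temp, &dword_value, sizeof temp); most compilers optimize both forms to the same code.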

Answered Nov 14 '22 by fuz