C/C++ - Convert 24-bit signed integer to float

I'm programming in C++. I need to convert a 24-bit signed integer (stored in a 3-byte array) to a float, normalized to [-1.0, 1.0].

The platform is MSVC++ on x86 (which means the input is little-endian).

I tried this:

float convert(const unsigned char* src)
{
    int i = src[2];
    i = (i << 8) | src[1];
    i = (i << 8) | src[0];

    const float Q = 2.0 / ((1 << 24) - 1.0);

    return (i + 0.5) * Q;
}

I'm not entirely sure, but the results I'm getting from this code seem to be incorrect. Is my code wrong, and if so, why?

asked May 26 '10 by Etienne Dechamps

2 Answers

You are not sign-extending the 24-bit value into the integer; the upper bits will always be zero, so negative inputs are read as large positive values. This code will work no matter what your int size is:

if (i & 0x800000)
    i |= ~0xffffff;

Edit: Problem 2 is your scaling constant. In simple terms, you want to multiply by the new maximum and divide by the old maximum, assuming that 0 remains at 0.0 after conversion.

const float Q = 1.0 / 0x7fffff;

Finally, why are you adding 0.5 in the final conversion? I could understand if you were trying to round to an integer value, but you're going the other direction.

Edit 2: The source you point to has a very detailed rationale for your choices. Not the way I would have chosen, but perfectly defensible nonetheless. My advice about the multiplier still holds, but the maximum is different because of the added 0.5 factor:

const float Q = 1.0 / (0x7fffff + 0.5);

Because the positive and negative magnitudes are the same after the addition, this should scale both directions correctly.
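
Putting the two fixes together, a minimal sketch of the corrected function (assuming int is at least 32 bits, as it is on MSVC/x86, and keeping your +0.5 convention):

float convert(const unsigned char* src)
{
    int i = (src[2] << 16) | (src[1] << 8) | src[0];

    // Sign-extend bit 23 into the upper bits of the int.
    if (i & 0x800000)
        i |= ~0xffffff;

    // After the +0.5 shift, both extremes have magnitude 0x7fffff + 0.5.
    const float Q = 1.0 / (0x7fffff + 0.5);

    return (i + 0.5) * Q;  // maps -0x800000 to -1.0 and 0x7fffff to +1.0
}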

answered by Mark Ransom


Since you are reading the bytes from a char array, it does not follow that the input is little-endian just because the platform is x86; indexing the array yourself makes the byte order independent of the architecture.

Your code is somewhat overcomplicated. A simple solution is to shift the 24-bit data up to scale it to a 32-bit value (so that the machine's natural signed arithmetic will work), and then take a simple ratio of the result to the maximum possible value (which is INT_MAX less 256, because of the vacant lower 8 bits).

#include <limits.h>

float convert(const unsigned char* src)
{
    // Shift the 3 bytes into the top of a 32-bit int so that the 24-bit
    // sign bit lands in the int's sign bit.
    int i = src[2] << 24 | src[1] << 16 | src[0] << 8 ;
    return i / (float)(INT_MAX - 256) ;
}

Test code:

// Pack the low 24 bits of i into a 3-byte little-endian array.
unsigned char* makeS24( unsigned int i, unsigned char* s24 )
{
    s24[2] = (unsigned char)(i >> 16) ;
    s24[1] = (unsigned char)((i >> 8) & 0xff);
    s24[0] = (unsigned char)(i & 0xff);
    return s24 ;
}

#include <iostream>

int main()
{
    unsigned char s24[3] ;

    std::cout << convert( makeS24( 0x800000, s24 )) << std::endl ;  // -1.0
    std::cout << convert( makeS24( 0x7fffff, s24 )) << std::endl ;  //  1.0
    std::cout << convert( makeS24( 0, s24 )) << std::endl ;         //  0.0
    std::cout << convert( makeS24( 0xc00000, s24 )) << std::endl ;  // -0.5
    std::cout << convert( makeS24( 0x400000, s24 )) << std::endl ;  //  0.5

}
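
One portability footnote on the code above: shifting a byte with its top bit set into the sign bit (src[2] << 24 when src[2] >= 0x80) is not fully specified by C, or by C++ standards before C++20, although mainstream compilers, MSVC on x86 included, do what you would expect. For the strictly pedantic, here is a sketch of a variant that does the shifts on an unsigned value first (convert_u is an illustrative name, not part of the answer; it reuses the <limits.h> include above):

float convert_u(const unsigned char* src)
{
    // Shifts on unsigned values are always well defined; converting the
    // result back to int behaves as expected on two's-complement
    // targets such as x86, and is fully specified from C++20 onwards.
    unsigned int u = (unsigned int)src[2] << 24 |
                     (unsigned int)src[1] << 16 |
                     (unsigned int)src[0] << 8 ;
    int i = (int)u ;
    return i / (float)(INT_MAX - 256) ;
}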
answered by Clifford