I have float32 numbers (let's say positive) in numpy arrays, and I want to convert them to fixed-point numbers with a predefined number of bits to reduce precision.
For example, the number 3.1415926 becomes 3.25 in MATLAB using the function num2fixpt. The command is num2fixpt(3.1415926, sfix(5), 2^(1 + 2 - 5), 'Nearest', 'on'), which specifies 3 bits for the integer part and 2 bits for the fractional part.
Can I do the same thing using Python?
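To make the example concrete, my understanding of that call is that the quantization step is 2^(1 + 2 - 5) = 0.25 (2 fractional bits), so the conversion amounts to rounding to the nearest multiple of 0.25. A quick sanity check in Python (my own illustration, not num2fixpt output):

step = 2.0 ** (1 + 2 - 5)               # 0.25, i.e. 2 fractional bits
print(round(3.1415926 / step) * step)   # prints 3.25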
You can do it if you understand how IEEE floating-point notation works. Basically you'll need to convert to a Python long, do bitwise operations, then convert back. For example:
import struct

# Render a 64-bit integer as a bit string, most significant bit first.
long2bits = lambda L: ("".join([str(int(1 << i & L > 0)) for i in range(64)]))[::-1]
# Reinterpret a double's raw bytes as an unsigned 64-bit integer, and back.
double2long = lambda d: struct.unpack("Q", struct.pack("d", d))[0]
long2double = lambda L: struct.unpack("d", struct.pack("Q", L))[0]
# Parse a bit string back into an integer.
bits2long = lambda z: sum([int(z[i] == '1') * 2 ** (len(z) - i - 1) for i in range(len(z))])
double2bits = lambda d: long2bits(double2long(d))
bits2double = lambda b: long2double(bits2long(b))
>>> pi = 3.1415926
>>> double2bits(pi)
'0100000000001001001000011111101101001101000100101101100001001010'
>>> bits2long('1111111111111111000000000000000000000000000000000000000000000000')
18446462598732840960L
>>> double2long(pi)
4614256656431372362
>>> long2double(double2long(pi) & 18446462598732840960L)
3.125
>>>
def rshift(x, n=1):
    # Shift x right n times, filling the vacated leftmost bit with a 1
    # (Python's >> fills with 0), so the mask keeps covering the sign bit.
    while n > 0:
        x = 9223372036854775808L | (x >> 1)
        n -= 1
    return x
>>> L = bits2long('1'*12 + '0'*52)
>>> L
18442240474082181120L
>>> long2double(rshift(L,0) & double2long(pi))
2.0
>>> long2double(rshift(L,1) & double2long(pi))
3.0
>>> long2double(rshift(L,4) & double2long(pi))
3.125
>>> long2double(rshift(L,7) & double2long(pi))
3.140625
This only truncates the mantissa to the given number of bits, though; it does not round. The rshift function is needed because Python's right-shift operator fills the vacated leftmost bit with a zero. See a description of IEEE floating point here.
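If you do want rounding at this bit level, one option (a sketch building on the helpers above, not part of the original answer; round_mantissa is my own name, and it assumes positive normal doubles) is to add half of the least significant kept mantissa bit before masking, so the carry propagates through the IEEE layout:

def round_mantissa(d, n_kept):
    # Round d to n_kept mantissa bits: add half of the least significant
    # kept bit, then mask off the discarded bits.
    half = 1 << (52 - n_kept - 1)
    mask = rshift(bits2long('1'*12 + '0'*52), n_kept)
    return long2double((double2long(d) + half) & mask)

>>> round_mantissa(pi, 3)                          # rounds up to 3.25
3.25
>>> long2double(rshift(L, 3) & double2long(pi))    # plain truncation gives 3.0
3.0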
You can round to binary fixed precision without explicit type conversions that tend to generate a lot of interpreter overhead:
import numpy as np

n_bits = 2                            # number of fractional bits
f = 1 << n_bits                       # scale factor 2**n_bits
a = np.linspace(1, 2, 11)
a_fix = np.round(a * f) * (1.0 / f)   # round to the nearest multiple of 2**-n_bits
print(a)
print(a_fix)
Results in
[ 1. 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2. ]
[ 1. 1. 1.25 1.25 1.5 1.5 1.5 1.75 1.75 2. 2. ]
The example uses numpy, but that's just for the convenience of generating a list of example values. Python's built-in round
will work just as well for single values:
x = 3.1415926
x_fix = round(x * f) / float(f)
print(x_fix)
Note that both f and 1.0/f have exact floating-point representations (they are powers of two), so the multiplication and division above are exact and introduce no additional rounding error. Also note that multiplying by 1.0/f is about 3x faster than dividing directly for large arrays.
This approach doesn't control the number of bits for the integer part, so if you want the numbers to saturate or wrap around when they are too big, you'd have to do a bit more bit shifting.
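For example, a possible sketch (my own code, not from the answer above) that assumes positive inputs and an unsigned range, with 3 integer and 2 fractional bits by default:

import numpy as np

def to_fixed(a, int_bits=3, frac_bits=2, mode='saturate'):
    # Quantize to frac_bits fractional bits, then either saturate to the
    # largest representable value or wrap around modulo 2**int_bits.
    f = 1 << frac_bits
    q = np.round(np.asarray(a, dtype=np.float64) * f)
    top = 1 << (int_bits + frac_bits)
    if mode == 'saturate':
        q = np.clip(q, 0, top - 1)
    else:                              # 'wrap'
        q = np.mod(q, top)
    return q * (1.0 / f)

print(to_fixed(3.1415926))             # 3.25, as in the num2fixpt example
print(to_fixed(9.7))                   # saturates at 7.75
print(to_fixed(9.7, mode='wrap'))      # wraps around to 1.75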