I have a list of say 100k floats and I want to convert it into a bytes buffer.
buf = bytes()
for val in floatList:
    buf += struct.pack('f', val)
return buf
This is quite slow. How can I make it faster using only standard Python 3.x libraries?
Just tell struct how many floats you have. Packing 100k floats takes about 1/100th of a second on my slow laptop.
import random
import struct

floatlist = [random.random() for _ in range(10**5)]
buf = struct.pack('%sf' % len(floatlist), *floatlist)
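If you want to verify the speedup on your own machine, a quick timeit sketch comparing the two approaches might look like this (the pack_one_at_a_time and pack_all_at_once names are just illustrative; exact timings will vary):

import random
import struct
import timeit

floatlist = [random.random() for _ in range(10**5)]

def pack_one_at_a_time():
    # the original approach: one struct.pack call per float
    buf = bytes()
    for val in floatlist:
        buf += struct.pack('f', val)
    return buf

def pack_all_at_once():
    # a single pack call with a '100000f' format string
    return struct.pack('%sf' % len(floatlist), *floatlist)

print(timeit.timeit(pack_one_at_a_time, number=1))
print(timeit.timeit(pack_all_at_once, number=1))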
You can use ctypes and have a double array (or float array) exactly as you would in C, instead of keeping your data in a list. This is fairly low level, but it is a good option if you need great performance and your list is of a fixed size.
You can create the equivalent of a C double array[100]; in Python by doing:
array = (ctypes.c_double * 100)()
The ctypes.c_double * 100 expression yields a Python class for an array of doubles, 100 items long. To write its contents to a file, you can pass the array straight to write(), since it supports the buffer protocol (in Python 2 you would wrap it in buffer()):
>>> f = open("bla.dat", "wb")
>>> f.write(array)
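If you want to check what ended up in the file, here is a small read-back sketch (run after closing f; it assumes the array of c_double written above, and 'd' is the struct format character for a C double):

import struct

with open("bla.dat", "rb") as f:
    data = f.read()

# each c_double occupies struct.calcsize('d') bytes (normally 8)
count = len(data) // struct.calcsize('d')
values = struct.unpack('%dd' % count, data)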
If your data is already in a Python list, packing it into a double array may or may not be faster than calling struct as in Agf's accepted answer - I will leave measuring which is faster as homework, but all the code you need is this:
>>> import ctypes
>>> array = (ctypes.c_double * len(floatlist))(*floatlist)
To get the raw contents as a bytes object, just do bytes(array) - the one drawback here is that you have to take care of the float size (float vs. double) and the platform-dependent representation yourself - the struct module can take care of this for you.
The big win is that with a float array you can still use the elements as numbers, by accessing them just as if it were a plain Python list, while having them readily available as a flat memory region via the buffer protocol.
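A minimal sketch of that dual use, reusing the floatlist and array names from above (the raw and view names are just illustrative):

import ctypes
import random

floatlist = [random.random() for _ in range(10**5)]
array = (ctypes.c_double * len(floatlist))(*floatlist)

# elements still behave like plain Python floats
array[0] = 1.5
total = sum(array)

# the same object exposes its memory as raw bytes
raw = bytes(array)          # a copy, 8 * len(array) bytes long
view = memoryview(array)    # a zero-copy view over the same memory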