I need to implement a dynamic programming algorithm to solve the Traveling Salesman Problem in time that beats brute-force search. I will index subproblems by size, and the value of each subproblem will be a float (the length of the tour). However, holding the array in memory will take about 6 GB of RAM if I use Python floats (which actually have double precision), so to roughly halve that amount (I only have 4 GB of RAM) I need to use single-precision floats. However, I don't know how to get single-precision floats in Python (I am using Python 3). Could someone please tell me where I can find them? (I was not able to find much on this online.) Thanks.
EDIT: I notice that NumPy also has a float16 type, which would allow for even more memory savings. The distances between points are around 10000, there are 25 unique points, and my answer needs to be correct to the nearest integer. Will float16 provide enough accuracy, or do I need to use float32?
As a first step, you should use a NumPy array to store your data instead of a Python list.
As you correctly observe, a Python `float` uses double precision internally, and the double-precision value underlying a Python `float` can be represented in 8 bytes. But on a 64-bit machine, with the CPython reference implementation of Python, a Python `float` object takes a full 24 bytes of memory: 8 bytes for the underlying double-precision value, 8 bytes for a pointer to the object type, and 8 bytes for a reference count (used for garbage collection). There's no equivalent of Java's "primitive" types or .NET's "value" types in Python - everything is boxed. That makes the language semantics simpler, but means that objects tend to be fatter.

Now if we're creating a Python list of `float` objects, there's the added overhead of the list itself: one 8-byte object pointer per Python `float` (still assuming a 64-bit machine here). So in general, a list of `n` Python `float` objects is going to cost you over `32n` bytes of memory. On a 32-bit machine, things are a little better, but not much: our `float` objects are going to take 16 bytes each, and with the list pointers we'll be using `20n` bytes of memory for a list of `float`s of length `n`. (Caveat: this analysis doesn't quite work in the case that your list refers to the same Python `float` object from multiple list indices, but that's not a particularly common case.)
In contrast, a NumPy array of `n` double-precision floats (using NumPy's `float64` dtype) stores its data in "packed" format in a single data block of `8n` bytes, so allowing for the array metadata the total memory requirement will be a little over `8n` bytes.
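To make that arithmetic concrete, here is a quick sketch measuring the actual footprints on a 64-bit CPython build (the size `n = 1_000_000` is just an illustrative choice, not taken from the question):

```python
import sys
import numpy as np

n = 1_000_000

# A Python list of n distinct float objects: ~8 bytes per list slot,
# plus ~24 bytes per boxed float object on 64-bit CPython.
lst = [float(i) for i in range(n)]
list_bytes = sys.getsizeof(lst) + sum(sys.getsizeof(x) for x in lst)

# A NumPy float64 array packs the same values at 8 bytes each;
# float32 halves that again.
arr64 = np.arange(n, dtype=np.float64)
arr32 = arr64.astype(np.float32)

print(list_bytes)    # roughly 32n bytes
print(arr64.nbytes)  # exactly 8n bytes
print(arr32.nbytes)  # exactly 4n bytes
```

`sys.getsizeof` counts only each object's own allocation, which is why the list's total has to be summed over the list and its elements separately.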
Conclusion: just by switching from a Python list to a NumPy array you'll reduce your memory needs by about a factor of 4. If that's still not enough, then it might make sense to consider reducing precision from double to single precision (NumPy's `float32` dtype), if that's consistent with your accuracy needs. NumPy's `float16` datatype takes only 2 bytes per float, but records only about 3 decimal digits of precision; I suspect that it's going to be close to useless for the application you describe.
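You can check that last point directly at the scale mentioned in the question (distances near 10000, tour lengths on the order of 25 * 10000 = 250000) by asking NumPy for the gap between adjacent representable values:

```python
import numpy as np

# float16 has a 10-bit significand: near 10000 the gap between
# adjacent representable values is already 8, so even a single
# distance can be off by several units.
print(np.spacing(np.float16(10000.0)))  # 8.0

# Worse, float16 overflows above 65504, so accumulated tour
# lengths around 250000 are not representable at all.
print(np.float16(250000.0))             # inf

# float32 resolves values near 250000 to well within 1, which is
# enough for rounding a final answer to the nearest integer.
print(np.spacing(np.float32(250000.0))) # 0.015625
```

(That float32 spacing says nothing about accumulated rounding error across many additions, but it shows float32 is at least in the right ballpark for this problem size, while float16 is not.)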
You could try the `c_float` type from the `ctypes` standard library. Alternatively, if you are capable of installing additional packages you might try the `numpy` package. It includes the `float32` type.
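If you go the `ctypes` route, note that you want a ctypes *array* type rather than a Python list of individual `c_float` objects, since each Python-level `c_float` instance is itself a boxed object with its own overhead. A minimal sketch (the size `n` here is just illustrative):

```python
import ctypes

n = 10  # illustrative size

# (c_float * n) is a ctypes array type holding n single-precision
# floats packed at 4 bytes each.
FloatArray = ctypes.c_float * n
arr = FloatArray(*(float(i) for i in range(n)))

print(ctypes.sizeof(arr))  # 40: 4 bytes per element
print(arr[3])              # 3.0 (read back as an ordinary Python float)
```

Indexing such an array converts to and from ordinary Python floats, with values rounded to single precision on the way in.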