Can you please help me understand the main differences (if any) between the native int type and the numpy.int32 or numpy.int64 types?
NumPy dtype: a data type object describes the fixed-size block of memory corresponding to each element of an array. We can create a dtype object with the np.dtype constructor, which accepts an object representing the type to be converted into a data type object.
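For example, a minimal sketch of creating dtype objects (the printed item sizes assume a typical 64-bit platform):

```python
import numpy as np

# A dtype object describes the fixed-size block of memory used for each array element.
dt = np.dtype(np.int32)
print(dt)           # int32
print(dt.itemsize)  # 4 bytes per element

# The constructor also accepts type strings.
print(np.dtype('int64').itemsize)  # 8 bytes per element
```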
int16 represents 16-bit signed integers, int32 represents 32-bit signed integers, and int64 represents 64-bit signed integers.
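As a quick illustration, np.iinfo reports the range each of these fixed-size signed types can hold:

```python
import numpy as np

# Each signed fixed-size integer type has a hard minimum and maximum.
for t in (np.int16, np.int32, np.int64):
    info = np.iinfo(t)
    print(t.__name__, info.min, info.max)
# int16 -32768 32767
# int32 -2147483648 2147483647
# int64 -9223372036854775808 9223372036854775807
```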
There are several major differences. The first is that Python integers are flexible-sized (at least in Python 3.x). This means they can grow to accommodate any number of any size (within memory constraints, of course). The numpy integers, on the other hand, are fixed-sized. This means there is a maximum value they can hold. This is determined by the number of bytes in the integer (int32 vs. int64), with more bytes holding larger numbers, as well as by whether the number is signed or unsigned (int32 vs. uint32), with unsigned types able to hold larger numbers but unable to hold negative numbers.
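A short sketch of that difference, assuming typical NumPy behavior where fixed-size integer arithmetic wraps around on overflow:

```python
import numpy as np

# A Python int grows as needed, so this never overflows.
print(2 ** 63)  # 9223372036854775808

# An int64 array cannot hold that value: the multiplication wraps around.
a = np.array([2 ** 62], dtype=np.int64)
print(a * 2)  # [-9223372036854775808]

# An unsigned type can hold larger positive values, but no negatives.
print(np.iinfo(np.uint32).max, np.iinfo(np.int32).max)  # 4294967295 2147483647
```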
So, you might ask, why use fixed-sized integers at all? The reason is that modern processors have built-in instructions for doing math on fixed-size integers, so calculations on those are much, much faster. In fact, Python uses a fixed-size representation behind the scenes when the number is small enough, only switching to the slower, flexible-sized representation when the number gets too large.
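As a rough illustration of that speed difference (the exact timings depend on your hardware and NumPy version), summing a contiguous int64 array is typically much faster than summing the same values as a list of Python ints:

```python
import numpy as np
import timeit

n = 1_000_000
py_list = list(range(n))
np_array = np.arange(n, dtype=np.int64)

# Each Python int in the list is a separate heap object that must be unboxed.
t_list = timeit.timeit(lambda: sum(py_list), number=10)

# The int64 array is summed in an optimized C loop over raw machine integers.
t_array = timeit.timeit(lambda: np_array.sum(), number=10)

print(f"list of Python ints: {t_list:.4f} s")
print(f"NumPy int64 array:   {t_array:.4f} s")
```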
Another advantage of fixed-sized values is that they can be placed into consistently-sized adjacent memory blocks of the same type. This is the format that numpy arrays use to store data. The libraries that numpy relies on can do extremely fast computations on data in this format; in fact, modern CPUs have built-in features for accelerating exactly this sort of computation. With variable-sized Python integers, this sort of computation is impossible because there is no way to say how big the blocks should be and no consistency in the data format.
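To make the "consistently-sized adjacent memory blocks" point concrete, here is a small sketch inspecting an array's layout:

```python
import numpy as np

# A fixed-dtype array is one contiguous block: every element takes the
# same number of bytes at a predictable offset from the start.
arr = np.arange(10, dtype=np.int32)
print(arr.itemsize)               # 4 bytes per element
print(arr.nbytes)                 # 40 bytes total
print(arr.strides)                # (4,) -> step 4 bytes to the next element
print(arr.flags['C_CONTIGUOUS'])  # True
```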
That being said, numpy is actually able to make arrays of Python integers. But rather than containing the values themselves, those arrays contain references to other pieces of memory holding the actual Python integer objects. This cannot be accelerated in the same way, so even if all of the Python integers fit within the fixed integer size, the computation still won't be accelerated.
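A sketch of the difference between a fixed-dtype array and an object array of Python ints (the timing comparison is illustrative, not a benchmark):

```python
import numpy as np
import timeit

values = list(range(100_000))
int_array = np.array(values, dtype=np.int64)  # contiguous machine integers
obj_array = np.array(values, dtype=object)    # references to Python int objects

print(int_array.dtype, obj_array.dtype)  # int64 object

# Even though every value fits in an int64, the object array is summed
# element by element at the Python level, so it misses the fast path.
print(timeit.timeit(int_array.sum, number=100))
print(timeit.timeit(obj_array.sum, number=100))
```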
Note that none of this applies to Python 2. In Python 2, plain Python integers were fixed-size and thus could be translated directly into numpy integers. For variable-length integers, Python 2 had a separate long type. But this split was confusing, and it was decided that the confusion wasn't worth the performance gains, especially since people who need performance would be using numpy or something like it anyway.