When multiplying a NumPy float by a list, the float is silently cast to an int and the operation becomes sequence repetition:
>>> import numpy as np
>>> a = [1, 2, 3]
>>> np.float64(2.0) * a ### This behaves as 2 * a
[1, 2, 3, 1, 2, 3]
A plain Python float raises a TypeError:
>>> 2.0 * a ### This does not
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't multiply sequence by non-int of type 'float'
However, the numpy float cannot be used for indexing
>>> a[np.float64(2.0)]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: list indices must be integers, not numpy.float64
What is the logic behind this behaviour?
You've hit up against a known bug in NumPy. The GitHub issue was closed last year, but the behavior remains in NumPy version 1.9.1.
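Until that is resolved, a minimal workaround (a sketch, assuming you only need standard list semantics) is to convert the NumPy scalar to a plain Python int explicitly, which makes both repetition and indexing behave consistently:
>>> import numpy as np
>>> a = [1, 2, 3]
>>> n = int(np.float64(2.0))  # explicit conversion instead of relying on the silent cast
>>> n * a
[1, 2, 3, 1, 2, 3]
>>> a[n]
3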