Surely a 0-d array is a scalar, but NumPy does not seem to think so... am I missing something, or am I just misunderstanding the concept?
>>> foo = numpy.array(1.11111111111, numpy.float64)
>>> numpy.ndim(foo)
0
>>> numpy.isscalar(foo)
False
>>> foo.item()
1.11111111111
A 0-D array in NumPy holds a single value, but it has no axes, so it cannot be accessed with ordinary integer indexing. 0-D arrays, or scalars, are the elements in an array: each value in an array is itself a 0-D array.
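As a quick illustration (a minimal sketch; the value 42 is arbitrary, and the exact error text varies by NumPy version), a 0-D array accepts the empty tuple as an index but rejects integer indexing, since there is no axis to index along:

>>> import numpy as np
>>> a = np.array(42)    # a 0-D array holding a single value
>>> a.ndim
0
>>> print(a[()])        # the empty tuple is a valid index for a 0-D array
42
>>> a[0]                # integer indexing fails: there is no axis 0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: too many indices for array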
In NumPy, a scalar is any single value you can put in an array. It is similar to the concept in linear algebra: an element of the field over which a vector space is defined. NumPy ensures all scalars in a given array have the same type; it is impossible for one element to have type int32 while the others have type int64.
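For example (a small sketch with arbitrary values), mixing an int with a float in one array forces a single shared dtype rather than keeping per-element types:

>>> import numpy as np
>>> a = np.array([1, 2.5])    # int and float inputs are upcast to one common dtype
>>> a.dtype
dtype('float64')
>>> print(type(a[0]))         # elements come back out as NumPy scalar types
<class 'numpy.float64'>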
One should not think too hard about it. It's ultimately better for the mental health and longevity of the individual.
The curious situation with NumPy's scalar types was born out of the fact that there is no graceful and consistent way to degrade a 1x1 matrix to a scalar type. Even though mathematically they are the same thing, they are handled by very different code.
If you've been writing any amount of scientific code, you ultimately want expressions like max(a) to work on matrices of all sizes, even scalars. Mathematically, this is a perfectly sensible thing to expect. For programmers, however, it means that whatever represents a scalar in NumPy should have the .shape and .ndim attributes, so that at least the ufuncs don't have to do explicit type checking on their inputs for the 21 possible scalar types in NumPy.
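To see this in action (a minimal sketch), both a 0-D array and a NumPy scalar expose the same array-like attributes, so a reduction like np.max accepts either without any special-casing:

>>> import numpy as np
>>> for x in (np.array(3.0), np.float64(3.0)):
...     print(x.shape, x.ndim, np.max(x))
...
() 0 3.0
() 0 3.0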
On the other hand, they should also work with existing Python libraries that do explicit type checks on scalar types. This is a dilemma, since a NumPy ndarray would have to change its type the moment it has been reduced to a scalar, and there is no way of knowing whether that has occurred without performing checks on every access. Going that route would probably make things ridiculously slow by scalar-type standards.
The NumPy developers' solution is to have its own scalar types inherit from both ndarray and the Python scalars, so that all scalars also get .shape, .ndim, .T, etc. The 1x1 matrix will still be there, but its use is discouraged if you know you'll be dealing with a scalar. While this should work fine in theory, occasionally you can still spot places where they missed with the paint roller, and the ugly innards are exposed for all to see:
>>> from numpy import *
>>> a = array(1)
>>> b = int_(1)
>>> a.ndim
0
>>> b.ndim
0
>>> a[...]
array(1)
>>> a[()]
1
>>> b[...]
array(1)
>>> b[()]
1
There's really no reason why a[...] and a[()] should return different things, but they do. There are proposals in place to change this, but it looks like they forgot to finish the job for 1x1 arrays.
A potentially bigger, and possibly non-resolvable, issue is the fact that NumPy scalars are immutable. Therefore "spraying" a scalar into an ndarray, mathematically the adjoint operation of collapsing an array into a scalar, is a PITA to implement. You can't actually grow a NumPy scalar; by definition it cannot be cast into an ndarray, even though newaxis mysteriously works on it:
>>> b[0,1,2,3] = 1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'numpy.int32' object does not support item assignment
>>> b[newaxis]
array([1])
In Matlab, growing the size of a scalar is a perfectly acceptable and brainless operation. In NumPy you have to sprinkle the jarring a = array(a) everywhere you think you might start with a scalar and end up with an array. I understand why NumPy has to be this way to play nice with Python, but that doesn't change the fact that many new switchers are deeply confused about this. Some have explicit memories of struggling with this behaviour and eventually persevering, while others, too far gone, are generally left with some deep, shapeless mental scar that frequently haunts their most innocent dreams. It's an ugly situation for all.
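For what it's worth, here is a minimal sketch of that workaround (not from the original answer): wrapping with array, or the convenience function np.atleast_1d, promotes a scalar back into a mutable ndarray:

>>> import numpy as np
>>> x = np.float64(2.0)     # an immutable NumPy scalar
>>> a = np.atleast_1d(x)    # promote it to a 1-D ndarray
>>> a[0] = 5.0              # item assignment now works
>>> a
array([5.])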
You have to create the scalar array a little bit differently:
>>> x = numpy.float64(1.111)
>>> x
1.111
>>> numpy.isscalar(x)
True
>>> numpy.ndim(x)
0
It looks like scalars in NumPy may be a somewhat different concept from what you're used to from a purely mathematical standpoint. I'm guessing you're thinking in terms of scalar matrices?