I have a function that, among other things, calculates the mean of the rows of an ndarray (2D or 1D) via ndarray.mean(axis=0).
For a 1D array, I'd like it to just return itself, since there is only one "row", instead of averaging the elements and returning a scalar.
Is there a Pythonic way to do this other than checking the ndim attribute before taking the average?
import numpy as np

def d_Error(X, y, weights, bias):
    y_hat = probability(X, weights, bias)
    dE_matrix = (X.T * (y - y_hat)).T  # each row is the gradient at that sample
    dEdw = np.mean(dE_matrix, axis=0)  # average gradient; scalar if dE_matrix is 1D
    dEdb = (y - y_hat).mean()          # gives scalar
    dEdz = np.append(dEdw, dEdb)
    return dEdz
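For reference, the straightforward ndim-check approach the question wants to avoid looks like this (a minimal sketch; `row_mean` is a hypothetical helper name, not from the original code):

```python
import numpy as np

def row_mean(ar):
    # Explicitly branch on ndim: a 1D input is returned unchanged,
    # a 2D input is averaged over its rows.
    if ar.ndim == 1:
        return ar
    return ar.mean(axis=0)

row_mean(np.arange(4.0))                   # -> array([0., 1., 2., 3.])
row_mean(np.arange(6.0).reshape(2, 3))     # -> array([1.5, 2.5, 3.5])
```

The answers below show ways to get the same behavior without the explicit branch.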
Use np.atleast_2d -

np.atleast_2d(ar).mean(axis=0)

For a 2D input, np.atleast_2d changes nothing. For a 1D input, let's look at a sample case -
In [125]: a1D = np.arange(4).astype(float)
In [126]: a1D
Out[126]: array([0., 1., 2., 3.])
In [127]: np.atleast_2d(a1D).mean(axis=0)
Out[127]: array([0., 1., 2., 3.])
Another option, with reshaping -

ar.reshape(-1,ar.shape[-1]).mean(0)
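A quick check that the reshape version behaves the same for both shapes (a small sketch; the arrays here are just sample inputs):

```python
import numpy as np

a1 = np.arange(4).astype(float)                 # 1D: reshape(-1, 4) makes it (1, 4)
a2 = np.arange(6).astype(float).reshape(2, 3)   # 2D: reshape is a no-op here

m1 = a1.reshape(-1, a1.shape[-1]).mean(0)       # -> array([0., 1., 2., 3.])
m2 = a2.reshape(-1, a2.shape[-1]).mean(0)       # -> array([1.5, 2.5, 3.5])
```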
A less elegant solution, more of a "trick", is passing a tuple of indices to the axis=…
parameter. If that tuple is empty, mean returns the original array. So you can pass it a range of indices:

dEdw = dE_matrix.mean(axis=tuple(range(dE_matrix.ndim-1)))

This results in the singleton tuple (0,)
for a 2D array, and an empty tuple ()
for a 1D array.
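Demonstrating the tuple-of-axes trick on sample inputs (a sketch with made-up arrays):

```python
import numpy as np

a1 = np.arange(4).astype(float)                 # 1D sample
a2 = np.arange(6).astype(float).reshape(2, 3)   # 2D sample

# axis=() reduces over no axes, so the 1D array comes back unchanged;
# axis=(0,) averages over the rows of the 2D array.
m1 = a1.mean(axis=tuple(range(a1.ndim - 1)))    # axis=() -> array([0., 1., 2., 3.])
m2 = a2.mean(axis=tuple(range(a2.ndim - 1)))    # axis=(0,) -> array([1.5, 2.5, 3.5])
```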