Here is my problem: I have a dataframe like this:
    Depr_1  Depr_2  Depr_3
S3       0       5       9
S2       4      11       8
S1       6      11      12
S5       0       4      11
S4       4       8       8
and I just want to calculate the mean over the full dataframe. The following doesn't do what I want, since it returns one mean per column:
df.mean()
Then I came up with:
df.mean().mean()
But this trick won't work for computing the standard deviation, since the standard deviation of the column standard deviations is not the standard deviation of all the values. My final attempts were:
df.get_values().mean()
df.get_values().std()
Except that in the latter case, it uses the mean() and std() functions from numpy. That's not a problem for the mean, but it is for std: the pandas function uses ddof=1 by default, unlike the numpy one, which uses ddof=0.
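For concreteness, here is a minimal reproduction of what I'm seeing (the dataframe is built by hand from the table above, just for illustration):

import pandas as pd

# the example dataframe from the question
df = pd.DataFrame({"Depr_1": [0, 4, 6, 0, 4],
                   "Depr_2": [5, 11, 11, 4, 8],
                   "Depr_3": [9, 8, 12, 11, 8]},
                  index=["S3", "S2", "S1", "S5", "S4"])

print(df.mean())              # one mean per column, not a single number
print(df.mean().mean())       # mean of the column means (equals the overall mean here)
print(df.values.std())        # numpy std, ddof=0
print(df.values.std(ddof=1))  # numpy std with ddof=1, matching pandas' default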
You could convert the dataframe to be a single column with stack (this changes the shape from 5x3 to 15x1) and then take the standard deviation:
df.stack().std() # pandas default degrees of freedom is one
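To see what stack is doing here, a quick sketch with the example frame from the question (rebuilt by hand, so the construction itself is just illustrative):

import pandas as pd

# the 5x3 example frame from the question
df = pd.DataFrame({"Depr_1": [0, 4, 6, 0, 4],
                   "Depr_2": [5, 11, 11, 4, 8],
                   "Depr_3": [9, 8, 12, 11, 8]},
                  index=["S3", "S2", "S1", "S5", "S4"])

stacked = df.stack()   # MultiIndex Series keyed by (row label, column label)
print(stacked.shape)   # (15,) -- one entry per cell of the original 5x3 frame
print(stacked.std())   # pandas sample standard deviation (ddof=1)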
Alternatively, you can use values to convert from a pandas dataframe to a numpy array before taking the standard deviation:
df.values.std(ddof=1) # numpy default degrees of freedom is zero
Unlike pandas, numpy will give the standard deviation of the entire array by default, so there is no need to reshape before taking the standard deviation.
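As a quick sanity check that the two approaches agree (up to floating point rounding, see the notes below), using the same example frame:

import numpy as np
import pandas as pd

df = pd.DataFrame({"Depr_1": [0, 4, 6, 0, 4],
                   "Depr_2": [5, 11, 11, 4, 8],
                   "Depr_3": [9, 8, 12, 11, 8]},
                  index=["S3", "S2", "S1", "S5", "S4"])

pandas_std = df.stack().std()      # ddof=1 by default
numpy_std = df.values.std(ddof=1)  # ddof=1 requested explicitly
print(np.isclose(pandas_std, numpy_std))  # True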
A couple of additional notes:
The numpy approach here is a bit faster than the pandas one, which is generally true when you have the option to accomplish the same thing with either numpy or pandas. The speed difference will depend on the size of your data, but numpy was roughly 10x faster when I tested a few different sized dataframes on my laptop (numpy version 1.15.4 and pandas version 0.23.4).
The numpy and pandas approaches here will not give exactly the same answers, but will be extremely close (identical at several digits of precision). The discrepancy is due to slight differences in implementation behind the scenes that affect how the floating point values get rounded.
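If you want to check the speed difference on your own data, a rough sketch with timeit (the frame size, repeat count, and the ~10x figure quoted above are all machine- and version-dependent, so treat the numbers as illustrative):

import timeit

import numpy as np
import pandas as pd

# a larger random frame, purely for timing purposes
rng = np.random.default_rng(0)
big = pd.DataFrame(rng.standard_normal((10_000, 50)))

t_pandas = timeit.timeit(lambda: big.stack().std(), number=100)
t_numpy = timeit.timeit(lambda: big.values.std(ddof=1), number=100)
print(f"pandas stack().std(): {t_pandas:.3f}s  |  numpy values.std(ddof=1): {t_numpy:.3f}s")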