I have hundreds of text files like this, with columns separated by three spaces. Each file holds one year of data: 12 month columns and 31 day rows. Below I'm only showing what's relevant to the question:
001 DIST - ADILABAD ANDHRA MEAN TEMP
DATE JAN FEB MAR . . . . NOV DEC
01 21.5 24.3 27.1 25.8 22.4
02 21.4 24.2 27.1 25.8 22.4
. . . . . .
. . . . . .
. . . . . .
27 23.6 26.8 30.3 23.1 21.3
28 23.8 27.0 30.6 22.9 21.3
29 23.4 31.0 22.9 21.2
30 23.5 31.1 22.6 21.4
31 23.8 31.2 . . . . 21.6
I want to read each column into an array and then average it. For this I'm using the genfromtxt() function like this:
import numpy as np
JAN,FEB,MAR,APR,MAY,JUN,JUL,AUG,SEP,OCT,NOV,DEC = np.genfromtxt("tempmean_andhra_adilabad.txt", skiprows=3,
unpack=True, invalid_raise=False,
usecols=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12),
autostrip=True)
As you can see, I've skipped the first three rows and the first column, and unpacked each remaining column into its own array. Without invalid_raise=False, I was getting the following error:
Traceback (most recent call last):
File "<pyshell#32>", line 1, in <module>
JAN,FEB,MAR,APR,MAY,JUN,JUL,AUG,SEP,OCT,NOV,DEC = np.genfromtxt("temp mean_andhra_adilabad.txt",skiprows=3,unpack=True,usecols=(1,2,3,4,5,6,7,8,9,10,11,12),autostrip=True)
File "C:\Python27\lib\site-packages\numpy\lib\npyio.py", line 1667, in genfromtxt
raise ValueError(errmsg)
ValueError: Some errors were detected !
Line #32 (got 12 columns instead of 12)
Line #33 (got 12 columns instead of 12)
Line #34 (got 8 columns instead of 12)
I think this problem arises because the columns have different lengths, but I'm not sure.
I wanted to see the output, so I used invalid_raise=False. Now my problem is that when I print any of the arrays, such as JAN, I only get 28 elements. Every array has only 28 elements; it seems that only 28 rows are read for each column, because the FEB column ends at day 28. But I need all the data for each month, i.e. 31 elements for JAN, 30 for JUN, etc.
How do I get all the elements for each month?
This is probably a very basic question, but I'm very new to Python and NumPy; I started learning just two weeks ago. I've searched a lot of questions on StackOverflow and Google and learned how to skip rows, columns, etc., but I could not find an answer to this particular question.
Please suggest a module, function, or some code.
Thanks in advance.
Your data is not delimited by a separator character; instead, it has fixed-width columns. As @EdChum shows in his answer, pandas has a function for reading data with fixed-width columns. You can also use genfromtxt by giving the column widths in the delimiter argument. It looks like the field widths are (4, 7, 7, 7, ...). In the code below, I'll write this as (4,) + (7,)*12:
In [27]: (4,) + (7,)*12
Out[27]: (4, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7)
The default data type used by genfromtxt is np.float64. If a field can't be converted to a float, it is replaced with nan. So the data at the end of the months with fewer than 31 days will be nan.
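To illustrate on a small scale, here is a minimal sketch of this behavior; the sample data and field widths below are made up for the demo and don't match your real file:

```python
import io
import numpy as np

# Hypothetical fixed-width sample: a 2-char day field followed by two
# 5-char month fields. The second month is blank on the last row,
# mimicking a month with fewer days.
sample = (
    "01 21.5 24.3\n"
    "02 21.4 24.2\n"
    "03 23.5     \n"
)

# Passing a sequence of ints as `delimiter` tells genfromtxt to split
# the line at fixed widths instead of on a separator character.
data = np.genfromtxt(io.StringIO(sample), delimiter=(2, 5, 5), usecols=(1, 2))
print(data)
```

The blank field in the last row can't be converted to a float, so it comes back as nan rather than truncating the column.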
In the following, I renamed your file to "temp_mean.txt". Note that your file has an extra blank line at the end, so the argument skip_footer=1 is also used. If you don't use this argument, you'll get an extra row of nan values in data.
In [16]: data = genfromtxt("temp_mean.txt", skiprows=3, delimiter=(4,)+(7,)*12, usecols=range(1,13), skip_footer=1)
In [17]: data.shape
Out[17]: (31, 12)
In [18]: data[:,0] # JAN
Out[18]:
array([ 21.5, 21.4, 21.2, 21.2, 21.4, 21.7, 21.8, 22. , 22. ,
22.3, 22.3, 22.3, 22.5, 22.5, 22.5, 22.5, 22.5, 22.6,
22.8, 23.1, 23.1, 22.8, 22.9, 23.1, 23.4, 23.5, 23.6,
23.8, 23.4, 23.5, 23.8])
In [19]: data[:,1] # FEB
Out[19]:
array([ 24.3, 24.2, 24.3, 24.4, 24.6, 24.4, 24.1, 24.4, 24.5,
24.6, 24.9, 25. , 25.1, 25.6, 25.7, 25.7, 25.8, 26. ,
25.9, 25.9, 25.8, 25.8, 25.8, 26.2, 26.5, 26.7, 26.8,
27. , nan, nan, nan])
In [20]: data[-1,:] # Last row.
Out[20]:
array([ 23.8, nan, 31.2, nan, 34.7, nan, 27.4, 27. , nan,
25.7, nan, 21.6])
To get the monthly means, you can use np.nanmean:
In [21]: np.nanmean(data, axis=0)
Out[21]:
array([ 22.5483871 , 25.35714286, 29.22903226, 32.79333333,
34.65806452, 31.19666667, 27.89032258, 27.01612903,
27.66666667, 27.22580645, 24.34666667, 21.81290323])
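As a small self-contained sketch (the numbers here are invented, not your real data): np.nanmean ignores the nan entries in each column, so the short months are averaged over only their valid days. If your NumPy version predates np.nanmean (it was added in 1.8), a masked array gives the same result:

```python
import numpy as np

# Tiny stand-in for the (31, 12) data array: two "months" where the
# second has only two valid days.
data = np.array([[21.5, 24.3],
                 [21.4, 24.2],
                 [23.8, np.nan]])

# nan entries are skipped per column, so the second column is
# averaged over two values, not three.
print(np.nanmean(data, axis=0))

# Equivalent on older NumPy versions, using a masked array:
masked = np.ma.masked_invalid(data)
print(masked.mean(axis=0).filled(np.nan))
```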