I import several netCDF files with xarray and eventually need to convert all of them into one pandas DataFrame. The files contain weather data, with many missing observations for certain latitude/longitude combinations over time (because they lie in the middle of the ocean). Coordinates: lat, lon, time; variables: tmp, pre. Before converting to a DataFrame, I want to get rid of these missing observations/whole coordinates. Is there an easy and efficient way to do that with xarray? I didn't find anything in the docs.
import pandas as pd
import xarray as xr
path = 'Z:/Research/Climate_change/Climate_extreme_index/CRU data/'
temp_data = path+'cru_ts4.01.1901.2016.tmp.dat.nc'
pre_data = path+'cru_ts4.01.1901.2016.pre.dat.nc'
# Open netcdf
def open_netcdf(datapath):
    print("Loading data...")
    data = xr.open_dataset(datapath, autoclose=True, drop_variables='stn', cache=True)
    return data
# Open and merge the two datasets
data_temp = open_netcdf(temp_data)
data_pre = open_netcdf(pre_data)
all_data = xr.merge([data_temp, data_pre])
#################################################################
<xarray.Dataset>
Dimensions: (lat: 360, lon: 720, time: 1392)
Coordinates:
* lon (lon) float32 -179.75 -179.25 -178.75 -178.25 -177.75 -177.25 ...
* lat (lat) float32 -89.75 -89.25 -88.75 -88.25 -87.75 -87.25 -86.75 ...
* time (time) datetime64[ns] 1901-01-16 1901-02-15 1901-03-16 ...
Data variables:
tmp (time, lat, lon) float32 ...
pre (time, lat, lon) float32 ...
#########################################################
# DataFrame example
tmp pre
lat lon time
-89.75 -179.75 1901-01-16 NaN NaN
1901-02-15 NaN NaN
1901-03-16 NaN NaN
1901-04-16 NaN NaN
1901-05-16 NaN NaN
1901-06-16 NaN NaN
1901-07-16 NaN NaN
1901-08-16 NaN NaN
1901-09-16 NaN NaN
1901-10-16 NaN NaN
1901-11-16 NaN NaN
The short answer is that converting the Dataset to a DataFrame before dropping NaNs is exactly the right solution.
One of the key differences between a pandas DataFrame with a MultiIndex and an xarray Dataset is that individual index elements (specific time/lat/lon combinations) can be dropped from a MultiIndex without dropping every other row that shares that time, lat, or lon value. An xarray Dataset, on the other hand, models each dimension (time, lat, and lon) as orthogonal, so a NaN cannot be dropped without dropping an entire slice of the array along one of those dimensions. This is a core feature of the xarray data model.
As an example, here is a small dataset that matches the structure of your data:
In [1]: import pandas as pd, numpy as np, xarray as xr
In [2]: ds = xr.Dataset({
   ...:     var: xr.DataArray(
   ...:         np.random.random((4, 3, 6)),
   ...:         dims=['time', 'lat', 'lon'],
   ...:         coords=[
   ...:             pd.date_range('2010-01-01', periods=4, freq='Q'),
   ...:             np.arange(-60, 90, 60),
   ...:             np.arange(-180, 180, 60)])
   ...:     for var in ['tmp', 'pre']})
   ...:
We can create a fake land mask which will NaN out the same lat/lon combos for every time period:
In [3]: land_mask = (np.random.random((1, 3, 6)) > 0.3)
In [4]: ds = ds.where(land_mask)
In [5]: ds.tmp
Out[5]:
<xarray.DataArray 'tmp' (time: 4, lat: 3, lon: 6)>
array([[[0.020626, 0.937496,      nan, 0.052608, 0.266924, 0.361297],
        [0.299442, 0.524904, 0.447275, 0.277471,      nan, 0.595671],
        [0.541777, 0.279131,      nan, 0.282487,      nan,      nan]],

       [[0.473278, 0.302622,      nan, 0.664146, 0.401243, 0.949998],
        [0.225176, 0.601039, 0.543229, 0.144694,      nan, 0.196285],
        [0.059406, 0.37001 ,      nan, 0.867737,      nan,      nan]],

       [[0.571011, 0.864374,      nan, 0.123406, 0.663951, 0.684302],
        [0.867234, 0.823417, 0.351692, 0.46665 ,      nan, 0.215644],
        [0.425196, 0.777346,      nan, 0.332028,      nan,      nan]],

       [[0.916069, 0.54719 ,      nan, 0.11225 , 0.560431, 0.22632 ],
        [0.605043, 0.991989, 0.880175, 0.3623 ,      nan, 0.629986],
        [0.222462, 0.698494,      nan, 0.56983 ,      nan,      nan]]])
Coordinates:
* time (time) datetime64[ns] 2010-03-31 2010-06-30 2010-09-30 2010-12-31
* lat (lat) int64 -60 0 60
* lon (lon) int64 -180 -120 -60 0 60 120
You can see that no lat or lon index can be dropped without losing valid data. On the other hand, when the data is converted to a DataFrame, the lat/lon/time dimensions are stacked, meaning a single element in this index can be dropped without affecting other rows:
In [6]: ds.to_dataframe()
Out[6]:
tmp pre
lat lon time
-60 -180 2010-03-31 0.020626 0.605749
2010-06-30 0.473278 0.192560
2010-09-30 0.571011 0.850161
2010-12-31 0.916069 0.415747
-120 2010-03-31 0.937496 0.465283
2010-06-30 0.302622 0.492205
2010-09-30 0.864374 0.461739
2010-12-31 0.547190 0.755914
-60 2010-03-31 NaN NaN
2010-06-30 NaN NaN
2010-09-30 NaN NaN
2010-12-31 NaN NaN
0 2010-03-31 0.052608 0.529258
2010-06-30 0.664146 0.116303
2010-09-30 0.123406 0.389693
... ... ...
60 120 2010-03-31 NaN NaN
2010-06-30 NaN NaN
2010-09-30 NaN NaN
2010-12-31 NaN NaN
[72 rows x 2 columns]
When dropna(how='all') is called on this DataFrame, the all-NaN rows are removed and no valid data is lost:
In [7]: ds.to_dataframe().dropna(how='all')
Out[7]:
tmp pre
lat lon time
-60 -180 2010-03-31 0.020626 0.605749
2010-06-30 0.473278 0.192560
2010-09-30 0.571011 0.850161
2010-12-31 0.916069 0.415747
-120 2010-03-31 0.937496 0.465283
2010-06-30 0.302622 0.492205
2010-09-30 0.864374 0.461739
2010-12-31 0.547190 0.755914
0 2010-03-31 0.052608 0.529258
2010-06-30 0.664146 0.116303
2010-09-30 0.123406 0.389693
2010-12-31 0.112250 0.485259
60 2010-03-31 0.266924 0.795056
2010-06-30 0.401243 0.299577
2010-09-30 0.663951 0.359567
2010-12-31 0.560431 0.933291
... ... ...
60 0 2010-03-31 0.282487 0.148216
2010-06-30 0.867737 0.643767
2010-09-30 0.332028 0.471430
2010-12-31 0.569830 0.380992
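Applied to your merged dataset, the same pattern would look roughly like this (a sketch using the all_data variable from your snippet; switch how='all' to how='any' if a row should be dropped when either variable is missing):
# Convert to a DataFrame and drop the rows where every variable is NaN
df = all_data.to_dataframe()
df_clean = df.dropna(how='all')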
There is also the dropna method in xarray, e.g.
all_data.dropna('time', how='all')
But as of now it only works along a single dimension at a time, so I am not sure it does what you want. If I understand correctly, you want to remove those lat/lon pairs that are NaN for all times? I think you have to turn lat and lon into a single pandas MultiIndex coordinate and then use dropna along this new dimension, as sketched below.
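A rough sketch of that idea, assuming you want to drop the lat/lon pairs that are NaN at every time step in both variables (the latlon name is arbitrary):
# Stack lat and lon into one MultiIndex dimension
stacked = all_data.stack(latlon=('lat', 'lon'))
# Drop latlon entries that are all-NaN across time and both variables
cleaned = stacked.dropna('latlon', how='all')
# Optionally restore separate lat/lon dimensions (the dropped combos come back as NaN)
# cleaned = cleaned.unstack('latlon')
Note that this keeps a lat/lon pair as long as it has valid data at any time, whereas dropna(how='all') on the DataFrame in the answer above also drops individual all-NaN time rows.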