I have a pandas dataframe (df), and I want to do something like:
newdf = df[(df.var1 == 'a') & (df.var2 == NaN)]
I've tried replacing NaN with np.NaN, 'NaN', 'nan', etc., but nothing evaluates to True. There's no pd.NaN.
I can use df.fillna(np.nan) before evaluating the above expression, but that feels hackish, and I wonder if it will interfere with other pandas operations that rely on being able to identify pandas-format NaNs later.
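For what it's worth, here is a minimal reproduction with made-up data (the column names match my real frame; the values are invented):

import numpy as np
import pandas as pd

df = pd.DataFrame({'var1': ['a', 'a', 'b'],
                   'var2': [1.0, np.nan, np.nan]})

newdf = df[(df.var1 == 'a') & (df.var2 == np.nan)]
print(newdf)  # empty DataFrame: the comparison never evaluates to True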
I get the feeling there should be an easy answer to this question, but somehow it has eluded me. Any advice is appreciated. Thank you.
To filter out the rows of a pandas DataFrame that have missing values in the Last_Name column, build a boolean mask with the pandas notnull() function. It returns a boolean Series, with True for non-null values and False for null or missing values; indexing the DataFrame with that mask keeps only the rows that have a value.
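A short sketch of that approach (the DataFrame and the names in it are invented for illustration):

import numpy as np
import pandas as pd

df = pd.DataFrame({'Last_Name': ['Smith', np.nan, 'Lee'],
                   'First_Name': ['Ann', 'Bob', np.nan]})

mask = df['Last_Name'].notnull()  # True where Last_Name is present
print(df[mask])                   # keeps rows 0 and 2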
Simplest of all solutions:
filtered_df = df[df['var2'].isnull()]
This filters the DataFrame and gives you only the rows where the 'var2' column is NaN.
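Applied to the original question, the full filter looks like this (a sketch using an invented df with the question's column names):

import numpy as np
import pandas as pd

df = pd.DataFrame({'var1': ['a', 'a', 'b'],
                   'var2': [1.0, np.nan, np.nan]})

# var1 must equal 'a' AND var2 must be missing:
newdf = df[(df.var1 == 'a') & (df.var2.isnull())]
print(newdf)  # the single row with var1 == 'a' and var2 NaN

Note that isna() is an alias of isnull(), and notnull()/notna() are their complements for keeping the non-null rows instead.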