How to delete a column in a pandas DataFrame based on a condition?

I have a pandas DataFrame with many NaN values in it.

How can I drop the columns for which number_of_na_values > 2000?

I tried to do it like this:

# count NaNs per column, collect the offending column names, then drop them
toRemove = set()
naNumbersPerColumn = df.isnull().sum()
for i in naNumbersPerColumn.index:
    if naNumbersPerColumn[i] > 2000:
        toRemove.add(i)
for i in toRemove:
    df.drop(i, axis=1, inplace=True)

Is there a more elegant way to do it?

asked Jul 24 '15 by Fedorenko Kristina


1 Answer

Here's another alternative: keep only the columns whose NaN count is less than or equal to a specified maximum:

max_number_of_nans = 3000
df = df.loc[:, (df.isnull().sum(axis=0) <= max_number_of_nans)]
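
For example, on a toy frame you can inspect the per-column NaN counts that drive the boolean mask (a minimal sketch; the frame, column names, and threshold are made up for illustration):

import numpy as np
import pandas as pd

# toy frame: column A has 1 NaN, column B has 3
df_toy = pd.DataFrame({'A': [1, np.nan, 3, 4], 'B': [np.nan, np.nan, np.nan, 4]})
print(df_toy.isnull().sum())              # per-column NaN counts: A=1, B=3
mask = df_toy.isnull().sum(axis=0) <= 2   # keep columns with at most 2 NaNs
print(df_toy.loc[:, mask].columns)        # Index(['A'], dtype='object')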

In my tests this seems to be slightly faster than the drop-columns method suggested by Jianxun Li, at least in the cases I tested (shown below). However, the performance gap narrows if you simply avoid the apply method (e.g. df.drop(df.columns[df.isnull().sum(axis=0) > max_number_of_nans], axis=1)). Just a reminder: when it comes to performance in pandas, vectorization almost always wins out over apply.

import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.randn(10000, 5), columns=list('ABCDE'))
df[df < 0] = np.nan
max_number_of_nans = 5010

%timeit c = df.loc[:, (df.isnull().sum(axis=0) <= max_number_of_nans)]
>> 1.1 ms ± 4.08 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit c = df.drop(df.columns[df.isnull().sum(axis=0) > max_number_of_nans], axis=1)
>> 1.3 ms ± 11.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit c = df.drop(df.columns[df.apply(lambda col: col.isnull().sum() > max_number_of_nans)], axis=1)
>> 2.11 ms ± 29.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Performance often varies with data size, so don't forget to benchmark the case closest to your own data.

np.random.seed(0)
df = pd.DataFrame(np.random.randn(10, 5), columns=list('ABCDE'))
df[df < 0] = np.nan
max_number_of_nans = 5

%timeit c = df.loc[:, (df.isnull().sum(axis=0) <= max_number_of_nans)]
>> 755 µs ± 4.84 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit c = df.drop(df.columns[df.isnull().sum(axis=0) > max_number_of_nans], axis=1)
>> 777 µs ± 12 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit c = df.drop(df.columns[df.apply(lambda col: col.isnull().sum() > max_number_of_nans)], axis=1)
>> 1.71 ms ± 17.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
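
Putting it together, here's a self-contained sketch of the recommended approach end to end (the data and threshold are invented for illustration):

import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.random.randn(100, 3), columns=list('ABC'))
df.loc[:50, 'C'] = np.nan                 # make column C NaN-heavy (51 NaNs)

max_number_of_nans = 20
df = df.loc[:, df.isnull().sum(axis=0) <= max_number_of_nans]
print(df.columns)                         # Index(['A', 'B'], dtype='object')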
answered Sep 27 '22 by n8yoder