I'm starting from the pandas DataFrame docs here: http://pandas.pydata.org/pandas-docs/stable/dsintro.html
I'd like to iteratively fill the DataFrame with values in a time series kind of calculation. So basically, I'd like to initialize the DataFrame with columns A, B and timestamp rows, all 0 or all NaN.
I'd then add initial values and go over this data, calculating each new row from the row before, say row[A][t] = row[A][t-1] + 1 or so.
I'm currently using the code below, but I feel it's kind of ugly, and there must be a way to do this with a DataFrame directly, or just a better way in general. Note: I'm using Python 2.7.
    import datetime as dt
    import pandas as pd
    import scipy as s

    if __name__ == '__main__':
        base = dt.datetime.today().date()
        dates = [base - dt.timedelta(days=x) for x in range(0, 10)]
        dates.sort()

        valdict = {}
        symbols = ['A', 'B', 'C']
        for symb in symbols:
            valdict[symb] = pd.Series(s.zeros(len(dates)), dates)

        for thedate in dates:
            if thedate > dates[0]:
                for symb in valdict:
                    valdict[symb][thedate] = 1 + valdict[symb][thedate - dt.timedelta(days=1)]

        print valdict
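For what it's worth, because the toy recurrence above is just "previous value plus one", the same result can be sketched on a single DataFrame with a cumulative sum. This is only a restatement of the example, not a general recipe for arbitrary recurrences:

    import pandas as pd

    # Sketch only: valid because row[t] = row[t-1] + 1 reduces to a cumulative count.
    dates = pd.date_range(end=pd.Timestamp.today().normalize(), periods=10)
    df = pd.DataFrame(0, index=dates, columns=['A', 'B', 'C'])
    df.iloc[1:] = 1      # each row adds 1 to the previous one
    df = df.cumsum()     # rows become 0, 1, 2, ... down each column
    print(df)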
TL;DR (just read the advice right below)
Most answers here will tell you how to create an empty DataFrame and fill it out, but no one will tell you that it is a bad thing to do.
Here is my advice: Accumulate data in a list, not a DataFrame.
Use a list to collect your data, then initialise a DataFrame when you are ready. Either a list-of-lists or list-of-dicts format will work; pd.DataFrame accepts both.
    data = []
    for a, b, c in some_function_that_yields_data():
        data.append([a, b, c])

    df = pd.DataFrame(data, columns=['A', 'B', 'C'])
Pros of this approach:
- It is always cheaper to append to a list and create a DataFrame in one go than it is to create an empty DataFrame (or one of NaNs) and append to it over and over again.
- Lists also take up less memory and are a much lighter data structure to work with, append to, and remove from (if needed).
- dtypes are automatically inferred (rather than object being assigned to every column); see the sketch after this list.
- A RangeIndex is automatically created for your data, instead of you having to take care to assign the correct index to the row you are appending at each iteration.
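As a quick illustration of the last two points, here is a self-contained sketch; some_function_that_yields_data below is a hypothetical stand-in, only there so the example runs:

    import pandas as pd

    # Hypothetical stand-in generator, purely for illustration.
    def some_function_that_yields_data():
        for i in range(5):
            yield i, i * 2.0, 'xyz'

    data = []
    for a, b, c in some_function_that_yields_data():
        data.append([a, b, c])

    df = pd.DataFrame(data, columns=['A', 'B', 'C'])
    print(df.dtypes)   # A int64, B float64, C object -- inferred, not all object
    print(df.index)    # RangeIndex(start=0, stop=5, step=1)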
If you aren't convinced yet, this is also mentioned in the documentation:
Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once.
If your data arrives as many smaller DataFrames rather than individual rows, that's fine: you can still do this in linear time by growing or creating a Python list of the smaller DataFrames, then calling pd.concat on it.
    small_dfs = []
    for small_df in some_function_that_yields_dataframes():
        small_dfs.append(small_df)

    large_df = pd.concat(small_dfs, ignore_index=True)
or, more concisely:
    large_df = pd.concat(
        list(some_function_that_yields_dataframes()), ignore_index=True)
append or concat inside a loop

Here is the biggest mistake I've seen from beginners:
    df = pd.DataFrame(columns=['A', 'B', 'C'])
    for a, b, c in some_function_that_yields_data():
        df = df.append({'A': a, 'B': b, 'C': c}, ignore_index=True)  # yuck
        # or similarly,
        # df = pd.concat([df, pd.Series({'A': a, 'B': b, 'C': c})], ignore_index=True)
Memory is re-allocated for every append or concat operation. Couple this with a loop and you have a quadratic-complexity operation.
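Here is a rough, self-contained timing sketch of that quadratic behaviour; the row count and toy values are arbitrary, and the absolute numbers will vary by machine and pandas version:

    import time
    import pandas as pd

    N = 5000

    # Growing a DataFrame inside the loop: every concat copies all existing rows.
    start = time.perf_counter()
    df = pd.DataFrame(columns=['A', 'B', 'C'])
    for i in range(N):
        df = pd.concat([df, pd.DataFrame([{'A': i, 'B': i, 'C': i}])],
                       ignore_index=True)
    print('concat inside the loop:', time.perf_counter() - start)

    # Accumulating in a list and building the DataFrame once at the end.
    start = time.perf_counter()
    rows = []
    for i in range(N):
        rows.append({'A': i, 'B': i, 'C': i})
    df = pd.DataFrame(rows)
    print('list, then DataFrame:  ', time.perf_counter() - start)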
The other mistake associated with df.append is that users tend to forget that append is not an in-place operation, so the result must be assigned back. You also have to worry about the dtypes:
    df = pd.DataFrame(columns=['A', 'B', 'C'])
    df = df.append({'A': 1, 'B': 12.3, 'C': 'xyz'}, ignore_index=True)

    df.dtypes
    A     object   # yuck!
    B    float64
    C     object
    dtype: object
Dealing with object columns is never a good thing, because pandas cannot vectorize operations on those columns. You will need to do this to fix it:
    df.infer_objects().dtypes
    A      int64
    B    float64
    C     object
    dtype: object
loc inside a loop

I have also seen loc used to append to a DataFrame that was created empty:
    df = pd.DataFrame(columns=['A', 'B', 'C'])
    for a, b, c in some_function_that_yields_data():
        df.loc[len(df)] = [a, b, c]
As before, you have not pre-allocated the amount of memory you need each time, so the memory is re-grown each time you create a new row. It's just as bad as append, and even more ugly.
And then, there's creating a DataFrame of NaNs, and all the caveats associated therewith.
    df = pd.DataFrame(columns=['A', 'B', 'C'], index=range(5))
    df
         A    B    C
    0  NaN  NaN  NaN
    1  NaN  NaN  NaN
    2  NaN  NaN  NaN
    3  NaN  NaN  NaN
    4  NaN  NaN  NaN
It creates a DataFrame of object columns, like the others.
    df.dtypes
    A    object  # you DON'T want this
    B    object
    C    object
    dtype: object
Appending still has all the same issues as the methods above.
    for i, (a, b, c) in enumerate(some_function_that_yields_data()):
        df.iloc[i] = [a, b, c]
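And even after the loop finishes, the columns stay object, as a quick sketch shows (some_function_that_yields_data is again a hypothetical stand-in):

    import pandas as pd

    # Hypothetical stand-in generator, purely for illustration.
    def some_function_that_yields_data():
        for i in range(5):
            yield i, i * 2.0, 'xyz'

    df = pd.DataFrame(columns=['A', 'B', 'C'], index=range(5))
    for i, (a, b, c) in enumerate(some_function_that_yields_data()):
        df.iloc[i] = [a, b, c]

    print(df.dtypes)   # still object for every column, despite the numeric data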
Timing these methods is the fastest way to see just how much they differ in terms of their memory and utility.
Benchmarking code for reference.
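The benchmark code itself is not reproduced here; a minimal sketch of such a benchmark, assuming the third-party perfplot package is installed (the kernels and size range below are arbitrary choices), could look like this:

    import pandas as pd
    import perfplot  # assumption: third-party package, pip install perfplot

    def list_then_dataframe(n):
        rows = []
        for i in range(n):
            rows.append({'A': i, 'B': i, 'C': i})
        return pd.DataFrame(rows)

    def concat_in_loop(n):
        df = pd.DataFrame(columns=['A', 'B', 'C'])
        for i in range(n):
            df = pd.concat([df, pd.DataFrame([{'A': i, 'B': i, 'C': i}])],
                           ignore_index=True)
        return df

    perfplot.show(
        setup=lambda n: n,
        kernels=[list_then_dataframe, concat_in_loop],
        labels=['list then DataFrame', 'concat in a loop'],
        n_range=[2 ** k for k in range(2, 12)],
        xlabel='number of rows',
        equality_check=None,  # results may differ in dtype, so skip the comparison
    )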