I'm trying to read a fairly large CSV file with Pandas and split it into two random chunks, one containing 10% of the data and the other the remaining 90%.
Here's my current attempt:
rows = data.index
row_count = len(rows)
random.shuffle(list(rows))
data.reindex(rows)
training_data = data[row_count // 10:]
testing_data = data[:row_count // 10]
For some reason, sklearn throws this error when I try to use one of the resulting DataFrame objects in an SVM classifier:
IndexError: each subindex must be either a slice, an integer, Ellipsis, or newaxis
I think I'm doing it wrong. Is there a better way to do this?
What version of pandas are you using? For me your code works fine (I'm on git master).
Another approach could be:
In [117]: import pandas
In [118]: import numpy as np
In [119]: import random
In [120]: df = pandas.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))
In [121]: rows = random.sample(list(df.index), 10)  # 10 random index labels
In [122]: df_10 = df.loc[rows]   # the 10% sample
In [123]: df_90 = df.drop(rows)  # everything else
Newer versions of pandas (0.16.1 onwards) support this directly via DataFrame.sample: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sample.html
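For example, here is a minimal sketch of the 90/10 split using DataFrame.sample (assuming df holds the data read from the CSV; frac and random_state are documented parameters of sample):

import numpy as np
import pandas as pd

# Toy stand-in for the CSV data.
df = pd.DataFrame(np.random.randn(100, 4), columns=list('ABCD'))

# Draw 10% of the rows at random; random_state makes the split reproducible.
testing_data = df.sample(frac=0.1, random_state=42)

# The remaining 90% is every row whose index label was not sampled.
training_data = df.drop(testing_data.index)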
I have found that np.random.choice(), new in NumPy 1.7.0, works quite well for this. For example, you can pass it the index values from a DataFrame and the integer 10 to select 10 random, uniformly sampled rows.
# replace=False prevents picking the same row twice (np.random.choice samples with replacement by default)
rows = np.random.choice(df.index.values, 10, replace=False)
sampled_df = df.loc[rows]
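As in the earlier answer, the complementary 90% can then be taken with df.drop(rows); since the labels were sampled without replacement, the two pieces partition the DataFrame.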