I have a large CSV file (3.5 GB) that I want to read using pandas.
This is my code:
import pandas as pd
tp = pd.read_csv('train_2011_2012_2013.csv', sep=';', iterator=True, chunksize=20000000, low_memory=False)
df = pd.concat(tp, ignore_index=True)
I get this error:
pandas/parser.pyx in pandas.parser.TextReader.read (pandas/parser.c:8771)()
pandas/parser.pyx in pandas.parser.TextReader._read_rows (pandas/parser.c:9731)()
pandas/parser.pyx in pandas.parser.TextReader._tokenize_rows (pandas/parser.c:9602)()
pandas/parser.pyx in pandas.parser.raise_parser_error (pandas/parser.c:23325)()
CParserError: Error tokenizing data. C error: out of memory
My machine has 8 GB of RAM.
Try this:
import pandas as pd

# Read the file in small chunks, collect them, then concatenate once.
mylist = []
for chunk in pd.read_csv('train_2011_2012_2013.csv', sep=';', chunksize=20000):
    mylist.append(chunk)

big_data = pd.concat(mylist, axis=0)
del mylist  # free the per-chunk list once the full DataFrame exists
This error can also be caused by chunksize=20000000; decreasing it fixed the issue in my case. ℕʘʘḆḽḘ's answer also uses a much smaller chunksize, which is likely what did the trick.
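Note that concatenating every chunk still builds the full 3.5 GB DataFrame in memory, which may not fit in 8 GB of RAM even with a small chunksize. If that happens, a minimal sketch of processing each chunk as it is read, so peak memory stays at roughly one chunk (the summation here is just a placeholder aggregation, not something from the question; replace it with whatever you actually need):

import pandas as pd

# Stream the file in modest chunks and fold each one into a running
# result instead of holding all rows in memory at once.
totals = None
for chunk in pd.read_csv('train_2011_2012_2013.csv', sep=';', chunksize=20000):
    # Placeholder per-chunk work: sum the numeric columns.
    partial = chunk.select_dtypes('number').sum()
    totals = partial if totals is None else totals.add(partial, fill_value=0)

print(totals)

If you truly need the whole table in memory, reducing its footprint up front (e.g. passing usecols to keep only needed columns, or dtype to downcast numeric columns) can also help.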