I want to read a huge text file that contains a list of lists of integers. Currently I'm doing the following:
G = []
with open("test.txt", 'r') as f:
    for line in f:
        G.append(list(map(int, line.split())))
However, it takes about 17 seconds (measured via timeit). Is there any way to reduce this time? Maybe there is a way not to use map.
numpy has the functions loadtxt and genfromtxt, but neither is particularly fast. One of the fastest text readers available in a widely distributed library is the read_csv function in pandas (http://pandas.pydata.org/). On my computer, reading 5 million lines containing two integers per line takes about 46 seconds with numpy.loadtxt, 26 seconds with numpy.genfromtxt, and a little over 1 second with pandas.read_csv.
Here's the session showing the result. (This is on Linux, Ubuntu 12.04 64 bit. You can't see it here, but after each reading of the file, the disk cache was cleared by running sync; echo 3 > /proc/sys/vm/drop_caches in a separate shell.)
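For reference, a file comparable to junk.dat can be generated with a short sketch like the one below. The actual values in the timed file aren't shown, so the random integers here are only an assumption; what matters for the comparison is the shape of 5 million lines with two integers each.

import numpy as np

# Write 5 million rows of two integers, space-separated, to junk.dat.
# The values are arbitrary; only the file's size and shape matter for the timing.
data = np.random.randint(0, 1000000, size=(5000000, 2))
np.savetxt('junk.dat', data, fmt='%d')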
In [1]: import pandas as pd
In [2]: %timeit -n1 -r1 loadtxt('junk.dat')
1 loops, best of 1: 46.4 s per loop
In [3]: %timeit -n1 -r1 genfromtxt('junk.dat')
1 loops, best of 1: 26 s per loop
In [4]: %timeit -n1 -r1 pd.read_csv('junk.dat', sep=' ', header=None)
1 loops, best of 1: 1.12 s per loop
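If you need the same list-of-lists structure as G in the question, the DataFrame that read_csv returns can be converted back. A minimal sketch, assuming the whitespace-separated test.txt from the question:

import pandas as pd

# Parse the file quickly with pandas, then convert to a plain list of lists of ints.
df = pd.read_csv("test.txt", sep=' ', header=None)
G = df.values.tolist()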