 

What is the fastest way to read in a large data file of text columns?

Tags: python, c, io, dataset

I have a data file of almost 9 million lines (soon to be more than 500 million lines) and I'm looking for the fastest way to read it in. The five aligned columns are padded and separated by spaces, so I know where on each line to look for the two fields that I want. My Python routine takes 45 secs:

import time

start = time.time()
filename = 'test.txt'    # space-delimited, aligned columns
trans = []
numax = 0.0
for line in open(filename, 'r'):
    nu = float(line[-23:-11]); S = float(line[-10:-1])
    if nu > numax: numax = nu
    trans.append((nu, S))
end = time.time()
print(len(trans), 'transitions read in %.1f secs' % (end - start))
print('numax =', numax)

whereas the routine I've come up with in C is a more pleasing 4 secs:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BPL 47                  /* bytes per line, including the newline */
#define FILENAME "test.txt"
#define NTRANS 8858226          /* number of lines, known in advance */

int main(void) {
  size_t num;
  unsigned long i;
  char buf[BPL];
  char* sp;
  double *nu, *S;
  double numax;
  FILE *fp;
  time_t start,end;

  nu = malloc(NTRANS * sizeof(double));
  S = malloc(NTRANS * sizeof(double));
  if (nu == NULL || S == NULL) {
    fprintf(stderr, "malloc failed\n");
    free(nu); free(S);
    return EXIT_FAILURE;
  }

  start = time(NULL);
  if ((fp=fopen(FILENAME,"rb"))!=NULL) {
    i=0;
    numax=0.;
    do {
      if (i==NTRANS) {break;}
      num = fread(buf, 1, BPL, fp);          /* read one fixed-width line */
      buf[BPL-1]='\0';                       /* overwrite the newline */
      sp = &buf[BPL-10]; S[i] = atof(sp);    /* last column */
      buf[BPL-11]='\0';                      /* terminate the nu field */
      sp = &buf[BPL-23]; nu[i] = atof(sp);
      if (nu[i]>numax) {numax=nu[i];}
      ++i;
    } while (num == BPL);
    fclose(fp);
    end = time(NULL);
    fprintf(stdout, "%d lines read; numax = %12.6f\n", (int)i, numax);
    fprintf(stdout, "that took %.1f secs\n", difftime(end,start));
  } else {
    fprintf(stderr, "Error opening file %s\n", FILENAME);
    free(nu); free(S);
    return EXIT_FAILURE;
  }

  free(nu); free(S);
  return EXIT_SUCCESS;
}

Solutions in Fortran, C++ and Java take intermediate amounts of time (27 secs, 20 secs and 8 secs respectively). My question is: have I made any outrageous blunders in the above (particularly in the C code)? And is there any way to speed up the Python routine? I quickly realised that storing my data in a list of tuples was better than instantiating a class for each entry.
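For illustration, here is a minimal sketch of the two storage strategies (the Transition class is a hypothetical stand-in, not my actual code):

import time

class Transition(object):
    # Hypothetical per-entry class: each instance carries its own
    # attribute dict, which costs time and memory for every line read.
    def __init__(self, nu, S):
        self.nu = nu
        self.S = S

N = 1000000
t0 = time.time()
as_objects = [Transition(1.0, 2.0) for _ in range(N)]
t1 = time.time()
as_tuples = [(1.0, 2.0) for _ in range(N)]
t2 = time.time()
print('objects: %.2f secs, tuples: %.2f secs' % (t1 - t0, t2 - t1))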

asked Dec 28 '22 by Chris

1 Answer

Some points:

  1. Your C routine is cheating: it is being tipped off with the file size, and is pre-allocating ...

  2. Python: consider using array.array('d') ... one each for S and nu. Then try pre-allocation (see the sketch after this list).

  3. Python: write your routine as a function and call it -- accessing function-local variables is rather faster than accessing module-global variables.
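A minimal sketch combining points 2 and 3 (the line count 8858226 is taken from the C code's NTRANS and is an assumption about the file; the fixed-width slices are copied from the question):

from array import array
import time

def read_transitions(filename, ntrans):
    # Point 2: two pre-allocated, flat arrays of C doubles,
    # one each for nu and S, instead of a list of tuples.
    nu = array('d', [0.0]) * ntrans
    S = array('d', [0.0]) * ntrans
    numax = 0.0
    i = 0
    # Point 3: everything below is a function-local name,
    # which is faster to look up than a module global.
    for line in open(filename, 'r'):
        if i == ntrans:
            break
        x = float(line[-23:-11])     # same fixed-width slices as before
        S[i] = float(line[-10:-1])
        nu[i] = x
        if x > numax:
            numax = x
        i += 1
    return nu, S, numax, i

start = time.time()
nu, S, numax, n = read_transitions('test.txt', 8858226)
print('%d transitions read in %.1f secs; numax = %g'
      % (n, time.time() - start, numax))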

answered Dec 31 '22 by John Machin