
Split large files using python

Tags: python, split

I'm having trouble splitting large files (say, around 10GB). The basic idea is simple: read the lines, and group every, say, 40,000 lines into one file. But there are two ways of "reading" files.

1) The first is to read the WHOLE file at once and turn it into a LIST. But this requires loading the WHOLE file into memory, which is painful for a file this large. (I think I've asked similar questions before.) In Python, the approaches to reading a WHOLE file at once that I've tried include:

input1=f.readlines()

input1 = commands.getoutput('zcat ' + file).splitlines(True)

input1 = subprocess.Popen(["cat", file],
                          stdout=subprocess.PIPE, bufsize=1)

Then I can easily group 40,000 lines into one file with slices like list[40000:80000] or list[80000:120000]. The advantage of using a list is that we can easily point to specific lines.
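
In other words, something like this sketch, which still loads everything into memory (the file names here are just placeholders):

CHUNK = 40000

with open("myinput.txt") as f:
    lines = f.readlines()  # loads the WHOLE file into memory

# group every CHUNK lines into their own output file
for n in range(0, len(lines), CHUNK):
    with open("output%d.txt" % (n // CHUNK), "w") as out:
        out.writelines(lines[n:n + CHUNK])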

2) The second way is to read line by line, processing each line as it is read. Lines that have already been read aren't kept in memory. Examples include:

f = gzip.open(file)
for line in f:
    ...  # blablabla

or

for line in fileinput.FileInput(fileName):

I'm sure that with gzip.open, this f is NOT a list but a file object, and it seems we can only process it line by line; so how can I carry out this "split" job? How can I point to specific lines of the file object?
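
For the record, itertools.islice can pull the next N lines off any file object without materializing the whole file as a list. A minimal sketch of the split done this way, assuming a gzipped input file (the names are placeholders):

import gzip
from itertools import islice

LINES_PER_FILE = 40000

with gzip.open("myinput.gz", "rt") as f:
    part = 0
    while True:
        chunk = list(islice(f, LINES_PER_FILE))  # next 40000 lines, or fewer at EOF
        if not chunk:
            break
        with open("output%d.txt" % part, "w") as out:
            out.writelines(chunk)
        part += 1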

Thanks

Asked by LookIntoEast, Nov 11 '11

People also ask

How do you split a large file in Python?

Step 1: Find the number of rows in the file (using pandas or plain Python). Step 2: Have the user input the number of lines per file as a min/max range and generate a random number within it; for an equal split, use the same number for max and min.

How do you split a large file into smaller parts in Python?

To split a big binary file into multiple files, read the file in chunks of the size you want each piece to be, write each chunk to its own file, and repeat until you reach the end of the original file.
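
A minimal sketch of that chunked binary approach (the chunk size and file names are assumptions, not from the original):

CHUNK_SIZE = 100 * 1024 * 1024  # 100 MB per piece (placeholder size)

with open("big.bin", "rb") as src:
    part = 0
    while True:
        chunk = src.read(CHUNK_SIZE)  # read one chunk of the original file
        if not chunk:                 # b"" means end of file
            break
        with open("big.bin.%03d" % part, "wb") as dst:
            dst.write(chunk)
        part += 1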


5 Answers

The best solution I have found is the filesplit library (https://pypi.org/project/filesplit/).
You only need to specify the input file, the output folder, and the desired size in bytes for the output files; the library does the rest.

from fsplit.filesplit import Filesplit

fs = Filesplit()

# optional callback, invoked for each output file with its path and size
def split_cb(f, s):
    print("file: {0}, size: {1}".format(f, s))

fs.split(file="/path/to/source/file", split_size=900000,
         output_dir="/pathto/output/dir", callback=split_cb)
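
Note that newer releases of filesplit have reorganized the package, so the import path above may differ depending on the installed version; check the project page for the current API.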
Answered by rafaoc, Nov 10 '22


NUM_OF_LINES = 40000
filename = 'myinput.txt'
with open(filename) as fin:
    fout = open("output0.txt", "w")
    for i, line in enumerate(fin):
        fout.write(line)
        if (i + 1) % NUM_OF_LINES == 0:
            fout.close()
            # integer division keeps the numbering sequential in Python 3
            fout = open("output%d.txt" % ((i + 1) // NUM_OF_LINES), "w")

    fout.close()
Answered by yurib, Nov 10 '22


If there's nothing special about having a specific number of lines in each file, the readlines() function also accepts a size 'hint' parameter that behaves like this:

If given an optional parameter sizehint, it reads that many bytes from the file and enough more to complete a line, and returns the lines from that. This is often used to allow efficient reading of a large file by lines, but without having to load the entire file in memory. Only complete lines will be returned.

...so you could write that code something like this:

# assume that an average line is about 80 chars long, and that we want about
# 40K lines in each file.

SIZE_HINT = 80 * 40000

fileNumber = 0
with open("inputFile.txt", "rt") as f:
    while True:
        buf = f.readlines(SIZE_HINT)
        if not buf:
            # we've read the entire file in, so we're done.
            break
        outFile = open("outFile%d.txt" % fileNumber, "wt")
        outFile.writelines(buf)  # buf is a list of lines, so use writelines()
        outFile.close()
        fileNumber += 1
Answered by bgporter, Nov 10 '22


For a 10GB file, the second approach is clearly the way to go. Here is an outline of what you need to do (a code sketch follows the list):

  1. Open the input file.
  2. Open the first output file.
  3. Read one line from the input file and write it to the output file.
  4. Maintain a count of how many lines you've written to the current output file; as soon as it reaches 40000, close the output file, and open the next one.
  5. Repeat steps 3-4 until you've reached the end of the input file.
  6. Close both files.
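
A minimal sketch of that outline (the file names and line count are placeholders):

LINES_PER_FILE = 40000

with open("input.txt") as fin:                      # step 1
    part = 0
    fout = open("part%d.txt" % part, "w")           # step 2
    count = 0
    for line in fin:                                # steps 3 and 5
        fout.write(line)
        count += 1
        if count == LINES_PER_FILE:                 # step 4
            fout.close()
            part += 1
            fout = open("part%d.txt" % part, "w")
            count = 0
    fout.close()                                    # step 6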
Answered by NPE, Nov 10 '22


import fileinput

chunk_size = 40000
fout = None
for i, line in enumerate(fileinput.FileInput(filename)):
    if i % chunk_size == 0:
        if fout:
            fout.close()
        # integer division so the names are output0.txt, output1.txt, ...
        fout = open('output%d.txt' % (i // chunk_size), 'w')
    fout.write(line)
if fout:
    fout.close()
Answered by Jason Sundram, Nov 10 '22