I've got a Python method which needs to collect a large amount of data from an API, format it as CSV, compress it, and stream the result back.
I've been Googling and every solution I can find either requires writing to a temp file or keeping the whole archive in memory.
Keeping it all in memory is definitely not an option, as I'd hit an OOM pretty rapidly. Writing to a temp file has a slew of problems associated with it (this box only uses disk for logs at the moment, a much longer lead time before the download starts, file cleanup issues, etc.). Not to mention the fact that it's just nasty.
I'm looking for a library that will allow me to do something like...
C = Compressor(outputstream)
C.BeginFile('Data.csv')
for D in Api.StreamResults():
    C.Write(D)
C.CloseFile()
C.Close()
In other words, something that will be writing to the output stream as I write data in.
I've managed to do this in .NET and PHP, but I have no idea how to approach it in Python.
To put things into perspective: by "lots" of data, I mean I need to be able to handle up to ~10 GB of raw plaintext data. This is part of an export/dump process for a big data system.
As the gzip module documentation states, you can pass a file-like object to the GzipFile constructor.
Since Python is duck-typed, you're free to implement your own file-like stream, like so:
import sys
from gzip import GzipFile

class MyStream(object):
    def write(self, data):
        # Receive each compressed chunk and forward it to your real output stream.
        sys.stdout.buffer.write(data)  # stdout's binary buffer, for example

gz = GzipFile(fileobj=MyStream(), mode='w')
gz.write(b"something")  # GzipFile expects bytes
gz.close()  # flushes the compressor and writes the gzip trailer