
reportlab low performance

I'm using reportlab to convert a big library (plain text in Russian) into PDF format. When the original file is small enough (say, about 10-50 kB), it works fine. But when I try to convert big texts (above 500 kB), reportlab takes a very long time to finish. Does anyone know what the problem could be?

from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer

styles = getSampleStyleSheet()

BYTES_TO_READ = 10000

def go(text):
    doc = SimpleDocTemplate("output.pdf")
    Story = [Spacer(1, 2*inch)]
    style = styles["Normal"]
    p = Paragraph(text, style)
    Story.append(p)
    doc.build(Story)

def get_text_from_file():
    source_file = open("book.txt", "r")
    text = source_file.read(BYTES_TO_READ)
    source_file.close()
    return text

go(get_text_from_file())

So, when I set the BYTES_TO_READ variable to more than 200-300 thousand bytes (just to see what happens, reading only part of the book rather than the whole thing), it takes a HUGE amount of time.

asked Sep 17 '25 by efi


1 Answer

Let me preface this by saying that I don't have much experience with reportlab. This is just a general suggestion, and it doesn't deal with exactly how you should parse and format the text you read into proper structures; I am simply continuing to use the Paragraph class to write the text.

In terms of performance, I think your problem comes from reading one huge string and passing it to reportlab as a single paragraph. If you think about it, what real paragraph is 500 kB long?
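If you want to see this effect for yourself, here is a quick micro-benchmark sketch; the sizes, the repeated filler text, and the bench.pdf file name are arbitrary choices of mine, not anything from your setup:

import time
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import SimpleDocTemplate, Paragraph

styles = getSampleStyleSheet()

def time_single_paragraph(n_chars):
    # Build a PDF whose entire body is one Paragraph of roughly
    # n_chars characters, and report how long doc.build() takes.
    text = "word " * (n_chars // 5)
    doc = SimpleDocTemplate("bench.pdf")
    start = time.time()
    doc.build([Paragraph(text, styles["Normal"])])
    return time.time() - start

for n in (10000, 50000, 200000):
    print(n, "chars:", round(time_single_paragraph(n), 2), "s")

In my understanding, the build time for a single Paragraph grows much faster than linearly with its length, which is why one 500 kB paragraph is so much slower than many small ones.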

What you would probably want to do is read in smaller chunks, and build up your document:

def go_chunked(limit=500000, chunk=4096):

    doc = SimpleDocTemplate("output.pdf")
    Story = [Spacer(1, 2*inch)]
    style = styles["Normal"]

    written = 0

    with open("book.txt", "r") as source_file:
        while written < limit:
            text = source_file.read(chunk)
            if not text:
                break  # end of file
            # each chunk becomes its own (arbitrary) paragraph
            p = Paragraph(text, style)
            Story.append(p)
            written += len(text)  # count what was actually read

    doc.build(Story)

When processing a total of 500k bytes:

%timeit go_chunked(limit=500000, chunk=4096)
1 loops, best of 3: 1.88 s per loop

%timeit go(get_text_from_file())
1 loops, best of 3: 64.1 s per loop

Again, this is obviously just splitting your text into arbitrary chunk-sized paragraphs, but that is not much different from one huge paragraph. Ultimately, you might want to parse the text you read into a buffer and determine your own paragraph boundaries (a blank-line-based variant is sketched after the timings below), or just split on lines if that is the format of your original source:

def go_lines(limit=500000):

    doc = SimpleDocTemplate("output.pdf")
    Story = [Spacer(1, 2*inch)]
    style = styles["Normal"]

    written = 0

    with open("book.txt", "r") as source_file:
        while written < limit:
            text = source_file.readline()
            if not text:
                break  # end of file
            text = text.strip()
            # one Paragraph per source line
            p = Paragraph(text, style)
            Story.append(p)
            written += len(text)

    doc.build(Story)

Performance:

%timeit go_lines()
1 loops, best of 3: 1.46 s per loop
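If your source instead separates paragraphs with blank lines, you could split on those so each Paragraph matches a real paragraph in the book. This is only a minimal sketch, assuming UTF-8 encoding and blank-line separators, which may not match your file; the escape() call is there because Paragraph interprets &, < and > as markup:

from xml.sax.saxutils import escape

from reportlab.lib.styles import getSampleStyleSheet
from reportlab.lib.units import inch
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer

styles = getSampleStyleSheet()

def go_paragraphs(path="book.txt"):
    doc = SimpleDocTemplate("output.pdf")
    Story = [Spacer(1, 2*inch)]
    style = styles["Normal"]

    # assumption: the file is UTF-8 and uses blank lines between paragraphs
    with open(path, encoding="utf-8") as source_file:
        raw = source_file.read()

    # one Paragraph per blank-line-separated block
    for block in raw.split("\n\n"):
        block = " ".join(block.split())  # collapse internal line breaks
        if block:
            Story.append(Paragraph(escape(block), style))

    doc.build(Story)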
answered Sep 20 '25 by jdi