 

Tokenizing large (>70MB) TXT file using Python NLTK. Concatenation & write data to stream errors

First of all, I am new to Python/NLTK, so my apologies if the question is too basic. I have a large file that I am trying to tokenize, and I get memory errors.

One solution I've read about is to read the file one line at a time, which makes sense; however, when doing that, I get the error cannot concatenate 'str' and 'list' objects. I am not sure why that error is raised, since after reading the file I check its type and it is in fact a string.

I have tried splitting the 70MB file into four smaller ones, and when running those, I get: error: failed to write data to stream.

Finally, when trying a very small sample of the file (100KB or less) with the modified code, I am able to tokenize it.

Any insights into what's happening? Thank you.

# tokenizing the large file one line at a time
import nltk
raw = open(r"X:\MyFile.txt", "r").read()
type(raw)  # str
tokens = ''
for line in raw:
    tokens += nltk.word_tokenize(line)
# TypeError: cannot concatenate 'str' and 'list' objects

The following works with a small file:

import nltk
raw = open(r"X:\MyFile.txt", "r").read()
type(raw)  # str
tokens = nltk.word_tokenize(raw)
asked Mar 24 '12 by Luis Miguel


1 Answer

Problem n°1: you are iterating over the string character by character, because you called read() on the file. If you want to process every line efficiently, simply open the file (don't read it) and iterate over file.readlines(), as follows.

Problem n°2: the word_tokenize function returns a list of tokens, so you were trying to add a list to a str. You first have to turn the list into a string, and then you can concatenate it with another string. I'm going to use the join function to do that. Replace the comma in my code with whatever character you want as glue/separator.

import nltk
f = open(r"X:\MyFile.txt", "r")
tokens = ''
for line in f.readlines():
    tokens += ",".join(nltk.word_tokenize(line))

If instead you need the tokens in a list, simply do:

import nltk
f = open(r"X:\MyFile.txt", "r")
tokens = []
for line in f.readlines():
    tokens += nltk.word_tokenize(line)
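
The same caveat applies here; a sketch under the same assumptions that loops over the file object directly and lets a with-statement close the file:

import nltk

tokens = []
with open(r"X:\MyFile.txt", "r") as f:
    for line in f:  # reads lazily, line by line
        tokens.extend(nltk.word_tokenize(line))  # extend is equivalent to += on a list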

Hope that helps!

answered Oct 13 '22 by luke14free