I am using Python 3 on Ubuntu 14.04 and am running the Stanford POS Tagger on a corpus of 67 raw text articles. The redacted Python script is as follows:
from nltk.tag.stanford import POSTagger

with open('the_file.txt', 'r') as file:
    g = file.readlines()

stan = []
english_postagger = POSTagger('models/english-bidirectional-distsim.tagger', 'stanford-postagger.jar')
for line in g:
    stan.append(english_postagger.tag(tokenize_fast(line)))
After several iterations of this loop, I get the following error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at edu.stanford.nlp.sequences.ExactBestSequenceFinder.bestSequence(ExactBestSequenceFinder.java:109)
at edu.stanford.nlp.sequences.ExactBestSequenceFinder.bestSequence(ExactBestSequenceFinder.java:31)
at edu.stanford.nlp.tagger.maxent.TestSentence.runTagInference(TestSentence.java:322)
at edu.stanford.nlp.tagger.maxent.TestSentence.testTagInference(TestSentence.java:312)
at edu.stanford.nlp.tagger.maxent.TestSentence.tagSentence(TestSentence.java:135)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.tagSentence(MaxentTagger.java:998)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.tagCoreLabelsOrHasWords(MaxentTagger.java:1788)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.tagAndOutputSentence(MaxentTagger.java:1798)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.runTagger(MaxentTagger.java:1709)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.runTagger(MaxentTagger.java:1770)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.runTagger(MaxentTagger.java:1543)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.runTagger(MaxentTagger.java:1499)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.main(MaxentTagger.java:1842)
I have also run the Stanford POS Tagger from the command line as:
java -mx300m -classpath stanford-postagger.jar edu.stanford.nlp.tagger.maxent.MaxentTagger -model models/wsj-0-18-bidirectional-distsim.tagger -textFile sample-input.txt > sample-tagged.txt
with a similar error. I even gave Java 2 GB of memory, and still no luck.
Any thoughts/ideas or hacky type solutions are greatly welcomed!
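For reference, this is how I have been trying to hand the JVM more memory from inside Python. It is only a sketch: I am assuming my NLTK version's tagger constructor accepts a java_options keyword, with nltk.internals.config_java as the fallback for setting the same flag globally.

from nltk.tag.stanford import POSTagger
from nltk.internals import config_java

# Raise the heap for every Java process NLTK launches (the default is only -mx1000m).
config_java(options='-Xmx2g')

# Assumption: this NLTK version also accepts java_options directly;
# if it does not, the config_java() call above is the fallback.
english_postagger = POSTagger('models/english-bidirectional-distsim.tagger',
                              'stanford-postagger.jar',
                              java_options='-Xmx2g')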
Well spotted @nsanglar, so I tried:
java -Xmx2g -classpath stanford-postagger.jar edu.stanford.nlp.tagger.maxent.MaxentTagger -model models/wsj-0-18-bidirectional-distsim.tagger -textFile raw_text.txt > sample-tagged.txt
I get an error log message, with the following header:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 283639808 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
# Out of Memory Error (os_linux.cpp:2798), pid=25677, tid=140571167794944
# JRE version: OpenJDK Runtime Environment (7.0_65-b32) (build 1.7.0_65-b32)
# Java VM: OpenJDK 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 2.5.2
# Distribution: Ubuntu 14.04 LTS, package 7u65-2.5.2-3~14.04
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
Well, it turns out it was a RAM issue: I simply did not have enough memory to execute the command. Running the tagger on a server did the trick.
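If moving to a machine with more RAM is not an option, tagging the corpus in small batches should also keep any single Java call from seeing too much text at once. A rough sketch, assuming your NLTK version still exposes batch_tag (newer releases call it tag_sents) and reusing the tokenize_fast helper from the question:

from nltk.tag.stanford import POSTagger

english_postagger = POSTagger('models/english-bidirectional-distsim.tagger',
                              'stanford-postagger.jar')

with open('the_file.txt', 'r') as f:
    # tokenize_fast is the question's own tokenizer helper
    sentences = [tokenize_fast(line) for line in f if line.strip()]

stan = []
chunk = 50  # tag 50 sentences per Java invocation instead of the whole corpus
for i in range(0, len(sentences), chunk):
    # older NLTK: batch_tag; newer NLTK: tag_sents
    stan.extend(english_postagger.batch_tag(sentences[i:i + chunk]))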