Importing Stanford NER Tagger in Google Colab

I am having some issues trying to import the Stanford NER Tagger for NER in Google Colab. Here is my code (portions of it were taken from other posts here):

import os

def install_java():
  # Install OpenJDK 8 and point JAVA_HOME at it (the Stanford tools need a JVM)
  !apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
  os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
  !java -version

install_java()

!pip install StanfordCoreNLP
from stanfordcorenlp import StanfordCoreNLP
nlp = StanfordCoreNLP('stanford-corenlp', lang='en', memory='4g')

The error I am getting points to the last line of code and says:

OSError: stanford-corenlp is not a directory.

Any help would be great!

Edit: Here is the code that ended up working for me. For the files passed to StanfordNERTagger, upload them to Colab and use their paths; doing the same for the CoreNLP problem I originally asked about also worked (a sketch of that follows the code below).

from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize

st = StanfordNERTagger('/content/english.muc.7class.distsim.crf.ser.gz',
                       '/content/stanford-ner.jar',
                       encoding='utf-8')

text = 'While in France, Christine Lagarde discussed short-term stimulus efforts in a recent interview with the Wall Street Journal.'

tokenized_text = word_tokenize(text)
classified_text = st.tag(tokenized_text)

print(classified_text)
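
For the CoreNLP problem I originally asked about, the same idea applies: download and unzip the full CoreNLP distribution, then pass the unzipped directory to StanfordCoreNLP instead of the bare name 'stanford-corenlp'. Here is a rough sketch; the release URL and folder name are assumptions, so substitute whichever CoreNLP zip you actually download:

!wget 'https://nlp.stanford.edu/software/stanford-corenlp-full-2018-10-05.zip'
!unzip stanford-corenlp-full-2018-10-05.zip

from stanfordcorenlp import StanfordCoreNLP

# Pass the unzipped CoreNLP directory (name assumed above), not a bare package name
nlp = StanfordCoreNLP('/content/stanford-corenlp-full-2018-10-05', lang='en', memory='4g')
print(nlp.ner('While in France, Christine Lagarde spoke to the Wall Street Journal.'))
nlp.close()  # shut down the background Java server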
1 Answer

The following code downloads all the required files and also sets up the environment:

from nltk.tag.stanford import StanfordNERTagger
from nltk.tokenize import word_tokenize
import nltk

# Download and unzip the Stanford NER distribution (jar + pretrained classifiers)
!wget 'https://nlp.stanford.edu/software/stanford-ner-2018-10-16.zip'
!unzip stanford-ner-2018-10-16.zip

# Tokenizer models used by word_tokenize
nltk.download('punkt')

# Point the tagger at the unzipped classifier and jar
st = StanfordNERTagger('/content/stanford-ner-2018-10-16/classifiers/english.all.3class.distsim.crf.ser.gz',
                       '/content/stanford-ner-2018-10-16/stanford-ner.jar',
                       encoding='utf-8')

text = 'While in France, Christine Lagarde discussed short-term stimulus efforts in a recent interview with the Wall Street Journal.'

tokenized_text = word_tokenize(text)
classified_text = st.tag(tokenized_text)

print(classified_text)
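
The classified_text above is a list of (token, tag) pairs. If you want whole entity spans rather than per-token tags, one simple post-processing sketch is to merge consecutive tokens that share the same non-'O' tag:

from itertools import groupby

def group_entities(tagged_tokens):
    # Merge runs of consecutive tokens carrying the same non-'O' tag
    # into ('entity text', 'TAG') pairs.
    entities = []
    for tag, run in groupby(tagged_tokens, key=lambda pair: pair[1]):
        if tag != 'O':
            entities.append((' '.join(token for token, _ in run), tag))
    return entities

print(group_entities(classified_text))
# With the 3-class model this should give spans roughly like
# ('France', 'LOCATION'), ('Christine Lagarde', 'PERSON'), ('Wall Street Journal', 'ORGANIZATION').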