
POS tagging in German

Tags: python, nlp, nltk

I am using NLTK to extract nouns from a text string, starting with the following command:

tagged_text = nltk.pos_tag(nltk.Text(nltk.word_tokenize(some_string)))

It works fine in English. Is there an easy way to make it work for German as well?

(I have no experience with natural language processing, but I have managed to use the Python NLTK library, which has been great so far.)
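For reference, the noun extraction that follows the tagging looks roughly like this (a sketch using the default Penn Treebank tagset, where noun tags start with NN):

import nltk

# requires nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')
some_string = "The quick brown fox jumps over the lazy dog"
tagged_text = nltk.pos_tag(nltk.word_tokenize(some_string))

# keep only the tokens tagged as nouns (NN, NNS, NNP, NNPS)
nouns = [word for word, tag in tagged_text if tag.startswith('NN')]
print(nouns)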

asked Oct 28 '09 by Johannes Meier

People also ask

What do you mean by POS tagging?

Part-of-speech (POS) tagging is a common natural language processing task: categorizing the words in a text (corpus) according to a particular part of speech, depending on each word's definition and its context.

What is POS tagger give examples?

POS tagging in NLTK marks up each word in a text with a particular part of speech, based on the word's definition and context. Some of the tags NLTK uses are CC, CD, EX, JJ, MD, NNP, PDT, PRP$ and TO. A POS tagger is used to assign this grammatical information to each word of a sentence.
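NLTK can also print what these abbreviations mean (a small sketch, assuming the 'tagsets' data package has been downloaded):

import nltk

# print the definition and examples for some of the Penn Treebank tags listed above
# (requires nltk.download('tagsets'))
for t in ['CC', 'CD', 'JJ', 'MD', 'NNP', 'PRP$']:
    nltk.help.upenn_tagset(t)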

How do you tag POS?

POS tagging can be framed as finding the sequence of tags that is most likely to have generated a given word sequence. This process can be modelled with a Hidden Markov Model (HMM), where the tags are the hidden states that produce the observable output, i.e. the words.
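As an illustration of the HMM approach (a sketch only, trained on the small English treebank sample that ships with NLTK rather than on German data):

from nltk.corpus import treebank
from nltk.tag import hmm

# train a supervised HMM tagger on a tagged corpus (requires nltk.download('treebank'))
train_sents = treebank.tagged_sents()[:3000]
hmm_tagger = hmm.HiddenMarkovModelTrainer().train_supervised(train_sents)

# tag a held-out sentence from the same corpus
print(hmm_tagger.tag(treebank.sents()[3010]))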

What are the two main methods used for POS tagging?

POS-tagging algorithms fall into two distinct groups: rule-based and stochastic. E. Brill's tagger, one of the first and most widely used English POS taggers, employs rule-based algorithms.
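A sketch of both flavours using NLTK (the fntbl37 template set and the English treebank sample are just convenient defaults here, not specific to German):

from nltk.corpus import treebank
from nltk.tag import UnigramTagger, brill, brill_trainer

train_sents = treebank.tagged_sents()[:3000]

# stochastic baseline tagger, with transformation-based (Brill) rules learned on top
baseline = UnigramTagger(train_sents)
trainer = brill_trainer.BrillTaggerTrainer(baseline, brill.fntbl37(), trace=0)
brill_tagger = trainer.train(train_sents, max_rules=50)

print(brill_tagger.tag(treebank.sents()[3010]))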


5 Answers

Natural language software does its magic by leveraging corpora and the statistics they provide. You'll need to tell nltk about some German corpus to help it tokenize German correctly. I believe the EUROPARL corpus might help get you going.

See nltk.corpus.europarl_raw and this answer for example configuration.
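A minimal sketch of getting at the German Europarl text bundled with NLTK; note that this corpus is raw (untagged) text, so it helps with tokenization and with training your own models rather than serving as a ready-made tagged corpus:

import nltk
from nltk.corpus import europarl_raw

# the data must be fetched once, e.g. nltk.download('europarl_raw')
german = europarl_raw.german
print(german.words()[:15])   # tokenized German words
print(german.sents()[1])     # one tokenized sentence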

Also, consider tagging this question with "nlp".

answered by Mike Atlas


The Pattern library includes a function for parsing German sentences and the result includes the part-of-speech tags. The following is copied from their documentation:

from pattern.de import parse, split
s = parse('Die Katze liegt auf der Matte.')
s = split(s)
print s.sentences[0]

>>>   Sentence('Die/DT/B-NP/O Katze/NN/I-NP/O liegt/VB/B-VP/O'
     'auf/IN/B-PP/B-PNP der/DT/B-NP/I-PNP Matte/NN/I-NP/I-PNP ././O/O')

Update: Another option is spaCy; there is a quick example in this blog article:

import spacy

nlp = spacy.load('de')
doc = nlp(u'Ich bin ein Berliner.')

# show universal pos tags
print(' '.join('{word}/{tag}'.format(word=t.orth_, tag=t.pos_) for t in doc))
# output: Ich/PRON bin/AUX ein/DET Berliner/NOUN ./PUNCT
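
To get back to the original goal of extracting nouns, the same doc can simply be filtered on the universal POS tag (a sketch; with newer spaCy versions the German model is loaded as 'de_core_news_sm' rather than 'de'):

# keep only the nouns from the parsed German sentence
nouns = [t.orth_ for t in doc if t.pos_ in ('NOUN', 'PROPN')]
print(nouns)  # ['Berliner']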
answered by Suzana


Part-of-speech (POS) tagging is very specific to a particular [natural] language. NLTK includes many different taggers, which use distinct techniques to infer the tag of a given token in a given chunk of text. Most (but not all) of these taggers use a statistical model of sorts as the main or sole device to "do the trick". Such taggers require some "training data" upon which to build this statistical representation of the language, and the training data comes in the form of corpora.

The NLTK "distribution" itself includes many of these corpora, as well as a set of "corpus readers" which provide an API to read different types of corpora. I don't know the current state of affairs in NLTK proper, i.e. whether it includes any German corpus. You can, however, locate some free corpora which you'll then need to convert to a format that satisfies the appropriate NLTK corpus reader, and then you can use these to train a POS tagger for the German language.
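A hedged sketch of that last step, assuming you already have a file of tagged German sentences in the simple word/TAG format that NLTK's TaggedCorpusReader expects (the file name below is made up):

import nltk
from nltk.corpus.reader import TaggedCorpusReader

# hypothetical file of pre-tagged German text, one sentence per line, tokens as word/TAG
reader = TaggedCorpusReader('corpora/', 'german_tagged.txt')
train_sents = reader.tagged_sents()

# simple backoff chain: bigram -> unigram -> default tag
t0 = nltk.DefaultTagger('NN')
t1 = nltk.UnigramTagger(train_sents, backoff=t0)
t2 = nltk.BigramTagger(train_sents, backoff=t1)
print(t2.tag('Das ist ein Test .'.split()))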

You can even create your own corpus, but that is a hell of a painstaking job; if you work at a university, you'll have to find ways of bribing and otherwise coercing students to do that for you ;-)

answered by mjv


You could also use the Stanford POS tagger. Below is a recipe I wrote. There are more Python recipes for German NLP that I've compiled, and you can access them at http://htmlpreview.github.io/?https://github.com/alvations/DLTK/blob/master/docs/index.html

#-*- coding: utf8 -*-

import os

def installStanfordTag():
    # download and unpack the Stanford POS tagger if it is not already present
    if not os.path.exists('stanford-postagger-full-2013-06-20'):
        os.system('wget http://nlp.stanford.edu/software/stanford-postagger-full-2013-06-20.zip')
        os.system('unzip stanford-postagger-full-2013-06-20.zip')
    return

def tag(infile):
    # run the tagger shell script with the chosen model on a text file
    # and collect its output lines
    cmd = "./stanford-postagger.sh " + models[m] + " " + infile
    tagout = os.popen(cmd).readlines()
    return [i.strip() for i in tagout]

def taglinebyline(sents):
    # write each sentence to a temporary file and tag it separately
    tagged = []
    for ss in sents:
        os.popen("echo '''" + ss + "''' > stanfordtemp.txt")
        tagged.append(tag('stanfordtemp.txt')[0])
    return tagged

installStanfordTag()
stagdir = './stanford-postagger-full-2013-06-20/'
models = {'fast': 'models/german-fast.tagger',
          'dewac': 'models/german-dewac.tagger',
          'hgc': 'models/german-hgc.tagger'}
os.chdir(stagdir)
print os.getcwd()

m = 'fast'  # It's best to use the fast German tagger if your data is small.

sentences = ['Ich bin schwanger .', 'Ich bin wieder schwanger .', 'Ich verstehe nur Bahnhof .']

tagged_sents = taglinebyline(sentences)  # call the Stanford tagger

for sent in tagged_sents:
    print sent
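
As an aside (not part of the recipe above, and only a sketch): newer NLTK releases also ship a wrapper class for the same Stanford tagger, which avoids the shelling out and temporary files:

from nltk.tag import StanfordPOSTagger

# requires Java; paths are relative to the unzipped stanford-postagger-full-2013-06-20 directory
st = StanfordPOSTagger('models/german-fast.tagger', 'stanford-postagger.jar')
print(st.tag('Ich bin schwanger .'.split()))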
answered by alvas


I have written a blog post about how to convert the annotated German TIGER corpus so it can be used with NLTK. Have a look at it here.
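The gist of that conversion (a rough sketch; the exact file name below is an assumption and refers to the CoNLL09 export of TIGER) is to read the corpus with NLTK's ConllCorpusReader and train a tagger on its tagged sentences:

import nltk
from nltk.corpus.reader import ConllCorpusReader

# adjust the path and file name to wherever your TIGER CoNLL09 export lives
corp = ConllCorpusReader('.', 'tiger_release_aug07.corrected.16012013.conll09',
                         ['ignore', 'words', 'ignore', 'ignore', 'pos'],
                         encoding='utf-8')

train_sents = corp.tagged_sents()
tagger = nltk.UnigramTagger(train_sents, backoff=nltk.DefaultTagger('NN'))
print(tagger.tag('Das Haus ist schön .'.split()))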

answered by Philipp