What is the best tokenizer for processing Korean text?
I have tried the CJKTokenizer in Solr 4.0. It does tokenize, but the accuracy is very low.
POSTECH/K is a Korean morphological analyzer that can tokenize and POS-tag Korean data without much effort. The software reports 90.7% accuracy on the corpus it was trained and tested on (see http://nlp.postech.ac.kr/download/postag_k/9908_cljournal_gblee.pdf).
Its POS tagging achieved 81% accuracy on the Korean data of a multilingual corpus project I've been working on.
However, there's a catch: you have to use Windows to run the software. I have a script to work around that limitation:
#!/bin/bash -x
###############################################################################
## Sejong-Shell is a script to call POSTAG/SEJONG tagger on Unix Machine
## because POSTAG/Sejong is only usable in Korean Microsoft Windows environment
## the original POSTAG/Sejong can be downloaded from
## http://isoft.postech.ac.kr/Course/CS730b/2005/index.html
##
## Sejong-Shell is dependent on WINdows Emulator.
## The WINE program can be downloaded from
## http://www.winehq.org/download/
##
## The shell scripts accepts the input files from one directory and
## outputs the tagged files into another while retaining the filename
###############################################################################
cd <source-file_dir>
# <source-file_dir> is the directory containing the text files that need tagging
for file in `dir -d *`
do
echo $file
sudo cp <source-file_dir>/"$file" <POSTAG-Sejong_dir>/input.txt
# <POSTAG-Sejong_dir> refers to the directory where the pos-tagger is saved
wine start /Unix "$HOME/postagsejong/sjTaggerInteg.exe"
sleep 30
# This is necessary so that the file from the current loop won't overlap
# with the next one; increase the sleep time if the file is large and
# needs more than 30 seconds for POSTAG/Sejong to tag.
sudo cp <POSTAG-Sejong_dir>/output.txt <target-file_dir>/"$file"
# <target-file_dir> is where you want the output files to be stored
done
# Alternatively, instead of the fixed `sleep 30` to prevent the overlap,
# you can pause the loop manually until a keystroke with:
# read -p "Press any key to continue..."
Note that the encoding for POSTECH/K is EUC-KR, so if your data is in UTF-8, you can use the following script to convert it from UTF-8 to EUC-KR.
#!/usr/bin/python
# -*- coding: utf-8 -*-
'''
pre-sejong clean
'''
import codecs
import nltk
import os, sys, re, glob
from nltk.tokenize import RegexpTokenizer
reload(sys)
sys.setdefaultencoding('utf-8')
cwd = './gizaclean_ko' #os.getcwd()
wrd = './presejong_ko'
kr_sent_tokenizer = nltk.RegexpTokenizer(u'[^!?.?!]*[!?."www.*"]')
for infile in glob.glob(os.path.join(cwd, '*.txt')):
    # if infile == './extract_ko/singapore-sling.txt': continue
    # if infile == './extract_ko/ion-orchard.txt': continue
    print infile
    (PATH, FILENAME) = os.path.split(infile)
    reader = open(infile)
    writer = open(os.path.join(wrd, FILENAME).encode('euc-kr'), 'w')
    for line in reader:
        para = []
        para.append(kr_sent_tokenizer.tokenize(unicode(line, 'utf-8').strip()))
        for sent in para[0]:
            # Replace characters that EUC-KR cannot encode.
            newsent = sent.replace(u'\xa0', u' ')
            newsent2 = newsent.replace(u'\xe7', u'c')
            newsent3 = newsent2.replace(u'\xe9', u'e')
            newsent4 = newsent3.replace(u'\u2013', u'-')
            newsent5 = newsent4.replace(u'\xa9', u'(c)')
            newsent6 = newsent5.encode('euc-kr').strip()
            print newsent6
            writer.write(newsent6 + '\n')
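If you only need the encoding conversion (without the sentence tokenization and character cleanup above), `iconv` can do it in one line. A minimal sketch; the file paths and sample text are placeholders:

```shell
# Convert a UTF-8 file to EUC-KR with iconv. Characters outside EUC-KR
# make iconv fail; add -c to silently drop them instead.
printf '한국어 텍스트\n' > /tmp/utf8_sample.txt            # hypothetical input
iconv -f UTF-8 -t EUC-KR /tmp/utf8_sample.txt > /tmp/euckr_sample.txt
# Round-trip back to UTF-8 to check the conversion:
iconv -f EUC-KR -t UTF-8 /tmp/euckr_sample.txt
```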
(Source for sejong-shell: Liling Tan. 2011. Building the foundation text for Nanyang Technological University - Multilingual Corpus (NTU-MC). Final year project. Singapore: Nanyang Technological University. p. 44.)