
Remove diacritics using Go

Tags: unicode, utf-8, go

How can I remove all diacritics from a given UTF-8 encoded string using Go? For example, transform the string "žůžo" => "zuzo". Is there a standard way?

asked Nov 03 '14 by eeq

2 Answers

You can use the libraries described in Text normalization in Go.

Here's an application of those libraries:

// Example derived from: http://blog.golang.org/normalization
package main

import (
	"fmt"
	"unicode"

	"golang.org/x/text/transform"
	"golang.org/x/text/unicode/norm"
)

func isMn(r rune) bool {
	return unicode.Is(unicode.Mn, r) // Mn: nonspacing marks
}

func main() {
	t := transform.Chain(norm.NFD, transform.RemoveFunc(isMn), norm.NFC)
	result, _, _ := transform.String(t, "žůžo")
	fmt.Println(result)
}
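Note that transform.RemoveFunc has since been deprecated in golang.org/x/text. On a newer version of the package, a sketch of the same idea (NFD, strip nonspacing marks, NFC) can use runes.Remove with a runes.In set instead:

// A minimal sketch using runes.Remove in place of the deprecated
// transform.RemoveFunc; assumes a recent golang.org/x/text.
package main

import (
	"fmt"
	"unicode"

	"golang.org/x/text/runes"
	"golang.org/x/text/transform"
	"golang.org/x/text/unicode/norm"
)

func main() {
	// Decompose, drop all nonspacing marks (Mn), then recompose.
	t := transform.Chain(norm.NFD, runes.Remove(runes.In(unicode.Mn)), norm.NFC)
	result, _, err := transform.String(t, "žůžo")
	if err != nil {
		panic(err)
	}
	fmt.Println(result) // zuzo
}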
answered by dyoo

To expand a bit on the existing answer:

The internet standard for comparing strings across character sets is called PRECIS (Preparation, Enforcement, and Comparison of Internationalized Strings in Application Protocols) and is documented in RFC 7564. There is also a Go implementation at golang.org/x/text/secure/precis.

None of the standard profiles will do what you want, but it would be fairly straightforward to define a new profile that does. You would apply Unicode Normalization Form D ("D" for "Decomposition", meaning accents are split off into their own combining characters), remove any combining characters as part of the additional mapping rule, and then recompose via the normalization rule. Something like this:

package main

import (
	"fmt"
	"unicode"

	"golang.org/x/text/secure/precis"
	"golang.org/x/text/transform"
	"golang.org/x/text/unicode/norm"
)

func main() {
	loosecompare := precis.NewIdentifier(
		precis.AdditionalMapping(func() transform.Transformer {
			return transform.Chain(norm.NFD, transform.RemoveFunc(func(r rune) bool {
				return unicode.Is(unicode.Mn, r)
			}))
		}),
		precis.Norm(norm.NFC), // This is the default; be explicit though.
	)
	p, _ := loosecompare.String("žůžo")
	fmt.Println(p, loosecompare.Compare("žůžo", "zuzo"))
	// Prints "zuzo true"
}

This lets you expand the comparison with more options later (e.g. width mapping, case mapping, etc.), as sketched below.
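As a rough sketch of what that expansion might look like, assuming the FoldWidth and FoldCase options in golang.org/x/text/secure/precis behave as documented, the profile above could also fold width and case before comparing:

// Sketch only: extends the loose-comparison profile with width and case
// folding. Verify FoldWidth/FoldCase behaviour against your use case.
package main

import (
	"fmt"
	"unicode"

	"golang.org/x/text/secure/precis"
	"golang.org/x/text/transform"
	"golang.org/x/text/unicode/norm"
)

func main() {
	loosecompare := precis.NewIdentifier(
		precis.FoldWidth,  // map wide/narrow compatibility variants together
		precis.FoldCase(), // case-insensitive comparison
		precis.AdditionalMapping(func() transform.Transformer {
			return transform.Chain(norm.NFD, transform.RemoveFunc(func(r rune) bool {
				return unicode.Is(unicode.Mn, r) // strip nonspacing marks
			}))
		}),
		precis.Norm(norm.NFC),
	)
	fmt.Println(loosecompare.Compare("Žůžo", "zuzo")) // expected: true
}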

It's also worth noting that removing accents is almost never what you actually want when comparing strings like this; without knowing your use case, though, I can't say whether that applies to your project. To prevent the proliferation of PRECIS profiles, it's best to use one of the existing profiles where possible. Also note that no effort was made to optimize the example profile.

answered by Sam Whited