 

Efficient string matching in Apache Spark

Using an OCR tool, I extracted texts from screenshots (about 1-5 sentences each). However, when manually verifying the extracted texts, I noticed several recurring errors.

Given the text "Hello there 😊! I really like Spark ❤️!", I noticed that:

1) Characters like "I", "!", and "l" get replaced by "|".

2) Emojis are not extracted correctly; they are replaced by other characters or left out entirely.

3) Blank spaces are removed from time to time.

As a result, I might end up with a string like this: "Hello there 7l | real|y like Spark!"

Since I am trying to match these strings against a dataset containing the correct texts (in this case "Hello there 😊! I really like Spark ❤️!"), I am looking for an efficient way to match the strings in Spark.

Can anyone suggest an efficient algorithm for Spark that allows me to compare the extracted texts (~100,000) against my dataset (~100 million entries)?

asked May 12 '17 by mrtnsd
1 Answer

I wouldn't use Spark in the first place, but if you are really committed to this particular stack, you can combine a number of ML transformers to get the best matches. You'll need a Tokenizer (or split):

import org.apache.spark.ml.feature.RegexTokenizer

val tokenizer = new RegexTokenizer()
  .setPattern("")
  .setInputCol("text")
  .setMinTokenLength(1)
  .setOutputCol("tokens")
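The "(or split)" aside could look like the following sketch, assuming a DataFrame df with a text column (my assumption, not part of the original pipeline). Splitting on the empty pattern yields one-character tokens, though unlike RegexTokenizer it does not lowercase or drop short tokens:

import org.apache.spark.sql.functions.split

// Hypothetical alternative: character tokens via the SQL split function
// (requires spark.implicits._ for the $ syntax).
val tokenized = df.withColumn("tokens", split($"text", ""))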

an NGram (for example a 3-gram):

import org.apache.spark.ml.feature.NGram

val ngram = new NGram()
  .setN(3)
  .setInputCol("tokens")
  .setOutputCol("ngrams")
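For intuition, here is a small (hypothetical) check of what these two stages produce on a one-row DataFrame; note that RegexTokenizer lowercases its input by default, so "Spark!" becomes the character tokens s, p, a, r, k, !:

// Quick sanity check (assumes spark.implicits._ is in scope).
ngram.transform(tokenizer.transform(Seq("Spark!").toDF("text")))
  .select("ngrams")
  .show(false)
// -> [s p a, p a r, a r k, r k !]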

a vectorizer (for example CountVectorizer or HashingTF):

import org.apache.spark.ml.feature.HashingTF

val vectorizer = new HashingTF()
  .setInputCol("ngrams")
  .setOutputCol("vectors")
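If you would rather use the CountVectorizer option mentioned above, a minimal sketch; unlike HashingTF it is an estimator that learns an explicit n-gram vocabulary during fitting, but it drops into the Pipeline the same way:

import org.apache.spark.ml.feature.CountVectorizer

// Builds an explicit n-gram vocabulary instead of hashing features.
val vectorizer = new CountVectorizer()
  .setInputCol("ngrams")
  .setOutputCol("vectors")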

and LSH:

import org.apache.spark.ml.feature.{MinHashLSH, MinHashLSHModel}

// Increase numHashTables in practice.
val lsh = new MinHashLSH()
  .setInputCol("vectors")
  .setOutputCol("lsh")
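Following the comment above, a variant with numHashTables raised to an assumed value of 5; more hash tables lower the chance of missing a true match at the cost of extra computation:

val lsh = new MinHashLSH()
  .setNumHashTables(5) // assumed value, tune for your data
  .setInputCol("vectors")
  .setOutputCol("lsh")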

Combine with a Pipeline:

import org.apache.spark.ml.Pipeline

val pipeline = new Pipeline()
  .setStages(Array(tokenizer, ngram, vectorizer, lsh))

Fit on example data:

val query = Seq("Hello there 7l | real|y like Spark!").toDF("text")

val db = Seq(
  "Hello there 😊! I really like Spark ❤️!",
  "Can anyone suggest an efficient algorithm"
).toDF("text")

val model = pipeline.fit(db)

Transform both:

val dbHashed = model.transform(db)
val queryHashed = model.transform(query)

and join:

model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxSimilarityJoin(dbHashed, queryHashed, 0.75)
  .show

+--------------------+--------------------+------------------+
|            datasetA|            datasetB|           distCol|
+--------------------+--------------------+------------------+
|[Hello there 😊! ...|[Hello there 7l |...|0.5106382978723405|
+--------------------+--------------------+------------------+
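If you only need the closest match for one query at a time, MinHashLSHModel also exposes approxNearestNeighbors, which takes a single key vector; a hypothetical lookup for the first query row:

import org.apache.spark.ml.linalg.Vector

// Pull the hashed feature vector of the first query and fetch its
// single nearest neighbour from the database.
val key = queryHashed.select("vectors").head.getAs[Vector]("vectors")
model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxNearestNeighbors(dbHashed, key, 1)
  .show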

The same approach can be used in PySpark:

from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, NGram, HashingTF, MinHashLSH

query = spark.createDataFrame(
    ["Hello there 7l | real|y like Spark!"], "string"
).toDF("text")

db = spark.createDataFrame([
    "Hello there 😊! I really like Spark ❤️!",
    "Can anyone suggest an efficient algorithm"
], "string").toDF("text")

model = Pipeline(stages=[
    RegexTokenizer(
        pattern="", inputCol="text", outputCol="tokens", minTokenLength=1
    ),
    NGram(n=3, inputCol="tokens", outputCol="ngrams"),
    HashingTF(inputCol="ngrams", outputCol="vectors"),
    MinHashLSH(inputCol="vectors", outputCol="lsh")
]).fit(db)

db_hashed = model.transform(db)
query_hashed = model.transform(query)

model.stages[-1].approxSimilarityJoin(db_hashed, query_hashed, 0.75).show()
# +--------------------+--------------------+------------------+
# |            datasetA|            datasetB|           distCol|
# +--------------------+--------------------+------------------+
# |[Hello there 😊! ...|[Hello there 7l |...|0.5106382978723405|
# +--------------------+--------------------+------------------+

Related

  • Optimize Spark job that has to calculate each to each entry similarity and output top N similar items for each
answered Oct 04 '22 by Alper t. Turker