I have multiple PDFs converted into text files, and I want to search for a certain phrase that might be in the files. My problem is that the conversion from PDF to text is not perfect, so errors sometimes appear in the text (such as missing spaces between words, mix-ups between i, l, and 1, etc.).
I was wondering if there is a common technique that would give me a "soft" search, something that looks at the Hamming distance between two terms, for example:
if 'word' in sentence:
vs
if my_search('word', sentence, tolerance):
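A minimal sketch of what that hypothetical my_search could look like, using the Hamming distance over fixed-length windows of the sentence (my_search, sentence, and tolerance are just the names from the snippet above):

def my_search(search_key, sentence, tolerance):
    n = len(search_key)
    for start in range(len(sentence) - n + 1):
        window = sentence[start:start + n]
        # Hamming distance: count positions where the characters differ
        distance = sum(a != b for a, b in zip(search_key, window))
        if distance <= tolerance:
            return True
    return False

my_search('word', 'this is a w0rd here', 1)  # True

Note that Hamming distance only catches substitutions like the i/l/1 swaps; for missing spaces, an edit distance that allows insertions and deletions (such as Levenshtein, or difflib below) copes better.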
You can use something like this:
from difflib import SequenceMatcher

text = """there are
some 3rrors in my text
but I cannot find them"""

def fuzzy_search(search_key, text, strictness):
    lines = text.split("\n")
    for i, line in enumerate(lines):
        words = line.split()
        for word in words:
            # ratio() is a similarity score between 0.0 and 1.0
            similarity = SequenceMatcher(None, word, search_key)
            if similarity.ratio() > strictness:
                return "'{}' matches: '{}' in line {}".format(search_key, word, i + 1)

print(fuzzy_search('errors', text, 0.8))
which should output this:
'errors' matches: '3rrors' in line 2
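Note that fuzzy_search returns only the first match above the threshold and returns None when nothing clears it; lowering strictness makes the search more tolerant of OCR noise, at the cost of more false positives.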
fuzzywuzzy looks like it might work for you: https://github.com/seatgeek/fuzzywuzzy
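For instance, a quick sketch using its fuzz and process modules (scores are on a 0-100 scale; exact values depend on the scorer):

from fuzzywuzzy import fuzz, process

# partial_ratio scores the best-matching substring, which suits a short
# search term embedded in a longer, noisy line
fuzz.partial_ratio('errors', 'some 3rrors in my text')  # 0-100; higher = more similar

# rank candidate words against the query; returns (match, score) tuples
process.extract('errors', ['3rrors', 'words', 'eras'], limit=2)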