I know that the Google Vision API supports multiple languages for text detection, and with the code below I can detect English text in an image. According to Google, I can use the `language_hints` parameter to detect other languages. Where exactly am I supposed to put this parameter in the code below?
```python
import io

def detect_text(path):
    """Detects text in the file."""
    from google.cloud import vision

    # My attempt: is this where the language hint goes?
    imageContext = 'bn'
    client = vision.ImageAnnotatorClient(imageContext)

    with io.open(path, 'rb') as image_file:
        content = image_file.read()

    image = vision.types.Image(content=content)

    response = client.text_detection(image=image)
    texts = response.text_annotations
    print('Texts:')

    for text in texts:
        print('\n"{}"'.format(text.description))
        vertices = (['({},{})'.format(vertex.x, vertex.y)
                     for vertex in text.bounding_poly.vertices])
        print('bounds: {}'.format(','.join(vertices)))

detect_text('Outline-of-the-Bangladesh-license-plates_Q320.jpg')
```
The client constructor does not take a language argument, so drop the `ImageAnnotatorClient(imageContext)` call. The hints go in the per-request `image_context`, like this:

```python
response = client.text_detection(
    image=image,
    image_context={"language_hints": ["bn"]},  # Bengali
)
```

See "ImageContext" in the API reference for more details.
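Putting it together, the whole function might look like the sketch below. It keeps the questioner's structure; the `language_hints` argument on the helper is my addition, and the file name in the usage comment is the one from the question.

```python
import io

def detect_text(path, language_hints=None):
    """Detects text in the file, optionally hinting the expected languages."""
    from google.cloud import vision

    # The client takes no language argument; hints are passed per request.
    client = vision.ImageAnnotatorClient()

    with io.open(path, 'rb') as image_file:
        content = image_file.read()

    # Note: in google-cloud-vision >= 2.0 this is vision.Image(...)
    image = vision.types.Image(content=content)

    # language_hints is a list of language codes inside the ImageContext.
    response = client.text_detection(
        image=image,
        image_context={'language_hints': language_hints or []},
    )

    for text in response.text_annotations:
        print('"{}"'.format(text.description))

# Usage with the question's image:
# detect_text('Outline-of-the-Bangladesh-license-plates_Q320.jpg', ['bn'])
```

In most cases the API auto-detects the language, so hints are only needed for scripts it struggles to identify on its own.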