I have two images:
I want to measure how straight/smooth the text borders are rendered.
The first image is rendered perfectly straight, so it deserves a quality measure of 1. The second image, on the other hand, is rendered with many irregular curves (rough, in a way), so it deserves a quality measure of less than 1. How can I measure this using image processing, with a Python function or a function written in another language?
Clarification:
Some font styles are designed with straight strokes, while others, like cursive styles, are designed with smooth curves. What I'm really after is a quality measure that differentiates the surface roughness of the character borders.
In other words, I want to measure how straight/smooth the text borders are rendered in an image. Inversely, it can also be said that I want to measure how rough the text borders are rendered in an image.
I don't know of a ready-made Python function, but I would:
1) Use potrace to trace the edges and convert them to Bézier curves.
2) Zoom in on part of a glyph, for example the top of the P. Draw lines perpendicular to the curve for a finite length (say, 100 pixels), and plot the color intensity over each line (you can convert to HSI or HSV and use one of those channels, or just convert to grayscale and take the pixel value directly).
3) Then calculate the standard deviation of the derivative of each intensity profile. A large standard deviation means a sharp edge (the intensity change is concentrated in a few pixels), while a small standard deviation means a blurry edge; for a transition spread evenly over the whole line, the standard deviation of the derivative would be zero.
4) For every edge where you drew a perpendicular line, you now have a "smoothness" value. You can then average all the smoothness values per edge, per letter, per word, or per image, as you see fit. The more perpendicular lines you draw, the more accurate your smoothness value, but the more computationally intensive the process becomes.
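Step 3 can be sketched with NumPy, using two synthetic 100-sample grayscale profiles in place of real samples taken along perpendicular lines (note that an abrupt step yields the larger standard deviation of the derivative, while a gradual ramp yields a near-zero one):

```python
import numpy as np

def profile_sharpness(profile):
    # Standard deviation of the first derivative of a grayscale intensity
    # profile sampled along a line perpendicular to the edge. A sharp step
    # concentrates the intensity change in one sample (large std); a
    # gradual ramp spreads it evenly (std near zero).
    return float(np.std(np.diff(np.asarray(profile, dtype=float))))

# Two synthetic 100-sample profiles crossing a dark-to-light edge:
sharp = np.concatenate([np.zeros(50), np.full(50, 255.0)])  # abrupt step
blurry = np.linspace(0.0, 255.0, 100)                       # gradual ramp

print(profile_sharpness(sharp) > profile_sharpness(blurry))  # prints True
```

On real images you would sample each profile with something like bilinear interpolation along the perpendicular line before feeding it to this function.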
I would try something simple, like creating a 'roughness' metric using a few functions from the OpenCV library, since it's easy to work with in Python (and C++, as well as via other wrappers).
For example (without actual source, since I'm typing on my phone):
1. cv2.findContours to get outlines of the letters.
2. cv2.arcLength on each contour; these are the denominators.
3. cv2.approxPolyDP to simplify each contour.
4. cv2.arcLength on each simplified contour; these are the numerators.
5. Divide each numerator by its denominator to get a per-contour ratio.
In step 5, ratios closer to 1.0 require less simplification, so they're presumably less rough. Ratios closer to 0.0 require a lot of simplification, and are therefore probably very rough. Of course, you'll have to tweak the contour-finding code to get appropriate outlines to work with, and you'll need to manage numerical precision to keep the math calculations meaningful, but hopefully the idea is clear enough.
OpenCV also has the useful functions cv2.convexHull and cv2.convexityDefects that you might find interesting for related work. However, they didn't seem appropriate for the letters here, since internal features on letters like M, for example, would be more challenging to address.
Speaking of rough things, I admit this algorithmic outline is incredibly rough! However, I hope it gives you a useful idea to try that seems straightforward to implement quickly to start getting quantitative feedback.