Black Highlighter — April 22nd, 2019
In this session, I figured out how to take the data representation of the text rectangles that were detected last time and represent them visually. A number of issues made this more difficult than first anticipated, though.
First, the coordinates that VNTextObservation contains aren’t pixel-based coordinates; they’re percentages of the total image size. That part is easy to fix: multiply each point by the image size. However, the coordinates are also in a “flipped” coordinate system. Instead of the top-left origin that UIKit uses, they use a lower-left origin. So we have to flip the vertical positions before converting them to pixel coordinates.
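The conversion described above can be sketched as a small helper. This is an illustrative function of my own (the name and shape are assumptions, not the project’s actual code); it scales a normalized, lower-left-origin bounding box up to pixels and flips it into UIKit’s top-left-origin space:

```swift
import CoreGraphics

// Hypothetical helper: convert a Vision-style normalized bounding box
// (fractions of the image size, lower-left origin) into a pixel-space
// rect using UIKit's top-left origin.
func imageRect(for normalizedRect: CGRect, in imageSize: CGSize) -> CGRect {
    // Scale the percentage-based values up to pixels.
    let width = normalizedRect.width * imageSize.width
    let height = normalizedRect.height * imageSize.height
    let x = normalizedRect.origin.x * imageSize.width

    // Flip vertically: Vision measures y from the bottom edge,
    // while UIKit measures it from the top.
    let y = (1 - normalizedRect.origin.y - normalizedRect.height) * imageSize.height

    return CGRect(x: x, y: y, width: width, height: height)
}
```

Vision also ships a `VNImageRectForNormalizedRect` function for the scaling half of this, but the vertical flip into UIKit coordinates still has to be handled.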
The other issue I ran into is that the image view we display the image in scales the image down to fit on screen, but the visualization view I created doesn’t do the same scaling (yet). As a result, the rectangles the view draws to show detected text don’t quite line up with the text in the image. I’ll have to make the visualization view draw at the same scale as the image view if I want things to look right.
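One possible way to fix the mismatch (a sketch of an approach, not what the project currently does) is to compute the scale and centering offset an aspect-fit image view applies, and run each detected-text rect through the same transform before drawing:

```swift
import CoreGraphics

// Scale factor an aspect-fit layout applies to an image of imageSize
// displayed inside a view of viewSize.
func aspectFitScale(imageSize: CGSize, viewSize: CGSize) -> CGFloat {
    min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
}

// Map a rect in image (pixel) coordinates into the visualization view's
// coordinates, matching what .scaleAspectFit does: scale uniformly,
// then center the result in the view.
func viewRect(for imageRect: CGRect, imageSize: CGSize, viewSize: CGSize) -> CGRect {
    let scale = aspectFitScale(imageSize: imageSize, viewSize: viewSize)
    let offsetX = (viewSize.width - imageSize.width * scale) / 2
    let offsetY = (viewSize.height - imageSize.height * scale) / 2
    return CGRect(x: imageRect.origin.x * scale + offsetX,
                  y: imageRect.origin.y * scale + offsetY,
                  width: imageRect.width * scale,
                  height: imageRect.height * scale)
}
```

With something like this, the visualization view could draw `viewRect(for:…)` of each detected rectangle instead of the raw pixel rect, so the overlays track the on-screen image.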
- Translate text observations to rectangles in image coordinates
- Create visualization view to draw detected text
Watch this session on YouTube: