Geoff Pado

Black Highlighter — April 17th, 2019

In this session, I started digging into the actual functionality of the app. In order to hide stretches of text, we first have to detect them. The first version of Black Highlighter used CIDetector and CITextFeature to locate text in screenshots. Since that release, though, Apple has added a new framework for detecting things in images: the Vision framework.
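
For reference, here’s a minimal sketch of what detecting text rectangles with Vision looks like. The function name and the choice of a CGImage input are placeholders of mine, not necessarily how the app is structured:

    import CoreGraphics
    import Vision

    // Hypothetical helper: runs a text-rectangle detection request on an image
    // and returns the observations Vision found.
    func detectTextRectangles(in image: CGImage) throws -> [VNTextObservation] {
        let request = VNDetectTextRectanglesRequest()
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])
        return request.results as? [VNTextObservation] ?? []
    }

Each VNTextObservation carries a normalized bounding box, which is exactly the information needed to draw a highlight over a stretch of text.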

I decided to wrap interaction with the Vision framework in an Operation subclass. This lets us put all the work of detecting text rectangles in a single class without requiring other classes to import the Vision framework. It’ll also help with pushing the text-detection work off of the main thread and into the background. Creating a new Operation subclass is pretty straightforward, but requires a bit of boilerplate.
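
Here’s a rough sketch of what such an operation might look like; the class and property names are hypothetical. Because Vision’s perform(_:) call runs synchronously, a plain main() override is enough here:

    import CoreGraphics
    import Foundation
    import Vision

    // Hypothetical operation wrapping Vision text detection, so callers
    // don't need to import Vision themselves.
    class TextDetectionOperation: Operation {
        private let image: CGImage

        // Results are read by the caller after the operation finishes.
        private(set) var observations: [VNTextObservation]?
        private(set) var detectionError: Error?

        init(image: CGImage) {
            self.image = image
            super.init()
        }

        override func main() {
            guard !isCancelled else { return }

            let request = VNDetectTextRectanglesRequest()
            let handler = VNImageRequestHandler(cgImage: image, options: [:])
            do {
                try handler.perform([request])
                observations = request.results as? [VNTextObservation]
            } catch {
                detectionError = error
            }
        }
    }

Adding the operation to an OperationQueue (rather than calling main() directly) is what actually moves the detection work off of the main thread.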

One other thing I tackled this session was logging. Some unexpected errors can occur during text detection, and I thought it would be important to log when those errors happen. I’m mostly using the system’s os_log API, with just a thin wrapper to make it a bit easier to use from Swift.
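
The wrapper doesn’t need to be much; something like this sketch covers it, with the type name and subsystem string as placeholders of mine:

    import os.log

    // Hypothetical thin wrapper over os_log for friendlier Swift call sites.
    struct Log {
        // Placeholder subsystem identifier, not the app's real bundle ID.
        private static let log = OSLog(subsystem: "com.example.BlackHighlighter", category: "TextDetection")

        static func error(_ message: String) {
            // %{public}@ keeps the message visible in Console instead of
            // being redacted as <private>.
            os_log("%{public}@", log: log, type: .error, message)
        }

        static func info(_ message: String) {
            os_log("%{public}@", log: log, type: .info, message)
        }
    }

A call site then reads as simply as Log.error("Text detection failed"), with no os_log format strings in sight.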

Commits Made

Tickets Closed

None.

Tickets Created

None.

Project Stats

Sessions Completed: 7
Days Since Start: 17
Issues Closed: 8
Issues Open: 25
Percent Complete: 24.2%

Replay

Watch this session on YouTube: