Complex Gesture Recognition in iOS – Part 2: The iOS Implementation
- Part 1: The Research
- Part 2: The iOS Implementation (GitHub repo)
- Part 3: The Demo App (coming less soon)
Well, it took a little longer than I had hoped, but I finally got around to implementing the N-Dollar Gesture Recognizer in Objective-C for iOS. If you remember, this all started with a need for a good iOS implementation of a multi-stroke gesture recognizer (I call these gestures glyphs). N-Dollar was the clear winner in my research, so without further ado I present my attempt at implementing it.
Grab the source here…
To summarize how to use this library in a few bullet points:
- Initialize a Detector and seed it with templates to the gestures you want to recognize
- Capture user-input and pass it to the Detector
- When you’re ready to detect the gesture, ask the Detector to calculate which of its templates the user input matches the most.
Initializing the Detector
Initializing the detector is a simple affair:
It’s important to note that the detector is pretty useless unless you seed it with some templates to match against. Here I add a template of a gesture loaded from its JSON representation.
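A minimal sketch of both steps, assuming the class and method names from the repo (a `WTMGlyphDetector` created via its `detector` convenience constructor, seeded with `addGlyphFromJSON:name:`); check the headers in the source for the exact signatures:

```objc
#import "WTMGlyphDetector.h"

- (void)setupDetector {
    // Create the detector and register for its match callbacks
    self.glyphDetector = [WTMGlyphDetector detector];
    self.glyphDetector.delegate = self;

    // Seed it with a template: here, a "D" glyph stored as a JSON
    // file in the app bundle (file name is illustrative)
    NSString *path = [[NSBundle mainBundle] pathForResource:@"D"
                                                     ofType:@"json"];
    NSData *jsonData = [NSData dataWithContentsOfFile:path];
    [self.glyphDetector addGlyphFromJSON:jsonData name:@"D"];
}
```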
The JSON representation of these gestures is simply an array of X,Y coordinate points. For instance, here’s a JSON array of points for a gesture that resembles the letter ‘D’. You can use multiple strokes, or a single stroke with all of the points combined.
Like the honey badger, the detector doesn’t care!
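To illustrate the idea, here’s a hypothetical two-stroke ‘D’ with just a handful of points per stroke (the spine, then the curve). A real template would carry far more points, and the exact nesting may differ; see the sample files in the repo for the format the library actually expects:

```json
[
  [ [10, 10], [10, 60], [10, 110] ],
  [ [10, 10], [55, 25], [65, 60], [55, 95], [10, 110] ]
]
```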
Capturing User Input
It’s really up to you how you want to capture the user input. The UIResponder Class is a good start, but I’m personally using Cocos2d, so the following code makes use of their abstraction of touch events:
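Roughly, the Cocos2d version looks like this (a sketch; with plain UIKit you’d override `touchesMoved:withEvent:` on a `UIResponder` subclass and do the same thing):

```objc
// Cocos2d touch handler (sketch). Each time the touch moves, sample
// the point and feed it to the detector as the user draws.
- (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:[touch view]];
    [self.glyphDetector addPoint:location];
}
```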
Note that at each firing of the touch event handler, I take the corresponding point and add it to the recognizer through the [glyphDetector addPoint:] message.
Detect the Glyph
Once you’re satisfied with the user input, you can call [glyphDetector detectGlyph]. This will use the N-Dollar/Protractor algorithm to compare the user input against the templates you defined. For each template, the detector computes a score (higher is better). It returns the pre-defined gesture with the highest score to its delegate, and from there it’s up to you whether to trust the match or wait for more user input!
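Putting it together, a sketch of the detection step and the delegate callback. The delegate method name follows the repo’s delegate protocol as I understand it, and the threshold value is purely illustrative; tune it for your own gestures:

```objc
// When input seems complete (here, on touch end; a short timer also
// works), ask the detector for its best match.
- (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [self.glyphDetector detectGlyph];
}

// Delegate callback with the winning template and its score
// (method name assumed from the repo's delegate protocol).
- (void)glyphDetected:(WTMGlyph *)glyph withScore:(float)score {
    if (score > 1.5) {  // threshold is app-specific, not prescribed
        NSLog(@"Matched glyph: %@ (score %f)", glyph.name, score);
    }
    // Otherwise, keep collecting input and try again later.
}
```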
Update: Here’s more info on interpreting the score that’s returned.
Up Next, An App
Don’t expect it any time soon, but I hope to put together an example app that demonstrates this library in its entirety. Please feel free to contact me at brit (at) this domain with any questions, or just shoot me a message on the GitHub repo!