The trouble with gestures (and web 2.5)

While reading Lukas Mathis’s article on the problems with gestures, I found myself nodding along with his description of the usability challenges presented by gesture-based interfaces (as in iPhone/iPad/Android/WebOS). Mathis says the trouble is:

these gestures are not obvious. As Duncan Wilcox points out, gesture-based user interfaces often don’t have visible elements that represent verbs. The gesture is the verb. This works if the gesture is intuitive, but breaks down if there is no «natural» gesture for a verb. And since there is no intuitive, natural way of moving an object by one pixel (or skewing it, or mirroring it), we have to learn that command, and memorize it.

In my experience, the only natural gestures are pushing on buttony things (and links), scrolling, and paging. Even these three are helped significantly by visual evidence: the thing looks “pressable,” or suggests that more content exists above/below or left/right of your current screen position.


Mathis’s argument, in a nutshell, is: don’t use unnatural gestures. The examples he uses to illustrate his point, the single-pixel nudge and “match the size of two objects” in Pages, seem a bit contrived to me. I say contrived because I have never wanted to do either of those things while creating a document in my nearly two decades of writing stuff on computers. The fact that someone wasted the time to create a gesture for these two features is far more absurd, however. As a user, I can’t help but wonder how it is that they couldn’t get the formatting toolbar to show up when the iPad was in landscape mode, but they managed to get these gestures in.

Modes, quasi-modes, and inspectors, which Mathis suggests would go a long way toward creating a more discoverable/learnable set of interface elements, are already in use by some of my favorite iPhone/iPad and Web apps. It seems more likely that a set of standards will coalesce around these approaches than around gestures, given the fervor being drummed up by the patent land-grab surrounding them. I hope it works out, because the current environment is encouraging people to come up with some incredibly stupid gestures. And we all know how hard it is to walk away from a bad interface idea without inflicting it on someone, right? I mean, you took all that time thinking it up… Want to skew a photo while adding a border and inverting the colors? There’s a (patented) gesture for that. Ugh.

Setting native apps aside for just a moment, I can’t help but think about the problems with gestures in relation to the ever richer and more readily available interaction/animation techniques for the web. Just like gestures, these tools are adding to the vocabulary of the web. And, just like gestures, we are in danger of using them in ways that obscure rather than reveal the “how” of our interfaces. These new tools even give us the power to change functionality that seemed sovereign. Like this bunch of jerks who want to hijack copy and paste for their own purposes.

Bottom line: just like the well-intentioned designers who are coming up with new and nearly undiscoverable/unlearnable gestures, web designers need to be careful. As exciting as it might be to invent “the next generation of user interfaces,” I think everyone is better served by designers being good stewards of interface standards until technology allows us to create a “gesture” that naturally replaces the old interaction. Short of that, we are in danger of making

user interfaces [that] are a step back, a throwback to the command line…the user interface doesn’t tell you what you can do with an object. Instead, you have to remember [what you can do], the same way you had to remember the commands you could use in a command line interface.