Don't Get Me Started - Touch, Not the Mouse

By David Platt | March 2012

Microsoft is making a huge amount of noise about touch control of computers. As usual, some of it is justified and some is completely wrong. Perhaps this is a fundamental human trait, throwing any new technology at every problem in sight to see what sticks. But with just a bit of careful thought, and a pause from the mass hysteria, you could save many hours wasted in blind alleys.

Touch is an excellent way to control phones and tablets, which don’t have space for keyboards or mice. I bought a non-touch Amazon Kindle years ago, and at the time I liked it very much. But I haven’t used it since I got my smartphone—the Kindle reader app is just too darn sweet. Tap anywhere on the right side of the screen to page forward, the left side to page back, or the menu button to bring up other choices, which I tap to select. I can’t stand using the old Kindle’s hard buttons to page, or the joystick control to select from menus, or holding the extra weight and seeing the wasted space of the seldom-used alphabetic keyboard. Even with a smaller screen, the touch-enabled phone app blows away the dedicated non-touch device. Fortunately, Santa brought me a new Kindle Touch, which I’m having fun with.

On the other hand, I’ve been trying to use touchscreens on a PC since around 1981. They don’t work well there, because PCs solve different problems than mobile devices do. PCs are used for producing content as much as for consuming it, so users need a keyboard to enter that content. Even the worst keyboard typist uses two fingers, one on each hand. On a PC with a vertical touchscreen monitor, a typist can typically use only one hand, cutting the data input rate by at least 50 percent, and often far more.

The touch demo programs show kids finger painting on their touchscreens. Hoo-[expletive]-ray. Show me an artist over the age of six who still finger paints. Tablet for fun, yes. PC for work, no.

A PC mouse has single-pixel resolution, and it’s easy to handle—just slide your hand to the side and grasp it. To touch a PC screen, you have to lift your entire arm, cantilevered out from your shoulder. This takes much more muscle effort, and the pointing is far less precise—line-of-text resolution at best. Do 10 arm lifts right now, and tell me how you’d like to do that all day, in return for less-precise pointing. As long as you need a PC keyboard, a mouse is the best solution to the pointing problem.

But even on a mobile device, touch isn’t a magic bullet. Touch is excellent for selecting among alternatives presented on the screen, but the small keys on a phone’s virtual keyboard are slow and error-prone for entering arbitrary data, such as navigation addresses. I recently drove my younger daughter to a gymnastics meet at “Allard Center YMCA, Goffstown, New Hampshire.” Typing that would have taken dozens of keystrokes, even with auto-complete, plus corrections for the keying errors those tiny keys invite. With voice recognition, I just spoke it into the microphone and bang! I was on the road. (She placed third all-around in her age group. Good going, Lucy.)

The real win is the combination of touch and voice, using each for what it does best. Arriving at an airport recently for a teaching gig, I got in my rental car, pulled out my phone and tapped Navigate. Then I simply said “Marriott” and the phone presented me with a list of all the nearby Marriotts in order of distance from me. I tapped the one I wanted, and the phone started guiding me there. It doesn’t get much sweeter than that.

As Donald Norman wrote in “The Design of Everyday Things” (Doubleday Business, 1990), about encountering objects that are easy to use: “… stop and examine it: the ease of use did not come about by accident. Someone designed the object carefully and well.”

Even I, a hardened cynic who’s worked on these things for decades, could only shake my head in wonder and say “[expletive] magic.”

Well, that didn’t take long: In January’s column (“Lowering Higher Education”), I wrote that university education would shift from a classroom-based model to a Web-based model, hammering those invested in the former. As an example, I cited Stanford’s class on Artificial Intelligence, taught by Peter Norvig and Sebastian Thrun, which the school offered free to anyone via the Web. It attracted 58,000 students from around the world.

Less than a month after that column ran, Thrun announced that he was leaving his tenured position to join the startup online university Udacity (udacity.com). “Having done [the Web AI course], I can’t teach at Stanford again,” Thrun said in an MSNBC.com article. “You can take the blue pill and go back to your classroom and lecture to your 20 students, but I’ve taken the red pill and I’ve seen Wonderland.”


David S. Platt teaches Programming .NET at Harvard University Extension School and at companies all over the world. He’s the author of 11 programming books, including “Why Software Sucks” (Addison-Wesley Professional, 2006) and “Introducing Microsoft .NET” (Microsoft Press, 2002). Microsoft named him a Software Legend in 2002. He wonders whether he should tape down two of his daughter’s fingers so she learns how to count in octal. You can contact him at rollthunder.com.