
Touch

All Microsoft® Windows® applications should have a great touch experience. And doing so is easier than you think.

Design concepts
Guidelines
   Control usage
   Control sizing
   Control layout and spacing
   Interaction
   Windows Touch gestures
   Forgiveness
Documentation

Touch refers to the way Windows lets you interact directly with a computer using a finger. Compared to using a mouse, keyboard, or pen, touch is often much more natural, engaging, and convenient.

Figure of finger touching tablet screen

Using touch.

Many touch interactions are performed using gestures and flicks. A gesture is a quick movement of one or more fingers on a screen that the computer interprets as a command, rather than as a mouse movement, writing, or drawing. Windows 7 has new multitouch gestures such as pan, zoom, rotate, two-finger tap, and press and tap. One of the quickest and easiest gestures to perform is a flick. A flick is a simple gesture that results in navigation or an editing command. Navigational flicks include drag up, drag down, move back, and move forward, whereas editing flicks include copy, paste, undo, and delete.

A manipulation is a real-time, physical handling of an object. A manipulation differs from a gesture in that the input corresponds directly to how the object would react naturally to the action in the real world. For example, a photo viewing application might allow users to manipulate a photo by moving, zooming, resizing, and rotating the image. Multitouch manipulations use multiple contact points simultaneously.

For tasks that require fine movement or hover, Windows Vista® provides the touch pointer, which is a floating, on-screen pointer that looks like a mouse. Because the touch pointer isn't as easy to use as direct input, touchable programs avoid relying on the touch pointer. The touch pointer is disabled by default in Windows 7.

A program is considered touchable when it has only the touch support provided by Windows and the controls for the most important tasks are easy to touch. (Tasks that require using a fingernail aren't considered touch friendly—touchable controls need to be large enough for easy targeting with a fingertip.) In practice, this means:

  • The program's interactive controls are large enough to be easily touchable—at least 23x23 pixels (13x13 dialog units, or DLUs).
  • The program has good keyboard and mouse support, so that relevant system gestures such as flicks, multitouch gestures, and drag-and-drop are functional.
  • No tasks require using hover or the touch pointer.
  • All controls use Microsoft Active Accessibility (MSAA) to provide programmatic access to the UI for assistive technologies.

A program is considered touch-enabled when it has been designed for touch for its primary tasks, which usually means:

  • The most frequently used controls are at least 40x40 pixels (23x22 DLUs).
  • Relevant gestures are supported (including panning, zoom, rotate, two-finger tap, press and tap), and the effect occurs at the point of contact.
  • The program provides smooth, responsive visual feedback while panning, zooming, and rotating so that it feels highly interactive.

A program is considered touch-optimized when it has been specifically designed for touch, which usually means:

  • Tasks are designed for easy touch by placing the most frequently performed commands directly on the UI or content instead of in drop-down menus.
  • The program's special experiences are designed to have an immersive touch experience (possibly using raw touch input data, as in the sketch after this list), with multitouch manipulations and details such as feedback with real-world physical properties, like momentum and friction.
  • Tasks are forgiving, allowing users to correct mistakes easily and handle inaccuracy with touching and dragging.
  • Tasks are designed to avoid or reduce the need for heavy text input or precise selection.
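
Developers: The raw touch input data mentioned above reaches Win32 programs through the WM_TOUCH message. The following is a minimal sketch of opting in and reading the contact points, assuming a Windows 7 build target; the tracking comments are placeholders for your own manipulation logic.

    #define _WIN32_WINNT 0x0601           // Windows 7 touch APIs
    #include <windows.h>
    #include <vector>

    // Call once after the window is created to receive WM_TOUCH instead of gesture messages.
    void EnableRawTouch(HWND hwnd)
    {
        RegisterTouchWindow(hwnd, 0);
    }

    // Call from the window procedure when msg == WM_TOUCH.
    LRESULT HandleTouch(HWND hwnd, WPARAM wParam, LPARAM lParam)
    {
        UINT count = LOWORD(wParam);                      // number of contact points
        std::vector<TOUCHINPUT> inputs(count);
        if (!GetTouchInputInfo((HTOUCHINPUT)lParam, count, inputs.data(), sizeof(TOUCHINPUT)))
            return DefWindowProc(hwnd, WM_TOUCH, wParam, lParam);

        for (const TOUCHINPUT& ti : inputs)
        {
            // Coordinates arrive in hundredths of a screen pixel.
            POINT pt = { TOUCH_COORD_TO_PIXEL(ti.x), TOUCH_COORD_TO_PIXEL(ti.y) };
            ScreenToClient(hwnd, &pt);
            if (ti.dwFlags & TOUCHEVENTF_DOWN) { /* begin tracking contact ti.dwID */ }
            if (ti.dwFlags & TOUCHEVENTF_MOVE) { /* update the manipulated object   */ }
            if (ti.dwFlags & TOUCHEVENTF_UP)   { /* stop tracking contact ti.dwID   */ }
        }
        CloseTouchInputHandle((HTOUCHINPUT)lParam);
        return 0;
    }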

Note: Guidelines related to mouse, pen, and accessibility are presented in separate articles.

Design concepts

Using touch for input has the following characteristics:

  • Natural and intuitive. Everyone knows how to point with a finger and touch things. Object interactions are designed to correspond consistently to how users interact with objects in the real world.
  • Less intrusive. Using touch is silent, and consequently much less distracting than typing or clicking, especially in social situations such as meetings. Compared to using a pen, using a finger is particularly convenient because you don't have to locate or pick up a pen.
  • Portable. A computer with touch capability can be more compact because most tasks can be completed without a keyboard, mouse, or touchpad. It can be more flexible because it doesn't require a work surface. It enables new places and scenarios for using a computer.
  • Direct and engaging. Touch makes you feel like you are directly interacting with the objects on the screen, whereas using a mouse or touchpad always requires you to coordinate hand movements with separate on-screen pointer movements—which feels indirect by comparison.
  • Reduced accuracy. Users can't target objects as accurately using touch, compared to a mouse or pen. Consequently, you can't expect users to tap or manipulate small objects.

Touch provides a natural, real-world feel to interaction. Direct manipulation and animation complete this impression, by giving objects a realistic, dynamic motion and feedback. For example, consider a card game. Not only is it convenient and easy to drag cards using a finger, the experience takes on an engaging real-world feel when you can toss the cards and have them glide, spin, and bounce exactly like physical cards. And when you try to move a card that can't be moved, it's a better experience to have the card resist but not prevent movement, and settle back in place when released, to clearly indicate that the action was recognized but can't be done.

All Windows programs should be touchable

While touch is traditionally associated with Tablet PCs, it is becoming common on ordinary computers. The Windows Tablet and Touch Technology is a standard component of Windows Vista and Windows 7, so any compatible computer has the ability to take advantage of touch if it has the appropriate hardware. As a result, computer manufacturers are now including touchscreens in ordinary laptops and even in desktop monitors.

As touch spreads from Tablet PCs to other types of computers, software program developers and designers will find it increasingly important to support touch as well. All Windows programs should have a great touch experience. Users should be able to perform your program's most important tasks efficiently using their fingers. Some tasks, like typing or detailed pixel manipulation, may not be appropriate for touch, but those that are should be touchable.

Fortunately, if your program is already well designed, providing a great touch experience is easy to do. For this purpose, a well-designed program:

  • Has good mouse support. The interactive controls have clear, visible affordances. Objects have standard behaviors for the standard mouse interactions (single and double left-click, right-click, drag, and hover).
  • Has good keyboard support. The program makes users efficient by providing standard shortcut key assignments, especially for navigation and editing commands that can also be generated through touch gestures.
  • Has controls large enough for touch. The controls have a minimum size of 23x23 pixels (13x13 DLUs), and the most commonly used controls are at least 40x40 pixels (23x22 DLUs). To avoid unresponsive behavior, there should be no small gaps between targets—the UI elements should be spaced so that adjacent targets are either touching or have at least 5 pixels (3 DLUs) of space between them.
  • Provides smooth, responsive panning and zooming wherever appropriate. The program responds to pan and zoom events and redraws quickly enough that the interaction feels smooth during the gesture.
  • Is accessible. Uses Microsoft Active Accessibility (MSAA) to provide programmatic access to the UI for assistive technologies. The program appropriately responds to theme and system metric changes.
  • Works well and looks good at 120 dpi (dots per inch), which is the recommended default dpi setting for computers enabled for Windows Touch.
  • Uses common controls. Most common controls are designed to support a good touch experience. If necessary, the program uses well-implemented custom controls that are designed to support easy targeting and interactive manipulation.
  • Uses constrained controls. Constrained controls like lists and sliders, when designed for easy touch targeting, can be better than unconstrained controls like text boxes because they reduce the need for text input.
  • Provides appropriate default values. The program selects the safest (to prevent loss of data or system access) and most secure option by default. If safety and security aren't factors, the program selects the most likely or convenient option, thereby eliminating unnecessary interaction.
  • Provides text auto-completion. Provides a list of most likely or recently input values to make text input much easier.

Unfortunately, the converse is also true—if your program isn't well designed, its shortcomings are going to be especially obvious to users who use touch.

Just as accessible software benefits all users, providing a great touch experience benefits all users because everyone benefits when basic interactions are easy to perform, efficient, responsive, and forgiving.

Model for touch interaction

If you aren't experienced with using touch, the best introduction is to learn by doing. Get a touch-enabled computer, put the mouse and keyboard aside, and do the tasks that you normally do using just your fingers. If you have a Tablet PC, experiment with holding it in different positions, such as on your lap, lying flat on a table, or in your arms while you're standing. Try using it in portrait and landscape orientation.

As you experiment with touch, you'll discover that:

  • Small controls are difficult to use. The size of the controls greatly affects your ability to interact effectively. Controls that are at least 23x23 pixels (13x13 DLUs) are usable with a finger, but larger controls of at least 40x40 pixels (23x22 DLUs) are even more comfortable to use. For example, the Start button (42x35 pixels) is easy to touch whereas spin controls (15x11 pixels) are much too small to use with a finger.
  • Task locality helps. While you can move the pointer across a 14-inch screen with a 3-inch mouse movement, using touch requires you to move your hand the full 14 inches. Repeatedly moving between targets that are far apart can be tedious, so it's much better to keep task interactions within the range of a resting hand whenever possible. Context menus are convenient because they require no hand movement.
  • Hover must not be required. Most touchscreen technologies don't detect a hovering finger, even if they can detect a hovering pen. If a program has tasks that depend on hover, you won't be able to perform them efficiently using touch.
  • Text input and selection are difficult. Lengthy text input is especially difficult using touch, so auto-completion and acceptable default text values can really simplify tasks. Text selection can also be quite difficult, so tasks are easier when they don't require precise cursor placement.
  • Small targets near the edge of the display can be very difficult to touch. Some display bezels protrude, and some touchscreen technologies are less sensitive at the edges, making controls near the edge harder to use. For example, the Minimize, Maximize/Restore, and Close buttons on the title bar can be harder to use when a window is maximized.

While there are several challenges here, addressing them improves the experience for all users.

Basic touch design principles

Each input device has its strengths and weaknesses. The keyboard is best for text input and giving commands with minimal hand movement. The mouse is best for efficient, precise pointing. Touch is best for object manipulation and giving simple commands. A pen is best for freeform expression, as with handwriting and drawing.

When thinking about touch support for your program:

  • Don't assume that if a UI works well for a mouse, it also works well for touch. While good mouse support is a start, a good touch experience has a few additional requirements.
  • You can assume that if a UI works well for a finger, it also works well for a pen. Making your program touchable goes a long way to providing good pen support. The primary difference is that fingers have a blunter tip, so they need larger targets. And again, hover must be optional. For guidelines about supporting pen input, see Pen.
  • Don't depend on the touch pointer to fix touch UI problems. Because the touch pointer isn't as easy to use as direct input, view it as a last resort for programs that haven't been designed for touch.

Control sizes

Fitts' Law states that the time required to interact with a target depends upon the size of the target and the distance to it. The smaller a target is, and the further away it is, the harder it is to use. But due to the large surface area of the fingertip, small controls that are too close together can also be difficult to target precisely.

As a general rule, a control size of 23x23 pixels (13x13 DLUs) is a good minimum interactive control size for any input device. By contrast, the spin controls at 15x11 pixels are much too small to be used effectively with touch.

Screen shot of spin control and its dimensions

The spin control is too small for touch.

Note that the minimum size is really based on physical area, not layout metrics such as pixels or DLUs. Research indicates that the minimum target area for efficient, accurate interaction using a finger is 6x6 millimeters (mm). This area translates to layout metrics as follows:

Font               Millimeters   Relative pixels   DLUs
9 point Segoe UI   6x6           23x23             13x13
8 point Tahoma     6x6           23x23             15x14

Furthermore, research shows that a minimum size of 10x10 mm (about 40x40 pixels) enables better speed and accuracy, and also feels more comfortable to users. When practical, use this larger size for command buttons used for the most important or frequently used commands.

Screen shot of button larger than minimum size

In this example, Microsoft Word uses buttons larger than 10x10 mm for the most important commands.

Screen shot of Calculator with large buttons

This version of Calculator uses buttons larger than 10x10 mm for its most frequently used commands.

The goal isn't to have giant controls—just easily touchable controls. You can make controls easily touchable without appearing excessively large by using the techniques listed in the guidelines section later in this article.

Note: The pixel sizes given here assume 96 dpi. Targeting accuracy depends on a control's physical size, not its pixel count, so it doesn't improve at higher dpi settings; scale pixel sizes proportionally when running at higher dpi.
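
Because the targets are physical sizes, it can help to compute pixel sizes from millimeters at the dpi the program actually runs under. A minimal sketch (MillimetersToPixels is an illustrative helper, not a Windows API; the 6 mm and 10 mm values are the targets discussed above):

    #include <windows.h>

    // Convert a physical size in millimeters to pixels at the current screen dpi.
    // There are 25.4 mm per inch; GetDeviceCaps(LOGPIXELSX) reports pixels per logical inch.
    int MillimetersToPixels(double mm)
    {
        HDC hdc = GetDC(NULL);
        int dpi = GetDeviceCaps(hdc, LOGPIXELSX);
        ReleaseDC(NULL, hdc);
        return MulDiv((int)(mm * 100), dpi, 2540);   // rounds to the nearest pixel
    }

    // At 96 dpi these give roughly 23 and 38 pixels; at 120 dpi, roughly 28 and 47.
    int minTarget     = MillimetersToPixels(6.0);    // minimum touchable size
    int comfortTarget = MillimetersToPixels(10.0);   // comfortable size for key commands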

Control spacing

The spacing between controls is also a factor in making controls easily touchable. Targeting is quicker but less precise when using a finger as the pointing device, resulting in users more often tapping outside their intended target. When interactive controls are placed very close together but are not actually touching, users may click on inactive space between the controls. Because clicking inactive space has no result or visual feedback, users are often uncertain what went wrong. If small controls are too closely spaced, the user needs to tap with precision to avoid tapping the wrong object. To address these issues, the target regions of interactive controls should either be touching or preferably have at least 5 pixels (3 DLUs) of space between them.

You can make controls within groups easier to differentiate by using more than the recommended vertical spacing between controls. For example, radio buttons at 19 pixels high are shorter than the minimum recommended size of 23 pixels. When you have vertical space available, you can achieve roughly the same effect as the recommended sizing by adding an additional 4 pixels of spacing to the standard 7 pixels.

Correct:
Screen shot of dialog box with normal spacing

Better:
Screen shot of dialog box with extra spacing

In the better example, the extra spacing between the radio buttons makes them easier to differentiate.

There may be situations in which extra spacing would be desirable when using touch, but not when using the mouse or keyboard. In such cases, you should only use a more spacious design when an action is initiated using touch.

Control location

Task locality reduces tedious repeating cross-screen movements. To minimize hand movements, locate controls close to where they are most likely going to be used.

Incorrect:
Screen shot of color palette separated from tools

In this example from Windows XP, the color palette is too far from where it is likely to be used.

Consider that the user's current location is the closest a target can be, making it trivial to acquire. Thus, context menus take full advantage of Fitts' law, as do the mini-toolbars used by Microsoft Office.

Screen shot of pointers near menus

The current pointer location is always the easiest to acquire.

Small targets near the display edge can be difficult to touch, so avoid placing small controls near window edges. To ensure that controls are easy to target when a window is maximized, either make them at least 23x23 pixels (13x13 DLUs) or place them away from the window edge.

Touch interactions

System gestures

System gestures are defined and handled by Windows. As a result, all Windows programs have access to them. These gestures have equivalent mouse, keyboard, and application command messages:

System gesture                                      Synthesized equivalent message
Hover (when supported)                              Mouse hover
Tap (down and up)                                   Mouse left-click
Double tap (down and up twice)                      Mouse double left-click
Press and hold (down, pause, up)                    Mouse right-click
Drag (down, move, up)                               Mouse left-drag
Press, hold, and drag (down, pause, move, up)       Mouse right-drag
Select (down, move over selectable objects, up)     Mouse select

Developers: For more information, see SystemGesture Enumeration.

Flicks

Flicks are simple gestures that are roughly the equivalent of keyboard shortcuts. Navigational flicks include drag up, drag down, move back, and move forward. Editing flicks include copy, paste, undo, and delete. To use flicks, your program only needs to respond to the related keyboard and application commands, or it can handle the flick events directly.

Figure of flick gestures such as the Move gesture

The eight flick gestures and their default assignments in Windows 7. The navigation flicks were changed to correspond to panning (where the object moves with the gesture) instead of scrolling (where the object moves in the opposite direction of the gesture).

Figure of flick gestures such as the Move gesture

The eight flick gestures and their default assignments in Windows Vista.

The navigational flicks have natural mapping, so they are easy to learn and remember. The editing flicks are diagonals that require more precision and their mappings are not as natural (flick towards the Recycle Bin to delete, flick in the Back arrow direction to undo), so these aren't enabled by default. All flick actions can be customized using the Pen and Input Devices control panel item.

Flick                       Synthesized equivalent message
Flick left                  Forward command (Back command for Windows Vista)
Flick right                 Back command (Forward command for Windows Vista)
Flick up                    Keyboard Scroll Down
Flick down                  Keyboard Scroll Up
Flick up-left diagonal      Keyboard Delete
Flick down-left diagonal    Keyboard Undo
Flick up-right diagonal     Keyboard Copy
Flick down-right diagonal   Keyboard Paste
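
Developers: Because flicks arrive as the synthesized messages above, supporting them usually means handling commands you already support for the keyboard and mouse. The sketch below assumes the Back and Forward commands are delivered as the standard WM_APPCOMMAND browser commands; OnNavigateBack and OnNavigateForward are placeholders for your own handlers.

    #include <windows.h>

    // Placeholder command handlers; wire these to your own navigation code.
    void OnNavigateBack();
    void OnNavigateForward();

    // Back/Forward flicks arrive as application commands; the editing flicks arrive as the
    // standard Ctrl+C / Ctrl+V / Ctrl+Z / Delete keystrokes, which edit controls and
    // existing keyboard accelerators already handle.
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_APPCOMMAND)
        {
            switch (GET_APPCOMMAND_LPARAM(lParam))
            {
            case APPCOMMAND_BROWSER_BACKWARD: OnNavigateBack();    return TRUE;
            case APPCOMMAND_BROWSER_FORWARD:  OnNavigateForward(); return TRUE;
            }
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }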

Application gestures

Applications can define and handle other gestures as well. The Microsoft Gesture Recognizer can recognize over 40 gestures. To use application gestures, your program must define the gestures it recognizes, and then handle the resulting events.

Hover

Hover is a useful interaction because it allows users to get additional information through tips before initiating an action. Doing so makes users feel more confident and reduces errors.

Unfortunately, hover isn't supported by most touch technologies, so users will not be able to hover when using a finger. The simple solution to this problem is to take full advantage of hover, but only in ways that are not required to perform an action. In practice, this usually means that the action can also be performed by clicking, but not necessarily in exactly the same way.

Screen shots of results of hovering and clicking

In this example, users can see today's date by either hovering or clicking.

Responsiveness and consistency

Responsiveness is essential for creating touch experiences that feel direct and engaging. To feel direct, gestures must take effect immediately, and an object's contact points must stay under the user's fingers smoothly throughout the gesture. The effect of a manipulation should map directly to the user's motion, so, for example, if the user rotates his fingers 90 degrees, the object should rotate 90 degrees as well. Any lag, choppy response, loss of contact, or inaccurate results destroys the perception of direct manipulation and also of quality.

Consistency is essential for creating touch experiences that feel natural and intuitive. Once users learn a standard gesture, they expect that gesture to have the same effect across all applicable programs. To avoid confusion and frustration, never assign non-standard meanings to standard gestures. Instead, use custom gestures for interactions unique to your program.

Forgiveness

What makes touch so natural, expressive, efficient, and engaging is its directness. In fact, interacting through touch is often referred to as direct manipulation. However, where there is direct manipulation, there can be accidental manipulation—and therefore the need for forgiveness.

Forgiveness is the ability to reverse or correct an undesired action easily. You make a touch experience forgiving by providing undo, giving good visual feedback, having a clear physical separation between frequently used commands and destructive commands, and allowing users to correct mistakes easily. Associated with forgiveness is preventing undesired actions from happening in the first place, which you can do by using constrained controls and confirmations for risky actions or commands that have unintended consequences.

Editing text

Editing text is one of the most challenging interactions when using a finger. Using constrained controls, appropriate default values, and auto-completion eliminates or reduces the need to input text. But if your program involves editing text, you can make users more productive by automatically zooming input UI up to 150 percent by default when touch is used.

For example, an e-mail program could display UI at normal touchable size, but zoom the input UI to 150 percent to compose messages.

Screen shot of Outlook message in large font

In this example, the input UI is zoomed to 150 percent.
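
Developers: If the input UI is a rich edit control, one lightweight way to get the 150 percent zoom is the EM_SETZOOM message, which takes the zoom ratio as a numerator/denominator pair. A minimal sketch (SetComposeZoom and the touchInput flag are illustrative, not Windows APIs):

    #include <windows.h>
    #include <richedit.h>

    // Zoom a rich edit control to 150% (numerator 3, denominator 2) while the user is
    // composing with touch, and back to the default otherwise. hwndRichEdit is assumed
    // to be a rich edit control your program has already created.
    void SetComposeZoom(HWND hwndRichEdit, bool touchInput)
    {
        if (touchInput)
            SendMessage(hwndRichEdit, EM_SETZOOM, 3, 2);   // 3/2 = 150%
        else
            SendMessage(hwndRichEdit, EM_SETZOOM, 0, 0);   // 0/0 turns zooming off
    }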

If you do only six things...
1.  Make your Windows programs have a great touch experience! Users should be able to perform your program's most important tasks efficiently using a finger (at least the tasks that don't involve a lot of typing or detailed pixel manipulation).
2.  For common controls, use the standard control sizing. For other controls, make sure they have at least a 23x23 pixel (13x13 DLU) click target, even if their static appearance is much smaller.
3.  Make use of hover, but don't make it the only way to perform an action. Hover isn't supported by most touchscreen technologies.
4.  To create a direct and engaging experience, have gestures take effect immediately, keep contact points under the user's fingers smoothly throughout the gesture, and have the effect of the gesture map directly to the user's motion.
5.  To create a natural and intuitive experience, support appropriate standard gestures and assign them their standard meanings. Use custom gestures for interactions unique to your program.
6.  Make sure your program provides the ability to reverse or correct any undesired actions—especially for destructive commands. Accidental actions are more likely when using touch.

Guidelines

Control usage

  • Prefer using common controls. Most common controls are designed to support a good touch experience.
  • Choose custom controls that are designed to support touch. You might need to have custom controls to support your program's special experiences. Choose custom controls that:
    • Can be sized large enough to be easily touchable.
    • When manipulated, move and react the way real-world objects move and react, such as by having momentum and friction.
    • Are forgiving by allowing users to easily correct mistakes.
    • Are forgiving of inaccuracy with clicking and dragging. Objects that are dropped near their destination should fall into the correct place.
    • Have feedback that is clearly visible even when the finger is over the control, such as a ripple effect.
  • Prefer constrained controls. Use constrained controls like lists and sliders whenever possible, instead of unconstrained controls like text boxes, to reduce the need for text input.
  • Provide appropriate default values. Select the safest (to prevent loss of data or system access) and most secure option by default. If safety and security aren't factors, select the most likely or convenient option, thereby eliminating unnecessary interaction.
  • Provide text auto-completion. Provide a list of most likely or recently input values to make text input much easier.
  • For important tasks that use multiple selection, if a standard multiple-selection list is normally used, provide an option to use a check box list instead.

Control sizing

  • For common controls, use the recommended control sizes. The recommended control sizing satisfies the 23x23 pixel (13x13 DLU) minimum size, except for check boxes and radio buttons (their text width compensates somewhat), spin controls (which aren't usable with touch but are redundant), and splitters.

    Screen shot of controls of recommended size

    The recommended control sizes are easily touchable.

  • For command buttons used for the most important or frequently used commands, use a minimum size of 40x40 pixels (23x22 DLUs) whenever practical. Doing so yields better speed and accuracy, and also feels more comfortable to users.

    Screen shot of small, medium, and large buttons

    Whenever practical, use larger command buttons for important or frequently used commands.

  • For other controls:
    • Use larger click targets. For small controls, make the target size larger than the statically visible UI element. For example, 16x16 pixel icon buttons can have 23x23 pixel click targets, and text elements can have selection rectangles 8 pixels wider than the text and 23 pixels high.

      Correct:
      Screen shot of Help button with more target area

      Incorrect:
      Screen shot of icon with small target area

      Correct:
      Screen shot of icon with larger target area

      In the correct examples, the click targets are larger than the statically visible UI elements.

    • Use redundant click targets. It's acceptable for click targets to be smaller than the minimum size if that control has redundant functionality.

      For example, the progressive disclosure triangles used by the tree view control are only 6x9 pixels, but their functionality is redundant with their associated item labels.

      Screen shot of tree view with 'expand' triangles

      The tree view triangles are too small to be easily touchable, but they are redundant in functionality with their larger associated labels.

  • Respect system metrics. Use system metrics for all sizes—don't hardwire sizes. If necessary, users can change the system metrics or dpi to accommodate their needs. However, treat this as a last resort because users shouldn't normally have to adjust system settings to make UI usable.

    Screen shot of menus with normal and large sizing

    In this example, the system metric for menu height was changed.
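
Developers: A short sketch of reading sizes from system metrics instead of hardwiring them. The UiMetrics structure is illustrative; which SM_* values matter depends on your UI.

    #include <windows.h>

    // Query the user's current system metrics instead of hard-coded pixel sizes.
    // If the user enlarges menus or scroll bars to make touch easier, the UI follows.
    struct UiMetrics
    {
        int menuHeight;
        int scrollBarWidth;
        int smallIconSize;
    };

    UiMetrics QueryUiMetrics()
    {
        UiMetrics m;
        m.menuHeight     = GetSystemMetrics(SM_CYMENU);      // menu bar height
        m.scrollBarWidth = GetSystemMetrics(SM_CXVSCROLL);   // vertical scroll bar width
        m.smallIconSize  = GetSystemMetrics(SM_CXSMICON);    // recommended small icon size
        return m;
    }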

Control layout and spacing

  • Choose a layout that places controls close to where they are most likely going to be used. Keep task interactions within a small area whenever possible. Avoid long distance hand movements, especially for common tasks and for drags.
  • Use the recommended spacing. The recommended spacing is touch-friendly. However, if your program can benefit from larger sizing and spacing, consider the recommended sizing and spacing to be minimums when appropriate.
  • Interactive controls should either be touching or preferably have at least 5 pixels (3 DLUs) of space between them. Doing so prevents confusion when users tap outside their intended target.
  • Consider adding more than the recommended vertical spacing within groups of controls, such as command links, check boxes, and radio buttons, as well as between the groups. Doing so makes them easier to differentiate.
  • Consider adding more than the recommended vertical spacing dynamically when an action is initiated using touch. Doing so makes objects easier to differentiate, but without taking more space when using a keyboard or mouse. Increase the spacing by a third of its normal size or at least 8 pixels.

    Screen shot of lists with normal and extra spacing

    In this example, Windows 7 taskbar Jump Lists are more spacious when displayed using touch.

Interaction

  • Make hover redundant. Take full advantage of hover, but only in ways that are not required to perform an action. This usually means that the action can also be performed by clicking, but not necessarily in exactly the same way. Hover isn't supported by most touch technologies, so users with such touchscreens can't perform any tasks that require hovering.
  • For programs that need text input, fully integrate the touch keyboard feature by:
    • Providing appropriate default values for user input.
    • Providing auto-complete suggestions when appropriate.

    Developers: For more information about integrating the touch keyboard, see the ITextInputPanel interface.

  • Allow users to zoom the content UI if your program has tasks that require editing text. Consider automatically zooming to 150 percent when touch is used.
  • Provide smooth, responsive panning and zooming wherever appropriate. Redraw quickly after a pan or zoom to remain responsive. Doing so is necessary to make direct manipulation feel truly direct.
  • During a pan or zoom, make sure that the contact points stay under the finger throughout the gesture. Otherwise, the pan or zoom is difficult to control.
  • Because gestures are memorized, assign them meanings that are consistent across programs. Don't give different meanings to gestures with fixed semantics. Use an appropriate program-specific gesture instead.

Windows Touch gestures

Use the following gestures whenever applicable to your program. These gestures are the most useful and natural. A minimal sketch of handling them in a Win32 program follows the list.

  • Panning
    Entry state: One or two fingers in contact with the screen.
    Motion: Drag, with any additional fingers remaining in the same position relative to each other.
    Exit state: Last finger up ends the gesture.
    Effect: Move the underlying object directly and immediately as the fingers move. Be sure to keep the contact point under the finger throughout the gesture.

    Figure of fingers touching and dragging

    The pan gesture.

  • Zoom
    Entry state: Two fingers in contact with the screen at the same time.
    Motion: Fingers move apart or together (pinch) along an axis.
    Exit state: Any finger up, or the fingers breaking the axis, ends the gesture.
    Effect: Zoom the underlying object in or out directly and immediately as the fingers separate or approach on the axis. Be sure to keep the contact points under the finger throughout the gesture.

    Figure of fingers moving apart

    The zoom gesture.

    If animated carefully, allowing users to zoom while panning can be a powerful, efficient interaction.
  • Rotate
    Entry state: Two fingers in contact with the screen at the same time.
    Motion: One or both fingers rotate around the other, moving perpendicular to the line between them.
    Exit state: Any finger up ends the gesture.
    Effect: Rotate the underlying object the same amount as the fingers have rotated. Be sure to keep the contact points under the finger throughout the gesture.

    Figure of fingers moving in circular motion

    The rotation gesture.

    Rotation makes sense only for certain types of objects, so it's not mapped to a system Windows interaction.

    Rotation is often done differently by different people. Some people prefer to rotate one finger around a pivot finger, while others prefer to rotate both fingers in a circular motion. Most people use a combination of the two, with one finger moving more than the other. While smooth rotation to any angle is the best interaction, in many contexts, such as photo viewing, it is best to settle to the nearest 90 degree rotation once the user lets go. In photo editing, a small rotation can be used to straighten the photo.

  • Two-finger tap
    Entry state: Two fingers in contact with the screen at the same time.
    Motion: No motion.
    Exit state: Any finger up ends the gesture.
    Effect: Alternately zooms in on or restores the default view of the object between the fingers.

    Figure of two fingers tapping

    The two-finger tap gesture.

  • Press and tap
    Entry state: One finger in contact with the screen, followed by a second finger.
    Motion: No motion.
    Exit state: Second finger up ends the gesture.
    Effect: Performs a right click for the object under the first finger.

    Figure of finger pressing, second finger tapping

    The press and tap gesture.
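
Developers: A minimal sketch of receiving these gestures through the Windows 7 WM_GESTURE message. The PanBy, ZoomAround, RotateTo, ToggleZoomLevel, and ShowContextMenu calls are placeholders for your own object-manipulation code; SetGestureConfig, GetGestureInfo, and CloseGestureInfoHandle are the Windows gesture APIs.

    #define _WIN32_WINNT 0x0601           // Windows 7 gesture APIs
    #include <windows.h>

    // Placeholders for your own object-manipulation code (not Windows APIs).
    void PanBy(int dx, int dy);
    void ZoomAround(POINT center, ULONGLONG fingerDistance);
    void RotateTo(double radians);
    void ToggleZoomLevel(POINT center);
    void ShowContextMenu(POINT pt);

    // Opt in to all gestures once, after the window is created.
    void EnableGestures(HWND hwnd)
    {
        GESTURECONFIG gc = { 0, GC_ALLGESTURES, 0 };
        SetGestureConfig(hwnd, 0, 1, &gc, sizeof(GESTURECONFIG));
    }

    // Call from the window procedure when msg == WM_GESTURE.
    LRESULT HandleGesture(HWND hwnd, WPARAM wParam, LPARAM lParam)
    {
        GESTUREINFO gi = { sizeof(GESTUREINFO) };
        if (!GetGestureInfo((HGESTUREINFO)lParam, &gi))
            return DefWindowProc(hwnd, WM_GESTURE, wParam, lParam);

        static POINTS lastPan = {};
        POINT pt = { gi.ptsLocation.x, gi.ptsLocation.y };    // screen coordinates

        switch (gi.dwID)
        {
        case GID_PAN:
            if (!(gi.dwFlags & GF_BEGIN))   // move by the delta since the last message
                PanBy(gi.ptsLocation.x - lastPan.x, gi.ptsLocation.y - lastPan.y);
            lastPan = gi.ptsLocation;
            break;
        case GID_ZOOM:
            ZoomAround(pt, gi.ullArguments);   // ullArguments holds the finger distance
            break;
        case GID_ROTATE:
            RotateTo(GID_ROTATE_ANGLE_FROM_ARGUMENT((DWORD)gi.ullArguments));
            break;
        case GID_TWOFINGERTAP:
            ToggleZoomLevel(pt);
            break;
        case GID_PRESSANDTAP:
            ShowContextMenu(pt);
            break;
        default:
            // GID_BEGIN, GID_END, and anything unhandled go to DefWindowProc,
            // which closes the gesture handle itself.
            return DefWindowProc(hwnd, WM_GESTURE, wParam, lParam);
        }
        CloseGestureInfoHandle((HGESTUREINFO)lParam);
        return 0;
    }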

Forgiveness

  • Provide an Undo command. Ideally, you should provide a simple way to undo all commands, but your program may have some commands whose effect cannot be undone.
  • Whenever practical, provide good feedback on finger down, but don't take actions until finger up. Doing so allows users to correct mistakes before they make them.
  • Whenever practical, allow users to correct mistakes easily. If an action takes effect on finger up, allow users to correct mistakes by sliding while the finger is still down.
  • Whenever practical, indicate that a direct manipulation can't be performed by resisting the movement. Allow the movement to happen, but have the object settle back in place when released to clearly indicate that the action was recognized but can't be done.
  • Have clear physical separation between frequently used commands and destructive commands. Otherwise, users might touch destructive commands accidentally. A command is considered destructive if its effect is widespread and either it cannot be easily undone or the effect isn't immediately noticeable.
  • Confirm commands for risky actions or commands that have unintended consequences. Use a confirmation dialog box for this purpose.
  • Consider confirming any other actions that users tend to do accidentally when using touch, and which either go unnoticed or are difficult to undo. Normally, these are called routine confirmations and are discouraged based on the assumption that users don't often issue such commands by accident with a mouse or keyboard. To prevent unnecessary confirmations, present these confirmations only if the command was initiated using touch.

    Screen shot of Yes/No buttons on Delete dialog box

    Routine confirmations are acceptable for interactions that users often do accidentally using touch.

    Developers: You can distinguish between mouse events and touch events using the GetMessageExtraInfo API.
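
    A sketch of that check, based on the signature Windows places in the extra info of mouse messages synthesized from pen or touch; IsCurrentMessageFromTouch is an illustrative helper.

        #include <windows.h>

        // Mouse messages synthesized from pen or touch carry a signature in the extra info:
        // the upper bits are 0xFF5157xx, and bit 0x80 distinguishes touch (set) from pen (clear).
        const DWORD MI_WP_SIGNATURE = 0xFF515700;
        const DWORD SIGNATURE_MASK  = 0xFFFFFF00;
        const DWORD TOUCH_FLAG      = 0x80;

        bool IsCurrentMessageFromTouch()
        {
            DWORD extra = (DWORD)GetMessageExtraInfo();
            return (extra & SIGNATURE_MASK) == MI_WP_SIGNATURE && (extra & TOUCH_FLAG) != 0;
        }

        // Example: only show a confirmation for a touch-initiated delete.
        // if (IsCurrentMessageFromTouch()) { /* confirm before deleting */ }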

Documentation

When referring to touch:

  • Refer to the user's hand, when used as an input device, as one or more fingers.
  • Refer generically to the keyboard, mouse, trackball, pen, or finger as an input device.
  • Use tap (and double-tap) instead of click when documenting procedures specific to using a finger or pen. Tap means to press the screen and then lift before a hold time. It may or may not be used to generate a mouse click. For interactions that don't involve the finger or pen, continue to use click.
  • Touchscreen and touchpad are single, compound words. Don't refer to them as touch screen and touch pad.