Guidelines for common user interactions (Windows Store apps)
These guidelines will help you create intuitive and immersive user interaction experiences for your Windows Store app that expose consistent functionality for all users, no matter what device or input method is used.
While Windows 8 provides unique and distinctive user interactions that are optimized for touch, more familiar and established input devices and methods, such as mouse, keyboard, and pen/stylus, are still fully supported.
So take advantage of these new and compelling interaction capabilities as you create your Windows Store app for Windows 8.
First and foremost, design your app with the expectation that touch will be the primary input method of your users. Support for mouse and pen/stylus requires no additional work, as Windows 8 provides it for free.
Keep in mind that a UI optimized for touch is not necessarily superior to a traditional UI, as both provide advantages and disadvantages that are unique to the technology and application. In the move to a touch-first UI, it is important to understand the core differences between touch, mouse, and pen/stylus input. Do not take familiar mouse and pen/stylus properties and behaviors for granted, as touch in Windows 8 does more than simply emulate mouse functionality.
You will find throughout these guidelines that touch input requires a different approach to UI design.
Instead of working through an indirect pointing device, touch interaction supports the direct manipulation of controls and objects. Both touch input and visual feedback are provided through a single device: the display.
The following table shows some of the differentiating factors that you should consider when designing touch-optimized Windows Store apps.
| Factor | Touch interactions | Mouse, keyboard, pen/stylus interactions |
| --- | --- | --- |
| Precision | The contact area of a fingertip is greater than a single x-y coordinate, which increases the chances of unintended command activations. | The mouse and pen/stylus supply a precise x-y coordinate. |
| | The shape of the contact area changes throughout the movement. | Mouse movements and pen/stylus strokes supply precise x-y coordinates. Keyboard focus is explicit. |
| | There is no mouse cursor to assist with targeting. | The mouse cursor, pen/stylus cursor, and keyboard focus all assist with targeting. |
| Human anatomy | Fingertip movements are inherently imprecise, as a straight-line motion with one or more fingers is difficult due to the curvature of hand joints and the number of joints involved in the motion. | It's easier to perform a straight-line motion with the mouse or pen/stylus because the hand that controls them travels a shorter physical distance than the cursor on the screen. |
| | Some areas on the touch surface of a display device can be difficult to reach due to finger posture and the user's grip on the device. | The mouse and pen/stylus can reach any part of the screen, while any control can be accessed by the keyboard through tab order. |
| | Objects might be obscured by one or more fingertips or the user's hand. This is known as occlusion. | Indirect input devices do not cause occlusion. |
| Object state | Touch uses a two-state model: the touch surface of a display device is either touched (on) or not (off). There is no hover state that can trigger additional visual feedback. | A mouse, pen/stylus, and keyboard all expose a three-state model: up (off), down (on), and hover (focus). Hover lets users explore and learn through tooltips associated with UI elements. Hover and focus effects can relay which objects are interactive and also help with targeting. |
| Rich interaction | Supports multi-touch: multiple input points (fingertips) on a touch surface. | Supports a single input point. |
| | Supports direct manipulation of objects through gestures such as tapping, dragging, sliding, pinching, and rotating. | No support for direct manipulation, as mouse, pen/stylus, and keyboard are indirect input devices. |
Mouse input has had the benefit of more than 25 years of refinement. Features such as hover-triggered tooltips have been designed to solve UI exploration specifically for mouse, pen/stylus, and keyboard input. With Windows Touch, UI features such as this are designed in ways that are specific to the advantages provided by touch input without compromising the user experience for other devices.
Appropriate visual feedback during interactions with your app helps users recognize, learn, and adapt to how their interactions are interpreted by both the app and Windows 8. Visual feedback can indicate successful interactions, relay system status, improve the sense of control, reduce errors, help users understand the system and input device, and encourage interaction.
Visual feedback is critical when relying on touch input for activities that require accuracy and precision based on location. Displaying feedback whenever and wherever touch input is detected will help the user understand any custom targeting heuristics defined by your app and its controls.
The following techniques are used to enhance the immersive experience of Windows Store apps.
Targeting is optimized through:
- Touch target sizes
Clear size guidelines ensure that applications provide a comfortable UI that contains objects and controls that are easy and safe to target.
- Contact geometry
The entire contact area of the finger is used to determine the most likely target object.
Items within a group are easily re-targeted by dragging the finger between them (for example, radio buttons). The current item is activated when the touch is released.
Densely packed items (for example, hyperlinks) are easily re-targeted by pressing the finger down and, without sliding, rocking it back and forth over the items. Due to occlusion, the current item is identified through a tooltip or the status bar and is activated when the touch is released.
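The contact-geometry idea above can be sketched as a hit-testing heuristic: rather than resolving a touch to a single x-y point, score each candidate target by how much of the fingertip's contact rectangle overlaps it. This is a minimal illustration only; the function and type names are hypothetical, and the actual Windows 8 targeting heuristics are considerably more elaborate.

```typescript
// Hypothetical sketch of contact-geometry targeting: the target that the
// fingertip's contact rectangle overlaps most is the most likely intent.

interface Rect { x: number; y: number; width: number; height: number; }

// Area of the intersection of two rectangles (0 if they do not overlap).
function overlapArea(a: Rect, b: Rect): number {
  const w = Math.min(a.x + a.width, b.x + b.width) - Math.max(a.x, b.x);
  const h = Math.min(a.y + a.height, b.y + b.height) - Math.max(a.y, b.y);
  return w > 0 && h > 0 ? w * h : 0;
}

// Return the index of the most likely target, or -1 if the contact
// area touches none of them.
function mostLikelyTarget(contact: Rect, targets: Rect[]): number {
  let best = -1;
  let bestArea = 0;
  targets.forEach((t, i) => {
    const area = overlapArea(contact, t);
    if (area > bestArea) { bestArea = area; best = i; }
  });
  return best;
}
```

For example, a fingertip that partially covers two adjacent buttons resolves to the one it covers more, instead of activating whichever happens to contain the contact's center point.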
Design for sloppy interactions by using:
- Snap-points that can make it easier to stop at desired locations when interacting with content.
- Directional "rails" that can assist with vertical or horizontal panning, even when the hand moves in a slight arc. For more information, see Guidelines for panning.
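Both aids for sloppy interactions can be sketched with simple geometry. The rail logic below locks a drag to the horizontal or vertical axis when the finger's direction stays within a tolerance angle of that axis, and the snap-point logic pulls the final pan offset to the nearest snap point within range. The function names and threshold values are illustrative assumptions, not the system's actual parameters.

```typescript
// Hypothetical sketch of directional rails and snap points.

// Lock a movement delta to an axis when its direction is close to that axis.
function applyRails(dx: number, dy: number, toleranceDeg = 20): { dx: number; dy: number } {
  const angle = Math.abs(Math.atan2(dy, dx)) * 180 / Math.PI; // 0..180 degrees
  if (angle <= toleranceDeg || angle >= 180 - toleranceDeg) {
    return { dx, dy: 0 }; // horizontal rail: discard the slight vertical arc
  }
  if (Math.abs(angle - 90) <= toleranceDeg) {
    return { dx: 0, dy }; // vertical rail
  }
  return { dx, dy };      // diagonal enough: free panning
}

// Snap a final pan offset to the nearest snap point, if one is within range.
function applySnapPoints(offset: number, snapPoints: number[], range = 50): number {
  let best = offset;
  let bestDist = range;
  for (const p of snapPoints) {
    const d = Math.abs(p - offset);
    if (d <= bestDist) { bestDist = d; best = p; }
  }
  return best;
}
```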
Finger and hand occlusion is avoided through:
- Size and positioning of UI
Make UI elements big enough so that they cannot be completely covered by a fingertip contact area.
Position menus and pop-ups above the contact area whenever possible.
Show tooltips when finger contact is maintained on an object. This is useful for describing object functionality (drag the fingertip off the object to avoid invoking it).
For small objects, offset tooltips so they are not covered by the fingertip contact area. This is helpful for targeting.
- Handles for precision
Where precision is required (for example, text selection), provide selection handles that are offset to improve accuracy. For more information, see Guidelines for selecting text and images.
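The positioning advice above can be sketched as a small placement routine: prefer a spot above the fingertip contact area so the finger does not cover the UI, fall back to below the contact when there is no room, and clamp to the screen. All names and sizes here are illustrative assumptions.

```typescript
// Hypothetical sketch of occlusion-aware tooltip/pop-up placement.

interface Size { width: number; height: number; }
interface Point { x: number; y: number; }

function placeTooltip(contact: Point, contactRadius: number,
                      tip: Size, screen: Size): Point {
  // Prefer a position centered above the contact, offset past the fingertip.
  let x = contact.x - tip.width / 2;
  let y = contact.y - contactRadius - tip.height;
  if (y < 0) {
    y = contact.y + contactRadius; // no room above: fall back below the finger
  }
  // Clamp horizontally so the tooltip stays on screen.
  x = Math.max(0, Math.min(x, screen.width - tip.width));
  return { x, y };
}
```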
Avoid timed mode changes in favor of direct manipulation, which simulates the direct, real-time physical handling of an object. The object responds as the fingers are moved.
A timed interaction, on the other hand, occurs after a touch interaction. Timed interactions typically depend on invisible thresholds like time, distance, or speed to determine what command to perform. Timed interactions have no visual feedback until the system performs the action.
Direct manipulation provides a number of benefits over timed interactions:
- Instant visual feedback during interactions makes users feel more engaged, confident, and in control.
- Direct manipulations make it safer to explore a system because they are reversible—users can easily step back through their actions in a logical and intuitive manner.
- Interactions that directly affect objects and mimic real world interactions are more intuitive, discoverable, memorable, and don't rely on obscure or abstract interactions.
- Timed interactions can be difficult to perform, as users must reach arbitrary and invisible thresholds.
In addition, the following are strongly recommended:
- Manipulations should not be distinguished by the number of fingers used.
- Interactions should support compound manipulations. For example, pinch to zoom while dragging the fingers to pan.
- Interactions should not be distinguished by time. The same interaction should have the same outcome regardless of the time taken to perform it. Time-based activations introduce mandatory delays for users and detract from both the immersive nature of direct manipulation and the perception of system responsiveness.
Note An exception to this is where specific timed interactions are used to assist in learning and exploration (for example, press and hold).
- Appropriate descriptions and visual cues have a great effect on the use of advanced interactions.
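The compound-manipulation recommendation can be sketched in a few lines: from the start and current positions of two touch points, derive a zoom factor (the change in distance between the fingers) and a pan offset (the movement of their midpoint) in one pass, so the user can pinch and drag simultaneously. This is a simplified sketch with hypothetical names; a real implementation would also handle rotation and run per input frame.

```typescript
// Hypothetical sketch of a compound pinch-zoom + pan manipulation.

interface Point { x: number; y: number; }

function compoundManipulation(start: [Point, Point], current: [Point, Point]) {
  const dist = (a: Point, b: Point) => Math.hypot(b.x - a.x, b.y - a.y);
  const mid = (a: Point, b: Point) => ({ x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 });

  // Zoom: ratio of current finger spread to the starting spread.
  const scale = dist(current[0], current[1]) / dist(start[0], start[1]);

  // Pan: how far the midpoint between the fingers has moved.
  const m0 = mid(start[0], start[1]);
  const m1 = mid(current[0], current[1]);
  return { scale, panX: m1.x - m0.x, panY: m1.y - m0.y };
}
```

Because both values come from the same pair of contacts, no mode switch is needed: a pure pinch yields scale with zero pan, a pure drag yields pan with scale 1, and any mixture yields both at once.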
As previously discussed, each input method has its own set of strengths and weaknesses. A mouse-based UI is not optimized for touch input. In turn, a touch-optimized UI design must recognize and consider the design implications for mouse, pen/stylus, and keyboard users. In short, Windows 8 touch-optimized applications must support efficient and intuitive interactions that expose equivalent functionality.
As an example, panning is a very different experience for touch users compared to scrolling for mouse, pen/stylus, and keyboard users. Panning is perfectly suited to touch because of the ability to drag and slide content with inertia physics and animations. This type of action is much more difficult to perform with a mouse or pen/stylus, and impossible with the keyboard, all of which have familiar and well established scrolling solutions.
In addition, the keyboard is optimized for text entry and editing and shortcuts to commands.
For these reasons, and depending on the requirements of your app, different interaction models might be better suited to the unique characteristics of an input type. The following table describes common interactions and how they map across touch, mouse, keyboard, and pen/stylus for Windows 8.
| Interaction | Touch | Mouse | Keyboard | Pen/stylus |
| --- | --- | --- | --- | --- |
| Select | Swipe opposite the scrolling direction (see Guidelines for cross-slide) | Right-click | Spacebar | Swipe opposite the scrolling direction (see Guidelines for cross-slide) |
| Show app bar | Swipe from top or bottom edge | Right-click | Windows Logo Key+Z, menu key | Swipe from top or bottom edge |
| Context menu | Tap on selected text, press and hold | Right-click | Menu | Tap on selected text, press and hold |
| Scrolling short distance | Slide | Scroll bar, arrow keys, left-click and slide | Arrow keys | Scroll bar |
| Scrolling long distance | Slide (including inertia) | Scroll bar, mouse wheel, left-click and slide | Page up, Page down | Scroll bar |
| Rearrange (drag) | Slide opposite the scrolling direction past a distance threshold (see Guidelines for cross-slide) | Left-click and slide | Ctrl+C, Ctrl+V | Slide opposite the scrolling direction past a distance threshold (see Guidelines for cross-slide) |
| Zoom | Pinch, stretch | Mouse wheel, Ctrl+mouse wheel, UI command | Ctrl+Plus(+)/Minus(-) | UI command |
| Rotate | Turn | Ctrl+Shift+mouse wheel, UI command | Ctrl+Plus(+)/Minus(-) | UI command |
| Insert cursor/select text | Tap, tap on gripper | Left-click+slide, double-click | Arrow keys, Shift+arrow keys, Ctrl+arrow keys, and so on | Tap, tap on gripper |
| More information | Press and hold | Hover (with time threshold) | Move focus rectangle (with time threshold) | Press and hold |
| Interaction feedback | Touch visualizations | Cursor movement, cursor changes | Focus rectangles | Pen visualizations |
| Move focus | N/A | N/A | Arrow keys, Tab | N/A |
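One common way to expose equivalent functionality per device is to branch on the pointer type in a single handler. The standard pointer-event model reports a `pointerType` of `"touch"`, `"pen"`, or `"mouse"`, so one code path can map each device to its equivalent interaction; the sketch below models the context-menu row of the table above, with hypothetical names and an illustrative press-and-hold threshold.

```typescript
// Hypothetical sketch: one decision function that maps right-click (mouse)
// and press-and-hold (touch/pen) to the same "show context menu" command.

type PointerType = "touch" | "pen" | "mouse";

interface InputInfo {
  pointerType: PointerType;
  isRightButton?: boolean;  // meaningful for mouse input
  holdDurationMs?: number;  // how long a touch/pen contact was held
}

const HOLD_THRESHOLD_MS = 500; // illustrative press-and-hold threshold

function shouldShowContextMenu(input: InputInfo): boolean {
  if (input.pointerType === "mouse") {
    return input.isRightButton === true; // right-click
  }
  // Touch and pen: press and hold past the time threshold.
  return (input.holdDurationMs ?? 0) >= HOLD_THRESHOLD_MS;
}
```

Dispatching on pointer type this way keeps the command itself device-neutral, so adding or tuning an input method touches only the mapping, not the command logic.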
- Responding to user interaction
- Gestures, manipulations, and interactions
- Touch interaction design
Build date: 11/29/2012