Responding to user interaction (XAML)

Learn about the user interaction platform, the input sources (including touch, touchpad, mouse, pen/stylus, and keyboard), modes (touch keyboard, mouse wheel, pen eraser, and so on), and supported user interactions.

We explain how basic input and interaction functionality is provided for free, how to customize the user interaction experience, and how the interaction patterns are shared across language frameworks.

Using guidelines, best practices, and examples, we show you how to take full advantage of the interaction capabilities to build apps with intuitive, engaging, and immersive user experiences.

Tip  The info in this topic is specific to developing apps using C++, C#, or Visual Basic.

See Responding to user interaction (HTML) for apps using JavaScript.

See Responding to user interaction (DirectX and C++) for apps using DirectX with C++.

 

Prerequisites: If you're new to developing apps, have a look through these topics to get familiar with the technologies discussed here.

Create your first Windows Store app using C# or Visual Basic

Create your first Windows Store app using C++

Roadmap for Windows Runtime apps using C# or Visual Basic

Roadmap for Windows Store apps using C++

Learn about events with Events and routed events overview

App features, start to finish: Explore this functionality in more depth as part of our App features, start to finish series

User interaction, start to finish (XAML)

User interaction customization, start to finish (XAML)

User experience guidelines:

The platform control libraries (HTML and XAML) provide the full Windows user interaction experience, including standard interactions, animated physics effects, and visual feedback. If you don't need customized interaction support, use these built-in controls.

If the platform controls are not sufficient, these user interaction guidelines can help you provide a compelling and immersive interaction experience that is consistent across input modes. These guidelines are primarily focused on touch input, but they are still relevant for touchpad, mouse, keyboard, and stylus input.

Samples: See this functionality in action in our app samples.

Input: Device capabilities sample

Input sample

Input: Gestures and manipulations with GestureRecognizer

XAML scrolling, panning, and zooming sample

Input: Ink sample

Overview of the user interaction platform

Design your apps with touch interactions in mind: Touch input is supported by a large and growing variety of devices, and touch interactions are a fundamental aspect of the user experience.

Because touch is a primary mode of interaction for users, Windows 8 and Windows Phone are optimized for touch input to make them responsive, accurate, and easy to use. Rest assured, tried and true input modes (such as mouse, pen, and keyboard) are fully supported and functionally consistent with touch (see Gestures, manipulations, and interactions). The speed, accuracy, and tactile feedback that traditional input modes provide are familiar and appealing to many users. These unique and distinctive interaction experiences have not been compromised.

Incorporating touch interactions into the design of your apps can greatly enhance the user experience. Be creative with the design of this experience, support the widest range of capabilities and preferences, appeal to the widest possible audience, and attract more customers to your app.

The user interaction platform is based on layers of functionality that progressively add flexibility and power:

Built-in controls

Take advantage of the built-in controls provided through the language frameworks to deliver the full platform user interaction experience. This functionality works well for the majority of apps.

The built-in controls are designed from the ground up to be touch-optimized while providing consistent and engaging interaction experiences across all input modes. They support a comprehensive set of gestures (press and hold, tap, slide, swipe, pinch, stretch, turn) that, coupled with direct manipulations (pan, zoom, rotate, drag) and realistic inertia behavior, enable a compelling and immersive interaction experience that follows best practices consistently across the Windows platform.

For more info on the control libraries, see Adding controls and content (Windows Store apps using C#/VB/C++ and XAML).

Views

Tweak the user interaction experience through the pan/scroll and zoom settings of your app views. An app view dictates how a user accesses and manipulates your app and its content. Views also provide behaviors such as inertia, content boundary bounce, and snap points.

Pan/scroll settings dictate how users navigate within a single view (such as a page of a magazine or book, the folder structure of a computer, a library of documents, or a photo album) when the content of the view doesn't fit within the viewport.

Note  

Zoom settings apply to both optical zoom and the SemanticZoom control. Semantic Zoom is a touch-optimized technique for presenting and navigating large sets of related data or content within a single view using two distinct modes of classification (or zoom levels). This functionality is analogous to panning and scrolling (which can be used in conjunction with Semantic Zoom) within a single view.

Using app views to modify the pan/scroll and zoom behaviors can provide a smoother interaction experience than is possible through the handling of pointer and gesture events as described later.
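
For example, here's a minimal sketch that configures a ScrollViewer for single-axis panning with optical zoom in code-behind; the content element name (contentPanel) is hypothetical, and the same properties can be set as attributes in XAML:

```csharp
using Windows.UI.Xaml.Controls;

// Configure pan/scroll and zoom behavior on a view's ScrollViewer.
var viewer = new ScrollViewer
{
    HorizontalScrollMode = ScrollMode.Enabled,                 // pan horizontally
    VerticalScrollMode = ScrollMode.Disabled,                  // lock the vertical axis
    HorizontalScrollBarVisibility = ScrollBarVisibility.Auto,
    ZoomMode = ZoomMode.Enabled,                               // allow optical zoom
    MinZoomFactor = 0.5f,
    MaxZoomFactor = 4.0f,
    Content = contentPanel                                     // hypothetical content element
};
```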

For more info on app views, see Defining layouts and views.

For more info on zooming, see Guidelines for optical zoom and resizing or Guidelines for Semantic Zoom.

For more info on panning/scrolling, see Guidelines for panning.

Pointer, gesture, and manipulation events

A pointer is a generic input type with a unified event mechanism that exposes basic info (such as screen position) on the active input source (touch, touchpad, mouse, or pen). Gestures range from simple, static interactions like tapping to more complicated manipulations like zooming, panning, and rotating. For more details, see Gestures, manipulations, and interactions.

Note  

Static gesture events are triggered after an interaction is complete. Manipulation gesture events indicate an ongoing interaction. Manipulation gesture events start firing when the user touches the element and continue until the user lifts the finger or the manipulation is canceled.

 

Access to the pointer and gesture events lets you apply the touch interaction design language to games, custom controls and feedback visuals, extended gestures, raw input data processing, and other custom interactions.

Pointer events

The following events correspond to lower-level gestures. Events at this level are similar to traditional mouse input events, but provide more information about the user input gesture and device.

Event | Description
PointerCaptureLost | Occurs when an element loses the pointer capture it previously held (see Pointer capture, later in this topic).
PointerEntered | Occurs after the pointer enters the bounds of an element.
PointerExited | Occurs after the pointer leaves the bounds of an element.
PointerMoved | Occurs when the pointer moves within the bounds of an element.
PointerPressed | Occurs when the pointer is pressed within the bounds of an element.
PointerReleased | Occurs when the pointer is released within the bounds of an element.
PointerWheelChanged | Occurs when the user changes the position of the mouse wheel.
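
For illustration, here's a minimal sketch of wiring up and handling a few of these events in a page's code-behind; touchTarget is a hypothetical element declared in the page's XAML:

```csharp
using Windows.UI.Xaml;
using Windows.UI.Xaml.Input;

public MainPage()
{
    this.InitializeComponent();
    touchTarget.PointerPressed += TouchTarget_PointerPressed;
    touchTarget.PointerReleased += TouchTarget_PointerReleased;
}

private void TouchTarget_PointerPressed(object sender, PointerRoutedEventArgs e)
{
    // React to the press, then mark the event handled so it
    // doesn't continue routing up the visual tree.
    e.Handled = true;
}

private void TouchTarget_PointerReleased(object sender, PointerRoutedEventArgs e)
{
    e.Handled = true;
}
```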

 

Using PointerRoutedEventArgs

All pointer events use PointerRoutedEventArgs for event data. In addition to the familiar Handled and OriginalSource properties, this class provides the following members:

Member | Description
Pointer property | Gets a Pointer object that identifies the input device and device type.
GetCurrentPoint method | Gets a PointerPoint object that provides extensive info about the pointer location and device state at the time of the event.
GetIntermediatePoints method | Gets a list of PointerPoint objects that represent the pointer locations and device states between the current and previous input events. This is useful for determining whether a series of pointer actions represents a more complex gesture.
KeyModifiers property | Indicates whether a modifier key, such as Ctrl or Shift, was pressed at the same time as the pointer event.
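
A sketch of a PointerMoved handler that uses these members (the handler name is hypothetical):

```csharp
using Windows.Devices.Input;
using Windows.System;
using Windows.UI.Input;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Input;

private void TouchTarget_PointerMoved(object sender, PointerRoutedEventArgs e)
{
    // Location and device state relative to the sender element.
    PointerPoint point = e.GetCurrentPoint(sender as UIElement);
    double x = point.Position.X;
    double y = point.Position.Y;
    bool leftPressed = point.Properties.IsLeftButtonPressed;

    // The kind of device that produced the event: Touch, Pen, or Mouse.
    PointerDeviceType deviceType = e.Pointer.PointerDeviceType;

    // Whether Ctrl was held down when the event occurred.
    bool ctrlPressed = (e.KeyModifiers & VirtualKeyModifiers.Control) != 0;
}
```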

 

Pointer capture

In some cases, you want an element to continue to receive PointerMoved events even when the pointer is no longer above the element. This is called pointer capture. It is useful, for example, when the user performs a drag operation that should not be interrupted simply because the user momentarily moves the pointer outside the bounds of an element. When an element has pointer capture, the PointerMoved event does not occur for any other elements that the pointer moves over.

You can use the CapturePointer, ReleasePointerCapture, and ReleasePointerCaptures methods to enable or disable pointer capture. This works even with multiple input devices or touch points. While pointer capture is in effect, you can use the PointerCaptures property to retrieve Pointer objects that represent each captured pointer.

Pointer capture requires that the pointer remain pressed (the left mouse button held down, or the finger or stylus in contact) for the duration of the movement. As soon as the button is released or the finger or stylus is lifted, pointer capture is lost and the PointerCaptureLost event occurs.
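
A minimal sketch of capturing a pointer on press and releasing it on release (handler names are hypothetical):

```csharp
private void TouchTarget_PointerPressed(object sender, PointerRoutedEventArgs e)
{
    var element = (UIElement)sender;
    // Capture the pointer so this element keeps receiving PointerMoved
    // events even if the pointer strays outside its bounds mid-drag.
    bool captured = element.CapturePointer(e.Pointer);
}

private void TouchTarget_PointerReleased(object sender, PointerRoutedEventArgs e)
{
    // Release the capture; PointerCaptureLost fires when capture ends.
    ((UIElement)sender).ReleasePointerCapture(e.Pointer);
}
```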

Gesture events

The following events correspond to high-level gestures. These events occur in addition to the lower-level events that occur for the same user actions. For example, the Tapped event occurs after the PointerPressed and PointerReleased events occur. In general, you should use one of the higher-level events unless you need to respond to a specific portion of the gesture. For example, you might need to perform different actions for press and release.

Event | Description
Tapped | Occurs when an element is clicked or tapped, unless its IsTapEnabled property is set to false.
RightTapped | Occurs when an element is right-clicked, or after a Holding event, unless the element's IsRightTapEnabled property is set to false.
DoubleTapped | Occurs when an element is clicked or tapped twice in succession, unless its IsDoubleTapEnabled property is set to false.
Holding | Occurs when the pointer is pressed and held on an element, unless the element's IsHoldingEnabled property is set to false. This event does not occur for mouse input; for the equivalent mouse interaction, use RightTapped instead.

 

Each of these events has its own event arguments type, but they all share some common members. In a handler for one of these events, you can determine whether the input came from mouse, touch, or pen by checking the PointerDeviceType property of the event argument. You can also determine the coordinates of the event relative to the screen or to a specified element by calling the GetPosition method of the event argument.
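
For example, a Tapped handler can branch on the device type and read the tap position like this (a sketch; the handler name is hypothetical):

```csharp
using Windows.Devices.Input;
using Windows.Foundation;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Input;

private void TouchTarget_Tapped(object sender, TappedRoutedEventArgs e)
{
    // Coordinates of the tap relative to the tapped element.
    Point position = e.GetPosition(sender as UIElement);

    if (e.PointerDeviceType == PointerDeviceType.Touch)
    {
        // Touch-specific response, such as larger visual feedback.
    }
}
```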

Manipulation events

The following events correspond to dynamic, multi-step interactions (manipulations). They fire repeatedly over the course of an interaction and provide the most detailed information about the user input.

Event | Description
ManipulationStarting | Occurs when the manipulation processor is first created.
ManipulationStarted | Occurs when an input device begins a manipulation on the UIElement.
ManipulationDelta | Occurs when the input device changes position during a manipulation.
ManipulationInertiaStarting | Occurs when the input device loses contact with the UIElement during a manipulation and inertia begins.
ManipulationCompleted | Occurs when a manipulation and its inertia on the UIElement are complete.

 

As their names imply, these events are suitable for using mouse, touch, and pen input to manipulate elements in your UI. For example, you can use these events to enable users to drag elements around the screen, and to provide realistic inertial effects. The various event argument classes provide detailed info on pointer position, change, and velocity, in addition to common properties such as PointerDeviceType, Handled, and OriginalSource.
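
A minimal drag sketch, assuming a XAML element named dragMe with its ManipulationMode set to permit translation (for example, ManipulationMode="TranslateX,TranslateY"):

```csharp
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;

private readonly TranslateTransform dragTransform = new TranslateTransform();

public MainPage()
{
    this.InitializeComponent();
    dragMe.RenderTransform = dragTransform;
    dragMe.ManipulationDelta += DragMe_ManipulationDelta;
}

private void DragMe_ManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
{
    // Accumulate the translation reported since the last delta event.
    // This handler also runs during inertia, giving natural follow-through.
    dragTransform.X += e.Delta.Translation.X;
    dragTransform.Y += e.Delta.Translation.Y;
}
```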

For a simple example using the manipulation events, see Quickstart: Touch input.

Hit testing

When a user input gesture occurs over a UIElement, the corresponding events occur for that element only if it is visible to the input. Otherwise, the gesture passes through the element to any underlying elements or parent elements in the visual tree.

Determining whether an element is visible to mouse, touch, and stylus input is called hit testing. There are several factors that affect hit testing, but you can determine whether a given element can raise input events by checking its IsHitTestVisible property. This property returns true only if the element meets the following criteria:

  • Its Visibility property value is Visible.
  • Its Background or Fill property value is not null (Nothing in Visual Basic). A null brush results in transparency and hit test invisibility; to make an element transparent yet hit testable, set the relevant property to Transparent instead of null.
  • If the element is a control, its IsEnabled property value is true.

Some controls have special rules for hit testing. For example, TextBlock and related controls have no Background property, but they are still hit testable within the entire region of their layout slot. Image and MediaElement controls are hit testable over their defined rectangle, regardless of transparent content. Also, most Panel classes are not hit testable, but can still handle user input events routed from elements that they contain.

You can determine which elements are located at the same position as a user input event, regardless of whether the elements are hit testable. To do this, call the FindElementsInHostCoordinates method. As the name implies, this method finds the elements at a location relative to a specified host element. However, you should be aware that applied transforms and layout changes can affect the coordinate system of an element, and therefore affect which elements are found at a given location.
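
A sketch of such a query from a page-level Tapped handler (the handler name is hypothetical):

```csharp
using System.Collections.Generic;
using Windows.Foundation;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;

private void Page_Tapped(object sender, TappedRoutedEventArgs e)
{
    Point position = e.GetPosition(this);

    // Find every element at the tap location, using this page as the host.
    IEnumerable<UIElement> elements =
        VisualTreeHelper.FindElementsInHostCoordinates(position, this);

    foreach (UIElement element in elements)
    {
        // Inspect or react to each element found at the location.
    }
}
```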

Customize your app experience

To customize and control your app's interaction experience even more, use the Windows Runtime platform APIs. For example, you might want to handle additional configuration options and hardware capabilities such as mouse devices with a right button, wheel button, tilt wheel, or X buttons and pen devices with barrel buttons and eraser tips.

Most interaction APIs are in the Windows.UI.Xaml and Windows.UI.Xaml.Input namespaces, with ink functionality exposed through Windows.UI.Input.Inking and input device data exposed through Windows.Devices.Input.
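
For example, a sketch that queries the available input hardware through Windows.Devices.Input before enabling device-specific features:

```csharp
using Windows.Devices.Input;

var touch = new TouchCapabilities();
var mouse = new MouseCapabilities();

bool hasTouch = touch.TouchPresent != 0;          // nonzero if a touch digitizer is present
uint maxContacts = touch.Contacts;                // maximum simultaneous touch contacts
bool hasWheel = mouse.VerticalWheelPresent != 0;  // nonzero if a vertical mouse wheel exists
```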

Before providing customized interaction experiences through new or modified gestures and manipulations, consider the following:

  • Does an existing gesture provide the experience your app requires? Don't provide a custom gesture to zoom or pan when you can simply adapt your app to support or interpret an existing gesture.
  • Does the custom gesture warrant the potential inconsistency across apps?
  • Does the gesture require specific hardware support, such as a minimum number of contacts?
  • Is there a control (such as ScrollViewer) that provides the interaction experience you need? If a control can intrinsically handle user input, is a custom gesture or manipulation required at all?
  • Does your custom gesture or manipulation result in a smooth and natural interaction experience?
  • Does the interaction experience make sense? If the interaction depends on such things as the number of contacts, velocity, time (not recommended), or inertia, ensure that these constraints and dependencies are consistent and discoverable. For example, how users interpret "fast" and "slow" can directly affect the functionality of your app and the user's satisfaction with the experience.
  • Is the gesture or manipulation affected by the physical abilities of your user? Is it accessible?
  • Will an app bar command or some other UI command suffice?

In short, create custom gestures and manipulations only if there is a clear, well-defined requirement and no basic gesture can support your scenario.

Conceptual

Developing Windows Store apps (VB/C#/C++ and XAML)

Touch interaction design