Mobile Ink Jots 6: Using Gestures in Tablet PC Applications

 

Mark Hopkins
Microsoft Corporation

March 2005

Applies to:
   Microsoft Windows Tablet PC Edition

Summary: A gesture is a pen movement or combination of movements that matches a pattern defined by a recognizer and that is assigned special behavior within an application. Gestures provide a powerful mechanism for making your application truly pen-centric. This article will help you implement gestures in your application, and it provides advice on when to use gestures and how to make them discoverable by your users. (11 printed pages)

Click here to download the sample code for this article. (156 KB)

Contents

What Are Gestures?
How Are Gestures Useful?
Driving Applications with a Pen
Implementing Gestures
Implementing Application Gestures
Implementing System Gestures
The Sample Application
StylusInput GestureRecognizer
Thoughts on Usability
Conflicts with Language Characters
Future Gestures
Conclusion

What Are Gestures?

In addition to interpreting Strokes as written characters, a recognizer can return results indicating that the user has entered a Gesture. A gesture is an ink stroke or pen movement that matches one of the glyphs defined by a recognizer. These glyphs are assigned special behavioral meaning in the application. When a Gesture object is recognized, it triggers an event that you can handle in your application.

There are two kinds of gestures: System Gestures and Application Gestures.

System Gestures

System gestures map to traditional mouse messages. These gestures enable a user of Microsoft Windows XP Tablet PC Edition to use traditional Windows applications and controls more easily on the Tablet PC. Applications developed for the Tablet PC must support these gestures in the same way that they support mouse actions; in existing applications, these events are simply interpreted as mouse events. The system gestures supported by the Tablet PC are listed in the following table.

Table 1. System Gestures supported by Microsoft Windows XP Tablet PC Edition

Tap
   Maps to a left-click on a mouse. This can be used to choose a command from the menu or toolbar, take action if a command is chosen, set an insertion point, or show selection feedback.

DoubleTap
   Maps to a double-click on a mouse. This can be used to select a word or open a file or folder.

RightTap
   Maps to a right-click on a mouse. This can be used to show a shortcut menu.

Drag
   Maps to a left drag on a mouse. This can be used to drag-select (such as in Microsoft Word when starting with an insertion point), select multiple words, drag (such as when dragging an object in Microsoft Windows), or scroll.

RightDrag
   Specifies a press and hold followed by a stroke, which maps to a right drag on a mouse. This can be used to drag (such as when dragging an object or selection followed by a shortcut menu).

HoldEnter
   Specifies a left click for a long time; it has no mouse equivalent. This is a fallback for when a user continues a press-and-hold action for a long time and the event reverts to a Tap.

HoldLeave
   Not implemented.

HoverEnter
   Maps to a mouse hover. This can be used to show a ToolTip, roll-over effects, or other mouse hover behaviors.

HoverLeave
   Maps to the mouse leaving a hover. This can be used to end a ToolTip, roll-over effects, or other mouse hover behaviors.

Application Gestures

Application gestures are supported at the level of an individual application, rather than operating system-wide. They are typically ink-based, though they need not be. The same application gesture can mean different things in different applications.

There are over forty application gestures defined by the Tablet PC platform. For example, there is a Scratchout gesture, a Check gesture, Chevron gestures, Line gestures, Shape gestures such as Star, Square, Circle, and Triangle, and so on. For a complete list of application gestures and their shapes, please see Application Gestures and Semantic Behavior in the Tablet PC SDK.

How Are Gestures Useful?

Gestures provide a natural interface for pen users. A Tablet PC is useful in places where a traditional desktop or laptop computer would be hard or even impossible to use; for instance, a Tablet PC can be used while standing in a hallway or on a manufacturing floor. To make applications more natural to use in such situations, the Tablet PC platform provides support for gestures. Gestures can provide alternatives to the traditional controls found in Microsoft Windows applications. Scroll bars in particular can be difficult to use with a pen: they are thin and sit at the edge of the screen, making them hard to target. You could, for example, use gestures to implement scrolling in your application, making it easier for users to drive your application with a pen.

Gestures can be more powerful than traditional keystrokes because a gesture can be targeted at an object. For example, the Scratchout gesture enables you to erase ink faster than you can with a mouse and keyboard. With a mouse, you must select the ink and then either press a key to issue the delete command or right-click and choose the delete command from a menu. The Scratchout gesture does all of this in one quick, natural movement of the pen.

Driving Applications with a Pen

Imagine a sales application that runs on a Tablet PC. A salesperson has limited time in front of a potential client. When the customer asks a question, the salesperson needs to be able to answer it immediately. The application could contain the whole catalog of products available. It could use gestures to enable the salesperson to move quickly through products by using Up and Down gestures to scroll through lists or forms. The salesperson could do a ChevronDown gesture, which looks much like a 'V,' on a particular product in a list and a video would launch showing how to use that product.
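
As a rough sketch of how that might look (this assumes an InkOverlay object called inkOverlay, as in the snippets later in this article; ScrollProducts, ShowVideoFor, and ProductAt are invented helpers, not part of any sample):

// Hypothetical sketch: scroll a product list with the Up and Down
// gestures, and launch a product video on ChevronDown.
inkOverlay.SetGestureStatus(ApplicationGesture.Up, true);
inkOverlay.SetGestureStatus(ApplicationGesture.Down, true);
inkOverlay.SetGestureStatus(ApplicationGesture.ChevronDown, true);

private void inkOverlay_Gesture(object sender, 
                                InkCollectorGestureEventArgs e)
{
  switch (e.Gestures[0].Id)
  {
    case ApplicationGesture.Up:
      ScrollProducts(-1);   // invented helper: scroll up one page
      break;

    case ApplicationGesture.Down:
      ScrollProducts(1);    // invented helper: scroll down one page
      break;

    case ApplicationGesture.ChevronDown:
      // Launch a demo video for the product under the gesture
      ShowVideoFor(ProductAt(e.Strokes.GetBoundingBox()));
      break;
  }
}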

Implementing Gestures

Adding support for gestures to your application is really quite simple.

Implementing Application Gestures

There are four steps involved in adding support for Application Gestures:

  1. Set the ink collector's CollectionMode property to collect gestures.

    Note   Where possible, setting the CollectionMode property to CollectionMode.GestureOnly enables the recognizer to concentrate on just recognizing gestures. This makes the recognizer much more accurate than when it must distinguish between gestures and handwriting.

  2. Set the gesture status to true for the gestures you are interested in handling.

  3. Declare an event handler for the Gesture event, whether InkCollector.Gesture, InkEdit.Gesture, InkOverlay.Gesture, or InkPicture.Gesture.

  4. Implement the Gesture event handler.

Assuming an InkOverlay object called inkOverlay, the code for the first three steps looks like this:

// Set ink collection mode
inkOverlay.CollectionMode = CollectionMode.GestureOnly;

// Set which application gestures we want to receive
inkOverlay.SetGestureStatus(ApplicationGesture.Scratchout, true);
inkOverlay.SetGestureStatus(ApplicationGesture.DownLeft, true);
inkOverlay.SetGestureStatus(ApplicationGesture.DownLeftLong, true);

// Declare the application Gesture event handler
inkOverlay.Gesture += 
    new InkCollectorGestureEventHandler(inkOverlay_Gesture);

The calls to SetGestureStatus indicate the gestures we want to collect. In this manner, you set the gesture status on individual gestures and only those gestures cause your Gesture event handler to be called. This is more efficient than calling SetGestureStatus with a value of ApplicationGesture.AllGestures.

To handle more than one gesture, your event handler needs to determine which gesture caused the event. The following code shows a Gesture event handler that handles three specific gestures: Scratchout, DownLeft, and DownLeftLong. It performs the same action for DownLeft and DownLeftLong because these gestures are very similar; rather than require the user to draw exactly the right one, we handle both the same way.

private void inkOverlay_Gesture(object sender, 
                                InkCollectorGestureEventArgs e)
{
  Gesture g = e.Gestures[0];

  // Which gesture did we get?
  switch (g.Id)
  {
    case ApplicationGesture.Scratchout:
      // Handle the Scratchout gesture
      break;

    case ApplicationGesture.DownLeft:
    case ApplicationGesture.DownLeftLong:
      // Handle DownLeft and DownLeftLong together 
      // since they are so similar
      break;

    default:
      // Just show a message box for all other gestures
      MessageBox.Show(g.Id.ToString());
      break;
  }
}

Implementing System Gestures

You implement a SystemGesture event handler to be notified of system gestures. Because system gestures are akin to mouse messages, it is not necessary to specify which gestures you want to receive. You will receive all of them. There are two steps required to set up a SystemGesture event handler.

  1. Declare an event handler for the SystemGesture event, whether InkCollector.SystemGesture, InkOverlay.SystemGesture, or InkPicture.SystemGesture.
  2. Implement the SystemGesture event handler.

This code declares the SystemGesture event handler in the form's constructor:

// Declare the SystemGesture event handler
inkOverlay.SystemGesture += 
  new InkCollectorSystemGestureEventHandler(inkOverlay_SystemGesture);

Here is the implementation of the SystemGesture event handler method:

private void inkOverlay_SystemGesture(object sender, 
                           InkCollectorSystemGestureEventArgs e)
{
  // Which system gesture did we get?
  switch (e.Id)
  {
    case SystemGesture.HoverEnter:
    case SystemGesture.HoverLeave:
      // These happen so often, just ignore them
      break;

    case SystemGesture.RightTap:
      statusBar1.Text = "RightTap";
      break;

    default:
      statusBar1.Text = e.Id.ToString();
      break;
  }
}

Some system gestures are duplicated as application gestures. You can try this by tapping the pen in the Gesture area of the sample. You will see the Tap system gesture displayed in the status bar. A few seconds later, a message box appears showing that you received a Tap application gesture as well. Notice how much longer it takes to get the application gesture.
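
In your own code, remember that an application gesture fires only if you enabled it. Assuming the inkOverlay object from the earlier snippets, enabling Tap takes one more call:

// Enable the Tap application gesture so that the duplicate
// notification can be observed alongside the Tap system gesture.
inkOverlay.SetGestureStatus(ApplicationGesture.Tap, true);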

The Sample Application

The sample application that accompanies this article implements a simple shopping list.


Figure 1. The sample application.

A CheckedListBox is pre-populated with some common grocery items. Check some of the items in the list and do a Scratchout gesture in the Gesture area. You should see a dialog box that asks whether you want to delete the checked items.

Figure 2. The Delete Items dialog.

Do a DownLeft gesture in the Gesture area. You are presented with a dialog box in which you can enter a new item for the shopping list. The dialog uses an InkOverlay to collect ink input.


Figure 3. The Add New Item dialog.

Notice the large buttons and how the InkOverlay area is labeled to show the user where to ink. The area is also large enough for the user to write comfortably. This is one way to make your application more pen-centric: make sure the controls are easy to target and use with a pen.

The InkOverlay is attached to a Label control in the Form2 constructor.

// Create inkOverlay1 and attach it to the Label
inkOverlay1 = new InkOverlay(label1);

By using an InkOverlay rather than a control such as InkEdit, we have more control over how ink is collected. This enables us to handle the Scratchout gesture and implement back-of-pen erase for editing ink.

We set the CollectionMode to accept ink and gestures. We also set which gestures we want to handle. In this case we handle only the Scratchout gesture. We also declare the Gesture event handler.

// Set ink collection mode
inkOverlay1.CollectionMode = CollectionMode.InkAndGesture;

// Add Gesture support
inkOverlay1.SetGestureStatus(ApplicationGesture.Scratchout, true);
inkOverlay1.Gesture += 
  new InkCollectorGestureEventHandler(inkOverlay1_Gesture);

This is followed in the constructor by code that sets up CursorInRange and CursorOutOfRange event handlers. These handlers show and hide the inking instructions; they also determine whether the pen is inverted and implement back-of-pen erase.
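
The handlers themselves are not printed in this article, but the pattern might look like the following sketch (the handler names and the lblInstructions label are assumptions for illustration, not the sample's actual code):

// Hide the instructions while the pen is in range, and switch the
// editing mode based on whether the pen is inverted.
private void inkOverlay1_CursorInRange(object sender, 
                           InkCollectorCursorInRangeEventArgs e)
{
  lblInstructions.Visible = false;   // hypothetical instructions label

  // An inverted pen means the user wants back-of-pen erase
  inkOverlay1.EditingMode = e.Cursor.Inverted ?
      InkOverlayEditingMode.Delete : InkOverlayEditingMode.Ink;
}

// Show the instructions again when the pen leaves the ink area
private void inkOverlay1_CursorOutOfRange(object sender, 
                           InkCollectorCursorOutOfRangeEventArgs e)
{
  lblInstructions.Visible = true;
}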

Here is the code for the Gesture event handler:

private void inkOverlay1_Gesture(object sender, 
                                 InkCollectorGestureEventArgs e)
{
  Gesture g = e.Gestures[0];

  // Only handle the gesture if the confidence level is Strong
  if (g.Confidence == RecognitionConfidence.Strong)
  {
    switch (g.Id)
    {
      // Did we, in fact, get the Scratchout gesture?
      case ApplicationGesture.Scratchout:
        // Hit test the ink
        Strokes hitStrokes =
               inkOverlay1.Ink.HitTest(e.Strokes.GetBoundingBox(), 30);
        inkOverlay1.Ink.DeleteStrokes(hitStrokes);
        label1.Refresh();
        break;

      // If we get NoGesture, cancel to keep the stroke as ink
      case ApplicationGesture.NoGesture:
        e.Cancel = true;
        break;

      default:
        // What gesture did we get?
        MessageBox.Show("Unexpected gesture: " + g.Id.ToString(), 
                "Unexpected Gesture");
        break;
    }
  }
  else
  {
    // Confidence too low, cancel the gesture
    e.Cancel = true;
  }
}

There are several points of interest in this event handler. First, we check the Confidence property on the Gesture object. Because we are in InkAndGesture mode, we only want to handle the Gesture event if the recognizer is very confident that the user intended to create a gesture as opposed to writing ink. We want to be as sure as we can that the user was not trying to write something that was mistaken for a gesture.

If the confidence is not high enough, we set the InkCollectorGestureEventArgs.Cancel property to true. This causes the Gesture event to be cancelled by the InkOverlay and the resulting Stroke object to be treated as Ink.

If the confidence is Strong, we check to see which Gesture we got. Interestingly, even though we only set the status of the Scratchout gesture to true, we will still get NoGesture with a Strong confidence. What's happening here is that the next Gesture in the InkCollectorGestureEventArgs.Gestures array will be the Scratchout gesture with a Confidence value of Poor. So, the recognizer is letting us know that the user might have intended to enter the Scratchout gesture. In the sample application, we set the Cancel property to true once more to cancel the Gesture event and keep the Stroke as Ink.
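
You can watch these alternates for yourself. As a minimal diagnostic sketch (the use of System.Diagnostics.Debug here is an assumption, not part of the sample), you could add this to the top of the Gesture event handler:

// Dump every alternate the recognizer returned, in ranked order
foreach (Gesture alt in e.Gestures)
{
  System.Diagnostics.Debug.WriteLine(alt.Id + ": " + alt.Confidence);
}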

StylusInput GestureRecognizer

The StylusInput APIs provide a GestureRecognizer class that can be instantiated and placed in an AsyncPluginCollection or SyncPluginCollection. The following code snippet from the RealTimeStylus Plugin Sample shows how to instantiate the GestureRecognizer. It also adds an item to a CheckedListBox so that the user can select whether or not to use the recognizer. The last line adds the application form to the AsyncPluginCollection.

// Attempt to create the GestureRecognizer plugin.
// An exception will occur if no recognizers are available.
// In this case, the sample proceeds, but does not add the
// gesture recognizer into the list of available plugins.
GestureRecognizer gr = null;
try
{
  gr = new GestureRecognizer();
  ApplicationGesture [] gestures = { ApplicationGesture.AllGestures };
  gr.EnableGestures(gestures);
  chklbPlugins.Items.Add(new PluginListItem(gr,"GestureRecognizer"));
}
catch
{
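  // No recognizer was available; continue without the
  // GestureRecognizer plug-in.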
}

// ...

// Add the IStylusAsyncPlugin derived form 
// to the AsyncPluginCollection.
myRealTimeStylus.AsyncPluginCollection.Add(this);

For more information about the StylusInput APIs, see Accessing and Manipulating Pen Input and particularly Recognizer Plug-ins in the Tablet PC SDK. Creating your own Gesture recognizer for use with the StylusInput architecture is beyond the scope of this article.

Thoughts on Usability

Because gestures require no specific user interface, they can be difficult for end-users to discover and use. Try to give the user as many hints as possible about the gestures in your application. Perhaps even put a gesture key on the screen as we did in the sample application, although this can take up valuable screen real estate.

Consider communicating to end-users what happened as a result of their gesture. They might perform a gesture expecting one thing, and something entirely different results. Additionally, scrutinize the set of commands that you assign to gestures. At the very least, users should be able to undo the action in case a gesture produces an unintended result.
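
For instance, the Scratchout deletion in the sample could be made undoable by holding on to the deleted strokes. This is a sketch, not part of the sample; undoInk is an invented variable:

// Before deleting, copy the hit strokes so that a hypothetical Undo
// command could restore them later.
Ink undoInk = inkOverlay1.Ink.ExtractStrokes(hitStrokes, 
                                    ExtractFlags.CopyFromOriginal);
inkOverlay1.Ink.DeleteStrokes(hitStrokes);

// ... later, in the hypothetical Undo command, restore the strokes
// at their original location:
inkOverlay1.Ink.AddStrokesAtRectangle(undoInk.Strokes, 
                                      undoInk.Strokes.GetBoundingBox());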

Having a freeform ink-input paradigm in an application can complicate how gestures are incorporated. Consider an application like Microsoft Windows Journal. How can one reliably tell whether the user is drawing a gesture or drawing actual ink they want in their document? False positives can be very frustrating for end-users. For example, it's very frustrating if the user wants to draw ink, but the application instead interprets the input as a gesture that takes him or her to the next page. You could introduce a Gesture mode in addition to an Ink mode, but modes are cumbersome and generally slower than doing things the old-fashioned way: using a mouse-based user interface such as selection, toolbars, and menus.

For more information about incorporating gestures in your applications, see Design Considerations for Gestures in the Tablet PC SDK.

Conflicts with Language Characters

Be sure to test the gestures in your application with the recognizer set to languages other than the default language you normally use. In particular, test with East Asian language recognizers, which have characters or portions of characters that resemble some gestures. For more information about the potential conflicts, see Conflicts with East Asian Characters in the Tablet PC SDK.
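
If you are unsure which recognizers are installed on a test machine, you can enumerate them. A small diagnostic sketch (a convenience for testing, not something gesture support requires):

// List the installed recognizers so you can confirm that an East
// Asian recognizer is available for testing.
Recognizers recognizers = new Recognizers();
foreach (Recognizer r in recognizers)
{
  System.Diagnostics.Debug.WriteLine(r.Name);
}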

Future Gestures

Microsoft may add more gestures to the platform. The Unimplemented Glyphs topic in the Tablet PC SDK lists a set of pen actions that Microsoft plans to map to gestures, as well as gestures that are being reserved for future actions. This information is given so that you know which actions and gestures are under consideration for future implementation.

Conclusion

Incorporating gestures into your Tablet PC application requires forethought. Because gestures have no specific user interface, you need to consider how to let users know that gestures are available. You could add short videos or animations to your help, similar to those included in the Getting Going with Tablet PC help that comes in the Windows XP Tablet PC Edition operating system.

You'll also want to test your application by using multiple recognizers to make sure the gestures don't conflict with characters from the corresponding languages.

However, gestures can be a powerful addition to your application. They can help make it truly pen-centric. The Tablet PC platform is ripe for innovation, and gestures could give your application the polish it needs to stand out in this rapidly growing market.