Exercise 1: Build a Multi-touch Application

Task 1 – Create the Win32 Application

  1. Start Visual Studio 2008 SP1
  2. Select a new C++-based Win32 application project.

  3. Compile and run!
  4. We are going to use APIs and macros that belong to Windows 7. Change the WINVER and _WIN32_WINNT definitions in the targetver.h header file to 0x0601:

    #ifndef WINVER //Specifies that the minimum required platform is Windows 7
    #define WINVER 0x0601
    #endif

    #ifndef _WIN32_WINNT //Specifies that the minimum required platform is Windows 7
    #define _WIN32_WINNT 0x0601
    #endif

  5. Compile and run!

Task 2 – Test the Existence and Readiness of Multi-touch Hardware

  1. The application we are building requires a touch-enabled computer. Add the following code before the call to InitInstance() in _tWinMain() to check that touch hardware is available and ready:

    BYTE digitizerStatus = (BYTE)GetSystemMetrics(SM_DIGITIZER);
    if ((digitizerStatus & (0x80 + 0x40)) == 0) // Stack ready + multi-touch
    {
        MessageBox(0, L"No touch support is currently available", L"Error", MB_OK);
        return 1;
    }

    BYTE nInputs = (BYTE)GetSystemMetrics(SM_MAXIMUMTOUCHES);
    wsprintf(szTitle, L"%s - %d touch inputs", szTitle, nInputs);

  2. You can see that besides checking for touch availability and readiness, we also find out the number of touch inputs the hardware supports. (A version of this check that uses the named SM_DIGITIZER bit constants is sketched at the end of this task.)
  3. Compile and run!
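
In the check above, 0x80 and 0x40 are the SM_DIGITIZER bit flags that WinUser.h names NID_READY and NID_MULTI_INPUT. Here is a minimal sketch of the same test written with the named constants; IsMultiTouchReady() is a hypothetical helper, not part of the lab code:

    #include <windows.h>

    // Sketch only: the readiness test from step 1, using the named
    // SM_DIGITIZER bit flags instead of the raw values 0x80 and 0x40.
    bool IsMultiTouchReady()
    {
        int digitizer = GetSystemMetrics(SM_DIGITIZER);
        // Same semantics as the lab code: fail only when neither the
        // "stack ready" bit nor the "multi-touch" bit is set.
        return (digitizer & (NID_READY | NID_MULTI_INPUT)) != 0;
    }

You could call IsMultiTouchReady() from _tWinMain() in place of the inline check; the lab keeps the test inline.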

Task 3 – Add the Stroke Source and Header Files to the Project, and Draw Lines with Your Fingers

We would like to use our fingers as multiple mouse devices: each finger that touches the screen draws its own line. To do that we use two stroke collections. One collection holds the finished strokes (lines) and the other holds the strokes that are currently being drawn. Each finger that touches the screen adds points to a stroke in the g_StrkColDrawing collection. When a finger is lifted from the screen, we move that finger's stroke from g_StrkColDrawing to the g_StrkColFinished collection. On WM_PAINT we draw both collections.
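
The Stroke.h and Stroke.cpp starter files are not reproduced in this document. The steps below rely only on the small interface sketched here; this is an assumption about the shape of those classes, so the actual header may differ in its details:

    #include <windows.h>

    // Assumed interface of the starter files (sketch, not the real Stroke.h).
    class CStroke
    {
    public:
        void AddPoint(const POINT& pt);  // Append a point to the stroke's polyline
        void SetColor(COLORREF color);   // Color used when the stroke is drawn
        void SetId(int id);              // Touch-input id (dwID) that owns the stroke
        void Draw(HDC hDC) const;        // Draw the whole stroke
        void DrawLast(HDC hDC) const;    // Draw only the last segment
    };

    class CStrokeCollection
    {
    public:
        void AddStroke(const CStroke& strk);   // Add a stroke to the collection
        int  FindStrokeById(int id) const;     // Index of the stroke with the given id
        void RemoveStroke(int i);              // Remove the stroke at index i
        void Draw(HDC hDC) const;              // Draw every stroke in the collection
        CStroke& operator[](int i);            // Access a stroke by index
    };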

  1. In the Starter folder you will find two files: Stroke.h and Stroke.cpp. Copy them to the project folder and use “Add Existing item…” to add them to the project.
  2. Add an #include "Stroke.h" line at the top of the MTScratchpadWMTouch.cpp file:

    #include "Stroke.h"

  3. Add the following global variable definitions in the // Global Variables: section at the top of the MTScratchpadWMTouch.cpp file:

    CStrokeCollection g_StrkColFinished; // Strokes the user has finished entering
                                         // (the finger was lifted from the screen).
    CStrokeCollection g_StrkColDrawing;  // Strokes the user is currently drawing.

  4. Add the following lines in WndProc(); note that the WM_PAINT case has already been created by the application wizard:

    case WM_PAINT:
        hdc = BeginPaint(hWnd, &ps);
        // Full redraw: draw the complete collection of finished strokes and
        // also all the strokes that are currently being drawn.
        g_StrkColFinished.Draw(hdc);
        g_StrkColDrawing.Draw(hdc);
        EndPaint(hWnd, &ps);
        break;

  5. Now it's time to enable WM_TOUCH messages. By default a window receives WM_GESTURE messages. To switch to the low-level WM_TOUCH messages we need to call the RegisterTouchWindow() API. Add the following code to the InitInstance() function, just before the call to ShowWindow(). (If you ever need the window to go back to receiving WM_GESTURE messages, see the UnregisterTouchWindow() sketch at the end of this task.)

    // Register the application window for receiving multi-touch input.
    if (!RegisterTouchWindow(hWnd, 0))
    {
        MessageBox(hWnd, L"Cannot register application window for touch input", L"Error", MB_OK);
        return FALSE;
    }

  6. We asked Windows to send WM_TOUCH messages. WM_TOUCH is a special message: unless you asked the system not to coalesce touch events (see the TWF_FINETOUCH flag of RegisterTouchWindow()), you receive all of the current touch points packed into one message. This is reasonable, since the user touches the screen with many touch points simultaneously. Add the following lines to the WndProc() function:

    case WM_TOUCH:
    {
        // A WM_TOUCH message can contain several messages from different contacts
        // packed together.
        unsigned int numInputs = (unsigned int)wParam; // Number of actual contact messages
        TOUCHINPUT* ti = new TOUCHINPUT[numInputs];    // Allocate storage for the parameters
                                                       // of the per-contact messages

        // Unpack message parameters into the array of TOUCHINPUT structures, each
        // representing a message for one single contact.
        if (GetTouchInputInfo((HTOUCHINPUT)lParam, numInputs, ti, sizeof(TOUCHINPUT)))
        {
            // For each contact, dispatch the message to the appropriate message
            // handler.
            for (unsigned int i = 0; i < numInputs; ++i)
            {
                if (ti[i].dwFlags & TOUCHEVENTF_DOWN)
                {
                    OnTouchDownHandler(hWnd, ti[i]);
                }
                else if (ti[i].dwFlags & TOUCHEVENTF_MOVE)
                {
                    OnTouchMoveHandler(hWnd, ti[i]);
                }
                else if (ti[i].dwFlags & TOUCHEVENTF_UP)
                {
                    OnTouchUpHandler(hWnd, ti[i]);
                }
            }
        }
        CloseTouchInputHandle((HTOUCHINPUT)lParam);
        delete [] ti;
    }
    break;

  7. The wParam holds the number of touch inputs that came with the WM_TOUCH message. The GetTouchInputInfo() API fills a TOUCHINPUT array with the touch information for each touch point. After you finish extracting data from the TOUCHINPUT array, you need to call CloseTouchInputHandle() to free the system resources. Here is the definition of the TOUCHINPUT structure:

    typedef struct tagTOUCHINPUT {
        LONG x;
        LONG y;
        HANDLE hSource;
        DWORD dwID;
        DWORD dwFlags;
        DWORD dwMask;
        DWORD dwTime;
        ULONG_PTR dwExtraInfo;
        DWORD cxContact;
        DWORD cyContact;
    } TOUCHINPUT, *PTOUCHINPUT;
    typedef TOUCHINPUT const * PCTOUCHINPUT;

    /*
    * Conversion of touch input coordinates to pixels
    */
    #define TOUCH_COORD_TO_PIXEL(l) ((l) / 100)

    /*
    * Touch input flag values (TOUCHINPUT.dwFlags)
    */
    #define TOUCHEVENTF_MOVE 0x0001
    #define TOUCHEVENTF_DOWN 0x0002
    #define TOUCHEVENTF_UP 0x0004

    Four members of the TOUCHINPUT structure are of interest to us. The x and y members are the touch location in screen coordinates, multiplied by one hundred. This means we need to divide each axis value by one hundred (or use the TOUCH_COORD_TO_PIXEL() macro) and then call ScreenToClient() to convert to the window's client coordinate system. Be aware that if the screen is set to high DPI (more than 96 DPI), you may also need to divide the values by 96 and multiply by the current DPI; for simplicity we skip this step in our application. The other two members of interest are dwFlags and dwID. dwFlags tells us the type of the touch input: down, move or up. In our application TOUCHEVENTF_DOWN starts a new stroke, TOUCHEVENTF_MOVE adds another point to an existing stroke, and TOUCHEVENTF_UP finishes a stroke and moves it to the g_StrkColFinished collection. dwID is the touch input identifier. When a finger touches the screen for the first time, a unique touch id is associated with that finger. All further touch inputs that come from the same finger carry the same id, up to and including the final TOUCHEVENTF_UP input. When the finger leaves the screen, the id is freed and may be reused as the unique id of another finger that later touches the screen. It's time to handle the touch inputs. Add the following functions before the WndProc() function:

    // Returns color for the newly started stroke.
    // in:
    //      bPrimaryContact     flag, whether the contact is the primary contact
    // returns:
    //      COLORREF, color of the stroke
    COLORREF GetTouchColor(bool bPrimaryContact)
    {
        static int s_iCurrColor = 0;    // Rotating secondary color index
        static COLORREF s_arrColor[] =  // Secondary colors array
        {
            RGB(255, 0, 0),     // Red
            RGB(0, 255, 0),     // Green
            RGB(0, 0, 255),     // Blue
            RGB(0, 255, 255),   // Cyan
            RGB(255, 0, 255),   // Magenta
            RGB(255, 255, 0)    // Yellow
        };

        COLORREF color;
        if (bPrimaryContact)
        {
            // The application renders the primary contact in black.
            color = RGB(0, 0, 0);   // Black
        }
        else
        {
            // Take the current secondary color.
            color = s_arrColor[s_iCurrColor];

            // Move to the next color in the array.
            s_iCurrColor = (s_iCurrColor + 1) % (sizeof(s_arrColor) / sizeof(s_arrColor[0]));
        }
        return color;
    }

    // Extracts contact point in client area coordinates (pixels) from a
    // TOUCHINPUT structure.
    // in:
    //      hWnd        window handle
    //      ti          TOUCHINPUT structure (info about contact)
    // returns:
    //      POINT with contact coordinates
    POINT GetTouchPoint(HWND hWnd, const TOUCHINPUT& ti)
    {
        POINT pt;
        pt.x = TOUCH_COORD_TO_PIXEL(ti.x);
        pt.y = TOUCH_COORD_TO_PIXEL(ti.y);
        ScreenToClient(hWnd, &pt);
        return pt;
    }

    // Handler for touch-down input.
    // in:
    //      hWnd        window handle
    //      ti          TOUCHINPUT structure (info about contact)
    void OnTouchDownHandler(HWND hWnd, const TOUCHINPUT& ti)
    {
        // Create a new stroke, add a point, and assign a color to it.
        CStroke strkNew;
        POINT p = GetTouchPoint(hWnd, ti);

        strkNew.AddPoint(p);
        strkNew.SetColor(GetTouchColor((ti.dwFlags & TOUCHEVENTF_PRIMARY) != 0));
        strkNew.SetId(ti.dwID);

        // Add the new stroke to the collection of strokes being drawn.
        g_StrkColDrawing.AddStroke(strkNew);
    }

    // Handler for touch-move input.
    // in:
    //      hWnd        window handle
    //      ti          TOUCHINPUT structure (info about contact)
    void OnTouchMoveHandler(HWND hWnd, const TOUCHINPUT& ti)
    {
        // Find the stroke in the collection of the strokes being drawn.
        int iStrk = g_StrkColDrawing.FindStrokeById(ti.dwID);
        POINT p = GetTouchPoint(hWnd, ti);

        // Add the contact point to the stroke.
        g_StrkColDrawing[iStrk].AddPoint(p);

        // Partial redraw: redraw only the last line segment.
        HDC hDC = GetDC(hWnd);
        g_StrkColDrawing[iStrk].DrawLast(hDC);
        ReleaseDC(hWnd, hDC);
    }

    // Handler for touch-up message.
    // in:
    //      hWnd        window handle
    //      ti          TOUCHINPUT structure (info about contact)
    void OnTouchUpHandler(HWND hWnd, const TOUCHINPUT& ti)
    {
        // Find the stroke in the collection of the strokes being drawn.
        int iStrk = g_StrkColDrawing.FindStrokeById(ti.dwID);

        // Add the finished stroke to the collection of finished strokes.
        g_StrkColFinished.AddStroke(g_StrkColDrawing[iStrk]);

        // Remove the finished stroke from the collection of strokes being drawn.
        g_StrkColDrawing.RemoveStroke(iStrk);

        // Redraw the window.
        InvalidateRect(hWnd, NULL, FALSE);
    }

  8. To make the drawing a little more interesting, we pick a different color for each unique id. The primary touch is the first finger that touched the screen. It is a special input since it acts as the mouse pointer, so we chose to render it in black. Compile the application and run! You can touch it now!
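
The lab itself never unregisters the window for touch input; Windows cleans up when the window is destroyed. If you want to release the registration explicitly, or switch a live window back to WM_GESTURE messages, you can call UnregisterTouchWindow(). A minimal sketch, placed in the wizard-generated WM_DESTROY handler (this placement is an assumption, not part of the lab steps):

    case WM_DESTROY:
        // Sketch only: undo the RegisterTouchWindow() call made in InitInstance().
        UnregisterTouchWindow(hWnd);
        PostQuitMessage(0);
        break;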