
Face Tracking

Kinect for Windows 1.5, 1.6, 1.7, 1.8

The Microsoft Face Tracking Software Development Kit for Kinect for Windows (Face Tracking SDK), together with the Kinect for Windows Software Development Kit (Kinect For Windows SDK), enables you to create applications that can track human faces in real time.

The Face Tracking SDK’s face tracking engine analyzes input from a Kinect camera, deduces the head pose and facial expressions, and makes that information available to an application in real time. For example, this information can be used to render a tracked person’s head position and facial expression on an avatar in a game or a communication application or to drive a natural user interface (NUI).

This version of the Face Tracking SDK was designed to work with the Kinect sensor, so the Kinect for Windows SDK must be installed before use.

System Requirements

Minimum Hardware Requirements

  • Computer with a dual-core, 2.66-GHz or faster processor
  • Windows 7 or Windows 8-compatible graphics card that supports Microsoft® DirectX® 9.0c capabilities
  • 2 GB of RAM
  • Kinect sensor—retail edition, which includes special USB/power cabling

Configuring Your Development Environment

This section provides notes for how to configure your development environment to work with the Face Tracking SDK.

Microsoft Visual Studio Express Edition for C++ includes an integrated development environment (IDE) in which you can code, debug, and test your applications.

If you are new to Visual Studio Express as a development environment, the following information on the Microsoft Developer Network (MSDN®) web site will help you become familiar with these development tools:

  • For a brief look at how to configure Visual Studio Express and then build and test a new project, see the appendix to this guide, “How to Build an Application with Visual Studio Express.”
  • Visit Visual C++ Developer Center, where you can find many tutorials, samples, and videos to help you get started.

How to Set Up a Face Tracking Project in C++

To use face tracking in your native C/C++ project, you need to:

  1. Include FaceTrackLib.h in your source file.
  2. Add the header and library directories to your project’s search paths.

Include FaceTrackLib.h in your source file

  #include <FaceTrackLib.h>
        
Note

FaceTrackLib.h checks whether _WINDOWS is defined. Make sure your project defines _WINDOWS (Configuration Properties, C/C++, Preprocessor), or modify FaceTrackLib.h and disable the check:

Search for the check in FaceTrackLib.h:

  #ifdef _WINDOWS
  #include <objbase.h>
  #include <windows.h>
  #endif

And comment it out:

  //#ifdef _WINDOWS
  #include <objbase.h>
  #include <windows.h>
  //#endif

How to Set Up a Face Tracking Project in Managed Code

  1. Add the following two projects to your C# solution:

     Microsoft.Kinect.Toolkit
     Microsoft.Kinect.Toolkit.FaceTracking

  2. Add the two projects as references to your project.

  3. Consider adding the following namespaces:

     using Microsoft.Kinect;
     using Microsoft.Kinect.Toolkit;
     using Microsoft.Kinect.Toolkit.FaceTracking;
Note

Install (redistribute) FaceTrackLib.dll and FaceTrackData.dll, matching the bitness of your application, next to your application’s executable.

Creating an Install Package with the Face Tracking SDK

The instructions below demonstrate how to create a basic install package with the Face Tracking SDK. We will create an install package for the sample applications included in the SDK. An install package lets developers conveniently distribute their applications, along with any binaries the application depends on, in a single file.

  1. Open the FaceTracking Sample solution.

  2. Add an installer project to the Face Tracking sample application: choose File -> Add -> New Project.

  3. From the “Installed Templates” pane, expand “Other Project Types”, then expand “Setup and Deployment”, choose “Visual Studio Installer”, and then choose “Setup Project” from the middle pane. Enter the name for the installer project.

  4. Add files to the project output: right-click “Application Folder”, choose “Add”, then choose “Project Output”.

     Within “Project Output”, select the executable for the application and the necessary DLLs. For this sample project, the only output files are “SingleFace.exe” and the DLLs required by the Face Tracking SDK: “facetracklib.dll” and “facetrackdata.dll”.

  5. Build the newly created project: choose “Build” from the top menu, then “Build Solution”.

     At this point the install package should be fully built. You should be able to launch the .msi file and install the sample applications that were included with the Face Tracking SDK.

The following DLLs need to be installed next to the binaries of your application:

  • FaceTrackLib.dll
  • FaceTrackData.dll

Make sure to install the x86 version when used with a 32 bit application, and the amd64 version when used with a 64 bit application.

General Notes

  • Use Visual Studio 2010 Express, Visual Studio 2012 Express, or another Visual Studio 2010 or 2012 edition.

Technical Specifications

Coordinate System

The Face Tracking SDK uses the Kinect coordinate system to output its 3D tracking results. The origin is located at the camera’s optical center (the sensor), the Z axis points toward the user, and the Y axis points up. The measurement units are meters for translation and degrees for rotation angles.

Figure 1.  Camera Space


The computed 3D mask has coordinates that place it over the user’s face (in the camera’s coordinate frame) as shown in Figure 1 - Camera Space.
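To make the mapping between this camera space and image coordinates concrete, here is a minimal pinhole-projection sketch. It assumes an ideal camera with the focal length given in pixels (as in FT_CAMERA_CONFIG) and no lens distortion; the SDK performs this mapping itself via IFTModel::GetProjectedShape, so the helper and type names below are illustrative stand-ins, not SDK API:

```cpp
#include <cassert>

// Minimal pinhole projection from Kinect camera space (meters, Z toward
// the user, Y up) to pixel coordinates in a 640x480 color frame.
// Illustrative only; the real projection is done by GetProjectedShape.
struct CameraSpacePoint { float x, y, z; };
struct PixelPoint { float u, v; };

PixelPoint ProjectToColorFrame(const CameraSpacePoint& p,
                               float focalLengthPx,       // focal length in pixels
                               int width = 640, int height = 480)
{
    // Perspective divide, then shift so (0,0) is the top-left pixel.
    // The image Y axis grows downward, hence the sign flip on y.
    PixelPoint out;
    out.u = width  * 0.5f + focalLengthPx * (p.x / p.z);
    out.v = height * 0.5f - focalLengthPx * (p.y / p.z);
    return out;
}
```

A point on the optical axis one meter away lands at the image center (320, 240), regardless of the focal length.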

Input Images

The Face Tracking SDK accepts Kinect color and depth images as input. The tracking quality may be affected by the image quality of these input frames (that is, darker or fuzzier frames track worse than brighter, sharper frames). Larger or closer faces are also tracked better than smaller ones.

API Description

Overview

The Face Tracking SDK face tracking engine is a registration-free COM object. It was designed to work in-process only, and its interfaces should be treated as regular COM interfaces. There are four main custom COM interfaces:

  • IFTFaceTracker – The main interface for face tracking.
  • IFTResult – A result of a face tracking operation.
  • IFTImage – A helper interface that wraps various image buffers.
  • IFTModel – A fitted 3D face model.

You can use the FTCreateFaceTracker() exported factory method to create an instance of an IFTFaceTracker object. You can use the FTCreateImage() factory method to create an instance of an IFTImage object. IFTResult and IFTModel are created from IFTFaceTracker. All factory methods increase a reference count of returned interfaces.

Also, the Face Tracking SDK uses the following data structures:

  • FT_SENSOR_DATA – Contains all input data for a face tracking operation.
  • FT_CAMERA_CONFIG – Contains the configuration of the video or depth sensor whose frames are being tracked.
  • FT_VECTOR2D – Contains the points of a 2D vector.
  • FT_VECTOR3D – Contains the points of a 3D vector.
  • FT_TRIANGLE – Contains a 3D face model triangle.
  • FT_WEIGHTED_RECT – Contains a weighted rectangle returned by the Face Tracking SDK.
Note

The specific details of the API can be found in the Microsoft Face Tracking SDK Reference.

Hello Face Code Sample

The Hello Face sample is a simplified C++ code sample that demonstrates the basic concepts for using the Face Tracking SDK.

  // This example assumes that the application provides
  // void* cameraFrameBuffer, a buffer for an image, and that there is a method
  // to fill the buffer with data from a camera, for example
  // cameraObj.ProcessIO(cameraFrameBuffer)

  // Create an instance of face tracker
  IFTFaceTracker* pFT = FTCreateFaceTracker();
  if(!pFT)
  {
    // Handle errors
  }

  FT_CAMERA_CONFIG myColorCameraConfig = {640, 480, 1.0}; // width, height, focal length
  FT_CAMERA_CONFIG myDepthCameraConfig = {640, 480}; // width, height

  HRESULT hr = pFT->Initialize(&myColorCameraConfig, &myDepthCameraConfig, NULL, NULL);
  if( FAILED(hr) )
  {
    // Handle errors
  }

  // Create IFTResult to hold a face tracking result
  IFTResult* pFTResult = NULL;
  hr = pFT->CreateFTResult(&pFTResult);
  if(FAILED(hr))
  {
    // Handle errors
  }

  // prepare Image and SensorData for 640x480 RGB images
  IFTImage* pColorFrame = FTCreateImage();
  if(!pColorFrame)
  {
    // Handle errors
  }

  // Attach assumes that the camera code provided by the application
  // is filling the buffer cameraFrameBuffer
  pColorFrame->Attach(640, 480, cameraFrameBuffer, FTIMAGEFORMAT_UINT8_R8G8B8, 640*3);

  FT_SENSOR_DATA sensorData;
  sensorData.pVideoFrame = pColorFrame;
  sensorData.pDepthFrame = NULL; // no depth frame in this simplified sample
  sensorData.ZoomFactor = 1.0f;
  POINT viewOffset = {0, 0};
  sensorData.ViewOffset = viewOffset;

  bool isTracked = false;

  // Track a face
  while ( true )
  {
    // Call your camera method to process IO and fill the camera buffer
    cameraObj.ProcessIO(cameraFrameBuffer); // replace with your method

    // Check if we are already tracking a face
    if(!isTracked)
    {
      // Initiate face tracking. This call is more expensive and
      // searches the input image for a face.
      hr = pFT->StartTracking(&sensorData, NULL, NULL, pFTResult);
      if(SUCCEEDED(hr) && SUCCEEDED(pFTResult->GetStatus()))
      {
        isTracked = true;
      }

      else
      {
        // Handle errors
        isTracked = false;
      }
    }
    else
    {
      // Continue tracking. It uses a previously known face position,
      // so it is an inexpensive call.
      hr = pFT->ContinueTracking(&sensorData, NULL, pFTResult);
      if(FAILED(hr) || FAILED(pFTResult->GetStatus()))
      {
        // Handle errors
        isTracked = false;
      }
    }

    // Do something with pFTResult.

    // Terminate on some criteria.
  }

  // Clean up.
  pFTResult->Release();
  pColorFrame->Release();
  pFT->Release();

Face Tracking Interfaces

IFTFaceTracker

The main interface is IFTFaceTracker; an instance is created by calling FTCreateFaceTracker. After initialization, it tracks a face synchronously when you pass color and depth images as input (see IFTImage as part of FT_SENSOR_DATA); the results are returned via an IFTResult instance (see below). It is assumed that both color and depth input images come from the Kinect sensor.

An initialization method allows configuring Face Tracking for the system it is used in.

IFTFaceTracker provides the CreateFTResult method to create an instance of IFTResult for holding face tracking results specific to the model that the IFTFaceTracker instance is using. An application needs to create an instance of IFTResult before it can start tracking faces. An application can use the IFTFaceTracker::DetectFaces method to get an array of potential head areas for the image data (FT_SENSOR_DATA) provided by the application. It is up to you to interpret the results and decide which faces to track.

To start tracking a face, call StartTracking. StartTracking is an expensive method that searches the provided image for a face, determines its orientation, and initiates face tracking. You can provide a hint about where to look for a face (pROI), or pass NULL to search the entire image; the first face found will be tracked. Another hint is the orientation of the head (headPoints[2]), which may be derived from the Kinect skeleton data. Without a hint for the head orientation, Face Tracking will still try to track the face, but initial results might be suboptimal.

Once StartTracking has successfully started tracking a face, as indicated by the returned pFTResult, an application should continue face tracking by subsequently calling ContinueTracking. ContinueTracking uses information from the previous calls to StartTracking or ContinueTracking. Keep calling ContinueTracking until you want to stop face tracking or tracking fails, for example because the person whose face is being tracked steps outside the camera frame. A tracking failure is indicated by the pFTResult status. To restart face tracking, an application calls StartTracking again and subsequently calls ContinueTracking.

ContinueTracking is a relatively fast function that uses state information about a tracked face. It is much cheaper than StartTracking. In some rare cases you might call only StartTracking, for example if your application frame rate is very low or if the face moves very fast between frames (so that continuous tracking is not possible).

For tracking multiple users, first determine the faces you want to track, e.g., by instantiating IFTFaceTracker and calling DetectFaces. Then create an additional instance of IFTFaceTracker for each extra face you want to track.

You can also retrieve (GetShapeUnits) and set (SetShapeUnits) SUs. By providing SUs for known users, face tracking returns better results from the start, because it does not have to “learn” the SUs for a tracked face. Learning the SUs for a given user in real time takes approximately 2 minutes. You can retrieve the current SUs from IFTFaceTracker (for example, to persist the SUs between runs of the application).
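Persisting SUs between runs can be as simple as writing the coefficient array to disk. The sketch below uses a flat binary layout of our own choosing (not an SDK format); the coefficient vector stands in for the array that GetShapeUnits would return:

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

// Persist shape-unit (SU) coefficients between runs so face tracking does
// not have to re-learn a known user's head shape. The file layout here is
// our own choice, not an SDK format.
bool SaveShapeUnits(const char* path, const std::vector<float>& suCoefs)
{
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    unsigned count = static_cast<unsigned>(suCoefs.size());
    bool ok = std::fwrite(&count, sizeof(count), 1, f) == 1 &&
              std::fwrite(suCoefs.data(), sizeof(float), count, f) == count;
    std::fclose(f);
    return ok;
}

bool LoadShapeUnits(const char* path, std::vector<float>& suCoefs)
{
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    unsigned count = 0;
    bool ok = std::fread(&count, sizeof(count), 1, f) == 1;
    if (ok) {
        suCoefs.resize(count);
        ok = std::fread(suCoefs.data(), sizeof(float), count, f) == count;
    }
    std::fclose(f);
    return ok;
}
```

On the next run, the loaded coefficients can be handed to SetShapeUnits before tracking starts.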

Typically, IFTFaceTracker keeps computing the SUs to improve them. If this is undesirable, an application can disable SU computation by passing FALSE to SetShapeComputationState, for example when SUs for a user have already been created by tools other than the Face Tracking SDK that are better suited to the application, and it wants to save the computation cost. Use GetShapeComputationState and SetShapeComputationState to fully control when SUs are computed.

When you want to use a custom mapping from depth data to image data, you need to provide a function to map the pixels in the depth image to the color image by registering a mapping function with FTRegisterDepthToColor.

IFTImage

You provide image data (video image and depth image) for face tracking through FT_SENSOR_DATA, a structure containing interface pointers to a video image and a depth image. Use FTCreateImage to create IFTImage instances.

IFTImage wraps the data for an image used in face tracking. It specifies the supported image formats (FTIMAGEFORMAT), e.g. FTIMAGEFORMAT_UINT8_R8G8B8 for RGB. Furthermore, it either provides a buffer to store an image (Allocate) or uses external storage for an image (Attach). In the first case (Allocate), IFTImage releases the allocated memory when you call Reset. In the latter case (Attach), you are responsible for managing the memory, as Reset will not free the attached memory.

IFTImage provides various access methods to information about the image: the format, height, width, size of the image, and bytes per pixel. Furthermore, IFTImage provides access to the buffer. Helper methods for fast image copying and drawing debugging lines are also part of this interface.

IFTResult

The IFTResult interface provides access to the result from face tracking calls (IFTFaceTracker.StartTracking, IFTFaceTracker.ContinueTracking). IFTResult is created by calling IFTFaceTracker.CreateFTResult. IFTFaceTracker provides CreateFTResult as results are related to the underlying model that IFTFaceTracker has been initialized with.

Call GetStatus to determine whether the face tracking call was successful (S_OK is returned on success).

Upon a successful face tracking call, IFTResult provides access to the following information:

  • GetFaceRect – The rectangle in video frame coordinates of the bounding box around the tracked face.
  • Get2DShapePoints – 2D (X,Y) coordinates of the key points on the aligned face, in video frame coordinates. These are the 87 2D points shown in Figure 2 – Tracked Points (as well as 13 others that are not shown there).
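Because Get2DShapePoints reports coordinates in the 640x480 video frame, an application rendering at a different resolution needs to rescale the points. A minimal sketch follows; Vector2 is a stand-in for the SDK's FT_VECTOR2D, and the helper is ours, not SDK API:

```cpp
#include <cassert>
#include <vector>

// Tracked 2D points come back in 640x480 color-frame coordinates; when the
// application renders at a different resolution, they must be rescaled.
// Vector2 stands in for the SDK's FT_VECTOR2D.
struct Vector2 { float x, y; };

std::vector<Vector2> ScalePointsToDisplay(const std::vector<Vector2>& pts,
                                          int displayW, int displayH,
                                          int frameW = 640, int frameH = 480)
{
    std::vector<Vector2> out;
    out.reserve(pts.size());
    const float sx = static_cast<float>(displayW) / frameW;
    const float sy = static_cast<float>(displayH) / frameH;
    for (const Vector2& p : pts)
        out.push_back({p.x * sx, p.y * sy});
    return out;
}
```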

IFTModel

The IFTModel interface provides a way to convert tracking results to a mesh of 3D vertices in camera space. An instance is returned by the IFTFaceTracker::GetFaceModel() method. The interface provides several methods to get various model properties:

GetSUCount, GetAUCount – return the number of shape units (SUs) or animation units (AUs) used in the 3D linear model

GetTriangles – returns the 3D model mesh triangles (indexes of vertices). Each triangle has three vertex indexes listed in clockwise order.

GetVertexCount – returns the number of vertices in the 3D model mesh

Also, IFTModel provides two methods to get the 3D face model either in the video camera space or projected onto the video camera image plane. These methods are:

Get3DShape – returns the 3D face model vertices transformed by the given shape units, animation units, scale (stretch), rotation, and translation

GetProjectedShape – returns the 3D face model vertices transformed by the given shape units, animation units, scale (stretch), rotation, and translation, and projected onto the video frame

The following code sample shows a function that uses the IFTModel interface to visualize the 3D mask computed by the face tracker:

    HRESULT VisualizeFaceModel(
      IFTImage* pColorImg,
      IFTModel* pModel,
      FT_CAMERA_CONFIG const* pCameraConfig,
      FLOAT const* pSUCoef,
      FLOAT zoomFactor,
      POINT viewOffset,
      IFTResult* pAAMRlt,
      UINT32 color
      )
    {
      if (!pColorImg || !pModel || !pCameraConfig || !pSUCoef || !pAAMRlt)
      {
        return E_POINTER;
      }

      HRESULT hr = S_OK;
      UINT vertexCount = pModel->GetVertexCount();
      FT_VECTOR2D* pPts2D = reinterpret_cast<FT_VECTOR2D*>
          (_malloca(sizeof(FT_VECTOR2D) * vertexCount));

      if (pPts2D)
      {
        FLOAT *pAUs;
        UINT auCount;
        hr = pAAMRlt->GetAUCoefficients(&pAUs, &auCount);
        if (SUCCEEDED(hr))
        {
          FLOAT scale, rotationXYZ[3], translationXYZ[3];
          hr = pAAMRlt->Get3DPose(&scale, rotationXYZ, translationXYZ);

          if (SUCCEEDED(hr))
          {
            hr = pModel->GetProjectedShape(pCameraConfig, zoomFactor, viewOffset,
            pSUCoef, pModel->GetSUCount(), pAUs, auCount,
            scale, rotationXYZ, translationXYZ, pPts2D, vertexCount);
            if (SUCCEEDED(hr))
            {
              POINT* p3DMdl = reinterpret_cast<POINT*>
                  (_malloca(sizeof(POINT) * vertexCount));

              if (p3DMdl)
              {
                for (UINT i = 0; i<vertexCount; ++i)
                {
                  p3DMdl[i].x = LONG(pPts2D[i].x + 0.5f);
                  p3DMdl[i].y = LONG(pPts2D[i].y + 0.5f);
                }
                FT_TRIANGLE* pTriangles;
                UINT triangleCount;
                hr = pModel->GetTriangles(&pTriangles, &triangleCount);
                if (SUCCEEDED(hr))
                {
                  struct EdgeHashTable
                  {
                    UINT32* pEdges;
                    UINT edgesAlloc;

                    void Insert(int a, int b)
                    {
                      UINT32 v = (min(a, b) << 16) | max(a, b);
                      UINT32 index = (v + (v << 8)) * 49157, i;
                      for ( i = 0; i < edgesAlloc - 1 &&
                        pEdges[(index + i) & (edgesAlloc - 1)] &&
                        v != pEdges[(index + i) & (edgesAlloc - 1)]; ++i )
                      {}

                      pEdges[(index + i) & (edgesAlloc - 1)] = v;
                    }
                  } eht; // Declare an edge hash table

                  eht.edgesAlloc = 1 << UINT(log(2.f * (1 + vertexCount + 
                    triangleCount)) / log(2.f));
                  eht.pEdges = reinterpret_cast<UINT32*>
                  (_malloca(sizeof(UINT32) * eht.edgesAlloc));

                  if (eht.pEdges)
                  {
                    ZeroMemory(eht.pEdges,
                    sizeof(UINT32) * eht.edgesAlloc);

                    for (UINT i = 0; i < triangleCount; ++i)
                    { 
                        eht.Insert(pTriangles[i].i, pTriangles[i].j);
                        eht.Insert(pTriangles[i].j, pTriangles[i].k);
                        eht.Insert(pTriangles[i].k, pTriangles[i].i);
                    }
                    
                    // Draw only the occupied hash slots; the bitwise-&
                    // trick in the original evaluated DrawLine even for
                    // empty slots, drawing spurious degenerate lines.
                    for (UINT i = 0; i < eht.edgesAlloc; ++i)
                    {
                      if (eht.pEdges[i])
                      {
                        pColorImg->DrawLine(
                          p3DMdl[eht.pEdges[i] >> 16],
                          p3DMdl[eht.pEdges[i] & 0xFFFF],
                          color, 1 );
                      }
                    }

                    _freea(eht.pEdges);
                  }

                  // Render the face rect in magenta
                  RECT rectFace;
                  hr = pAAMRlt->GetFaceRect(&rectFace);
                  if (SUCCEEDED(hr))
                  {
                    POINT leftTop = {rectFace.left, rectFace.top};
                    POINT rightTop = {rectFace.right - 1, rectFace.top};
                    POINT leftBottom = {rectFace.left,
                    rectFace.bottom - 1};
                    POINT rightBottom = {rectFace.right - 1,
                    rectFace.bottom - 1};

                    UINT32 nColor = 0xff00ff;
                    SUCCEEDED(hr = pColorImg->DrawLine(leftTop, rightTop, nColor, 1)) &
                    SUCCEEDED(hr = pColorImg->DrawLine(rightTop, rightBottom, nColor, 1)) &
                    SUCCEEDED(hr = pColorImg->DrawLine(rightBottom, leftBottom, nColor, 1)) &
                    SUCCEEDED(hr = pColorImg->DrawLine(leftBottom, leftTop, nColor, 1));
                  }
                }

                _freea(p3DMdl);
              }
              else
              {
                hr = E_OUTOFMEMORY;
              }
            }
          }
        }

        _freea(pPts2D);
      }
      else
      {
        hr = E_OUTOFMEMORY;
      }

      return hr;
    }

Face Tracking Outputs

This section provides details on the output of the face tracking engine. Each time you call StartTracking or ContinueTracking, the IFTResult instance is updated with the following information about the tracked user:

  • Tracking status
  • 2D points
  • 3D head pose
  • AUs

2D Mesh and Points

The Face Tracking SDK tracks the 87 2D points shown in Figure 2 – Tracked Points (in addition to 13 points that are not shown):

Figure 2.  Tracked Points


These points are returned in an array, and are defined in the coordinate space of the RGB image (in 640 x 480 resolution) returned from the Kinect sensor.

The additional 13 points (which are not shown in the figure) include:

  • The center of the eye, the corners of the mouth, and the center of the nose
  • A bounding box around the head

3D Head Pose

The X, Y, and Z coordinates of the user’s head are reported based on a right-handed coordinate system (with the origin at the sensor, Z pointing toward the user, and Y pointing up; this is the same as the Kinect skeleton coordinate frame). Translations are in meters.

The user’s head pose is captured by three angles: pitch, roll, and yaw.

Figure 3.  Head Pose Angles


The angles are expressed in degrees, with values ranging from -180 degrees to +180 degrees.

Pitch angle

  • 0 = neutral
  • -90 = looking down toward the floor
  • +90 = looking up toward the ceiling

Face Tracking tracks when the user’s head pitch is less than 20 degrees, but works best when it is less than 10 degrees.

Roll angle

  • 0 = neutral
  • -90 = horizontal, parallel with the subject’s right shoulder
  • +90 = horizontal, parallel with the subject’s left shoulder

Face Tracking tracks when the user’s head roll is less than 90 degrees, but works best when it is less than 45 degrees.

Yaw angle

  • 0 = neutral
  • -90 = turned toward the subject’s right shoulder
  • +90 = turned toward the subject’s left shoulder

Face Tracking tracks when the user’s head yaw is less than 45 degrees, but works best when it is less than 30 degrees.
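The working ranges above can be checked in application code before trusting a result. The helper below is our own convenience, not part of the SDK API; the thresholds are the ones quoted in this section:

```cpp
#include <cassert>
#include <cmath>

// Classifies a tracked head pose (in degrees) against the working ranges
// quoted above: tracking degrades beyond |pitch| 20, |roll| 90, |yaw| 45,
// and is best within |pitch| 10, |roll| 45, |yaw| 30. Helper is ours.
enum PoseQuality { POSE_OUT_OF_RANGE, POSE_TRACKABLE, POSE_BEST };

PoseQuality ClassifyHeadPose(float pitchDeg, float rollDeg, float yawDeg)
{
    const float p = std::fabs(pitchDeg);
    const float r = std::fabs(rollDeg);
    const float y = std::fabs(yawDeg);
    if (p > 20.0f || r > 90.0f || y > 45.0f) return POSE_OUT_OF_RANGE;
    if (p < 10.0f && r < 45.0f && y < 30.0f) return POSE_BEST;
    return POSE_TRACKABLE;
}
```

An application might, for example, fall back to StartTracking (or prompt the user) when a pose classifies as out of range.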


Animation Units

The Face Tracking SDK results are also expressed in terms of weights of 6 AUs and 11 SUs, which are a subset of those defined in the Candide-3 model (http://www.icg.isy.liu.se/candide/). The SUs estimate the particular shape of the user’s head: the neutral position of the mouth, brows, eyes, and so on. The AUs are deltas from the neutral shape that you can use to morph targets on animated avatar models, so that the avatar mimics the tracked user.

The Face Tracking SDK tracks the following AUs. Each AU is expressed as a numeric weight varying between -1 and +1.

  • Neutral Face (all AUs 0)

  • AU0 – Upper Lip Raiser (AU10 in Candide-3)

    0 = neutral, covering teeth; 1 = showing teeth fully; -1 = lip pushed down as far as possible

  • AU1 – Jaw Lowerer (AU26/27 in Candide-3)

    0 = closed; 1 = fully open; -1 = closed, like 0

  • AU2 – Lip Stretcher (AU20 in Candide-3)

    0 = neutral; 1 = fully stretched (joker’s smile); -0.5 = rounded (pout); -1 = fully rounded (kissing mouth)

  • AU3 – Brow Lowerer (AU4 in Candide-3)

    0 = neutral; -1 = raised almost all the way; +1 = fully lowered (to the limit of the eyes)

  • AU4 – Lip Corner Depressor (AU13/15 in Candide-3)

    0 = neutral; -1 = very happy smile; +1 = very sad frown

  • AU5 – Outer Brow Raiser (AU2 in Candide-3)

    0 = neutral; -1 = fully lowered, as in a very sad face; +1 = raised, as in an expression of deep surprise
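Because each AU is a signed weight in [-1, +1] while avatar blendshapes typically take weights in [0, 1], a common rigging convention is to split each AU into two one-sided targets. The split below is our own sketch, not something the SDK prescribes; the AU coefficients themselves come from IFTResult::GetAUCoefficients:

```cpp
#include <algorithm>
#include <cassert>

// Splits a signed AU weight in [-1, +1] into a pair of one-sided blendshape
// weights in [0, 1]. For example, for AU4 (Lip Corner Depressor), the
// positive side could drive a "frown" target and the negative side a
// "smile" target. This is a rigging convention, not SDK behavior.
struct BlendPair { float positive, negative; };

BlendPair SplitAuWeight(float au)
{
    const float clamped = std::max(-1.0f, std::min(1.0f, au));
    BlendPair out;
    out.positive = clamped > 0.0f ?  clamped : 0.0f;
    out.negative = clamped < 0.0f ? -clamped : 0.0f;
    return out;
}
```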

Shape Units

The Face Tracking SDK tracks the following 11 SUs in IFTFaceTracker. They are discussed here because of their logical relation to the Candide-3 model.

Each SU specifies the vertices it affects and the displacement (x,y,z) per affected vertex.

  • Head height – Candide-3 SU 0
  • Eyebrows vertical position – Candide-3 SU 1
  • Eyes vertical position – Candide-3 SU 2
  • Eyes width – Candide-3 SU 3
  • Eyes height – Candide-3 SU 4
  • Eye separation distance – Candide-3 SU 5
  • Nose vertical position – Candide-3 SU 8
  • Mouth vertical position – Candide-3 SU 10
  • Mouth width – Candide-3 SU 11
  • Eyes vertical difference – n/a
  • Chin width – n/a

In addition to the Candide-3 as described at http://www.icg.isy.liu.se/candide/, face tracking supports the following:

  • Eyes vertical difference
  • Chin width

Face tracking does not support the following:

  • Cheeks z (6)
  • Nose z-extension (7)
  • Nose pointing up (9)

3D Face Model Provided by IFTModel Interface

The Face Tracking SDK also tries to fit a 3D mask to the user’s face. The 3D model is based on the Candide-3 model (http://www.icg.isy.liu.se/candide/):

Note

This model is not returned directly at each call to the Face Tracking SDK, but it can be computed from the AUs and SUs.

Face Tracking SDK Code Samples

The Face Tracking SDK includes two native code samples that demonstrate basic functionality of the SDK and the use of various parameters to optimize performance. The SingleFace sample demonstrates how to track an individual face and animate the corresponding parameters on an avatar-like object. The MultiFace sample demonstrates similar functionality, but with multiple faces. Additionally, the samples demonstrate how various modes and settings within the Face Tracking SDK can be modified to optimize performance. The following list describes command-line parameters for the code samples that can be used to tune performance. The samples can be found in the "Samples" folder of the Developer Toolkit.

Command Line Parameters for the Face Tracking SDK Code Samples

Specified Depth

Command format: -Depth[:DEPTHFORMAT][:DEPTHSIZE]

Requests that the depth image be captured with the specified format and size. The DEPTHFORMAT component can have the following values:

  • “DEPTH”: each depth pixel is encoded as a 16-bit integer representing a value in millimeters, or 0 if undefined.
  • “PLAYERID”: each depth pixel contains a player index in the least significant 3 bits, and a depth value in millimeters in the most significant 13 bits.

If no depth format is specified, or if the “-Depth” argument is absent, the program uses the default format, depth + player ID.

Color

Command format: -Color[:COLORFORMAT][:COLORSIZE]

Requests that the color image be captured with the specified format and size. The COLORFORMAT component can have the following values:

  • “RGB”: the color pixels are encoded in RGB mode.
  • “YUV”: the color pixels are encoded in YUV mode.

If no color format is specified, or if the “-Color” argument is absent, the program uses the default format, RGB.

The COLORSIZE component can have the following values:

  • “320x240”: the color image is 320 x 240 pixels
  • “640x480”: the color image is 640 x 480 pixels
  • “1280x960”: the color image is 1280 x 960 pixels

If no color size is specified, or if the “-Color” argument is absent, the program uses the default size, 640x480.

Near Mode

Command format: -NearMode

If the argument is present, near mode is activated: the depth sensor is programmed to operate in “near mode,” which optimizes operation of the Kinect hardware when the subject is closer than 3 ft. If the argument is absent, the depth sensor operates in standard mode.

Seated Skeleton

Command format: -SeatedSkeleton

If the argument is present, seated skeleton mode is activated: the skeleton is assessed using the “seated” pipeline. If the argument is absent, the skeleton is computed using the standard (standing) pipeline.
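The "PLAYERID" depth layout described above packs two values into each 16-bit pixel; the helpers below unpack them. The function names are ours, but the bit layout matches the description:

```cpp
#include <cassert>
#include <cstdint>

// Unpacks a 16-bit "PLAYERID" depth pixel: player index in the least
// significant 3 bits, depth in millimeters in the most significant 13 bits.
inline uint16_t DepthMillimeters(uint16_t pixel)
{
    return static_cast<uint16_t>(pixel >> 3);
}

inline uint8_t PlayerIndex(uint16_t pixel)
{
    return static_cast<uint8_t>(pixel & 0x7);
}
```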

© 2014 Microsoft