Face Detection for Media Capture

[Some information relates to pre-released product which may be substantially modified before it's commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here.]

This article describes how to apply the FaceDetectionEffect to the media capture preview stream. The face detection effect also allows you to receive a notification whenever a face is detected in the preview stream and provides the bounding box for each detected face within the preview frame. On supported devices, the face detection effect also provides enhanced exposure and focus on the most important face in the scene.

Note  

This article builds on concepts and code discussed in Capture Photos and Video with MediaCapture, which describes the steps for implementing basic photo and video capture. It is recommended that you familiarize yourself with the basic media capture pattern in that article before moving on to more advanced capture scenarios. The code in this article assumes that your app already has an instance of MediaCapture that has been properly initialized.

Face detection namespaces

To use face detection, your app must include the following namespaces in addition to the namespaces required for basic media capture.


using Windows.Media.Core;          // FaceDetectionEffect, FaceDetectionEffectDefinition
using Windows.Graphics.Imaging;    // BitmapBounds, the type of DetectedFace.FaceBox


Initialize the face detection effect and add it to the preview stream

Video effects are implemented using two APIs: an effect definition, which provides the settings that the capture device needs to initialize the effect, and an effect instance, which can be used to control the effect after it has been added. Because you may want to access the effect instance from multiple places within your code, you should typically declare a member variable to hold the object.


FaceDetectionEffect _faceDetectionEffect;


In your app, after you have initialized the MediaCapture object, create a new instance of FaceDetectionEffectDefinition. Set the DetectionMode property to prioritize either faster or more accurate face detection. Set SynchronousDetectionEnabled to false so that incoming frames are not delayed waiting for face detection to complete; delaying frames can result in a choppy preview experience.

Register the effect with the capture device by calling AddVideoEffectAsync on your MediaCapture object, providing the FaceDetectionEffectDefinition and specifying MediaStreamType::VideoPreview to indicate that the effect should be applied to the video preview stream, as opposed to the capture stream. AddVideoEffectAsync returns an instance of the added effect. Because this method can be used with multiple effect types, you must cast the returned instance to a FaceDetectionEffect object.

Enable or disable the effect by setting the FaceDetectionEffect::Enabled property. Adjust how often the effect analyzes frames by setting the FaceDetectionEffect::DesiredDetectionInterval property. Both of these properties can be adjusted while media capture is ongoing.



// Create the definition, which will contain some initialization settings
var definition = new FaceDetectionEffectDefinition();

// To ensure preview smoothness, do not delay incoming samples
definition.SynchronousDetectionEnabled = false;

// In this scenario, choose detection speed over accuracy
definition.DetectionMode = FaceDetectionMode.HighPerformance;

// Add the effect to the preview stream
_faceDetectionEffect = (FaceDetectionEffect)await _mediaCapture.AddVideoEffectAsync(definition, MediaStreamType.VideoPreview);

// Choose the shortest interval between detection events
_faceDetectionEffect.DesiredDetectionInterval = TimeSpan.FromMilliseconds(33);

// Start detecting faces
_faceDetectionEffect.Enabled = true;



Receive notifications when faces are detected

If you want to perform some action when faces are detected, such as drawing a box around detected faces in the video preview, you can register for the FaceDetected event.


// Register for face detection events
_faceDetectionEffect.FaceDetected += FaceDetectionEffect_FaceDetected;


In the handler for the event, you can get a list of all faces detected in a frame by accessing the FaceDetectionEffectFrame::DetectedFaces property of the FaceDetectedEventArgs. The FaceBox property of each DetectedFace is a BitmapBounds structure that describes the rectangle containing the detected face, in units relative to the preview stream dimensions. To view sample code that transforms the preview stream coordinates into screen coordinates, see TBD sample URL.


private void FaceDetectionEffect_FaceDetected(FaceDetectionEffect sender, FaceDetectedEventArgs args)
{
    foreach (Windows.Media.FaceAnalysis.DetectedFace face in args.ResultFrame.DetectedFaces)
    {
        BitmapBounds faceRect = face.FaceBox;

        // Draw a rectangle on the preview stream for each face
    }
}
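
While the sample showing the full preview-to-screen transformation is not yet linked, the core scaling math is straightforward. The following sketch scales a face rectangle from preview-stream pixel coordinates into the coordinate space of the UI element displaying the preview. The `FaceBoxMapper` helper name is illustrative, not a platform API, and the sketch assumes the preview fills the element exactly, with no letterboxing or cropping; a letterboxed preview would additionally need an offset.

```csharp
using System;

// Illustrative helper (not a platform API): scales a rectangle expressed in
// preview-stream pixels to the coordinate space of the element showing the
// preview. Assumes the preview fills the element with no letterboxing.
public static class FaceBoxMapper
{
    public static (double X, double Y, double Width, double Height) ToElementRect(
        uint faceX, uint faceY, uint faceWidth, uint faceHeight,
        double streamWidth, double streamHeight,
        double elementWidth, double elementHeight)
    {
        // Scale factors between the preview resolution and the element size
        double scaleX = elementWidth / streamWidth;
        double scaleY = elementHeight / streamHeight;

        return (faceX * scaleX, faceY * scaleY, faceWidth * scaleX, faceHeight * scaleY);
    }
}
```

In the FaceDetected handler above, you would pass the X, Y, Width, and Height members of face.FaceBox, along with the current preview resolution and the size of your preview element.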


Clean up the face detection effect

When your app is done capturing, before disposing of the MediaCapture object, disable the face detection effect by setting FaceDetectionEffect::Enabled to false and unregister your FaceDetected event handler if you previously registered one. Call MediaCapture::ClearEffectsAsync, specifying the video preview stream since that was the stream to which the effect was added. Finally, set your member variable to null.


// Disable detection
_faceDetectionEffect.Enabled = false;

// Unregister the event handler
_faceDetectionEffect.FaceDetected -= FaceDetectionEffect_FaceDetected;

// Remove the effect from the preview stream
await _mediaCapture.ClearEffectsAsync(MediaStreamType.VideoPreview);

// Clear the member variable that held the effect instance
_faceDetectionEffect = null;


Check for focus and exposure support for detected faces

Not all devices have a capture device that can adjust its focus and exposure based on detected faces. Because face detection consumes device resources, you may only want to enable face detection on devices that can use the feature to enhance capture. To see if face-based capture optimization is available, get the VideoDeviceController for your initialized MediaCapture and then get the video device controller's RegionsOfInterestControl. Check the MaxRegions property to see if at least one region is supported. Then check whether either AutoExposureSupported or AutoFocusSupported is true. If these conditions are met, the device can take advantage of face detection to enhance capture.


var regionsControl = _mediaCapture.VideoDeviceController.RegionsOfInterestControl;
bool faceDetectionFocusAndExposureSupported =
    regionsControl.MaxRegions > 0 &&
    (regionsControl.AutoExposureSupported || regionsControl.AutoFocusSupported);

