
Kinect Fusion Basics-WPF C# Sample

Kinect for Windows 1.7, 1.8

This sample illustrates how to use Kinect Fusion for 3D reconstruction.

Important

DirectX 11 feature support is required to run Kinect Fusion.

To determine the DirectX feature level supported by your graphics card, run DxDiag.exe:

  1. Launch DxDiag.exe.
  2. Navigate to the “Display” tab.
  3. In the “Drivers” area, there is a text field labeled “Feature Levels:”.
  4. If 11.0 is in the list of supported feature levels, then Kinect Fusion will run in GPU mode.

Note: Simply having DirectX 11 installed is not enough; you must also have hardware that supports the DirectX 11.0 feature set.
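
If the graphics hardware does not support feature level 11.0, reconstruction can still run on the CPU, although much more slowly. The following is a minimal sketch (not the sample's exact code) of one way to attempt GPU-accelerated reconstruction and fall back to the CPU processor; the volume parameters and the exception type caught here are assumptions for this sketch.

using System;
using Microsoft.Kinect.Toolkit.Fusion;

// Illustrative volume parameters: 256 voxels per axis at 256 voxels per meter
// gives a 1-meter cube reconstruction volume (values chosen for this sketch).
var volumeParameters = new ReconstructionParameters(256, 256, 256, 256);

Reconstruction volume;
try
{
    // Try the hardware-accelerated (DirectX 11 / C++ AMP) processor first.
    // A device index of -1 lets Kinect Fusion pick a suitable GPU.
    volume = Reconstruction.FusionCreateReconstruction(
        volumeParameters, ReconstructionProcessor.Amp, -1, Matrix4.Identity);
}
catch (InvalidOperationException)
{
    // Assumed failure path: no DirectX 11.0 capable GPU was found,
    // so fall back to the (slower) CPU processor.
    volume = Reconstruction.FusionCreateReconstruction(
        volumeParameters, ReconstructionProcessor.Cpu, -1, Matrix4.Identity);
}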

Overview

The sample uses the following APIs:

KinectSensor.KinectSensors property
  Get the Kinect sensors that are plugged in and ready for use.
Reconstruction.FusionCreateReconstruction method
  Create a volume cube with the sensor at the center of the near plane and the volume directly in front of the sensor.
FusionFloatImageFrame class
  Create image frames for depth data, point cloud data, and reconstruction data.
DepthImageFormat.Resolution640x480Fps30 enumeration value
  Choose the depth stream format, including the data type, resolution, and frame rate of the data.
KinectSensor.DepthStream property and DepthImageStream.Enable method
  Enable the sensor to stream out depth data.
KinectSensor.Start and KinectSensor.Stop methods
  Start or stop streaming data.
ImageStream.FramePixelDataLength property
  Specify the length of the pixel data buffer when you allocate memory to store the depth stream data from the Kinect.
ImageStream.FrameWidth and ImageStream.FrameHeight properties
  Specify the width and height of the WriteableBitmap used to store and render the depth data.
KinectSensor.DepthFrameReady event
  Add an event handler for the depth data. The sensor signals the event handler when each new frame of depth data is ready.
Reconstruction.ProcessFrame method
  Calculate the camera pose and then integrate the frame if tracking is successful.
Reconstruction.ResetReconstruction method
  If tracking failed, clear the 3D reconstruction volume and set a new camera pose.
Reconstruction.CalculatePointCloud method
  Calculate a point cloud by raycasting into the reconstruction volume.
FusionDepthProcessor.ShadePointCloud method
  Create a shaded color image of a point cloud.
FusionColorImageFrame.CopyPixelDataTo method
  Copy the pixel data to a bitmap.
Reconstruction.GetCurrentWorldToCameraTransform method
  Get the current internal world-to-camera transform (camera view pose).
Reconstruction.GetCurrentWorldToVolumeTransform method
  Get the current internal world-to-volume transform.
Reconstruction.DepthToDepthFloatFrame method
  Convert the specified array of Kinect depth pixels to a FusionFloatImageFrame object.
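
As a rough illustration of how these APIs fit together, the sketch below outlines a per-frame processing path similar to the sample's: copy the raw depth pixels, convert them to a float frame, track the camera and integrate, raycast the volume into a point cloud, shade it, and copy the shaded pixels into a WriteableBitmap. The field names, buffer sizes, and surrounding setup are assumptions for this sketch rather than the sample's exact code.

using System;
using System.Windows;
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.Fusion;

// Fields assumed to be initialized elsewhere (names are illustrative):
//   sensor          - a started KinectSensor with DepthStream enabled at Resolution640x480Fps30
//   volume          - the Reconstruction created by FusionCreateReconstruction
//   depthPixels     - new DepthImagePixel[sensor.DepthStream.FramePixelDataLength]
//   depthFloatFrame - new FusionFloatImageFrame(640, 480)
//   pointCloudFrame - new FusionPointCloudImageFrame(640, 480)
//   shadedFrame     - new FusionColorImageFrame(640, 480)
//   shadedPixels    - new int[640 * 480]
//   bitmap          - new WriteableBitmap(640, 480, 96, 96, PixelFormats.Bgr32, null)

private void SensorDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame == null)
        {
            return;
        }

        depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);
    }

    // Convert raw depth to a float frame in meters, clipped to the default depth range.
    this.volume.DepthToDepthFloatFrame(
        this.depthPixels,
        this.depthFloatFrame,
        FusionDepthProcessor.DefaultMinimumDepth,
        FusionDepthProcessor.DefaultMaximumDepth,
        false);

    // Track the camera pose and, if tracking succeeds, integrate the frame into the volume.
    bool trackingSucceeded = this.volume.ProcessFrame(
        this.depthFloatFrame,
        FusionDepthProcessor.DefaultAlignIterationCount,
        FusionDepthProcessor.DefaultIntegrationWeight,
        this.volume.GetCurrentWorldToCameraTransform());

    if (!trackingSucceeded)
    {
        // After repeated failures the sample resets the volume; this sketch simply skips the frame.
        return;
    }

    // Raycast the volume from the current camera pose and shade the resulting point cloud
    // (passing null skips the shaded surface normals image).
    Matrix4 cameraPose = this.volume.GetCurrentWorldToCameraTransform();
    this.volume.CalculatePointCloud(this.pointCloudFrame, cameraPose);
    FusionDepthProcessor.ShadePointCloud(this.pointCloudFrame, cameraPose, this.shadedFrame, null);

    // Copy the shaded surface into the WriteableBitmap shown in the UI.
    this.shadedFrame.CopyPixelDataTo(this.shadedPixels);
    this.bitmap.WritePixels(
        new Int32Rect(0, 0, this.bitmap.PixelWidth, this.bitmap.PixelHeight),
        this.shadedPixels,
        this.bitmap.PixelWidth * sizeof(int),
        0);
}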

To run a sample, you must have the Kinect for Windows SDK installed. To compile a sample, you must have the developer toolkit installed. The latest SDK and developer toolkit are available on the developer download page. If you need help installing the toolkit, see To Install the SDK and Toolkit. The toolkit includes a sample browser, which you can use to launch a sample or download it to your machine. To open the sample browser, click Start > All Programs > Kinect for Windows SDK [version number] > Developer Toolkit Browser.

If you need help loading a sample in Visual Studio or using Visual Studio to compile, run, or debug, see Opening, Building, and Running Samples in Visual Studio.
