Kinect Fusion Explorer-WPF C# Sample
Kinect for Windows 1.7, 1.8
This sample demonstrates additional features of Kinect Fusion for 3D reconstruction, including adjustment of many reconstruction parameters and export of reconstructed meshes.
DirectX 11 feature support is required to run Kinect Fusion.
To determine the DirectX feature level that your graphics card supports, run DXDiag.exe.
Note: Simply having DirectX 11 installed is not enough; you must also have hardware that supports the DirectX 11.0 feature set.
| The Sample Uses the Following APIs | To Do This |
| --- | --- |
| KinectSensor.KinectSensors property | Get the Kinect sensors that are plugged in and ready for use. |
| FusionFloatImageFrame class | Create image frames for depth data, point cloud data, and reconstruction data. |
| DepthImageFormat.Resolution640x480Fps30 enumeration value | Choose the depth stream format, including the data type, resolution, and frame rate of the data. |
| KinectSensor.DepthStream property and DepthImageStream.Enable method | Enable the sensor to stream out depth data. |
| KinectSensor.Start and KinectSensor.Stop methods | Start or stop streaming data. |
| ImageStream.FramePixelDataLength property | Specify the length of the pixel data buffer when you allocate memory to store the depth stream data from the Kinect. |
| ImageStream.FrameWidth and ImageStream.FrameHeight properties | Specify the width and height of the WriteableBitmap used to store/render the depth data. |
| KinectSensor.DepthFrameReady event | Add an event handler for the depth data. The sensor will signal the event handler when each new frame of depth data is ready. |
| FusionDepthProcessor.ShadePointCloud method | Create a shaded color image of a point cloud. |
| FusionColorImageFrame.CopyPixelDataTo method | Copy the pixel data to a bitmap. |
| ColorReconstruction.AlignDepthFloatToReconstruction method | Align a depth float image to the reconstruction volume to calculate the new camera pose. |
| ColorReconstruction.GetCurrentWorldToCameraTransform method | Retrieve the current internal world-to-camera transform (camera view pose). |
| ColorReconstruction.GetCurrentWorldToVolumeTransform method | Get the current internal world-to-volume transform. |
| ColorReconstruction.IntegrateFrame method | Integrate depth float data into the reconstruction volume from the specified camera pose. |
| ColorReconstruction.CalculatePointCloud method | Calculate a point cloud by raycasting into the reconstruction volume, returning the point cloud containing 3D points and normals of the zero-crossing dense surface at every visible pixel in the image. |
| ColorReconstruction.CalculateMesh method | Export a polygon mesh of the zero-crossing dense surfaces from the reconstruction volume with per-vertex color. |
| ColorReconstruction.DepthToDepthFloatFrame method | Convert the specified array of Kinect depth pixels to a FusionFloatImageFrame object. |
| ColorReconstruction.SmoothDepthFloatFrame method | Spatially smooth a depth float image frame using edge-preserving filtering. |
| ColorReconstruction.AlignPointClouds method | Align two sets of overlapping oriented point clouds and calculate the camera's relative pose. |
| ColorReconstruction.SetAlignDepthFloatToReconstructionReferenceFrame method | Set a reference depth frame that is used internally to help with tracking when calling the AlignDepthFloatToReconstruction method to calculate a new camera pose. |
| ColorReconstruction.CalculatePointCloudAndDepth method | Calculate a point cloud by raycasting into the reconstruction volume from the specified camera pose, returning the point cloud containing 3D points and normals of the zero-crossing dense surface at every visible pixel in the image, a color visualization image, and the depth to the surface. |
| CameraPoseFinder.FusionCreateCameraPoseFinder method | Initialize a new instance of the CameraPoseFinder class. |
| CameraPoseFinder.ResetCameraPoseFinder method | Clear the CameraPoseFinder. |
| CameraPoseFinder.ProcessFrame method | Add the specified camera frame to the camera pose finder database if the frame differs enough from poses that already exist in the database. |
| CameraPoseFinder.FindCameraPose method | Retrieve the poses in the camera pose finder database that are most similar to the current camera input. |
| CameraPoseFinder.GetStoredPoseCount method | Retrieve the number of frames that are currently stored in the camera pose finder database. |
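The APIs above combine into a per-frame track-integrate-raycast loop. The following is a minimal sketch of that loop, not the sample's actual implementation: it omits error handling, frame allocation, and the CameraPoseFinder relocalization path, and the class and method names (FusionPipelineSketch, CreateVolume, ProcessDepthFrame) are illustrative rather than SDK APIs. The parameter values shown (voxel resolution, depth clip range) are plausible defaults, not the sample's settings.

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Toolkit.Fusion;

// Sketch of the per-frame Kinect Fusion pipeline (SDK 1.8 managed API).
// Assumes a started KinectSensor with its depth stream enabled at
// DepthImageFormat.Resolution640x480Fps30, and FusionFloatImageFrame /
// FusionPointCloudImageFrame / FusionColorImageFrame instances allocated
// at 640x480 elsewhere.
public class FusionPipelineSketch
{
    private ColorReconstruction reconstruction;
    private FusionFloatImageFrame depthFloatFrame;
    private FusionPointCloudImageFrame pointCloudFrame;
    private FusionColorImageFrame shadedSurfaceFrame;
    private int[] shadedSurfacePixels;   // copied out to fill a WriteableBitmap

    public void CreateVolume()
    {
        // 384 voxels per axis at 256 voxels per meter gives a 1.5 m cube.
        var volumeParameters = new ReconstructionParameters(256, 384, 384, 384);
        reconstruction = ColorReconstruction.FusionCreateReconstruction(
            volumeParameters,
            ReconstructionProcessor.Amp,  // GPU processing (needs DirectX 11)
            -1,                           // -1 = auto-select the device
            Matrix4.Identity);            // initial world-to-camera pose
    }

    // Called from the sensor's DepthFrameReady handler with the raw pixels.
    public void ProcessDepthFrame(DepthImagePixel[] depthPixels)
    {
        // 1. Convert raw depth to a float frame, clipping to 0.35 m .. 8 m.
        reconstruction.DepthToDepthFloatFrame(
            depthPixels, depthFloatFrame, 0.35f, 8.0f, false);

        // 2. Track the camera: align the new depth frame to the volume.
        float alignmentEnergy;
        bool tracked = reconstruction.AlignDepthFloatToReconstruction(
            depthFloatFrame,
            FusionDepthProcessor.DefaultAlignIterationCount,
            null,                        // no delta-from-reference image needed
            out alignmentEnergy,
            reconstruction.GetCurrentWorldToCameraTransform());

        Matrix4 worldToCamera = reconstruction.GetCurrentWorldToCameraTransform();

        // 3. Integrate the depth data into the volume from the new pose,
        //    but only when tracking succeeded.
        if (tracked)
        {
            reconstruction.IntegrateFrame(
                depthFloatFrame,
                FusionDepthProcessor.DefaultIntegrationWeight,
                worldToCamera);
        }

        // 4. Raycast the volume into a point cloud and shade it for display.
        reconstruction.CalculatePointCloud(pointCloudFrame, worldToCamera);
        FusionDepthProcessor.ShadePointCloud(
            pointCloudFrame, worldToCamera, shadedSurfaceFrame, null);
        shadedSurfaceFrame.CopyPixelDataTo(shadedSurfacePixels);
    }
}
```

When tracking fails repeatedly, the sample's relocalization path uses CameraPoseFinder.FindCameraPose to look up similar stored poses and AlignPointClouds to re-establish the camera transform; CalculateMesh is then called on demand to export the reconstructed surface.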
To run a sample you must have the Kinect for Windows SDK installed. To compile a sample, you must have the developer toolkit installed. The latest SDK and developer toolkit are available on the developer download page. If you need help installing the toolkit, see To Install the SDK and Toolkit. The toolkit includes a sample browser, which you can use to launch a sample or download it to your machine. To open the sample browser, click Start > All Programs > Kinect for Windows SDK [version number] > Developer Toolkit Browser.
If you need help loading a sample in Visual Studio or using Visual Studio to compile, run, or debug, see Opening, Building, and Running Samples in Visual Studio.