ColorReconstruction.AlignDepthFloatToReconstruction Method
Kinect for Windows 1.8
Aligns a depth float image to the reconstruction volume to calculate the new camera pose.

Syntax

public bool AlignDepthFloatToReconstruction (
    FusionFloatImageFrame depthFloatFrame,
    int maxAlignIterationCount,
    FusionFloatImageFrame deltaFromReferenceFrame,
    out float alignmentEnergy,
    Matrix4 worldToCameraTransform
)
Parameters
- depthFloatFrame
- Type: FusionFloatImageFrame
The depth float frame to be processed.
- maxAlignIterationCount
- Type: Int32
The maximum number of iterations of the algorithm to run. The minimum value is one. Using only a small number of iterations gives a faster run time, but the algorithm may not converge to the correct transformation.
- deltaFromReferenceFrame
- Type: FusionFloatImageFrame
A pre-allocated float image frame, to be filled with information about how well each observed pixel aligns with the passed-in reference frame. This could be processed to create a color rendering, or used as input to additional vision algorithms such as object segmentation. These residual values are normalized to the range −1 to 1 and represent the alignment cost/energy for each pixel. Larger magnitude values (positive or negative) indicate more discrepancy; lower values indicate less discrepancy or less information at that pixel.
Note that if valid depth exists, but no reconstruction model exists behind the depth pixels, a value of zero (which indicates perfect alignment) will be returned for that area. In contrast, where no valid depth occurs a value of one will always be returned. Pass null to this parameter if you do not want to use this functionality.
- alignmentEnergy
- Type: Single
Receives a value describing how well the observed frame aligns to the model with the calculated pose. A larger magnitude value represents more discrepancy, and a lower value represents less discrepancy. An exact zero value (perfect alignment) is unlikely ever to be returned, as every frame from the sensor contains some sensor noise.
- worldToCameraTransform
- Type: Matrix4
The best guess at the current camera pose. This is usually the camera pose result from the most recent call to the FusionDepthProcessor.AlignPointClouds or ColorReconstruction.AlignDepthFloatToReconstruction method.
Return Value
Returns true if successful; returns false if the algorithm encountered a problem aligning the input depth image and could not calculate a valid transformation.
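A minimal per-frame tracking sketch in C#, under stated assumptions: the surrounding setup (sensor initialization, creation of the `reconstruction` object, and allocation of `depthFloatFrame` and `deltaFromReferenceFrame`) is assumed to exist elsewhere, and the variable names are illustrative. `FusionDepthProcessor.DefaultAlignIterationCount`, `GetCurrentWorldToCameraTransform`, and `CopyPixelDataTo` are part of the Kinect Fusion 1.8 toolkit; consult the SDK reference to confirm their availability on `ColorReconstruction`.

```csharp
// Assumed to be created elsewhere (illustrative names):
//   ColorReconstruction reconstruction;
//   FusionFloatImageFrame depthFloatFrame, deltaFromReferenceFrame;
float alignmentEnergy;

// Start from the pose estimated for the previous frame.
Matrix4 worldToCameraTransform = reconstruction.GetCurrentWorldToCameraTransform();

bool aligned = reconstruction.AlignDepthFloatToReconstruction(
    depthFloatFrame,
    FusionDepthProcessor.DefaultAlignIterationCount,
    deltaFromReferenceFrame,   // pass null if residuals are not needed
    out alignmentEnergy,
    worldToCameraTransform);

if (aligned)
{
    // Tracking succeeded: retrieve the updated pose before further processing,
    // e.g. integrating the new depth data into the volume.
    worldToCameraTransform = reconstruction.GetCurrentWorldToCameraTransform();

    // Optionally copy out the per-pixel residuals (normalized -1 to 1)
    // for visualization or further analysis.
    float[] residuals = new float[depthFloatFrame.Width * depthFloatFrame.Height];
    deltaFromReferenceFrame.CopyPixelDataTo(residuals);
}
else
{
    // Tracking failed (e.g. too little scene structure or too-fast motion);
    // a common recovery strategy is to skip integration for this frame and
    // retry alignment with the next frame.
}
```

Note that a large `alignmentEnergy` after a successful call can still indicate a poor-quality pose; applications often threshold this value before integrating the frame.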