INuiFusionColorReconstruction::IntegrateFrame Method

Integrates depth float data and color data into the reconstruction volume from the specified camera pose.

Syntax

public:
HRESULT IntegrateFrame(
         const NUI_FUSION_IMAGE_FRAME *pDepthFloatFrame,
         const NUI_FUSION_IMAGE_FRAME *pColorFrame,
         USHORT maxIntegrationWeight,
         FLOAT maxColorIntegrationAngle,
         const Matrix4 *pWorldToCameraTransform
)

Parameters

  • pDepthFloatFrame
    Type: NUI_FUSION_IMAGE_FRAME
    The depth float frame to be integrated.

  • pColorFrame
    Type: NUI_FUSION_IMAGE_FRAME
    The color frame to be integrated.

  • maxIntegrationWeight
    Type: USHORT
    A parameter that controls the temporal smoothing of depth integration. The minimum value is one. Lower values produce noisier reconstructions, but suit more dynamic environments because moving objects integrate and disintegrate faster. Higher values integrate objects more slowly, but provide finer detail with less noise.

  • maxColorIntegrationAngle
    Type: FLOAT
    The angle relative to the surface normal, in degrees, within which color will be integrated. The useful range of values for this parameter is [0.0f, 90.0f]. You can use this parameter to integrate color only when the Kinect sensor views the surface nearly head-on (that is, the camera's direction of view is perpendicular to the surface and aligned with its normal), or within a specified angle of the surface normal direction.

    This angle, measured from the surface normal, defines an acceptance half angle. For example, a +/- 90 degree acceptance angle in all directions (that is, a 180 degree hemisphere) relative to the normal integrates color for any orientation of the sensor toward the front of the surface, even when viewing it edge-on; an acceptance angle of zero integrates color only along a single ray exactly perpendicular to the surface. A conceptual sketch of this test follows the parameter list.

    Setting this value incurs a run-time processing cost. However, leaving the angle unrestricted causes this method to integrate color from any angle over all voxels along the camera rays around the zero-crossing surface region in the volume, which can cause thin structures to receive the same color on both sides.

  • pWorldToCameraTransform
    Type: Matrix4
    The camera pose. This is usually the camera pose result from the most recent call to the AlignPointClouds or AlignDepthFloatToReconstruction method.

    Note

    This method also sets the internal camera pose to this pose.
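For intuition, the acceptance test that maxColorIntegrationAngle describes can be sketched as a dot-product comparison between the surface normal and the normalized direction from the surface point back toward the camera. The following is an illustrative sketch of the geometry only, not the SDK's internal implementation; the Vec3 type and function name are hypothetical.

#include <cmath>

struct Vec3 { float x, y, z; };

// Both input vectors are assumed to be unit length. Returns true when the
// camera direction lies within the acceptance cone of half angle
// maxAngleDeg (degrees) around the surface normal, that is, when color
// would be integrated for this surface point.
bool WithinAcceptanceCone(Vec3 normal, Vec3 toCamera, float maxAngleDeg)
{
    const float cosAngle = normal.x * toCamera.x
                         + normal.y * toCamera.y
                         + normal.z * toCamera.z;
    const float cosLimit = std::cos(maxAngleDeg * 3.14159265f / 180.0f);

    // 90 degrees accepts the whole front-facing hemisphere (cosLimit == 0);
    // 0 degrees accepts only a ray exactly along the normal.
    return cosAngle >= cosLimit;
}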

Return value

Type: HRESULT
Returns S_OK if successful; otherwise, returns a failure code.
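
Example

A minimal calling sketch, not production code. The names pVolume, pDepthFloatFrame, and pColorFrame are hypothetical placeholders for an initialized INuiFusionColorReconstruction pointer and two populated NUI_FUSION_IMAGE_FRAME frames, and the weight and angle values are example choices, not API defaults. The pose is retrieved here with GetCurrentWorldToCameraTransform; in a full pipeline it would typically come from the most recent AlignDepthFloatToReconstruction or AlignPointClouds call, as noted above.

Matrix4 worldToCamera;
HRESULT hr = pVolume->GetCurrentWorldToCameraTransform(&worldToCamera);

if (SUCCEEDED(hr))
{
    hr = pVolume->IntegrateFrame(
        pDepthFloatFrame,
        pColorFrame,
        200,    // maxIntegrationWeight: example value; use lower values for dynamic scenes
        45.0f,  // maxColorIntegrationAngle: example +/- 45 degree half angle
        &worldToCamera);
}

if (FAILED(hr))
{
    // A failure code was returned; for example, log it and skip this frame.
}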

Requirements

Header: nuikinectfusioncolorvolume.h

Library: TBD