How To: Create a Depth Texture

Demonstrates how to create a texture that contains depth information for a scene using a customized RenderTarget2D, DepthStencilBuffer, and a simple Effect.

To render the depth to a texture, you create a new RenderTarget2D and a DepthStencilBuffer with your desired depth format. Then you render your scene by using a shader that draws each pixel in the render target as a depth value instead of a normal color. Finally, you use GetTexture on the RenderTarget2D to save that information to a Texture2D.

To render the scene, this sample uses a technique from a customized Effect file. For more information, see How To: Use EffectParameters and EffectTechniques.

The Complete Sample

The code in this topic shows you the technique. You can download a complete code sample for this topic, including full source code and any additional supporting files required by the sample.

Creating a Depth Texture

To create a depth texture

  1. In your game's LoadContent method, create a new RenderTarget2D for rendering the depth in your scene.

    You may want to choose a surface format that gives you the most depth information. This example uses SurfaceFormat.Single if the game computer supports it. On Windows-based computers, use CheckDeviceFormat to see if your chosen SurfaceFormat is supported; on Xbox 360, consult Xbox 360 Surface Formats for the supported SurfaceFormats.

    SurfaceFormat.Single creates a 32-bit floating point value for each pixel, representing the red channel. No bits are used for the green, blue, or alpha channels. Using floating point values for depth allows more precision for the shadow calculations. This results in smoother shadows. Using a large render target and enabling antialiasing for the render target also improves the quality of the shadow rendering.

    To create the render target, there are four variables to consider: the proposed width of the texture, the proposed height of the texture, the surface format of the texture, and whether to use anti-aliasing. Not all PC video cards will accept all configurations of these four parameters. CreateRenderTarget tries to find the best match by first using CheckDeviceFormat to determine if the caller's preferred surface format is supported (in this case, a floating point texture). Next, it uses CheckDeviceMultiSampleType to confirm the anti-aliasing setting with the chosen surface format.

    shadowRenderTarget = GfxComponent.CreateRenderTarget(GraphicsDevice,
        1, SurfaceFormat.Single);

    public static RenderTarget2D CreateRenderTarget(GraphicsDevice device,
        int numberLevels, SurfaceFormat surface)
    {
        MultiSampleType type = device.PresentationParameters.MultiSampleType;

        // If the card can't use the surface format
        if (!GraphicsAdapter.DefaultAdapter.CheckDeviceFormat(DeviceType.Hardware,
            GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Format, TextureUsage.None,
            QueryUsages.None, ResourceType.RenderTarget, surface))
        {
            // Fall back to current display format
            surface = device.DisplayMode.Format;
        }
        // Or it can't accept that surface format with the current AA settings
        else if (!GraphicsAdapter.DefaultAdapter.CheckDeviceMultiSampleType(
            DeviceType.Hardware, surface,
            device.PresentationParameters.IsFullScreen, type))
        {
            // Fall back to no antialiasing
            type = MultiSampleType.None;
        }

        int width, height;

        // See if we can use our back-buffer size as our texture size
        CheckTextureSize(device.PresentationParameters.BackBufferWidth,
            device.PresentationParameters.BackBufferHeight,
            out width, out height);

        // Create our render target
        return new RenderTarget2D(device, width, height, numberLevels,
            surface, type, 0);
    }

    Lastly, before creating the texture, CreateRenderTarget calls CheckTextureSize, which uses GraphicsDeviceCapabilities to see if the video card supports the chosen texture size (some video cards require all textures to be powers of two, or square, or both). CheckTextureSize adjusts the texture size to fit the listed capabilities.

    public static bool CheckTextureSize(int width, int height,
        out int newwidth, out int newheight)
    {
        bool retval = false;

        GraphicsDeviceCapabilities Caps =
            GraphicsAdapter.DefaultAdapter.GetCapabilities(DeviceType.Hardware);

        if (Caps.TextureCapabilities.RequiresPower2)
        {
            retval = true;  // Return true to indicate the numbers changed

            // Find the base-two log of the current width, round it up
            // to the next integer,
            double exp = Math.Ceiling(Math.Log(width) / Math.Log(2));
            // and use that as the exponent of the new width
            width = (int)Math.Pow(2, exp);

            // Repeat the process for height
            exp = Math.Ceiling(Math.Log(height) / Math.Log(2));
            height = (int)Math.Pow(2, exp);
        }
        if (Caps.TextureCapabilities.RequiresSquareOnly)
        {
            retval = true;  // Return true to indicate the numbers changed
            width = Math.Max(width, height);
            height = width;
        }

        newwidth = Math.Min(Caps.MaxTextureWidth, width);
        newheight = Math.Min(Caps.MaxTextureHeight, height);

        return retval;
    }
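
    As an illustration, the power-of-two rounding used by CheckTextureSize can be exercised on its own. The following standalone sketch extracts that calculation into a NextPowerOfTwo helper (a hypothetical name, not part of the sample):

```csharp
using System;

class Power2Demo
{
    // Round a dimension up to the next power of two -- the same
    // calculation CheckTextureSize performs when the card's
    // TextureCapabilities report RequiresPower2.
    static int NextPowerOfTwo(int value)
    {
        double exp = Math.Ceiling(Math.Log(value) / Math.Log(2));
        return (int)Math.Pow(2, exp);
    }

    static void Main()
    {
        // An 800x600 back buffer would be promoted to a 1024x1024
        // texture on such a card.
        Console.WriteLine(NextPowerOfTwo(800));
        Console.WriteLine(NextPowerOfTwo(600));
    }
}
```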
  2. Next in your LoadContent method, create a new DepthStencilBuffer for your custom RenderTarget2D.

    The DepthStencilBuffer settings for width, height, and multisample quality should be the same as the values chosen for your render target. You may want to choose a depth format that gives you the most depth information. In this example, you will use DepthFormat.Depth24Stencil8Single if the game computer supports it. Use CheckDepthStencilMatch on Windows-based computers to see if your chosen DepthFormat is supported. Otherwise, consult Xbox 360 Surface Formats for supported DepthFormats on Xbox 360. DepthFormat.Depth24Stencil8Single creates a 24-bit floating-point value for depth. This allows more depth precision than the normal 24-bit fixed-point buffer. Still, you must be careful with floating-point values to guard against floating-point errors for depth values that are nearly identical.

    shadowDepthBuffer = GfxComponent.CreateDepthStencil(shadowRenderTarget,
        DepthFormat.Depth24Stencil8Single);

    public static DepthStencilBuffer CreateDepthStencil(RenderTarget2D target)
    {
        return new DepthStencilBuffer(target.GraphicsDevice, target.Width,
            target.Height, target.GraphicsDevice.DepthStencilBuffer.Format,
            target.MultiSampleType, target.MultiSampleQuality);
    }

    public static DepthStencilBuffer CreateDepthStencil(RenderTarget2D target,
        DepthFormat depth)
    {
        if (GraphicsAdapter.DefaultAdapter.CheckDepthStencilMatch(DeviceType.Hardware,
            GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Format, target.Format,
            depth))
        {
            return new DepthStencilBuffer(target.GraphicsDevice, target.Width,
                target.Height, depth, target.MultiSampleType,
                target.MultiSampleQuality);
        }
        return CreateDepthStencil(target);
    }
  3. In your game's Update method, when you calculate the projection matrix from the point of view of the light source, include as many of the objects that are visible to the camera as possible.

    In this example, both objects are always visible to the camera and do not move, so the bounding sphere used is the bounding sphere of both objects combined.

  4. Set the near and far plane of the projection matrix as close as possible to the objects in the scene.

    This will result in more accurate depth values for the depth map. The projection matrix is passed to the effect.

    protected override void Update(GameTime gameTime)
    {
        Matrix proj = CalcLightProjection(LightPos, bounds, defaultViewport);
    }
  5. In your Draw method, use CompareFunction.LessEqual to set the appropriate DepthBufferFunction for your depth texture effect.

    When a pixel is depth-tested, CompareFunction.LessEqual accepts the pixel if its depth value is less than or equal to the value already stored in the depth buffer, and discards it otherwise, so the lowest (closest) depth value at each pixel is preserved.

    GraphicsDevice.RenderState.DepthBufferFunction = CompareFunction.LessEqual;
  6. In your Draw method, call GraphicsDevice.SetRenderTarget to set the current render target (target 0) to the render target you created in Step 1.

  7. Store the current DepthStencilBuffer on the GraphicsDevice in a local variable before assigning the DepthStencilBuffer you created previously to the DepthStencilBuffer property on GraphicsDevice.

    GraphicsDevice.SetRenderTarget(0, shadowRenderTarget);
    // Cache the current depth buffer
    DepthStencilBuffer old = GraphicsDevice.DepthStencilBuffer;
    // Set our custom depth buffer
    GraphicsDevice.DepthStencilBuffer = shadowDepthBuffer;
  8. Render your scene using an effect that will draw the depth value of each pixel to your render target.

    The vertex and pixel shaders for such an effect are shown in Rendering Depth in HLSL below.
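    The sample's full scene-drawing code is not reproduced in this topic. As a rough sketch, drawing with an Effect technique in XNA 3.x might look like the following, where shadowEffect, the "ShadowMap" technique name, and DrawScene are hypothetical names standing in for the sample's own:

```csharp
// Render the shadow map.
// Sketch only: shadowEffect, "ShadowMap", and DrawScene are
// hypothetical names, not taken from the sample.
shadowEffect.CurrentTechnique = shadowEffect.Techniques["ShadowMap"];
shadowEffect.Begin();
foreach (EffectPass pass in shadowEffect.CurrentTechnique.Passes)
{
    pass.Begin();
    DrawScene();   // draw the scene geometry; depth is written as color
    pass.End();
}
shadowEffect.End();
```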
  9. After you have rendered the depth values to the render target, call SetRenderTarget again, passing null as the render target.

    This resets the current render target (target 0) to the display's back buffer.

  10. Set the DepthStencilBuffer on the GraphicsDevice to its former value.

    // Set render target back to the back buffer
    GraphicsDevice.SetRenderTarget(0, null);
    // Reset the depth buffer
    GraphicsDevice.DepthStencilBuffer = old;
  11. Call GetTexture on your RenderTarget2D to get a Texture2D containing the depth values for your scene.

    This is your depth texture.

    // Return the shadow map as a texture
    return shadowRenderTarget.GetTexture();
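
    The depth texture is typically handed to a second rendering pass that compares scene depths against it. A minimal sketch, assuming a sceneEffect whose effect file declares a "ShadowMapTexture" parameter (both hypothetical names, not taken from this sample):

```csharp
// Sketch only: sceneEffect and "ShadowMapTexture" are hypothetical names.
Texture2D shadowMap = shadowRenderTarget.GetTexture();
sceneEffect.Parameters["ShadowMapTexture"].SetValue(shadowMap);
```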

Rendering Depth in HLSL

To render depth in HLSL

  1. Use a custom vertex shader to render depth values in HLSL.

    The vertex shader returns two values to the pixel shader. The first value is a POSITION: the incoming POSITION transformed into the view and projection space of the light source. The second value is the depth of the transformed POSITION, calculated by dividing the z coordinate by the w coordinate. Dividing by w gives a depth between 0 and 1. The depth is then subtracted from 1 to get more precision from the floating-point format. The depth is packed into a TEXCOORD semantic (in this case, TEXCOORD0) to be passed to the pixel shader.

    struct VS_SHADOW_OUTPUT
    {
        float4 Position : POSITION;
        float Depth : TEXCOORD0;
    };

    float4 GetPositionFromLight(float4 position)
    {
        float4x4 WorldViewProjection = mul(mul(g_mWorld, g_mLightView), g_mLightProj);
        return mul(position, WorldViewProjection);
    }

    VS_SHADOW_OUTPUT RenderShadowMapVS(float4 vPos : POSITION)
    {
        VS_SHADOW_OUTPUT Out;
        Out.Position = GetPositionFromLight(vPos);
        // Depth is Z/W.  This is passed to the pixel shader.
        // Subtracting from 1 gives us more precision in floating point.
        Out.Depth.x = 1 - (Out.Position.z / Out.Position.w);
        return Out;
    }
  2. Use a custom pixel shader to render depth values in HLSL.

    The pixel shader returns one value: the depth of the pixel, which was calculated by the vertex shader and passed in through the TEXCOORD0 semantic.

    The pixel shader returns the depth as the red value. If you are using SurfaceFormat.Single, this holds a 32-bit floating-point value. More bits give you smoother shadows, and floating point creates smoother shadows than fixed point. However, this shader will create shadows using almost any SurfaceFormat value.

    float4 RenderShadowMapPS( VS_SHADOW_OUTPUT In ) : COLOR
    {
        // The depth is Z divided by W.  We return this value entirely
        // in a 32-bit red channel using SurfaceFormat.Single.  This
        // preserves the floating-point data for finer detail.
        return float4(In.Depth.x, 0, 0, 1);
    }
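
    To be usable from the Effect, these shaders are compiled into a technique in the effect file. A minimal declaration might look like the following; the technique name and the vs_2_0/ps_2_0 shader-model targets are assumptions, not taken from the sample:

```hlsl
// Sketch only: the technique name and shader-model targets are assumptions.
technique RenderShadowMap
{
    pass P0
    {
        VertexShader = compile vs_2_0 RenderShadowMapVS();
        PixelShader  = compile ps_2_0 RenderShadowMapPS();
    }
}
```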
