Apply textures to primitives

[This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation]

Here we load raw texture data and apply that data to a 3D primitive by using the cube that we created in Using depth and effects on primitives. We also introduce a simple dot-product lighting model, where the cube surfaces are lighter or darker based on their distance and angle relative to a light source.

Objective: To apply textures to primitives.


We assume that you are familiar with C++. You also need basic experience with graphics programming concepts.

We also assume that you went through Quickstart: setting up DirectX resources and displaying an image, Creating shaders and drawing primitives, and Using depth and effects on primitives.

Time to complete: 20 minutes.


1. Defining variables for a textured cube

First, we need to define the BasicVertex and ConstantBuffer structures for the textured cube. These structures specify the vertex positions, orientations, and textures for the cube and how the cube will be viewed. Otherwise, we declare variables similarly to the previous tutorial, Using depth and effects on primitives.

struct BasicVertex
{
    float3 pos;  // position
    float3 norm; // surface normal vector
    float2 tex;  // texture coordinate
};

struct ConstantBuffer
{
    float4x4 model;
    float4x4 view;
    float4x4 projection;
};

// This class defines the application as a whole.
ref class Direct3DTutorialFrameworkView : public IFrameworkView
{
private:
    Platform::Agile<CoreWindow> m_window;
    ComPtr<IDXGISwapChain1> m_swapChain;
    ComPtr<ID3D11Device1> m_d3dDevice;
    ComPtr<ID3D11DeviceContext1> m_d3dDeviceContext;
    ComPtr<ID3D11RenderTargetView> m_renderTargetView;
    ComPtr<ID3D11DepthStencilView> m_depthStencilView;
    ComPtr<ID3D11Buffer> m_constantBuffer;
    ConstantBuffer m_constantBufferData;

    // ... (the remaining members and methods are unchanged from the
    // previous tutorial and are omitted here.)
};

2. Creating vertex and pixel shaders with surface and texture elements

Here we create more complex vertex and pixel shaders than in the previous tutorial, Using depth and effects on primitives. This app's vertex shader transforms each vertex position into projection space and passes the vertex texture coordinate through to the pixel shader.
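The introduction mentions a simple dot-product lighting model, in which the shader scales the sampled texture color by the angle between the surface normal and the light direction. The shader source itself isn't reproduced on this page, so the following is a CPU-side sketch of the same math; the Vec3 type and function names are invented for this illustration.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Lambertian (dot-product) diffuse factor: 1.0 when the surface faces the
// light head-on, falling to 0.0 at grazing angles or when facing away.
float diffuseFactor(const Vec3& surfaceNormal, const Vec3& toLight)
{
    return std::max(0.0f, dot(normalize(surfaceNormal), normalize(toLight)));
}
```

In the shader, this factor multiplies the color returned by sampling the texture, which is what makes each cube face lighter or darker relative to the light source.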

The app's array of D3D11_INPUT_ELEMENT_DESC structures that describes the layout of the vertex shader code has three layout elements: one element defines the vertex position, another defines the surface normal vector (the direction perpendicular to the surface), and the third defines the texture coordinates.
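The AlignedByteOffset values in that array (0, 12, and 24) are simply the byte offsets of the BasicVertex fields. A standalone sketch that checks them at compile time, assuming the float3/float2 helpers (which actually live in BasicMath.h) are tightly packed 32-bit floats:

```cpp
#include <cstddef>

// Stand-ins for the BasicMath.h helper types, assumed tightly packed.
struct float3 { float x, y, z; };
struct float2 { float x, y; };

struct BasicVertex
{
    float3 pos;  // bytes  0..11 -> "POSITION" element, offset 0
    float3 norm; // bytes 12..23 -> "NORMAL" element, offset 12
    float2 tex;  // bytes 24..31 -> "TEXCOORD" element, offset 24
};

static_assert(offsetof(BasicVertex, pos)  ==  0, "POSITION offset");
static_assert(offsetof(BasicVertex, norm) == 12, "NORMAL offset");
static_assert(offsetof(BasicVertex, tex)  == 24, "TEXCOORD offset");

// sizeof(BasicVertex) is also the vertex buffer stride used when drawing.
static_assert(sizeof(BasicVertex) == 32, "vertex stride");
```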

We create vertex, index, and constant buffers that define an orbiting textured cube.

To define an orbiting textured cube

  1. First, we define the cube. Each vertex is assigned a position, a surface normal vector, and texture coordinates. We use multiple vertices for each corner to allow different normal vectors and texture coordinates to be defined for each face.
  2. Next, we describe the vertex and index buffers (D3D11_BUFFER_DESC and D3D11_SUBRESOURCE_DATA) using the cube definition. We call ID3D11Device::CreateBuffer once for each buffer.
  3. Next, we create a constant buffer (D3D11_BUFFER_DESC) for passing model, view, and projection matrices to the vertex shader. We can later use the constant buffer to rotate the cube and apply a perspective projection to it. We call ID3D11Device::CreateBuffer to create the constant buffer.
  4. Finally, we specify the view transform that corresponds to a camera position of X = 0, Y = 1, Z = 2. For a generalized camera class, see Separating DirectX concepts into components for reuse.
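The cube defined in step 1 ends up with 24 vertices (4 per face, because the corners are duplicated per face) and 36 indices (2 triangles of 3 indices on each of 6 faces). As a quick sanity check on the ByteWidth values the buffer descriptions below compute, assuming the 32-byte BasicVertex layout above and 16-bit indices:

```cpp
#include <cstddef>

// 6 faces * 4 vertices: each cube corner appears once per adjacent face so
// that every face can carry its own normal and texture coordinates.
constexpr std::size_t cubeVertexCount = 6 * 4;  // 24

// 6 faces * 2 triangles * 3 indices per triangle.
constexpr std::size_t cubeIndexCount = 6 * 2 * 3;  // 36

// ByteWidth for the vertex buffer: stride (32 bytes) * vertex count.
constexpr std::size_t vertexBufferByteWidth = 32 * cubeVertexCount;

// ByteWidth for the index buffer: sizeof(unsigned short) * index count.
constexpr std::size_t indexBufferByteWidth =
    sizeof(unsigned short) * cubeIndexCount;
```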

        // Load the raw vertex shader bytecode from disk and create a vertex shader with it.
        auto vertexShaderBytecode = reader->ReadData("SimpleVertexShader.cso");
        ComPtr<ID3D11VertexShader> vertexShader;
        DX::ThrowIfFailed(m_d3dDevice->CreateVertexShader(
            vertexShaderBytecode->Data, vertexShaderBytecode->Length, nullptr, &vertexShader));

        // Create an input layout that matches the layout defined in the vertex shader code.
        // These correspond to the elements of the BasicVertex struct defined above.
        const D3D11_INPUT_ELEMENT_DESC basicVertexLayoutDesc[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        };

        ComPtr<ID3D11InputLayout> inputLayout;
        DX::ThrowIfFailed(m_d3dDevice->CreateInputLayout(
            basicVertexLayoutDesc, ARRAYSIZE(basicVertexLayoutDesc),
            vertexShaderBytecode->Data, vertexShaderBytecode->Length, &inputLayout));

        // Load the raw pixel shader bytecode from disk and create a pixel shader with it.
        auto pixelShaderBytecode = reader->ReadData("SimplePixelShader.cso");
        ComPtr<ID3D11PixelShader> pixelShader;
        DX::ThrowIfFailed(m_d3dDevice->CreatePixelShader(
            pixelShaderBytecode->Data, pixelShaderBytecode->Length, nullptr, &pixelShader));

        // Create vertex and index buffers that define a simple unit cube.

        // In the array below, which will be used to initialize the cube vertex buffers,
        // multiple vertices are used for each corner to allow different normal vectors and
        // texture coordinates to be defined for each face.
        BasicVertex cubeVertices[] =
        {
            { float3(-0.5f, 0.5f, -0.5f), float3(0.0f, 1.0f, 0.0f), float2(0.0f, 0.0f) }, // +Y (top face)
            { float3( 0.5f, 0.5f, -0.5f), float3(0.0f, 1.0f, 0.0f), float2(1.0f, 0.0f) },
            { float3( 0.5f, 0.5f,  0.5f), float3(0.0f, 1.0f, 0.0f), float2(1.0f, 1.0f) },
            { float3(-0.5f, 0.5f,  0.5f), float3(0.0f, 1.0f, 0.0f), float2(0.0f, 1.0f) },

            { float3(-0.5f, -0.5f,  0.5f), float3(0.0f, -1.0f, 0.0f), float2(0.0f, 0.0f) }, // -Y (bottom face)
            { float3( 0.5f, -0.5f,  0.5f), float3(0.0f, -1.0f, 0.0f), float2(1.0f, 0.0f) },
            { float3( 0.5f, -0.5f, -0.5f), float3(0.0f, -1.0f, 0.0f), float2(1.0f, 1.0f) },
            { float3(-0.5f, -0.5f, -0.5f), float3(0.0f, -1.0f, 0.0f), float2(0.0f, 1.0f) },

            { float3(0.5f,  0.5f,  0.5f), float3(1.0f, 0.0f, 0.0f), float2(0.0f, 0.0f) }, // +X (right face)
            { float3(0.5f,  0.5f, -0.5f), float3(1.0f, 0.0f, 0.0f), float2(1.0f, 0.0f) },
            { float3(0.5f, -0.5f, -0.5f), float3(1.0f, 0.0f, 0.0f), float2(1.0f, 1.0f) },
            { float3(0.5f, -0.5f,  0.5f), float3(1.0f, 0.0f, 0.0f), float2(0.0f, 1.0f) },

            { float3(-0.5f,  0.5f, -0.5f), float3(-1.0f, 0.0f, 0.0f), float2(0.0f, 0.0f) }, // -X (left face)
            { float3(-0.5f,  0.5f,  0.5f), float3(-1.0f, 0.0f, 0.0f), float2(1.0f, 0.0f) },
            { float3(-0.5f, -0.5f,  0.5f), float3(-1.0f, 0.0f, 0.0f), float2(1.0f, 1.0f) },
            { float3(-0.5f, -0.5f, -0.5f), float3(-1.0f, 0.0f, 0.0f), float2(0.0f, 1.0f) },

            { float3(-0.5f,  0.5f, 0.5f), float3(0.0f, 0.0f, 1.0f), float2(0.0f, 0.0f) }, // +Z (front face)
            { float3( 0.5f,  0.5f, 0.5f), float3(0.0f, 0.0f, 1.0f), float2(1.0f, 0.0f) },
            { float3( 0.5f, -0.5f, 0.5f), float3(0.0f, 0.0f, 1.0f), float2(1.0f, 1.0f) },
            { float3(-0.5f, -0.5f, 0.5f), float3(0.0f, 0.0f, 1.0f), float2(0.0f, 1.0f) },

            { float3( 0.5f,  0.5f, -0.5f), float3(0.0f, 0.0f, -1.0f), float2(0.0f, 0.0f) }, // -Z (back face)
            { float3(-0.5f,  0.5f, -0.5f), float3(0.0f, 0.0f, -1.0f), float2(1.0f, 0.0f) },
            { float3(-0.5f, -0.5f, -0.5f), float3(0.0f, 0.0f, -1.0f), float2(1.0f, 1.0f) },
            { float3( 0.5f, -0.5f, -0.5f), float3(0.0f, 0.0f, -1.0f), float2(0.0f, 1.0f) },
        };

        unsigned short cubeIndices[] =
        {
            0, 1, 2,
            0, 2, 3,

            4, 5, 6,
            4, 6, 7,

            8, 9, 10,
            8, 10, 11,

            12, 13, 14,
            12, 14, 15,

            16, 17, 18,
            16, 18, 19,

            20, 21, 22,
            20, 22, 23
        };

        D3D11_BUFFER_DESC vertexBufferDesc = {0};
        vertexBufferDesc.ByteWidth = sizeof(BasicVertex) * ARRAYSIZE(cubeVertices);
        vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
        vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        vertexBufferDesc.CPUAccessFlags = 0;
        vertexBufferDesc.MiscFlags = 0;
        vertexBufferDesc.StructureByteStride = 0;

        D3D11_SUBRESOURCE_DATA vertexBufferData;
        vertexBufferData.pSysMem = cubeVertices;
        vertexBufferData.SysMemPitch = 0;
        vertexBufferData.SysMemSlicePitch = 0;

        ComPtr<ID3D11Buffer> vertexBuffer;
        DX::ThrowIfFailed(m_d3dDevice->CreateBuffer(
            &vertexBufferDesc, &vertexBufferData, &vertexBuffer));

        D3D11_BUFFER_DESC indexBufferDesc;
        indexBufferDesc.ByteWidth = sizeof(unsigned short) * ARRAYSIZE(cubeIndices);
        indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
        indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
        indexBufferDesc.CPUAccessFlags = 0;
        indexBufferDesc.MiscFlags = 0;
        indexBufferDesc.StructureByteStride = 0;

        D3D11_SUBRESOURCE_DATA indexBufferData;
        indexBufferData.pSysMem = cubeIndices;
        indexBufferData.SysMemPitch = 0;
        indexBufferData.SysMemSlicePitch = 0;

        ComPtr<ID3D11Buffer> indexBuffer;
        DX::ThrowIfFailed(m_d3dDevice->CreateBuffer(
            &indexBufferDesc, &indexBufferData, &indexBuffer));

        // Create a constant buffer for passing model, view, and projection matrices
        // to the vertex shader.  This will allow us to rotate the cube and apply
        // a perspective projection to it.

        D3D11_BUFFER_DESC constantBufferDesc = {0};
        constantBufferDesc.ByteWidth = sizeof(m_constantBufferData);
        constantBufferDesc.Usage = D3D11_USAGE_DEFAULT;
        constantBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
        constantBufferDesc.CPUAccessFlags = 0;
        constantBufferDesc.MiscFlags = 0;
        constantBufferDesc.StructureByteStride = 0;

        DX::ThrowIfFailed(m_d3dDevice->CreateBuffer(
            &constantBufferDesc, nullptr, &m_constantBuffer));

        // Specify the view transform corresponding to a camera position of
        // X = 0, Y = 1, Z = 2.  See Lesson 5 for a generalized camera class.

        m_constantBufferData.view = float4x4(
            -1.00000000f, 0.00000000f,  0.00000000f,  0.00000000f,
             0.00000000f, 0.89442718f,  0.44721359f,  0.00000000f,
             0.00000000f, 0.44721359f, -0.89442718f, -2.23606800f,
             0.00000000f, 0.00000000f,  0.00000000f,  1.00000000f
            );

3. Creating textures and samplers

Here we apply texture data to a cube rather than applying colors as in the previous tutorial, Using depth and effects on primitives.

We use raw texture data to create textures.

To create textures and samplers

  1. First, we read raw texture data from the texturedata.bin file on disk.
  2. Next, we construct a D3D11_SUBRESOURCE_DATA structure that references that raw texture data.
  3. Then, we populate a D3D11_TEXTURE2D_DESC structure to describe the texture. We then pass the D3D11_SUBRESOURCE_DATA and D3D11_TEXTURE2D_DESC structures in a call to ID3D11Device::CreateTexture2D to create the texture.
  4. Next, we create a shader-resource view of the texture so shaders can use the texture. To create the shader-resource view, we populate a D3D11_SHADER_RESOURCE_VIEW_DESC to describe the shader-resource view and pass the shader-resource view description and the texture to ID3D11Device::CreateShaderResourceView. In general, you match the view description with the texture description.
  5. Next, we create sampler state for the texture. This sampler state uses the relevant texture data to define how the color for a particular texture coordinate is determined. We populate a D3D11_SAMPLER_DESC structure to describe the sampler state. We then pass the D3D11_SAMPLER_DESC structure in a call to ID3D11Device::CreateSamplerState to create the sampler state.
  6. Finally, we declare a degree variable that we will use to animate the cube by rotating it every frame.
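Step 5's D3D11_TEXTURE_ADDRESS_WRAP mode resolves coordinates outside the 0..1 range by tiling: only the fractional part of the coordinate selects the texel. The helper below is an illustration of that behavior written for this article, not Direct3D code:

```cpp
#include <cmath>

// WRAP addressing: u = 1.25 samples the same texel column as u = 0.25, and
// u = -0.25 samples the same column as u = 0.75, so the texture repeats
// endlessly across the surface.
float wrapTextureCoordinate(float u)
{
    return u - std::floor(u);
}
```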

        // Load the raw texture data from disk and construct a subresource description that references it.
        auto textureData = reader->ReadData("texturedata.bin");
        D3D11_SUBRESOURCE_DATA textureSubresourceData = {0};
        textureSubresourceData.pSysMem = textureData->Data;

        // Specify the size of a row in bytes, known a priori about the texture data.
        textureSubresourceData.SysMemPitch = 1024;

        // As this is not a texture array or 3D texture, this parameter is ignored.
        textureSubresourceData.SysMemSlicePitch = 0;

        // Create a texture description from information known a priori about the data.
        // Generalized texture loading code can be found in the Resource Loading sample.
        D3D11_TEXTURE2D_DESC textureDesc = {0};
        textureDesc.Width = 256;
        textureDesc.Height = 256;
        textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        textureDesc.Usage = D3D11_USAGE_DEFAULT;
        textureDesc.CPUAccessFlags = 0;
        textureDesc.MiscFlags = 0;

        // Most textures contain more than one MIP level.  For simplicity, this sample uses only one.
        textureDesc.MipLevels = 1;

        // As this will not be a texture array, this parameter is ignored.
        textureDesc.ArraySize = 1;

        // Don't use multi-sampling.
        textureDesc.SampleDesc.Count = 1;
        textureDesc.SampleDesc.Quality = 0;

        // Allow the texture to be bound as a shader resource.
        textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

        ComPtr<ID3D11Texture2D> texture;
        DX::ThrowIfFailed(m_d3dDevice->CreateTexture2D(
            &textureDesc, &textureSubresourceData, &texture));

        // Once the texture is created, we must create a shader resource view of it
        // so that shaders may use it.  In general, the view description will match
        // the texture description.
        D3D11_SHADER_RESOURCE_VIEW_DESC textureViewDesc;
        ZeroMemory(&textureViewDesc, sizeof(textureViewDesc));
        textureViewDesc.Format = textureDesc.Format;
        textureViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        textureViewDesc.Texture2D.MipLevels = textureDesc.MipLevels;
        textureViewDesc.Texture2D.MostDetailedMip = 0;

        ComPtr<ID3D11ShaderResourceView> textureView;
        DX::ThrowIfFailed(m_d3dDevice->CreateShaderResourceView(
            texture.Get(), &textureViewDesc, &textureView));

        // Once the texture view is created, create a sampler.  This defines how the color
        // for a particular texture coordinate is determined using the relevant texture data.
        D3D11_SAMPLER_DESC samplerDesc;
        ZeroMemory(&samplerDesc, sizeof(samplerDesc));

        samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;

        // The sampler does not use anisotropic filtering, so this parameter is ignored.
        samplerDesc.MaxAnisotropy = 0;

        // Specify how texture coordinates outside of the range 0..1 are resolved.
        samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
        samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
        samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;

        // Use no special MIP clamping or bias.
        samplerDesc.MipLODBias = 0.0f;
        samplerDesc.MinLOD = 0;
        samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

        // Don't use a comparison function.
        samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;

        // Border address mode is not used, so this parameter is ignored.
        samplerDesc.BorderColor[0] = 0.0f;
        samplerDesc.BorderColor[1] = 0.0f;
        samplerDesc.BorderColor[2] = 0.0f;
        samplerDesc.BorderColor[3] = 0.0f;

        ComPtr<ID3D11SamplerState> sampler;
        DX::ThrowIfFailed(m_d3dDevice->CreateSamplerState(&samplerDesc, &sampler));

        // This value will be used to animate the cube by rotating it every frame.
        float degree = 0.0f;

4. Rotating and drawing the textured cube and presenting the rendered image

As in the previous tutorials, we enter an endless loop to continually render and display the scene. We call the rotationY inline function (BasicMath.h) with a rotation amount to set values that will rotate the cube’s model matrix around the Y axis. We then call ID3D11DeviceContext::UpdateSubresource to update the constant buffer and rotate the cube model. Next, we call ID3D11DeviceContext::OMSetRenderTargets to specify the render target and the depth-stencil view. We call ID3D11DeviceContext::ClearRenderTargetView to clear the render target to a solid blue color and call ID3D11DeviceContext::ClearDepthStencilView to clear the depth buffer.
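BasicMath.h's rotationY helper isn't reproduced in this lesson, but rotation about the Y axis has a standard closed form. Here is an independent sketch with its own small vector type and sign convention (a positive angle turns +X toward -Z); it illustrates the math rather than reproducing the BasicMath.h implementation:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a point about the Y axis by an angle given in degrees.
// Convention used here: a positive angle turns +X toward -Z; Y is unchanged.
Vec3 rotateAboutY(const Vec3& p, float degrees)
{
    const float radians = degrees * 3.14159265358979323846f / 180.0f;
    const float c = std::cos(radians);
    const float s = std::sin(radians);
    return { c * p.x + s * p.z, p.y, -s * p.x + c * p.z };
}
```

Applying such a rotation each frame with a steadily increasing angle, as the loop does with the degree variable, is what spins the cube.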

In the endless loop, we also draw the textured cube on the blue surface.

To draw the textured cube

  1. First, we call ID3D11DeviceContext::IASetInputLayout to describe how vertex buffer data is streamed into the input-assembler stage.
  2. Next, we call ID3D11DeviceContext::IASetVertexBuffers and ID3D11DeviceContext::IASetIndexBuffer to bind the vertex and index buffers to the input-assembler stage.
  3. Next, we call ID3D11DeviceContext::IASetPrimitiveTopology with the D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST value to specify that the input-assembler stage should interpret the vertex data as a list of independent triangles.
  4. Next, we call ID3D11DeviceContext::VSSetShader to initialize the vertex shader stage with the vertex shader code and ID3D11DeviceContext::PSSetShader to initialize the pixel shader stage with the pixel shader code.
  5. Next, we call ID3D11DeviceContext::VSSetConstantBuffers to set the constant buffer that is used by the vertex shader pipeline stage.
  6. Next, we call PSSetShaderResources to bind the shader-resource view of the texture to the pixel shader pipeline stage.
  7. Next, we call PSSetSamplers to set the sampler state to the pixel shader pipeline stage.
  8. Finally, we call ID3D11DeviceContext::DrawIndexed to draw the cube and submit it to the rendering pipeline.

As in the previous tutorials, we call IDXGISwapChain::Present to present the rendered image to the window.

            // Update the constant buffer to rotate the cube model.
            m_constantBufferData.model = rotationY(-degree);
            degree += 1.0f;

            m_d3dDeviceContext->UpdateSubresource(
                m_constantBuffer.Get(), 0, nullptr, &m_constantBufferData, 0, 0);

            // Specify the render target and depth stencil we created as the output target.
            m_d3dDeviceContext->OMSetRenderTargets(
                1, m_renderTargetView.GetAddressOf(), m_depthStencilView.Get());

            // Clear the render target to a solid color, and reset the depth stencil.
            const float clearColor[4] = { 0.071f, 0.04f, 0.561f, 1.0f };
            m_d3dDeviceContext->ClearRenderTargetView(m_renderTargetView.Get(), clearColor);
            m_d3dDeviceContext->ClearDepthStencilView(m_depthStencilView.Get(), D3D11_CLEAR_DEPTH, 1.0f, 0);

            m_d3dDeviceContext->IASetInputLayout(inputLayout.Get());

            // Set the vertex and index buffers, and specify the way they define geometry.
            UINT stride = sizeof(BasicVertex);
            UINT offset = 0;
            m_d3dDeviceContext->IASetVertexBuffers(0, 1, vertexBuffer.GetAddressOf(), &stride, &offset);
            m_d3dDeviceContext->IASetIndexBuffer(indexBuffer.Get(), DXGI_FORMAT_R16_UINT, 0);
            m_d3dDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

            // Set the vertex and pixel shader stage state.
            m_d3dDeviceContext->VSSetShader(vertexShader.Get(), nullptr, 0);
            m_d3dDeviceContext->VSSetConstantBuffers(0, 1, m_constantBuffer.GetAddressOf());
            m_d3dDeviceContext->PSSetShader(pixelShader.Get(), nullptr, 0);
            m_d3dDeviceContext->PSSetShaderResources(0, 1, textureView.GetAddressOf());
            m_d3dDeviceContext->PSSetSamplers(0, 1, sampler.GetAddressOf());

            // Draw the cube.
            m_d3dDeviceContext->DrawIndexed(ARRAYSIZE(cubeIndices), 0, 0);

            // Present the rendered image to the window.  Because the maximum frame latency is set to 1,
            // the render loop will generally be throttled to the screen refresh rate, typically around
            // 60 Hz, by sleeping the application on Present until the screen is refreshed.
            DX::ThrowIfFailed(m_swapChain->Present(1, 0));

Summary and next steps

We loaded raw texture data and applied that data to a 3D primitive.

Next, we take the concepts of 3D graphics from these tutorials and show how to separate them into reusable code components.

Separating DirectX concepts into components for reuse



© 2016 Microsoft