
A First Look at 3-D Support in Avalon


Ian Griffiths

June 2004

Applies to:
   Longhorn Community Technical Preview, WinHEC 2004 Build (Build 4074)

Summary: Demonstrates how Avalon's simple 3-D support enables you to create three-dimensional graphics. Discusses the Avalon ViewPort3D element and the use of point of view, perspective, camera, and light in XAML markup. (14 printed pages)

Download the Avalon 3-D samples that are discussed in this article.


3-D Rendering in a 2-D World
Models and Meshes
Lights, Camera, Obvious Cliché

The new graphics system in Longhorn, code-named "Avalon," provides excellent graphics facilities enabling high-quality, resolution-independent user interfaces to be constructed without requiring heroic efforts from software developers. As well as providing a rich repertoire of two-dimensional drawing primitives, it also supports three-dimensional drawing.
Note   The first version of Longhorn to support this is build 4074, which was distributed at WinHEC, and is also available to MSDN Universal subscribers in the Subscriber Downloads area. The features described in this article are not in the Longhorn preview released at the PDC 2003.

Because the 3-D support is part of Avalon's drawing system, you can mix 3-D features into your design simply by including them in the markup along with the rest of your user interface. Unlike the 3-D APIs on current versions of Windows such as DirectX or OpenGL, there is no need to use a completely different style of programming for 3-D.

3-D Rendering in a 2-D World

Computer screens are two-dimensional in nature, so representing three-dimensional scenes presents some hurdles. While Avalon takes care of the bulk of the work, it is necessary to understand a few fundamentals of 3-D rendering to use these facilities successfully.

In order to display anything on the screen, whether in 2-D or 3-D, Avalon needs to know what kinds of things are to be displayed, such as rectangles or text, and it requires details such as their size, position, and color. For 2-D, this is typically sufficient, but for 3-D, it is not quite enough, for one simple reason: the screen is two-dimensional. In order to show some 3-D objects, a 2-D representation of them must be constructed on the screen, and in order to do that, it is necessary to choose a point of view—an object's appearance on screen will change according to where you are looking at it from. For example, Figure 1 shows two different views of the same object.


Figure 1. A tetrahedron as seen from two different angles

So it should come as no surprise that when working with 3-D elements in Avalon, not only must you supply the set of items to be displayed, you must also specify the point of view from which to display them. The items are referred to as the "model," and the point of view is called the "camera." These are both specified as properties of an Avalon element called ViewPort3D.


ViewPort3D is the Avalon element used to add three-dimensional content to your applications. Avalon uses a fundamentally two-dimensional approach to layout and rendering, so it is necessary for 3-D content to be encapsulated in an element that can participate in the two-dimensional element tree. ViewPort3D fulfils that role—it sits at the boundary between the 2-D and 3-D worlds.

As far as the Avalon layout engine is concerned, a ViewPort3D is no different from any other visual element. Like all elements, it is treated as a 2-D entity with a 2-D position, and a notional width and height. It can be rotated and scaled like any other content. It can participate in hit testing. (Although in the WinHEC build, it does not raise the usual Avalon mouse input events. This will be addressed in a future build.)

The ViewPort3D essentially acts as a window onto a self-contained, isolated 3-D world. From the outside, it behaves like any other two-dimensional Avalon element. It is on the inside that we find the 3-D scene and the point of view settings.


The point of view used by a ViewPort3D is referred to as a "camera" because Avalon attempts to draw what you would see if a real camera were to take a picture of the objects in the model.

There are several settings you must decide on for the camera. As with a real camera, you must choose the position and orientation. We also get to make some choices that have no direct equivalent on a real camera. Consider this example:

<ViewPort3D ID="viewport" ClipToBounds="true" Width="100" Height="100">
    <PerspectiveCamera NearPlaneDistance="1" FarPlaneDistance="100" 
       Position="0,0,5" LookAtPoint="0,0,0" Up="0, 1, 0"
       FieldOfView="45" />
    <!-- ... model content goes here ... -->
</ViewPort3D>

This shows a camera in a ViewPort3D. Various camera types are available depending on the kind of projection required. ("Projection" is the name for the process of producing a 2-D view of a 3-D scene.) Avalon supports two different styles of projection: orthographic and perspective. These are provided by the PerspectiveCamera and OrthographicCamera respectively. There is also a MatrixCamera, which allows the projection to be specified directly as a 4x4 matrix. (This can be useful for applications that want to perform transformations on the camera position and angle.)

A perspective projection produces more natural-looking images than an orthographic projection. It does this by making objects in the distance appear smaller than nearby objects. Figure 2 shows a simple scene viewed through a PerspectiveCamera, and it makes the layout of the scene clear—we can see a blue pyramid inside a red box. The depth of the box is clearly apparent, because the perspective projection has shown the back of the box as being smaller than its open front, even though both front and back are really the same size. It has also tapered the floor and side walls towards the back of the room. This has significantly distorted the shapes—Figure 2 shows the square side walls and rectangular floor have been drawn as trapezoids, and the relative sizes of the features have been lost. But these same distortions occur naturally with a real camera (and also when looking at a real 3-D scene with our eyes), so the image is easy to comprehend.


Figure 2. Perspective projection
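The mechanics of a perspective projection can be sketched in a few lines of code. This is an illustrative Python sketch, not Avalon's actual rendering pipeline: each point's screen position is scaled by the reciprocal of its depth, which is what makes the back of the box draw smaller than its front.

```python
# Illustrative sketch of a perspective projection (not the Avalon API):
# screen position is scaled by 1/depth, so features further from the
# camera are drawn smaller.
def project_perspective(x, y, z, d=1.0):
    """Project a camera-space point onto an image plane at distance d."""
    return (x * d / z, y * d / z)

# Two edges of equal width, one near the camera and one far from it:
near_edge = project_perspective(4.0, 0.0, 2.0)
far_edge = project_perspective(4.0, 0.0, 10.0)
print(near_edge, far_edge)  # (2.0, 0.0) (0.4, 0.0)
```

The distant edge lands much closer to the center of the image, exactly the tapering effect visible in the side walls of Figure 2.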

Figure 3 shows the same scene as Figure 2, but rendered using an orthographic projection, as offered by the OrthographicCamera. This is a much simpler projection than the perspective projection. It just removes one dimension, flattening everything into a plane, while leaving the other dimensions untouched.


Figure 3. Orthographic projection

The unique benefit of an orthographic view is that it preserves relative sizes—objects do not get smaller the further away they are. Building plans typically use this projection because of this lack of distortion. It can also be useful for certain kinds of data visualization, such as bar charts.

By comparison, the perspective projection makes it very hard to judge relative sizes. This is exacerbated by the fact that the screen is two-dimensional—when looking at real three-dimensional objects, we can use our binocular vision to determine the distance of the things we are looking at. This enables us to distinguish between objects that are small, and objects that are far away. Conventional computer screens cannot provide stereoscopic images, so there is more scope for confusion. Careful use of lighting can mitigate this though—we are good at picking up lighting-based depth cues in the absence of stereoscopic images.

The main problem with orthographic projections is that they don't look very realistic. We are used to seeing the distortions introduced by a perspective projection because that's how real 3-D scenes appear to us. Also, while orthographic projections preserve sizes in two dimensions, they lose the third completely.

This is why Figure 3 looks so flat. The orthographic projection has made the rear wall of the box fill the image because it is exactly the same size as the box's open front. The side walls and floor do not even appear—while the perspective projection angled the walls in slightly, just as we would see them in a real 3-D environment, here they remain perpendicular to the screen, and are consequently invisible.
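The flattening effect is easy to see in code. In this illustrative sketch (plain Python, not the Avalon API), an orthographic projection simply discards the depth coordinate:

```python
# Illustrative sketch: an orthographic projection drops the depth
# coordinate and leaves x and y untouched.
def project_orthographic(x, y, z):
    return (x, y)

# The front and back edges of the box are the same size, so they project
# to exactly the same screen positions, which is why the rear wall fills
# the image in Figure 3.
front = project_orthographic(4.0, 0.0, 2.0)
back = project_orthographic(4.0, 0.0, 10.0)
print(front == back)  # True: relative size preserved, depth lost
```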

In most applications, the unrealistic flat images produced by an orthographic projection are not justified by the preservation of object sizes. So in the example above, we have chosen a perspective camera.

As well as choosing the kind of projection, we also need to provide information on the camera position and angle. The Position indicates the camera location, while the LookAtPoint determines where it is pointing. Figure 4 shows the same scene as Figure 2, with the Position moved to the left, and LookAtPoint left in the same place.


Figure 4. Moving the camera position

Figure 5 shows the same scene again. This time, the Position is the same as in Figure 2, but the LookAtPoint has been moved to the right. This has the same effect as panning a normal camera, and has caused half of the scene to move out of the shot. (This illustrates why it is vitally important to get the camera position and angles correct—if you point the camera in the wrong direction you might not see anything at all!)


Figure 5. Moving the camera LookAtPoint

A real camera could also be tilted to choose between portrait, landscape, or some more jaunty angle. This is the purpose of the Up attribute—it is a 3-D vector indicating the direction that should appear to be straight up when the image is rendered on screen. In Figure 2, we have positioned the camera 5 units away from, and looking directly at, the center of the model. The Up vector is 0,1,0, meaning that we have chosen the positive "Y" direction as being upwards. Figure 6 shows the same scene rendered with an Up vector of 1,1,0, which has the effect of rotating the image by 45 degrees.


Figure 6. Changing the camera up vector
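Because the camera in this scene looks straight down the z axis, the Up vector lies entirely in the image plane, so the roll it produces is simply the angle between that vector and the screen's y axis. A small illustrative calculation (Python, using the Up values from the examples above):

```python
import math

def roll_degrees(up_x, up_y):
    # Angle between the requested up direction and the screen's y axis.
    # Valid here because the camera looks straight along the z axis, so
    # the Up vector lies entirely in the image plane.
    return math.degrees(math.atan2(up_x, up_y))

print(roll_degrees(0, 1))  # Up="0,1,0" renders upright (Figure 2)
print(roll_degrees(1, 1))  # Up="1,1,0" rotates the image 45 degrees (Figure 6)
```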

With a real camera, we would still not have enough information to know what the shot would look like—we would also need to know the focal length. (If your camera has a zoom lens, zooming in or out adjusts the focal length. It determines the amount of magnification provided by the lens.) The PerspectiveCamera does not have a focal length attribute, but it allows us to achieve the same effect by specifying the FieldOfView angle. If you specify a wide angle, this has the same effect as zooming out (or selecting a wide-angle lens, on a camera with interchangeable lenses). To zoom in, select a narrow field of view.

(You might think that since we are free to position the camera wherever we like in our virtual world, the ability to zoom is unnecessary—surely if we want to zoom in, we can just position the camera closer to the scene. In fact, narrowing the field of view will not have quite the same effect as moving the camera, in just the same way that with a real camera, zooming in produces a slightly different result from physically moving the camera closer. Changing the field of view or using a zoom lens simply enlarges or shrinks the image. Moving the camera will not only change the apparent size of objects, it will also change the extent to which perspective distorts the image. A popular cinematographic technique exploits this to produce a rather disturbing effect: you can move the camera and adjust the zoom at the same time to keep the subject the same size. This will result in the subject remaining still, but their surroundings will appear to recede into the distance as the perspective changes.)
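The relationship between field of view and magnification can be sketched numerically. This is standard projection geometry, expressed as an illustrative Python snippet rather than anything taken from the Avalon API: the visible half-width at unit distance from the camera is tan(FieldOfView/2), so the magnification is its reciprocal.

```python
import math

def magnification(fov_degrees):
    # Scale factor of the projected image for a given field of view:
    # the reciprocal of the visible half-width at unit distance.
    return 1.0 / math.tan(math.radians(fov_degrees) / 2.0)

wide, narrow = magnification(45.0), magnification(22.5)
print(narrow / wide)  # roughly 2: halving the angle about doubles the zoom
```

Note that the ratio is not exactly 2 because the tangent is not linear in the angle, which is part of why zooming and moving the camera are not interchangeable.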

The camera also requires us to specify the NearPlaneDistance and FarPlaneDistance attributes. These don't have any equivalent on a real camera. They are used to prevent items that are too close to or too far from the camera from being drawn. Because the camera can be located anywhere, it is possible to get some rather alarming effects if it happens to be very close to (or even inside) one of the objects in the model—such items might fill the whole picture. The NearPlaneDistance allows you to specify a minimum distance from the camera—objects closer than this will not be drawn. The FarPlaneDistance specifies the furthest distance away from the camera that an object can be before it will be omitted. This causes sufficiently distant objects to vanish, rather than remaining in view as meaningless tiny dots.
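In effect, the two planes define a simple distance filter. A sketch of the idea (illustrative Python, using the NearPlaneDistance and FarPlaneDistance values from the earlier markup example):

```python
# Sketch of near/far-plane rejection: only objects whose distance from
# the camera falls between the two planes are drawn.
def visible(distance, near=1.0, far=100.0):
    return near <= distance <= far

object_distances = [0.5, 3.0, 50.0, 250.0]
drawn = [d for d in object_distances if visible(d)]
print(drawn)  # [3.0, 50.0]: too-close and too-distant objects are omitted
```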

Models and Meshes

For a camera to be of any use, it must have something to look at. As well as specifying a Camera, a ViewPort3D must contain a Model, which is a collection of 3-D objects. Each object must derive from the Model3D class. Currently, there are only a few classes derived from Model3D. Aside from light sources (described later) and the Model3DCollection class (a collection of Model3D-derived objects), there is only one 3-D primitive available today—MeshPrimitive3D.

(There are currently no higher-level primitives such as cubes, spheres, or spline patches. Everything is built up with MeshPrimitive3D elements.)

MeshPrimitive3D lets you create a mesh, a very flexible primitive that is the foundation of most modern 3-D rendering systems. Meshes let you define the shape of three-dimensional objects using lots of little triangles. While this isn't always the most convenient way to create shapes—nature tends to prefer curves—there's a very good reason for using triangles: graphics cards are really good at rendering triangles. It's what they do best—they can draw millions every second. 3-D design programs are really good at building up all sorts of interesting shapes from triangles, so in practice, this "any shape you want so long as it's triangular" philosophy is not a problem.

As an example, we're going to build one of the simplest of three-dimensional shapes: a tetrahedron. This shape has four sides, which are all, conveniently enough, triangular. Figure 1 shows a tetrahedron viewed from two different angles.

Since MeshPrimitive3D lets us define shapes with triangles, we will need to tell it where the corners of all of those triangles are. Since a tetrahedron has four sides, and each triangle obviously has three corners, you might expect to have to specify twelve corners. However, with a tetrahedron, each corner is shared by three faces, so there are only four distinct corners.

Almost all meshes share corners in this way, so we always define a mesh by passing in a list of distinct points, and a second list indicating which triangles use which corners. The corners are passed in the Positions attribute, as shown in the following example. The triangles are passed as a list of offsets into the Positions list with the TriangleIndices attribute:

        <MeshPrimitive3D>
            <Mesh3D TriangleIndices="0 1 2  1 2 3  2 3 0  0 1 3"
                    Normals="-1,-1,0 1,-1,0 1,0,0 0,0,1"
                    Positions="-2,-2,-2 2,-2,-2 0,2,-2 0,0,3"/>
            <BrushMaterial Brush="Blue" />
        </MeshPrimitive3D>

The TriangleIndices attribute contains a list of numbers grouped in threes. (Extra spaces have been inserted in this example to emphasize this. These spaces are not mandatory.) Here, the first set is "0 1 2", meaning that the first triangle in this mesh uses the first, second, and third coordinates in the Positions list. The numbers in the Positions attribute are also grouped in threes, but for a different reason: each group represents a three-dimensional coordinate.
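The sharing of corners can be verified with a quick calculation (illustrative Python, using the tetrahedron data from the markup above): expanding the index list reproduces the twelve corners the renderer actually uses, drawn from only four stored positions.

```python
# The tetrahedron mesh from the article: four distinct corner positions
# and twelve indices describing the four triangular faces.
positions = [(-2, -2, -2), (2, -2, -2), (0, 2, -2), (0, 0, 3)]
triangle_indices = [0, 1, 2,  1, 2, 3,  2, 3, 0,  0, 1, 3]

# Expanding the indices yields one coordinate per triangle corner.
corners = [positions[i] for i in triangle_indices]
print(len(corners), len(set(corners)))  # 12 4: each point shared by 3 faces
```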

The Normals attribute is used in lighting calculations, and represents the surface normal, that is, the direction in which the surface is facing. Surfaces that face directly towards a light source will appear brighter than those angled away from the light. You might wonder why we need to supply this information—surely Avalon can work it out for us by looking at the coordinates we supplied. However, in practice, we often specify normals that are slightly different from the ones implied by the positions of the triangles. This is because we often want to display objects that appear to have curved surfaces. Avalon makes surfaces appear smoother by playing a common 3-D trick with normals and lighting.

When working out how brightly lit a surface should be, Avalon does not simply choose a single lighting level for each triangle in the mesh. It calculates the lighting for each point in the mesh, and then blends between these lighting levels across the area of the triangles that join these points. Consider this example:

<Mesh3D TriangleIndices="0 1 2 1 2 3"
        Normals="0,0,1 0,0,1 0,0,1 0,0,1"
        Positions="-2,-2,0 2,-2,0 -2,2,0 2,2,0"/>

This defines a square surface. The numbers in the Normals attribute are grouped in threes, one group per vertex, and each group is a 3-D vector indicating the direction in which the surface faces at that corner. Note how all the normals in this example point in the same direction—that means this surface is flat. So if we illuminate it with two directional light sources, a red one from the top right and a blue one from the bottom left, the surface appears with a uniform mixture of the two lights' colors, as Figure 7 shows.


Figure 7. A flat surface illuminated by two colored light sources

By making the surface flat, we have effectively disabled the smoothing Avalon can perform. However, we can modify this example so that the normals are all pointing in different directions:

<Mesh3D TriangleIndices="0 1 2 1 2 3"
        Normals="-1,-1,1 1,-1,1 -1,1,1 1,1,1"
        Positions="-2,-2,0 2,-2,0 -2,2,0 2,2,0"/>

While the vertices are in the same places, in this example, the normals are all splayed outwards. This indicates that the surface should be shaded as though it were bulging out slightly like a pincushion. This curvature means that the color changes across the surface, as Figure 8 shows. The normal directions here cause the top left of the surface to be angled towards the red light, and away from the blue light. At the bottom left, it is the other way around. As you can see, Avalon has blended the colors across the surface to approximate the curvature indicated by the Normals property.


Figure 8. A curved surface illuminated by two colored light sources
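The trick can be sketched numerically. In this illustrative Python snippet (a single white light stands in for the two colored ones; this is textbook diffuse shading, not Avalon's exact math), brightness at each vertex is the dot product of its unit normal with the unit direction toward the light, clamped at zero; those per-vertex values are then blended across the triangle.

```python
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def diffuse(normal, light_dir):
    # Per-vertex brightness: dot product of the unit surface normal with
    # the unit direction toward the light, clamped at zero.
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

to_light = (1, 1, 1)  # a light off to the top right, in front of the surface

# Flat surface: identical normals give identical vertex brightness, so
# interpolation yields a uniform color (Figure 7).
flat = [diffuse((0, 0, 1), to_light) for _ in range(4)]

# Splayed normals: vertex brightness now varies, and blending between the
# vertices fakes a gently bulging surface (Figure 8).
splayed = [diffuse(n, to_light)
           for n in [(-1, -1, 1), (1, -1, 1), (-1, 1, 1), (1, 1, 1)]]
print(len(set(flat)), min(splayed), max(splayed))
```

With identical normals the four brightness values are equal; with splayed normals the corner angled toward the light is brightest and the one angled away goes dark, and interpolating between them produces the color gradient seen in Figure 8.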

You might be thinking that defining meshes is a lot of work. It is, which is why they are not usually created by hand—meshes are normally created using a 3-D design tool of some kind. In time, the same will be true for Avalon—most high quality 3-D design will be done using tools designed for the job. Creating a mesh by hand is the only supported option in the current preview build, but this practice is likely to become marginalized in the long run. In future builds, there will be mechanisms allowing meshes to be imported from external sources such as DirectX mesh files, so you'll be able to use any tool capable of generating such files to create your meshes.

While the current build of Longhorn does not support importing mesh files directly, I have created a tool that you can use to convert a DirectX mesh file to XAML, which can then be imported manually into your application.

As with 2-D objects, there is more to a 3-D object than its shape—we also need to be able to control the color or texture of the shape. As you can see from the tetrahedron example above, MeshPrimitive3D accommodates this. As well as the Mesh property, which defines the shape of the surface, there is a second property, Material.


With two-dimensional Avalon elements, we use the Brush class to define how areas of the screen are filled in. It is possible to use any Brush with a 3-D element as well. However, three-dimensional surfaces can have aspects to their appearance that are not catered for by the Brush class. For example, as well as needing to know what color a particular point on a surface should be, we need to know how it will respond to the lighting—glossy surfaces look quite different to matt ones, and metallic surfaces look different again.

Avalon therefore does not use the Brush class directly. Instead, we must define the color of our object using an object derived from the abstract Material class. Today there is only one concrete subclass of Material: BrushMaterial. It is a fairly basic class, and does not yet provide any of the 3-D surface effects described in the previous paragraph, but the class hierarchy leaves the flexibility to add more exotic material properties in the future.

BrushMaterial lets us define a surface's appearance by supplying any Avalon Brush object. At least that's the theory. Unfortunately, the WinHEC build only fully supports the SolidBrush. (ImageBrush can also be made to work, although it can cause problems.) The DrawingBrush does not work at all, which means, sadly, that you cannot draw vector graphics onto the surfaces of 3-D objects today. This should be fixed in a future release.

The previous example used a SolidBrush as the mesh's material:

    . . .
        <BrushMaterial Brush="Blue" />


In order to be able to see any of our 3-D objects, we will of course need some light. If you do not supply at least one light source in your model, nothing will appear. There are several kinds of light source available:

Light Type        Usage

DirectionalLight  Models a distant light source, such as the sun. The light source does not have any particular location, simply a direction.

PointLight        Models a nearby source, such as a light bulb—the source has a position, and light comes from that position. (The way in which such a source illuminates an object will depend on their distance and relative locations.)

SpotLight         Similar to a point light, except it does not throw light in all directions—like a real spotlight, it casts a cone of light.

AmbientLight      A nondirectional light source—ambient light sources illuminate all objects uniformly regardless of their location, or the direction they are facing.

Lights are placed in the model—these elements all derive from the Light class, which in turn derives from the same Model3D class as MeshPrimitive3D. There is no significance to their ordering—they can appear before or after the objects in the model that they illuminate.

Figure 2 uses two light sources—a DirectionalLight shining in the direction of -3,-2,-1 (that is, shining in from the top left, angled slightly towards the back) and a PointLight to the top right of and just in front of the scene. Figure 9 shows how the scene looks with just the DirectionalLight, while Figure 10 shows the effect with just the PointLight.


Figure 9. DirectionalLight


Figure 10. PointLight

Figure 11 has a SpotLight in place of the PointLight. The results are rather surprising. You would expect to see a circle of light where the spotlight is pointing, fading to darkness away from the center. But remember that lighting calculations are done only for vertices, rather than for every pixel rendered. This means you will only see a "spot" if the spotlight is illuminating an area covered by relatively small triangles. This scene is made up entirely of very large triangles, so the effect is rather strange.


Figure 11. SpotLight
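The vertex-only lighting artifact is easy to reproduce numerically. In this illustrative sketch (plain Python; the cone test is standard spotlight geometry, not the Avalon API), a narrow spotlight aimed at the center of one large triangle misses every vertex, so per-vertex lighting leaves the whole triangle dark:

```python
import math

def in_cone(vertex, spot_pos, spot_dir, half_angle_deg):
    # True if the vertex lies inside the spotlight's cone of light.
    # spot_dir must be a unit vector.
    to_vertex = tuple(v - p for v, p in zip(vertex, spot_pos))
    length = sum(c * c for c in to_vertex) ** 0.5
    cos_angle = sum((a / length) * b for a, b in zip(to_vertex, spot_dir))
    return cos_angle >= math.cos(math.radians(half_angle_deg))

spot_pos, spot_dir = (0.0, 0.0, 5.0), (0.0, 0.0, -1.0)  # aimed at the origin

# One big triangle spanning the wall at z = 0. Its center is inside the
# cone, but none of its three vertices are.
triangle = [(-10.0, -10.0, 0.0), (10.0, -10.0, 0.0), (0.0, 10.0, 0.0)]
vertex_lit = [in_cone(v, spot_pos, spot_dir, 20.0) for v in triangle]
center_lit = in_cone((0.0, 0.0, 0.0), spot_pos, spot_dir, 20.0)
print(vertex_lit, center_lit)  # [False, False, False] True
```

Lighting sampled only at the three (unlit) vertices produces a uniformly dark triangle, even though the spot falls squarely on its center; subdividing the wall into many small triangles would reveal the expected circle of light.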

Figure 12 is lit with an ambient light. Note how this illuminates the whole scene in a completely uniform manner, making it much harder to perceive the layout.


Figure 12. AmbientLight

Lights, Camera, Obvious Cliché

We now have all the elements we require to display a 3-D scene. We describe the shapes using MeshPrimitive3D elements. We indicate the point of view and projection type with a camera. And we must light the scene. Here is a complete XAML example tying these all together:

<DockPanel xmlns="http://schemas.microsoft.com/2003/xaml">

  <ViewPort3D ClipToBounds="true" DockPanel.Dock="Fill">

      <PerspectiveCamera NearPlaneDistance="1" FarPlaneDistance="100" 
         LookAtPoint="0,0,0" Position="0, 0, 5" Up="0, 1, 0"
         FieldOfView="45" />

      <Model3DCollection IncludeInHitTestResults="True">
          <DirectionalLight Color="#FFFFFFFF" Direction="3,-1,-3" />
          <AmbientLight Color="#66666666" />

          <MeshPrimitive3D>
              <Mesh3D TriangleIndices="0 1 2  1 2 3  2 3 0  0 1 3"
                      Normals="-1,-1,0 1,-1,0 1,0,0 0,0,1"
                      Positions="-2,-2,-2  2,-2,-2  0,2,-2  0,0,1"/>
              <BrushMaterial Brush="Blue" />
          </MeshPrimitive3D>
      </Model3DCollection>

  </ViewPort3D>

</DockPanel>

This XAML was used to generate the first of the two views shown in Figure 1. (The second view of the tetrahedron was made simply by moving the camera position.) Notice that the ViewPort3D in this example is a child of a normal Avalon DockPanel—it integrates into the element tree just like any other Avalon element. We could put other markup, such as text or buttons, alongside the 3-D part. In this case we have used the DockPanel.Dock="Fill" attribute to set the 2-D size of the viewport. If you are using other panel types, you can instead use the ViewPort3D's Width and Height properties.


Before Avalon, 3-D was its own distinct world, requiring the use of dedicated APIs. Now, Avalon's simple 3-D support allows three-dimensional scenes to be included in your application's visuals alongside 2-D drawing primitives. The ViewPort3D integrates into Avalon's visual tree, so it can be used in the same way as any of the 2-D visual elements, enabling the use of 3-D anywhere in your UI.

© 2016 Microsoft