October 2014

Volume 29 Number 11

Unity - Developing Your First Game with Unity and C#, Part 3

Adam Tuliper | October 2014

You’re still with me in this series. Good. In the first article, I covered some Unity basics (msdn.microsoft.com/magazine/dn759441). In the second, I focused on 2D in Unity (msdn.microsoft.com/magazine/dn781360). Now I get to my favorite part of game development—3D. The world of 3D is a truly magical place—amazing immersive environments, rich sound effects and beautiful visuals—even just a simple puzzle game with real-world physics can keep you hooked for hours.

3D games definitely add a layer of complexity over 2D, but by taking it piece by piece you can build up a cool 3D game. Whether you start a new project with the 2D or the 3D settings, Unity supports 3D either way; you can have 3D objects in a 2D game (and vice versa).

What Makes Up a 3D Scene?

3D scenes consist primarily of three main visual components—lights, mesh renderers and shaders. A light is, well, a light, and Unity supports four different types. You can find them all under the GameObject menu. Experiment with adding the various types and changing their properties. The easiest way to light up your scene is with a directional light, which works like the sun in the sky.
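
As a quick illustration, here's a minimal sketch of creating a directional light from code (the name, intensity and angles are arbitrary values I chose for this example):

using UnityEngine;

public class LightSetup : MonoBehaviour
{
  void Start()
  {
    // Create an empty game object and attach a Light component to it.
    GameObject lightGameObject = new GameObject("Sun");
    Light sun = lightGameObject.AddComponent<Light>();

    // Make it directional, like the sun in the sky. Position doesn't
    // matter for directional lights; only rotation does.
    sun.type = LightType.Directional;
    sun.intensity = 0.8f;

    // Angle it down so it casts light across the scene.
    lightGameObject.transform.rotation = Quaternion.Euler(50f, 30f, 0f);
  }
}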

A mesh (or model) is a collection of vertices that make up the polygons that make up an object. A shader is a compiled routine containing code that controls how your object will show up or interact with light. Some shaders simply take light and reflect it like a mirror; others take a texture (an image to be applied to your mesh) and can enable shadows and depth; and some even allow you to cut visual holes through your models, like the gaps in a chain-link fence.
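
To give you a feel for how this looks in code, here's a minimal sketch that assigns a texture to an object's material and swaps in a different built-in shader at run time (the brickTexture field is a hypothetical Inspector-assigned texture):

using UnityEngine;

public class MaterialTweaks : MonoBehaviour
{
  // Hypothetical texture assigned in the Inspector.
  public Texture2D brickTexture;

  void Start()
  {
    // Apply the texture to this object's material. Note that reading
    // .material instantiates a copy of the material for this renderer.
    GetComponent<Renderer>().material.mainTexture = brickTexture;

    // Swap in the built-in diffuse shader, which takes a texture
    // and applies simple lighting to it.
    GetComponent<Renderer>().material.shader = Shader.Find("Diffuse");
  }
}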

Models are typically FBX or OBJ files exported from another modeling package. FBX files can also contain animation data, so you might receive one FBX file for your model and another containing several animations. Several third-party file formats are also supported, such as the Autodesk Maya .ma format and Blender files. You'll typically need the third-party program installed on the same system if you want Unity to import those files; then it's simply a matter of dragging and dropping them into your Unity project, just as you would any other file. Behind the scenes, Unity converts other file formats (upon import or when it detects file changes) into the FBX format.

Asset Store

I touched on the Asset Store in my first article, but it's with 3D games that it really comes in handy. I'm not an artist, and because this is a technical magazine, I assume most of you aren't, either. (If you are, please accept my congrats; you're part of a rare group.) But if I want to create a game with lush environments and old destroyed buildings, for example, it's not a problem. I can buy what I need from the Asset Store. If I want 15 different zombies, I can procure a pack from Mixamo in the Asset Store. The potential combinations are nearly endless, so don't worry about someone else's game looking like yours. Best of all, the Asset Store integrates into Unity. You can upgrade your packages by clicking Window | Asset Store and then the bin icon. You can also check out reviews and comments to more easily determine whether a particular item is good for your project, for example, whether it's mobile-optimized or not. Desktop games can typically handle a lot more objects/vertices/textures/memory than a mobile game, although some of the newer chips make mobile devices today seem like Xbox 360s.

In a typical 3D game, many of the same concepts from a 2D game apply—colliders, triggers, rigid bodies, game objects/transforms, components and more. Regardless of the type of 3D game, you’ll typically want to control input, movement, and characters; use animations and particle effects; and build an imaginative world that’s both fantastical and realistic. I’ll discuss some of the ways Unity helps with this.

Input, Movement and Character Controllers

Reading input for movement becomes a bit more complicated in 3D because rather than simply moving in the X and Y planes, you can now move in three dimensions: X, Y and Z. Scenarios for 3D movement include (but aren’t limited to) top-down movement, where a character moves only horizontally and vertically; rotating a camera or character when reading mouse input, as is done in many first-person shooter (FPS) games; strafing left to right when reading horizontal input; rotating to turn around when reading horizontal input; or just walking backward. There are a good number of movement options from which to choose.
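
As a small sketch of a few of those scenarios, the following reads the default "Horizontal," "Vertical" and "Mouse X" input axes to strafe, walk and turn; the speed values are arbitrary:

using UnityEngine;

public class SimpleMovement : MonoBehaviour
{
  public float moveSpeed = 5f;   // Meters per second.
  public float turnSpeed = 120f; // Degrees per second.

  void Update()
  {
    // Strafe left/right and walk forward/backward relative to the
    // direction the object is currently facing (local space).
    float strafe = Input.GetAxis("Horizontal") * moveSpeed * Time.deltaTime;
    float walk = Input.GetAxis("Vertical") * moveSpeed * Time.deltaTime;
    transform.Translate(strafe, 0f, walk);

    // Rotate around the Y axis when the mouse moves, FPS-style.
    float turn = Input.GetAxis("Mouse X") * turnSpeed * Time.deltaTime;
    transform.Rotate(0f, turn, 0f);
  }
}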

When moving an object, you don’t give it a position to move to, as you might expect. Remember, you’re executing code with each frame, so you need to move the object in small increments. You can either let the physics engine handle this by adding a force to your rigidbody to move it, or you can tween the object. Tweening basically means transitioning between values; that is, moving from point A to point B. There are various ways to tween values in Unity, including free third-party libraries such as iTween. Figure 1 shows some manual ways to move an object in Unity. Note that for simplicity, they haven’t been optimized (to do so, I’d hold a reference to the transform in a variable to prevent going from managed code to native code too often).

Figure 1 Various Methods for Moving Objects

// Method 1
void Update()
{
  // Move from point a to point b by .2 each frame - assuming called in Update.
  // Will not overshoot the destination, so .2 is the max amount moved.
  transform.position =
    Vector3.MoveTowards(transform.position, new Vector3(10, 1, 100), .2f);
}
// Method 2
void Update()
{
  // Interpolate from point a to point b by a percentage each frame,
  // in this case 10 percent (.1 float).
  var targetPosition = new Vector3(10,0,15);
  transform.position = Vector3.Lerp(transform.position, targetPosition, .1f);
}
// Method 3
void Update()
{
  // Move the object forward (about one unit per second) in the
  // direction it is rotated. If you rotate the object 90 degrees,
  // it will now move forward in the direction it is now facing.
  // This essentially translates local coordinates to
  // world coordinates to move object in direction and distance
  // specified by vector. See the Unity Coordinate Systems section in the 
  // main article.
  transform.Translate(Vector3.forward * Time.deltaTime);
}
// Method 4
void FixedUpdate()
{
  // Cause the object to act like it's being pushed to the
  // right (positive x axis). You can also use (Vector.right * someForce)
  // instead of new Vector().
  rigidbody.AddForce( new Vector3(7, 0, 0), ForceMode.Force);
}
// Method 5
void FixedUpdate()
{
  // Directly set the velocity so the object moves along the positive
  // x axis (world coordinates) at approx 7 meters per second. Because
  // this runs every physics step, the velocity is continually reset,
  // so friction won't slow the object while this code is active.
  rigidbody.velocity = new Vector3(7,0,0);
}
// Method 6
// Move the rigidbody's position (note this is not via the transform).
// This method will push other objects out of the way and move to the right in
// world space ~three units per second.
private Vector3 speed = new Vector3(3, 0, 0);
void FixedUpdate()
{
  rigidbody.MovePosition(rigidbody.position + speed * Time.deltaTime);
}
// Method 7
// Note: speed here is a float (meters per second), unlike the
// Vector3 speed used in Method 6.
private float speed = 7f;
void FixedUpdate()
{
  // Vector3.forward is 0,0,1. You could move a character toward 0,0,1, but you
  // actually want to move the object forward no matter its rotation.
  // This is used when you want a character to move in the direction it's
  // facing, no matter its rotation. You need to convert the meaning of
  // this vector from local space (0,0,1) to world space,
  // and for that you can use TransformDirection and assign that vector
  // to its velocity.
  rigidbody.velocity = transform.TransformDirection(Vector3.forward * speed);
}

Each approach has advantages and disadvantages. There can be a performance hit when you move just the transform (methods 1-3), though it's a very easy way to handle movement. Unity assumes that if an object doesn't have a rigidbody component on it, it probably isn't a moving object. It builds a static collision matrix internally to track where objects are, which enhances performance. When you move objects by moving the transform, this matrix has to be recalculated, which causes a performance hit. For simple games, you may never notice the hit and moving the transform may be the easiest thing for you to do, but as your games get more complicated, it's important to move the rigidbody itself, as I did in methods 4-7.

Rotating Objects

Rotating an object is fairly simple, much like moving an object, except the vectors now represent degrees instead of a position or a normalized vector. A normalized vector is simply a vector with a magnitude of one, and it can be used when you just want to reference a direction. There are some vector shortcuts available to help, such as Vector3.back, down, forward, left, right, up, zero and one. Anything that will move or rotate in the positive horizontal direction can use Vector3.right, which is just a shortcut for (1,0,0), or one unit to the right. For rotating an object, this would represent one degree. In Figure 2, I just rotate an object by a little bit in each frame.

Figure 2 Methods for Rotating an Object

// Any code below that uses _player assumes you
// have this code prior to it to cache a reference to it.
private GameObject _player;
void Start()
{
  _player = GameObject.FindGameObjectWithTag("Player");
}
// Method 1
void Update () {
  // Rotate around the X axis at 1 degree per
  // second (Vector3.right = (1,0,0)).
  transform.Rotate(Vector3.right * Time.deltaTime);
}
// Method 2
void Update () {
  // No matter where the player goes, rotate toward him, like a gun
  // turret following a target.
  transform.LookAt(_player.transform);
}
// Method 3
void Update()
{
  Vector3 relativePos = _player.transform.position - transform.position;
  // If you set rotation directly, you need to do it via a Quaternion.
  transform.rotation = Quaternion.LookRotation(relativePos);
}

Each of these techniques has minor nuances. Which one should you use? I would try to apply forces to the rigidbody, if possible. I’ve probably just confused you a bit with that option. The good news is, there’s existing code that can do virtually all of this for you.

Did you notice the Quaternion in Method 3? Unity uses Quaternions internally to represent all rotations. Quaternions are efficient structures that prevent an effect called gimbal lock, which can happen if you use regular Euler angles for rotation. Gimbal lock occurs when two axes are rotated to be on the same plane and then can’t be separated. (The video at bit.ly/1mKgdFI provides a good explanation.) To avoid this problem, Unity uses Quaternions rather than Euler angles, although you can specify Euler angles in the Unity Editor and it will do the conversion into a Quaternion on the back end. Many people never experience gimbal lock, but I wanted to point out that if you want to set a rotation directly in code, you must do it via a Quaternion, and you can convert from Euler angles using Quaternion.Euler.
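
For example, here's a minimal sketch that builds a rotation from Euler angles with Quaternion.Euler and smoothly rotates toward it with Quaternion.Slerp (the 90-degree target and the speed are arbitrary values):

using UnityEngine;

public class QuaternionExample : MonoBehaviour
{
  void Update()
  {
    // Convert Euler angles (degrees around X, Y and Z) to a Quaternion.
    Quaternion target = Quaternion.Euler(0f, 90f, 0f);

    // Smoothly rotate from the current rotation toward the target
    // instead of snapping to it. To snap, assign target directly.
    transform.rotation =
      Quaternion.Slerp(transform.rotation, target, Time.deltaTime);
  }
}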

Now that you’ve seen many options, I should note that I find the easiest method is to use a rigidbody and simply apply .AddForce to the character. I prefer to reuse code when I can, and luckily Unity supplies a number of prefabs.

Let’s Not Reinvent the Wheel

Unity provides the Sample Assets package in the Asset Store (bit.ly/1twX0Kr), which contains a cross-platform input manager with mobile joystick controls, some animations and particles, and most important, some prebuilt character controllers.

Some older assets are still included with Unity (version 4.6 as of this writing), but these assets are now distributed as a separate package that Unity can update independently. Rather than having to write all of the code to create a first-person character in your game, a third-person character, or even a self-driving car, you can simply use the prefabs from the sample assets. Drag and drop one into your scene and you instantly have a third-person view with multiple animations and full access to the source code, as shown in Figure 3.

Figure 3 A Third-Person Prefab

Animations

An entire book could be (and has been) dedicated to the Mecanim animation system in Unity. Animations in 3D are generally more complicated than in 2D. In 2D, an animation file typically changes a sprite renderer in each key frame to give the appearance of animation. In 3D, the animation data is a lot more complex. Recall from my second article that animation files contain key frames. In 3D, there can be many key frames, each with many data points for changing a finger, moving an arm or a leg, or performing any number and type of movements. Meshes can also have bones defined in them and can use components called skinned mesh renderers, which deform the mesh based on how the bones move, much as a living creature would.

Animation files are usually created in a third-party modeling/animation system, although you can create them in Unity, as well.

The basic pose for a character in a 3D animation system is the T-pose, which is just what it sounds like: the character standing straight with arms outstretched. It applies to just about any humanoid-shaped model. You can then enliven that basic character by having Mecanim assign virtually any animation file to it. You can have a zombie, an elf and a human all dancing the same way. You can mix and match the animation files however you see fit and assign them via states, much as you would in 2D. To do this, you use an animation controller like the one shown in Figure 4.

Figure 4 Animation Controller for Controlling a Character’s Animation States
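
In code, you drive those state transitions by setting parameters on the Animator component. Here's a minimal sketch, assuming the controller defines a hypothetical "Speed" float parameter and a hypothetical "Dance" trigger:

using UnityEngine;

public class CharacterAnimation : MonoBehaviour
{
  private Animator _animator;

  void Start()
  {
    // Cache the Animator component that hosts the animation controller.
    _animator = GetComponent<Animator>();
  }

  void Update()
  {
    // Blend between idle/walk/run states via a float parameter
    // (assumes the controller defines a "Speed" parameter).
    _animator.SetFloat("Speed", Input.GetAxis("Vertical"));

    // Fire a one-shot transition (assumes a "Dance" trigger).
    if (Input.GetKeyDown(KeyCode.Space))
      _animator.SetTrigger("Dance");
  }
}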

Remember, you can get characters and animations from the Unity Asset Store; you can create them with modeling tools; and there are third-party products like Mixamo’s Fuse that enable you to quickly generate your own customized characters. Check out my Channel 9 videos for an intro to animation in Unity.

Creating a World

Unity has a built-in terrain system for generating a world. You can create a terrain and then use the included terrain tools to sculpt your terrain, make mountains, place trees and grass, paint textures, and more. You can add a sky to your world by importing the skybox package (Assets | Import Package | Skyboxes) and assigning it in Edit | Render Settings | Skybox Material. It took me just a couple of minutes to create a terrain with reflective, dynamic water, trees, sand, mountains and grass, as shown in Figure 5.

Figure 5 A Quickly Created Terrain
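
Once a terrain is in your scene, you can also query it from code. As a small sketch, this snaps an object onto the terrain surface using SampleHeight (the 0.5f offset is an arbitrary value so the object sits on, rather than in, the ground):

using UnityEngine;

public class SnapToTerrain : MonoBehaviour
{
  void Start()
  {
    // Sample the terrain height at this object's X/Z position.
    // SampleHeight is relative to the terrain, so add the terrain's
    // own Y position to get a world-space height.
    Terrain terrain = Terrain.activeTerrain;
    float height = terrain.SampleHeight(transform.position) +
      terrain.transform.position.y;

    // Place the object on the surface.
    transform.position = new Vector3(
      transform.position.x, height + 0.5f, transform.position.z);
  }
}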

Unity Coordinate Systems

Unity has four different methods for referring to a point in a game or on the screen, as shown in Figure 6. Screen space ranges from 0 to the number of pixels and is typically used to get the location on the screen where the user touches or clicks. Viewport space is simply a value from 0 to 1, which makes it easy to say, for example, that halfway is .5, rather than having to divide pixels by 2; I can easily place an object in the middle of the screen by using (.5, .5) as its position. World space refers to the absolute position of an object in the game, measured from the scene's origin at (0, 0, 0). All top-level game objects in a scene have their coordinates listed in world space. Finally, local space is always relative to the parent game object; for a top-level game object, this is the same as world space. All child game objects are listed in the Editor in coordinates relative to their parent, so a model of a house in your game, for example, may have world coordinates of (200, 0, 35), while its front door (assuming it's a child game object of the house) might be only (1.5, 0, 0), as that's relative to the parent. In code, when you reference transform.position, it's always in world coordinates, even for a child object; in this example, the door would be (201.5, 0, 35). If you instead reference transform.localPosition, you'd get (1.5, 0, 0). Unity has functions for converting among the various coordinate systems.

Figure 6 Coordinates in Unity
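
Here's a minimal sketch of a few of those conversion functions, going from screen and viewport space to world space and between local and world space (it assumes a camera tagged MainCamera; 10f is an arbitrary distance in front of it):

using UnityEngine;

public class CoordinateConversions : MonoBehaviour
{
  void Update()
  {
    // Screen space (pixels) to world space. The Z value is how far
    // in front of the camera to place the resulting point.
    Vector3 mouseWorld = Camera.main.ScreenToWorldPoint(
      new Vector3(Input.mousePosition.x, Input.mousePosition.y, 10f));

    // Viewport space (0 to 1) to world space; (.5, .5) is the
    // center of the screen.
    Vector3 centerWorld =
      Camera.main.ViewportToWorldPoint(new Vector3(.5f, .5f, 10f));

    // Local space to world space: the point one unit to this
    // object's right, expressed in world coordinates.
    Vector3 world = transform.TransformPoint(new Vector3(1f, 0f, 0f));

    // And back from world space to this transform's local space.
    Vector3 local = transform.InverseTransformPoint(world);

    Debug.Log(mouseWorld + " " + centerWorld + " " + local);
  }
}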

In the prior movement examples, I mostly used world space, but in some cases local space. Refer back to method 7 in Figure 1. In that example, I take the local normalized (or unit) vector Vector3.forward, which is (0,0,1). By itself this doesn't have much meaning, but it shows the intent to move something along the Z axis, which is forward. What if the object is rotated 90 degrees from (0,0,0)? Forward can now have two meanings: the original absolute Z axis (in world coordinates), or a Z axis relative to the rotated object, which always points forward for that object. If I want an object to always move forward no matter its rotation, I can simply translate the local forward vector into the real-world forward vector by using transform.TransformDirection(Vector3.forward * speed), as shown in that example.

Threading and Coroutines

Unity uses a coroutine system to manage its threads. If you want something to happen in what you think should be a different thread, you kick off a coroutine rather than creating a new thread, and Unity manages it all behind the scenes. A coroutine pauses when it hits a yield statement and resumes from that point on a later frame. In the example in Figure 7, an attack animation is triggered, the coroutine pauses for a random interval of .5 to 4 seconds, and then the attack plays again.

Figure 7 Using a Coroutine to Pause Action

private Animator _animator;
void Start()
{
  // Cache the Animator and kick off a separate routine that
  // acts like a separate thread.
  _animator = GetComponent<Animator>();
  StartCoroutine(Attack());
}
IEnumerator Attack()
{
  // Repeat forever: attack, then wait .5 to 4 seconds.
  while (true)
  {
    // Trigger an attack animation.
    _animator.SetTrigger("Attack");
    // Wait a random time before attacking again.
    float randomTime = Random.Range(.5f, 4f);
    yield return new WaitForSeconds(randomTime);
  }
}