September 2014

Volume 29 Number 9

Unity : Developing Your First Game with Unity and C#, Part 2

Adam Tuliper

Welcome back to my series on Unity. In the first article, I covered some Unity basics and architecture. In this article, I’m going to explore 2D in Unity, which builds on the 2D support Unity added in version 4.3. You could do 2D in Unity before 4.3, but the process was quite painful without a third-party toolkit. What I want is to simply drag and drop an image into my scene and have it appear and behave as I’d expect. That’s some of what Unity 4.3 brings to the table, and in this article, I’ll discuss more of its features while developing a basic 2D platformer game that demonstrates some essential Unity concepts.

2D in Unity

To get 2D support in Unity, you select 2D from the dropdown in the new project dialog when creating a new project. When you do, the project defaults are set to 2D (viewable under Edit | Project Settings | Editor) and any images imported into your project are brought in as sprites as opposed to just textures. (I’ll cover this in the next section.) Also, the scene view defaults to 2D mode. This really just provides a helper button that fixes you to two axes during scene development; it has no effect in your actual game. You can click it at any time to pop in and out of 2D working mode. A 2D game in Unity is really still a 3D environment; your work is just constrained to the X and Y axes. Figure 1 and Figure 2 show the scene view with 2D mode selected and not selected. I have the camera highlighted so you can see an outline of the camera’s viewing area; note that it extends out into the scene as a rectangular box.

2D Mode Selected—Camera Has Focus
Figure 1 2D Mode Selected—Camera Has Focus

2D Mode Not Selected—Camera Has Focus
Figure 2 2D Mode Not Selected—Camera Has Focus

The highlighted camera is set up as an orthographic camera, one of the two camera modes in Unity. This camera type, which is commonly used in 2D, doesn’t scale down objects that are further away, as your eyes would see them; that is, there’s no depth from the camera position. The other camera type is perspective, which shows objects as our eyes see them, with depth. There are various reasons to use one camera type instead of the other, but in general, choose perspective if you need visual depth, unless you’re willing to scale your objects accordingly. You can change the mode simply by selecting the camera and changing the projection type. I recommend trying this out and seeing how your camera’s viewing area changes when you start moving objects further away along the Z axis. Note that you can change the project’s default behavior mode at any time; doing so affects only images imported afterward.
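If you’d rather flip the projection from a script than from the Inspector, a minimal sketch using the standard Camera API looks like this (the orthographic size value is just an example):

```csharp
using UnityEngine;

public class CameraSetup : MonoBehaviour
{
  void Start()
  {
    // Switch the main camera to orthographic projection,
    // the mode typically used for 2D.
    Camera.main.orthographic = true;
    // Orthographic size is half the vertical viewing volume
    // in world units; larger values show more of the scene.
    Camera.main.orthographicSize = 5f;
  }
}
```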

If you have an existing project in Unity or aren’t sure if you’ve selected 2D from the project dialog, you can set your project defaults for 2D by going to Edit | Project Settings | Editor; otherwise, you’ll have to manually set the texture type on every 2D image you import, which is a bit tedious if you have a lot of artwork.

It’s All About the Sprite

When 3D is selected as the default behavior mode, images are recognized as type Texture. You can’t drag a texture into your scene; a texture must be applied to an object. That isn’t much fun for creating 2D games. I want to just drag and drop an image and have it appear in my scene. If your default behavior mode is 2D, though, things get easy. When I drag and drop an image into Unity, it’s now recognized as type Sprite.

This allows you to simply drag and drop your artwork into Unity, and then from Unity drag and drop it into your scene to build out your game. If your artwork looks too small, rather than rescale it everywhere, you can just decrease its Pixels To Units value. This is a very common operation in Unity for both 2D and 3D and typically is more performant than scaling objects via the transform’s scale property.

When dropping objects, you might notice that one object ends up on top of another. Unity creates a series of vertices behind the scenes, even for 2D images, so the drawing order can differ on various parts of the images. It’s always best to explicitly specify the z-order of your images. You can do this via three methods, listed in the order Unity draws your sprites:

  1. Setting the “Sorting Layer” property in the Sprite Renderer.
  2. Setting the “Order in layer” property, also on the Sprite Renderer.
  3. Setting the Transform’s Z position value.

The sorting layer takes priority over all, followed by order in layer, which in turn takes priority over the transform’s z value.

Sorting layers draw in order of definition. When you add other layers (in Edit | Project Settings | Tags and Layers), Unity draws any object it finds on the Default layer first, then Background, then Platforms, and so forth; within each layer, Order in Layer breaks ties, and then the Transform’s Z position value. So you can easily fix overlapping images by setting them to the Platforms layer and giving the one you want on top an Order in Layer of 1, so it’s drawn after anything with an Order in Layer of 0.
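The first two of these can also be set from code through the Sprite Renderer. Here’s a minimal sketch; it assumes you’ve already defined the Platforms sorting layer in Tags and Layers:

```csharp
using UnityEngine;

public class SortingSetup : MonoBehaviour
{
  void Start()
  {
    var spriteRenderer = GetComponent<SpriteRenderer>();
    // Draw on the Platforms sorting layer...
    spriteRenderer.sortingLayerName = "Platforms";
    // ...after anything on that layer with a lower order.
    spriteRenderer.sortingOrder = 1;
  }
}
```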

Common Functionality

Figure 3 shows a level containing some platforms and background images that were built out by dragging and dropping and setting the sorting layers.

A Game Level
Figure 3 A Game Level

As it stands now, it looks like a game, but it doesn’t act like one. At a minimum, it needs a number of features to be a functional game. I’ll discuss several of these in the following sections.

Keyboard, Mouse and Touch Movement In Unity, keyboard, mouse, accelerometer and touch input are read through the input system. You can easily read movement input and mouse clicks or touches using a script like the following on the main player (I’ll build on this script shortly):

void Update()
{
  // Returns -1 to 1
  var horizontal = Input.GetAxis("Horizontal");
  // Returns true or false. A left-click
  // or a touch event on mobile triggers this.
  var firing = Input.GetButtonDown("Fire1");
}

If you check Edit | Project Settings | Input, you can see the default inputs (Unity supplies a bunch in every new project) or set new ones. Figure 4 shows the defaults for reading horizontal movement. The “left” and “right” settings represent the left and right arrow keys, but notice that “a” and “d” are also used for horizontal movement. These can map to joystick inputs as well. You can add new entries or change the defaults. The Sensitivity field controls how fast Unity ramps the value from 0 to 1 or -1. When the right arrow is pressed, the first frame might yield a value of .01 and then scale pretty quickly up to 1, although you can raise the Sensitivity to give your character near-instant horizontal movement. I’ll show you the code shortly for applying these values to your game objects. There’s no actual GameObject component required for reading these values; you simply use the Input class in your code to access the input functionality. Input, as a general rule, should be read in the Update function as opposed to FixedUpdate, to avoid missing input events.

Horizontal Input Defaults
Figure 4 Horizontal Input Defaults
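To get a feel for what Sensitivity does, here’s a quick engine-free sketch that approximates how a held key ramps the axis value toward 1 at Sensitivity units per second. This is my own illustration of the ramping behavior, not Unity’s actual implementation:

```csharp
using System;

class AxisRamp
{
  // Moves 'current' toward 'target' at 'sensitivity' units
  // per second, never overshooting; roughly what a digital
  // key does to the axis value each frame.
  static float Step(float current, float target,
    float sensitivity, float deltaTime)
  {
    float maxDelta = sensitivity * deltaTime;
    float diff = target - current;
    if (Math.Abs(diff) <= maxDelta) return target;
    return current + Math.Sign(diff) * maxDelta;
  }

  static void Main()
  {
    float axis = 0f;
    // Hold the right arrow for 10 frames at 60 fps, Sensitivity = 3.
    for (int i = 0; i < 10; i++)
      axis = Step(axis, 1f, 3f, 1f / 60f);
    Console.WriteLine(axis); // ~0.5; the axis reaches 1 after ~20 frames
  }
}
```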

Linear Movement Things need to be able to move. If this is a top-down game, gravity typically isn’t important. If it’s a platformer, gravity is exceedingly important. In either case, object collision detection is critical. Here are the basic rules. A Rigidbody2D or Rigidbody (used for 3D) component added to a game object automatically gives that object mass and makes it understand gravity and receive forces. According to Wikipedia, “In physics, a rigid body is an idealization of a solid body in which deformation is neglected. In other words, the distance between any two given points of a rigid body remains constant in time regardless of external forces exerted on it.” The same principle applies in games. Adding a rigid body lets you make calls like the ones in Figure 5.

Figure 5 Adding Movement and Velocity

// Set these in the Inspector and tune to taste.
[SerializeField]
private float _moveSpeed = 5f;
[SerializeField]
private float _jumpForce = 250f;
void FixedUpdate()
{
  // -1 to 1 value for horizontal movement
  float moveHorizontal = Input.GetAxis("Horizontal");
  // A Vector gives you simply a value x,y,z, for example, 1,0,0
  // for max right input, 0,1,0 for max up.
  // Keep the current Y velocity, which changes
  // every interval because of gravity.
  var movement = new Vector3(moveHorizontal *
    _moveSpeed, rigidbody.velocity.y, 0);
  rigidbody.velocity = movement;
  if (Input.GetButtonDown("Fire1"))
  {
    rigidbody.AddForce(0, _jumpForce, 0);
  }
}

As a general rule, linear movement should happen via Update and accelerated movement via FixedUpdate. If you’re a beginner, it might seem confusing what to use when, and, in fact, linear movement will work in either function. But you’ll get better visual results by following this rule.

Collision Detection An object gets its mass from its Rigidbody component, but you also need to tell Unity how to handle collisions with it. The size and shape of your artwork or models doesn’t matter here, although scaling an object does affect the physics on it. What matters is the size and shape of the collider component, which is simply a defined region around, on or within the object in which you want Unity to detect contact with another object. This is what enables scenarios like detecting when the player enters the region of an idle zombie, or boulders bouncing down a mountainside as the player approaches.

There are variously shaped colliders. A collider for 2D can be a circle, edge, polygon or box. Box colliders are great for objects shaped like squares or rectangles, or when you simply want to detect collisions in a square area. Think of a platform you can stand on—this is a good example of a box collider. Simply adding this component to your game object allows you to take advantage of physical collisions. In Figure 6, I added a circle collider and rigid body to the character and a box collider to the platform. When I click play in the Editor, the player immediately drops onto the platform and stops. No code required.

Adding Colliders
Figure 6 Adding Colliders

You can move and resize the region a collider covers by changing its properties on the collider component. By default, objects with colliders don’t pass through each other (except for triggers, which I’ll cover next). Collisions require collider components on both game objects, and at least one of the objects has to have a Rigidbody component.

If I want Unity to call my code when this collision instance first happens, I simply add the following code to a game object via a script component (discussed in the previous article):

void OnCollisionEnter2D(Collision2D collision)
{
  // If you want to check who you collided with,
  // you should typically use tags, not names.
  if (collision.gameObject.tag == "Platform")
  {
    // Play footsteps or a landing sound.
  }
}

Triggers Sometimes you want to detect a collision but don’t want any physics involved. Think of a scenario like picking up treasure in a game. You don’t want the coins to be kicked out in front of the player when it approaches; you want the coins to be picked up and not interfere with player movement. In this case, you use a collider called a trigger, which is nothing more than a collider with the IsTrigger checkbox enabled. That turns off the physics and Unity will only call your code when object A (which contains a collider) comes within the region of object B (which also has a collider). In this case, the code method is OnTriggerEnter2D instead of OnCollisionEnter2D:

void OnTriggerEnter2D(Collider2D collider)
{
  // If the player hits the trigger.
  if (collider.gameObject.tag == "Player")
  {
    // Score is an instance property on the game controller
    // (more on that shortly), so find the GameController
    // instance in the scene first.
    var controller = (GameController)
      FindObjectOfType(typeof(GameController));
    controller.Score++;
    // You don’t do: Destroy(this); because 'this'
    // is a script component on a game object, so you use
    // this.gameObject or just gameObject to destroy the object.
    Destroy(gameObject);
  }
}

The thing to remember is that with triggers, there’s no physical interaction; it’s basically just a notification. The trigger object itself doesn’t need a Rigidbody component, because no force calculations take place on it (the object entering the trigger, typically the player, will usually have one).

One thing that often trips up new developers is the behavior of rigid bodies when you add colliders to them. If I have a circle collider on my object and I put this object on an inclined plane, it starts to roll, as Figure 7 shows (note the collider shape). This models what you’d see in the physical world if a wheel were set on an incline. The reason I don’t use a box collider for my character is that a box has edges that can get caught on other colliders’ edges when moving over them, yielding a less smooth experience. A circle collider makes this smoother. However, for times when rotation isn’t acceptable, you can use the Fixed Angle setting on the Rigidbody2D component.

Using a Circle Collider for Smooth Movement
Figure 7 Using a Circle Collider for Smooth Movement
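If you need to lock the rotation from code rather than the Inspector, here’s a one-line sketch; it assumes the Rigidbody2D API as it stands in Unity 4.x, where the setting is exposed as the fixedAngle property:

```csharp
using UnityEngine;

public class LockRotation : MonoBehaviour
{
  void Start()
  {
    // Equivalent to checking Fixed Angle in the Inspector:
    // the body still translates but no longer rotates.
    rigidbody2D.fixedAngle = true;
  }
}
```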

Audio To hear sound, you need an Audio Listener component, which already exists on any camera by default. To play a sound, simply add an Audio Source component to a game object and set its audio clip. Unity supports most major audio formats and will encode longer clips to MP3. If you have a bunch of audio sources with clips assigned in the Unity Editor, keep in mind they’ll all be loaded at run time. You can instead load the audio via code from a special Resources folder and destroy it when you’re done.
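A minimal sketch of that load-on-demand approach might look like the following; the clip path “Sounds/Adventure” is hypothetical, and Resources.Load requires the file to live under a folder named Resources in your project:

```csharp
using UnityEngine;

public class MusicLoader : MonoBehaviour
{
  void Start()
  {
    // Load the clip on demand from Assets/Resources/Sounds.
    var clip = Resources.Load<AudioClip>("Sounds/Adventure");
    audio.clip = clip; // Assumes an Audio Source on this game object.
    audio.Play();
  }

  void OnDestroy()
  {
    // Free the audio memory when done with it.
    Resources.UnloadAsset(audio.clip);
  }
}
```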

When I imported audio into my project, I kept it as a WAV file, which is uncompressed audio. Unity will re-encode your longer audio to optimize it, so always use the best quality audio you have. This is especially true for short files like sound effects, which Unity won’t encode. I also added an Audio Source component to my main camera, though I could’ve added it to any game object. I then assigned the Adventure audio clip to this Audio Source component and checked Loop so the music plays continuously. And in three simple steps, I now have background music when my game plays.

GUI/Heads-Up Display A GUI system can comprise many things in a game. It may involve the menu system, the health and score display, weapons inventory, and more. Typically, a GUI system is what you see on the screen that stays put no matter where the camera is looking (although it doesn’t have to). The Unity GUI functionality is currently undergoing a complete revision and the new uGUI system is coming out in Unity 4.6. Because that isn’t released yet, I’ll simply discuss some of the basic functionality here, but check out my channel9 blog for details of the new GUI system at channel9.msdn.com/Blogs/AdamTuliper.

To add simple display text to the screen (for example, score: 0), I clicked on Game Object | Create Other | GUI Text. This option no longer exists in Unity 4.6, so you’ll want to watch the video on uGUI I mentioned. You can still add a GUI Text component to a game object in 4.6 by clicking the Add Component button; it’s just missing from the Editor menu. With the existing (legacy) Unity GUI system, you can’t see your GUI objects in the scene view, only in the Game view, which makes layout creation a little awkward. If you like, you can use pure code to set up your GUI, and there’s a GUILayout class that lets you track widgets automatically. But I prefer a GUI system I can work with easily by clicking and dragging, which is why I find uGUI far superior. (Before uGUI, the leader in this area was a pretty solid third-party product called NGUI, which was actually used as the initial code base for uGUI.)

The easiest way to update this display text is to get a reference to the GUI Text game object (by searching for it in code or assigning it in the Editor), then treat it like a label in .NET and update its text property:

void UpdateScore()
{
  var score = GameObject.Find("Score").GetComponent<GUIText>();
  score.text = "Score: 0";
}

This is a slightly shortened example. For performance, I’d actually cache a reference to the GUIText component in the Start method so as not to query for it on every call.
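A sketch of that cached version might look like this (the “Score” object name matches the earlier example; the score parameter is my own addition):

```csharp
using UnityEngine;

public class ScoreDisplay : MonoBehaviour
{
  private GUIText _scoreText;

  void Start()
  {
    // Look up the GUIText once instead of on every update.
    _scoreText = GameObject.Find("Score").GetComponent<GUIText>();
  }

  void UpdateScore(int score)
  {
    _scoreText.text = "Score: " + score;
  }
}
```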

Score Tracking Tracking scores is easy. You simply have a class that exposes a public method or property for setting the score. It’s common in games to have an object called a game controller that acts as the game’s organizer. The game controller can be responsible for triggering game saves, loading, score keeping and more. In this example, I can simply have a class that exposes a score property, as shown in Figure 8. I assign this component to an empty game object, so it’s available when the scene loads. When the score is updated, the GUI is updated in turn. The _scoreText variable is assigned in the Unity Editor: just drag any GUIText game object onto the exposed Score Text field on this script component, or use the field’s search widget.

Figure 8 Creating the _scoreText Variable

public class GameController : MonoBehaviour
{
  private int _score;
  // Drag a GUIText game object in the Editor onto
  // this exposed field, or search for it upon startup
  // as done in Figure 12.
  [SerializeField]
  private GUIText _scoreText;
  void Start()
  {
    if (_scoreText == null)
    {
      Debug.LogError("Missing the GUIText reference.");
    }
  }
  public int Score
  {
    get { return _score; }
    set
    {
      _score = value;
      // Update the score on the screen.
      _scoreText.text = string.Format("Score: {0}", _score);
    }
  }
}

I can then simply update (in this example) the mushroom’s trigger code as follows to increment the score with each pickup:

void OnTriggerEnter2D(Collider2D collider)
{
  if (collider.gameObject.tag == "Player")
  {
    // Score is an instance property, so grab the
    // GameController instance in the scene first.
    var controller = (GameController)
      FindObjectOfType(typeof(GameController));
    controller.Score++;
    Destroy(gameObject);
  }
}

Animations Just as with XAML, animations are created by carrying out various actions in key frames. I could easily devote a whole article just to animations in Unity, but I’ll keep it brief here because of space. Unity has two animation systems: the legacy system and the newer Mecanim. The legacy system uses animation (.anim) files directly, while Mecanim uses states to control which animation file plays.

Animation in 2D uses Mecanim by default. The simplest way to create an animation is to drag and drop images into your scene and let Unity create the animations for you. To start, I drag some single sprites into Unity and in turn Unity creates several things for me. First, it creates a game object with a sprite renderer component to draw the sprites. Then it creates an animation file. You can see this by going to Window | Animator and highlighting your game object. The animator shows the animation file assigned, which, in my case, contains six key frames because I dropped six images into my scene. Each key frame controls one or more parameters on some component; here, it changes the Sprite property of the Sprite Renderer component. Animations are nothing more than single images showing at some rate that makes the eye perceive movement.

Next, Unity creates an Animator component on the game object, as shown in Figure 9.

The Animator Component Pointing to a Controller
Figure 9 The Animator Component Pointing to a Controller

This component points to a simple state machine called an animation controller. This is a file Unity creates, which just shows the default state; in other words, it’s always in the “idle” state as that’s the only state available. This idle state does nothing more than point to my animation file. Figure 10 shows the actual key frame data on the time line.

The Idle Animation Data
Figure 10 The Idle Animation Data

This might seem like a lot to go through just to play an animation. The power of state machines, though, is that you can control them by setting simple variables. Remember, a state does nothing more than point to an animation file (although in 3D you can get fancy and do things like blend animations together).

I then took more images to make a run animation and dropped them onto my Yeti game object. Because I already have an animator component on the game object, Unity just creates a new animation file and adds a new state called “run.” I can simply right-click on idle and create a transition to run. This creates an arrow between the idle and run states. I can then add a new variable called “Running,” which is simple to use—you just click on the arrow between the states and change the condition to use the variable, as shown in Figure 11.

Changing from the Idle to Run States
Figure 11 Changing from the Idle to Run States

When Running becomes true, the idle animation state changes to the run animation state, which simply means the run animation file plays. You can control these variables in code very easily. If you want to start your run animation by triggering the run state when the mouse button is clicked, you can add the code shown in Figure 12.

Figure 12 Changing State with Code

private Animator _animator;
void Awake()
{
  // Cache a reference to the Animator
  // component from this game object.
  _animator = GetComponent<Animator>();
}
void Update()
{
  if (Input.GetButtonDown("Fire1"))
  {
    // This causes the animation controller to
    // transition from the idle to the run state.
    _animator.SetBool("Running", true);
  }
}

In my example, I used single sprites to create an animation. It’s pretty common, though, to use a sprite sheet—a single image file with more than one image in it. Unity supports sprite sheets, so it’s a matter of telling Unity how to slice up your sprite, and then dropping those slices into your scene. The only steps that are different are changing Sprite Mode from Single to Multiple on the sprite properties, and opening the Sprite Editor, which can then automatically slice the sprite and apply the changes, as shown in Figure 13. Finally, you expand the sprite (there’s a little arrow on the sprite’s icon in the project view), highlight the resulting sprites, and drop them into your scene as you did earlier.

Creating a Sprite Sheet
Figure 13 Creating a Sprite Sheet

Animation can be a complicated subject until you learn the system. For more information, check out my channel9 blog or one of the many fine resources on Unity’s learning site.

The End of the Level When the player gets to the end of the level, you can simply place a collider that’s set as a trigger where the level ends and let the player hit that zone. When the player does, you just load another level or reload the current one:

void OnTriggerEnter2D(Collider2D collider)
{
  // If the player hits the trigger.
  if (collider.gameObject.tag == "Player")
  {
    // Reload the current level.
    Application.LoadLevel(Application.loadedLevel);
    // Could instead pass in the name of a scene to load:
    // Application.LoadLevel("Level2");
  }
}

The game object and its respective properties are shown in Figure 14. Note the collider’s height is high enough that the player can’t jump over it and also that this collider is set as a trigger.

The Game Object and Its Properties
Figure 14 The Game Object and Its Properties

Game Play In a simple 2D game like this, the flow is pretty straightforward. The player starts. Gravity on the rigid body makes the player fall. There’s a collider on the player and on the platform, so the player stops. Keyboard, mouse and touch input are read and move the player. The player jumps between platforms via rigidbody.AddForce, and moves left or right by reading Input.GetAxis("Horizontal") and applying it to rigidbody.velocity. The player picks up mushrooms, which are just colliders set as triggers; when the player touches them, they increment the score and destroy themselves. When the player finally makes it to the last sign, a collider/trigger reloads the current level. An additional to-do item here would be a large collider under the ground to detect when the player falls off a platform, and simply reload the level then, as well.

Prefabs Reuse is important in coding, as well as in design. Once you assign several components and customize your game objects, you’ll often want to reuse them across the same scene, or even across multiple scenes or games. You can create another instance of a game object in your scene, but you can also create an instance of a prefab that doesn’t yet exist in your scene. Consider platforms and their colliders: as plain game objects, you can’t reuse them across scenes, but by creating prefabs, you can. Just drag any game object from the hierarchy back into the project folder and a new file is created with the extension .prefab that includes any child hierarchies. You can now drag this file into your scenes and reuse it. The original game object turns blue to indicate it’s now connected to a prefab. Updating the .prefab file updates all instances in your scenes, and you can also push changes from a modified scene prefab back down to the .prefab file.
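Prefabs can also be spawned at run time. Here’s a minimal sketch: you assign the .prefab file to the exposed field in the Editor, and Instantiate creates a fresh copy in the scene (the field name and spawn position are my own):

```csharp
using UnityEngine;

public class PlatformSpawner : MonoBehaviour
{
  // Drag the platform .prefab file onto this field in the Editor.
  [SerializeField]
  private GameObject _platformPrefab;

  void Start()
  {
    // Create a copy of the prefab at x=10, y=0, unrotated.
    Instantiate(_platformPrefab,
      new Vector3(10f, 0f, 0f), Quaternion.identity);
  }
}
```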

Clicking on the prefab displays the game objects it contains, as shown in Figure 15. If you make changes here, all instances in your scene will be updated.

Viewing the Contents of a Prefab
Figure 15 Viewing the Contents of a Prefab

Wrapping Up

There are a number of common operations performed across games. In this article, I covered the basics of a platformer game that uses colliders, rigid bodies, animations, score-keeping, basic GUI text and reading user input to apply force to move the player. These building blocks can be reused across a variety of game types. Stay tuned for a discussion of 3D coming in my next installment!


Adam Tuliper is a senior technical evangelist with Microsoft living in sunny Southern California. He’s an indie game dev, co-admin of the Orange County Unity Meetup, and a pluralsight.com author. He and his wife are about to have their third child, so reach out to him while he still has a spare moment at adamt@microsoft.com or on Twitter at twitter.com/AdamTuliper.

Thanks to the following technical experts for reviewing this article: Matt Newman (Subscience Studios), Tautvydas Žilys (Unity)