February 2017

Volume 32 Number 2

[HoloLens]

Moving from Virtual to Mixed Reality

By Tim Kulp

Virtual reality (VR) is all the buzz right now, and developers are racing to build content. On the horizon, mixed reality (MR) is beginning to spark the imagination of developers and make us rethink how the digital world can interact with the real world. As MR gains traction, you’ll need to bring your VR apps to a new platform. In this article, I’ll explore how to update a VR app to MR on the HoloLens—without having to rewrite the whole thing.

In the September issue of MSDN Magazine, I built a VR app called Contoso Travel (msdn.com/magazine/mt763231). This app allowed the Contoso Travel department to see where its employees were traveling throughout the world using a VR map. The idea of the app is to provide an immersive experience for users, with an interface that's more than a bunch of pins on a map. In the app, each traveler is represented by an avatar that shows the person's travel destination and times. I used gaze detection to know when the user was looking at an avatar and then allowed the user to select the avatar, opening a dialog window that shows the traveler's details.

Before You Get Started

This article assumes you’ve already done some reading on HoloLens development, such as Adam Tuliper’s November MSDN Magazine article (msdn.com/magazine/mt788624) or the tutorials at the HoloLens Academy (bit.ly/2gzYYr6). Specifically, this article will focus on using gaze, gesture and voice, so you may want to review those articles as I won’t go into the details provided in that content. This article also assumes you’ve read my September article as it’s the starting point for what I’ll cover here.

To start, download the code for vr_travel from GitHub (bit.ly/2fXiqy2). This is the code base I'll build on throughout this article.

Virtual Reality vs. Mixed Reality

When building apps for any platform, it's important to understand what makes that platform unique and embrace it to provide value for that specific platform. For example, if there were no difference between a Web application and a mobile application, why would you ever choose one over the other? The key is to provide value based on the different capabilities of each platform. For VR, a key strength is that users experience only what they can see in the headset. The rest of the world fades away because the content delivered to the user excludes the real world. This lets you create amazing worlds that couldn't exist in reality, or strip away every distraction beyond your app's content. VR takes the user somewhere else. Use it to isolate the user within your experience. From a business application perspective, this could be a call center scenario where all activity done by the user is constrained to the center's operations platform.

MR, in contrast, is inclusive of the real world, combining virtual content with physical surroundings to create an overall experience. Successful MR uses virtual content to extend the real world. Use MR when you want users to be able to work within your world while still being able to experience theirs. For example, in the office, MR can be used to show content that would always be present, such as a desk calendar connected to Outlook or a modern Rolodex that represents contacts as holographic contact cards. These holograms can be present while users work on real monitors for things like writing software. Engineers can see holographic representations of the models they're working on while designing those models on their computers. Office workers could augment their workspace without cluttering it with physical things.

For Contoso Travel, I want users to be able to see where their travelers are at any time. A hologram on the desk is ideal for this as users could do other work on their monitors (such as booking travel), while a quick glance to the side would show where people are at any time. Travel booking systems are complex and not something you’d want to rebuild in VR, but augmenting existing systems with an MR map of travelers is a perfect solution for enabling updates at a glance without occupying a user’s physical workspace.

Preparing for Mixed Reality

HoloLens development typically involves a lot of the same tasks, such as detecting gaze, gestures and voice, no matter what type of app you’re building. Fortunately, Microsoft has built a code library that will help you jump into HoloLens development. The HoloToolkit (bit.ly/2bO8XrT) provides a great starting point. Download the HoloToolkit and follow the instructions on GitHub to convert it into a Unity package you can import into your HoloLens projects (bit.ly/2ftiOrY).

With HoloToolkit ready to go as a Unity package, open the vr_travel code base and import the HoloToolkit package using Assets | Import Package | Custom Package. Navigate to where you've saved the HoloToolkit package and select it for importing. Unity will display the Import Package dialog, which shows all the different elements of the package. For this article, I import everything but the Example projects. If you'd like to include those in your project, feel free; they provide sample code that could be helpful. Click Import when you're ready to pull the HoloToolkit package into your vr_travel project. Once the import is complete, you'll notice a new folder in your Project view called HoloToolkit, as well as a new HoloToolkit item on the menu bar. This menu gives you access to useful HoloLens-specific features of Unity, such as automatically applying the scene and project settings needed to run your app on HoloLens (see Figure 1).

Figure 1 HoloToolkit Menu

Even though you're updating the app to be a HoloLens app, you might want to keep the VR version of your world. To do so, save the main scene as a new scene using File | Save Scene as, and call the new scene main_mr. This way you can preserve the work you've done for the VR world while building a new MR world. Also, as you change the code of components such as the Traveler Template or Travel Manager, the VR app will pick up those updates along with the new MR app.

Now, configure your project to be a HoloLens project by going to HoloToolkit | Configure | Apply HoloLens Project Settings. This updates your Build and other project settings to get the app ready for the HoloLens, and the camera is automatically configured to work on the device. The only change you might want to make from here is to set the Skybox to a solid black background instead of a skybox texture. On the HoloLens, anything pure black isn't rendered as a virtual object, so a black background becomes transparent, letting the real world show through.

With the skybox removed and the project configured, you're left with a surface where the travelers will appear, floating in the void. Position the camera where you can view the world, but remember that in a HoloLens app the camera represents the placement of the device. Plan for the camera based on where you want the user's experience to start, and think about how the camera will move throughout the environment. In VR apps, the player is constrained to what the controller allows. For a HoloLens app, you must think through the ways the HoloLens can move throughout the world and plan accordingly to provide the best experience possible for users.

With the camera in place, you’re ready to test in the HoloLens. Go to File | Build Settings to add the main_mr scene as your primary scene. Click Add Open Scenes to add main_mr, then uncheck the main scene to remove it from the list of built scenes. This allows you to build only the scene that’s meant for the HoloLens. You could build from here, but instead, let’s use the HoloToolkit build window. Open HoloToolkit | Build Window (see Figure 2), which allows you to customize your HoloLens app’s build and deployment.

Figure 2 The HoloToolkit Build Window

Use the default settings and click Build Visual Studio SLN. This creates a solution you can open in Visual Studio and deploy to the HoloLens device. When you complete the deployment to the HoloLens Emulator or device, you'll notice you can see the platform and a giant Load Travelers button, but you can't actually do anything yet. Let's make some changes to allow the user to load the travelers. If you need help building your project, check out "Holograms 100" at the HoloLens Academy (bit.ly/2bxVOoe) or "Holograms 101" for deploying to a device (bit.ly/2bhqsiV).

Adding Gaze and Gesture

The virtual reality world created in the previous article had some interactivity, so let’s reproduce that. The first interactable added to that VR project was the button to load the travelers. Let’s make that work for the HoloLens user. To start, create a new Empty Game Object called Manager. (Note: I consolidate all my “managers” to a single object so I have a central place to add components like the Gaze Manager, the Gesture Manager and so forth. This centralization makes it easy to find the managers as scenes get complex, and it simplifies access between manager components.)

With Manager selected, click Add Component and search for Gaze Manager. This enables the camera to provide gaze information to the app so you can detect when the user’s view is colliding with an object. Gaze in MR works much like the gaze code you set up for VR. The camera projects a raycast and then detects whether that raycast collides with an interactable object. The Gaze Manager component is prebuilt in the HoloToolkit and can be added as a component right to the Manager object.
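To make the mechanics concrete, here's a minimal sketch of what a gaze raycast looks like in code. This isn't the Gaze Manager source (the HoloToolkit version handles stabilization and more); it just illustrates the core idea, assuming interactable objects have colliders:

using UnityEngine;

// A minimal gaze sketch: project a ray from the camera along its
// forward vector and report what it hits.
public class SimpleGaze : MonoBehaviour {
  public float maxGazeDistance = 15.0f;

  void Update()
  {
    RaycastHit hitInfo;
    if (Physics.Raycast(Camera.main.transform.position,
      Camera.main.transform.forward, out hitInfo, maxGazeDistance))
    {
      // hitInfo.collider.gameObject is the object the user is looking at.
      Debug.Log("Gazing at: " + hitInfo.collider.gameObject.name);
    }
  }
}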

A critical UI element for virtual and mixed reality development is visual feedback about what the user is currently looking at. To provide this, you need a cursor, and HoloToolkit has one ready to use. In the Project folder, go to HoloToolkit | Input | Prefabs | Cursor. Drag this prefab onto the Hierarchy. This adds a cursor to the scene that tracks where the user is looking, helping users keep their bearings as they look around the scene.

Before you can have the button react to a press, you need to know whether the user is looking at it. The button is a Unity Canvas Button that won't react to the raycast because it doesn't have a collider on it. Add a box collider to the btnLoad object by clicking on btnLoad in the Hierarchy and then Add Component. Select Physics | Box Collider and then click Edit Collider, which lets you resize the collider to fit the button. Set the X scale equal to the width of the button and the Y scale equal to its height. For this article I'm keeping the UI very simple, but there's a lot you can do to create a rich UI for the HoloLens. Check out Surya Buchwald's article, "Scaling UI for the HoloLens" (bit.ly/2gpfGue), for how to set up a scalable UI that the design team can control without needing to engage development.
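If you'd rather size the collider from code than adjust it by hand, a small helper like the following can do it. (FitColliderToButton is my own hypothetical script, not part of HoloToolkit; it assumes the button has a RectTransform.)

using UnityEngine;

// Hypothetical helper: size a BoxCollider to match a UI element's
// RectTransform so the gaze raycast has something to hit.
[RequireComponent(typeof(BoxCollider))]
public class FitColliderToButton : MonoBehaviour {
  void Start()
  {
    var rect = GetComponent<RectTransform>();
    var box = GetComponent<BoxCollider>();
    // Match the button's width and height; keep a small depth
    // so the collider has volume for the raycast to register.
    box.size = new Vector3(rect.rect.width, rect.rect.height, 1f);
  }
}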

The button is now ready to react to a gaze. This requires adding the OnGazeEnterEvent and the OnGazeLeaveEvent, which tell the app what to do when the user is looking at the button and when the user looks away. To do this, click on btnLoad again in the Hierarchy and then Add Component. Search for OnGazeEnterEvent and add it. Then do the same for OnGazeLeaveEvent. Once these are added to btnLoad, you can add events. In the Inspector, click the plus sign to add an event to the OnGazeEnterEvent. Drag the EventSystem object from the Hierarchy to the Object field. This loads the functions for EventSystem. In the functions list select EventSystem | SetSelectedGameObject, which controls which game object has focus within the world. Drag btnLoad to the argument object. This wiring says that when the user's gaze enters btnLoad, the event system sets btnLoad as the selected game object, which changes the button's visual state to the highlighted color.

In the OnGazeLeave event, add an event just like the OnGazeEnter event with all the same parameters, but instead of using btnLoad as the argument for SetSelectedGameObject, create a new GameObject in the Hierarchy called UnSelected and set that to be the argument. This will make UnSelected the selected object in the event system when the user's gaze leaves the button. UnSelected is a placeholder object that doesn't do anything but receive the focus when the user looks away from an object. By using this game object you can set which game objects are selected or unselected using the Unity Editor instead of writing code. Figure 3 shows what the final configuration for btnLoad would look like with the gaze events.

Figure 3 Configuring btnLoad
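For readers who prefer code to editor wiring, here's a sketch of what those two events accomplish. GazeHighlight is a hypothetical script (the article does all of this in the editor), but EventSystem.SetSelectedGameObject is the same call the wiring invokes:

using UnityEngine;
using UnityEngine.EventSystems;

// Hypothetical equivalent of the editor wiring: select the button on
// gaze enter; hand selection to the UnSelected placeholder on gaze
// leave so the button's highlight clears.
public class GazeHighlight : MonoBehaviour {
  public GameObject unSelected;  // the UnSelected placeholder object

  public void GazeEntered()
  {
    EventSystem.current.SetSelectedGameObject(gameObject);
  }

  public void GazeLeft()
  {
    EventSystem.current.SetSelectedGameObject(unSelected);
  }
}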

The app can now track where the user is looking and provide feedback to the user. Next, let's connect the ability to select the button with an air tap. Click on Main Camera in the Hierarchy, click Add Component in the Inspector, and search for Gesture Manager. Add Gesture Manager to Main Camera. Gesture Manager allows the app to know when a user's hand is present, recognize which gesture is being used and react to the gesture accordingly.

To make a game object react to a gesture, you add the OnSelectEvent component to the game object. Click on btnLoad and then Add Component, search for On Select Event and add the component to the button. Gesture Manager detects when a tap or manipulation gesture occurs and uses GameObject.SendMessage("OnSelect") to trigger the OnSelect event component. The problem with this is that a UnityEngine.UI.Button object already has an OnSelect event (used to activate the button's highlight), so when GestureManager sends the OnSelect message to the button, the button's existing OnSelect event will be triggered. To avoid the conflict of having two OnSelect events, rename the HoloToolkit OnSelectEvent to OnTapEvent. Inside the OnSelectEvent.cs file (under HoloToolkit | Input | Scripts), update the OnSelect method to OnTap and rename the class to OnTapEvent.
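After the rename, the component looks roughly like this (a sketch that assumes the original OnSelectEvent simply exposed a UnityEvent and invoked it when the message arrived; check the actual file for any extra logic). Note that if Gesture Manager sends the message by name, its SendMessage("OnSelect") call likely needs to become SendMessage("OnTap") to match:

using UnityEngine;
using UnityEngine.Events;

// Sketch of the renamed component. The method name must match the
// string the Gesture Manager passes to SendMessage.
public class OnTapEvent : MonoBehaviour {
  public UnityEvent Event;

  void OnTap()
  {
    Event.Invoke();
  }
}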

Another challenge buttons present is that the OnClick event isn’t an accessible event through the editor. To trigger a click event, you must call Invoke on the Click event. To do this, create a new script in the Project | Scripts folder called ButtonInteractable, as follows:

using UnityEngine;

public class ButtonInteractable : MonoBehaviour {
  // Invoke the button's OnClick event, which isn't directly
  // accessible from the editor's event fields.
  public void Click()
  {
    var btn = GetComponent<UnityEngine.UI.Button>();
    btn.onClick.Invoke();
  }
}

Click on the btnLoad object in the Hierarchy and add the ButtonInteractable component. On the OnTapEvent component, drag btnLoad to the object and then, under function, select ButtonInteractable | Click. This configures the OnTap event to trigger the OnClick event. In this project the button doesn't do a lot with the click event, but in a more complex project a button's click event could trigger many actions. With ButtonInteractable you can maintain the click's behavior without having to move that behavior into the OnTap event.

With the Load Travelers button now connected, build the HoloLens app and try it out. When you gaze at the Load Travelers button it turns green. Looking away from the button returns it to its normal state. Using an air tap will load the travelers and deactivate the button so the user can’t load the travelers again. Now let’s update the travelers to work with gaze and gesture.

Updating the Travelers

The traveler template object is the prefab used to spawn new travelers in the app. On the template are two script components—Traveler Interaction and VR Interactive Item. The VR Interactive Item receives events to trigger states such as Over (when the user is looking at the interactive item), Out (when the user looks away) and Click (when the user presses the fire button on the VR device). These map directly to Gaze Enter, Gaze Leave and On Tap. Some quick component additions and the travelers will be ready to be used.

On the traveler template prefab, add the following components: OnGazeEnterEvent, OnGazeLeaveEvent and OnTapEvent. As shown in Figure 4, for the OnGazeEnterEvent, add the TravelerTemplate object and select the VRInteractiveItem | Over function. Do the same for OnGazeLeaveEvent, except select VRInteractiveItem | Out as the function. This enables the Over and Out events for the VR Interactive Item. Finally, on the On Tap Event, drag the TravelerTemplate prefab onto the object and select VRInteractiveItem | Click as the function. This quick configuration brings your VRInteractiveItem into the HoloLens gaze and gesture capabilities.

Figure 4 Adding Components and Functions to the Travelers Template
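As a reminder of why Over, Out and Click appear in those function lists: VRInteractiveItem (from the Unity VR Samples used in the September article) exposes them as public methods that raise the corresponding events. Its relevant shape is roughly the following (a from-memory sketch, not the exact source):

using System;
using UnityEngine;

// Approximate shape of VRInteractiveItem: public methods raise
// events that the traveler code subscribes to. Names are from
// memory and may differ slightly from the actual source.
public class VRInteractiveItem : MonoBehaviour {
  public event Action OnOver;   // gaze enters the item
  public event Action OnOut;    // gaze leaves the item
  public event Action OnClick;  // the item is selected

  public void Over()  { if (OnOver != null)  OnOver();  }
  public void Out()   { if (OnOut != null)   OnOut();   }
  public void Click() { if (OnClick != null) OnClick(); }
}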

At this point, the Contoso Travel HoloLens app has functionality equivalent to the Virtual Reality app. Now you can leverage the features of HoloLens that make it different from a VR device. To start, I’ll implement voice so I can load the travelers without clicking a button.

Voice as Another Input Mechanism

In traditional VR apps, input is limited to gaze, a directional pad and a fire button (on a device like the Samsung Gear VR). HoloLens offers multiple other input types, such as voice, which can create a very natural interface for users. Voice is an excellent tool for giving users shortcuts to actions. Instead of triggering a menu through an air tap, users can just say what they want to do. You can make voice commands active only when a specific game object is selected, or create a universal command the system is always listening for.

To start using voice, I'm going to listen for the user to say "Load Travelers," which avoids the need to click the Load Travelers button. A great place to start understanding voice interactions is Adam Tuliper's HoloLens article (mentioned earlier), in which he provides an introduction to implementing voice commands. For this app, I'll apply the voice command by adding a new component to the Manager object. In the Hierarchy, click on the Manager object, click Add Component and search for Keyword Manager. Once the Keyword Manager is added, expand it and set Recognizer Start to Autostart so the recognizer starts with the app. Then expand the Keywords and Responses section and set the size to 1 to add a keyword. Enter "Load Travelers" as the keyword, then connect the function to btnLoad and the ButtonInteractable.Click event. This emulates the button click when the user says "Load Travelers."
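Under the covers, the Keyword Manager builds on Unity's KeywordRecognizer from the UnityEngine.Windows.Speech namespace. If you're curious what an equivalent in code looks like, here's a minimal sketch (LoadTravelersVoice is a hypothetical script; the editor configuration above does the same job):

using UnityEngine;
using UnityEngine.Windows.Speech;

// Hypothetical code equivalent of the Keyword Manager setup:
// recognize "Load Travelers" and invoke the button's click.
public class LoadTravelersVoice : MonoBehaviour {
  public ButtonInteractable loadButton;  // drag btnLoad here

  private KeywordRecognizer recognizer;

  void Start()
  {
    recognizer = new KeywordRecognizer(new[] { "Load Travelers" });
    recognizer.OnPhraseRecognized += args =>
    {
      if (args.text == "Load Travelers")
        loadButton.Click();
    };
    recognizer.Start();  // same effect as Recognizer Start set to Autostart
  }

  void OnDestroy()
  {
    if (recognizer != null)
      recognizer.Dispose();
  }
}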

You can connect keywords to the travelers, as well. Start by clicking on the TravelerTemplate prefab and deactivate the On Tap event. Add a function to the Gaze Enter event to activate the On Tap event, and a function to the Gaze Leave event to deactivate it. This setup allows only the traveler currently under the user's gaze to display its details. Next, set the Keyword Manager's Keywords and Responses size to two and make the new entry "Who are you." Then drag the TravelerTemplate to the object and select VRInteractiveItem | Click as the function. This will open the traveler information dialog.

These minor configuration changes in Unity allow you to enable voice in your mixed reality app. As you think about an office setting where people can be working with keyboards or mice, voice becomes a powerful input device for users with their hands full.

Where to Go from Here

HoloLens and MR offer a lot of potential for your apps. Here, I extended my VR app with voice, but I could do even more with spatial sound and spatial mapping. Imagine users binding their travel app to their desk, or having the travelers talk to a user when travel plans change. Consider how input systems like gesture, gaze and voice can combine with spatial sound and mapping to provide a rich, fun interface.

As you can see, VR apps can migrate to MR apps without much new code, thanks to Unity's configuration-driven design. This saves developers time and money when converting their apps from a mobile VR platform to MR with HoloLens. Take what you've started here and explore the HoloLens Academy further to apply even more features of the HoloLens to your next MR app.

Updating the JSON Object Library

Depending on the version of the JSON Object library you’re using, you might need to update the JSONTemplate.cs file. In your project folder go to JSON/JSONTemplate.cs and update line 19 to the following:

FieldInfo[] fieldinfo = obj.GetType().GetFields() 
  as System.Reflection.FieldInfo[];

This update will allow you to compile the project to run on the HoloLens.


Tim Kulp is the principal engineer at bwell in Baltimore, Md. He’s a Web, mobile and Universal Windows Platform app developer, as well as author, painter, dad and “wannabe Mad Scientist Maker.” Find him on Twitter: @seccode or via LinkedIn: linkedin.com/in/timkulp.

Thanks to the following Microsoft technical expert for reviewing this article: Adam Tuliper

