
Quickstart: Voice commands with Cortana (XAML)

Applies to Windows Phone only

Use voice commands with Cortana to launch an app and specify an action or command to execute. For example, a user could tap the Start button and say "Contoso Widgets, show best sellers" to both launch an app called "Contoso Widgets" and navigate to a "best sellers" page within the app.


Cortana gives users access to some built-in features of the phone through certain voice commands. These are the system voice commands, and they vary depending on the speech language configured for the speech feature on the phone. Be aware of these system voice commands when you choose a name for your app, or a value for the optional CommandPrefix element, because of the speech certification requirement detailed in App policies for Windows Phone.

For a list of the system voice commands sorted by speech language, see System voice commands for Windows Phone 8.

These are the basic steps to add voice-command functionality and integrate Cortana with your app using speech or keyboard input:

  1. Create a Voice Command Definition (VCD) file. This is an XML document that defines all the spoken commands that the user can say to initiate actions or invoke commands when activating your app. See Voice command elements and attributes.
  2. Register the command sets in the VCD file with the phone's speech feature.
  3. Handle activation by voice command, navigation within the app, and execution of the command.

Objective: To learn how to enable voice commands.


If you're new to developing Windows Store apps using C++, C#, or Visual Basic:  

To complete this tutorial, have a look through these topics to get familiar with the technologies discussed here.

User experience guidelines:  

See Speech design guidelines for helpful tips on designing a useful and engaging speech-enabled app.

1. Set up the audio feed

Ensure that your device has a microphone or the equivalent.

Set the Microphone device capability (DeviceCapability) in the App package manifest (package.appxmanifest file) to get access to the microphone’s audio feed. This allows the app to record audio from connected microphones.

See App capability declarations.
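In the package.appxmanifest XML, the declaration is a single DeviceCapability entry inside the Capabilities element. This is a minimal sketch of just that fragment; the surrounding Package element is omitted:

```xml
<Capabilities>
  <!-- Grants the app access to the microphone's audio feed. -->
  <DeviceCapability Name="microphone" />
</Capabilities>
```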

2. Create a VCD file

  1. In Visual Studio, right-click the project name, select Add->New Item, and then select Text File.
  2. Type a name for the VCD file and be sure to append the ".xml" file extension. For example, "ContosoWidgets.xml". Select Add.
  3. In Solution Explorer, select the VCD file.
  4. In the Properties window, set Build action to Content, and then set Copy to output directory to Copy if newer.

3. Edit the VCD file

Each voice command declared in a VCD file must include some basic information:

  • An example phrase that demonstrates how a user can invoke the command.

    To get a list of apps that support voice commands with examples of commands that can be used to launch each app and initiate an action, press and hold the Search button to launch Cortana and, when prompted:

    • Say "What can I say?"
    • Say "Help"
    • Tap See more

    The sample commands are pulled from the contents of the Example element (child of the CommandSet element).

  • The words or phrases that your app recognizes to initiate a command.


    A command is defined by a Command element that contains at least one ListenFor element. Each ListenFor element contains the word or words that initiate the action specified by the Command element.

    ListenFor elements cannot be programmatically modified. However, PhraseList elements that are associated with ListenFor elements can be programmatically modified. For example, let's say you have a movie viewer app for a relatively small set of movies and you want to allow users to launch a movie simply by saying the app name followed by "Play [<MovieName>]". You don't need to create a separate ListenFor element for each possible movie. Instead, you could dynamically populate PhraseList at run time with the movie options. In the ListenFor element itself, you could specify something like: <ListenFor> Play {movies} </ListenFor>, where "movies" is the Label for the PhraseList.

    See How to dynamically modify VCD phrase lists.

  • The text that your app displays and speaks when the command is recognized.
  • The page or screen to navigate to within the app when the command is recognized.
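The movie-viewer scenario described above can be sketched in code. This is a hedged example: the command-set name ("examplevcd"), the phrase-list label ("movies"), and the movie titles are illustrative, and the call must run in an async method after the command sets have been installed:

```csharp
// Look up the installed command set and replace the "movies" phrase list at run time.
Windows.Media.SpeechRecognition.VoiceCommandSet commandSet;
if (Windows.Media.SpeechRecognition.VoiceCommandManager.InstalledCommandSets.TryGetValue("examplevcd", out commandSet))
{
    // Hypothetical movie titles; in a real app these would come from your data source.
    await commandSet.SetPhraseListAsync("movies", new string[] { "Casablanca", "Metropolis", "Modern Times" });
}
```

Updating the phrase list this way changes only what the {movies} placeholder matches; the rest of the command set stays as declared in the VCD file.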

You can specify multiple language versions for the commands used to activate your app and execute a command. You can create multiple CommandSet elements, each with a different xml:lang attribute to allow your app to be used in different markets. For example, an app for the United States might have a CommandSet for English and a CommandSet for Spanish.


To activate an app and initiate an action using a voice command, the app must register a VCD file that contains a CommandSet with a language that matches the speech language that the user selected on the phone. This language is set by the user on the phone's Settings > System > Speech > Speech Language screen.

See the Voice command elements and attributes reference for more detail.

Here's an example of a VCD file that defines four command options for a demo app that implements the various types of speech recognition.

This particular example has a command prefix ("Recognize"), an optional word ("Show"), a topic list for identifying the different types of speech recognition implemented by the app, and a PhraseTopic, which allows the user to dictate a free-form message in-line with the voice command.

<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.1">

  <CommandSet xml:lang="en-us" Name="examplevcd">
    <CommandPrefix>Recognize</CommandPrefix>
    <Example>Show default speech recognition</Example>

    <Command Name="showRecognition">
      <Example>Show default recognition</Example>
      <ListenFor>[Show] {options} {person}</ListenFor>
      <Feedback>Showing {options} {person}</Feedback>
      <Navigate />
    </Command>

    <PhraseList Label="options">
      <Item>default recognition</Item>
      <Item>web search</Item>
      <Item>list constraint</Item>
      <Item>file constraint</Item>
    </PhraseList>

    <PhraseTopic Label="person" Scenario="Search">
      <Subject>Person Names</Subject>
    </PhraseTopic>
  </CommandSet>

  <!-- Other CommandSets for other languages -->

</VoiceCommands>



Setting a CommandPrefix element is optional, but can be useful in two primary scenarios:

  • The name of your app is difficult, or impossible, to pronounce. For example, if your app name is "Contoso Widg3ts", you could set CommandPrefix to "Contoso Widgets".
  • Voice commands are localized. If you choose to have a command set for each supported language, you can set the CommandPrefix element differently for each language. For example, if your app is named "Contoso Table", you could set CommandPrefix to "Contoso Mesa" for a Spanish-language command set.

It's not possible to subset-match text in the CommandPrefix element. For example, if CommandPrefix is set to "Contoso Weather", "Contoso Weather [<phrase>]" would be recognized, but "Contoso [<phrase>]" or "Weather [<phrase>]" would not be recognized.
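As a sketch, a prefixed command set might look like this in the VCD file (the Name value and prefix are illustrative):

```xml
<CommandSet xml:lang="en-us" Name="widgetsVcd">
  <CommandPrefix>Contoso Widgets</CommandPrefix>
  <!-- Users say "Contoso Widgets" followed by a command defined below. -->
  <!-- Command elements ... -->
</CommandSet>
```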

4. Install the VCD commands

Your app must run once to install the command sets in the VCD file.

When your app is activated, call InstallCommandSetsFromStorageFileAsync in the OnNavigatedTo handler to register the commands that the system should listen for.

Note  If a phone backup occurs and your app reinstalls automatically, voice command data is not preserved. To ensure the voice command data for your app stays intact, consider initializing your VCD file each time your app launches or activates, or store a setting that indicates if the VCD is currently installed and check the setting each time your app launches or activates.

Here's an example that shows how to install the commands specified by a VCD file (ContosoWidgets.xml).

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    if (e.NavigationMode == NavigationMode.New)
    {
        var storageFile = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///ContosoWidgets.xml"));
        await Windows.Media.SpeechRecognition.VoiceCommandManager.InstallCommandSetsFromStorageFileAsync(storageFile);
    }

    // TODO: If your application contains multiple pages, ensure that you are
    // handling the hardware Back button by registering for the
    // Windows.Phone.UI.Input.HardwareButtons.BackPressed event.
    // If you are using the NavigationHelper provided by some templates,
    // this event is handled for you.
}

5. Handle activation and execute voice commands

Once your app has been launched and the command sets installed, specify how your app responds to subsequent voice-command activations. For example, your app might navigate to a specific page of content, display a map or other navigation utility, or speak a confirmation or status.

You need to:

  1. Confirm that your app was activated by a voice command.

    Override the Application.OnActivated event and check whether IActivatedEventArgs.Kind is VoiceCommand.

  2. Determine the name of the command and what was spoken.

    Get a reference to a VoiceCommandActivatedEventArgs object from the IActivatedEventArgs and query the Result property for a SpeechRecognitionResult object.

When you've determined which voice command was used, and the text spoken, you can take the appropriate action in your app.

Important:  The VCD file allows you to map different voice commands to different screens; you don't need to use a Uniform Resource Identifier (URI) mapper for voice commands. But if you use a URI mapper for other features, such as Search extensibility, be sure to pass voice-command activations through and preserve the full URI scheme so that the VCD information is not lost.

For this example, we refer back to the VCD in the Edit the VCD file step.

Once we get the speech-recognition result for the voice command, we get the command name from the first value in the RulePath array. If the VCD file defined more than one possible voice command, we would iterate through all values in the array.

We then compare the text spoken to one of our supported commands and navigate to the matching content. A demo app might instead simply display the recognition results to the user, for example in a MessageDialog.

We also show how to access the value of the Target attribute of the Navigate element. For this example, we do not specify a target. See Voice command elements and attributes for more detail.

Note  You can find out whether the voice command that launched your app was actually spoken, or whether it was typed in as text, from the SpeechRecognitionSemanticInterpretation.Properties dictionary using the commandMode key. The value of that key will be either "voice" or "text".
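For example, inside the voice-command branch of the OnActivated handler, a sketch of that check might look like this (the variable name assumes the speechRecognitionResult obtained in this step):

```csharp
// "voice" if the command was spoken; "text" if it was typed into Cortana's text box.
string commandMode = speechRecognitionResult.SemanticInterpretation.Properties["commandMode"][0];
if (commandMode == "text")
{
    // The user typed the command; consider responding silently instead of with speech.
}
```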

protected override void OnActivated(IActivatedEventArgs args)
{
    // Was the app activated by a voice command?
    if (args.Kind == Windows.ApplicationModel.Activation.ActivationKind.VoiceCommand)
    {
        var commandArgs = args as Windows.ApplicationModel.Activation.VoiceCommandActivatedEventArgs;
        Windows.Media.SpeechRecognition.SpeechRecognitionResult speechRecognitionResult = commandArgs.Result;

        // If so, get the name of the voice command, the actual text spoken, and the value of Command/Navigate@Target.
        string voiceCommandName = speechRecognitionResult.RulePath[0];
        string textSpoken = speechRecognitionResult.Text;
        string navigationTarget = speechRecognitionResult.SemanticInterpretation.Properties["NavigationTarget"][0];

        switch (voiceCommandName)
        {
            case "showWidgets":
                if (textSpoken.Contains("today's specials"))
                {
                    // Code to show today's specials.
                    // To do this, refactor OnLaunched into a method that you can call from both OnLaunched and OnActivated.
                    // The navigation logic will resemble this: rootFrame.Navigate(typeof(<today's specials page class>), e.Arguments);
                }
                else if (textSpoken.Contains("best sellers"))
                {
                    // Code to show the best sellers.
                }
                break;

            // Cases for other voice commands.

            default:
                // There is no match for the voice command name.
                break;
        }
    }
}

Summary and next steps

Here, you learned how to implement basic voice commands using VCD files and speech recognition provided with Windows Phone.

Next, you might want to know How to dynamically modify VCD phrase lists.

Related topics

Responding to speech interactions
How to define custom recognition constraints
Speech design guidelines for Windows Phone



© 2014 Microsoft