Grammars and other constraints (XAML)

Applies to Windows Phone only

A constraint (a grammar is a type of constraint) defines the words and phrases that an app will recognize in speech input. Constraints are at the core of speech recognition and are perhaps the most important factor under your control that influences the accuracy of speech recognition.

You can use three different types of constraint to enable your app to perform speech recognition:

  1. Predefined grammars. Use the predefined dictation and web search grammars provided by Windows Phone.
  2. Programmatic list constraints. Create a lightweight, custom constraint programmatically in the form of a simple list.
  3. SRGS grammars. Create a custom grammar for your app in the XML format defined by the Speech Recognition Grammar Specification (SRGS) Version 1.0.

Which constraint type you use may depend on the complexity of the recognition experience you want to create and your level of expertise in creating grammars. Any one approach may be the best choice for a specific recognition task, and you may find uses for all three types of constraint in your app.

Types of constraint

Predefined grammars (dictation and web search grammars)

The predefined dictation and web search grammars provide speech recognition for your app without requiring you to author a grammar. When using these grammars, speech recognition is performed by a remote service and the results are returned to the phone.

The free-text dictation grammar can recognize most words and phrases that a user might say in a given language, and is optimized to recognize short phrases. The predefined dictation grammar is used by default if you don't specify a constraint for a recognizer. Free-text dictation is useful when you don't want to limit the kinds of things a user might say. Typical uses include creating notes or dictating the content for a message.
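As a sketch, assuming a Windows Phone 8.1 C# app using the Windows.Media.SpeechRecognition API, recognition with the predefined dictation grammar might look like this (the method name is illustrative):

```csharp
using System;
using Windows.Media.SpeechRecognition;

// Sketch for a Windows Phone 8.1 XAML app. No constraint is added, so the
// recognizer falls back to the predefined dictation grammar.
public async void RecognizeDictation()
{
    var recognizer = new SpeechRecognizer();

    // To use the predefined web search grammar instead, add a topic constraint:
    // recognizer.Constraints.Add(new SpeechRecognitionTopicConstraint(
    //     SpeechRecognitionScenario.WebSearch, "webSearch"));

    // Constraints must be compiled before recognition starts.
    await recognizer.CompileConstraintsAsync();

    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    // result.Text holds the recognized phrase.
}
```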

The web search grammar is like the dictation grammar in that it contains a large number of words and phrases that a user might say in a given language, but it is optimized to recognize terms that people typically use when searching the web.

Because the predefined dictation and web search grammars are large, and because they are online (not on the phone), performance may not be as fast as with custom grammars that are located on the phone.

Programmatic list constraints

A programmatic list constraint provides a lightweight approach to creating a simple grammar as a list of words or phrases. A list constraint consists of an array of strings that represents the speech input your app will accept for a recognition operation. You can create a list constraint in your app by creating a speech recognition list constraint object, passing it an array of strings, and then adding that object to the recognizer's constraints collection. Recognition succeeds when the speech recognizer recognizes any one of the strings in the array.
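As a hedged sketch, again assuming a Windows Phone 8.1 app using the Windows.Media.SpeechRecognition API, a list constraint might be created and used like this (the phrase list and tag "colors" are illustrative):

```csharp
using System;
using Windows.Media.SpeechRecognition;

// Sketch: a list constraint built from an array of strings. Recognition
// succeeds when the user says any one of these phrases.
public async void RecognizeColor()
{
    var recognizer = new SpeechRecognizer();

    // The second argument is an arbitrary tag that identifies this constraint.
    var colors = new string[] { "red", "green", "blue" };
    recognizer.Constraints.Add(
        new SpeechRecognitionListConstraint(colors, "colors"));

    // Constraints must be compiled before recognition starts.
    await recognizer.CompileConstraintsAsync();

    SpeechRecognitionResult result = await recognizer.RecognizeAsync();
    // result.Text is one of the strings in the list.
}
```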

SRGS grammars

Unlike a programmatic list constraint, you author an SRGS grammar as a static document using the XML format defined by the Speech Recognition Grammar Specification (SRGS) Version 1.0. The XML schema for SRGS provides a powerful set of tools that allows you to create grammars for speech recognition scenarios ranging from basic to complex.
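For example, a minimal SRGS grammar that recognizes a simple answer might look like this (the rule name and word choices are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<grammar version="1.0" xml:lang="en-US" root="answer"
         xmlns="http://www.w3.org/2001/06/grammar">
  <!-- The root rule is the entry point for recognition. -->
  <rule id="answer" scope="public">
    <one-of>
      <item>yes</item>
      <item>no</item>
      <item>maybe</item>
    </one-of>
  </rule>
</grammar>
```

In a Windows Phone 8.1 app, a grammar file like this (saved, for example, as Answer.grxml in the app package) can be wrapped in a SpeechRecognitionGrammarFileConstraint object and added to the recognizer's constraints collection.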

For more info, see SRGS grammars.

Working with constraints

To get started with constraints, see Adding and compiling constraints (XAML).
