
Note

Please see Azure Cognitive Services for Speech documentation for the latest supported speech solutions.

Create and Access Semantic Content (Microsoft.Speech)

While performing recognition, a speech recognition engine returns recognition results to the speech application that include semantic information about the recognized speech input, as well as the text of recognized words and phrases. The semantic information contained in a recognition result is often more meaningful to an application than the recognized text. You can author semantic content and the code that retrieves semantics from recognition results to provide actionable information to your application.

Create Semantic Content

Although the speech recognition engine generates some semantic content by default, you can define the semantic structure for recognized speech by authoring the semantic keys and semantic values that the speech recognition engine returns in the recognition result. The mechanism for authoring semantic content depends on the process you use to author a grammar; see the following topics and the sketch after them.

Create Dynamic Grammars Programmatically

Create Static Grammars as Files
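
For the programmatic case, the following is a minimal sketch, assuming the Microsoft.Speech.Recognition namespace, of how semantic keys and values can be attached to a GrammarBuilder grammar. The phrase, the "destination" key, and the airport codes are hypothetical examples, not part of the API.

private Grammar CreateFlightGrammar()
{
  // Associate a semantic value (an airport code) with each recognizable city name.
  Choices cities = new Choices(
    new GrammarBuilder(new SemanticResultValue("Seattle", "SEA")),
    new GrammarBuilder(new SemanticResultValue("Boston", "BOS")));

  // Wrap the alternatives in a semantic key so that the recognition result
  // exposes the selected code under Semantics["destination"].
  GrammarBuilder flight = new GrammarBuilder("Fly to");
  flight.Append(new SemanticResultKey("destination", cities));

  return new Grammar(flight);
}

With a grammar authored this way, recognizing a phrase such as "Fly to Boston" produces a result whose semantics contain the value "BOS" under the "destination" key.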

Access the Recognition Result

An application registers to be notified of events that the speech recognition engine raises while it processes speech input. A SpeechRecognitionEngine instance raises the SpeechHypothesized, SpeechRecognitionRejected or SpeechRecognized, and RecognizeCompleted events to provide information about a recognition operation. The event arguments for these events are provided, respectively, by the SpeechHypothesizedEventArgs, SpeechRecognitionRejectedEventArgs, SpeechRecognizedEventArgs, and RecognizeCompletedEventArgs classes. The first three of these classes inherit from the RecognitionEventArgs class and therefore inherit its Result property; RecognizeCompletedEventArgs exposes a Result property of its own.

You can access the Result property through the event handler's second parameter, which is usually named e. The following example shows an empty handler for the SpeechRecognized event. Event handlers for the SpeechHypothesized, SpeechRecognitionRejected, and RecognizeCompleted events are similar.

void sr_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
  // Handle SpeechRecognized event.
}
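
To receive these events, subscribe the handler on a SpeechRecognitionEngine instance. The following is a minimal sketch; CreateFlightGrammar is the hypothetical helper from the earlier example.

SpeechRecognitionEngine sr = new SpeechRecognitionEngine();
sr.SetInputToDefaultAudioDevice();
sr.LoadGrammar(CreateFlightGrammar());

// Register for the SpeechRecognized event; the other events are registered the same way.
sr.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(sr_SpeechRecognized);

sr.RecognizeAsync(RecognizeMode.Multiple);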

The Result property is a reference to a RecognitionResult instance, which has properties that contain information about alternate recognition results, the audio associated with the recognized phrase, the degree of certainty for the recognized phrase, and a number of other items of interest about the recognized phrase.
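
For example, within a SpeechRecognized handler, a few of these properties can be read as in the following sketch (the output format is illustrative):

// Inspect the top recognition result and its alternates.
Console.WriteLine("Top result: {0} (confidence {1})", e.Result.Text, e.Result.Confidence);

foreach (RecognizedPhrase alternate in e.Result.Alternates)
{
  Console.WriteLine("Alternate: {0} (confidence {1})", alternate.Text, alternate.Confidence);
}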

Access the Semantics in the Recognition Result

An important property of RecognitionResult is Semantics, which contains a reference to a SemanticValue instance. A SemanticValue can be thought of as a collection of key/value pairs that represents the semantic organization of a recognized phrase. You can access the keys and the values for the semantics of a recognition result using the IDictionary<String, SemanticValue>.Keys and IDictionary<String, SemanticValue>.Values properties, respectively.
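
Continuing the earlier example, a SpeechRecognized handler can read the value stored under the hypothetical "destination" key, or enumerate every key/value pair in the semantic result, as in the following sketch (the System.Collections.Generic namespace is assumed for KeyValuePair):

// Read the semantic value authored under the "destination" key, if present.
if (e.Result.Semantics.ContainsKey("destination"))
{
  Console.WriteLine("Destination code: {0}", e.Result.Semantics["destination"].Value);
}

// Enumerate all key/value pairs in the semantic result.
foreach (KeyValuePair<string, SemanticValue> pair in e.Result.Semantics)
{
  Console.WriteLine("{0} = {1}", pair.Key, pair.Value.Value);
}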

The following illustration shows the properties on RecognitionResult and SemanticValue classes, which you can access using the Result property.

[Illustration: properties of the RecognitionResult and SemanticValue classes]

See Also

Reference

RecognizedPhrase

Concepts

Add Semantics to a GrammarBuilder Grammar (Microsoft.Speech)

Use a SemanticResultKey to Extract a SemanticResultValue (Microsoft.Speech)

Semantic Interpretation Markup (Microsoft.Speech)

Semantic Markup Language Reference (Microsoft.Speech)