
Use Speech Recognition Events

During run-time operations, a speech recognizer raises a number of events that applications can register to receive. For each event the application intends to handle, it must provide a handler for that event. For more information, see the example at the end of this topic.

The SpeechRecognizer class provides the following events.

  • AudioLevelUpdated: Raised when a change in the audio level is detected.

  • AudioSignalProblemOccurred: Raised when a problem is detected in the audio signal.

  • AudioStateChanged: Raised when a change in the current state of the audio input in a recognition engine is detected.

  • EmulateRecognizeCompleted: Raised when an asynchronous emulation of speech recognition is finished.

  • LoadGrammarCompleted: Raised when the asynchronous loading of a Grammar object into a speech recognizer is finished.

  • RecognizerUpdateReached: Raised when the Windows Desktop Speech Technology recognition engine pauses to allow an atomic update that is requested by one of the RequestRecognizerUpdate() overloaded methods.

  • SpeechDetected: Raised when speech is detected.

  • SpeechHypothesized: Raised when the recognition engine detects speech and tentatively recognizes part of the audio input.

  • SpeechRecognitionRejected: Raised when the recognition engine detects speech, but can return only candidate phrases that have low confidence levels.

  • SpeechRecognized: Raised when the recognition engine detects speech and has found one or more phrases with sufficiently high confidence levels.

  • StateChanged: Raised when the running state of the Windows Desktop Speech Technology recognition engine changes.

The events on the SpeechRecognitionEngine class and the SpeechRecognizer class are nearly identical. There are two differences:

  • The RecognizeCompleted event on the SpeechRecognitionEngine class is not present in the SpeechRecognizer class. This event is raised when an asynchronous recognition operation is finished.

  • The StateChanged event on the SpeechRecognizer class is not present in the SpeechRecognitionEngine class.

AudioLevelUpdated

A recognizer raises this event when the audio level changes. This event is associated with the AudioLevelUpdatedEventArgs class, which contains the AudioLevel property.
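A handler for this event might simply report the new level. The following is a sketch; it assumes the handler has been attached to a SpeechRecognitionEngine instance, as in the example at the end of this topic.

  static void recognizer_AudioLevelUpdated(object sender, AudioLevelUpdatedEventArgs e)
  {
    // e.AudioLevel is reported on a scale from 0 through 100.
    Console.WriteLine("Audio level: " + e.AudioLevel);
  }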

AudioSignalProblemOccurred

A recognizer raises this event when there is a problem with the audio signal. The kinds of problems that are reported are values in the AudioSignalProblem enumeration. This event is associated with the AudioSignalProblemOccurredEventArgs class, which contains properties that provide information about the audio signal problem: the audio level (AudioLevel), the position in the input device’s audio stream at which the problem occurred (AudioPosition), the particular audio signal problem (AudioSignalProblem), and the position in the recognizer’s audio stream at which the problem occurred (RecognizerAudioPosition).
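A handler for this event might report each of these properties. The following is a sketch; it assumes the handler has been attached to a SpeechRecognitionEngine instance, as in the example at the end of this topic.

  static void recognizer_AudioSignalProblemOccurred(object sender, AudioSignalProblemOccurredEventArgs e)
  {
    // e.AudioSignalProblem is a value of the AudioSignalProblem enumeration,
    // such as TooNoisy or TooQuiet.
    Console.WriteLine("Audio signal problem: " + e.AudioSignalProblem);
    Console.WriteLine("  Audio level: " + e.AudioLevel);
    Console.WriteLine("  Audio position: " + e.AudioPosition);
    Console.WriteLine("  Recognizer audio position: " + e.RecognizerAudioPosition);
  }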

AudioStateChanged

A recognizer raises this event when the audio state changes from one value in the AudioState enumeration to another (such as from Speech to Silence). This event is associated with the AudioStateChangedEventArgs class, which contains the new audio state of the recognizer (AudioState).
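A handler for this event might report the new state. The following is a sketch; it assumes the handler has been attached to a SpeechRecognitionEngine instance, as in the example at the end of this topic.

  static void recognizer_AudioStateChanged(object sender, AudioStateChangedEventArgs e)
  {
    // e.AudioState is one of the AudioState values: Stopped, Silence, or Speech.
    Console.WriteLine("Audio state changed to: " + e.AudioState);
  }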

EmulateRecognizeCompleted

A recognizer raises this event on completion of one of the EmulateRecognizeAsync() overloaded methods on the SpeechRecognizer or SpeechRecognitionEngine class. This event is associated with the EmulateRecognizeCompletedEventArgs class, which returns the recognition result (Result).
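Emulated recognition submits text instead of audio, which is useful for testing grammars. The following is a sketch; it assumes recognizer is a SpeechRecognitionEngine instance with this handler attached, and the input phrase is only an illustration.

  // Elsewhere in the application: emulate recognition with text input.
  recognizer.EmulateRecognizeAsync("turn on the lights");

  static void recognizer_EmulateRecognizeCompleted(object sender, EmulateRecognizeCompletedEventArgs e)
  {
    // e.Result is null if the emulated input did not match a loaded grammar.
    if (e.Result != null)
    {
      Console.WriteLine("Emulated recognition result: " + e.Result.Text);
    }
  }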

LoadGrammarCompleted

A recognizer raises this event on completion of the asynchronous loading of a grammar. This event is associated with the LoadGrammarCompletedEventArgs class, which contains a property that is a reference to the grammar that was loaded (Grammar).

RecognizerUpdateReached

A recognizer raises this event when it receives a request to update its state. An application makes this request using one of the RequestRecognizerUpdate() overloaded methods on either the SpeechRecognizer or the SpeechRecognitionEngine class. This event is associated with the RecognizerUpdateReachedEventArgs class, which contains a property for the audio position (AudioPosition) and a user token property (UserToken).
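An application might request an update and handle the resulting pause as follows. This is a sketch; it assumes recognizer is a SpeechRecognitionEngine instance with this handler attached, and the token string is only an illustration.

  // Elsewhere in the application: request a pause for an atomic update.
  // The user token identifies this request when the event is raised.
  recognizer.RequestRecognizerUpdate("update-request-1");

  static void recognizer_RecognizerUpdateReached(object sender, RecognizerUpdateReachedEventArgs e)
  {
    // The recognizer is paused here, so it is safe to modify its state,
    // for example to load or unload a grammar.
    Console.WriteLine("Update reached at audio position: " + e.AudioPosition);
    Console.WriteLine("User token: " + e.UserToken);
  }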

SpeechDetected

A recognizer raises this event when it detects speech in the input audio. This event is associated with the SpeechDetectedEventArgs class, which contains a property that indicates the audio position at which speech occurs (AudioPosition).

SpeechHypothesized

A recognizer raises this event when it detects speech and a portion of the speech is tentatively recognized. This event is associated with the SpeechHypothesizedEventArgs class, which contains the Result property (inherited from the RecognitionEventArgs class).

SpeechRecognitionRejected

A recognizer raises this event when it detects speech, but returns only candidate phrases with low confidence levels. This event is associated with the SpeechRecognitionRejectedEventArgs class, which contains the Result property (inherited from the RecognitionEventArgs class).
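A handler for this event might list the low-confidence candidate phrases using the Alternates property of the result. The following is a sketch; it assumes the handler has been attached to a SpeechRecognitionEngine instance, as in the example at the end of this topic.

  static void recognizer_SpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
  {
    Console.WriteLine("Speech rejected. Candidate phrases:");
    foreach (RecognizedPhrase phrase in e.Result.Alternates)
    {
      Console.WriteLine("  " + phrase.Text + " (confidence " + phrase.Confidence + ")");
    }
  }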

SpeechRecognized

A recognizer raises this event when it detects speech and has found one or more phrases with sufficiently high confidence levels. This event is associated with the SpeechRecognizedEventArgs class, which contains the Result property (inherited from the RecognitionEventArgs class).

StateChanged

A recognizer raises this event when the running state of the Windows Desktop Speech Technology recognition engine changes. This event is associated with the StateChangedEventArgs class, which contains the RecognizerState property. The possible values of this property are Listening and Stopped, the two values of the RecognizerState enumeration.
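Because this event exists only on the SpeechRecognizer class, a handler for it might look like the following sketch, where sharedRecognizer is assumed to be a SpeechRecognizer instance declared elsewhere in the application.

  static void sharedRecognizer_StateChanged(object sender, StateChangedEventArgs e)
  {
    // e.RecognizerState is either RecognizerState.Listening or RecognizerState.Stopped.
    Console.WriteLine("Recognizer state: " + e.RecognizerState);
  }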

The method in the following example creates a SpeechRecognitionEngine instance named recognizer, which is declared at the class level so that the event handlers can access it. The example registers event handlers for the LoadGrammarCompleted, SpeechDetected, SpeechHypothesized, SpeechRecognized, and RecognizeCompleted events, and then loads a dictation grammar, sets the input to the default audio device, and starts asynchronous recognition so that the registered handlers run. Simple event handlers for these events follow the Main method.

  using System;
  using System.Speech.Recognition;

  namespace SampleRecognition
  {
    class Program
    {
      // Declared at the class level so the event handlers can reference it.
      static SpeechRecognitionEngine recognizer;

      static void Main(string[] args)
      {
        recognizer = new SpeechRecognitionEngine();

        // Register handlers for the events this example handles.
        recognizer.LoadGrammarCompleted += new EventHandler<LoadGrammarCompletedEventArgs>(recognizer_LoadGrammarCompleted);
        recognizer.SpeechDetected += new EventHandler<SpeechDetectedEventArgs>(recognizer_SpeechDetected);
        recognizer.SpeechHypothesized += new EventHandler<SpeechHypothesizedEventArgs>(recognizer_SpeechHypothesized);
        recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
        recognizer.RecognizeCompleted += new EventHandler<RecognizeCompletedEventArgs>(recognizer_RecognizeCompleted);

        // Load a grammar and begin asynchronous recognition.
        Grammar dictation = new DictationGrammar();
        dictation.Name = "Dictation Grammar";
        recognizer.SetInputToDefaultAudioDevice();
        recognizer.LoadGrammarAsync(dictation);
        recognizer.RecognizeAsync(RecognizeMode.Single);

        Console.WriteLine("Press Enter to exit.");
        Console.ReadLine();
      }

      static void recognizer_LoadGrammarCompleted(object sender, LoadGrammarCompletedEventArgs e)
      {
        Console.WriteLine("Grammar loaded:  " + e.Grammar.Name);
      }

      static void recognizer_SpeechDetected(object sender, SpeechDetectedEventArgs e)
      {
        Console.WriteLine("Speech detected at audio position: " + e.AudioPosition);
      }

      static void recognizer_SpeechHypothesized(object sender, SpeechHypothesizedEventArgs e)
      {
        Console.WriteLine("Speech hypothesized: " + e.Result.Text);
      }

      static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
      {
        Console.WriteLine("Speech recognized:  " + e.Result.Text);
      }

      static void recognizer_RecognizeCompleted(object sender, RecognizeCompletedEventArgs e)
      {
        Console.WriteLine("Recognize completed at audio position: " + e.AudioPosition);
      }
    }
  }
© 2014 Microsoft