Note

Please see Azure Cognitive Services for Speech documentation for the latest supported speech solutions.

Use Speech Recognition Events (Microsoft.Speech)

During run-time operations, a speech recognizer raises a number of events that applications can register to receive. For each event that it intends to handle, the application must provide a handler. For more information, see the example at the end of this topic.

The SpeechRecognitionEngine class provides the following events.

AudioLevelUpdated - Raised when a change in the audio level is detected.
AudioSignalProblemOccurred - Raised when a problem is detected in the audio signal received by a SpeechRecognitionEngine instance.
AudioStateChanged - Raised when a change in the state of a recognition engine's audio input is detected.
EmulateRecognizeCompleted - Raised when an asynchronous emulation of speech recognition is finished.
LoadGrammarCompleted - Raised when the asynchronous loading of a Grammar object into a speech recognizer is finished.
RecognizeCompleted - Raised when an asynchronous recognition operation completes.
RecognizerUpdateReached - Raised when the recognition engine pauses to allow an atomic update requested by one of the RequestRecognizerUpdate() overloaded methods.
SpeechDetected - Raised when speech is detected.
SpeechHypothesized - Raised when the recognition engine detects speech and tentatively recognizes part of the audio input.
SpeechRecognitionRejected - Raised when the recognition engine detects speech but can return only candidate phrases with low confidence levels.
SpeechRecognized - Raised when the recognition engine detects speech and finds one or more phrases with sufficiently high confidence levels.

AudioLevelUpdated

A SpeechRecognitionEngine instance raises this event when the audio level changes. This event is associated with the AudioLevelUpdatedEventArgs class, which contains the AudioLevel property.
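
For example, a handler for this event might report the new input level. The following sketch assumes the handler has been attached to the AudioLevelUpdated event of a SpeechRecognitionEngine instance, using the handler naming pattern from the example at the end of this topic.

  static void recognizer_AudioLevelUpdated(object sender, AudioLevelUpdatedEventArgs e)
  {
    // AudioLevel reports the level of the incoming audio on a scale from 0 to 100.
    Console.WriteLine("Audio level: " + e.AudioLevel);
  }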

AudioSignalProblemOccurred

A SpeechRecognitionEngine instance raises this event when there is a problem with the audio signal. The kinds of problems that are reported are values in the AudioSignalProblem enumeration. This event is associated with the AudioSignalProblemOccurredEventArgs class, which contains properties that provide information about the audio signal problem: the audio level (AudioLevel), the position in the input device’s audio stream at which the problem occurred (AudioPosition), the particular audio signal problem (AudioSignalProblem), and the position in the recognizer’s audio stream at which the problem occurred (RecognizerAudioPosition).
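
A handler might log which problem occurred and where in the audio it was detected. A minimal sketch, using the same handler naming pattern as the example at the end of this topic:

  static void recognizer_AudioSignalProblemOccurred(object sender, AudioSignalProblemOccurredEventArgs e)
  {
    // AudioSignalProblem identifies the problem, for example TooNoisy or TooQuiet.
    Console.WriteLine("Audio signal problem: " + e.AudioSignalProblem);
    Console.WriteLine("  Audio position: " + e.AudioPosition);
    Console.WriteLine("  Recognizer audio position: " + e.RecognizerAudioPosition);
  }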

AudioStateChanged

A SpeechRecognitionEngine instance raises this event when the audio state changes from one value in the AudioState enumeration to another (such as from Speech to Silence). This event is associated with the AudioStateChangedEventArgs class, which contains the new audio state of the recognizer (AudioState).
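
A handler for this event can simply report the new state, as in the following sketch:

  static void recognizer_AudioStateChanged(object sender, AudioStateChangedEventArgs e)
  {
    // AudioState is one of Stopped, Silence, or Speech.
    Console.WriteLine("New audio state: " + e.AudioState);
  }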

EmulateRecognizeCompleted

A SpeechRecognitionEngine instance raises this event on completion of one of the EmulateRecognizeAsync() overloaded methods. This event is associated with the EmulateRecognizeCompletedEventArgs class, which contains the recognition result (Result).
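
For example, an application that exercises its grammars with text input rather than audio might emulate recognition and handle the completion event as follows. The input string is illustrative; any phrase covered by a loaded grammar could be used.

  // Somewhere in the application, after a grammar has been loaded:
  // recognizer.EmulateRecognizeAsync("start");  // illustrative input text

  static void recognizer_EmulateRecognizeCompleted(object sender, EmulateRecognizeCompletedEventArgs e)
  {
    if (e.Result != null)
    {
      Console.WriteLine("Emulated recognition: " + e.Result.Text);
    }
    else
    {
      Console.WriteLine("Emulated input was not recognized.");
    }
  }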

LoadGrammarCompleted

A SpeechRecognitionEngine instance raises this event on completion of the asynchronous loading of a grammar. This event is associated with the LoadGrammarCompletedEventArgs class, which contains a property that is a reference to the grammar that was loaded (Grammar).

RecognizeCompleted

A SpeechRecognitionEngine instance raises this event on the completion of an asynchronous recognition operation initiated by calls to the RecognizeAsync() overloaded methods. This event is associated with the RecognizeCompletedEventArgs class, which contains several properties that describe the recognition operation.

RecognizerUpdateReached

A SpeechRecognitionEngine instance raises this event when it receives a request to update its state. An application makes this request using one of the RequestRecognizerUpdate() overloaded methods of the SpeechRecognitionEngine class. This event is associated with the RecognizerUpdateReachedEventArgs class, which contains a property for the audio position (AudioPosition) and a user token property (UserToken).
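
A typical use of this event is to change the loaded grammars while recognition is running: the recognizer pauses at a point where the change can be applied atomically. In the following sketch, the grammar passed as the user token is hypothetical.

  // Request a pause, passing the grammar to unload as the user token (illustrative):
  // recognizer.RequestRecognizerUpdate(oldGrammar);

  static void recognizer_RecognizerUpdateReached(object sender, RecognizerUpdateReachedEventArgs e)
  {
    Console.WriteLine("Update reached at audio position: " + e.AudioPosition);

    // The recognizer is paused here, so grammars can be loaded or unloaded safely.
    Grammar grammarToUnload = e.UserToken as Grammar;
    if (grammarToUnload != null)
    {
      ((SpeechRecognitionEngine)sender).UnloadGrammar(grammarToUnload);
    }
  }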

SpeechDetected

A SpeechRecognitionEngine instance raises this event when it detects speech in the input audio. This event is associated with the SpeechDetectedEventArgs class, which contains a property that indicates the audio position at which speech occurs (AudioPosition).

SpeechHypothesized

A SpeechRecognitionEngine instance raises this event when it detects speech and a portion of the speech is tentatively recognized. This event is associated with the SpeechHypothesizedEventArgs class, which contains the Result property (inherited from the RecognitionEventArgs class).

SpeechRecognitionRejected

A SpeechRecognitionEngine instance raises this event when it detects speech, but returns only candidate phrases with low confidence levels. This event is associated with the SpeechRecognitionRejectedEventArgs class, which contains the Result property (inherited from the RecognitionEventArgs class).
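
A handler can inspect the candidate phrases in the rejected result and their confidence scores, as in the following sketch:

  static void recognizer_SpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
  {
    Console.WriteLine("Recognition rejected. Low-confidence candidates:");
    foreach (RecognizedPhrase phrase in e.Result.Alternates)
    {
      Console.WriteLine("  " + phrase.Text + " (confidence " + phrase.Confidence + ")");
    }
  }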

SpeechRecognized

A SpeechRecognitionEngine instance raises this event when it detects speech, and has found one or more phrases with sufficiently high confidence levels. This event is associated with the SpeechRecognizedEventArgs class, which contains the Result property (inherited from the RecognitionEventArgs class).

Example

The method in the following example creates a SpeechRecognitionEngine instance named recognizer and registers event handlers for the LoadGrammarCompleted, SpeechDetected, SpeechHypothesized, SpeechRecognized, and RecognizeCompleted events. Simple event handlers for these events follow the method. The variable recognizer is declared at the class level so that the handlers can access it.

  using System;
  using Microsoft.Speech.Recognition;

  class Program
  {
    // Declared at the class level so that the event handlers can access it.
    static SpeechRecognitionEngine recognizer;

    static void Main(string[] args)
    {
      recognizer = new SpeechRecognitionEngine();

      // Register a handler for each event this example handles.
      recognizer.LoadGrammarCompleted += new EventHandler<LoadGrammarCompletedEventArgs>(recognizer_LoadGrammarCompleted);
      recognizer.SpeechDetected += new EventHandler<SpeechDetectedEventArgs>(recognizer_SpeechDetected);
      recognizer.SpeechHypothesized += new EventHandler<SpeechHypothesizedEventArgs>(recognizer_SpeechHypothesized);
      recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
      recognizer.RecognizeCompleted += new EventHandler<RecognizeCompletedEventArgs>(recognizer_RecognizeCompleted);

      // The recognizer raises these events only after it is given audio input
      // and a grammar, and a recognition operation such as RecognizeAsync is started.
    }

    static void recognizer_LoadGrammarCompleted(object sender, LoadGrammarCompletedEventArgs e)
    {
      Console.WriteLine("Grammar loaded: " + e.Grammar.Name);
    }

    static void recognizer_SpeechDetected(object sender, SpeechDetectedEventArgs e)
    {
      Console.WriteLine("Speech detected at audio position: " + e.AudioPosition);
    }

    static void recognizer_SpeechHypothesized(object sender, SpeechHypothesizedEventArgs e)
    {
      Console.WriteLine("Speech hypothesized: " + e.Result.Text);
    }

    static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
      Console.WriteLine("Speech recognized: " + e.Result.Text);
    }

    static void recognizer_RecognizeCompleted(object sender, RecognizeCompletedEventArgs e)
    {
      Console.WriteLine("Recognize completed at audio position: " + e.AudioPosition);
    }
  }