This documentation is archived and is not being maintained.

StateChangedEventArgs Class

Provides data for the StateChanged event.


Namespace:  System.Speech.Recognition
Assembly:  System.Speech (in System.Speech.dll)

public class StateChangedEventArgs : EventArgs

The StateChangedEventArgs type exposes the following members.

Properties

  RecognizerState
    Gets the current state of the shared speech recognition engine in Windows.

Methods

  Equals(Object)
    Determines whether the specified Object is equal to the current Object. (Inherited from Object.)
  Finalize
    Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
  GetHashCode
    Serves as a hash function for a particular type. (Inherited from Object.)
  GetType
    Gets the Type of the current instance. (Inherited from Object.)
  MemberwiseClone
    Creates a shallow copy of the current Object. (Inherited from Object.)
  ToString
    Returns a string that represents the current object. (Inherited from Object.)

The StateChanged event is raised by the SpeechRecognizer class. StateChangedEventArgs derives from EventArgs and is passed to handlers for StateChanged events.

RecognizerState is a read-only property. A shared speech recognizer's state cannot be changed programmatically. Users can change the state of a shared speech recognizer using the Speech Recognition user interface (UI) or the Speech Recognition item in the Windows Control Panel.

Both the On and Sleep settings in the Speech Recognition UI correspond to the Listening state. The Off setting in the Speech Recognition UI corresponds to Stopped.
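As a minimal sketch of this mapping (assuming a Windows system with the shared recognizer available), a handler can read RecognizerState from the event arguments to tell whether the user has switched the recognizer on or off; the initial state can also be read from the recognizer's own State property:

```csharp
using System;
using System.Speech.Recognition;

class StateDemo
{
    static void Main()
    {
        using (SpeechRecognizer recognizer = new SpeechRecognizer())
        {
            // RecognizerState is read-only; only the user can change it,
            // through the Speech Recognition UI or the Control Panel.
            recognizer.StateChanged += (sender, e) =>
            {
                // Listening covers both the On and Sleep UI settings;
                // Stopped corresponds to the Off setting.
                if (e.RecognizerState == RecognizerState.Listening)
                    Console.WriteLine("Recognizer is on (On or Sleep).");
                else
                    Console.WriteLine("Recognizer is off.");
            };

            // Report the state at startup.
            Console.WriteLine("Current state: {0}", recognizer.State);
            Console.ReadLine();
        }
    }
}
```

Because the state is user-controlled, code that needs the recognizer to be listening should react to StateChanged rather than attempt to set the state directly.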

The following example creates a shared speech recognizer and then creates grammars for recognizing specific words and for accepting free dictation. The example asynchronously loads all of the created grammars to the recognizer. A handler for the StateChanged event uses the EmulateRecognizeAsync(String) method to put Windows Speech Recognition into "listening" mode.

using System;
using System.Speech.Recognition;

namespace SampleRecognition
{
  class Program
  {
    private static SpeechRecognizer recognizer;

    public static void Main(string[] args)
    {
      // Initialize a shared speech recognition engine.
      recognizer = new SpeechRecognizer();

      // Add a handler for the LoadGrammarCompleted event.
      recognizer.LoadGrammarCompleted +=
        new EventHandler<LoadGrammarCompletedEventArgs>(recognizer_LoadGrammarCompleted);

      // Add a handler for the SpeechRecognized event.
      recognizer.SpeechRecognized +=
        new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

      // Add a handler for the StateChanged event.
      recognizer.StateChanged +=
        new EventHandler<StateChangedEventArgs>(recognizer_StateChanged);

      // Create the "yesno" grammar.
      Choices yesChoices = new Choices(new string[] { "yes", "yup", "yah" });
      SemanticResultValue yesValue =
          new SemanticResultValue(yesChoices, (bool)true);
      Choices noChoices = new Choices(new string[] { "no", "nope", "nah" });
      SemanticResultValue noValue =
          new SemanticResultValue(noChoices, (bool)false);
      SemanticResultKey yesNoKey =
          new SemanticResultKey("yesno", new Choices(new GrammarBuilder[] { yesValue, noValue }));
      Grammar yesnoGrammar = new Grammar(yesNoKey);
      yesnoGrammar.Name = "yesNo";

      // Create the "done" grammar.
      Grammar doneGrammar =
        new Grammar(new Choices(new string[] { "done", "exit", "quit", "stop" }));
      doneGrammar.Name = "Done";

      // Create the dictation grammar.
      Grammar dictation = new DictationGrammar();
      dictation.Name = "Dictation";

      // Load the grammars to the recognizer asynchronously.
      recognizer.LoadGrammarAsync(yesnoGrammar);
      recognizer.LoadGrammarAsync(doneGrammar);
      recognizer.LoadGrammarAsync(dictation);

      // Keep the console window open.
      Console.ReadLine();
    }

    // Put the shared speech recognizer into "listening" mode.
    static void recognizer_StateChanged(object sender, StateChangedEventArgs e)
    {
      if (e.RecognizerState != RecognizerState.Stopped)
      {
        recognizer.EmulateRecognizeAsync("Start listening");
      }
    }

    // Write the grammar name and the text of the recognized phrase to the console.
    static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
      Console.WriteLine("Grammar({0}): {1}", e.Result.Grammar.Name, e.Result.Text);

      // Add event handler code here.
    }

    // Handle the LoadGrammarCompleted event.
    static void recognizer_LoadGrammarCompleted(object sender, LoadGrammarCompletedEventArgs e)
    {
      string grammarName = e.Grammar.Name;
      bool grammarLoaded = e.Grammar.Loaded;

      if (e.Error != null)
      {
        Console.WriteLine("LoadGrammar for {0} failed with a {1}.",
          grammarName, e.Error.GetType().Name);

        // Add exception handling code here.
      }

      Console.WriteLine("Grammar {0} {1} loaded.",
        grammarName, (grammarLoaded) ? "is" : "is not");
    }
  }
}

.NET Framework

Supported in: 4, 3.5, 3.0

.NET Framework Client Profile

Supported in: 4

Windows 7, Windows Vista SP1 or later, Windows XP SP3, Windows Server 2008 (Server Core not supported), Windows Server 2008 R2 (Server Core supported with SP1 or later), Windows Server 2003 SP2

The .NET Framework does not support all versions of every platform. For a list of the supported versions, see .NET Framework System Requirements.

Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.