Note

Please see Azure Cognitive Services for Speech documentation for the latest supported speech solutions.

SpeechRecognitionEngine.LoadGrammarAsync Method

Asynchronously loads a Grammar object.

Namespace:  Microsoft.Speech.Recognition
Assembly:  Microsoft.Speech (in Microsoft.Speech.dll)

Syntax

Visual Basic
'Declaration
Public Sub LoadGrammarAsync ( _
    grammar As Grammar _
)

'Usage
Dim instance As SpeechRecognitionEngine
Dim grammar As Grammar

instance.LoadGrammarAsync(grammar)

C#
public void LoadGrammarAsync(
    Grammar grammar
)

Parameters

grammar
Type: Microsoft.Speech.Recognition.Grammar
The Grammar object to load.

Exceptions

ArgumentNullException
    grammar is a null reference (Nothing in Visual Basic).

InvalidOperationException
    The Grammar object is not in a valid state.

OperationCanceledException
    The asynchronous operation was canceled.
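
For illustration only, the following sketch shows one way a caller might guard against these exceptions; the recognizer and grammar variables are assumed to have been created elsewhere.

try
{
  if (grammar == null)
  {
    throw new ArgumentNullException("grammar");
  }

  // Throws InvalidOperationException if the grammar is already loaded,
  // is currently being loaded, or previously failed to load.
  recognizer.LoadGrammarAsync(grammar);
}
catch (InvalidOperationException ex)
{
  Console.WriteLine("Could not load grammar: {0}", ex.Message);
}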

Remarks

When the recognizer completes loading a Grammar object, it raises a LoadGrammarCompleted event. The recognizer throws an exception if the Grammar object is already loaded, is being asynchronously loaded, or has failed to load into any recognizer. You cannot load the same Grammar object into multiple instances of SpeechRecognitionEngine. Instead, create a new Grammar object for each SpeechRecognitionEngine instance.
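
As a minimal sketch of this point (recognizer1 and recognizer2 are assumed to be separate SpeechRecognitionEngine instances), the same GrammarBuilder can be reused, but each engine gets its own Grammar object:

GrammarBuilder colors =
  new GrammarBuilder(new Choices(new string[] { "red", "green", "blue" }));

// Construct a distinct Grammar object for each engine instead of sharing one.
recognizer1.LoadGrammarAsync(new Grammar(colors) { Name = "Colors" });
recognizer2.LoadGrammarAsync(new Grammar(colors) { Name = "Colors" });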

If the recognizer is running, applications must use RequestRecognizerUpdate() to pause the speech recognition engine before loading, unloading, enabling, or disabling a grammar.
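
The following is a minimal sketch of that pattern, assuming a recognizer that is already running and a newGrammar object that is ready to load:

// Load the new grammar only after the engine has paused at a safe point.
recognizer.RecognizerUpdateReached += (sender, e) =>
{
  recognizer.LoadGrammarAsync(newGrammar);
};

// Ask the running engine to pause and raise the RecognizerUpdateReached event.
recognizer.RequestRecognizerUpdate();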

When you load a grammar, it is enabled by default. To disable a loaded grammar, use the Enabled property.
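
For example, a loaded grammar such as yesnoGrammar in the example below could be turned off and back on without unloading it:

yesnoGrammar.Enabled = false;   // The recognizer ignores this grammar.
// ...
yesnoGrammar.Enabled = true;    // The grammar takes part in recognition again.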

To load a Grammar object synchronously, use the LoadGrammar(Grammar) method.
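
As a sketch of the synchronous alternative, LoadGrammar(Grammar) returns only after the grammar has been loaded, so no LoadGrammarCompleted handler is required:

recognizer.LoadGrammar(yesnoGrammar);   // Blocks until the grammar is loaded.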

Examples

The following example creates two speech recognition grammars and constructs a Grammar object from each of them. It then asynchronously loads the Grammar objects into the SpeechRecognitionEngine instance. The handler for the recognizer's LoadGrammarCompleted event reports the status of each Grammar object when it has finished loading. The handler for the SpeechRecognized event reports the name of the Grammar object that produced the recognition and the text of the recognition result.

using System;
using Microsoft.Speech.Recognition;

namespace SampleRecognition
{
  class Program
  {
    private static SpeechRecognitionEngine recognizer;
    public static void Main(string[] args)
    {

      // Initialize a SpeechRecognitionEngine object and set its input.
      recognizer = new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US"));
      recognizer.SetInputToDefaultAudioDevice();

      // Add a handler for the LoadGrammarCompleted event.
      recognizer.LoadGrammarCompleted +=
        new EventHandler<LoadGrammarCompletedEventArgs>(recognizer_LoadGrammarCompleted);

      // Add a handler for the SpeechRecognized event.
      recognizer.SpeechRecognized +=
        new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

      // Create the "yesno" grammar and build it into a Grammar object.
      Choices yesChoices = new Choices(new string[] { "yes", "yup", "yeah" });
      SemanticResultValue yesValue =
          new SemanticResultValue(yesChoices, (bool)true);
      Choices noChoices = new Choices(new string[] { "no", "nope", "nah" });
      SemanticResultValue noValue =
          new SemanticResultValue(noChoices, (bool)false);
      SemanticResultKey yesNoKey =
          new SemanticResultKey("yesno", new Choices(new GrammarBuilder[] { yesValue, noValue }));
      Grammar yesnoGrammar = new Grammar(yesNoKey);
      yesnoGrammar.Name = "yesNo";

      // Create the "done" grammar within the constructor of a Grammar object.
      Grammar doneGrammar =
        new Grammar(new GrammarBuilder(new Choices(new string[] 
        { "done", "exit", "quit", "stop" })));
      doneGrammar.Name = "Done";

      // Load the Grammar objects into the recognizer.
      recognizer.LoadGrammarAsync(yesnoGrammar);
      recognizer.LoadGrammarAsync(doneGrammar);

      // Start asynchronous, continuous recognition.
      recognizer.RecognizeAsync(RecognizeMode.Multiple);

      // Keep the console window open.
      Console.ReadLine();
    }

    // Handle the LoadGrammarCompleted event. 
    static void recognizer_LoadGrammarCompleted(object sender, LoadGrammarCompletedEventArgs e)
    {
      string grammarName = e.Grammar.Name;
      bool grammarLoaded = e.Grammar.Loaded;
      bool grammarEnabled = e.Grammar.Enabled;

      if (e.Error != null)
      {
        Console.WriteLine("LoadGrammar for {0} failed with a {1}.",
          grammarName, e.Error.GetType().Name);

        // Add exception handling code here.
        return;
      }

      Console.WriteLine("Grammar {0} {1} loaded and {2} enabled.", 
        grammarName, (grammarLoaded) ? "is" : "is not", 
        (grammarEnabled) ? "is" : "is not");
    }

    // Handle the SpeechRecognized event.
    static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
      Console.WriteLine("Grammar({0}): {1}", e.Result.Grammar.Name, e.Result.Text);

      // Add event handler code here.
    }
  }
}

See Also

Reference

SpeechRecognitionEngine Class

SpeechRecognitionEngine Members

Microsoft.Speech.Recognition Namespace

RecognizerUpdateReached

UnloadAllGrammars

UnloadGrammar

LoadGrammar