SpeechRecognizer.AudioPosition Property

Gets the current location in the audio stream being generated by the device that is providing input to the speech recognizer.

Namespace:  System.Speech.Recognition
Assembly:  System.Speech (in System.Speech.dll)

public TimeSpan AudioPosition { get; }

Property Value

Type: System.TimeSpan
The current position in the audio input stream through which the speech recognizer has received input.

Remarks

The shared recognizer receives input while Windows Speech Recognition is running.

The AudioPosition property references the input device's position in its generated audio stream. By contrast, the RecognizerAudioPosition property references the recognizer's position in processing audio input. These positions can be different. For example, if the recognizer has received input for which it has not yet generated a recognition result, then the value of the RecognizerAudioPosition property is less than the value of the AudioPosition property.
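As an illustrative sketch of the relationship between the two properties, the fragment below (separate from the sample that follows) reads both positions and computes their difference, which corresponds to audio the recognizer has received but not yet processed. It assumes Windows Speech Recognition is running, which is required for the shared recognizer to receive input:

```csharp
using System;
using System.Speech.Recognition;

class PositionLag
{
    static void Main()
    {
        // Requires Windows Speech Recognition to be running so that
        // the shared recognizer is receiving audio input.
        using (SpeechRecognizer recognizer = new SpeechRecognizer())
        {
            // AudioPosition: the input device's position in the stream it generates.
            TimeSpan devicePosition = recognizer.AudioPosition;

            // RecognizerAudioPosition: how far the recognizer has processed that stream.
            TimeSpan processedPosition = recognizer.RecognizerAudioPosition;

            // The difference is audio that has been received but not yet processed,
            // so it is never negative.
            Console.WriteLine("Unprocessed audio: {0}", devicePosition - processedPosition);
        }
    }
}
```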

Example

In the following example, the shared speech recognizer uses a dictation grammar to match speech input. A handler for the SpeechDetected event writes the AudioPosition, RecognizerAudioPosition, and AudioLevel property values to the console when the speech recognizer detects speech at its input.

using System;
using System.Speech.Recognition;

namespace SampleRecognition
{
  class Program
  {
    private static SpeechRecognizer recognizer;
    public static void Main(string[] args)
    {

      // Initialize a shared speech recognition engine.
      recognizer = new SpeechRecognizer();

      // Add handlers for events.
      recognizer.LoadGrammarCompleted += 
        new EventHandler<LoadGrammarCompletedEventArgs>(recognizer_LoadGrammarCompleted);
      recognizer.SpeechRecognized += 
        new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
      recognizer.StateChanged += 
        new EventHandler<StateChangedEventArgs>(recognizer_StateChanged);
      recognizer.SpeechDetected += 
        new EventHandler<SpeechDetectedEventArgs>(recognizer_SpeechDetected);

      // Create a dictation grammar.
      Grammar dictation = new DictationGrammar();
      dictation.Name = "Dictation";

      // Load the grammar object into the recognizer.
      recognizer.LoadGrammarAsync(dictation);

      // Keep the console window open.
      Console.ReadLine();
    }

    // Gather information about detected speech and write it to the console.
    static void recognizer_SpeechDetected(object sender, SpeechDetectedEventArgs e)
    {
      Console.WriteLine();
      Console.WriteLine("Speech detected:");
      Console.WriteLine("  Audio level: " + recognizer.AudioLevel);
      Console.WriteLine("  Audio position: " + recognizer.AudioPosition);
      Console.WriteLine("  Recognizer audio position: " + recognizer.RecognizerAudioPosition);
    }

    // Write the text of the recognition result to the console.
    static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    { 
      Console.WriteLine("Speech recognized: " + e.Result.Text);

      // Add event handler code here.
    }

    // Write the name of the loaded grammar to the console.
    static void recognizer_LoadGrammarCompleted(object sender, LoadGrammarCompletedEventArgs e)
    {
      Console.WriteLine("Grammar loaded: " + e.Grammar.Name);
    }

    // Put the shared speech recognizer into "listening" mode.
    static void recognizer_StateChanged(object sender, StateChangedEventArgs e)
    {
      if (e.RecognizerState != RecognizerState.Stopped)
      {
        recognizer.EmulateRecognizeAsync("Start listening");
      }
    }
  }
}

Version Information

.NET Framework

Supported in: 4.6, 4.5, 4, 3.5, 3.0

.NET Framework Client Profile

Supported in: 4