Using Speech Recognition in UCMA 3.0 and Lync 2010: Code Walkthrough (Part 5 of 5)

Summary:   This is the last in a series of five articles that describe how a Microsoft Unified Communications Managed API (UCMA) 3.0 application and a Microsoft Lync 2010 application can be combined to perform speech recognition. Part 5 presents the code that is used in both applications.

Applies to:   Microsoft Lync Server 2010 | Microsoft Unified Communications Managed API 3.0 Core SDK | Microsoft Lync 2010

Published:   April 2011 | Provided by:   Mark Parker, Microsoft | About the Author

This is the last in a five-part series of articles that describe how to incorporate speech recognition in Lync 2010 applications that interoperate with UCMA 3.0.

UCMA 3.0 Application Code

The Microsoft Unified Communications Managed API (UCMA) 3.0 application code appears in this section. The application code includes configuration information (App.config), main program code, helper code, and the Speech Recognition Grammar Specification (SRGS) XML grammar file (Flights.grxml).

Application Configuration

The following example is the application configuration file, App.config, that is used to configure settings for the Lync Server 2010 computer, the local endpoint (the UCMA 3.0 application), and the remote endpoint (the Lync 2010 application). If the appropriate parameters are supplied in the add elements (and the XML comment delimiters are removed), the application does not prompt for them at run time. A filled-in example appears after the listing.

App.config

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- Provide parameters necessary for the sample to run without prompting for user input. -->
    <!-- Provide the FQDN of the Microsoft Lync Server 2010 computer. -->
    <!-- <add key="ServerFQDN1" value="" /> -->
    <!-- The user sign-in name that is used to sign in to the application. -->
    <!-- To use credentials used by the currently signed-in user, do not add a value. -->
    <!-- <add key="UserName1" value="" /> -->
    <!-- The user domain name that is used to sign in to the application. -->
    <!-- To use credentials used by the currently signed-in user, do not add a value. -->
    <!-- <add key="UserDomain1" value="" /> -->
    <!-- The user URI that is used to sign in to the application, in the format user@host. -->
    <!-- <add key="UserURI1" value="" /> -->
    <!-- The URI of the remote endpoint, in the user@host format. The application prepends "sip:". -->
    <!-- <add key="CalledParty" value="" /> -->
    <!-- <add key="ClientSettingsProvider.ServiceUri" value="" /> -->
  </appSettings>
  <system.web>
    <membership defaultProvider="ClientAuthenticationMembershipProvider">
      <providers>
        <add name="ClientAuthenticationMembershipProvider" type="System.Web.ClientServices.Providers.ClientFormsAuthenticationMembershipProvider, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" serviceUri="" />
      </providers>
    </membership>
    <roleManager defaultProvider="ClientRoleProvider" enabled="true">
      <providers>
        <add name="ClientRoleProvider" type="System.Web.ClientServices.Providers.ClientRoleProvider, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" serviceUri="" cacheTimeout="86400" />
      </providers>
    </roleManager>
  </system.web>
</configuration>
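
For reference, the following is a hypothetical filled-in appSettings block. The server FQDN, user name, domain, and URIs are placeholders for illustration only; substitute values from your own deployment. Note that the CalledParty value omits the "sip:" prefix, because the helper code adds it.

  <appSettings>
    <add key="ServerFQDN1" value="lyncserver.contoso.com" />
    <add key="UserName1" value="bob" />
    <add key="UserDomain1" value="contoso" />
    <add key="UserURI1" value="bob@contoso.com" />
    <add key="CalledParty" value="alice@contoso.com" />
  </appSettings>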

Main Application Code

The following code, ChooseFlight.cs, is the heart of the UCMA 3.0 application. It contains the code that is used to create the context channel and set up the objects that are required for speech recognition.

ChooseFlight.cs

using System;
using System.Threading;
using System.Text;
using Microsoft.Rtc.Collaboration;
using Microsoft.Rtc.Collaboration.AudioVideo;
using Microsoft.Rtc.Signaling;
using Microsoft.Rtc.Collaboration.Sample.Common;
using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Recognition;
using System.Net.Mime;
using System.Collections.Generic;



namespace Microsoft.Rtc.Collaboration.Sample.ChooseFlight
{
  // This UCMA application creates an outbound audio-video call to a remote Lync 2010 user, and opens the Lync Conversation Window Extension. 
  // This application also creates a SpeechRecognitionEngine instance so that semantic information from 
  // the remote user's speech can be recognized and displayed in a form on the Lync user's computer screen.
  // Communication between the UCMA application and the remote Lync 2010 application occurs through audio from the Lync application
  // to the UCMA application, and in both directions through a ConversationContextChannel instance. After the context channel is established, 
  // the UCMA application sends contextual data to the remote user, notifying the user to speak his or her preferences for a desired flight. 
  // The audio of this utterance is fed into a SpeechRecognitionEngine and is matched against an SRGS XML grammar. If a match 
  // occurs, the UCMA application sends the Lync application a string that contains the flight origination, the destination, and the cost of a
  // ticket. The Lync user closes both applications by clicking the Exit button in the form that appears in the Conversation Window Extension.
  // After this application ends the call, it shuts down the platform and ends, and then pauses the console to allow logs to be viewed.
  // The UCMA application logs in as UserName1, given in App.config, and places an audio-video call to the target user, CalledParty,
  // also given in App.config.
 
  public class ChooseFlightAVCall
  {
    // Some necessary instance variables.
    private CollaborationPlatform _collabPlatform;
    private AudioVideoCall _audioVideoCall;
    private AudioVideoFlow _audioVideoFlow;
  
    // The conversation.
    private Conversation _conversation;
 
    // The conversation context channel.
    private ConversationContextChannel _channel;

    // The speech recognition engine.
    SpeechRecognitionEngine _speechRecognitionEngine;
    
  
    // Wait handles are present only to keep the main thread and worker thread synchronized.
    private AutoResetEvent _waitForRecoCompleted = new AutoResetEvent(false);
    private AutoResetEvent _waitForGrammarToLoad = new AutoResetEvent(false);
    private AutoResetEvent _waitForPlatformShutdownCompleted = new AutoResetEvent(false);

    static void Main(string[] args)
    {
      ChooseFlightAVCall basicAVCall = new ChooseFlightAVCall();
      basicAVCall.Run();
    }

    public void Run()
    {
      // Create an AudioVideoFlow instance.
      AudioVideoFlowHelper audioVideoFlowHelper = new AudioVideoFlowHelper();
      _audioVideoFlow = audioVideoFlowHelper.CreateAudioVideoFlow(null, audioVideoFlow_StateChanged);

      _audioVideoCall = _audioVideoFlow.Call;
      _conversation = _audioVideoCall.Conversation;
      _collabPlatform = _conversation.Endpoint.Platform;

      // Create the context channel. 
      _channel = new ConversationContextChannel(_conversation, _audioVideoCall.RemoteEndpoint);

      // Register handlers for the DataReceived and StateChanged events on ConversationContextChannel.
      _channel.DataReceived += new EventHandler<ConversationContextChannelDataReceivedEventArgs>(channel_DataReceived);
      _channel.StateChanged += new EventHandler<ConversationContextChannelStateChangedEventArgs>(channel_StateChanged);

      // Create a speech recognition connector and attach it to the AudioVideoFlow instance.
      SpeechRecognitionConnector speechRecognitionConnector = new SpeechRecognitionConnector();
      speechRecognitionConnector.AttachFlow(_audioVideoFlow);

      // Start the speech recognition connector.
      SpeechRecognitionStream stream = speechRecognitionConnector.Start();

      // Create a SpeechRecognitionEngine and register for event notification.
      _speechRecognitionEngine = new SpeechRecognitionEngine();
      _speechRecognitionEngine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(speechRecognitionEngine_SpeechRecognized);
      _speechRecognitionEngine.LoadGrammarCompleted += new EventHandler<LoadGrammarCompletedEventArgs>(speechRecognitionEngine_LoadGrammarCompleted);

      // Create a Grammar object and load it into the SpeechRecognitionEngine.
      Grammar gr = new Grammar(@"C:\Users\me\Documents\Visual Studio 2008\Projects\ChooseFlight\Flights.grxml", "Main");
      _speechRecognitionEngine.LoadGrammarAsync(gr);
      _waitForGrammarToLoad.WaitOne();

      // Connect the audio stream to the SpeechRecognitionEngine.
      SpeechAudioFormatInfo speechAudioFormatInfo = new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, Microsoft.Speech.AudioFormat.AudioChannel.Mono);
      _speechRecognitionEngine.SetInputToAudioStream(stream, speechAudioFormatInfo);
      
      // Configure a set of options for a context channel.
      ConversationContextChannelEstablishOptions channelOptions = ConfigureChannelOptions();
      
      // The Application ID.
      // This GUID is used by this application and the Lync 2010 application. 
      // A key whose name is this GUID must be present in the registry
      // under HKLM\SOFTWARE\Policies\Microsoft\Communicator\ContextPackages\.
      Guid guid = new Guid("C17C216F-04A9-4234-94C1-A2EA5F0C4873");
     
      // Establish a context channel to the remote endpoint.
      IAsyncResult result = _channel.BeginEstablish(guid, channelOptions, BeginEstablishCB, _channel);

      // Block this thread until recognition is complete.
      _waitForRecoCompleted.WaitOne();
   
      // Stop the connector.
      speechRecognitionConnector.Stop();
      Console.WriteLine("Stopping the speech recognition connector.");

      result = _channel.BeginTerminate(BeginTerminateCB, _channel);
      _waitForPlatformShutdownCompleted.WaitOne();

      // Pause the console to allow the user to view logs.
      Console.WriteLine("Press any key to end the sample.");
      Console.ReadKey();  
    }


    #region EVENT HANDLERS
  
    // Event handler to record the call state transitions in the console.
    void audioVideoCall_StateChanged(object sender, CallStateChangedEventArgs e)
    {
      Console.WriteLine("Call has changed state. Previous state: " + e.PreviousState + " Current state: " + e.State);
    }

    // Event handler for the StateChanged event on the channel. 
    void channel_StateChanged(object sender, ConversationContextChannelStateChangedEventArgs e)
    {
      Console.WriteLine("Channel state change reason: {0}", e.TransitionReason.ToString());
      Console.WriteLine("New channel state: {0}", _channel.State.ToString());
    }

    // The FlowConfigurationRequested event indicates that a flow is now available (no longer null) and that media operations can begin on it.
    public void audioVideoCall_FlowConfigurationRequested(object sender, AudioVideoFlowConfigurationRequestedEventArgs e)
    {
      Console.WriteLine("Flow Created.");
      _audioVideoFlow = e.Flow;

      // Now that the flow is non-null, bind the event handler for State Changed.
      // When the flow goes active, (as indicated by the state changed event) the application can take media-related actions on the flow.
      _audioVideoFlow.StateChanged += new EventHandler<MediaFlowStateChangedEventArgs>(audioVideoFlow_StateChanged);
    }

    
    // Event handler for the DataReceived event on ConversationContextChannel. 
    // The only data expected is "exit".
    void channel_DataReceived(object sender, ConversationContextChannelDataReceivedEventArgs e)
    {
      // Retrieve the message body. The only payload this application expects is the string "exit".
      Byte[] body_byte = e.ContentDescription.GetBody();

      String body_UTF8 = Converter.ConvertByteArrayToString(body_byte, EncodingType.UTF8);

      if (body_UTF8.Equals("exit"))
      {
        _waitForRecoCompleted.Set();
        _speechRecognitionEngine.RecognizeAsyncStop();
      }
      else
      {
        Console.WriteLine("Unexpected results received in channel_DataReceived()");
      }
    }

    // Event handler for the LoadGrammarCompleted event on the SpeechRecognitionEngine.
    void speechRecognitionEngine_LoadGrammarCompleted(object sender, LoadGrammarCompletedEventArgs e)
    {
      Console.WriteLine("Grammar is now loaded.");
      _waitForGrammarToLoad.Set();
    }

    // Event handler for the SpeechRecognized event on the SpeechRecognitionEngine.
    void speechRecognitionEngine_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
      RecognitionResult recoResult = e.Result;
      String str; 
      String origCity, destCity;
      String[] prices = new String[4] { "$368.00", "$429.00", "$525.00", "$631.00" };
      int idx;
      Random rand = new Random();
      
      if (recoResult != null)
      {
        Console.WriteLine("Speech recognized: " + recoResult.Text);
        origCity = recoResult.Semantics["Origination"].Value.ToString();
        destCity = recoResult.Semantics["Destination"].Value.ToString();
        str = origCity;
        str = String.Concat(str, ";");
        str = String.Concat(str, destCity);
        str = String.Concat(str, ";");
        
        // The (bogus) cost of the flight. Random.Next excludes the upper bound, so prices.Length allows any of the four prices to be chosen.
        idx = rand.Next(0, prices.Length);
        str = String.Concat(str, prices[idx]);
        SendDataToRemote(str);
      }
    }
    #endregion 


    #region HELPER METHODS
    /// <summary>
    /// Sends the given data to the remote side of the channel. The data to be sent must be converted
    /// to an array of type Byte.
    /// </summary>
    /// <param name="data">A String to be sent on the context channel.</param>
    private void SendDataToRemote(String data)
    {
      // Each string to be sent consists of three fields, separated by semicolons.
      
      int len = data.Length;

      // Convert data to Byte[]. The payload is plain ASCII, so a one-byte-per-character copy suffices.
      Byte[] data_bytes = new Byte[len];

      for (int i = 0; i < len; i++)
      {
        data_bytes[i] = Convert.ToByte(data[i]);
      }

      ContentType contentType = new ContentType("text/plain; charset=us-ascii");

      _channel.BeginSendData(contentType, data_bytes, BeginSendDataCB, _channel);
    }

    ///<summary>Configure options for a ConversationContextChannel instance.
    ///</summary>
    ConversationContextChannelEstablishOptions ConfigureChannelOptions()
    {
      ConversationContextChannelEstablishOptions channelOptions = new ConversationContextChannelEstablishOptions();
      channelOptions.ApplicationInstallerPath = ""; // Used when the remote-side application is not already installed.
      channelOptions.ApplicationName = "Blue Yonder Airlines";
      channelOptions.ContextualData = "Context channel is open."; // Normally used for initialization.
      channelOptions.Toast = "Get ready for incoming contextual data.";
      return channelOptions;
    }

    #endregion


    #region CALLBACK METHODS

    // Callback for BeginEstablish on ConversationContextChannel.
    private void BeginEstablishCB(IAsyncResult result)
    {
      ConversationContextChannel channel = result.AsyncState as ConversationContextChannel;

      try
      {
        channel.EndEstablish(result);
        // Give the Lync 2010 client some time to get the UI in order. 
        Thread.Sleep(250);
      }

      catch (Exception ex)
      {
        Console.WriteLine("EndEstablish exception: " + ex);
      }

      Console.WriteLine("BeginEstablish IAsyncResult.IsCompleted: {0}", result.IsCompleted.ToString());
      _speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
    }


    // Callback for BeginSendData on ConversationContextChannel.
    private void BeginSendDataCB(IAsyncResult ar)
    {
      ConversationContextChannel channel = ar.AsyncState as ConversationContextChannel;
      channel.EndSendData(ar);
      Console.WriteLine("\nEndSendData() called on channel. IAsyncResult.IsCompleted value: {0}.", ar.IsCompleted.ToString());
    }

    // Callback for BeginTerminate on ConversationContextChannel.
    private void BeginTerminateCB(IAsyncResult ar)
    {
      ConversationContextChannel channel = ar.AsyncState as ConversationContextChannel;

      // Complete the termination of the context channel.
      channel.EndTerminate(ar);

      // Terminate the call. 
      IAsyncResult result = _audioVideoCall.BeginTerminate(EndTerminateCall, _audioVideoCall);
      Console.WriteLine("Waiting for the call to be terminated...");
    }

    private void EndTerminateCall(IAsyncResult ar)
    {
      AudioVideoCall AVCall = ar.AsyncState as AudioVideoCall;

      // Complete the termination of the incoming call.
      AVCall.EndTerminate(ar);

      // Terminate the conversation.
      IAsyncResult result = _audioVideoCall.Conversation.BeginTerminate(EndTerminateConversation, _audioVideoCall.Conversation);
      Console.WriteLine("Waiting for the conversation to be terminated...");
    }

    private void EndTerminateConversation(IAsyncResult ar)
    {
      Conversation conv = ar.AsyncState as Conversation;

      // Complete the termination of the conversation.
      conv.EndTerminate(ar);

      // Now, clean up by shutting down the platform.
      Console.WriteLine("Shutting down the platform...");

      _collabPlatform.BeginShutdown(PlatformShutdownCB, _collabPlatform);
    }

    // Callback that handles the StateChanged event on an AudioVideoFlow instance.
    private void audioVideoFlow_StateChanged(object sender, MediaFlowStateChangedEventArgs e)
    {
      // When flow is active, media operations can begin.
      if (e.State == MediaFlowState.Terminated)
      {
        // Detach SpeechRecognitionConnector since AVFlow is now terminated.
        AudioVideoFlow avFlow = (AudioVideoFlow)sender;
        if (avFlow.SpeechRecognitionConnector != null)
        {
          avFlow.SpeechRecognitionConnector.DetachFlow();
        }
      }
    }
    

    private void PlatformShutdownCB(IAsyncResult ar)
    {
      CollaborationPlatform collabPlatform = ar.AsyncState as CollaborationPlatform;
      try
      {
        // Shutdown actions will not throw.
        collabPlatform.EndShutdown(ar);
        Console.WriteLine("The platform is now shut down.");
      }
      finally
      {
        _waitForPlatformShutdownCompleted.Set();
      }
    }

    #endregion

  }
}
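
The comment in the Run method notes that the application GUID must be registered on the Lync 2010 user's computer; without that registration, the Lync client will not load the Conversation Window Extension. The following .reg fragment is a hedged sketch of such a registration. The value names shown (Name, InternalURL, ExtensibilityWindowSize) and the URL are assumptions based on the Lync 2010 SDK guidance on registering contextual applications; verify them against that documentation, and point InternalURL at the page that hosts FlightChooser.xap in your deployment.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Communicator\ContextPackages\{C17C216F-04A9-4234-94C1-A2EA5F0C4873}]
"Name"="Blue Yonder Airlines"
"InternalURL"="http://contoso.com/flightchooser/sample.html"
"ExtensibilityWindowSize"=dword:00000001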

Helper Code

Code for the three helper files appears in this section. The AudioVideoFlowHelper.cs code creates and returns the AudioVideoFlow instance that is used in audio communication with the remote Lync 2010 endpoint.

AudioVideoFlowHelper.cs

using System;
using System.Configuration;
using System.Threading;
using Microsoft.Rtc.Collaboration;
using Microsoft.Rtc.Collaboration.AudioVideo;
using Microsoft.Rtc.Signaling;

namespace Microsoft.Rtc.Collaboration.Sample.Common
{
  class AudioVideoFlowHelper
  {
    private static String _conversationSubject = "Microsoft Lync Server 2010!";
    private static String _conversationPriority = ConversationPriority.Urgent;
    private static String _calledParty;

    private AutoResetEvent _waitForAudioVideoCallEstablishCompleted = new AutoResetEvent(false);
    private AutoResetEvent _waitForAudioVideoFlowStateChangedToActiveCompleted = new AutoResetEvent(false);
    private AutoResetEvent _waitForPrepareSourceCompleted = new AutoResetEvent(false);

    private AudioVideoFlow _audioVideoFlow;
    private EventHandler<AudioVideoFlowConfigurationRequestedEventArgs> _audioVideoFlowConfigurationRequestedEventHandler;
    private EventHandler<MediaFlowStateChangedEventArgs> _audioVideoFlowStateChangedEventHandler;

    public AudioVideoFlow CreateAudioVideoFlow(EventHandler<AudioVideoFlowConfigurationRequestedEventArgs> audioVideoFlowConfigurationRequestedEventHandler, EventHandler<MediaFlowStateChangedEventArgs> audioVideoFlowStateChangedEventHandler)
    {
      _audioVideoFlowConfigurationRequestedEventHandler = audioVideoFlowConfigurationRequestedEventHandler;
      _audioVideoFlowStateChangedEventHandler = audioVideoFlowStateChangedEventHandler;

      UCMASampleHelper sampleHelper = new UCMASampleHelper();
      UserEndpoint userEndpoint = sampleHelper.CreateEstablishedUserEndpoint("AudioVideoFlowHelper");

      // If application settings are provided by the App.Config file, then use them.
      if (ConfigurationManager.AppSettings.HasKeys())
      {
        _calledParty = "sip:" + ConfigurationManager.AppSettings["CalledParty"];
      }
      else
      {
        // Prompt user for user URI.
        string prompt = "Please enter the Called Party URI in the User@Host format => ";
        _calledParty = UCMASampleHelper.PromptUser(prompt, "Remote User URI");
        _calledParty = "sip:" + _calledParty;
      }

      // Set up the conversation and place the call.
      ConversationSettings convSettings = new ConversationSettings();
      convSettings.Priority = _conversationPriority;
      convSettings.Subject = _conversationSubject;

      // Conversation represents a collection of modes of communication (media types) in the context of a dialog with one or more call recipients.
      Conversation conversation = new Conversation(userEndpoint, convSettings);
      AudioVideoCall audioVideoCall = new AudioVideoCall(conversation);

      // Registration for notification of StateChanged events on the Call is for logging purposes.
      audioVideoCall.StateChanged += new EventHandler<CallStateChangedEventArgs>(audioVideoCall_StateChanged);

      // Subscribe for notification of the AudioVideoFlowConfigurationRequested event. The flow will be used to send the media.
      // Ultimately, as a part of the callback, the media will be sent/received.
      audioVideoCall.AudioVideoFlowConfigurationRequested += new EventHandler<AudioVideoFlowConfigurationRequestedEventArgs>(audioVideoCall_FlowConfigurationRequested);

      // Place the call to the remote party, using default call options.
      audioVideoCall.BeginEstablish(_calledParty, null, EndCallEstablish, audioVideoCall);

      // Wait for the call to finish being established.
      _waitForAudioVideoCallEstablishCompleted.WaitOne();

      // Wait for the AudioVideoFlow’s State to become Active.
      _waitForAudioVideoFlowStateChangedToActiveCompleted.WaitOne();

      return _audioVideoFlow;
    }

    // Record the state transitions in the console.
    void audioVideoCall_StateChanged(object sender, CallStateChangedEventArgs e)
    {
      Console.WriteLine("Call has changed state. The previous call state was: " + e.PreviousState + " and the current state is: " + e.State);
    }

    // The FlowConfigurationRequested event indicates that a non-null flow is available and is ready to be configured
    // before media operations begin.
    public void audioVideoCall_FlowConfigurationRequested(object sender, AudioVideoFlowConfigurationRequestedEventArgs e)
    {
      Console.WriteLine("Flow Configuration Requested.");
      _audioVideoFlow = e.Flow;

      // Now that the flow is non-null, bind the event handler for the StateChanged event.
      // When the flow goes active, (as indicated by the StateChanged event) the application will perform media related actions.
      _audioVideoFlow.StateChanged += new EventHandler<MediaFlowStateChangedEventArgs>(audioVideoFlow_StateChanged);

      // Call the event handler.
      if (_audioVideoFlowConfigurationRequestedEventHandler != null)
      {
        _audioVideoFlowConfigurationRequestedEventHandler(sender, e);
      }
    }

    // Callback that is invoked when the state of an AudioVideoFlow changes.
    private void audioVideoFlow_StateChanged(object sender, MediaFlowStateChangedEventArgs e)
    {
      Console.WriteLine("Flow state changed from " + e.PreviousState + " to " + e.State);

      // When flow is active, media operations can begin.
      if (e.State == MediaFlowState.Active)
      {
        // Flow-related media operations normally begin here.
        _waitForAudioVideoFlowStateChangedToActiveCompleted.Set();
      }

      // Call the event handler.
      if (_audioVideoFlowStateChangedEventHandler != null)
      {
        _audioVideoFlowStateChangedEventHandler(sender, e);
      }
    }

    private void EndCallEstablish(IAsyncResult ar)
    {
      Call call = ar.AsyncState as Call;
      try
      {
        call.EndEstablish(ar);
        Console.WriteLine("The call with Local Participant: " + call.Conversation.LocalParticipant + " and Remote Participant: " + call.RemoteEndpoint.Participant + " is now in the established state.");
      }
      catch (OperationFailureException opFailEx)
      {
        // OperationFailureException: Indicates failure to connect the call to the remote party.
        // It is left to the application to perform real error handling here.
        Console.WriteLine(opFailEx.ToString());
      }
      catch (RealTimeException exception)
      {
        // RealTimeException may be thrown on media or link-layer failures.
        // It is left to the application to perform real error handling here.
        Console.WriteLine(exception.ToString());
      }
      finally
      {
        // Signal that the call establishment attempt has completed.
        _waitForAudioVideoCallEstablishCompleted.Set();
      }
    }
  }
}

The second file, UCMASampleHelper.cs, creates and establishes a UserEndpoint instance on a client platform. The following example is an abbreviated version of the code that appears in the UCMA 3.0 SDK UCMASampleCode.cs file.

The following code differs from UCMASampleCode.cs in the following ways.

  1. The sample presented here is shorter. Methods that are related to creating and establishing an ApplicationEndpoint instance are removed. The following methods are removed.

    • CreateUserEndpointWithServerPlatform

    • ReadGenericApplicationContactConfiguration

    • ReadApplicationContactConfiguration

    • CreateApplicationEndpoint

    • CreateAndStartServerPlatform

  2. A number of unused private fields are excluded from the following code. Code that uses the _serverCollabPlatform field is also excluded from the following example.

  3. Two variables are added to the following example: _remoteUserURIPrompt and _remoteUserURI.

  4. The GetRemoteUserURI method is added to the following example.

UCMASampleHelper.cs (abbreviated)

using System;
using System.Configuration;
using System.Globalization;
using System.Runtime.InteropServices;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading;

using Microsoft.Rtc.Collaboration;
using Microsoft.Rtc.Signaling;

namespace Microsoft.Rtc.Collaboration.Sample.Common
{
  class UCMASampleHelper
  {
    private static ManualResetEvent _sampleFinished = new ManualResetEvent(false);

    // The name of this application, to be used as the outgoing user agent string.
    // The user agent string is put in outgoing message headers to indicate the application that is used.
    private static string _applicationName = "UCMASampleCode";

    const string _sipPrefix = "sip:";

    // These strings are used as keys into the App.Config file to get information to avoid prompting. For most of these strings,
    // suffixes 1-N are used on each subsequent call. For example, UserName1 is used for the first user and UserName2 for the second user.
    private static String _serverFQDNPrompt = "ServerFQDN";
    private static String _userNamePrompt = "UserName";
    private static String _userDomainPrompt = "UserDomain";
    private static String _userURIPrompt = "UserURI";
    private static String _remoteUserURIPrompt = "UserURI";

    // Construct the network credential that the UserEndpoint will use for authentication by the Microsoft Lync Server 2010 computer.
    private string _userName; // User name and password pair of a user who is authorized to access Lync Server 2010.
    private string _userPassword;
    private string _userDomain; // Domain that this user signs in to. Note: This is the Active Directory domain, not the portion of the SIP URI following the ‘@’ sign.
    private System.Net.NetworkCredential _credential;

    // The URIs of the local user and the remote endpoint.
    private string _userURI; // This should be the URI of the user specified earlier.
    private string _remoteUserURI; // The URI of the remote endpoint.

    // The server FQDN.
    private static string _serverFqdn;// The FQDN of the Microsoft Lync Server 2010 computer.

    // Transport type used to communicate with Microsoft Lync Server 2010 computer.
    private Microsoft.Rtc.Signaling.SipTransportType _transportType = Microsoft.Rtc.Signaling.SipTransportType.Tls;

    private static CollaborationPlatform _collabPlatform;
    private static bool _isPlatformStarted;
    private AutoResetEvent _platformStartupCompleted = new AutoResetEvent(false);
    private AutoResetEvent _endpointInitCompletedEvent = new AutoResetEvent(false);
    private AutoResetEvent _platformShutdownCompletedEvent = new AutoResetEvent(false);
    private UserEndpoint _userEndpoint;

    private bool _useSuppliedCredentials;
    private static int _appContactCount;
    private static int _userCount = 1;

    // This method attempts to read user settings from the App.Config file. If the settings are not
    // present in the configuration file, this method prompts the user for them in the console.
    // This method returns a UserEndpointSettings object. If you do not want to monitor LocalOwnerPresence, you can 
    // call the CreateEstablishedUserEndpoint method directly. Otherwise, you can call the ReadUserSettings,
    // CreateUserEndpoint, and EstablishUserEndpoint methods, in that order.
    public UserEndpointSettings ReadUserSettings(string userFriendlyName)
    {
      UserEndpointSettings userEndpointSettings = null;
      string prompt = string.Empty;
      if (string.IsNullOrEmpty(userFriendlyName))
      {
        userFriendlyName = "Default User";
      }

      try
      {
        Console.WriteLine(string.Empty);
        Console.WriteLine("Creating User Endpoint for {0}...", userFriendlyName);
        Console.WriteLine();

        if (ConfigurationManager.AppSettings[_serverFQDNPrompt + _userCount] != null)
        {
          _serverFqdn = ConfigurationManager.AppSettings[_serverFQDNPrompt + _userCount];
          Console.WriteLine("Using {0} as Microsoft Lync Server", _serverFqdn);
        }
        else
        {
          // Prompt user for server FQDN. If server FQDN was entered previously, then use the saved value.
          string localServer;
          StringBuilder promptBuilder = new StringBuilder();
          if (!string.IsNullOrEmpty(_serverFqdn))
          {
            promptBuilder.Append("Current Microsoft Lync Server = ");
            promptBuilder.Append(_serverFqdn);
            promptBuilder.AppendLine(". Please hit ENTER to retain this setting - OR - ");
          }

          promptBuilder.Append("Please enter the FQDN of the Microsoft Lync Server that the ");
          promptBuilder.Append(userFriendlyName);
          promptBuilder.Append(" endpoint is homed on => ");
          localServer = PromptUser(promptBuilder.ToString(), null);

          if (!String.IsNullOrEmpty(localServer))
          {
            _serverFqdn = localServer;
          }
        }

        // Prompt user for user name
        prompt = String.Concat("Please enter the User Name for ",
                                    userFriendlyName,
                                    " (or hit the ENTER key to use current credentials)\r\n" +
                                    "Please enter the User Name => ");
        _userName = PromptUser(prompt, _userNamePrompt + _userCount);

        // If user name is empty, use current credentials
        if (string.IsNullOrEmpty(_userName))
        {
          Console.WriteLine("Username was empty - using current credentials...");
          _useSuppliedCredentials = true;
        }
        else
        {
          // Prompt for password
          prompt = String.Concat("Enter the User Password for ", userFriendlyName, " => ");
          _userPassword = PromptUser(prompt, null);

          prompt = String.Concat("Please enter the User Domain for ", userFriendlyName, " => ");
          _userDomain = PromptUser(prompt, _userDomainPrompt + _userCount);
        }

        // Prompt user for user URI
        prompt = String.Concat("Please enter the User URI for ", userFriendlyName, " in the User@Host format => ");
        _userURI = PromptUser(prompt, _userURIPrompt + _userCount);
        if (!(_userURI.ToLower().StartsWith("sip:") || _userURI.ToLower().StartsWith("tel:")))
        {
          _userURI = "sip:" + _userURI;
        }
        // Increment the last user number
        _userCount++;

        // Initialize and register the endpoint, using the credentials of the user that the application represents.
        // NOTE: the _userURI should always use the "sip:user@host" format.
        userEndpointSettings = new UserEndpointSettings(_userURI, _serverFqdn);

        if (!_useSuppliedCredentials)
        {
          _credential = new System.Net.NetworkCredential(_userName, _userPassword, _userDomain);
          userEndpointSettings.Credential = _credential;
        }
        else
        {
          userEndpointSettings.Credential = System.Net.CredentialCache.DefaultNetworkCredentials;
        }
      }
      catch (InvalidOperationException iOpEx)
      {
        // InvalidOperationException should be thrown only on poorly-entered input.
        Console.WriteLine("Invalid Operation Exception: " + iOpEx.ToString());
      }

      return userEndpointSettings;
    }

    // This method creates an endpoint, using the specified UserEndpointSettings object.
    // This method returns a UserEndpoint object so that you can register endpoint-specific event handlers. 
    // If you do not want to get endpoint-specific event information at the time the endpoint is established, you can 
    // call the CreateEstablishedUserEndpoint method directly. Otherwise, you can call the ReadUserSettings,
    // CreateUserEndpoint, and EstablishUserEndpoint methods, in that order.
    public UserEndpoint CreateUserEndpoint(UserEndpointSettings userEndpointSettings)
    {
      // Reuse the platform instance so that all endpoints share the same platform.
      if (_collabPlatform == null)
      {
        // Initialize and start the platform.
        ClientPlatformSettings clientPlatformSettings = new ClientPlatformSettings(_applicationName, _transportType);
        _collabPlatform = new CollaborationPlatform(clientPlatformSettings);
      }

      _userEndpoint = new UserEndpoint(_collabPlatform, userEndpointSettings);
      return _userEndpoint;
    }
    

    // This method establishes a previously created UserEndpoint.
    // This method returns an established UserEndpoint object. If you do not want to monitor LocalOwnerPresence, you can 
    // call the CreateEstablishedUserEndpoint method directly. Otherwise, you can call the ReadUserSettings,
    // CreateUserEndpoint, and EstablishUserEndpoint methods, in that order.
    public bool EstablishUserEndpoint(UserEndpoint userEndpoint)
    {
       // Start the platform, if not already started.
       if (_isPlatformStarted == false)
       {
         userEndpoint.Platform.BeginStartup(EndPlatformStartup, userEndpoint.Platform);

         // Wait for the platform startup to be completed.
         _platformStartupCompleted.WaitOne();
         Console.WriteLine("Platform started...");
         _isPlatformStarted = true;
       }
       // Establish the user endpoint.
       userEndpoint.BeginEstablish(EndEndpointEstablish, userEndpoint);

      // Wait until the endpoint is established.
      _endpointInitCompletedEvent.WaitOne();
      Console.WriteLine("Endpoint established...");
      return true;
    }

    

    // This method creates an established UserEndpoint.
    // This method returns an established UserEndpoint object. If you do not want to monitor LocalOwnerPresence, you can 
    // call this CreateEstablishedUserEndpoint method directly. Otherwise, you can call the ReadUserSettings,
    // CreateUserEndpoint, and EstablishUserEndpoint methods, in that order.
    public UserEndpoint CreateEstablishedUserEndpoint(string endpointFriendlyName)
    {
      UserEndpointSettings userEndpointSettings;
      UserEndpoint userEndpoint = null;
      try
      {
        // Read user settings
        userEndpointSettings = ReadUserSettings(endpointFriendlyName);

        // Create User Endpoint
        userEndpoint = CreateUserEndpoint(userEndpointSettings);

        // Establish the user endpoint
        EstablishUserEndpoint(userEndpoint);
      }
      catch (InvalidOperationException iOpEx)
      {
        // InvalidOperationException should be thrown only on poorly-entered input.
        Console.WriteLine("Invalid Operation Exception: " + iOpEx.ToString());
      }

      return userEndpoint;
    }

    // Returns the remote user URI.
    // This method is not present in the original UCMASampleHelper.cs
    public String GetRemoteUserURI()
    {
      String str = "";
      try
      {
        if (ConfigurationManager.AppSettings[_remoteUserURIPrompt + _userCount] != null)
        {
          _remoteUserURI = ConfigurationManager.AppSettings[_remoteUserURIPrompt + _userCount];
          Console.WriteLine("\nUsing {0} as remote user", _remoteUserURI);
          return _remoteUserURI;
        }
        else
        {
          // Prompt user for remote user URI.
          _remoteUserURI = UCMASampleHelper.PromptUser("Enter the URI for the remote user logged onto Communicator, in the sip:User@Host format or tel:+1XXXYYYZZZZ format => ", "RemoteUserURI");
          return _remoteUserURI;
        }
      }
      catch (InvalidOperationException iOpEx)
      {
        // Invalid Operation Exception should only be thrown on poorly-entered input.
        Console.WriteLine("Invalid Operation Exception: " + iOpEx.ToString());
        return str;
      }
    }

    /// <summary>
    /// If the 'key' is not found in App.Config, prompt the user in the console, using the specified prompt text.
    /// </summary>
    /// <param name="promptText"> The text to be displayed in the console if ‘key’ is not found in App.Config.</param>
    /// <param name="key"> Searches for this key in App.Config and returns a value if found. Pass null to always display a message that requests user input.</param>
    /// <returns>String value either from App.Config or user input.</returns>
    public static string PromptUser(string promptText, string key)
    {
      String value;
      if (String.IsNullOrEmpty(key) || ConfigurationManager.AppSettings[key] == null)
      {
        Console.WriteLine(string.Empty);
        Console.Write(promptText);
        value = Console.ReadLine();
      }
      else
      {
        value = ConfigurationManager.AppSettings[key];
        Console.WriteLine("Using keypair {0} - {1} from AppSettings...", key, value);
      }

      return value;
    }


    private void EndPlatformStartup(IAsyncResult ar)
    {
      CollaborationPlatform collabPlatform = ar.AsyncState as CollaborationPlatform;
      try
      {
        // The platform should now be started.
        collabPlatform.EndStartup(ar);
        // It should be noted that all the re-thrown exceptions will crash the application. This is intentional.
        // Ideal exception handling will report the error and force the application to shut down in an orderly manner. 
        // In production code, consider using an IAsyncResult implementation to report the error 
        // instead of throwing. Alternatively, put the implementation in this try block.
      }
      catch (OperationFailureException opFailEx)
      {
        // OperationFailureException is thrown when the platform cannot establish, usually due to invalid data.
        Console.WriteLine(opFailEx.Message);
        throw;
      }
      catch (ConnectionFailureException connFailEx)
      {
        // ConnectionFailureException is thrown when the platform cannot connect.
        // ClientPlatforms will not throw this exception during startup.
        Console.WriteLine(connFailEx.Message);
        throw;
      }
      catch (RealTimeException realTimeEx)
      {
        // RealTimeException may be thrown as a result of any UCMA operation.
        Console.WriteLine(realTimeEx.Message);
        throw;
      }
      finally
      {
        // Synchronize threads.
        _platformStartupCompleted.Set();
      }

    }

    private void EndEndpointEstablish(IAsyncResult ar)
    {
      LocalEndpoint currentEndpoint = ar.AsyncState as LocalEndpoint;
      try
      {
        currentEndpoint.EndEstablish(ar);
      }
      catch (AuthenticationException authEx)
      {
        // AuthenticationException is thrown when the credentials are not valid.
        Console.WriteLine(authEx.Message);
        throw;
      }
      catch (ConnectionFailureException connFailEx)
      {
        // ConnectionFailureException is thrown when the endpoint cannot connect to the server, or the credentials are not valid.
        Console.WriteLine(connFailEx.Message);
        throw;
      }
      catch (InvalidOperationException iOpEx)
      {
        // InvalidOperationException is thrown when the endpoint is not in a valid state to connect. To connect, the platform must be started and the endpoint must be in the Idle state.
        Console.WriteLine(iOpEx.Message);
        throw;
      }
      finally
      {
        // Synchronize threads.
        _endpointInitCompletedEvent.Set();
      }
    }

    internal void ShutdownPlatform()
    {
      if (_collabPlatform != null)
      {
        _collabPlatform.BeginShutdown(EndPlatformShutdown, _collabPlatform);
      }

      // Synchronize threads.
      _platformShutdownCompletedEvent.WaitOne();
    }

    private void EndPlatformShutdown(IAsyncResult ar)
    {
      CollaborationPlatform collabPlatform = ar.AsyncState as CollaborationPlatform;

      try
      {
        // Shutdown actions do not throw.
        collabPlatform.EndShutdown(ar);
        Console.WriteLine("The platform is now shut down.");
      }
      finally
      {
        _platformShutdownCompletedEvent.Set();
      }
    }

    /// <summary>
    /// Read the local store for the certificate that is used to create the platform. This is necessary to establish a connection to the server.
    /// </summary>
    /// <param name="friendlyName">The friendly name of the certificate to use.</param>
    /// <returns>The certificate instance.</returns>
    public static X509Certificate2 GetLocalCertificate(string friendlyName)
    {
      X509Store store = new X509Store(StoreLocation.LocalMachine);

      store.Open(OpenFlags.ReadOnly);
      X509Certificate2Collection certificates = store.Certificates;
      store.Close();

      foreach (X509Certificate2 certificate in certificates)
      {
        if (certificate.FriendlyName.Equals(friendlyName, StringComparison.OrdinalIgnoreCase))
        {
          return certificate;
        }
      }
      return null;
    }

    public static void WriteLine(string line)
    {
      Console.WriteLine(line);
    }

    public static void WriteErrorLine(string line)
    {
      Console.ForegroundColor = ConsoleColor.Red;
      Console.WriteLine(line);
      Console.ResetColor();
    }

    public static void WriteException(Exception ex)
    {
      WriteErrorLine(ex.ToString());
    }

    /// <summary>
    /// Prompts the user to press a key, unblocking any waiting calls to the
    /// <code>WaitForSampleFinish</code> method.
    /// </summary>
    public static void FinishSample()
    {
      Console.WriteLine("Please hit any key to end the sample.");
      Console.ReadKey();
      _sampleFinished.Set();
    }

    /// <summary>
    /// Blocks the calling thread until FinishSample is called.
    /// </summary>
    public static void WaitForSampleFinish()
    {
      _sampleFinished.WaitOne();
    }
      
  }
}

The following code example, Converter.cs, contains a small class whose single method uses a specified encoding to convert a Byte array into a string. Contextual data that the UCMA 3.0 application sends or receives on the context channel must be formatted as a Byte array, so received payloads have to be converted back to strings before they can be used.

Converter.cs

using System;

namespace Microsoft.Rtc.Collaboration.Sample.Common
{
  /// <summary> 
  /// Encoding Types. 
  /// </summary> 
  public enum EncodingType
  {
     ASCII,
     Unicode,
     UTF7,
     UTF8,
     UTF32
  }
  class Converter
  {
    /// <summary> 
    /// Converts a byte array to a string using specified encoding. 
    /// </summary> 
    /// <param name="byteArray">The byte array to be converted.</param> 
    /// <param name="encodingType">A member of the EncodingType enumeration.</param> 
    /// <returns>string</returns> 
    public static System.String ConvertByteArrayToString(byte[] byteArray, EncodingType encodingType) 
    { 
      System.Text.Encoding encoding = null; 
      if (encodingType == EncodingType.ASCII)
        encoding = new System.Text.ASCIIEncoding();
      else if (encodingType == EncodingType.Unicode)
        encoding = new System.Text.UnicodeEncoding();
      else if (encodingType == EncodingType.UTF7)
        encoding = new System.Text.UTF7Encoding();
      else if (encodingType == EncodingType.UTF8)
        encoding = new System.Text.UTF8Encoding();
      else if (encodingType == EncodingType.UTF32)
        encoding = new System.Text.UTF32Encoding();
      if (encoding != null)
      {
        return encoding.GetString(byteArray);
      }
      else return "Bad encoding type.";
    }
  }
}
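
As a brief usage sketch, the following console program exercises the conversion. The byte array here is a hypothetical stand-in for the payload that channel_DataReceived retrieves from e.ContentDescription.GetBody(); because Converter is an internal class, a test class like this would live in the same project.

using System;
using Microsoft.Rtc.Collaboration.Sample.Common;

class ConverterExample
{
  static void Main()
  {
    // Hypothetical payload: the UTF-8 bytes of the string "exit",
    // as the Lync 2010 application sends it over the context channel.
    byte[] payload = new byte[] { 0x65, 0x78, 0x69, 0x74 };

    // Decode with the same helper that channel_DataReceived uses.
    string text = Converter.ConvertByteArrayToString(payload, EncodingType.UTF8);

    Console.WriteLine(text); // Prints: exit
  }
}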

Grammar File

The following code example, Flights.grxml, contains the Speech Recognition Grammar Specification (SRGS) XML grammar that is used for speech recognition in the Microsoft Unified Communications Managed API (UCMA) 3.0 application. For more information about this grammar, see Using UCMA 3.0 and Lync 2010 for Contextual Communication: Creating the UCMA Application (Part 3 of 6).

Flights.grxml

<?xml version="1.0" encoding="UTF-8" ?>
<grammar version="1.0" xml:lang="en-US" mode="voice" root= "Main"
xmlns="http://www.w3.org/2001/06/grammar" tag-format="semantics/1.0">

<rule id="Main" scope="public">
  <example>I would like to fly from Seattle to Denver</example>
  <tag>out.Origination=""; out.Destination="";</tag>
  <one-of>
     <item>I would like to fly </item>
     <item>I want to fly </item>  
     <item>I want a ticket </item>
  </one-of>
  <item>from</item>
  <ruleref uri="#Cities" type="application/srgs+xml"/>  
  <tag>out.Origination=rules.latest();</tag>
  <item>to</item>
  <ruleref uri="#Cities" type="application/srgs+xml"/> 
  <tag>out.Destination=rules.latest();</tag>
</rule>

<rule id="Cities">
<one-of>
  <item> Atlanta <tag>out="Atlanta, GA";</tag> </item>
  <item> Baltimore <tag>out="Baltimore, MD";</tag> </item>
  <item> Boston <tag>out="Boston, MA";</tag> </item>
  <item> Dallas <tag>out="Dallas, TX";</tag> </item>
  <item> Denver <tag>out="Denver, CO";</tag> </item>
  <item> Detroit <tag>out="Detroit, MI";</tag> </item>
  <item> Jackson <tag>out="Jackson, MS";</tag> </item>
  <item> Miami <tag>out="Miami, FL";</tag> </item>
  <item> New York <tag>out="New York, NY";</tag> </item>
  <item> Philadelphia <tag>out="Philadelphia, PA";</tag> </item>
  <item> Phoenix <tag>out="Phoenix, AZ";</tag> </item>
  <item> San Francisco <tag>out="San Francisco, CA";</tag> </item>
  <item> Seattle <tag>out="Seattle, WA";</tag> </item>
  <item> Vancouver <tag>out="Vancouver, BC";</tag> </item>
</one-of>
</rule>
</grammar>
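
For example, the utterance "I want to fly from Boston to Denver" matches the Main rule and yields the semantic values Origination = "Boston, MA" and Destination = "Denver, CO". The following console sketch checks that behavior offline by feeding the recognizer text instead of audio. The grammar path is a placeholder, and the snippet assumes that the Microsoft Speech Platform's EmulateRecognize method behaves as its System.Speech counterpart does; treat it as a smoke test, not part of the sample.

using System;
using Microsoft.Speech.Recognition;

class GrammarSmokeTest
{
  static void Main()
  {
    SpeechRecognitionEngine engine = new SpeechRecognitionEngine();

    // Hypothetical path; point this at your copy of Flights.grxml.
    engine.LoadGrammar(new Grammar(@"C:\temp\Flights.grxml", "Main"));
    engine.SetInputToNull();

    // Emulate recognition from text rather than audio input.
    RecognitionResult result = engine.EmulateRecognize("I want to fly from Boston to Denver");

    if (result != null)
    {
      Console.WriteLine(result.Semantics["Origination"].Value);  // Boston, MA
      Console.WriteLine(result.Semantics["Destination"].Value);  // Denver, CO
    }
    else
    {
      Console.WriteLine("No match.");
    }
  }
}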

Lync 2010 Application Code

The Lync 2010 application code appears in this section.

XAML Code for the Form

The Page.xaml example contains the XAML code for the form and its controls that appear in the Lync Conversation Window Extension.

Page.xaml

<UserControl x:Class="FlightChooser.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable= "d"
    d:DesignHeight="300" d:DesignWidth="500">

    <Grid x:Name="LayoutRoot" Background="White" Width="490">
        <TextBlock Height="23" HorizontalAlignment="Left" Margin="23,15,0,0" Name="Channel" Text="Channel status" VerticalAlignment="Top" />
        <TextBox Height="23" HorizontalAlignment="Left" Margin="113,12,0,0" Name="channelStatus" Text="Not ready" Foreground="red" VerticalAlignment="Top" Width="66" IsEnabled="true" />
        <TextBox Height="23" HorizontalAlignment="Left" Margin="133,114,0,0" Name="fltDestination" VerticalAlignment="Top" Width="120" IsEnabled="true"/>
        <TextBox Height="23" HorizontalAlignment="Left" Margin="133,78,0,0" Name="fltOrigination" VerticalAlignment="Top" Width="120" IsEnabled="true" />
        <TextBlock Height="23" HorizontalAlignment="Left" Margin="22,117,0,0" Name="textBlock1" Text="Destination city" VerticalAlignment="Top" Width="89" />
        <TextBlock Height="23" HorizontalAlignment="Left" Margin="22,81,0,0" Name="textBlock2" Text="Origination city" VerticalAlignment="Top" Width="89" />
        <TextBlock Height="23" HorizontalAlignment="Left" Margin="22,48,0,0" Name="textBlock5" Text="" VerticalAlignment="Top" Width="206" />
        <Button Content="Exit" Height="23" HorizontalAlignment="Left" Margin="23,192,0,0" Name="SendAdditionalData" Click="SendAdditionalData_Click" VerticalAlignment="Top" Width="75" />
        <TextBox Height="59" HorizontalAlignment="Left" Margin="74,229,0,0" Name="LoggerTextBox" VerticalAlignment="Top" Width="410" />
        <TextBlock Height="23" HorizontalAlignment="Left" Margin="30,249,0,0" Name="textBlock3" Text="Logger" VerticalAlignment="Top" />
        <TextBox Height="23" HorizontalAlignment="Left" Margin="135,150,0,0" Name="fltCost" VerticalAlignment="Top" Width="120" />
        <TextBlock Height="23" HorizontalAlignment="Left" Margin="22,149,0,0" Name="textBlock6" Text="Ticket price" VerticalAlignment="Top" Width="74" />
    </Grid>
</UserControl>

Form Code-Behind

The Page.xaml.cs example shows the C# code that interacts with the form that appears in the previous section. For more information, see Using Speech Recognition in UCMA 3.0 and Lync 2010: Lync Application (Part 4 of 5).

Page.xaml.cs

using System;
using System.Collections.Generic;
using System.Net;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using Microsoft.Lync.Model;
using Microsoft.Lync.Model.Extensibility;
using Microsoft.Lync.Model.Conversation;

namespace FlightChooser
{
  public partial class MainPage : UserControl
  {
    string AppId = "{C17C216F-04A9-4234-94C1-A2EA5F0C4873}";
    Conversation _conversation;
    ConversationWindow _conversationWindow;
    Automation _automation = LyncClient.GetAutomation();
    
    
    public MainPage()
    {
      InitializeComponent();
      Initialize();
    }

    // Get the hosting Conversation object and application data, register for event notification,
    // and then update the user interface to “Ready”.
    private void Initialize()
    {
      String appData;
      try
      {
        _conversation = (Conversation)Microsoft.Lync.Model.LyncClient.GetHostingConversation();
      }

      catch (LyncClientException ex)
      {
        Logger("LyncClientException error: " + ex.ToString());
      }

      catch (Exception ex)
      {
        Logger("Other conversation initialization error: " + ex.ToString());
      }

      _conversation.ContextDataReceived += OnContextDataReceived;
      _conversation.ContextDataSent += OnContextDataSent;
      _conversation.InitialContextReceived += OnInitialContextReceived;
      _conversationWindow = _automation.GetConversationWindow(_conversation);

      appData = _conversation.GetApplicationData(AppId);
      Logger("Application data: " + appData);
      if (appData.Contains("open"))
      {
        channelStatus.Foreground = new SolidColorBrush(Colors.Green);
        channelStatus.Text = "Ready";
      }
    }

    // Display a string in the Logger textbox.
    private void Logger(string text)
    {
      LoggerTextBox.Text += text + "\n";
    }

    // Handler for the InitialContextReceived event.
    public void OnInitialContextReceived(object sender, InitialContextEventArgs args)
    {
      channelStatus.Foreground = new SolidColorBrush(Colors.Green);
      channelStatus.Text = "Ready";

      Logger("InitialContextReceived event raised. Data received is " + args.ApplicationData);
    }

    // Handler for the ContextDataReceived event on the Conversation object.
    // This handler splits the string in args.ContextData into three substrings.
    // Semicolons divide the three substrings.
    public void OnContextDataReceived(object sender, ContextEventArgs args)
    {
      if ( (args != null) && (args.ContextData.Length != 0) )
      {
        // Split the ContextData string at the semicolons.
        string str = args.ContextData;
        string[] substr = str.Split(new char[] { ';' });

        // Populate the three data boxes.
        fltOrigination.Text = substr[0];
        fltDestination.Text = substr[1];
        fltCost.Text = substr[2];

        Logger("OnContextDataReceived: str = " + args.ContextData + "\nConversation ID = " + ((Conversation)sender).Properties[ConversationProperty.Id].ToString());
      }
    }

    // Handler for the ContextDataSent event on the Conversation object. 
    public void OnContextDataSent(object sender, ContextEventArgs args)
    {
      try
      {
        Logger("OnContextDataSent: AppId = " + args.ApplicationId +
                    " DataType = " + args.ContextDataType +
                    " Data = " + args.ContextData);
      }

      catch (Exception ex)
      {
        Logger("OnContextDataSent error: " + ex.ToString());
      }
    }

    // Handler for the Click event on the Exit button.
    private void SendAdditionalData_Click(object sender, RoutedEventArgs e)
    {
      try
      {
        Logger("Sending additional context");

        _conversation.BeginSendContextData(AppId, "text/plain", "exit", SendAdditionalDataCallBack, null);
        channelStatus.Foreground = new SolidColorBrush(Colors.Red);
        channelStatus.Text = "Closed";
        fltOrigination.Text = "";
        fltDestination.Text = "";
        fltCost.Text = "";
      }

      catch (InvalidOperationException ex)
      {
        Logger("Invalid operation in SendAdditionalData handler: " + ex.ToString());
      }

      catch (Exception ex)
      {
        Logger("Other SendAdditionalData error: " + ex.ToString());
      }

    }

    // Callback for the BeginSendContextData method.
    private void SendAdditionalDataCallBack(IAsyncResult asyncResult)
    {
      try
      {
        if (asyncResult.IsCompleted)
        {
          _conversation.EndSendContextData(asyncResult);

          Logger("Additional context sent successfully.");
          // Close the Conversation Window Extension.
          _conversationWindow.CloseExtensibilityWindow(AppId);
        }
        else
        {
          Logger("Could not send additional context: " + asyncResult.AsyncState);
        }
      }

      catch (Exception ex)
      {
        Logger("SendAdditionalDataCallBack error: " + ex.ToString());
      }
    }
  }

}

HTML Code

The following HTML code is used to generate the webpage that the Silverlight application opens. The code includes the IMG block that is used for the logo of the fictitious airline, and the object block that loads the application binary, FlightChooser.xap.

sample.html

<html>
<head>
</head>
<body>
<IMG src="Logo.jpg">
<object width="500" height="370" 
  data="data:application/x-silverlight-2," 
  type="application/x-silverlight-2" >
  <param name="source" value="FlightChooser.xap"/>
</object>
</body>
</html>

Conclusion

This series of articles shows how a UCMA 3.0 application and a Lync 2010 application can communicate, by using both the media channel and the context channel. Audio that is sent from the Lync 2010 application through the media channel is used as input to a SpeechRecognitionEngine object in the UCMA 3.0 application, and the recognized semantic results are sent back to the Lync 2010 application by way of the context channel. Recognition occurs when the spoken utterance matches the grammar that is loaded into the SpeechRecognitionEngine object.

Mark Parker is a programming writer at Microsoft whose current responsibility is the UCMA SDK documentation. Mark previously worked on the Microsoft Speech Server 2007 documentation.
