Using VoiceXML in a UCMA 3.0 Application: Code Listing and Conclusion (Part 4 of 4)

Summary:   The Microsoft Unified Communications Managed API (UCMA) 3.0 Core SDK can be used to write interactive voice response (IVR) applications that work with VoiceXML documents. This article lists the code that is described in the other articles in this series.

Applies to:   Microsoft Unified Communications Managed API (UCMA) 3.0 Core SDK

Published:   June 2011 | Provided by:   Mark Parker, Microsoft


This is the last in a series of four articles about how to use VoiceXML in a Microsoft Unified Communications Managed API (UCMA) 3.0 application.

The following example (App.Config) is used for application configuration.

<?xml version="1.0"?>
<configuration>
  <appSettings>
    <!-- Provide parameters necessary for the sample to run without prompting for user input. -->
    <!-- Provide the FQDN of the Microsoft Lync Server 2010 computer. -->
    <!-- <add key="ServerFQDN1" value="" /> -->
    <!-- The user sign-in name that is used to sign in to the application. -->
    <!-- To use credentials used by the currently signed-in user, do not add a value. -->
    <!-- <add key="UserName1" value="" /> -->
    <!-- The user domain name that is used to sign in to the application. -->
    <!-- To use credentials used by the currently signed-in user, do not add a value. -->
    <!-- <add key="UserDomain1" value="" /> -->
    <!-- The user URI that is used to sign in to the application, in the format user@host. -->
    <!-- <add key="UserURI1" value="" /> -->
   </appSettings>
</configuration>

The following example is the main application code.

using System;
using System.Threading;
using System.Configuration;
using Microsoft.Rtc.Collaboration;
using Microsoft.Rtc.Collaboration.AudioVideo;
using Microsoft.Rtc.Collaboration.AudioVideo.VoiceXml;
using Microsoft.Rtc.Collaboration.Sample.Common;
using Microsoft.Speech.VoiceXml;
using Microsoft.Speech.VoiceXml.Common;


namespace Microsoft.Rtc.Collaboration.Sample.VoiceXmlTime
{
  // After the application is started, it waits for an incoming audio-video call. When a call arrives, 
  // the application creates and initializes a Browser instance that loads and interprets a VoiceXML dialog. 
  // The dialog prompts the user for the name of a city, reissuing the prompt if necessary.
  // If the name of the city matches a city in an SRGS grammar, the VoiceXML dialog returns 
  // the time at the requested city.
  // After the call ends, the application shuts down the platform, and then pauses so that the 
  // user can view log messages in the console.
  // The UCMA application logs in as UserName1, given in App.config.
  
  public class VoiceXmlAVCall
  {
    // Global variables.
    private UCMASampleHelper _helper;
    private UserEndpoint _userEndpoint;
    private CollaborationPlatform _collabPlatform;
    private AudioVideoCall _audioVideoCall;
    private AudioVideoFlow _audioVideoFlow;

    // The conversation.
    private Conversation _conversation;
 
    // The VoiceXML Browser and the location of the VoiceXML start page.
    private Microsoft.Rtc.Collaboration.AudioVideo.VoiceXml.Browser _voiceXmlBrowser;
    private String startPageURL = @"http://localhost/VoiceXmlTime/GetCityTime.vxml"; 
  
    // Wait handles to keep the main thread and worker thread synchronized.
    private AutoResetEvent _waitForCallReceived = new AutoResetEvent(false);
    private AutoResetEvent _waitForCallAccepted = new AutoResetEvent(false);
    private AutoResetEvent _waitForSessionCompleted = new AutoResetEvent(false);
    private AutoResetEvent _waitForPlatformShutdownCompleted = new AutoResetEvent(false);
    private AutoResetEvent _waitForAudioVideoCallEstablishCompleted = new AutoResetEvent(false);
    private AutoResetEvent _waitForAudioVideoFlowStateChangedToActiveCompleted = new AutoResetEvent(false);

    
    static void Main(string[] args)
    {
      VoiceXmlAVCall BasicAVCall = new VoiceXmlAVCall();
      BasicAVCall.Run();
    }

    public void Run()
    {
      // Create and establish the endpoint, using the credentials of the user the application will be acting as.
      _helper = new UCMASampleHelper();
      _userEndpoint = _helper.CreateEstablishedUserEndpoint("VoiceXML Sample User" /*endpointFriendlyName*/);
      _userEndpoint.RegisterForIncomingCall<AudioVideoCall>(inboundAVCall_CallReceived);
      
      // Pause the main thread until a call is received and then accepted.
      _waitForCallReceived.WaitOne();
      _waitForCallAccepted.WaitOne();

      InitializeVoiceXmlBrowser();
      _voiceXmlBrowser.SetAudioVideoCall(_audioVideoCall);
      Uri startPageURI = new Uri(startPageURL);
      Console.WriteLine("Browser state: " + _voiceXmlBrowser.State.ToString());
      _voiceXmlBrowser.RunAsync(startPageURI, null);
      _waitForSessionCompleted.WaitOne();
              
      _collabPlatform = _conversation.Endpoint.Platform;
      // Terminate the call.
      _audioVideoCall.BeginTerminate(CallTerminateCB, _audioVideoCall);
         
      _waitForPlatformShutdownCompleted.WaitOne();

      // Pause the console to allow the user to view logs.
      Console.WriteLine("Press any key to end the sample.");
      Console.ReadKey();  
    }

    // Initializes the Browser object and registers event handlers.
    private void InitializeVoiceXmlBrowser()
    {
      // Create a Browser instance if one doesn’t already exist.
      if (_voiceXmlBrowser == null)
      {
        // Create the browser object, and bind all associated event handlers. 
        Console.WriteLine("Call state: " + _audioVideoCall.State.ToString() + "\nMedia flow state: " + _audioVideoCall.Flow.State.ToString());

        _voiceXmlBrowser = new Microsoft.Rtc.Collaboration.AudioVideo.VoiceXml.Browser();

        _voiceXmlBrowser.Disconnecting
                += new EventHandler<DisconnectingEventArgs>(HandleDisconnecting);
        _voiceXmlBrowser.Disconnected
                += new EventHandler<DisconnectedEventArgs>(HandleDisconnected);
         _voiceXmlBrowser.SessionCompleted
                += new EventHandler<SessionCompletedEventArgs>(HandleSessionCompleted);
      }
    }

    #region EVENT HANDLERS
  
    // Handler for the StateChanged event on the incoming call.
    void audioVideoCall_StateChanged(object sender, CallStateChangedEventArgs e)
    {
      Console.WriteLine("Call has changed state.\nPrevious state: " + e.PreviousState + "\nCurrent state: " + e.State);
    }
    
    // Handler for the StateChanged event on an AudioVideoFlow instance.
    private void audioVideoFlow_StateChanged(object sender, MediaFlowStateChangedEventArgs e)
    {
      // When the flow is active, media operations can begin.
      Console.WriteLine("Previous flow state: " + e.PreviousState.ToString() + "\nNew flow state: " + e.State.ToString());
    }

    // Handler for the AudioVideoFlowConfigurationRequested event on the call.
    // This event is raised when the call's flow becomes available (that is, when Call.Flow is no longer null) and media operations can be configured.
    public void audioVideoCall_FlowConfigurationRequested(object sender, AudioVideoFlowConfigurationRequestedEventArgs e)
    {
      Console.WriteLine("Flow Created.");
      _audioVideoFlow = e.Flow;

      // Now that the flow is non-null, bind the event handler for the StateChanged event.
      // When the flow goes active, (as indicated by the StateChanged event) the application can take media-related actions on the flow.
      _audioVideoFlow.StateChanged += new EventHandler<MediaFlowStateChangedEventArgs>(audioVideoFlow_StateChanged);
    }

    // The delegate to be called when the incoming call arrives. 
    private void inboundAVCall_CallReceived(object sender, CallReceivedEventArgs<AudioVideoCall> e)
    {
      _waitForCallReceived.Set();
      _audioVideoCall = e.Call;
      
      _audioVideoCall.AudioVideoFlowConfigurationRequested += this.audioVideoCall_FlowConfigurationRequested;
      _audioVideoCall.StateChanged += new EventHandler<CallStateChangedEventArgs>(audioVideoCall_StateChanged);

      // Create a new conversation instance.
      _conversation = new Conversation(_userEndpoint);
      // Accept the call.
      _audioVideoCall.BeginAccept(CallAcceptCB, _audioVideoCall);
      _audioVideoFlow = _audioVideoCall.Flow;
    }

    #endregion 


    #region BROWSER EVENT HANDLERS
 
    // Handler for the SessionCompleted event on the Browser object.
    // This implementation writes the values returned by the VoiceXML dialog to the console.
    private void HandleSessionCompleted(object sender, SessionCompletedEventArgs e)
    {
      _waitForSessionCompleted.Set();
      VoiceXmlResult result = e.Result;
      String cityOffset = result.Namelist["CityOffset"].ToString();
      String utterance = result.Namelist["CityOffset$.utterance"].ToString();
      String confidence = result.Namelist["CityOffset$.confidence"].ToString();
      String requestedTime = result.Namelist["timeAtRequestedCity"].ToString();
      Console.WriteLine("Returned semantic result: " + cityOffset);
      Console.WriteLine("Utterance: " + utterance);
      Console.WriteLine("Confidence: " + confidence);
      Console.WriteLine("Requested time: " + requestedTime);
    }

    // Handler for the Disconnecting event on the Browser object.
    private void HandleDisconnecting(object sender, DisconnectingEventArgs e)
    {
      Console.WriteLine("Disconnecting.");
    }

    // Handler for the Disconnected event on the Browser object.
    private void HandleDisconnected(object sender, DisconnectedEventArgs e)
    {
      Console.WriteLine("Disconnected.");
    }
     
    #endregion

    #region CALLBACKS

    // Callback referenced in the BeginAccept method on the call.
    private void CallAcceptCB(IAsyncResult ar)
    {
      if (ar.IsCompleted)
      {
        // Complete the accept operation before releasing the main thread.
        _audioVideoCall.EndAccept(ar);
        Console.WriteLine("Call is now accepted.");
        _waitForCallAccepted.Set();
      }
      else
      {
        Console.WriteLine("Couldn't accept the call.");
      }
    }

    // Callback referenced in the BeginTerminate method on the call.
    private void CallTerminateCB(IAsyncResult ar)
    {
      AudioVideoCall AVCall = ar.AsyncState as AudioVideoCall;

      // Complete the termination of the incoming call.
      AVCall.EndTerminate(ar);

      // Terminate the conversation.
      IAsyncResult result = _audioVideoCall.Conversation.BeginTerminate(ConversationTerminateCB, _audioVideoCall.Conversation);
      Console.WriteLine("Waiting for the conversation to be terminated...");
    }

    // Callback referenced in the BeginTerminate method on the conversation.
    private void ConversationTerminateCB(IAsyncResult ar)
    {
      Conversation conv = ar.AsyncState as Conversation;

      // Complete the termination of the conversation.
      conv.EndTerminate(ar);

      // Now, clean up by shutting down the platform.
      Console.WriteLine("Shutting down the platform...");

      _collabPlatform.BeginShutdown(PlatformShutdownCB, _collabPlatform);
    }

    // Callback referenced in the BeginShutdown method on the platform.
    private void PlatformShutdownCB(IAsyncResult ar)
    {
      CollaborationPlatform collabPlatform = ar.AsyncState as CollaborationPlatform;
      try
      {
        // Shutdown actions will not throw.
        collabPlatform.EndShutdown(ar);
        Console.WriteLine("The platform is now shut down.");
      }
      finally
      {
        _waitForPlatformShutdownCompleted.Set();
      }
    }

    #endregion


  }
  
}

The next code sample consists of a class whose member methods create and start a CollaborationPlatform instance, and then create and establish the UserEndpoint instance that is used in the sample. The code appearing in this article is an abbreviated version of the code that appears in the UCMA 3.0 SDK UCMASampleCode.cs file.

The following code differs from UCMASampleCode.cs in the following ways.

  1. The sample presented here is shorter. Methods related to creating and establishing an ApplicationEndpoint instance are removed. The following methods are removed.

    • CreateUserEndpointWithServerPlatform

    • ReadGenericApplicationContactConfiguration

    • ReadApplicationContactConfiguration

    • CreateApplicationEndpoint

    • CreateAndStartServerPlatform

  2. A number of unused private fields are excluded from the following code. Code that uses one of these fields, _serverCollabPlatform, is also excluded from the code sample.

  3. Two variables have been added: _remoteUserURIPrompt and _remoteUserURI.

  4. The GetRemoteUserURI method is added to the following code.

The following example contains a class that has helper methods that are used to create an established endpoint.

using System;
using System.Configuration;
using System.Globalization;
using System.Runtime.InteropServices;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading;

using Microsoft.Rtc.Collaboration;
using Microsoft.Rtc.Signaling;

namespace Microsoft.Rtc.Collaboration.Sample.Common
{
  class UCMASampleHelper
  {
    private static ManualResetEvent _sampleFinished = new ManualResetEvent(false);

    // The name of this application, to be used as the outgoing user agent string.
    // The user agent string is put in outgoing message headers to indicate the application that is used.
    private static string _applicationName = "UCMASampleCode";

    const string _sipPrefix = "sip:";

    // These strings are used as keys into the App.Config file to get information to avoid prompting. For most of these strings,
    // suffixes 1-N are used on each subsequent call. For example, UserName1 is used for the first user and UserName2 for the second user.
    private static String _serverFQDNPrompt = "ServerFQDN";
    private static String _userNamePrompt = "UserName";
    private static String _userDomainPrompt = "UserDomain";
    private static String _userURIPrompt = "UserURI";
    private static String _remoteUserURIPrompt = "RemoteUserURI";

    // Construct the network credential that the UserEndpoint will use for authentication by the Microsoft Lync Server 2010 computer.
    private string _userName; // User name and password pair of a user who is authorized to access Lync Server 2010. 
    private string _userPassword;
    private string _userDomain; // Domain that this user signs in to. Note: This is the Active Directory domain, not the portion of the SIP URI following the ‘@’ sign.
    private System.Net.NetworkCredential _credential;

    // The user URI and connection server of the user used.
    private string _userURI; // This should be the URI of the user specified earlier.
    private string _remoteUserURI; // The URI of the remote endpoint.

    // The Server FQDN.
    private static string _serverFqdn;// The FQDN of the Microsoft Lync Server 2010 computer.

    // Transport type used to communicate with the Microsoft Lync Server 2010 computer.
    private Microsoft.Rtc.Signaling.SipTransportType _transportType = Microsoft.Rtc.Signaling.SipTransportType.Tls;

    private static CollaborationPlatform _collabPlatform;
    private static bool _isPlatformStarted;
    private AutoResetEvent _platformStartupCompleted = new AutoResetEvent(false);
    private AutoResetEvent _endpointInitCompletedEvent = new AutoResetEvent(false);
    private AutoResetEvent _platformShutdownCompletedEvent = new AutoResetEvent(false);
    private UserEndpoint _userEndpoint;

    private bool _useSuppliedCredentials;
    private static int _appContactCount;
    private static int _userCount = 1;

    // This method attempts to read user settings from the App.Config file. If the settings are not
    // present in the configuration file, this method prompts the user for them in the console.
    // This method returns a UserEndpointSettings object. If you do not want to monitor LocalOwnerPresence, you can 
    // call the CreateEstablishedUserEndpoint method directly. Otherwise, you can call the ReadUserSettings,
    // CreateUserEndpoint, and EstablishUserEndpoint methods, in that order.
    public UserEndpointSettings ReadUserSettings(string userFriendlyName)
    {
      UserEndpointSettings userEndpointSettings = null;
      string prompt = string.Empty;
      if (string.IsNullOrEmpty(userFriendlyName))
      {
        userFriendlyName = "Default User";
      }

      try
      {
        Console.WriteLine(string.Empty);
        Console.WriteLine("Creating User Endpoint for {0}...", userFriendlyName);
        Console.WriteLine();

        if (ConfigurationManager.AppSettings[_serverFQDNPrompt + _userCount] != null)
        {
          _serverFqdn = ConfigurationManager.AppSettings[_serverFQDNPrompt + _userCount];
          Console.WriteLine("Using {0} as Microsoft Lync Server", _serverFqdn);
        }
        else
        {
          // Prompt user for server FQDN. If server FQDN was entered previously, then use the saved value.
          string localServer;
          StringBuilder promptBuilder = new StringBuilder();
          if (!string.IsNullOrEmpty(_serverFqdn))
          {
            promptBuilder.Append("Current Microsoft Lync Server = ");
            promptBuilder.Append(_serverFqdn);
            promptBuilder.AppendLine(". Please hit ENTER to retain this setting - OR - ");
          }

          promptBuilder.Append("Please enter the FQDN of the Microsoft Lync Server that the ");
          promptBuilder.Append(userFriendlyName);
          promptBuilder.Append(" endpoint is homed on => ");
          localServer = PromptUser(promptBuilder.ToString(), null);

          if (!String.IsNullOrEmpty(localServer))
          {
            _serverFqdn = localServer;
          }
        }

        // Prompt user for user name
        prompt = String.Concat("Please enter the User Name for ",
                                    userFriendlyName,
                                    " (or hit the ENTER key to use current credentials)\r\n" +
                                    "Please enter the User Name => ");
        _userName = PromptUser(prompt, _userNamePrompt + _userCount);

        // If user name is empty, use current credentials
        if (string.IsNullOrEmpty(_userName))
        {
          Console.WriteLine("Username was empty - using current credentials...");
          _useSuppliedCredentials = true;
        }
        else
        {
          // Prompt for password
          prompt = String.Concat("Enter the User Password for ", userFriendlyName, " => ");
          _userPassword = PromptUser(prompt, null);

          prompt = String.Concat("Please enter the User Domain for ", userFriendlyName, " => ");
          _userDomain = PromptUser(prompt, _userDomainPrompt + _userCount);
        }

        // Prompt user for user URI
        prompt = String.Concat("Please enter the User URI for ", userFriendlyName, " in the User@Host format => ");
        _userURI = PromptUser(prompt, _userURIPrompt + _userCount);
        if (!(_userURI.ToLower().StartsWith("sip:") || _userURI.ToLower().StartsWith("tel:")))
        {
          _userURI = "sip:" + _userURI;
        }
        // Increment the last user number
        _userCount++;

        // Initialize and register the endpoint, using the credentials of the user that the application represents.
        // NOTE: the _userURI should always use the "sip:user@host" format.
        userEndpointSettings = new UserEndpointSettings(_userURI, _serverFqdn);

        if (!_useSuppliedCredentials)
        {
          _credential = new System.Net.NetworkCredential(_userName, _userPassword, _userDomain);
          userEndpointSettings.Credential = _credential;
        }
        else
        {
          userEndpointSettings.Credential = System.Net.CredentialCache.DefaultNetworkCredentials;
        }
      }
      catch (InvalidOperationException iOpEx)
      {
        // InvalidOperationException should be thrown only on poorly-entered input.
        Console.WriteLine("Invalid Operation Exception: " + iOpEx.ToString());
      }

      return userEndpointSettings;
    }

    // This method creates an endpoint, using the specified UserEndpointSettings object.
    // This method returns a UserEndpoint object so that you can register endpoint-specific event handlers. 
    // If you do not want to get endpoint-specific event information at the time the endpoint is established, you can 
    // call the CreateEstablishedUserEndpoint method directly. Otherwise, you can call the ReadUserSettings,
    // CreateUserEndpoint, and EstablishUserEndpoint methods, in that order.
    public UserEndpoint CreateUserEndpoint(UserEndpointSettings userEndpointSettings)
    {
      // Reuse the platform instance so that all endpoints share the same platform.
      if (_collabPlatform == null)
      {
        // Initialize and start the platform.
        ClientPlatformSettings clientPlatformSettings = new ClientPlatformSettings(_applicationName, _transportType);
        _collabPlatform = new CollaborationPlatform(clientPlatformSettings);
      }

      _userEndpoint = new UserEndpoint(_collabPlatform, userEndpointSettings);
      return _userEndpoint;
    }
    

    // This method establishes a previously created UserEndpoint.
    // This method returns an established UserEndpoint object. If you do not want to monitor LocalOwnerPresence, you can 
    // call the CreateEstablishedUserEndpoint method directly. Otherwise, you can call the ReadUserSettings,
    // CreateUserEndpoint, and EstablishUserEndpoint methods, in that order.
    public bool EstablishUserEndpoint(UserEndpoint userEndpoint)
    {
       // Start the platform, if not already started.
       if (_isPlatformStarted == false)
       {
         userEndpoint.Platform.BeginStartup(EndPlatformStartup, userEndpoint.Platform);

         // Wait for the platform startup to be completed.
         _platformStartupCompleted.WaitOne();
         Console.WriteLine("Platform started...");
         _isPlatformStarted = true;
       }
       // Establish the user endpoint.
       userEndpoint.BeginEstablish(EndEndpointEstablish, userEndpoint);

      // Wait until the endpoint is established.
      _endpointInitCompletedEvent.WaitOne();
      Console.WriteLine("Endpoint established...");
      return true;
    }

    

    // This method creates an established UserEndpoint.
    // This method returns an established UserEndpoint object. If you do not want to monitor LocalOwnerPresence, you can 
    // call this CreateEstablishedUserEndpoint method directly. Otherwise, you can call the ReadUserSettings,
    // CreateUserEndpoint, and EstablishUserEndpoint methods, in that order.
    public UserEndpoint CreateEstablishedUserEndpoint(string endpointFriendlyName)
    {
      UserEndpointSettings userEndpointSettings;
      UserEndpoint userEndpoint = null;
      try
      {
        // Read user settings
        userEndpointSettings = ReadUserSettings(endpointFriendlyName);

        // Create User Endpoint
        userEndpoint = CreateUserEndpoint(userEndpointSettings);

        // Establish the user endpoint
        EstablishUserEndpoint(userEndpoint);
      }
      catch (InvalidOperationException iOpEx)
      {
        // InvalidOperationException should be thrown only on poorly-entered input.
        Console.WriteLine("Invalid Operation Exception: " + iOpEx.ToString());
      }

      return userEndpoint;
    }

    // Returns the remote user URI.
    // This method is not present in the original UCMASampleHelper.cs
    public String GetRemoteUserURI()
    {
      String str = "";
      try
      {
        if (ConfigurationManager.AppSettings[_remoteUserURIPrompt + _userCount] != null)
        {
          _remoteUserURI = ConfigurationManager.AppSettings[_remoteUserURIPrompt + _userCount];
          Console.WriteLine("\nUsing {0} as remote user", _remoteUserURI);
          return _remoteUserURI;
        }
        else
        {
          // Prompt user for remote user URI
          _remoteUserURI = UCMASampleHelper.PromptUser("Enter the URI for the remote user logged onto Communicator, in the sip:User@Host format or tel:+1XXXYYYZZZZ format => ", "RemoteUserURI");
          return _remoteUserURI;
        }
      }
      catch (InvalidOperationException iOpEx)
      {
        // Invalid Operation Exception should only be thrown on poorly-entered input.
        Console.WriteLine("Invalid Operation Exception: " + iOpEx.ToString());
        return str;
      }
    }

    /// <summary>
    /// If the 'key' is not found in App.Config, prompt the user in the console, using the specified prompt text.
    /// </summary>
    /// <param name="promptText"> The text to be displayed in the console if ‘key’ is not found in App.Config.</param>
    /// <param name="key"> Searches for this key in App.Config and returns a value if found. Pass null to always display a message that requests user input.</param>
    /// <returns>String value either from App.Config or user input.</returns>
    public static string PromptUser(string promptText, string key)
    {
      String value;
      if (String.IsNullOrEmpty(key) || ConfigurationManager.AppSettings[key] == null)
      {
        Console.WriteLine(string.Empty);
        Console.Write(promptText);
        value = Console.ReadLine();
      }
      else
      {
        value = ConfigurationManager.AppSettings[key];
        Console.WriteLine("Using keypair {0} - {1} from AppSettings...", key, value);
      }

      return value;
    }


    private void EndPlatformStartup(IAsyncResult ar)
    {
      CollaborationPlatform collabPlatform = ar.AsyncState as CollaborationPlatform;
      try
      {
        // The platform should now be started.
        collabPlatform.EndStartup(ar);
        // It should be noted that all the re-thrown exceptions will crash the application. This is intentional.
        // Ideal exception handling will report the error and force the application to shut down in an orderly manner. 
        // In production code, consider using an IAsyncResult implementation to report the error 
        // instead of throwing. Alternatively, put the implementation in this try block.
      }
      catch (OperationFailureException opFailEx)
      {
        // OperationFailureException is thrown when the platform cannot establish, usually due to invalid data.
        Console.WriteLine(opFailEx.Message);
        throw;
      }
      catch (ConnectionFailureException connFailEx)
      {
        // ConnectionFailureException is thrown when the platform cannot connect.
        // ClientPlatforms will not throw this exception during startup.
        Console.WriteLine(connFailEx.Message);
        throw;
      }
      catch (RealTimeException realTimeEx)
      {
        // RealTimeException may be thrown as a result of any UCMA operation.
        Console.WriteLine(realTimeEx.Message);
        throw;
      }
      finally
      {
        // Synchronize threads.
        _platformStartupCompleted.Set();
      }

    }

    private void EndEndpointEstablish(IAsyncResult ar)
    {
      LocalEndpoint currentEndpoint = ar.AsyncState as LocalEndpoint;
      try
      {
        currentEndpoint.EndEstablish(ar);
      }
      catch (AuthenticationException authEx)
      {
        // AuthenticationException is thrown when the credentials are not valid.
        Console.WriteLine(authEx.Message);
        throw;
      }
      catch (ConnectionFailureException connFailEx)
      {
        // ConnectionFailureException is thrown when the endpoint cannot connect to the server, or the credentials are not valid.
        Console.WriteLine(connFailEx.Message);
        throw;
      }
      catch (InvalidOperationException iOpEx)
      {
        // InvalidOperationException is thrown when the endpoint is not in a valid state to connect. To connect, the platform must be started and the endpoint must be in the Idle state.
        Console.WriteLine(iOpEx.Message);
        throw;
      }
      finally
      {
        // Synchronize threads.
        _endpointInitCompletedEvent.Set();
      }
    }

    internal void ShutdownPlatform()
    {
      if (_collabPlatform != null)
      {
        _collabPlatform.BeginShutdown(EndPlatformShutdown, _collabPlatform);
      }

      // if (_serverCollabPlatform != null)
      // {
      //   _serverCollabPlatform.BeginShutdown(EndPlatformShutdown, _serverCollabPlatform);
      //}

      // Synchronize threads.
      _platformShutdownCompletedEvent.WaitOne();
    }

    private void EndPlatformShutdown(IAsyncResult ar)
    {
      CollaborationPlatform collabPlatform = ar.AsyncState as CollaborationPlatform;

      try
      {
        // Shutdown actions do not throw.
        collabPlatform.EndShutdown(ar);
        Console.WriteLine("The platform is now shut down.");
      }
      finally
      {
        _platformShutdownCompletedEvent.Set();
      }
    }

    /// <summary>
    /// Read the local store for the certificate that is used to create the platform. This is necessary to establish a connection to the server.
    /// </summary>
    /// <param name="friendlyName">The friendly name of the certificate to use.</param>
    /// <returns>The certificate instance.</returns>
    public static X509Certificate2 GetLocalCertificate(string friendlyName)
    {
      X509Store store = new X509Store(StoreLocation.LocalMachine);

      store.Open(OpenFlags.ReadOnly);
      X509Certificate2Collection certificates = store.Certificates;
      store.Close();

      foreach (X509Certificate2 certificate in certificates)
      {
        if (certificate.FriendlyName.Equals(friendlyName, StringComparison.OrdinalIgnoreCase))
        {
          return certificate;
        }
      }
      return null;
    }

    public static void WriteLine(string line)
    {
      Console.WriteLine(line);
    }

    public static void WriteErrorLine(string line)
    {
      Console.ForegroundColor = ConsoleColor.Red;
      Console.WriteLine(line);
      Console.ResetColor();
    }

    public static void WriteException(Exception ex)
    {
      WriteErrorLine(ex.ToString());
    }

    /// <summary>
    /// Prompts the user to press a key, unblocking any waiting calls to the
    /// <code>WaitForSampleFinish</code> method
    /// </summary>
    public static void FinishSample()
    {
      Console.WriteLine("Please hit any key to end the sample.");
      Console.ReadKey();
      _sampleFinished.Set();
    }

    /// <summary>
    /// Blocks the calling thread until the FinishSample method unblocks it.
    /// </summary>
    public static void WaitForSampleFinish()
    {
      _sampleFinished.WaitOne();
    }
      
  }
}

The following example is the VoiceXML document.

<?xml version="1.0" encoding="utf-8" ?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml" xml:lang="en-US" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/vxml http://www.w3.org/TR/voicexml21/vxml.xsd" >

  <script>
  <![CDATA[
    var DST = true; // Set this to true if Daylight Savings Time is in effect.
    var isPM = false;
    var hourAtRequestedCity = 0;
    var timeAtRequestedCity = 0;

    var now = new Date();
    var localHour = Number(now.getHours());
    var localMins = Number(now.getMinutes());
    if (0 <= localMins && localMins <= 9)
    {
       localMins = "0" + localMins.toString();
    }
  ]]>
  </script>
  <form id="get_city">
    <field name="CityOffset">
      <prompt bargein="true" bargeintype="speech" timeout="10s">
         Say the name of a city to find the present time there.
      </prompt>
      <grammar type="application/srgs+xml" src="http://localhost/VoiceXmlTime/CityTimeOffsets.grxml" />
      
      <catch event="nomatch">
        <prompt bargein="true" bargeintype="speech" timeout="4s">
          Sorry, I don't know that city.
        </prompt>
        <reprompt/>
      </catch>    
      
      <catch event="noinput">
        <prompt bargein="true" bargeintype="speech" timeout="4s">
          I didn't hear you.
        </prompt>
        <reprompt/>
      </catch>

      <filled>
        <script>
        <![CDATA[
          var offset = Number(CityOffset);
          hourAtRequestedCity = localHour.valueOf() + offset.valueOf();
          if (DST) 
          {
            // Increment the time if Daylight Savings Time is in effect.
            hourAtRequestedCity++; 
          }
          if (hourAtRequestedCity < 0)
          {
            // Wrap around midnight for negative offsets.
            hourAtRequestedCity = hourAtRequestedCity + 24;
          }
          if (hourAtRequestedCity > 24)
          {
            hourAtRequestedCity = hourAtRequestedCity - 24;
            isPM = false;
          }
          else if (hourAtRequestedCity > 12)
          {
            hourAtRequestedCity = hourAtRequestedCity - 12;
            isPM = true;
          }
          
          timeAtRequestedCity = hourAtRequestedCity.toString() + ":" + localMins.toString();
          if (isPM)
          {
            timeAtRequestedCity = timeAtRequestedCity + " PM.";
          }
          else
          {
            timeAtRequestedCity = timeAtRequestedCity + " AM.";
          }
          
        ]]> 
        </script>
        
        The time in <value expr="CityOffset$.utterance"/> is <value expr="timeAtRequestedCity"/> 
        <exit namelist="CityOffset CityOffset$.utterance CityOffset$.confidence timeAtRequestedCity"/>  
      </filled> 
    </field>
  </form>
</vxml>

The following example is the SRGS grammar that returns the time offset for the cities listed in it.

<?xml version="1.0" encoding="UTF-8" ?>
<grammar version="1.0" xml:lang="en-US" mode="voice" root= "Main"
xmlns="http://www.w3.org/2001/06/grammar" tag-format="semantics/1.0">

<rule id="Main" scope = "public">
  <one-of>
    <item> Anchorage <tag>out="-9";</tag> </item>
    <item> Atlanta <tag>out="-5";</tag> </item>
    <item> Baltimore <tag>out="-5";</tag> </item>
    <item> Boston <tag>out="-5";</tag> </item>
    <item> Dallas <tag>out="-6";</tag> </item>
    <item> Denver <tag>out="-7";</tag> </item>
    <item> Detroit <tag>out="-5";</tag> </item>
    <item> Honolulu <tag>out="-10";</tag> </item>
    <item> London <tag>out="0";</tag> </item>
    <item> Moscow <tag>out="3";</tag> </item>
    <item> New York <tag>out="-5";</tag> </item>
    <item> Philadelphia <tag>out="-5";</tag> </item>
    <item> Phoenix <tag>out="-7";</tag> </item>
    <item> Rome <tag>out="1";</tag> </item>
    <item> San Francisco <tag>out="-8";</tag> </item>
    <item> Seattle <tag>out="-8";</tag> </item>
    <item> Vancouver <tag>out="-8";</tag> </item>  </one-of>
</rule>
</grammar>
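The arithmetic that the VoiceXML script performs on the grammar's semantic result can also be sketched in isolation. The following C# snippet is not part of the sample; TimeOffsetDemo and TimeAtCity are hypothetical names, and, like the dialog, the code treats the grammar's offsets as if they applied directly to the server's clock.

```csharp
using System;

class TimeOffsetDemo
{
    // Hypothetical helper (not in the sample): converts the server's local hour
    // and minutes plus a grammar offset such as "-5" into a 12-hour clock string,
    // mirroring the computation in the VoiceXML <script> element.
    public static string TimeAtCity(int localHour, int minutes, int offset, bool dst)
    {
        // Normalize into 0..23, wrapping negative results (for example, 2 - 9 = -7 becomes 17).
        int hour24 = ((localHour + offset + (dst ? 1 : 0)) % 24 + 24) % 24;
        bool isPM = hour24 >= 12;
        int hour12 = hour24 % 12;
        if (hour12 == 0) hour12 = 12;   // Both 0 and 12 display as 12 on a 12-hour clock.
        return string.Format("{0}:{1:D2} {2}", hour12, minutes, isPM ? "PM" : "AM");
    }

    static void Main()
    {
        Console.WriteLine(TimeAtCity(14, 5, -5, false));  // New York offset -> 9:05 AM
        Console.WriteLine(TimeAtCity(2, 15, -9, false));  // Anchorage offset, wraps below zero -> 5:15 PM
        Console.WriteLine(TimeAtCity(20, 30, 3, true));   // Moscow offset with DST -> 12:30 AM
    }
}
```

Note that the modulo normalization handles both the after-midnight and before-midnight wraparound cases in one step, which the inline VoiceXML script handles with separate comparisons.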

If you plan to develop and deploy an interactive voice response (IVR) application that uses VoiceXML, consider using a UCMA 3.0 application for its call control capabilities and its ability to interpret VoiceXML documents. The scenario and code examples that appear in this set of articles can help you get started.

Mark Parker is a programming writer at Microsoft whose current responsibility is the UCMA SDK documentation. Mark previously worked on the Microsoft Speech Server 2007 documentation.
