Creating Call Center Applications with Unified Communications SDKs: Introduction (Part 1 of 4)

Summary:   Learn how to use Unified Communications SDKs to implement speech synthesis, speech recognition, and call control technologies to create a simple call center application.

Applies to:   Microsoft Unified Communications Managed API (UCMA) 3.0 Core SDK or Microsoft Unified Communications Managed API 4.0 SDK | Microsoft Lync 2010 SDK | Microsoft Speech Platform SDK (x64) version 10.2

Published:   April 2012 | Provided by:   John Clarkson and Mark Parker, Microsoft


This is the first in a series of four articles about how to create a call center application.

Introduction

Call centers are used to handle incoming telephone calls from customers seeking information and product support. The role of call centers is summarized by Wikipedia:

"Most major businesses use call centres to interact with their customers. Examples include utility companies, mail order catalogue retailers, and customer support for computer hardware and software. Some businesses even service internal functions through call centres. Examples of this include help desks, retail financial support, and sales support."

The call center application described in this article handles a single customer call. The application queries the caller for their name and the category of their inquiry, and uses this information both to determine where to transfer the call and to look up information about the caller.

After the dialog between the customer and the application, the call is transferred to the appropriate person within the company (the agent) for additional handling. At the same time that the call is transferred, the information about the caller is passed to the agent and displayed on the agent's monitor.

This call center application combines four separate communications technologies.

  • Speech synthesis

  • Speech recognition

  • Call transfer

  • Contextual data

Speech Synthesis

A SpeechSynthesizer object is used to query the customer for their name and the nature of their call. This application uses the Microsoft Speech Platform SDK version 10.2 to perform speech synthesis. Speech synthesis also requires installation of a voice, in this case the Microsoft Server Speech Text to Speech Voice (en-US, Helen).
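The following example shows one way to connect these pieces. It is a minimal sketch rather than production code: it assumes that an established AudioVideoFlow for the customer's call is passed in, that the Helen voice is installed, and the helper name and prompt text are illustrative.

// Requires references to the Microsoft.Rtc.Collaboration and Microsoft.Speech assemblies.
using Microsoft.Rtc.Collaboration.AudioVideo;
using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Synthesis;

class PromptPlayer
{
    // Speaks a prompt into the audio stream of an established call.
    public static void SpeakPrompt(AudioVideoFlow audioVideoFlow, string promptText)
    {
        // Route synthesized audio into the call.
        SpeechSynthesisConnector synthesisConnector = new SpeechSynthesisConnector();
        synthesisConnector.AttachFlow(audioVideoFlow);

        SpeechSynthesizer synthesizer = new SpeechSynthesizer();
        synthesizer.SelectVoice("Microsoft Server Speech Text to Speech Voice (en-US, Helen)");

        SpeechAudioFormatInfo audioFormat =
            new SpeechAudioFormatInfo(16000, AudioBitsPerSample.Sixteen, AudioChannel.Mono);
        synthesizer.SetOutputToAudioStream(synthesisConnector, audioFormat);

        synthesisConnector.Start();
        synthesizer.Speak(promptText);   // Blocks until the prompt finishes playing.
        synthesisConnector.Stop();
    }
}

A call such as PromptPlayer.SpeakPrompt(audioVideoFlow, "Thank you for calling. Please say your name.") plays the prompt to the caller.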

Speech Recognition

A SpeechRecognitionEngine object is used together with a simple grammar to recognize specific words spoken by the caller. This application uses the Microsoft Speech Platform SDK version 10.2 to perform speech recognition. Speech recognition also requires installation of a runtime language pack, in this case the Microsoft Server Speech Recognition Language - TELE (en-US).
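The following sketch shows the corresponding recognition side: a small grammar built from a handful of department names, attached to the audio of the customer's call. The department names are illustrative, and the example again assumes an established AudioVideoFlow.

using Microsoft.Rtc.Collaboration.AudioVideo;
using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Recognition;

class DepartmentRecognizer
{
    // Recognizes one of a small set of department names spoken by the caller.
    public static string RecognizeDepartment(AudioVideoFlow audioVideoFlow)
    {
        // Route the call's incoming audio to the recognition engine.
        SpeechRecognitionConnector recognitionConnector = new SpeechRecognitionConnector();
        recognitionConnector.AttachFlow(audioVideoFlow);
        SpeechRecognitionStream audioStream = recognitionConnector.Start();

        // A simple grammar: the caller is expected to say one of these words.
        SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();
        Choices departments = new Choices("sales", "support", "billing");
        recognizer.LoadGrammar(new Grammar(new GrammarBuilder(departments)));

        SpeechAudioFormatInfo audioFormat =
            new SpeechAudioFormatInfo(8000, AudioBitsPerSample.Sixteen, AudioChannel.Mono);
        recognizer.SetInputToAudioStream(audioStream, audioFormat);

        // Synchronous recognition; returns null if nothing was recognized.
        RecognitionResult result = recognizer.Recognize();
        recognitionConnector.Stop();
        return result == null ? null : result.Text;
    }
}

A production bot would more likely use asynchronous recognition (RecognizeAsync) so that the call is not blocked while waiting for the caller to speak.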

Call Transfer

The Microsoft Unified Communications Managed API (UCMA) 3.0 Core SDK supports unattended, attended, and supervised call transfers. This application uses an attended call transfer to route the customer's call to the appropriate human agent. All three transfer types use one of the overloaded BeginTransfer methods on the Call class, or on the classes that inherit from it: InstantMessagingCall and AudioVideoCall.
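The following sketch shows what an attended transfer can look like for the AudioVideoCall handled by the bot. The helper name and the agent's SIP URI are illustrative, and error handling is reduced to a single catch.

using Microsoft.Rtc.Collaboration;
using Microsoft.Rtc.Collaboration.AudioVideo;
using Microsoft.Rtc.Signaling;

class CallTransferHelper
{
    // Begins an attended transfer of the customer's call to the agent's SIP URI,
    // for example "sip:agent@contoso.com".
    public static void TransferToAgent(AudioVideoCall customerCall, string agentUri)
    {
        CallTransferOptions options = new CallTransferOptions(CallTransferType.Attended);

        customerCall.BeginTransfer(
            agentUri,
            options,
            asyncResult =>
            {
                try
                {
                    customerCall.EndTransfer(asyncResult);
                }
                catch (OperationFailureException)
                {
                    // The transfer was declined or could not be completed; a real
                    // application would fall back to another agent or to voice mail.
                }
            },
            null);
    }
}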

Contextual Data

Contextual data is used to send information about the caller to the human agent who receives the call transfer. Assuming appropriate information is available to send, the agent knows who the caller is and what their issue is before picking up the call. The ConversationContextChannel object is used to configure and send the contextual data. In the agent application, the Microsoft Lync 2010 SDK Conversation object is used to retrieve the contextual data.
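The following sketch shows the bot-side half of this exchange. The application GUID must match a contextual application package registered on the agent's computer; the plain-text payload, helper name, and parameter names are illustrative.

using System;
using System.Net.Mime;
using System.Text;
using Microsoft.Rtc.Collaboration;

class ContextSender
{
    // Establishes a context channel to the agent's endpoint and sends caller data over it.
    public static void SendCallerContext(
        Conversation conversation,
        ParticipantEndpoint agentEndpoint,
        Guid applicationId,
        string callerData)
    {
        ConversationContextChannel contextChannel =
            new ConversationContextChannel(conversation, agentEndpoint);

        contextChannel.BeginEstablish(
            applicationId,
            establishResult =>
            {
                contextChannel.EndEstablish(establishResult);

                // Send the caller's name, ID, and balance as a simple text payload.
                contextChannel.BeginSendData(
                    new ContentType("text/plain"),
                    Encoding.UTF8.GetBytes(callerData),
                    sendResult => contextChannel.EndSendData(sendResult),
                    null);
            },
            null);
    }
}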

Scenario Overview

This call center scenario involves three parties.

  • The caller is a customer who wants to speak to someone inside an organization.

  • The bot application is an automated switchboard operator.

  • The agent application receives call transfers from the bot, and displays contextual data sent by the bot.

Figure 1. Call center actors


The scenario begins with the customer calling the bot, and ends with the agent receiving the transferred call and the contextual data about the customer.

  1. The customer makes a voice call that is answered by the bot.

  2. The bot asks what department the customer wants to speak to, and asks for the customer’s name.

  3. The bot transfers the voice call to the appropriate agent.

  4. The bot performs a simple lookup, using the customer name to get the customer ID and account balance.

  5. The agent picks up the transferred call, and the Lync client contextual application opens on the agent’s desktop. The Lync application displays the customer’s name, account balance, and account number.

Figure 2. Scenario flow



About the Author

Mark Parker is a programming writer at Microsoft Corporation. His principal responsibility is the UCMA SDK documentation.

John Clarkson is also a programming writer on the Microsoft Lync feature team.