Design Speech Applications for Easy Reporting and Tuning
This content is no longer actively maintained. It is provided as is, for anyone who may still be using these technologies, with no warranties or claims of accuracy with regard to the most recent product version or service release.
Use the Speech Server analysis and tuning tools, such as Analytics and Tuning Studio and Business Intelligence Reports, to improve the efficiency and effectiveness of deployed speech applications. All speech applications benefit from analysis of real log data, from initial trial phases through deployment and beyond. Designing an application with reporting and tuning needs in mind makes it easier to measure its effectiveness and to improve it based on real user behavior.
To collect log data, enable event tracing for the application. For more information, see How to: Enable Event Tracing of Call Activity.
Workflow Designer enables the modules of a speech application to be structured into SpeechSequenceActivity activities, which map to tasks in Analytics and Tuning Studio and Business Intelligence Reports. SpeechSequenceActivity activities can be nested, sequential, or both.
In general, it helps to map each SpeechSequenceActivity activity, or task, to a goal that the system or user is trying to accomplish. High-level examples of tasks include booking a flight in a travel application or trying to reach someone in an automated directory services application. Low-level examples include getting a credit card number or a name. Both high-level and low-level tasks are useful.
Consider grouping components into a SpeechSequenceActivity activity to isolate that component for analysis across users and calls or to measure the effectiveness of the application from a task completion perspective.
To enable task logging for a SpeechSequenceActivity activity, ensure that the TaskLoggingEnabled property is set to true (the default). With task logging enabled, all events that occur within the SpeechSequenceActivity activity are logged with that activity's context, which allows the dialog flow to be structured into tasks in the Session Details view and the Task Details view within Analytics and Tuning Studio. Name each SpeechSequenceActivity activity intuitively so that the goal of the task is clear at analysis time.
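Although TaskLoggingEnabled is normally set in the Workflow Designer properties pane, it can also be set in the workflow's code-beside file. The fragment below is a sketch only: it assumes a Speech Server workflow project and a SpeechSequenceActivity named bookFlightTask added in the designer (the activity name is hypothetical), and it is not runnable outside the Speech Server SDK.

```csharp
// Sketch only: assumes a Speech Server workflow project with a
// SpeechSequenceActivity named "bookFlightTask" created in the designer.

// Ensure events inside this activity are logged with the task's context.
// (true is the default, so this is only needed if it was turned off.)
this.bookFlightTask.TaskLoggingEnabled = true;
```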
The most important component of a task is the logging of its completion status. This enables direct reporting on the task completion rate, a key business metric and a valuable guide to the effectiveness of the user interface. Every possible exit point of the SpeechSequenceActivity activity must pass through a SetTaskStatusActivity component that sets the completion state of the task in the TaskStatus property. Possible values are Success, Failure, and Unset (the default).
Set the value to Success when the user has succeeded in the task, to Failure when it is clear that the user has failed, and to Unset when it is unclear whether the user has succeeded. (Judging success or failure of a task is generally a subjective matter that differs by application.) This setting is automatically reflected in the Task volume report and the Task ending report, allowing an instant view of the performance of the task.
Set a task completion message to log more detail, particularly for a Failure or Unset status. The message is also set on the SetTaskStatusActivity component, in the TaskMessage property. Any string can be used, but it is most useful to record the reason for the task status (for example, user hang-up or too many failed attempts).
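As a sketch, a SetTaskStatusActivity placed at a task's failure exit point might be configured as follows. The activity instance name (setStatusOnMaxRetries) is hypothetical, and the exact type of the TaskStatus value is an assumption; the Success/Failure/Unset values come from the text above.

```csharp
// Sketch only: "setStatusOnMaxRetries" is a hypothetical
// SetTaskStatusActivity placed at a failure exit point of the task.

// Mark the task as failed so the Task ending report reflects it.
this.setStatusOnMaxRetries.TaskStatus = TaskStatus.Failure;

// Record the reason for the failure for easier analysis at tuning time.
this.setStatusOnMaxRetries.TaskMessage = "Too many failed recognition attempts";
```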
Use components on the Speech Dialog Components tab in the Workflow Designer toolbox to automatically gather lower-level audio, prompt, and recognition data into higher-level turn components at analysis time. For example, QuestionAnswerActivity, StatementActivity, and RecordAudioActivity components are represented automatically as turns in Analytics and Tuning Studio views and reports. Prompt types and other properties of the components are represented within the turn. FormFillingDialogActivity is especially useful, because Analytics and Tuning Studio can use its knowledge of the semantic items manipulated by the turns in the FormFillingDialogActivity to identify the system's turn types (for example, "Ask(PIN)") and the user's response types (for example, "Answer(PIN)"). Explicitly using semantic items with other dialog components, such as QuestionAnswerActivity components, also enables this benefit.
In general, it is useful to adopt a naming protocol so that turn names reflect their function (for example, "GetPINQA") and can also reflect critical properties such as shared grammars. This simplifies re-recognition and grammar analysis at tuning time. Any semantic processing code, or other code related to a turn's recognition result, should run within the turn component so that updates to semantic items are scoped to the turn. Store semantic results in SemanticItem objects for easy management, and apply the naming protocol to SemanticItem names as well.
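To illustrate one possible naming protocol, the sketch below assumes a QuestionAnswerActivity named GetPINQA and a semantic item field prefixed with "si_". The SemanticItem type is part of the Speech Server dialog API per the text above, but the field declaration, the event-handler signature, and the prefix convention shown here are assumptions, not prescribed by the product.

```csharp
// Sketch only: one possible naming protocol. "QA" suffixes
// question-answer turns; "si_" prefixes semantic items.
private SemanticItem<string> si_PIN;

private void GetPINQA_Closed(object sender, EventArgs e)
{
    // Process the recognition result inside the turn component so that
    // updates to the semantic item remain scoped to this turn.
    string pin = this.si_PIN.Value;
    // ... continue the dialog using the collected PIN ...
}
```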
To log individual events for viewing in the details views of Analytics and Tuning Studio, use the LogApplicationData method of the telephony session's LoggingManager. This writes a message to the event trace log (ETL) file within the context of its containing session, task, or turn, and the message is displayed as an ApplicationDataEvent in the relevant context in the Analytics and Tuning Studio details views. The arguments to the method are application-defined strings representing an event class, subclass, and message. For example:
TelephonySession.LoggingManager.LogApplicationData("Database lookup", "UID:8362", "Lookup failed");
Because ApplicationDataEvent entries are logged whenever basic trace logging is enabled, these events are displayed in Analytics and Tuning Studio without the need to enable the lower-level debugging filters.
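Because the event class and subclass strings are application-defined, it can help to centralize them so that related events group consistently in reports. A minimal sketch, in which the helper method and constant names are hypothetical:

```csharp
// Sketch only: a hypothetical helper that standardizes the event-class
// string so ApplicationDataEvent entries group consistently in reports.
private const string DatabaseEventClass = "Database lookup";

private void LogDatabaseEvent(string subclass, string message)
{
    this.TelephonySession.LoggingManager.LogApplicationData(
        DatabaseEventClass, subclass, message);
}
```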