UCMA 2.0 Core Object Model
This content is no longer actively maintained. It is provided as is, for anyone who may still be using these technologies, with no warranties or claims of accuracy with regard to the most recent product version or service release.
The relationships among the major components of Microsoft Unified Communications Managed API 2.0 Core SDK appear in the following illustration.
The major components appearing in the illustration are LocalEndpoint (of which two implementations are ApplicationEndpoint and UserEndpoint), Conversation, and CollaborationPlatform. A CollaborationPlatform instance can manage multiple LocalEndpoint instances, and each LocalEndpoint instance can have multiple Conversation instances.
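The containment relationships described above (one platform managing many endpoints, each endpoint owning many conversations) can be sketched as a toy model. This is plain Python, not the real UCMA 2.0 C# API: the class names mirror the UCMA type names, but every method and constructor here is an illustrative assumption.

```python
# Toy model of the UCMA 2.0 containment hierarchy (illustrative only;
# the real SDK lives in the Microsoft.Rtc.Collaboration namespace in C#,
# and none of these method names are the actual managed-API signatures).

class Conversation:
    """A conversation belongs to exactly one local endpoint."""
    def __init__(self, endpoint):
        self.endpoint = endpoint


class LocalEndpoint:
    """Base class; ApplicationEndpoint and UserEndpoint are the two implementations."""
    def __init__(self, platform, uri):
        self.platform = platform
        self.uri = uri
        self.conversations = []

    def create_conversation(self):
        # Each LocalEndpoint can own multiple Conversation instances.
        conv = Conversation(self)
        self.conversations.append(conv)
        return conv


class UserEndpoint(LocalEndpoint):
    pass


class ApplicationEndpoint(LocalEndpoint):
    pass


class CollaborationPlatform:
    """One CollaborationPlatform instance can manage multiple endpoints."""
    def __init__(self):
        self.endpoints = []

    def add_endpoint(self, endpoint_cls, uri):
        ep = endpoint_cls(self, uri)
        self.endpoints.append(ep)
        return ep


platform = CollaborationPlatform()
alice = platform.add_endpoint(UserEndpoint, "sip:alice@contoso.com")
bot = platform.add_endpoint(ApplicationEndpoint, "sip:bot@contoso.com")
chat = alice.create_conversation()
```

The point of the sketch is only the ownership direction: platform → endpoints → conversations, matching the illustration.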
The illustration not only lists many of the UCMA 2.0 Core SDK components, but also arranges them along two dimensions. The horizontal axis is divided into two categories: call controls and media controls. Call controls are concerned with signaling data, while media controls are concerned with the instant messaging (IM) and audio data communicated between participants.
The call control category is further subdivided into multiparty controls and two-party controls, which handle conversations among three or more participants and conversations between two participants, respectively.
The media control category is further subdivided into media flows, devices, and media providers. Each type of media (IM or audio/video) has its own type of flow. The devices in the Devices column can be used to record an audio stream, play an audio stream, and send or receive telephone keypad tones. Two additional devices, used in conjunction with the Microsoft.Speech object model, can recognize and synthesize speech. Two of the media providers shown ship with UCMA 2.0 Core SDK. The third (labeled ContosoProvider in the illustration) is not provided, but can be implemented by third-party developers. Media providers are not directly accessible; only the flows they provide are.
The color-coded components at the same horizontal level represent the components that take part in a particular communication mode. For example, the AudioVideoProvider sends audio/video media to an AudioVideoFlow, and then to either an AudioVideoCall (for two parties) or an AudioVideoMcuSession (for more than two parties). The objects shown in the Devices column can attach to an AudioVideoFlow, either receiving audio from the flow (Recorder, ToneController, SpeechRecognitionConnector) or sending audio to it (Player, ToneController, SpeechSynthesisConnector).
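The device-to-flow relationship described above can be sketched in the same toy-model style. The class names (AudioVideoFlow, Recorder, Player, ToneController) mirror the UCMA types, but the attach/detach mechanics below are illustrative assumptions, not the real C# signatures.

```python
# Toy model: devices attach to an AudioVideoFlow, which carries the audio
# media between a provider and a call (illustrative only, not the UCMA C# API).

class AudioVideoFlow:
    """Carries audio/video media; devices attach to it to consume or produce audio."""
    def __init__(self):
        self.attached_devices = []


class Device:
    """Base for Recorder, Player, ToneController, and the speech connectors."""
    def __init__(self):
        self.flow = None

    def attach_flow(self, flow):
        self.flow = flow
        flow.attached_devices.append(self)

    def detach_flow(self):
        self.flow.attached_devices.remove(self)
        self.flow = None


class Recorder(Device):       # receives audio coming from the flow
    pass


class Player(Device):         # sends audio into the flow
    pass


class ToneController(Device): # sends and receives telephone keypad (DTMF) tones
    pass
```

A device is useful only once it is attached to a flow, which matches the article's note that providers are reached indirectly through the flows they hand out.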
The vertical axis is divided into two principal categories: single-modal and multimodal. These categories indicate whether communication occurs over a single mode (for example, IM only) or over multiple modes (for example, IM and audio).
UCMA 2.0 Core SDK provides built-in support for the instant messaging and audio communication modalities. The platform can be extended to support other modalities; the top row in Conversation shows the components that third-party developers can create to provide this support.
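The extensibility point can also be sketched as a toy model. ContosoProvider comes from the illustration; the provider-registration mechanism below is purely hypothetical and does not reflect how the real SDK wires up modalities, but it shows the idea of a third-party provider handing out its own flow type alongside the built-in ones.

```python
# Toy model of modality extensibility (illustrative only). The registry and
# create_flow() method are hypothetical; only the ContosoProvider name comes
# from the UCMA 2.0 illustration.

class MediaProvider:
    """Providers are not used directly; they hand out flows."""
    def create_flow(self):
        raise NotImplementedError


class InstantMessagingFlow:
    pass


class InstantMessagingProvider(MediaProvider):
    def create_flow(self):
        return InstantMessagingFlow()


class ContosoFlow:
    """A third-party flow type for a custom modality."""
    pass


class ContosoProvider(MediaProvider):
    def create_flow(self):
        return ContosoFlow()


PROVIDERS = {}

def register_provider(modality, provider):
    PROVIDERS[modality] = provider


register_provider("im", InstantMessagingProvider())
register_provider("contoso", ContosoProvider())

flow = PROVIDERS["contoso"].create_flow()
```

The design mirrors the article's split: callers work with flows, while the provider behind each flow stays hidden, so a third-party modality slots in without changing the calling code.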