DirectSound Playback Objects

The following objects are used in playing sounds:

Device
    Number: One in each application.
    Purpose: Manages the device and creates sound buffers.
    Main interfaces: IDirectSound8

Secondary buffer
    Number: One for each sound.
    Purpose: Manages a static or streaming sound and plays it into the primary buffer.
    Main interfaces: IDirectSoundBuffer8, IDirectSound3DBuffer8, IDirectSoundNotify8

Primary buffer
    Number: One in each application.
    Purpose: Mixes and plays sounds from secondary buffers, and controls global 3D parameters.
    Main interfaces: IDirectSoundBuffer, IDirectSound3DListener8

Effect
    Number: Zero or more for each secondary buffer.
    Purpose: Transforms the sound in a secondary buffer.
    Main interfaces: The interface for the particular effect, such as IDirectSoundFXChorus8

The first step in implementing DirectSound in an application is to create a device object, which represents the rendering device. This object is then used to create buffers.
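A minimal sketch of this first step follows. The helper name CreateDevice and the choice of the DSSCL_PRIORITY cooperative level are illustrative, not required by the API; DirectSoundCreate8 and SetCooperativeLevel are the actual DirectSound calls.

```cpp
#include <windows.h>
#include <dsound.h>

#pragma comment(lib, "dsound.lib")

// Illustrative helper: create the device object for the default playback
// device and set the cooperative level, which must be done before any
// buffer is played.
HRESULT CreateDevice(HWND hWnd, LPDIRECTSOUND8* ppDS)
{
    // NULL selects the default rendering device.
    HRESULT hr = DirectSoundCreate8(NULL, ppDS, NULL);
    if (FAILED(hr))
        return hr;

    // DSSCL_PRIORITY lets the application set the primary buffer format;
    // other levels are possible depending on the application's needs.
    return (*ppDS)->SetCooperativeLevel(hWnd, DSSCL_PRIORITY);
}
```

The returned IDirectSound8 pointer is then used to create the application's sound buffers.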

DirectSound is based on the Component Object Model (COM). However, you do not have to initialize COM explicitly by calling CoInitialize or CoInitializeEx unless you are using effect DMOs.

Secondary buffers are created and managed by the application. DirectSound automatically creates and manages the primary buffer, and an application can play sounds without obtaining an interface to this object. However, in order to obtain the IDirectSound3DListener8 interface, the application must explicitly create a primary buffer object.
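The following sketch shows how an application might explicitly create the primary buffer in order to obtain the listener interface. It assumes pDS is an initialized device object; the helper name GetListener is illustrative.

```cpp
#include <windows.h>
#include <dsound.h>

// Illustrative helper: create the primary buffer with 3D control and query
// it for the IDirectSound3DListener8 interface.
HRESULT GetListener(LPDIRECTSOUND8 pDS, LPDIRECTSOUND3DLISTENER8* ppListener)
{
    DSBUFFERDESC dsbd = {0};
    dsbd.dwSize  = sizeof(DSBUFFERDESC);
    dsbd.dwFlags = DSBCAPS_PRIMARYBUFFER | DSBCAPS_CTRL3D;
    // Primary buffers take no size or format at creation time.

    LPDIRECTSOUNDBUFFER pPrimary = NULL;
    HRESULT hr = pDS->CreateSoundBuffer(&dsbd, &pPrimary, NULL);
    if (FAILED(hr))
        return hr;

    // The listener interface lives on the primary buffer object.
    hr = pPrimary->QueryInterface(IID_IDirectSound3DListener8,
                                  (LPVOID*)ppListener);
    pPrimary->Release();
    return hr;
}
```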

When sounds in secondary buffers are played, DirectSound mixes them in the primary buffer and sends them to the rendering device. Only the available processing time limits the number of buffers that DirectSound can mix.

Under the Windows driver model (WDM), mixing is done by the kernel mixer, and the primary buffer does not actually contain any data. For more information, see DirectSound Driver Models.

A short sound can be loaded into a secondary buffer in its entirety and played at any time by a simple method call. Longer sounds have to be streamed. An application can ascertain when it is time to stream more data into the buffer either by polling the position of the play cursor or by requesting notification when the play cursor reaches certain points.
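The notification approach can be sketched as follows. The example assumes pBuffer is a secondary buffer created with the DSBCAPS_CTRLPOSITIONNOTIFY flag; the offsets chosen (midpoint and end of the buffer) are a common streaming pattern, not a requirement.

```cpp
#include <windows.h>
#include <dsound.h>

// Illustrative helper: signal hEvent when the play cursor reaches the
// middle and the end of the buffer, so the application knows which half
// is safe to refill with streamed data.
HRESULT SetHalfwayNotifications(LPDIRECTSOUNDBUFFER8 pBuffer,
                                DWORD dwBufferBytes, HANDLE hEvent)
{
    LPDIRECTSOUNDNOTIFY8 pNotify = NULL;
    HRESULT hr = pBuffer->QueryInterface(IID_IDirectSoundNotify8,
                                         (LPVOID*)&pNotify);
    if (FAILED(hr))
        return hr;

    DSBPOSITIONNOTIFY notify[2];
    notify[0].dwOffset     = dwBufferBytes / 2;
    notify[0].hEventNotify = hEvent;
    notify[1].dwOffset     = dwBufferBytes - 1;
    notify[1].hEventNotify = hEvent;

    hr = pNotify->SetNotificationPositions(2, notify);
    pNotify->Release();
    return hr;
}
```

The alternative is simply to call IDirectSoundBuffer8::GetCurrentPosition periodically and refill the buffer behind the play cursor.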

Secondary buffers can contain effects such as chorus and echo. They can also have 3D control capabilities. Global 3D parameters are controlled by the IDirectSound3DListener8 interface on the primary buffer.
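Attaching an effect might look like the following sketch. It assumes pBuffer was created with the DSBCAPS_CTRLFX flag and is stopped when SetFX is called; the helper name AttachChorus is illustrative.

```cpp
#include <windows.h>
#include <dsound.h>

// Illustrative helper: attach a standard chorus effect to a secondary
// buffer and retrieve the effect's own interface.
// Note: effect DMOs require COM to be initialized (CoInitializeEx) first.
HRESULT AttachChorus(LPDIRECTSOUNDBUFFER8 pBuffer)
{
    DSEFFECTDESC dsed = {0};
    dsed.dwSize        = sizeof(DSEFFECTDESC);
    dsed.guidDSFXClass = GUID_DSFX_STANDARD_CHORUS;

    DWORD dwResult = 0;
    HRESULT hr = pBuffer->SetFX(1, &dsed, &dwResult);
    if (FAILED(hr))
        return hr;

    // The per-effect interface is obtained through GetObjectInPath.
    LPDIRECTSOUNDFXCHORUS8 pChorus = NULL;
    hr = pBuffer->GetObjectInPath(GUID_DSFX_STANDARD_CHORUS, 0,
                                  IID_IDirectSoundFXChorus8,
                                  (LPVOID*)&pChorus);
    if (SUCCEEDED(hr))
        pChorus->Release();  // a real application would set parameters here
    return hr;
}
```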