DirectSound buffer objects control the delivery of waveform data from a source to a destination. The source might be a synthesizer, another buffer, a WAV file, or a resource. For most buffers, the destination is a mixing engine called the primary buffer. From the primary buffer, the data goes to the hardware that converts the samples to sound waves.
For information about capture buffers, see Capturing Waveforms.
Your application must create at least one secondary sound buffer for storing and playing individual sounds.
A secondary buffer can exist throughout the life of an application, or it can be destroyed when no longer needed. It can be a static buffer that contains a single short sound, or a streaming buffer that is refreshed with new data as it plays. To limit demands on memory, long sounds should be played through streaming buffers that hold no more than a few seconds of data.
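As a sketch, a static secondary buffer for a short sound might be created as follows. This is an illustration, not a complete program: the device pointer `pDS8` is assumed to be an already-initialized IDirectSound8 interface, and the PCM format values (22 kHz, 16-bit mono) are example choices.

```cpp
#include <windows.h>
#include <dsound.h>

// Create a static secondary buffer holding one second of 22 kHz,
// 16-bit mono PCM audio. pDS8 is assumed to be an initialized
// IDirectSound8 device pointer.
HRESULT CreateStaticBuffer(IDirectSound8* pDS8, IDirectSoundBuffer** ppBuffer)
{
    WAVEFORMATEX wfx = {0};
    wfx.wFormatTag      = WAVE_FORMAT_PCM;
    wfx.nChannels       = 1;
    wfx.nSamplesPerSec  = 22050;
    wfx.wBitsPerSample  = 16;
    wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

    DSBUFFERDESC dsbd = {0};
    dsbd.dwSize        = sizeof(DSBUFFERDESC);
    dsbd.dwFlags       = DSBCAPS_STATIC | DSBCAPS_CTRLVOLUME;
    dsbd.dwBufferBytes = wfx.nAvgBytesPerSec;   // one second of audio
    dsbd.lpwfxFormat   = &wfx;

    return pDS8->CreateSoundBuffer(&dsbd, ppBuffer, NULL);
}
```

After creation, the application would lock the buffer, copy in the waveform data, and unlock it before playing.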
You mix sounds from different secondary buffers simply by playing them at the same time. Any number of secondary buffers can be played at one time, up to the limits of available processing power.
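Because mixing is implicit, no extra API call is needed; starting two buffers at the same time is enough. In this hedged sketch, `pExplosion` and `pMusic` are assumed to be secondary buffers already filled with data:

```cpp
// DirectSound mixes all playing secondary buffers into the primary
// buffer automatically. pExplosion and pMusic are assumed to be
// IDirectSoundBuffer pointers whose data has already been written.
pExplosion->Play(0, 0, 0);              // play once, then stop
pMusic->Play(0, 0, DSBPLAY_LOOPING);    // loop until explicitly stopped
```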
Secondary buffers are not all created alike. Characteristics of buffers include the following:
Format. The format of a buffer must match the format of the waveform data it plays.
Controls. Different buffers can have different controls, such as volume, frequency, and movement in two or three dimensions. When creating a buffer, you should specify only the controls you need; for example, don't create a 3D buffer for a sound that isn't in a 3D environment.
Location. A buffer can be in memory managed by hardware, or in memory managed by software. Hardware buffers are more efficient but limited in number, and they are not supported on 64-bit operating systems.
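Controls and location are both selected with DSBCAPS_* flags in the buffer description at creation time; they cannot be changed afterward. A minimal sketch, with the particular flag combination chosen only for illustration:

```cpp
#include <windows.h>
#include <dsound.h>

// Request only the controls the sound actually needs, and force the
// buffer into software-managed memory. Flags cannot be changed after
// the buffer is created.
DSBUFFERDESC dsbd = {0};
dsbd.dwSize  = sizeof(DSBUFFERDESC);
dsbd.dwFlags = DSBCAPS_CTRLVOLUME       // volume control will be used
             | DSBCAPS_CTRLFREQUENCY    // frequency control will be used
             | DSBCAPS_LOCSOFTWARE;     // place the buffer in software memory
// dwBufferBytes and lpwfxFormat are filled in as for any secondary buffer.
```

Requesting an unneeded control (for example, a 3D control on a non-3D sound) wastes processing time and can prevent the driver from placing the buffer in the most efficient location.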