SDK Samples That Use the Core Audio APIs

The Windows SDK includes the following code samples that demonstrate the use of the Core Audio APIs. The samples are located in the directory %MSSdk%\samples\multimedia\audio, where %MSSdk% is the root directory of the Windows SDK installation on your computer.

AECMicArray: This sample uses the MMDevice, WASAPI, DeviceTopology, and EndpointVolume APIs to capture a high-quality voice stream. The sample supports acoustic echo cancellation (AEC) and microphone array processing by using the AEC DMO, also called the voice capture DSP, that Microsoft provides.
CaptureSharedEventDriven: This sample application uses the Core Audio APIs to capture audio data from an input device specified by the user, and writes it to a uniquely named .wav file in the current directory. This sample demonstrates event-driven buffering; a minimal initialization sketch appears after this list.
CaptureSharedTimerDriven: This sample application uses the Core Audio APIs to capture audio data from an input device specified by the user, and writes it to a uniquely named .wav file in the current directory. This sample demonstrates timer-driven buffering.
DuckingCaptureSample: This sample application demonstrates opening and closing communication streams, and generating the ducking events that an application can receive in order to implement stream attenuation. The application implements a chat client that uses the Core Audio APIs to read audio data from a communication device and to play it on the output device.
EndpointVolume: This sample application uses the Core Audio APIs to change the volume of a device specified by the user; see the volume sketch after this list.
OSD: This sample uses the MMDevice and EndpointVolume APIs to implement an on-screen display that shows volume changes to the output stream that plays through the default audio-rendering endpoint device. The on-screen display appears when the user adjusts the volume level in the Windows volume-control program, Sndvol.exe, and disappears after the volume level remains unchanged for a short period; see the notification sketch after this list.
RenderExclusiveEventDriven: This sample application uses the Core Audio APIs to render audio data to an output device specified by the user. This sample demonstrates event-driven buffering for a rendering client in exclusive mode. For an exclusive-mode stream, the client shares the endpoint buffer with the audio device.
RenderExclusiveTimerDriven: This sample application uses the Core Audio APIs to render audio data to an output device specified by the user. This sample demonstrates timer-driven buffering for a rendering client in exclusive mode. For an exclusive-mode stream, the client shares the endpoint buffer with the audio device.
RenderSharedEventDriven: This sample application uses the Core Audio APIs to render audio data to an output device specified by the user. This sample demonstrates event-driven buffering for a rendering client in shared mode. For a shared-mode stream, the client shares the endpoint buffer with the audio engine; see the render-loop sketch after this list.
RenderSharedTimerDriven: This sample application uses the Core Audio APIs to render audio data to an output device specified by the user. This sample demonstrates timer-driven buffering for a rendering client in shared mode. For a shared-mode stream, the client shares the endpoint buffer with the audio engine.
WinAudio: This sample uses the MMDevice API and WASAPI to play and capture audio streams. The user interface of this sample application enables users to select audio endpoint devices, to change the volume level of the local audio session, and to play .wav files and microphone input. Note: This sample is deprecated as of Windows 7.
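
The two capture samples listed above differ only in how they pace the buffer exchange. The following is a minimal C++ sketch, not code from the samples, of the event-driven initialization pattern they demonstrate: a shared-mode stream created with the AUDCLNT_STREAMFLAGS_EVENTCALLBACK flag, plus an event handle for the engine to signal. It assumes COM has already been initialized, and the function name InitEventDrivenCapture is illustrative.

#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

HRESULT InitEventDrivenCapture(IAudioClient **ppClient, HANDLE *phEvent)
{
    IMMDeviceEnumerator *pEnum = NULL;
    IMMDevice *pDevice = NULL;
    IAudioClient *pClient = NULL;
    WAVEFORMATEX *pFormat = NULL;

    HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL,
                                  CLSCTX_ALL, __uuidof(IMMDeviceEnumerator),
                                  (void**)&pEnum);
    if (SUCCEEDED(hr))
        // eCapture selects an input (recording) endpoint.
        hr = pEnum->GetDefaultAudioEndpoint(eCapture, eConsole, &pDevice);
    if (SUCCEEDED(hr))
        hr = pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL,
                               (void**)&pClient);
    if (SUCCEEDED(hr))
        hr = pClient->GetMixFormat(&pFormat);
    if (SUCCEEDED(hr))
        // AUDCLNT_STREAMFLAGS_EVENTCALLBACK makes the engine signal an event
        // whenever a buffer of captured data is ready, instead of the client
        // polling on a timer. A buffer duration of 0 lets the engine choose.
        hr = pClient->Initialize(AUDCLNT_SHAREMODE_SHARED,
                                 AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                                 0, 0, pFormat, NULL);
    if (SUCCEEDED(hr))
    {
        *phEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
        hr = pClient->SetEventHandle(*phEvent);
    }

    CoTaskMemFree(pFormat);
    if (pDevice) pDevice->Release();
    if (pEnum) pEnum->Release();

    if (SUCCEEDED(hr))
        *ppClient = pClient;        // caller owns the client on success
    else if (pClient)
        pClient->Release();
    return hr;
}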

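The EndpointVolume sample's central operation is a single EndpointVolume API call. Here is a minimal sketch under two assumptions: the default render endpoint is used (the sample instead lets the user specify a device) and COM is already initialized. The function name SetMasterVolume is illustrative.

#include <mmdeviceapi.h>
#include <endpointvolume.h>

HRESULT SetMasterVolume(float level)   // level: 0.0 (silent) to 1.0 (full)
{
    IMMDeviceEnumerator *pEnum = NULL;
    IMMDevice *pDevice = NULL;
    IAudioEndpointVolume *pVolume = NULL;

    HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL,
                                  CLSCTX_ALL, __uuidof(IMMDeviceEnumerator),
                                  (void**)&pEnum);
    if (SUCCEEDED(hr))
        hr = pEnum->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);
    if (SUCCEEDED(hr))
        hr = pDevice->Activate(__uuidof(IAudioEndpointVolume), CLSCTX_ALL,
                               NULL, (void**)&pVolume);
    if (SUCCEEDED(hr))
        // The scalar form takes a value on the linear 0.0-1.0 taper used by
        // the Windows volume slider; NULL means no event-context GUID.
        hr = pVolume->SetMasterVolumeLevelScalar(level, NULL);

    if (pVolume) pVolume->Release();
    if (pDevice) pDevice->Release();
    if (pEnum)   pEnum->Release();
    return hr;
}
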
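The OSD sample reacts to volume-change notifications delivered through IAudioEndpointVolumeCallback. The sketch below shows the general shape of such a callback with the IUnknown bookkeeping reduced to a minimum; the class name VolumeWatcher is illustrative, and this is not the sample's actual implementation.

#include <windows.h>
#include <endpointvolume.h>

class VolumeWatcher : public IAudioEndpointVolumeCallback
{
    LONG m_ref;
public:
    VolumeWatcher() : m_ref(1) {}

    // IUnknown: minimal reference counting.
    ULONG STDMETHODCALLTYPE AddRef() { return InterlockedIncrement(&m_ref); }
    ULONG STDMETHODCALLTYPE Release()
    {
        ULONG ref = InterlockedDecrement(&m_ref);
        if (ref == 0) delete this;
        return ref;
    }
    HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == __uuidof(IUnknown) ||
            riid == __uuidof(IAudioEndpointVolumeCallback))
        {
            *ppv = static_cast<IAudioEndpointVolumeCallback*>(this);
            AddRef();
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }

    // Called by the EndpointVolume API whenever the volume or mute state of
    // the registered endpoint changes (for example, from Sndvol.exe).
    HRESULT STDMETHODCALLTYPE OnNotify(PAUDIO_VOLUME_NOTIFICATION_DATA pNotify)
    {
        // pNotify->fMasterVolume holds the new level on the 0.0-1.0 scale;
        // an on-screen display would redraw itself here.
        return pNotify ? S_OK : E_POINTER;
    }
};

// Registration against an IAudioEndpointVolume obtained as in the sketch
// above: pVolume->RegisterControlChangeNotify(new VolumeWatcher());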

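All four render samples fill the endpoint buffer in a loop, whether an event or a timer decides when to write. The following sketch shows one pass of that loop for a shared-mode stream; it assumes pClient and pRender come from a stream that is already initialized, the function name is illustrative, and for brevity it releases the buffer with the AUDCLNT_BUFFERFLAGS_SILENT flag instead of copying real audio data.

#include <audioclient.h>

HRESULT FillSharedBufferWithSilence(IAudioClient *pClient,
                                    IAudioRenderClient *pRender)
{
    UINT32 bufferFrames = 0, padding = 0;
    BYTE *pData = NULL;

    HRESULT hr = pClient->GetBufferSize(&bufferFrames);
    if (FAILED(hr)) return hr;

    // In shared mode, the padding is the part of the endpoint buffer that
    // the audio engine has not yet read; only the remainder is writable.
    hr = pClient->GetCurrentPadding(&padding);
    if (FAILED(hr)) return hr;

    UINT32 framesToWrite = bufferFrames - padding;
    hr = pRender->GetBuffer(framesToWrite, &pData);
    if (FAILED(hr)) return hr;

    // A real client would copy framesToWrite frames of audio into pData;
    // the SILENT flag tells the engine to treat the buffer as silence.
    return pRender->ReleaseBuffer(framesToWrite, AUDCLNT_BUFFERFLAGS_SILENT);
}
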
You can download the Windows SDK from the Microsoft Windows SDK Download Center website.

Related topics

About the Windows Core Audio APIs