This topic provides an overview of the key implementation points that you must be aware of when you develop a driver for an audio adapter that can process hardware-offloaded audio streams.
Windows 8 and later operating systems support a new type of audio adapter that can use an on-board hardware audio engine to process audio streams. When you develop such an audio adapter, the associated audio driver must expose this capability to the user-mode audio system in a specific manner, so that the audio system can discover, use, and properly expose the features of the adapter and its driver.
To make it possible for audio drivers to expose the hardware capabilities of these new audio adapters, Windows 8 introduces a new KS-filter topology that the driver must use:
As you can see in the preceding figure, a KS-filter topology represents the data paths through the hardware, and also shows the functions that are available on those paths. In the case of an audio adapter that can process offloaded audio, there are the following inputs and outputs (called pins) on the KS-filter:
- One Host Process pin. This represents the input into the KS-filter from the software audio engine.
- One Loopback pin. This represents an output from the hardware audio engine to the Windows Audio Session API (WASAPI) layer.
- A number of Offloaded-audio pins. Although the figure shows only one pin of this type, an IHV is free to implement any number (n) of pins.
The service in the user-mode audio system that performs the discovery of the audio adapter and its driver is AudioEndpointBuilder. The AudioEndpointBuilder service monitors the KSCATEGORY_AUDIO class for device interface arrivals and removals. When an audio device driver registers a new instance of the KSCATEGORY_AUDIO device interface class, a device interface arrival notification is raised. The AudioEndpointBuilder service detects the notification and uses an algorithm to examine the topology of the audio devices in the system so that it can take the appropriate action.
When you develop your audio driver to support an adapter that can process offloaded audio, your driver must use the newly defined KSNODETYPE_AUDIO_ENGINE node type to expose the capabilities of the hardware audio engine. For more information about the audio endpoint discovery process, see Audio Endpoint Builder Algorithm.
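On the driver side, the audio engine node is declared in the miniport's node descriptor table, alongside the filter's pin descriptors. The fragment below is a hedged sketch of what that declaration looks like in a PortCls/WaveRT miniport; `AudioEngineAutomation` is a placeholder for the driver's own automation table (it is not a WDK symbol), and the surrounding filter descriptor plumbing is omitted.

```cpp
// Sketch (kernel-mode, WDK): declaring a node of type
// KSNODETYPE_AUDIO_ENGINE so that the user-mode audio system can
// recognize the hardware audio engine in the KS-filter topology.
#include <portcls.h>
#include <ksmedia.h>   // KSNODETYPE_AUDIO_ENGINE (Windows 8 and later WDK)

static PCNODE_DESCRIPTOR WaveMiniportNodes[] =
{
    // Flags  AutomationTable          Type                       Name
    {  0,     &AudioEngineAutomation,  &KSNODETYPE_AUDIO_ENGINE,  NULL },
};
```

The automation table attached to this node is where the driver implements the audio-engine properties that the system queries (for example, format support and buffer size ranges for the offload path).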
Because you developed your audio driver to control the underlying hardware capabilities of an adapter that can process offloaded audio, your driver has the most complete knowledge of how to control the adapter's features. You must therefore develop a UI that exposes the features of the adapter to the end user in the form of options that they can select, enable, or disable.
If, however, you already have a UI that controls audio processing objects (APOs) that you developed, you could extend that UI to work with your new audio adapter. In this case, your extensions to the UI would provide software control for the APOs and hardware control for the adapter.
The functionality described for this new type of audio adapter and its associated driver can be used by Windows Store apps via WASAPI, Media Foundation, Media Engine, or the HTML 5 <audio> tags. Note that Wave and DSound cannot be used, because they are not available to Windows Store apps. Also note that desktop applications cannot use the offloading capabilities of audio adapters that support hardware-offloaded audio. These applications can still render audio, but only through the Host Process pin, which uses the software audio engine.
If a Windows Store app streams media content and uses Media Foundation, Media Engine, or the HTML 5 <audio> tags, the app is automatically opted in to hardware offloading, as long as the proper audio category has been set for the stream. Opting in to hardware offloading is done on a per-stream basis.
Windows Store apps that use WASAPI or streaming communications must explicitly opt in to hardware offloading.