
Capture Performance

Windows Mobile 6.5
4/8/2010

This topic describes a number of important factors that affect capture performance when using DirectShow. For an overview of the capture architecture in DirectShow, see Video Capture.

Your choice of color spaces for your device's components can have a noticeable impact on your device's capture performance. The most important consideration is the number of times that captured data must be converted from one color space to another as it passes through the capture graph. In general, the Capture Graph Builder inserts the color space converter filter (see Color Space Converter Filter) when it detects a color space mismatch between two pins that must be connected in the capture graph.

Avoiding color space conversions is especially important when you are using an encoder that cannot encode in real time. When an encoder does not support real-time encoding, the Capture Graph Builder inserts a buffering filter into the capture graph. To enable smooth previews, the buffering filter's thread runs at BELOW_NORMAL priority. This has the effect of slowing down the capture graph, which makes any other inefficiencies in the graph, such as color conversions, all the more costly.

Aside from simply adding to the computational burden of the capture graph, the presence of the color space converter filter has two additional side effects. First, it limits the choice of input formats of downstream filters to the output formats of the color space converter. For example, the color space converter filter does not handle conversions from the YCbCr 4:2:2 planar format to the RGB16 format. As a result, YCbCr 4:2:2 planar cannot be used to supply the preview display, because the preview display must be able to render with GDI and the data must therefore be convertible to an RGB format. You can work around problems like this by using a third-party color conversion filter.

Second, the color space converter filter cannot pass a pointer to an overlay surface to the camera driver. The color space converter retains pointers to overlay surfaces internally and instead passes pointers to system memory to the camera driver. The camera driver fills the system memory with image data, and the color space converter copies this data to the buffer for the overlay surface. This design both helps maintain the security of the device and enables hardware that does not support scatter/gather DMA. The camera driver will only fill buffers that are registered at initialization time, and at initialization time the video renderer is not yet aware of DirectDraw and no DirectDraw surfaces have been created.

If the display driver for the preview display renders with GDI then a three-pin camera driver should be able to use an RGB color format without issue. If you are using a two-pin camera driver in the same scenario then you must make sure that the encoder also accepts the RGB color format. If the encoder only accepts YCbCr then you will have to use a third-party color converter.

If the display driver for the preview display renders RGB overlays using DirectDraw then you must explicitly bring the color converter filter into the capture graph. The camera application does not handle this case.

You can capture a still image from a video stream using the image sink filter (see Image Sink Filter). YCbCr is a more compact format than RGB and can therefore be a more desirable format for video data, but the image sink filter does not support YCbCr input. One approach to recording still images from YCbCr data is to introduce a third-party color conversion filter into the filter graph. If your hardware supports it, a much higher performance approach is to encode the YCbCr data in the camera driver through hardware acceleration.

To support JPEG encoding in hardware, the camera driver must expose the MEDIASUBTYPE_IJPG media subtype for data that the driver has encoded internally. Once the driver passes the buffer containing the encoded JPEG data into the capture graph, the image sink filter records the data directly through simple file I/O operations.

Sometimes the resolution of the camera driver's output does not match the resolution of the preview display, so the video renderer must perform a scaling operation in the preview pipeline. The highest performance solution to this problem is to use display hardware with built-in support for scaling that is exposed through DirectDraw. If DirectDraw support is not available, the scaling operation can be done in software through GDI, but only if the data is in RGB format. If your video data is in a YUV format, you must provide your own optimized software scaling routine in your display driver.

© 2014 Microsoft