Implementing an Image Encoder Object (Windows Embedded CE 6.0)

1/6/2010

The Imaging API standardizes and facilitates communication between an image source and an image sink. To encode an image, you must implement an image encoder in the code for the image source.

An encoder object takes in raw image data and uses a codec to encode that data into a specified graphics format. The image source pushes the image data to an image sink, which stores the encoded data.

Use the following sequence to implement an image encoder and to use Imaging API calls to encode image data and push the data to your sink. A code sketch of the complete call sequence follows the procedure.

To implement an image encoder object

  1. Create an image encoder object by calling IImagingFactory::CreateImageEncoderToStream with a pointer to a new IStream object for streaming the image data.

    The constructor for this object should not perform operations that must be repeated before each image file is encoded. Implement that per-image setup in the IImageEncoder::InitEncoder method instead.

  2. Call QueryInterface on the newly created encoder object to receive the IImageEncoder interface.

  3. Call the IImageEncoder::InitEncoder method with the IStream interface created above to initialize the encoder object.

    All necessary operations to initialize writing a single image should be included in this method.

  4. If you need to determine which encoder parameters are supported by the encoder object, you can optionally call IImageEncoder::GetEncoderParameterList.

  5. If you need to set specific parameters in the encoder object, you can optionally call IImageEncoder::SetEncoderParameters.

  6. If the encoder supports multiframe images and the application is creating a multiframe image, call IImageEncoder::SetFrameDimension.

  7. Call IImageEncoder::GetEncodeSink to retrieve an IImageSink interface. Use the IImageSink interface as follows:

    1. Call IImageSink::BeginSink to negotiate the values contained in the ImageInfo structure for encoding the current frame.
    2. Call IImageSink::SetPalette to pass color palette information about the current image frame to the image sink.
    3. If you need to pass property data to the image sink, you can optionally call IImageSink::GetPropertyBuffer to obtain a buffer that will contain the property data.
    4. If GetPropertyBuffer is called above, you must next call IImageSink::PushPropertyItems to transfer the property data to the image sink.
      The buffer allocated by GetPropertyBuffer must be deallocated by the sink's implementation of PushPropertyItems.
    5. Call IImageSink::PushPixelData or IImageSink::GetPixelDataBuffer to begin the data transfer, depending on how the image data is stored in the source:
    • If the image source has allocated memory for the image, use PushPixelData.
    • If the image source has not allocated memory for the image, use GetPixelDataBuffer. For each call to GetPixelDataBuffer, IImageSink::ReleasePixelDataBuffer must also be called.
    6. Call IImageSink::EndSink to complete the encoding of the current frame.
    7. Call IImageSink::Release to free the IImageSink interface.
  8. Once the image sink has been released, the call to IImageEncoder::GetEncodeSink can be repeated for each frame of the image.

  9. Call IImageEncoder::TerminateEncoder to break the association between the encoder object and the image data stream.

    After TerminateEncoder is called, the application can start a new encoding process by calling IImageEncoder::InitEncoder with a new IStream interface.

  10. The destructor for the encoder object is called after the last reference to the object is released. In the destructor, the encoder object must verify that TerminateEncoder was called; if it was not, the destructor should call TerminateEncoder. A sketch of this safeguard follows the procedure.
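
The following sketch walks the caller-side sequence above for a single-frame image whose pixel data is already in memory, so PushPixelData is used. It is a minimal sketch under stated assumptions rather than a drop-in implementation: the EncodeSingleFrame helper, the 24-bpp pixel format, the 96-dpi resolution, and the exact method signatures are assumptions that should be verified against imaging.h in your SDK.

```cpp
// Minimal sketch of the call sequence above for a single-frame image whose
// pixel data is already in memory. Signatures are assumed from imaging.h;
// verify them against the headers in your SDK. One source file may need to
// define INITGUID (or include initguid.h) before imaging.h so that the
// imaging CLSIDs and IIDs are defined.
#include <windows.h>
#include <imaging.h>

HRESULT EncodeSingleFrame(IStream* pStream,            // destination stream, created by the caller
                          const CLSID* pclsidEncoder,  // codec CLSID, for example from IImagingFactory::GetInstalledEncoders
                          VOID* pScan0,                // raw pixel data owned by the image source
                          UINT width, UINT height, INT stride)
{
    IImagingFactory* pFactory = NULL;
    IImageEncoder*   pEncoder = NULL;
    IImageSink*      pSink    = NULL;

    // Step 1: create an encoder object bound to the destination stream.
    HRESULT hr = CoCreateInstance(CLSID_ImagingFactory, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IImagingFactory, (void**)&pFactory);
    if (SUCCEEDED(hr))
        hr = pFactory->CreateImageEncoderToStream(pclsidEncoder, pStream, &pEncoder);

    // Steps 2-3 apply when you construct the encoder object yourself: query it
    // for IImageEncoder and call InitEncoder(pStream) before continuing.
    // Steps 4-6 (GetEncoderParameterList, SetEncoderParameters, and
    // SetFrameDimension for multiframe images) are optional and omitted here.

    // Step 7: obtain the sink and push the frame into it.
    if (SUCCEEDED(hr))
        hr = pEncoder->GetEncodeSink(&pSink);

    if (SUCCEEDED(hr))
    {
        // Describe the frame; the sink may adjust the structure during
        // negotiation. The 24-bpp format and 96-dpi values are assumptions.
        ImageInfo info;
        ZeroMemory(&info, sizeof(info));
        info.Width       = width;
        info.Height      = height;
        info.PixelFormat = PixelFormat24bppRGB;
        info.Xdpi        = 96.0;
        info.Ydpi        = 96.0;

        hr = pSink->BeginSink(&info, NULL);
        if (SUCCEEDED(hr))
        {
            // The source already owns the pixel memory, so PushPixelData is used.
            BitmapData bmpData;
            ZeroMemory(&bmpData, sizeof(bmpData));
            bmpData.Width       = width;
            bmpData.Height      = height;
            bmpData.Stride      = stride;
            bmpData.PixelFormat = PixelFormat24bppRGB;
            bmpData.Scan0       = pScan0;

            RECT rect = { 0, 0, (LONG)width, (LONG)height };
            hr = pSink->PushPixelData(&rect, &bmpData, TRUE);  // TRUE: last pass

            pSink->EndSink(hr);   // report the push status and complete the frame
        }
        pSink->Release();
    }

    // Step 9: break the association with the stream, then release the objects.
    if (pEncoder)
    {
        pEncoder->TerminateEncoder();
        pEncoder->Release();
    }
    if (pFactory)
        pFactory->Release();

    return hr;
}
```

For a multiframe image, call SetFrameDimension (step 6) and repeat the sink sequence in steps 7 and 8 once per frame before calling TerminateEncoder.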
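
On the implementation side, the notes in steps 1, 3, and 10 imply that the encoder object keeps per-image setup out of its constructor and tracks whether an encoding session is still open. The hypothetical CMyEncoder class below sketches only that bookkeeping; a real encoder also implements IUnknown and the remaining IImageEncoder methods.

```cpp
// Hypothetical skeleton showing only the state handling described in the
// procedure; everything else an IImageEncoder implementation needs is omitted.
#include <windows.h>

class CMyEncoder    // in real code this class implements IImageEncoder
{
    IStream* m_pStream;
    BOOL     m_fInitialized;

public:
    CMyEncoder() : m_pStream(NULL), m_fInitialized(FALSE)
    {
        // No per-image work here; that belongs in InitEncoder.
    }

    HRESULT InitEncoder(IStream* pStream)
    {
        // Perform the setup required before each image is encoded.
        m_pStream = pStream;
        if (m_pStream)
            m_pStream->AddRef();
        m_fInitialized = TRUE;
        return S_OK;
    }

    HRESULT TerminateEncoder()
    {
        // Flush pending data (omitted) and break the stream association.
        if (m_pStream)
        {
            m_pStream->Release();
            m_pStream = NULL;
        }
        m_fInitialized = FALSE;
        return S_OK;
    }

    ~CMyEncoder()
    {
        // Step 10: if the caller never terminated the encoder, do it here.
        if (m_fInitialized)
            TerminateEncoder();
    }
};
```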

See Also

Concepts

Imaging Application Development
Implementing an Image Decoder Object