Core concepts

This chapter provides an overview of the Lumia Imaging SDK, an explanation of its basic building blocks, and several code examples. It provides the knowledge you need to use the SDK at both basic and more advanced levels.

The libraries

Before starting to use the functionality provided by the Lumia Imaging SDK, you must add the SDK libraries to your project. You do this by using the Visual Studio NuGet package manager. For detailed instructions, see the chapter Adding libraries to the project.

The bulk of the functionality in the Lumia Imaging SDK is provided as a Windows Runtime Component. For more information about the Windows Runtime, see the API reference for Windows Runtime apps. For API information about Windows Phone apps, see the Windows Phone API reference (MSDN).

In addition to the Windows Runtime Component, the SDK also contains a .NET class library. This provides APIs that are intended to make your life a bit easier by allowing you to work with .NET types such as Stream and WriteableBitmap, and by providing base classes for implementation of custom image sources and effects.
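As an illustration of those base classes, a custom effect can be written by deriving from the managed library's custom-effect base class and overriding its per-pixel processing method. The sketch below is an assumption-laden example: the CustomEffectBase class, the PixelRegion type, and the OnProcess signature follow the pattern used in earlier SDK versions, so check the API reference for the exact types in your SDK version.

// Hedged sketch of a custom effect built on the managed library's base class.
// CustomEffectBase, PixelRegion, and the OnProcess signature are assumptions here;
// consult the API reference for the exact types in your SDK version.
public class InvertEffect : CustomEffectBase
{
    public InvertEffect(IImageProvider source) : base(source)
    {
    }

    protected override void OnProcess(PixelRegion sourcePixelRegion, PixelRegion targetPixelRegion)
    {
        sourcePixelRegion.ForEachRow((index, width, position) =>
        {
            for (int x = 0; x < width; ++x, ++index)
            {
                // Invert the color channels and keep the alpha channel unchanged.
                uint pixel = sourcePixelRegion.ImagePixels[index];
                targetPixelRegion.ImagePixels[index] = (pixel & 0xFF000000) | (~pixel & 0x00FFFFFF);
            }
        });
    }
}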

The basic building blocks

The SDK lets you access image data without decoding the whole JPEG image, enabling fast previews, rotation, and cropping of high-resolution images, as well as applying one or several of the more than 60 effects provided. All of these use cases are implemented by mixing and matching three basic elements: image sources, effects, and a renderer. Together these constitute an image processing graph - "graph" for short.

Figure: The basic building blocks of an image processing graph.

Because these elements implement the interfaces IImageProvider and IImageConsumer, they can be connected in different ways, forming chains and branches that flexibly express a powerful image processing graph.

The three categories of elements are used to perform image processing in these ways:

  • Image source: Originates an image in some way, for example, by generating or loading it from storage and sets it up to be used further in the graph. All image sources implement the interface IImageProvider.
  • Effect: Takes one or more images as input, performs some kind of processing, and outputs one new image. All effects implement the interface IImageConsumer, which enables them to accept one primary source image; any secondary sources are exposed as properties. Just like image sources, all effects also implement IImageProvider in order to output the resulting image.
  • Renderer: Placed at the end of the image processing graph. This renders the resulting image into a certain format or container for consumption. All renderers implement the interface IImageConsumer, because they need to accept at least one source image.

The SDK contains a number of concrete implementations of image sources, effects, and renderers, to fulfill various needs.

Image source classes are named according to what they accept as input. Examples: StreamImageSource, BufferImageSource, and BitmapImageSource.

Effect classes are named according to the kind of processing they perform. One example is RotationEffect, which rotates the source image by a desired angle; it is one of the more than 60 effects provided by the Lumia Imaging SDK.

Renderer classes are named according to the sort of output they produce. Examples: JpegRenderer and BitmapRenderer.

After they've been created, any of these objects can be kept by the app, and reused and reassembled into a different image processing graph as necessary.
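For example, here is a minimal sketch of building a small graph from the classes named above and then reassembling it. It assumes a JPEG image is available in an IBuffer named jpegBuffer, and that the renderer's source can be reassigned through the Source property implied by IImageConsumer.

// Build a small graph: BufferImageSource -> RotationEffect -> JpegRenderer.
// "jpegBuffer" is assumed to be an IBuffer containing a JPEG image.
var imageSource = new BufferImageSource(jpegBuffer);
var rotationEffect = new RotationEffect(imageSource) { RotationAngle = 90.0 };
var jpegRenderer = new JpegRenderer(rotationEffect);

var rotatedJpeg = await jpegRenderer.RenderAsync();

// Reassemble the same objects into a different graph: bypass the rotation
// and render the unmodified source instead.
jpegRenderer.Source = imageSource;
var plainJpeg = await jpegRenderer.RenderAsync();

// Dispose of the elements once they are no longer needed.
jpegRenderer.Dispose();
rotationEffect.Dispose();
imageSource.Dispose();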

A practical example of an image processing graph

For example, let's say that an image selected by the user with the PhotoChooserTask is available in a System.IO.Stream. We apply two effects to that image, an antique effect followed by a rotation effect, and render the resulting image into a WriteableBitmap.

Figure: The original image, the image after the "Antique" effect, and the image after the "Rotation" effect.

The image processing graph will be set up like this:

Figure: The image processing graph for this example.

  1. A StreamImageSource is created to use a JPEG image in a System.IO.Stream.
  2. An AntiqueEffect is created and the previous StreamImageSource is passed in as the source.
  3. A RotationEffect is created and the previous AntiqueEffect is passed in as the source. This is an example of chaining several effects that are processed in sequence to produce the end result.
  4. A WriteableBitmapRenderer is created and the previous RotationEffect is passed in as the source, along with a WriteableBitmap to render into.
  5. The method RenderAsync on the renderer is called, which results in a WriteableBitmap that contains the processed image.
  6. The objects are disposed.

Here's the same example as written in C#:

// Assumes "stream" is the System.IO.Stream containing the JPEG image and
// "writeableBitmap" is the WriteableBitmap to render into.
using (var imageSource = new StreamImageSource(stream))
using (var antiqueEffect = new AntiqueEffect(imageSource))
using (var rotationEffect = new RotationEffect(antiqueEffect) { RotationAngle = 35.0 })
using (var renderer = new WriteableBitmapRenderer(rotationEffect, writeableBitmap))
{
    // The processed image is rendered into writeableBitmap, which RenderAsync also returns.
    await renderer.RenderAsync();
}

While the asynchronous rendering operation is running, you are free to modify the properties of the effects in your rendering chain. However, the property values in effect at the start of the rendering operation are the ones used. This is by design and protects against unintended results.

After the rendering operation is complete, the app is free to start the rendering operation again after changing the properties on any of the objects in the image processing graph, for example, the angle of the RotationEffect. To see the new result, just call RenderAsync on the renderer again.
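For instance, if the effect and renderer from the example above are kept alive instead of being disposed in a using chain, a new rendering with a different angle is just a property change and another call to RenderAsync:

// Change a property on one element of the graph and render again.
// Assumes rotationEffect and renderer from the example above have not been disposed yet.
rotationEffect.RotationAngle = 90.0;
await renderer.RenderAsync();   // writeableBitmap now contains the newly rendered result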

The result can be found in the WriteableBitmap that was passed into the WriteableBitmapRenderer. It is also returned by RenderAsync as an IAsyncOperation, so the app could pass that IAsyncOperation to another part of the app as a "future result," without also having to track the original WriteableBitmap object.
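For example, the pending operation itself can be handed around and awaited later; this minimal sketch assumes the relevant namespaces (such as Windows.Foundation) are imported.

// Start the rendering and keep the pending operation as a "future result".
IAsyncOperation<WriteableBitmap> pendingRender = renderer.RenderAsync();

// ...elsewhere in the app, await the operation to get the rendered bitmap.
WriteableBitmap renderedBitmap = await pendingRender;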

Also note that, in this example, the objects involved are created and disposed of using a chain of using statements. This is a good practice when the image processing scenario is simple and self-contained. However, as long as the objects are properly disposed of when not needed, they can just as well be kept as class members or in collections and be reused for multiple renderings.
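For instance, a small wrapper class could own the graph elements, reuse them for repeated renderings, and dispose of them when it is itself disposed. This is only a sketch: the class and member names are illustrative, not part of the SDK, and the relevant Lumia Imaging SDK and Windows namespaces are assumed to be imported.

// Illustrative wrapper that keeps the graph elements as members for repeated renderings.
public sealed class RotatedPreviewRenderer : IDisposable
{
    private readonly StreamImageSource _imageSource;
    private readonly RotationEffect _rotationEffect;
    private readonly WriteableBitmapRenderer _renderer;

    public RotatedPreviewRenderer(Stream jpegStream, WriteableBitmap targetBitmap)
    {
        _imageSource = new StreamImageSource(jpegStream);
        _rotationEffect = new RotationEffect(_imageSource);
        _renderer = new WriteableBitmapRenderer(_rotationEffect, targetBitmap);
    }

    public IAsyncOperation<WriteableBitmap> RenderAsync(double rotationAngle)
    {
        _rotationEffect.RotationAngle = rotationAngle;
        return _renderer.RenderAsync();
    }

    public void Dispose()
    {
        _renderer.Dispose();
        _rotationEffect.Dispose();
        _imageSource.Dispose();
    }
}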