Effects

Effects process the image in some way, and together with sources and renderers they constitute the building blocks of an image processing graph. Effects implement both IImageConsumer and IImageProvider, and can thus be chained with other effects, a source, or a renderer. Examples include RotationEffect, HdrEffect, InteractiveForegroundSegmenter, and LensBlurEffect. A few image processing types that do not quite fit this model are also covered in this document: AutoFixAnalyzer, ImageAligner and ObjectExtractor.

Setting slider values to match the valid range of a property of an effect

Some effects have properties that modify their effect on an image. Sometimes such a property is an enum, while other times it is a value on a range from a minimum to a maximum. Developers will often want to match the range of a slider in their application to that range.

To help with this task, effects that have numeric properties expose a PropertyDescriptions property, which gives the developer access to a Dictionary containing a PropertyDescription for each numeric property of the effect.

Here is an example that matches the range of BlurEffect's KernelSize property to a Slider in the application.

var effect = new BlurEffect();

var propertyName = nameof(effect.KernelSize);
var effectPropertyDescriptions = effect as IPropertyDescriptions;
if (effectPropertyDescriptions != null) 
{
    var propertyDescription = effectPropertyDescriptions.PropertyDescriptions[propertyName];
    
    m_effectSlider.Minimum = (int)propertyDescription.MinValue;
    m_effectSlider.Maximum = (int)propertyDescription.MaxValue;
    m_effectSlider.Value = (int)propertyDescription.DefaultValue;
}

AutoFixAnalyzer

AutoFixAnalyzer analyzes an image and suggests how to improve it. It can be used in combination with TemperatureAndTintEffect and/or SaturationLightnessEffect. A call to AnalyzeAsync() analyzes the image and returns saturation and lightness curves, and temperature and tint parameter values. The caller can then choose to apply any combination of these to the image. In the sample below, all the returned parameters are applied to the analyzed image, and the result is rendered to a JPEG buffer.

using (var imageSource = new StorageFileImageSource(image))
using (var saturationLightnessEffect = new SaturationLightnessEffect(imageSource))
using (var temperatureAndTint = new TemperatureAndTintEffect(saturationLightnessEffect))
using (var renderer = new JpegRenderer(temperatureAndTint))
{
    var analyzer = new AutoFixAnalyzer(imageSource);
    AutoFixAnalyzerResult autoFixSuggestions = await analyzer.AnalyzeAsync();

    saturationLightnessEffect.SaturationCurve = autoFixSuggestions.SaturationCurve;
    saturationLightnessEffect.LightnessCurve = autoFixSuggestions.LightnessCurve;

    temperatureAndTint.Temperature = autoFixSuggestions.TemperatureParameter;
    temperatureAndTint.Tint = autoFixSuggestions.TintParameter;

    var buffer = await renderer.RenderAsync();
}
[Images: Image | Result]

Mapping auto fix curves to slider values

Additional methods in the Curve class allow mapping the saturation and lightness results of the analyzer to a slider value. This is done by defining two extreme curves and a slider value that interpolates between them.

In the case of the AutoFixAnalyzer, the reverse operation is also needed, because the AutoFixAnalyzer returns curves for saturation and lightness. Again, methods in the Curve class can be used to find the interpolation value between the extreme curves that produces the closest matching curve.

var lowLightnessCurve = new Curve(CurveInterpolation.NaturalCubicSpline);
lowLightnessCurve.SetPoint(148, 108);

var highLightnessCurve = new Curve(CurveInterpolation.NaturalCubicSpline);
highLightnessCurve.SetPoint(108, 148);

var minMaxLightnessPair = new CurveMinMaxPair(lowLightnessCurve, highLightnessCurve);

using (var imageSource = new StorageFileImageSource(image))
{
    var analyzer = new AutoFixAnalyzer(imageSource);
    AutoFixAnalyzerResult autoFixSuggestions = await analyzer.AnalyzeAsync();

    var suggestedSliderValue = Curve.EstimateInterpolationFactor(autoFixSuggestions.LightnessCurve, minMaxLightnessPair);

    await RenderSourceWithLightnessValue(imageSource, minMaxLightnessPair, suggestedSliderValue);

    // Simulate user interaction.
    var fakeSliderValue = 0.8;
    await RenderSourceWithLightnessValue(imageSource, minMaxLightnessPair, fakeSliderValue);
}

...

private async Task RenderSourceWithLightnessValue(IImageProvider source, CurveMinMaxPair minMaxLightnessPair, double lightnessValue)
{
    var userModifiedLightnessCurve = Curve.Interpolate(minMaxLightnessPair, lightnessValue);

    var saturationLightnessEffect = new SaturationLightnessEffect(source);
    saturationLightnessEffect.LightnessCurve = userModifiedLightnessCurve;

    using (var renderer = new JpegRenderer(saturationLightnessEffect))
    {
        var buffer = await renderer.RenderAsync();
    }
}
[Table: Lightness value | Generated curve | Rendered result; rows: 0.96 (AutoFixAnalyzer suggested value) and 0.3 (user interaction value)]

Blend Effect

The BlendEffect takes a background image source and blends it with a foreground image source.

If an alpha channel is present in the foreground image, it is used to combine the result of the blend effect with the original foreground image. A grayscale image can be provided as a separate alpha mask, and can then be used instead of the alpha channel in the foreground image. A level property functions as a global alpha value, and is multiplied with the alpha value for each pixel to produce the actual value used.
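As a minimal sketch of the global alpha described above, assuming the property is named Level (the property name is inferred from the description, not confirmed by the original sample):

```csharp
// Sketch: blend the foreground at half strength.
// Level is assumed to be the name of the global alpha property.
using (var backgroundSource = new StorageFileImageSource(backgroundFile))
using (var foregroundSource = new StorageFileImageSource(foregroundFile))
using (var blendEffect = new BlendEffect(backgroundSource, foregroundSource, BlendFunction.Normal))
using (var renderer = new JpegRenderer(blendEffect))
{
    blendEffect.Level = 0.5; // multiplied with each pixel's alpha value
    var buffer = await renderer.RenderAsync();
}
```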

The following code sample blends an image consisting of a black frame around an otherwise transparent image onto another image.

using (var backgroundSource = new StorageFileImageSource(backgroundFile))
using (var foregroundSource = new StorageFileImageSource(foregroundFile))
using (var blendEffect = new BlendEffect(backgroundSource, foregroundSource, BlendFunction.Normal))
using (var renderer = new BitmapRenderer(blendEffect))
{
    var buffer = await renderer.RenderAsync();
}
[Images: Background image | Foreground image | Blend result]

The BlendEffect can also work on an image and a separate alpha mask, represented by a grayscale image. This is useful for several reasons:

  • The GradientImageSource can be used to generate grayscale masks.
  • The output of the InteractiveForegroundSegmenter is a black and white mask, which can be used directly as an input to the blend effect.
  • Conserving memory. See the description of the AlphaToGrayscaleEffect below for an explanation of how to save memory when blending is done repeatedly with the same image, or with a set of images that have an alpha mask.

The following code sample demonstrates using a foreground image without alpha channel, and a separate grayscale image as an alpha mask.

using (var backgroundSource = new StorageFileImageSource(backgroundFile))
using (var foregroundImageSource = new StorageFileImageSource(foregroundImageFile))
using (var foregroundMaskSource = new StorageFileImageSource(foregroundMaskFile))
using (var blendEffect = new BlendEffect(backgroundSource, foregroundImageSource))
using (var renderer = new BitmapRenderer(blendEffect))
{
    blendEffect.MaskSource = foregroundMaskSource;
    blendEffect.BlendFunction = BlendFunction.Normal;
    var buffer = await renderer.RenderAsync();
}
[Images: Background image | Foreground image | Foreground mask | Blend result]

Local Blending

Blending can also be done into a target area of the background source. The TargetArea is specified with a Rect object, using the unit coordinate system of the background image, i.e. the top left corner of the background image is at (0, 0), and the bottom right corner is at (1, 1). The area can also be rotated around its center, by setting TargetAreaRotation to the desired angle of counter clockwise rotation.

There is also a TargetOutputOption property that is used to control how the foreground is rendered into the target area, using any of the below values:

  • Stretch: the foreground image is resized to fit the target area exactly.
  • PreserveAspectRatio: the foreground image is blended into the target area centered, with its original aspect ratio intact.
  • PreserveSize: the size portion of the target area is ignored, and the foreground image is blended at its original size.

The following code uses the same input images as the example above, but blends into a smaller area.

using (var backgroundSource = new StorageFileImageSource(backgroundFile))
using (var foregroundImageSource = new StorageFileImageSource(foregroundImageFile))
using (var foregroundMaskSource = new StorageFileImageSource(foregroundMaskFile))
using (var blendEffect = new BlendEffect(backgroundSource, foregroundImageSource))
using (var renderer = new BitmapRenderer(blendEffect))
{
    blendEffect.MaskSource = foregroundMaskSource;
    blendEffect.BlendFunction = BlendFunction.Normal;
    blendEffect.TargetArea = new Rect(0.33, -0.2, 0.5, 0.5);
    blendEffect.TargetAreaRotation = 180;
    blendEffect.TargetOutputOption = OutputOption.PreserveAspectRatio;

    var buffer = await renderer.RenderAsync();
}
[Image: Blend to target area]

Caching Effect

The CachingEffect flattens the source graph into a bitmap, and caches that until the user calls Invalidate(). This helps the user to be explicit about avoiding costly re-rendering.

An effect graph may contain an arbitrary number of effects that are applied to the source image every time it is rendered. In the example below, we blend the results of two graphs ending with effectA and effectB, both connected to the same source. In this case, expensiveEffect is applied twice: once to produce the result of effectA, and once again to produce the result of effectB.

[Diagram: effect graph before caching]

To make this more efficient, the result of expensiveEffect can be cached using CachingEffect, which keeps the result of the applied effect in memory. Now expensiveEffect is only applied once when blending A and B. The cached result can be refreshed by calling Invalidate().

[Diagram: effect graph with CachingEffect]

using (var imageSource = new StorageFileImageSource(image))
using (var blurEffect = new BlurEffect(imageSource))
using (var cachingEffect = new CachingEffect(blurEffect))
using (var brightnessEffect = new BrightnessEffect(blurEffect))
using (var grayscaleEffect = new GrayscaleEffect(blurEffect))
using (var blendEffect = new BlendEffect(brightnessEffect, grayscaleEffect, BlendFunction.Multiply))
using (var renderer = new BitmapRenderer(blendEffect))
{
    var stopwatch = new Stopwatch();

    // Uncached render
    stopwatch.Start();
    var buffer = await renderer.RenderAsync();
    stopwatch.Stop();

    var unCachedRenderTimeMs = stopwatch.ElapsedMilliseconds;

    // Cached render
    stopwatch.Reset();
    brightnessEffect.Source = cachingEffect;
    grayscaleEffect.Source = cachingEffect;

    stopwatch.Start();
    buffer = await renderer.RenderAsync();
    stopwatch.Stop();

    var cachedRenderTimeMs = stopwatch.ElapsedMilliseconds;

    Assert.IsTrue(cachedRenderTimeMs < unCachedRenderTimeMs);
}

Alpha to Grayscale Effect

The AlphaToGrayscaleEffect copies the alpha channel to the color channels, resulting in a grayscale representation of the alpha channel; the alpha channel itself is set to 255. This effect can be used to split an image that contains alpha information (e.g. coming from a PNG file) into an image with color information only and a grayscale mask. Used as a preprocessing step, these two images can later serve as inputs, e.g. to the blend effect as described above, saving memory since JPEG files can be processed much more efficiently than PNG files.

using (var imageSource = new StorageFileImageSource(pngFile))
using (var alphaToGrayscale = new AlphaToGrayscaleEffect(imageSource))
using (var jpegRenderer = new JpegRenderer())
{
    jpegRenderer.Source = imageSource;
    var imageBuffer = await jpegRenderer.RenderAsync();

    jpegRenderer.Source = alphaToGrayscale;
    var maskBuffer = await jpegRenderer.RenderAsync();
}  

Hue Saturation Lightness Effect

This effect can be used when changing the hue to correct or adjust the color tone in an image. In addition to changing the hue, the lightness and saturation can be raised or lowered for any particular hue. The HueSaturationLightnessEffect works with three curve properties:

  • HueCurve maps hue to hue. The x-axis is restricted to the values [0, 255], which represent the hue range [0, 359]. The values on the y-axis are restricted to [0, 510], representing the hue range [0, 718].
  • SaturationCurve maps hue to a change in saturation. The x-axis is restricted to the values [0, 255], which represent the hue range [0, 359]. On the y-axis, the permitted range is [-255, 255], where 0 represents no change, 255 represents a maximum increase in saturation, and -255 represents a maximum decrease in saturation, producing a black-and-white image.
  • LightnessCurve maps hue to a change in lightness. The x-axis is restricted to the values [0, 255], which represent the hue range [0, 359]. On the y-axis, the permitted range is [-255, 255], where 0 represents no change, 255 represents a maximum increase in lightness, and -255 represents a maximum decrease in lightness.

Setting a curve property to null will leave that property unchanged. Null is the default value for all the properties.

In the sample below, we adjust hues in the green range to become blue, causing the green tones of the wall to turn blue.

using (var source = new StorageFileImageSource(sourceFile))
using (var hueSaturationLightness = new HueSaturationLightnessEffect(source))
using (var renderer = new BitmapRenderer(hueSaturationLightness))
{
    var hueCurve = new Curve();

    hueCurve.SetPoint(56, 56);
    hueCurve.SetPoint(57, 140);
    hueCurve.SetPoint(70, 160);
    hueCurve.SetPoint(71, 71);

    hueSaturationLightness.HueCurve = hueCurve;

    var buffer = await renderer.RenderAsync();
}
[Images: Original image | Result]

The hue curve maps an old hue to a new hue. Here, we map most of the range using the identity curve, but the range corresponding to green is transposed into the blue range.

[Image: Hue curve]

In the sample below, we increase the saturation for green hues to give the image more vibrant colors.

using (var source = new StorageFileImageSource(sourceFile))
using (var hueSaturationLightness = new HueSaturationLightnessEffect(source))
using (var renderer = new BitmapRenderer(hueSaturationLightness))
{
    var saturationCurve = new Curve();

    saturationCurve.SetPoint(25, 0);
    saturationCurve.SetPoint(40, 255);
    saturationCurve.SetPoint(80, 255);
    saturationCurve.SetPoint(95, 0);
    saturationCurve.SetPoint(255, 0);

    hueSaturationLightness.SaturationCurve = saturationCurve;
    var buffer = await renderer.RenderAsync();
}
[Images: Original image | Result after increasing saturation for green hues]

The saturation is increased for green hues (hue and lightness remain unchanged).

[Image: Saturation curve]

Reframing Effect

The ReframingEffect lets the user freely reframe the image by effectively specifying a new "canvas". A reframing area is placed over the image by specifying a rectangle, a rotation, and optionally a pivot point which otherwise defaults to the center of the reframing area. This rectangle can extend outside the current boundaries of the image, and any such area will be rendered in transparent black.

Here is a code sample that performs three reframing operations on an image:

  1. The image is reframed as a close up around the girl in the image, by setting up a ReframingArea.
  2. The area from step 1 is reframed, rotating the reframing area by 25 degrees using the center of the reframing area as the pivot point.
  3. The area from step 1 is reframed, rotating the reframing area by 25 degrees, this time using the top left corner of the reframing area as the pivot point.

using (var imageSource = new StorageFileImageSource(storageFile))
using (var reframingEffect = new ReframingEffect(imageSource))
using (var renderer = new BitmapRenderer(reframingEffect))
{
    reframingEffect.ReframingArea = new Windows.Foundation.Rect(200, 20, 750, 950);
    reframingEffect.Angle = 0;

    var buffer1 = await renderer.RenderAsync();

    reframingEffect.Angle = 25;
    var buffer2 = await renderer.RenderAsync();

    reframingEffect.PivotPoint = new Windows.Foundation.Point(0, 0);
    var buffer3 = await renderer.RenderAsync();
}

[Images: Original image | First reframing | Second reframing | Third reframing]

For simple crop operations within the boundaries of the original image, use the CropEffect. To rotate the image by an arbitrary angle while resizing the "canvas" so that the entire original image is shown, use the RotationEffect.
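As a minimal sketch of those two simpler alternatives, assuming CropArea and RotationAngle are the relevant property names (they are not shown in the original samples):

```csharp
// Sketch: a simple crop within the original image boundaries.
// CropEffect.CropArea is an assumed property name.
using (var source = new StorageFileImageSource(storageFile))
using (var cropEffect = new CropEffect(source))
using (var renderer = new BitmapRenderer(cropEffect))
{
    cropEffect.CropArea = new Windows.Foundation.Rect(200, 20, 750, 950);
    var croppedBuffer = await renderer.RenderAsync();
}

// Sketch: rotate by an arbitrary angle; the "canvas" is resized so the
// entire original image stays visible. RotationAngle is an assumed property name.
using (var source = new StorageFileImageSource(storageFile))
using (var rotationEffect = new RotationEffect(source) { RotationAngle = 25.0 })
using (var renderer = new BitmapRenderer(rotationEffect))
{
    var rotatedBuffer = await renderer.RenderAsync();
}
```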

Saturation Lightness Effect

This effect can be used to change the lightness or adjust the saturation of the colors in the image. Lightness can be lowered to show fewer details or raised to show more details. Increase the saturation for more vivid colors, or decrease it to zero for a black and white effect. The SaturationLightnessEffect works with two curve properties:

  • LightnessCurve: The x-axis refers to the current lightness values, and the matching y-axis values become the new lightness values. Both axes have the range [0, 255]. To leave the lightness unchanged, use the identity curve, i.e. y = x.
  • SaturationCurve: The x-axis refers to the current saturation values, and the matching y-axis values become the new saturation values. Both axes have the range [0, 255]. To leave the saturation unchanged, use the identity curve, i.e. y = x.

Setting a curve property to null will leave that property unchanged. Null is the default value for all the properties.

In the sample below, we use the lightness curve to increase the contrast in the shadows, and also to boost the saturation somewhat using the saturation curve. Note the use of Curve.CombineIntervals to force the upper half of the lightness curve to the identity curve.

using (var source = new StorageFileImageSource(sourceFile))
using (var saturationLightness = new SaturationLightnessEffect(source))
using (var renderer = new BitmapRenderer(saturationLightness))
{
    var saturationCurve = new Curve(CurveInterpolation.NaturalCubicSpline);
    saturationCurve.SetPoint(30, 70);
    saturationCurve.SetPoint(90, 110);

    var lightnessCurve = new Curve();
    lightnessCurve.SetPoint(110, 136);

    saturationLightness.LightnessCurve = lightnessCurve;
    saturationLightness.SaturationCurve = saturationCurve;

    var buffer = await renderer.RenderAsync();
}
[Images: Original | Result | Lightness curve | Saturation curve]

HDR Effect

The HdrEffect applies local tone mapping to a single image to achieve an HDR-like effect. It can be used to apply an "auto fix" to the image, resulting in improved image quality for the majority of images. It can also be used to apply "artistic HDR" to the image.

The Strength property controls how strong the local tone mapping effect will be on the image. With a higher strength setting more noise is introduced, and this can be suppressed using the NoiseSuppression property. If strength is set to a high value and noise suppression is kept low, the effect will produce dramatic, surrealistic images.

The effect also has properties to control global Gamma and Saturation. For both of these properties, 1.0 implies no change. For saturation, values lower than 1 will decrease, and values greater than 1 will increase the saturation in the final image. For gamma, values lower than 1 will produce a lighter image, and values greater than 1 will produce a darker image.
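The Gamma and Saturation properties described above can be sketched as follows (the specific values are illustrative only, not taken from the original sample):

```csharp
// Sketch: global gamma and saturation adjustments on top of the HDR effect.
using (var source = new StorageFileImageSource(sourceFile))
using (var hdrEffect = new HdrEffect(source))
using (var renderer = new BitmapRenderer(hdrEffect))
{
    hdrEffect.Gamma = 0.8;      // < 1.0 produces a lighter image
    hdrEffect.Saturation = 1.2; // > 1.0 increases saturation
    var buffer = await renderer.RenderAsync();
}
```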

The following example demonstrates how the default settings produce an improved image, and how modifying the settings can result in a much more dramatic image:

using (var source = new StorageFileImageSource(sourceFile))
using (var hdrEffect = new HdrEffect(source))
using (var renderer = new BitmapRenderer(hdrEffect))
{
    var improvedBuffer = await renderer.RenderAsync();

    hdrEffect.Strength = 0.9;
    hdrEffect.NoiseSuppression = 0.01;
    var artisticHdrBuffer = await renderer.RenderAsync();
}
[Images: Original image | Image improved with HDR | Artistic HDR]

Image Aligner

The ImageAligner is used to align a series of images that differ by a small movement, e.g. a series of images taken in burst capture mode. Alignment works for small movements only, for example those that occur when the user tries to hold the camera still, and quickly degrades if the images move too much. It also requires constant or near-constant exposure settings.

Start the alignment by assigning a list of image sources to the Sources property. Optionally, the ReferenceSource property can be set to specify which image in the list serves as the reference image in the alignment process; the other images are then modified to align with this image. If the property is not set, or is explicitly set to null, the ReferenceSource defaults to the middle element in the source list.

When the sources are set, you can call the CanAlignAsync() method to find out if it is possible to align a particular image source. One or more images may fail to align without the whole alignment process failing. If a source can be aligned, an image source for the aligned image is retrieved by calling AlignAsync(). This method will throw an exception if it is called for a source that cannot be aligned.
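The per-source check can be sketched as follows. Note that the parameter of CanAlignAsync is an assumption made for illustration; consult the API reference for the exact signature.

```csharp
// Sketch: probe each source before aligning the full set.
// CanAlignAsync's parameter is assumed; sources that fail to align
// come back as null from AlignAsync() (see the sample below).
using (var aligner = new ImageAligner())
{
    aligner.Sources = unalignedSources;

    foreach (var source in unalignedSources)
    {
        bool canAlign = await aligner.CanAlignAsync(source);
        Debug.WriteLine(canAlign ? "alignable" : "will be skipped");
    }

    var alignedSources = await aligner.AlignAsync();
}
```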

The example below tries to align a list of images, using the second source as a reference, and saves successfully aligned sources. The input and output are visualized as animated GIF images. See the documentation on the GifRenderer for information about how to render animated GIFs.

using (var aligner = new ImageAligner())
using (var renderer = new JpegRenderer())
{
    aligner.Sources = unalignedSources;
    aligner.ReferenceSource = unalignedSources[1];

    var alignedSources = await aligner.AlignAsync();

    foreach (var alignedSource in alignedSources)
    {
        if (alignedSource != null)
        {
            renderer.Source = alignedSource;
            var alignedBuffer = await renderer.RenderAsync();
            Save(alignedBuffer);
        }
    }
}
[Animated GIFs: Unaligned images | Aligned images]

Interactive Foreground Segmenter

The InteractiveForegroundSegmenter segments the image into foreground and background based on annotations to the image provided by the end-user.

As input, InteractiveForegroundSegmenter takes the image to segment and an annotation image where representative parts of the foreground and background areas in the image have been marked using the foreground and background colors that can be set on the object. Using these annotations, it segments the image and generates a mask where the foreground is white and the background is black.

Here is an example that uses the interactive foreground segmenter and blend effect to adjust the hue of the foreground of the image. The user provides us with a "UserAnnotations" image, where the red area represents the foreground of the photo, and blue represents the background.

[Images: Main image | User annotations | Overlay demo | Result mask | Final result]

Here is the code used to produce the final result above, assuming the user annotations are loaded with a StorageFileImageSource:

using (var source = new StorageFileImageSource(MainImage))
using (var annotations = new StorageFileImageSource(UserAnnotations))
using (var redCanvas = new ColorImageSource(new Size(300, 370), Color.FromArgb(255, 255, 0, 0)))
using (var segmenter = new InteractiveForegroundSegmenter(source))
using (var blendEffect = new BlendEffect(source, redCanvas, segmenter, BlendFunction.Colorburn, 0.7))
using (var renderer = new JpegRenderer(blendEffect))
{
    segmenter.AnnotationsSource = annotations;
    segmenter.ForegroundColor = Color.FromArgb(255, 251, 0, 0);
    segmenter.BackgroundColor = Color.FromArgb(255, 0, 0, 250);

    var buffer = await renderer.RenderAsync();
}

One could also use a WriteableBitmap to allow the user to draw on a canvas and then use the resulting image as annotations. Here is a code sample that demonstrates creating a WriteableBitmap, drawing on it, and finally using it as an annotations source:

WriteableBitmap bmp = new WriteableBitmap(100, 100);
bmp.DrawLine(20, 10, 20, 90, System.Windows.Media.Color.FromArgb(foreground.A, foreground.R, foreground.G, foreground.B));
bmp.DrawLine(50, 30, 50, 70, System.Windows.Media.Color.FromArgb(background.A, background.R, background.G, background.B));
bmp.DrawLine(80, 10, 80, 90, System.Windows.Media.Color.FromArgb(foreground.A, foreground.R, foreground.G, foreground.B));

Bitmap userAnnotations = bmp.AsBitmap();

using (var annotations = new BitmapImageSource(userAnnotations))
{
    ...
}

Note: WriteableBitmap's extension method AsBitmap can be found in the Lumia.InteropServices.WindowsRuntime namespace. The DrawLine extension method is part of the WriteableBitmapEx library.

Segmentation is usually an iterative process, meaning that the user will start with a crude version of annotations and inspect the output that the InteractiveForegroundSegmenter generates. The user will then find the areas where the segmentation could be improved, add more annotations to the original annotations image, and render it again. This process continues until the user is satisfied with the result.

Note that the segmentation process can fail if there is not enough information within the AnnotationsSource image. The bare minimums are one pixel in each foreground and background color; however, usually more will be required. If the segmentation cannot be completed successfully, an ArgumentException will be thrown with the message "Segmentation could not complete successfully. Try adding more annotations to AnnotationsSource."

Segmentation is an expensive operation, and may not be feasible on all images with default parameters. To allow processing even of large images, use the Quality property. It affects the working size of the algorithm, so a lower quality setting improves both the memory consumption and the processing time of the effect.
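A sketch of lowering the working size for a large image; the value 0.5 is illustrative only:

```csharp
// Sketch: trade mask precision for speed and memory on a large image.
using (var source = new StorageFileImageSource(MainImage))
using (var annotations = new StorageFileImageSource(UserAnnotations))
using (var segmenter = new InteractiveForegroundSegmenter(source))
using (var renderer = new JpegRenderer(segmenter))
{
    segmenter.AnnotationsSource = annotations;
    segmenter.Quality = 0.5; // lower working size -> faster, less memory
    var maskBuffer = await renderer.RenderAsync();
}
```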

Lens Blur Effect

The LensBlurEffect applies blur to an image in a way similar to how out-of-focus areas are rendered by a lens, an effect also known as bokeh. The effect supports setting kernels corresponding to different aperture shapes. There are several predefined shapes included in the SDK (circle, hexagon, flower, star, and heart), and custom, user-defined shapes are also supported.

Lens blur can be applied to the whole image, or alternatively, the user can specify a focus area where no blur will be applied. Different areas of the image can be blurred with different kernels. The user specifies this, and optionally also a focus area, with the kernel map.

A kernel map is a grayscale image where each pixel value represents the index of the kernel that will be applied to the corresponding image pixel. The expected values depend on the KernelMapType setting: the value reserved for the focus area is either 0 (Continuous) or 255 (ForegroundMask). For example, if the center of the image should not be blurred and the KernelMapType is set to ForegroundMask, the center of the kernel map image should have the value 255. If a pixel should be blurred with the first kernel provided, its kernel map value should be 0; for the second kernel, 1; and so forth. LensBlurEffect takes an IImageProvider as its KernelMap input, allowing the developer to provide the kernel map from a wide range of sources; it can also be generated with a GradientImageSource or BufferImageSource.
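To make the indexing concrete, here is a sketch with two kernels and a ForegroundMask-type kernel map. The Kernels property and the enum value name LensBlurKernelMapType.ForegroundMask are assumptions inferred from the description above, not confirmed names.

```csharp
// Sketch: kernel map pixels with value 255 stay in focus,
// value 0 selects the circle kernel, value 1 selects the hexagon kernel.
lensBlurEffect.Kernels = new List<ILensBlurKernel>
{
    new LensBlurPredefinedKernel(LensBlurPredefinedKernelShape.Circle, 15),  // map value 0
    new LensBlurPredefinedKernel(LensBlurPredefinedKernelShape.Hexagon, 30)  // map value 1
};
lensBlurEffect.KernelMapType = LensBlurKernelMapType.ForegroundMask;
lensBlurEffect.KernelMap = kernelMapSource; // any grayscale IImageProvider
```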

The following example applies the lens blur effect on the background of the image, while the foreground remains in focus. It uses a mask created by the interactive foreground segmenter as kernel map.

[Images: Main image | Image with annotations | Result]
using (var source = new StorageFileImageSource(mainImage))
using (var annotations = new StorageFileImageSource(userAnnotations))
using (var segmenter = new InteractiveForegroundSegmenter(source))
using (var lensBlurEffect = new LensBlurEffect(source, new LensBlurPredefinedKernel(LensBlurPredefinedKernelShape.Circle, 30) ))
using (var renderer = new JpegRenderer(lensBlurEffect))
{
    segmenter.AnnotationsSource = annotations;
    segmenter.ForegroundColor = Color.FromArgb(255, 251, 0, 0);
    segmenter.BackgroundColor = Color.FromArgb(255, 0, 0, 250);

    lensBlurEffect.KernelMap = segmenter;
    var buffer = await renderer.RenderAsync();
}

Special attention is required at the border between the focus area and the blurred area. Extra work is needed to make this area look natural, and the correct behavior largely depends on the context; specifically, on whether the border follows natural lines in the image, such as the outline of a person, or is arbitrary. LensBlurEffect lets you provide this information through the FocusAreaEdgeMirroring property, an enum with two options:

  • LensBlurFocusAreaEdgeMirroring.On should be used when the border between focus and blurred area follows some natural lines within the image.
  • LensBlurFocusAreaEdgeMirroring.Off should be used when the focus and blurred areas are arbitrary.

Here are some images that show the difference:

[Images, two rows: Main image | Kernel mask | Result with LensBlurFocusAreaEdgeMirroring.On | Result with LensBlurFocusAreaEdgeMirroring.Off (first row uses a segmented mask, second row a gradient mask)]

As you can see, both options have valid use cases, and it is up to the developer to decide which setting suits their scenario.

It should be noted that lens blur is an expensive operation, requiring far more resources than a normal BlurEffect. There is a reason for the increased complexity: the result of lens blur is of much higher quality and produces more photorealistic images. The cost can be regulated: the effect can do the bulk of its processing on a smaller image without significantly affecting the quality of the end result. The working size is controlled with the Quality property, allowing the effect to be applied even to large images; a lower quality setting reduces both the memory consumption and the processing time of the effect. The size of each kernel used by the effect is also scaled by the Quality property, so you do not need to adjust kernel sizes when changing the Quality of the LensBlurEffect. That said, a Quality setting below 1.0 is a compromise, and it will produce worse results in the blurred areas of the image.

Object Extractor

If we have an image with a mask that defines objects in the image, obtained using the interactive foreground segmenter or by some other means, the foreground objects can be extracted and manipulated separately using the ObjectExtractor.

In the sample below, we use a mask to extract an object from an image. We then paste it onto a green background using the blend effect.

using (var source = new StorageFileImageSource(imageStorageFile))
using (var maskSource = new StorageFileImageSource(maskStorageFile))
using (var extractor = new ObjectExtractor(source, maskSource))
{
    var extractedObjects = await extractor.ExtractObjectsAsync();
    var objectRect = extractedObjects[0].ObjectRectangle;

    using (var blendEffect = new BlendEffect())
    using (var finalBackgroundSource = new ColorImageSource(new Size(objectRect.Width, objectRect.Height), Color.FromArgb(255, 130, 167, 97)))
    using (var jpegRenderer = new JpegRenderer(blendEffect))
    {
        blendEffect.Source = finalBackgroundSource;
        blendEffect.ForegroundSource = extractedObjects[0];

        var buffer = await jpegRenderer.RenderAsync();

    }
}
[Images: Image | Mask]
[Images: Extracted object | Extracted object | Result]