Posts Tagged 'Windows Forms'

C# How to: Image Colour Average

Article purpose

This article’s intention is to provide a discussion of the tasks involved in implementing Image Colour Averaging. Pixel colour averages are calculated from neighbouring pixels.

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the Sample Application

The sample source code associated with this article includes a Windows Forms based sample application. The sample application is provided with the intention of illustrating the concepts explored in this article. In addition, the sample application serves as a means of testing and replicating results.

By clicking the Load Image button users are able to select an input/source image from the local file system. On the right hand side of the screen various controls enable the user to control the implementation of colour averaging. The three controls labelled Red, Green and Blue relate to whether an individual colour component is to be included in calculating colour averages.

The filter intensity can be specified by selecting a filter size from the dropdown; specifying higher values will result in output images expressing more colour averaging intensity.

Additional image filter effects can be achieved through implementing colour component shifting/swapping. When colour components are shifted left the result will be:

  • Blue is set to the original value of the Red component.
  • Red is set to the original value of the Green component.
  • Green is set to the original value of the Blue component.

When colour components are shifted right the result will be:

  • Red is set to the original value of the Blue component
  • Blue is set to the original value of the Green component
  • Green is set to the original value of the Red Component

Resulting images can be saved by the user to the local file system by clicking the Save Image button. The following image is a screenshot of the Image Colour Average sample application in action:

Image Colour Average Sample Application

Averaging Colours

In this article and the accompanying sample source code colour averaging is implemented on a per pixel basis. An average colour value is calculated based on a pixel’s neighbouring pixels’ colours. Determining neighbouring pixels in the sample source code has been implemented in much the same way as in conventional image filtering; the major difference being the absence of a fixed filter matrix/kernel.

Additional visual effects can be achieved through the various options/settings applied whilst calculating colour averages. Additional options include being able to specify which colour component averages to implement. Furthermore, colour components can be swapped/shifted around.

The sample source code implements the AverageColoursFilter extension method, targeting the Bitmap class. The extent or degree to which colour averaging will be evident in resulting images can be controlled by specifying different values for the matrixSize parameter. The matrixSize parameter in essence determines the number of neighbouring pixels involved in calculating an average colour.

The individual pixel colour components Red, Green and Blue can either be included or excluded when calculating averages. The three boolean method parameters applyBlue, applyGreen and applyRed determine an individual colour component’s inclusion in averaging calculations. If a colour component is to be excluded from averaging, the resulting image will instead express the original source/input image’s colour component.

The intensity of a specific colour component average can be applied to another colour component by means of swapping/shifting colour components, which is indicated through the shiftType method parameter.

The following code snippet provides the implementation of the AverageColoursFilter extension method:

public static Bitmap AverageColoursFilter(
                            this Bitmap sourceBitmap,  
                            int matrixSize,   
                            bool applyBlue = true, 
                            bool applyGreen = true, 
                            bool applyRed = true, 
                            ColorShiftType shiftType = 
                            ColorShiftType.None)  
{ 
    BitmapData sourceData =  
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly,  
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;
    int blue = 0;
    int green = 0;
    int red = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            blue = 0; green = 0; red = 0;

            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                    blue += pixelBuffer[calcOffset];
                    green += pixelBuffer[calcOffset + 1];
                    red += pixelBuffer[calcOffset + 2];
                }
            }

            blue = blue / matrixSize;
            green = green / matrixSize;
            red = red / matrixSize;

            if (applyBlue == false) { blue = pixelBuffer[byteOffset]; }
            if (applyGreen == false) { green = pixelBuffer[byteOffset + 1]; }
            if (applyRed == false) { red = pixelBuffer[byteOffset + 2]; }

            if (shiftType == ColorShiftType.None)
            {
                resultBuffer[byteOffset] = (byte)blue;
                resultBuffer[byteOffset + 1] = (byte)green;
                resultBuffer[byteOffset + 2] = (byte)red;
                resultBuffer[byteOffset + 3] = 255;
            }
            else if (shiftType == ColorShiftType.ShiftLeft)
            {
                resultBuffer[byteOffset] = (byte)green;
                resultBuffer[byteOffset + 1] = (byte)red;
                resultBuffer[byteOffset + 2] = (byte)blue;
                resultBuffer[byteOffset + 3] = 255;
            }
            else if (shiftType == ColorShiftType.ShiftRight)
            {
                resultBuffer[byteOffset] = (byte)red;
                resultBuffer[byteOffset + 1] = (byte)blue;
                resultBuffer[byteOffset + 2] = (byte)green;
                resultBuffer[byteOffset + 3] = 255;
            }
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

The definition of the ColorShiftType enumeration:

public enum ColorShiftType  
{
    None, 
    ShiftLeft, 
    ShiftRight 
}
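
The following is a brief usage sketch of the AverageColoursFilter extension method; the file paths and parameter values shown are illustrative assumptions and do not form part of the original sample application.

Bitmap sourceBitmap = new Bitmap(@"C:\Images\input.png");   // assumed input path

Bitmap averagedBitmap = sourceBitmap.AverageColoursFilter(
    matrixSize: 11,                      // 11x11 neighbourhood of pixels
    applyBlue: true,
    applyGreen: false,                   // Green keeps its original values
    applyRed: true,
    shiftType: ColorShiftType.ShiftLeft);

averagedBitmap.Save(@"C:\Images\output.png", ImageFormat.Png);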

Sample Images

The original image used in generating the sample images that form part of this article has been licensed under the Creative Commons Attribution-Share Alike license.

Original Image

Rose_Amber_Flush_20070601

Colour Average Blue Size 11

Colour Average Blue Size 11 Shift Left

Colour Average Blue Size 11 Shift Right

Colour Average Green Size 11 Shift Right

Colour Average Green, Blue Size 11

Colour Average Green, Blue Size 11 Shift Left

Colour Average Green, Blue Size 11 Shift Right

Colour Average Red Size 11

Colour Average Red Size 11 Shift Left

Colour Average Red, Blue Size 11

Colour Average Red, Blue Size 11 Shift Left

Colour Average Red, Green Size 11

Colour Average Red, Green Size 11 Shift Left

Colour Average Red, Green Size 11 Shift Right

Colour Average Red, Green, Blue Size 11

Colour Average Red, Green, Blue Size 11 Shift Left

Colour Average Red, Green, Blue Size 11 Shift Right

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; you can find links to them here:

C# How to: Generating Icons from Images

Article Purpose

This article illustrates the process of generating icon files (*.ico) from user specified input images. The accompanying sample source code implements a Windows Forms application, allowing for easy testing of the icon generation process.

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the sample Application

The Sample Application can be used to test/implement the concepts described in this article. The user interface enables the user to browse and select an image file from the file system, which loads as a scaled preview. In addition, the user can select an icon size from a list of standard dimensions: 16×16, 24×24, 32×32, 48×48, 64×64, 96×96 and 128×128 pixels. When a user clicks on the “Save Icon” button the sample application generates an icon in memory based on the specified size, converting and scaling the provided input image. If an icon was successfully generated, the in-memory representation will be saved to the file system, based on the filename and file path specified by the user.

The image below is a screenshot of the Image to Icon Generator application in action:

Image To Icon Generator

The source image features Bellis perennis, also known as the common European Daisy (see Wikipedia). The file is licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license. The original image can be downloaded from Wikipedia.

The resulting icon file generated by the sample application:

Generating Icons from Images

Scaling and Aspect Ratio

Icons conform to a set of standard dimensions, all of which equate to a square due to the width and height values being equal. A potential issue exists in that the specified source image might not have exact square dimensions. In other words, the width and height values of the specified source image might differ. The solution lies in creating a square image based on the specified source image. Consider the concept of a square canvas onto which the source image is drawn whilst maintaining its aspect ratio, implementing centre alignment both horizontally and vertically. Listed below is the implementation of an extension method defined as CopyToSquareCanvas, targeting the Bitmap class.

public static Bitmap CopyToSquareCanvas(this Bitmap sourceBitmap, Color canvasBackground)
{
    int maxSide = sourceBitmap.Width > sourceBitmap.Height ? sourceBitmap.Width : sourceBitmap.Height;

    Bitmap bitmapResult = new Bitmap(maxSide, maxSide, PixelFormat.Format32bppArgb);

    using (Graphics graphicsResult = Graphics.FromImage(bitmapResult))
    {
        graphicsResult.Clear(canvasBackground);

        // Centre the source on the square canvas.
        int xOffset = (maxSide - sourceBitmap.Width) / 2;
        int yOffset = (maxSide - sourceBitmap.Height) / 2;

        graphicsResult.DrawImage(sourceBitmap, new Point(xOffset, yOffset));
    }

    return bitmapResult;
}

The size of the resulting image is determined by the source image’s longest side, either width or height. To ensure middle alignment both vertically and horizontally the source image is drawn at an offset, determined by the additional buffer area added by the canvas.
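
As a worked example, given a hypothetical 300×200 pixel source bitmap the square canvas measures 300×300 pixels; centring the source then requires a horizontal offset of (300 - 300) / 2 = 0 pixels and a vertical offset of (300 - 200) / 2 = 50 pixels.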

Generating the Icon

Once we have created an image conforming to exact square dimensions the next step would be to scale said image to the desired icon size. A convenient method of quickly scaling the source image to icon dimensions comes in the form of creating a thumbnail image.

The sample source code defines the enumeration IconSizeDimensions, which serves to provide a developer friendly reference coupled with actual dimension values by means of specifying explicit enumeration values. Consider the following code snippet:

public enum IconSizeDimensions
{
    IconSize16x16Pixels = 16,
    IconSize24x24Pixels = 24,
    IconSize32x32Pixels = 32,
    IconSize48x48Pixels = 48,
    IconSize64x64Pixels = 64,
    IconSize96x96Pixels = 96,
    IconSize128x128Pixels = 128
}

The crux of this article and sample source code can be considered to be the definition of the CreateIcon extension method, which targets the Bitmap class. The definition is as follows:

public static Icon CreateIcon(this Bitmap sourceBitmap, IconSizeDimensions iconSize)
{
    Bitmap squareCanvas = sourceBitmap.CopyToSquareCanvas(Color.Transparent);
    squareCanvas = (Bitmap)squareCanvas.GetThumbnailImage((int)iconSize, (int)iconSize, null, new IntPtr());

    Icon iconResult = Icon.FromHandle(squareCanvas.GetHicon());

    return iconResult;
}

As discussed, the first step is to ensure that the source image conforms to square dimensions, implemented here by invoking the CopyToSquareCanvas extension method. Next the source code implements scaling by creating a thumbnail image of which the size is based on the specified IconSizeDimensions value. The GetHicon method returns a handle to an icon in the form of an IntPtr, which serves as a parameter to the static Icon.FromHandle method, which returns an instance of the Icon class.

The Implementation

When the user clicks the “Save” button the Sample Application will present the user with a file save dialog. After the user specifies a file name and file path the Sample Application creates a reference to the source image by casting the PictureBox’s Image property to type Bitmap.

private void btnSaveIcon_Click(object sender, EventArgs e)
{
    if (picSource.Image != null)
    {
        SaveFileDialog sfd = new SaveFileDialog();
        sfd.Title = "Specify a file name and file path";
        sfd.Filter = "Icon Files(*.ico)|*.ico"; 
        if (sfd.ShowDialog() == System.Windows.Forms.DialogResult.OK)
        {
            System.Drawing.Icon tempIcon = ((Bitmap)picSource.Image).CreateIcon(
                (IconSizeDimensions)cmbIconSize.SelectedItem);

            using (StreamWriter streamWriter = new StreamWriter(sfd.FileName, false))
            {
                tempIcon.Save(streamWriter.BaseStream);
                streamWriter.Flush();
                streamWriter.Close();
            }
        }
    }
}

When the CreateIcon extension method is invoked, the icon dimensions selected through the user interface are passed as a parameter. The last step performed involves persisting the in-memory icon data to the file system.

C# How to: Blending Bitmap images using colour filters

Article purpose

In this article we explore how to combine or blend two images by implementing various colour filters affecting how each image appears as part of the resulting blended image. The concepts detailed in this article are reinforced and easily reproduced by making use of the sample source code that accompanies this article.

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the sample Application

The provided sample source code builds a Windows Forms application which can be used to blend two separate image files read from the file system. The methods employed when applying blending operations on two images can be adjusted to a fine degree when using the sample application. In addition the sample application provides functionality allowing the user to save the resulting blended image and to save the exact colour filters/algorithms that were applied. Filters created and saved previously can be loaded from the file system, allowing users to apply the exact same filtering without having to reconfigure filtering as exposed through the application front end.

Here is a screenshot of the sample application in action:

Blending Bitmap images using colour filters

This scenario blends an image of an airplane with an image of a pathway in a forest. As a result of the colour filtering implemented when blending the two images, the output image appears to provide somewhat equal elements of both images, without expressing all the elements of the original images.

What is Colour Filtering?

In general terms a filter implies some mechanism capable of sorting or providing selective inclusion/exclusion. There are a vast number of filter algorithms in existence, each applying some form of logic in order to achieve a desired outcome.

In C#, colour filter algorithms targeting Bitmap images have the ability to modify or manipulate the colour values expressed by an image. The extent to which colour values change, as well as the level of detail involved, is determined by the algorithms/calculations performed when a filter operation becomes active.

An image can be thought of as being similar to a two dimensional array, with each array element representing the data required to express an individual pixel. Two dimensional arrays are similar to the concept of a grid defined by rows and columns. The rows of pixels represented by an image are required to all be of equal length. When thinking in terms of a two dimensional array or grid analogy it is likely to make sense that the number of rows and columns in fact translate to an image’s height and width.

An image’s colour data can be expressed in terms beyond rows and columns of pixels. Each pixel contained in an image defines a colour value. In this article we focus on 32-bit images of type ARGB.

Image Bit Depth and Pixel Colour components

Image bit depth is also referred to as the number of bits per pixel. Bits per pixel (Bpp) is an indicator of the amount of storage space required to store an image. A bit is the smallest unit of storage available in a storage medium; it can simply be described as having either a value of 1 or 0.

In C# the smallest unit of storage natively expressed comes in the form of the byte integral type. A byte value occupies 8 bits of storage; it is not possible in C# to express a value as a single bit. The .NET Framework provides functionality enabling developers to work with the concept of bit values, which in reality occupy far more than 1 bit of memory when implemented.

If we consider as an example an image formatted as 32 bits per pixel, as the name implies, 32 bits of storage are required for each pixel contained in the image. Knowing that a byte represents 8 bits, logic dictates that 32 bits are equal to four bytes, determined by dividing 32 bits by 8. Images formatted as 32 bits per pixel can therefore be considered as being encoded as 4 bytes per pixel: a 32 Bpp image requires 4 bytes to express a single pixel.

The data represented by a pixel’s bytes

32 Bpp equates to 4 bytes per pixel, each byte in turn representing a colour component, together capable of expressing a single colour for the underlying pixel. When an image format is referred to as 32 Bpp ARGB the four colour components contained in each pixel are: Red, Green, Blue and the Alpha component.

An Alpha component is an indication of a particular pixel’s transparency. Each colour component is represented by a byte value. The possible value range of a byte stretches from 0 to 255, inclusive, therefore a colour component can only contain a value within this range. A colour value is in fact a representation of that colour’s intensity associated with a single image pixel: a value of 0 indicates no intensity and 255 the highest possible intensity. When the Alpha component is set to a value of 255 it is an expression of no transparency. In the same fashion, when an Alpha component equates to 0 the associated pixel is considered completely transparent, thus negating whatever colour values are expressed by the remaining colour components.

Take note that not all image formats support transparency, similar to how certain formats can only represent grayscale or just black and white pixels. An image expressed only in terms of Red, Green and Blue colour values, containing no Alpha component, can be implemented as a 24-bit RGB image. Each pixel’s storage requirement is then 24 bits or 3 bytes, representing colour intensity values for Red, Green and Blue, each limited to a value ranging from 0 to 255.
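
As a quick illustration of the relationship between bits per pixel and bytes per pixel, the following minimal sketch inspects a loaded Bitmap; the file path is a hypothetical example and does not form part of the sample application.

using (Bitmap bitmap = new Bitmap(@"C:\Images\sample.png"))
{
    // GetPixelFormatSize returns the bit depth, e.g. 32 for Format32bppArgb.
    int bitsPerPixel = Image.GetPixelFormatSize(bitmap.PixelFormat);
    int bytesPerPixel = bitsPerPixel / 8;   // 32 bits / 8 = 4 bytes per pixel

    Console.WriteLine("{0} bits per pixel = {1} bytes per pixel",
                      bitsPerPixel, bytesPerPixel);
}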

Manipulating a Pixel’s individual colour components

In C# it is possible to manipulate the value of a pixel’s individual colour components. Updating the value expressed by a colour component will most likely result in the colour value expressed by the pixel as a whole also changing. The resulting change affects the colour component’s intensity. As an example, if you were to double the value of the Blue colour component the result will most likely be that the associated pixel will represent a colour having twice the intensity of Blue when compared to the previous value.

When you manipulate colour components you are in effect applying a colour filter to the underlying image. Iterating the values and performing updates based on a calculation or a conditional check qualifies as an algorithm, in other words an image/colour filter.
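
As a minimal sketch of the idea, the snippet below doubles the Blue component of a single pixel using GetPixel/SetPixel; the sample application itself performs this kind of manipulation on raw byte buffers for performance, and the file path shown is purely illustrative.

using (Bitmap bitmap = new Bitmap(@"C:\Images\sample.png"))
{
    Color original = bitmap.GetPixel(0, 0);

    // Double the Blue intensity, clamping to the valid 0..255 range.
    int blue = Math.Min(255, original.B * 2);

    bitmap.SetPixel(0, 0, Color.FromArgb(original.A, original.R, original.G, blue));
}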

Retrieving Image Colour components as an array of bytes

As a result of the .NET Framework’s memory handling infrastructure and the implementation of the Garbage Collector, it is likely that the memory address assigned to a variable when instantiated could change during the variable’s lifetime and scope, not necessarily reflecting the same address in memory when going out of scope. It is the responsibility of the Garbage Collector to update a variable’s memory reference when an operation performed by the Garbage Collector results in values kept in memory moving to a new address.

When accessing an image’s underlying data expressed as a byte array we require a mechanism to signal the Garbage Collector that an additional memory reference exists, which might not be updated when values shift in memory to a new address.

The Bitmap class provides the LockBits method; when invoked, the values representing pixel data will not be moved in memory by the Garbage Collector until the values are unlocked by invoking the UnlockBits method, also defined by the Bitmap class.

The LockBits method defines a return value of type BitmapData, which contains data related to the lock operation. When invoking the UnlockBits method you are required to pass the BitmapData object returned when you invoked LockBits.

The LockBits method provides an overloaded implementation allowing the calling code to specify only a certain part of a Bitmap to lock in memory, expressed as a Rectangle.

The sample code that accompanies this article updates the entire Bitmap and therefore specifies the portion to lock as a Rectangle equal in size to the whole Bitmap.

The BitmapData class defines the Scan0 property, of type IntPtr. Scan0 contains the address in memory of the Bitmap’s first pixel, in other words the starting point of a Bitmap’s underlying data.
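
The pattern described above can be summarised by the following sketch, assuming a 32bpp ARGB Bitmap named sourceBitmap has already been loaded; the sample source code applies this same pattern inside its filter methods.

BitmapData sourceData = sourceBitmap.LockBits(
    new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height),
    ImageLockMode.ReadOnly,
    PixelFormat.Format32bppArgb);

// Stride * Height yields the size in bytes of the locked pixel data.
byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);

// Release the lock once the pixel data has been copied.
sourceBitmap.UnlockBits(sourceData);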

Creating an algorithm to Filter Bitmap Colour components

The sample code is implemented to blend two images, considering one as a source or base canvas and the second as an overlay image. The algorithm used to implement a blending operation is defined as a class exposing public properties which affect how blending is achieved.

The source code that implements the algorithm functions by creating an object instance of the BitmapFilterData class and setting the associated public properties to certain values, as specified by the user through the user interface at runtime. The BitmapFilterData class can thus be considered a dynamic algorithm. The following source code contains the definition of the BitmapFilterData class.

[Serializable]
public class BitmapFilterData
{
    private bool sourceBlueEnabled = false;
    public bool SourceBlueEnabled { get { return sourceBlueEnabled; } set { sourceBlueEnabled = value; } }

    private bool sourceGreenEnabled = false;
    public bool SourceGreenEnabled { get { return sourceGreenEnabled; } set { sourceGreenEnabled = value; } }

    private bool sourceRedEnabled = false;
    public bool SourceRedEnabled { get { return sourceRedEnabled; } set { sourceRedEnabled = value; } }

    private bool overlayBlueEnabled = false;
    public bool OverlayBlueEnabled { get { return overlayBlueEnabled; } set { overlayBlueEnabled = value; } }

    private bool overlayGreenEnabled = false;
    public bool OverlayGreenEnabled { get { return overlayGreenEnabled; } set { overlayGreenEnabled = value; } }

    private bool overlayRedEnabled = false;
    public bool OverlayRedEnabled { get { return overlayRedEnabled; } set { overlayRedEnabled = value; } }

    private float sourceBlueLevel = 1.0f;
    public float SourceBlueLevel { get { return sourceBlueLevel; } set { sourceBlueLevel = value; } }

    private float sourceGreenLevel = 1.0f;
    public float SourceGreenLevel { get { return sourceGreenLevel; } set { sourceGreenLevel = value; } }

    private float sourceRedLevel = 1.0f;
    public float SourceRedLevel { get { return sourceRedLevel; } set { sourceRedLevel = value; } }

    private float overlayBlueLevel = 0.0f;
    public float OverlayBlueLevel { get { return overlayBlueLevel; } set { overlayBlueLevel = value; } }

    private float overlayGreenLevel = 0.0f;
    public float OverlayGreenLevel { get { return overlayGreenLevel; } set { overlayGreenLevel = value; } }

    private float overlayRedLevel = 0.0f;
    public float OverlayRedLevel { get { return overlayRedLevel; } set { overlayRedLevel = value; } }

    private ColorComponentBlendType blendTypeBlue = ColorComponentBlendType.Add;
    public ColorComponentBlendType BlendTypeBlue { get { return blendTypeBlue; } set { blendTypeBlue = value; } }

    private ColorComponentBlendType blendTypeGreen = ColorComponentBlendType.Add;
    public ColorComponentBlendType BlendTypeGreen { get { return blendTypeGreen; } set { blendTypeGreen = value; } }

    private ColorComponentBlendType blendTypeRed = ColorComponentBlendType.Add;
    public ColorComponentBlendType BlendTypeRed { get { return blendTypeRed; } set { blendTypeRed = value; } }

    public static string XmlSerialize(BitmapFilterData filterData)
    {
        XmlSerializer xmlSerializer = new XmlSerializer(typeof(BitmapFilterData));

        XmlWriterSettings xmlSettings = new XmlWriterSettings();
        xmlSettings.Encoding = Encoding.UTF8;
        xmlSettings.Indent = true;

        MemoryStream memoryStream = new MemoryStream();
        XmlWriter xmlWriter = XmlWriter.Create(memoryStream, xmlSettings);

        xmlSerializer.Serialize(xmlWriter, filterData);
        xmlWriter.Flush();

        string xmlString = xmlSettings.Encoding.GetString(memoryStream.ToArray());

        xmlWriter.Close();
        memoryStream.Close();
        memoryStream.Dispose();

        return xmlString;
    }

    public static BitmapFilterData XmlDeserialize(string xmlString)
    {
        XmlSerializer xmlSerializer = new XmlSerializer(typeof(BitmapFilterData));
        MemoryStream memoryStream = new MemoryStream(Encoding.UTF8.GetBytes(xmlString));
        XmlReader xmlReader = XmlReader.Create(memoryStream);

        BitmapFilterData filterData = null;

        if (xmlSerializer.CanDeserialize(xmlReader) == true)
        {
            xmlReader.Close();
            memoryStream.Position = 0;

            filterData = (BitmapFilterData)xmlSerializer.Deserialize(memoryStream);
        }

        memoryStream.Close();
        memoryStream.Dispose();

        return filterData;
    }
}

public enum ColorComponentBlendType
{
    Add,
    Subtract,
    Average,
    DescendingOrder,
    AscendingOrder
}

The BitmapFilterData class defines six public properties relating to whether a colour component should be included in calculating that component’s new value. The properties are:

  • SourceBlueEnabled
  • SourceGreenEnabled
  • SourceRedEnabled
  • OverlayBlueEnabled
  • OverlayGreenEnabled
  • OverlayRedEnabled

In addition a further six related public properties are defined, which dictate the factor by which a colour component contributes towards the value of the resulting colour component. The properties are:

  • SourceBlueLevel
  • SourceGreenLevel
  • SourceRedLevel
  • OverlayBlueLevel
  • OverlayGreenLevel
  • OverlayRedLevel

Only if a colour component’s related Enabled property is set to true will the associated Level property be applicable when calculating the new value of the colour component.

The BitmapFilterData class next defines three public properties of type ColorComponentBlendType. This enumeration’s value determines the calculation performed between source and overlay colour components. Only after each colour component has been modified by applying the associated Level property factor will the calculation defined by the ColorComponentBlendType value be performed.

Source and overlay colour components can be added together, subtracted, averaged, or combined by discarding either the larger or the smaller value. The three public properties that define the calculation type to be performed are:

  • BlendTypeBlue
  • BlendTypeGreen
  • BlendTypeRed

Notice the two public methods defined by the BitmapFilterData class, XmlSerialize and XmlDeserialize. These two methods enable the calling code to XML serialize a BitmapFilterData object to a string variable containing the object’s XML representation, or to create an object instance based on an XML representation.

The sample application implements XML serialization and deserialization when saving a BitmapFilterData object to the file system and when loading a BitmapFilterData object previously saved to the file system.
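
As a usage sketch, a filter could be persisted and reloaded as follows; the file path and the *.xbmp extension (mentioned later in this article) are illustrative assumptions rather than code taken from the sample application.

BitmapFilterData filter = new BitmapFilterData();
filter.SourceBlueEnabled = true;
filter.SourceBlueLevel = 0.5f;
filter.BlendTypeBlue = ColorComponentBlendType.Subtract;

// Serialize to XML and save to disk.
string xml = BitmapFilterData.XmlSerialize(filter);
File.WriteAllText(@"C:\Filters\MyFilter.xbmp", xml);

// Later: load the XML and recreate the filter object.
string loadedXml = File.ReadAllText(@"C:\Filters\MyFilter.xbmp");
BitmapFilterData loadedFilter = BitmapFilterData.XmlDeserialize(loadedXml);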

Bitmap Blending Implemented as an extension method

The sample source code provides the definition for the BlendImage method, an extension method targeting the Bitmap class. This method creates a new memory Bitmap, of which the colour values are calculated from a source and an overlay image as defined by a BitmapFilterData object instance parameter. The source code listing for the BlendImage method:

public static Bitmap BlendImage(this Bitmap baseImage, Bitmap overlayImage, BitmapFilterData filterData)
{
    BitmapData baseImageData = baseImage.LockBits(new Rectangle(0, 0, baseImage.Width, baseImage.Height),
      System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    byte[] baseImageBuffer = new byte[baseImageData.Stride * baseImageData.Height];

    Marshal.Copy(baseImageData.Scan0, baseImageBuffer, 0, baseImageBuffer.Length);

    BitmapData overlayImageData = overlayImage.LockBits(new Rectangle(0, 0, overlayImage.Width, overlayImage.Height),
                   System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);

    byte[] overlayImageBuffer = new byte[overlayImageData.Stride * overlayImageData.Height];

    Marshal.Copy(overlayImageData.Scan0, overlayImageBuffer, 0, overlayImageBuffer.Length);

    float sourceBlue = 0; float sourceGreen = 0; float sourceRed = 0;
    float overlayBlue = 0; float overlayGreen = 0; float overlayRed = 0;

    for (int k = 0; k < baseImageBuffer.Length && k < overlayImageBuffer.Length; k += 4)
    {
        sourceBlue = (filterData.SourceBlueEnabled ? baseImageBuffer[k] * filterData.SourceBlueLevel : 0);
        sourceGreen = (filterData.SourceGreenEnabled ? baseImageBuffer[k + 1] * filterData.SourceGreenLevel : 0);
        sourceRed = (filterData.SourceRedEnabled ? baseImageBuffer[k + 2] * filterData.SourceRedLevel : 0);

        overlayBlue = (filterData.OverlayBlueEnabled ? overlayImageBuffer[k] * filterData.OverlayBlueLevel : 0);
        overlayGreen = (filterData.OverlayGreenEnabled ? overlayImageBuffer[k + 1] * filterData.OverlayGreenLevel : 0);
        overlayRed = (filterData.OverlayRedEnabled ? overlayImageBuffer[k + 2] * filterData.OverlayRedLevel : 0);

        baseImageBuffer[k] = CalculateColorComponentBlendValue(sourceBlue, overlayBlue, filterData.BlendTypeBlue);
        baseImageBuffer[k + 1] = CalculateColorComponentBlendValue(sourceGreen, overlayGreen, filterData.BlendTypeGreen);
        baseImageBuffer[k + 2] = CalculateColorComponentBlendValue(sourceRed, overlayRed, filterData.BlendTypeRed);
    }

    Bitmap bitmapResult = new Bitmap(baseImage.Width, baseImage.Height, PixelFormat.Format32bppArgb);

    BitmapData resultImageData = bitmapResult.LockBits(new Rectangle(0, 0, bitmapResult.Width, bitmapResult.Height),
                 System.Drawing.Imaging.ImageLockMode.WriteOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);

    Marshal.Copy(baseImageBuffer, 0, resultImageData.Scan0, baseImageBuffer.Length);

    bitmapResult.UnlockBits(resultImageData);
    baseImage.UnlockBits(baseImageData);
    overlayImage.UnlockBits(overlayImageData);

    return bitmapResult;
}

The BlendImage method starts off by creating two byte arrays, intended to contain the source and overlay images’ pixel data, expressed in 32 bits per pixel ARGB format. As discussed earlier, both Bitmaps’ data are locked in memory by invoking the LockBits method.

Before copying values we first need to declare a byte array to copy to. The size in bytes of a Bitmap’s raw data can be determined by multiplying Stride and Height. We obtained an instance of the BitmapData class when we invoked LockBits. The associated BitmapData object contains data about the lock operation, hence it needs to be supplied as a parameter when unlocking the bits by invoking UnlockBits.

The Stride property refers to the Bitmap’s scan width. The Stride property can also be described as the total number of colour component bytes found within one row of pixels of a Bitmap. Note that the Stride value is rounded up to a four byte boundary, from which we can deduce that Stride modulus 4 always equals 0. The source code multiplies the Stride property by the Height property. Also known as the number of scan lines, the Height property is equal to the height in pixels of the Bitmap from which the BitmapData object was derived.

Multiplying Stride and Height can be considered roughly equivalent to multiplying an image’s bytes per pixel (bit depth divided by 8), width and height, ignoring any padding added to round the stride up to a four byte boundary.
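
As a worked example, assuming a 640×480 image at 32 Bpp (4 bytes per pixel): the stride is 640 × 4 = 2560 bytes, which already falls on a four byte boundary, and the buffer size is therefore 2560 × 480 = 1,228,800 bytes.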

Once the two byte arrays have been obtained from the source and overlay Bitmaps, the sample source code iterates both arrays simultaneously. Notice how the code iterates through the arrays: the for loop increment statement increases by a value of 4 each time the loop executes. The for loop executes in terms of pixel values; remember that one pixel consists of 4 bytes/colour components when the image has been encoded in a 32-bit ARGB format.

Within the for loop the code firstly calculates a value for each colour component, except the Alpha component. The calculation is based on the values defined by the BitmapFilterData object.

Did you notice that the source code assigns colour components in the order Blue, Green, Red? An easy oversight when it comes to manipulating Bitmaps comes in the form of not remembering the correct order of colour components. The pixel format is referred to as 32bppArgb, and in fact most references state Argb as being Alpha, Red, Green and Blue. The colour component bytes representing a pixel are, however, ordered Blue, Green, Red and Alpha, the exact opposite of most naming conventions. Since the for loop’s increment statement increments by four with each loop we can access the Green, Red and Alpha components by adding the values 1, 2 or 3 to the loop’s index, in this case defined as the variable k. Remember the colour component ordering; it can make for some interesting unintended consequences/features.
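
As a quick reference, for a 32bppArgb pixel buffer (here assumed to be named pixelBuffer) the component offsets relative to the index k of a pixel’s first byte are:

byte blue  = pixelBuffer[k];      // Blue
byte green = pixelBuffer[k + 1];  // Green
byte red   = pixelBuffer[k + 2];  // Red
byte alpha = pixelBuffer[k + 3];  // Alpha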

If a colour component is set to disabled in the BitmapFilterData object the new colour component will be set to a value of 0. Whilst iterating the byte arrays a set of calculations is also performed as defined by each colour component’s associated ColorComponentBlendType, defined by the BitmapFilterData object. The calculations are implemented by invoking the CalculateColorComponentBlendValue method.

Once all calculations are completed a new Bitmap object is created, to which the updated pixel data is copied only after the newly created Bitmap has been locked in memory. Before returning the Bitmap, which now contains the blended colour component values, all Bitmaps are unlocked in memory.

Calculating Colour component Blend Values

The CalculateColorComponentBlendValue method is implemented to calculate blend values as follows:

private static byte CalculateColorComponentBlendValue(float source, float overlay, ColorComponentBlendType blendType)
{
    float resultValue = 0;
    byte resultByte = 0;

    if (blendType == ColorComponentBlendType.Add) { resultValue = source + overlay; }
    else if (blendType == ColorComponentBlendType.Subtract) { resultValue = source - overlay; }
    else if (blendType == ColorComponentBlendType.Average) { resultValue = (source + overlay) / 2.0f; }
    else if (blendType == ColorComponentBlendType.AscendingOrder) { resultValue = (source > overlay ? overlay : source); }
    else if (blendType == ColorComponentBlendType.DescendingOrder) { resultValue = (source < overlay ? overlay : source); }

    // Clamp the result to the valid byte range 0..255.
    if (resultValue > 255) { resultByte = 255; }
    else if (resultValue < 0) { resultByte = 0; }
    else { resultByte = (byte)resultValue; }

    return resultByte;
}

Different sets of calculations are performed for each colour component based on the blend type specified by the BitmapFilterData object. It is important to note that before the method returns a result a check is performed ensuring the new colour component’s value falls within the valid range of 0 to 255. Should a value exceed 255 it will be set to 255; likewise, a negative value will be set to 0.

The implementation – A Windows Forms Application

The accompanying source code defines a Windows Forms application. The ImageBlending application enables a user to specify the source and overlay input images, both of which are displayed at a scaled size on the left-hand side of the form. On the right-hand side of the form users are presented with various controls that can be used to adjust the current colour filtering algorithm.

The following screenshot shows the user specified input source and overlay images:

Blending Bitmap images using colour filters

The next screenshot details the controls available which can be used to modify the colour filtering algorithm being applied:

Blending Bitmap images using colour filters

Notice the checkboxes labelled Blue, Green and Red; the check state determines whether the associated colour component’s value will be used in calculations. The six trackbar controls shown above can be used to set the exact factoring value applied to the associated colour component when implementing the colour filter. Lastly, towards the bottom of the screen, the three comboboxes provided indicate the ColorComponentBlendType value implemented in the colour filter calculations.

The blended output image implementing the specified colour filter is located towards the middle of the form. Remember, in the screenshot shown above the input image is of a beach scene and the overlay image a rock concert at night. The degree and intensity to which colours from both images feature in the output image is determined by the colour filter, as defined earlier through the user interface.

Blending Bitmap images using colour filters

Towards the bottom of this part of the screen two buttons can be seen, labelled “Save Filter” and “Load Filter”. When a user creates a filter the filter values can be persisted to the file system by clicking Save Filter and specifying a file path. To implement a colour filter created and saved earlier users can click the Load Filter button and file browse to the correct file.

For the sake of clarity the screenshot featured below captures the entire application front end. The previous images consist of parts taken from this image:

Blending Bitmap images using colour filters

The trackbar controls provided by the user interface enable a user to quickly and effortlessly test a colour filter. The ImageBlend application has been implemented to provide instant feedback/processing based on user input. Changing any of the colour filter values exposed by the user interface results in the colour filter being applied instantly.

Loading a Source Image

When the user clicks on the Load button associated with the source image a standard Open File Dialog displays, prompting the user to select a source image file. Once a source file has been selected the ImageBlend application creates a memory Bitmap from the file system image. The method in which colour filtering is implemented requires that input files have a bit depth of 32 bits and are formatted as ARGB images. If a user attempts to specify an image that is not in a 32bppArgb format the source code will attempt to convert the input image to the correct format by making use of the LoadArgbBitmap extension method.

Converting to the 32BppArgb format

The sample source code defines the LoadArgbBitmap extension method, targeting the string class. In addition to converting images, this method can also resize the provided image. The LoadArgbBitmap method is invoked when a user specifies a source image and also when specifying an overlay image. In order to provide better colour filtering the source and overlay images are required to be the same size. Source images are never resized; only overlay images are resized, to match the dimensions of the specified source image. The following code snippet provides the implementation of the LoadArgbBitmap method:

public static Bitmap LoadArgbBitmap(this string filePath, Size? imageDimensions = null)
{
    StreamReader streamReader = new StreamReader(filePath);
    Bitmap fileBmp = (Bitmap)Bitmap.FromStream(streamReader.BaseStream);
    streamReader.Close();

    int width = fileBmp.Width;
    int height = fileBmp.Height;

    if (imageDimensions != null)
    {
        width = imageDimensions.Value.Width;
        height = imageDimensions.Value.Height;
    }

    if (fileBmp.PixelFormat != PixelFormat.Format32bppArgb ||
        fileBmp.Width != width || fileBmp.Height != height)
    {
        fileBmp = GetArgbCopy(fileBmp, width, height);
    }

    return fileBmp;
}

Images are only resized if the required size is known and if an image is not already the required size. A scenario where the required size might not yet be known can occur when the user first selects an overlay image with no source image specified yet. Whenever the source image changes the overlay image is recreated from the file system and resized if needed.

If the specified image does not conform to the 32bppArgb format, LoadArgbBitmap invokes the GetArgbCopy method, which is defined as follows:

private static Bitmap GetArgbCopy(Bitmap sourceImage, int width, int height)
{
    Bitmap bmpNew = new Bitmap(width, height, PixelFormat.Format32bppArgb);

    using (Graphics graphics = Graphics.FromImage(bmpNew))
    {
        graphics.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
        graphics.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBilinear;
        graphics.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
        graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;

        graphics.DrawImage(sourceImage,
                           new Rectangle(0, 0, bmpNew.Width, bmpNew.Height),
                           new Rectangle(0, 0, sourceImage.Width, sourceImage.Height),
                           GraphicsUnit.Pixel);
        graphics.Flush();
    }

    return bmpNew;
}

The GetArgbCopy method implements GDI+ drawing using the Graphics class in order to resize an image.

Bitmap Blending examples

RHCP1

RHCP5

DayNightBlend

In the example above two photos were taken from the same location. The first photo was taken in the late afternoon, the second once it was dark. The resulting blended image appears to show the time of day being closer to sundown than in the first photo. The darker colours from the second photo are mostly excluded by the filter; lighter elements such as the stage lighting appear pronounced as a yellow tint in the output image.

The filter values specified are listed below as Xml. To reproduce the filter you can copy the Xml markup and save to disk specifying the file extension *.xbmp.

<?xml version="1.0" encoding="utf-8"?>
<BitmapFilterData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <SourceBlueEnabled>true</SourceBlueEnabled>
  <SourceGreenEnabled>true</SourceGreenEnabled>
  <SourceRedEnabled>true</SourceRedEnabled>
  <OverlayBlueEnabled>true</OverlayBlueEnabled>
  <OverlayGreenEnabled>true</OverlayGreenEnabled>
  <OverlayRedEnabled>true</OverlayRedEnabled>
  <SourceBlueLevel>0.5</SourceBlueLevel>
  <SourceGreenLevel>0.3</SourceGreenLevel>
  <SourceRedLevel>0.2</SourceRedLevel>
  <OverlayBlueLevel>0.75</OverlayBlueLevel>
  <OverlayGreenLevel>0.5</OverlayGreenLevel>
  <OverlayRedLevel>0.6</OverlayRedLevel>
  <BlendTypeBlue>Subtract</BlendTypeBlue>
  <BlendTypeGreen>Add</BlendTypeGreen>
  <BlendTypeRed>Add</BlendTypeRed>
</BitmapFilterData>

Blending two night time images:

RHCP5

RHCP6

NightTimePhotoBlend

Fun Fact: Some of the photos featured in this article were taken at a live concert of The Red Hot Chili Peppers, performing at Soccer City, Johannesburg, South Africa.

Related Articles

C# How to: Image filtering implemented using a ColorMatrix

Article purpose

This article is based around creating basic image filters. The different types of filters discussed are: Grayscale, Transparency, Image Negative and Sepia Tone. All filters are implemented as extension methods targeting the Image class, as well as the Bitmap class as a result of inheritance and upcasting.

Note: This article is a follow up to a previously published article. The previously published article implements filtering by performing calculations and updating pixel colour component values, namely Alpha, Red, Green and Blue. This article achieves the same filtering through implementing various ColorMatrix transformations, in essence providing an alternative solution. For the sake of convenience I have included the pixel manipulation extension methods in addition to the ColorMatrix extension methods detailed by this article.

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Implementing a ColorMatrix

From MSDN:

Defines a 5 x 5 matrix that contains the coordinates for the RGBAW space. Several methods of the ImageAttributes class adjust image colors by using a color matrix.

The matrix coefficients constitute a 5 x 5 linear transformation that is used for transforming ARGB homogeneous values. For example, an ARGB vector is represented as red, green, blue, alpha and w, where w is always 1.

When implementing a translation using the ColorMatrix class, the values specified are added to one or more of the four colour components. A value that is to be added may only range from 0 to 1 inclusive. Note that adding a negative value results in subtracting values. A good article that illustrates implementing a colour translation can be found on MSDN: How to: Translate Image Colors.
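
As an illustrative sketch (not taken from the sample source code), the following ColorMatrix adds 0.25 to every pixel’s Red component while leaving the other components unchanged; the fifth row holds the translation values and the main diagonal leaves scaling as-is:

ColorMatrix redTranslation = new ColorMatrix(new float[][]
                    {
                        new float[]{1, 0, 0, 0, 0},
                        new float[]{0, 1, 0, 0, 0},
                        new float[]{0, 0, 1, 0, 0},
                        new float[]{0, 0, 0, 1, 0},
                        new float[]{0.25f, 0, 0, 0, 1}
                    });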

The following code snippet provides the implementation of the ApplyColorMatrix method.

private static Bitmap ApplyColorMatrix(Image sourceImage, ColorMatrix colorMatrix)
{
    Bitmap bmp32BppSource = GetArgbCopy(sourceImage);
    Bitmap bmp32BppDest = new Bitmap(bmp32BppSource.Width, bmp32BppSource.Height, PixelFormat.Format32bppArgb);

    using (Graphics graphics = Graphics.FromImage(bmp32BppDest))
    {
        ImageAttributes bmpAttributes = new ImageAttributes();
        bmpAttributes.SetColorMatrix(colorMatrix);

        graphics.DrawImage(bmp32BppSource,
                           new Rectangle(0, 0, bmp32BppSource.Width, bmp32BppSource.Height),
                           0, 0, bmp32BppSource.Width, bmp32BppSource.Height,
                           GraphicsUnit.Pixel, bmpAttributes);
    }

    bmp32BppSource.Dispose();

    return bmp32BppDest;
}

The ApplyColorMatrix method signature defines a parameter of type Image and a second parameter of type ColorMatrix. This method is intended to apply the specified ColorMatrix to the specified Image parameter.

The source image is firstly copied in order to ensure that the image to be transformed is defined with a pixel format of 32 bits per pixel, consisting of the colour components Alpha, Red, Green and Blue. Next we create a blank memory Bitmap defined to reflect the same size dimensions as the original source image. A ColorMatrix can be applied by passing an ImageAttributes object when invoking the DrawImage method defined by the Graphics class.

Creating an ARGB copy

The source code snippet listed below converts source images into 32-bit ARGB formatted Bitmaps:

private static Bitmap GetArgbCopy(Image sourceImage)
{
    Bitmap bmpNew = new Bitmap(sourceImage.Width, sourceImage.Height, PixelFormat.Format32bppArgb);

    using (Graphics graphics = Graphics.FromImage(bmpNew))
    {
        graphics.DrawImage(sourceImage,
                           new Rectangle(0, 0, bmpNew.Width, bmpNew.Height),
                           new Rectangle(0, 0, bmpNew.Width, bmpNew.Height),
                           GraphicsUnit.Pixel);
        graphics.Flush();
    }

    return bmpNew;
}

The GetArgbCopy method creates a blank memory Bitmap having the same size dimensions as the source image. The newly created Bitmap is explicitly specified to conform to a 32Bit ARGB format. By making use of a Graphics object of which the context is bound to the new Bitmap instance the source code draws the original image to the new Bitmap.

The Transparency Filter

The transparency filter is intended to create a copy of an image, increase the copy’s level of transparency and return the modified copy to the calling code. Listed below is the source code which defines the DrawWithTransparency extension method.

public static Bitmap DrawWithTransparency(this Image sourceImage)
{
    ColorMatrix colorMatrix = new ColorMatrix(new float[][]
                        {
                            new float[]{1, 0, 0, 0, 0},
                            new float[]{0, 1, 0, 0, 0},
                            new float[]{0, 0, 1, 0, 0},
                            new float[]{0, 0, 0, 0.3f, 0},
                            new float[]{0, 0, 0, 0, 1}
                        });

    return ApplyColorMatrix(sourceImage, colorMatrix);
}

Due to the ApplyColorMatrix method defined earlier, implementing an image filter simply consists of defining the filter algorithm in the form of a ColorMatrix and then invoking ApplyColorMatrix.

The ColorMatrix is defined to apply no change to the Red, Green and Blue components whilst reducing the Alpha component by 70%.

Image Filters Transparency ColorMatrix

The Grayscale Filter

All of the filters illustrated in this article are implemented in a fashion similar to the DrawWithTransparency method. The DrawAsGrayscale extension method is implemented as follows:

public static Bitmap DrawAsGrayscale(this Image sourceImage)
{
    ColorMatrix colorMatrix = new ColorMatrix(new float[][]
                        {
                            new float[]{.3f, .3f, .3f, 0, 0},
                            new float[]{.59f, .59f, .59f, 0, 0},
                            new float[]{.11f, .11f, .11f, 0, 0},
                            new float[]{0, 0, 0, 1, 0},
                            new float[]{0, 0, 0, 0, 1}
                        });

    return ApplyColorMatrix(sourceImage, colorMatrix);
}

The grayscale filter is achieved by adding together 11% blue, 59% green and 30% red, then assigning the total value to each colour component.
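
Expressed per pixel (as an illustrative restatement of the matrix above, not code taken from the sample project, and assuming red, green and blue hold the pixel’s original component values), the same weighting amounts to:

byte gray = (byte)(0.3f * red + 0.59f * green + 0.11f * blue);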

Image Filters Grayscale ColorMatrix

The Sepia Tone Filter

The sepia tone filter is implemented in the DrawAsSepiaTone extension method. Notice how this method follows the same convention as the previously discussed filters. The source code listing is detailed below.

 public static Bitmap DrawAsSepiaTone(this Image sourceImage)
{
     ColorMatrix colorMatrix = new ColorMatrix(new float[][] 
                {
                        new float[]{.393f, .349f, .272f, 0, 0},
                        new float[]{.769f, .686f, .534f, 0, 0},
                        new float[]{.189f, .168f, .131f, 0, 0},
                        new float[]{0, 0, 0, 1, 0},
                        new float[]{0, 0, 0, 0, 1}
                });

    return ApplyColorMatrix(sourceImage, colorMatrix);
}

The formula used to calculate a sepia tone differs significantly from the grayscale filter discussed previously. The formula can be simplified as follows:

  • Red Component: Sum total of: 39.3% red, 34.9% green , 27.2% blue
  • Green Component: Sum total of: 76.9% red, 68.6% green , 53.4% blue
  • Blue Component: Sum total of: 18.9% red, 16.8% green , 13.1% blue

Image Filters Sepia ColorMatrix

The Negative Image Filter

We can implement an image filter that resembles film negatives by literally inverting every pixel’s colour components. Listed below is the source code implementation of the DrawAsNegative extension method.

 public static Bitmap DrawAsNegative(this Image sourceImage)
{
     ColorMatrix colorMatrix = new ColorMatrix(new float[][] 
                    {
                            new float[]{-1, 0, 0, 0, 0},
                            new float[]{0, -1, 0, 0, 0},
                            new float[]{0, 0, -1, 0, 0},
                            new float[]{0, 0, 0, 1, 0},
                            new float[]{1, 1, 1, 1, 1}
                    });

    return ApplyColorMatrix(sourceImage, colorMatrix);
}

Notice how the negative filter scales each colour component by -1 and then adds 1, in effect subtracting each colour component from its maximum; remember the valid range being 0 to 1 inclusive. This in reality inverts each pixel’s colour component bits. The transform being applied can also be expressed as implementing the bitwise complement operator on each pixel.

Image Filters Negative Color Matrix

The implementation

The filters described in this article are all implemented by means of a Windows Forms application. Filtering is applied by selecting the corresponding radio button. The source image loaded from the file system serves as input to the various filter methods; the filtered copy returned will be displayed next to the original source image.

The following code snippet details the radio button checked changed event handler:

private void OnCheckChangedEventHandler(object sender, EventArgs e)
{
    if (picSource.BackgroundImage != null)
    {
        if (rdGrayscaleBits.Checked == true)
        {
             picOutput.BackgroundImage = picSource.BackgroundImage.CopyAsGrayscale();
        }
        else if (rdGrayscaleDraw.Checked == true)
        {
             picOutput.BackgroundImage = picSource.BackgroundImage.DrawAsGrayscale();
        }
        else if (rdTransparencyBits.Checked == true)
        {
             picOutput.BackgroundImage = picSource.BackgroundImage.CopyWithTransparency();
        }
        else if (rdTransparencyDraw.Checked == true)
        {
             picOutput.BackgroundImage = picSource.BackgroundImage.DrawWithTransparency();
        }
        else if (rdNegativeBits.Checked == true)
        {
             picOutput.BackgroundImage = picSource.BackgroundImage.CopyAsNegative();
        }
        else if (rdNegativeDraw.Checked == true)
        {
             picOutput.BackgroundImage = picSource.BackgroundImage.DrawAsNegative();
        }
        else if (rdSepiaBits.Checked == true)
        {
             picOutput.BackgroundImage = picSource.BackgroundImage.CopyAsSepiaTone();
        }
        else if (rdSepiaDraw.Checked == true)
        {
             picOutput.BackgroundImage = picSource.BackgroundImage.DrawAsSepiaTone();
        }
    }
}

Related Articles

C# How to: Using Microsoft Kinect with Windows Forms

Article purpose

This article provides a basic, introductory level explanation of the steps required to interface with the Microsoft Kinect for Windows sensor using the Kinect for Windows SDK in a Windows Forms application.

Introduction

Most of the introductory level articles and code samples I’ve come across tend to be implemented as WPF or XNA based applications. Interfacing with the Microsoft Kinect for Windows sensor from Windows Forms based applications can be achieved fairly effortlessly. I find that when wanting to create test applications a Windows Forms implementation can be developed quickly with relatively little hassle.

This article details creating a Windows Forms application which displays a live video feed from the Kinect sensor’s colour camera whilst providing an option for specifying the resolution/frame rate.

Sample source code

This article is accompanied by sample source code in a Visual Studio project which is available for download here.

Sample Source Overview

The sample source code provided with this article consists of a Windows Form which hosts a PictureBox control used to display the incoming video feed. It is possible that more than one Kinect sensor might be connected; the user can therefore select a sensor from a ComboBox. Possible resolutions/frame rates can also be configured via an additional ComboBox.

Available Kinect Sensors

The sample application implements the KinectSensor.KinectSensors property to retrieve a list of available Kinect sensors. The associated ComboBox will be populated with references to the available sensors.

private void PopulateAvailableSensors()
{
    cmbSensors.Items.Clear();

    foreach (KinectSensor sensor in KinectSensor.KinectSensors)
    {
        cmbSensors.Items.Add(sensor.UniqueKinectId);
        cmbSensors.SelectedItem = sensor.UniqueKinectId;
    }
}

Initial setup: Form Constructor

In the Form constructor we define the available display modes by adding the relevant enumeration values to the associated ComboBox.

public MainForm()
{
    InitializeComponent();

    cmbDisplayMode.Items.Add(ColorImageFormat.RgbResolution640x480Fps30);
    cmbDisplayMode.Items.Add(ColorImageFormat.RgbResolution1280x960Fps12);
    cmbDisplayMode.SelectedIndex = 0;

    PopulateAvailableSensors();
}

In this scenario we specify two display modes: RgbResolution640x480Fps30 and RgbResolution1280x960Fps12.

In the constructor the method PopulateAvailableSensors(), as described earlier, is invoked after the available display modes have been specified.

Refreshing Available Sensors

Clicking the refresh button results in calling PopulateAvailableSensors(), which will re-determine available sensors and populate the associated ComboBox. Listed below is the click event handler subscribed to the Refresh button’s click event.

private void btnRefresh_Click(object sender, EventArgs e) 
{ 
    PopulateAvailableSensors(); 
} 

Activating the Sensor

When clicking on the “Activate Sensor” button the sample code first attempts to deactivate the sensor currently in use. After ensuring that resources related to the current sensor have been released the application attempts to assign an instance of the KinectSensor member object based on user selection.

private void btnActivateSensor_Click(object sender, EventArgs e)
{
    if (cmbSensors.SelectedIndex != -1)
    {
        DeActivateSensor();

        string sensorID = cmbSensors.SelectedItem.ToString();

        foreach (KinectSensor sensor in KinectSensor.KinectSensors)
        {
            if (sensor.UniqueKinectId == sensorID)
            {
                kinectSensor = sensor;
                SetupSensorVideoInput();
            }
        }
    }
}

Sensor Video setup

In the sample application we implement the Kinect sensor’s ColorStream camera. The KinectSensor class exposes the ColorFrameReady event, which informs subscribed event handlers when a video frame is available for processing. The following code snippet shows the code implemented in setting up the video format.

private void SetupSensorVideoInput()
{
    if (kinectSensor != null)
    {
        imageFormat = (ColorImageFormat)cmbDisplayMode.SelectedItem;
        kinectSensor.ColorStream.Enable(imageFormat);

        kinectSensor.ColorFrameReady += new EventHandler<ColorImageFrameReadyEventArgs>(kinectSensor_ColorFrameReady);

        kinectSensor.Start();
    }
}

Processing Video Frames

The ColorFrameReady event informs subscribed event handlers whenever a Color frame becomes available for processing.

private void kinectSensor_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    ColorImageFrame colorFrame = e.OpenColorImageFrame();

    if (colorFrame == null)
    {
        return;
    }

    Bitmap bitmapFrame = ColorImageFrameToBitmap(colorFrame);
    picVideoDisplay.Image = bitmapFrame;
}

private static Bitmap ColorImageFrameToBitmap(ColorImageFrame colorFrame)
{
    byte[] pixelBuffer = new byte[colorFrame.PixelDataLength];
    colorFrame.CopyPixelDataTo(pixelBuffer);

    Bitmap bitmapFrame = new Bitmap(colorFrame.Width, colorFrame.Height, PixelFormat.Format32bppRgb);

    BitmapData bitmapData = bitmapFrame.LockBits(new Rectangle(0, 0, colorFrame.Width, colorFrame.Height),
                            ImageLockMode.WriteOnly, bitmapFrame.PixelFormat);

    IntPtr intPointer = bitmapData.Scan0;
    Marshal.Copy(pixelBuffer, 0, intPointer, colorFrame.PixelDataLength);

    bitmapFrame.UnlockBits(bitmapData);

    return bitmapFrame;
}

The event handler parameters provide access to a ColorImageFrame object. Before the video frame can be displayed the ColorImageFrame object first needs to be converted to a memory Bitmap, which can then be displayed by the Form’s PictureBox. Notice the ColorImageFrameToBitmap method: it functions by extracting data from the ColorImageFrame as a byte array, then creates a new memory Bitmap and copies the byte array into the Bitmap’s pixel data.

Deactivating a sensor

Earlier, when activating a sensor, we first deactivated the currently active sensor. Listed below is the relevant code snippet.

private void DeActivateSensor()
{
    if (kinectSensor != null)
    {
        kinectSensor.Stop();

        kinectSensor.ColorFrameReady -= new EventHandler<ColorImageFrameReadyEventArgs>(kinectSensor_ColorFrameReady);

        kinectSensor.Dispose();
    }
}

Dewald Esterhuizen
