Archive for the 'Generics' Category

Article Purpose

This article is intended to serve as an introduction to the concepts related to creating and processing convolution filters applied to images. The filters discussed are: Blur, Gaussian Blur, Soften, Motion Blur, High Pass, Edge Detect, Sharpen and Emboss.

Using the Sample Application

A Sample Application has been included with this article’s sample source code. The Sample Application has been developed to target the Windows Forms platform. Using the Sample Application users are able to select a source/input image from the local file system and, from a drop down, select a filter to apply. Filtered images can be saved to the local file system when a user clicks the ‘Save’ button.

The following screenshot shows the Image Convolution Filter sample application in action.

Image Convolution

Before delving into discussions on technical implementation details it is important to have a good understanding of the concepts behind image convolution.

In relation to image processing, convolution can be considered as an algorithm implemented in order to translate/transform an input/source image. Convolution algorithms generally take the form of accepting two input values and producing a third value considered to be a modified version of one of the input values.

Convolution can be implemented to produce filters such as: Blurring, Smoothing, Edge Detection, Sharpening and Embossing. The resulting filtered image still bears a relation to the input source image.

Convolution Matrix

In this article we will be implementing image convolution through means of a convolution matrix, also known as a kernel, representing the algorithms required to produce resulting filtered images. A convolution matrix should be considered as a two dimensional array or grid. The number of rows and columns are required to be equal, and are furthermore required to be an odd number. Examples of valid dimensions could be 3×3 or 5×5. Dimensions such as 2×2 or 4×4 would not be valid. Generally the sum total of all the values expressed in a convolution matrix equates to one, although this is not a strict requirement.
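The dimension rules described above can be sketched as a small validation helper. Note that this helper is our own illustration and does not form part of the article’s sample code:

```csharp
// Hypothetical helper illustrating the dimension rules described above:
// the matrix must be square and its dimensions must be odd.
private static bool IsValidConvolutionMatrix(double[,] matrix)
{
    int rows = matrix.GetLength(0);
    int columns = matrix.GetLength(1);

    return rows == columns && rows % 2 == 1;
}
```

A 3×3 or 5×5 array would pass this check, whereas a 2×2 or 4×4 array would not.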

The following table represents an example convolution matrix/kernel:

 2  0  0
 0 -1  0
 0  0 -1

An important aspect to keep in mind: when implementing a convolution filter, the value of a pixel will be determined by the values of the pixel’s neighbouring pixels. The values contained in a convolution matrix represent factor values intended to be multiplied with pixel values. The centre element of the matrix corresponds to the pixel currently being modified. Neighbouring matrix values express the factor to be applied to the corresponding neighbouring pixels in regards to the pixel currently being modified.
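To make the neighbourhood calculation concrete, the following sketch applies the example matrix shown earlier to a single pixel, using made-up single-channel intensity values (the variable names and values are ours, purely for illustration):

```csharp
double[,] matrix = { {  2,  0,  0 },
                     {  0, -1,  0 },
                     {  0,  0, -1 } };

// Made-up intensity values: the pixel being modified (centre)
// and its eight neighbouring pixels.
double[,] neighbourhood = { { 10, 20, 30 },
                            { 40, 50, 60 },
                            { 70, 80, 90 } };

double result = 0;

for (int row = 0; row < 3; row++)
{
    for (int col = 0; col < 3; col++)
    {
        result += neighbourhood[row, col] * matrix[row, col];
    }
}

// result = 2 * 10 + (-1 * 50) + (-1 * 90) = -120, a value which
// would subsequently be clamped to the valid 0 to 255 range.
```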

The ConvolutionFilterBase class

The sample code defines the class ConvolutionFilterBase. This class is intended to represent the minimum requirements of a convolution filter. When defining a new filter we will be inheriting from the ConvolutionFilterBase class. Because this class and its members have been defined as abstract, implementing classes are required to implement all defined members.

The following code snippet details the ConvolutionFilterBase definition:

```public abstract class ConvolutionFilterBase
{
public abstract string FilterName
{
get;
}

public abstract double Factor
{
get;
}

public abstract double Bias
{
get;
}

public abstract double[,] FilterMatrix
{
get;
}
}```

As to be expected, the member property FilterMatrix is intended to represent a two dimensional array containing a convolution matrix. In some instances, when the sum total of the matrix values does not equate to 1, a filter might implement a Factor value other than the default of 1. Additionally some filters may also require a Bias value to be added to the final result value when calculating the matrix.

Calculating a Convolution Filter

Calculating convolution filters and creating the resulting images can be achieved by invoking the ConvolutionFilter method. This method is defined as an extension method targeting the Bitmap class. The ConvolutionFilter method is defined as follows:

``` public static Bitmap ConvolutionFilter<T>(this Bitmap sourceBitmap, T filter)
where T : ConvolutionFilterBase
{
BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);

sourceBitmap.UnlockBits(sourceData);

double blue = 0.0;
double green = 0.0;
double red = 0.0;

int filterWidth = filter.FilterMatrix.GetLength(1);
int filterHeight = filter.FilterMatrix.GetLength(0);

int filterOffset = (filterWidth-1) / 2;
int calcOffset = 0;

int byteOffset = 0;

for (int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for (int offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
blue = 0;
green = 0;
red = 0;

byteOffset = offsetY *
sourceData.Stride +
offsetX * 4;

for (int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{

calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

blue += (double)(pixelBuffer[calcOffset]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];

green += (double)(pixelBuffer[calcOffset + 1]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];

red += (double)(pixelBuffer[calcOffset + 2]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];
}
}

blue = filter.Factor * blue + filter.Bias;
green = filter.Factor * green + filter.Bias;
red = filter.Factor * red + filter.Bias;

if (blue > 255)
{ blue = 255; }
else if (blue < 0)
{ blue = 0; }

if (green > 255)
{ green = 255; }
else if (green < 0)
{ green = 0; }

if (red > 255)
{ red = 255; }
else if (red < 0)
{ red = 0; }

resultBuffer[byteOffset] = (byte)(blue);
resultBuffer[byteOffset + 1] = (byte)(green);
resultBuffer[byteOffset + 2] = (byte)(red);
resultBuffer[byteOffset + 3] = 255;
}
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;
} ```

The following section provides a detailed discussion of the ConvolutionFilter extension method.

ConvolutionFilter<T> – Method Signature

```public static Bitmap ConvolutionFilter<T>
(this Bitmap sourceBitmap,
T filter)
where T : ConvolutionFilterBase ```

The ConvolutionFilter method defines a generic type T constrained by the requirement to be of type ConvolutionFilterBase. The filter parameter being of generic type T has to be of type ConvolutionFilterBase or a type which inherits from the ConvolutionFilterBase class.

Notice how the sourceBitmap parameter type definition is preceded by the this keyword, indicating the method is implemented as an extension method. Keep in mind extension methods are required to be declared as static members of a static class.

The sourceBitmap parameter represents the source/input image upon which the filter is to be applied. Note that the ConvolutionFilter method is implemented as immutable: the input parameter values are not modified; instead a new Bitmap instance will be created and returned.
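Assuming one of the filter classes defined later in this article, invoking the extension method might look as follows. The file paths are placeholders of our own choosing:

```csharp
// Hypothetical usage; file paths are placeholders.
Bitmap source = new Bitmap("input.png");

// The extension method leaves the source Bitmap untouched and
// returns a newly created, filtered Bitmap.
Bitmap filtered = source.ConvolutionFilter(new Blur3x3Filter());

filtered.Save("output.png");
```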

ConvolutionFilter<T> – Creating the Data Buffer

```BitmapData sourceData = sourceBitmap.LockBits
(new Rectangle(0, 0,
sourceBitmap.Width,
sourceBitmap.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride *
sourceData.Height];

byte[] resultBuffer = new byte[sourceData.Stride *
sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer,
0, pixelBuffer.Length);

sourceBitmap.UnlockBits(sourceData);```

In order to access the underlying ARGB values from a Bitmap object we first need to lock the Bitmap into memory by invoking the LockBits method. Locking a Bitmap into memory prevents the Garbage Collector from moving the Bitmap object to a new location in memory.

When invoking the LockBits method the source code instantiates a BitmapData object from the return value. The Stride property represents the number of bytes in a single pixel row. In this scenario the Stride property should be equal to the Bitmap’s width in pixels multiplied by four, seeing as every pixel consists of four bytes: Alpha, Red, Green and Blue.

The ConvolutionFilter method defines two byte buffers, of which the size is set to equal the size of the Bitmap’s underlying pixel data. The Scan0 property, of type IntPtr, represents the memory address of the first byte of a Bitmap’s underlying data buffer. Using the Marshal.Copy method we specify the starting memory address from where to start copying the Bitmap’s data buffer.

Important to remember is the next operation performed: invoking the UnlockBits method. Whenever a Bitmap has been locked into memory, ensure the lock is released by invoking the UnlockBits method.

ConvolutionFilter<T> – Iterating Rows and Columns

```double blue = 0.0;
double green = 0.0;
double red = 0.0;

int filterWidth = filter.FilterMatrix.GetLength(1);
int filterHeight = filter.FilterMatrix.GetLength(0);

int filterOffset = (filterWidth-1) / 2;
int calcOffset = 0;

int byteOffset = 0;

for (int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for (int offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
blue = 0;
green = 0;
red = 0;

byteOffset = offsetY *
sourceData.Stride +
offsetX * 4; ```

The ConvolutionFilter method employs two for loops in order to iterate each pixel represented in the ARGB data buffer. Defining two for loops to iterate a one dimensional array simplifies the concept of accessing the array in terms of rows and columns.

Note that the inner loop is limited to the width of the source/input image, in other words the number of horizontal pixels. Remember that the data buffer represents four bytes, Alpha, Red, Green and Blue, for each pixel. The inner loop therefore iterates entire pixels.

As discussed earlier, the filter matrix has to be declared as a two dimensional array with the same odd number of rows and columns. With the current pixel corresponding to the element at the centre of the matrix, the width of the matrix less one, divided by two, equates to the maximum index offset of the neighbouring pixels (filterOffset).

The index of the current pixel can be calculated by multiplying the current row index (offsetY) and the number of ARGB byte values per row of pixels (sourceData.Stride), to which is added the current column/pixel index (offsetX) multiplied by four.
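As a worked example of the index calculation described above, using made-up values: for a 32bpp image 100 pixels wide, Stride equals 400 bytes, and the pixel at row 2, column 3 starts at byte 812:

```csharp
// Made-up values, for illustration only.
int stride = 400;          // 100 pixels * 4 bytes per pixel
int offsetY = 2;           // current row
int offsetX = 3;           // current column/pixel

int byteOffset = offsetY * stride + offsetX * 4;   // 2 * 400 + 12 = 812

// byteOffset     -> Blue
// byteOffset + 1 -> Green
// byteOffset + 2 -> Red
// byteOffset + 3 -> Alpha
```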

ConvolutionFilter<T> – Iterating the Matrix

```for (int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{
calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

blue += (double)(pixelBuffer[calcOffset]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];

green += (double)(pixelBuffer[calcOffset + 1]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];

red += (double)(pixelBuffer[calcOffset + 2]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];
}
}```

The ConvolutionFilter method iterates the two dimensional filter matrix by implementing two for loops, iterating rows and, for each row, iterating columns. Both loops have been declared with a starting point equal to the negative value of half the matrix length (filterOffset). Initiating the loops with negative values simplifies implementing the concept of neighbouring pixels.

The first statement performed within the inner loop calculates the index of the neighbouring pixel in relation to the current pixel. Next the matrix value is applied as a factor to the corresponding neighbouring pixel’s individual colour components. The results are added to the totals variables blue, green and red.

Because each iteration of the outer loops steps through the buffer in terms of entire pixels, to access individual colour components the source code adds the required colour component offset. Note: ARGB colour components are in fact stored in reversed order: Blue, Green, Red and Alpha. In other words, a pixel’s first byte (offset 0) represents Blue, the second (offset 1) represents Green, the third (offset 2) represents Red and the last (offset 3) represents the Alpha component.

ConvolutionFilter<T> – Applying the Factor and Bias

```blue = filter.Factor * blue + filter.Bias;
green = filter.Factor * green + filter.Bias;
red = filter.Factor * red + filter.Bias;

if (blue > 255)
{ blue = 255; }
else if (blue < 0)
{ blue = 0; }

if (green > 255)
{ green = 255; }
else if (green < 0)
{ green = 0; }

if (red > 255)
{ red = 255; }
else if (red < 0)
{ red = 0; }

resultBuffer[byteOffset] = (byte)(blue);
resultBuffer[byteOffset + 1] = (byte)(green);
resultBuffer[byteOffset + 2] = (byte)(red);
resultBuffer[byteOffset + 3] = 255; ```

After iterating the matrix and calculating the weighted totals of the current pixel’s Red, Green and Blue colour components, we apply the Factor and add the Bias defined by the filter parameter.

Colour components may only contain a value ranging from 0 to 255 inclusive. Before we assign the newly calculated colour component value we ensure that the value falls within the required range. Values which exceed 255 are set to 255 and values less than 0 are set to 0. Note that assignment is implemented in terms of the result buffer, the original source buffer remains unchanged.
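The range check shown in the article’s code can equivalently be expressed more compactly. A sketch using Math.Min and Math.Max, producing the same clamped results as the if/else statements above:

```csharp
// Clamp each colour component to the valid byte range 0 to 255.
blue  = Math.Max(0, Math.Min(255, blue));
green = Math.Max(0, Math.Min(255, green));
red   = Math.Max(0, Math.Min(255, red));
```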

ConvolutionFilter<T> – Returning the Result

```Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
sourceBitmap.Height);

BitmapData resultData = resultBitmap.LockBits(
new Rectangle(0, 0,
resultBitmap.Width,
resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0,
resultBuffer.Length);

resultBitmap.UnlockBits(resultData);

return resultBitmap; ```

The final steps performed by the ConvolutionFilter method involve creating a new Bitmap object instance and copying the calculated result buffer. In a similar fashion to reading the underlying pixel data, we copy the result buffer to the new Bitmap object.

Creating Filters

The main requirement when creating a filter is to inherit from the ConvolutionFilterBase class. The following sections of this article discuss various filter types and, where applicable, their variations.

To illustrate the different effects resulting from applying filters, all of the filters discussed make use of the same source image. The original image file is licensed under the Creative Commons Attribution 2.0 Generic license.

Blur Filters

Blurring is typically used to reduce image noise and detail. The filter’s matrix size affects the level of blurring: a larger matrix results in a higher level of blurring, whereas a smaller matrix results in a lesser level of blurring.

Blur3x3Filter

The Blur3x3Filter results in a slight to medium level of blurring. The matrix consists of 9 elements in a 3×3 configuration.

```public class Blur3x3Filter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Blur3x3Filter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 0.0, 0.2, 0.0, },
{ 0.2, 0.2, 0.2, },
{ 0.0, 0.2, 0.0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

Blur5x5Filter

The Blur5x5Filter results in a medium level of blurring. The matrix consists of 25 elements in a 5×5 configuration. Notice the Factor of 1.0 / 13.0, which normalises the 13 non-zero matrix elements.

```public class Blur5x5Filter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Blur5x5Filter"; }
}

private double factor = 1.0 / 13.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 0, 0, 1, 0, 0, },
{ 0, 1, 1, 1, 0, },
{ 1, 1, 1, 1, 1, },
{ 0, 1, 1, 1, 0, },
{ 0, 0, 1, 0, 0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

Gaussian3x3BlurFilter

The Gaussian3x3BlurFilter implements a Gaussian blur through a matrix of 9 elements in a 3×3 configuration. The sum total of all elements equals 16, therefore the Factor is defined as 1.0 / 16.0. Applying this filter results in a slight to medium level of blurring.

```public class Gaussian3x3BlurFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Gaussian3x3BlurFilter"; }
}

private double factor = 1.0 / 16.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 1, 2, 1, },
{ 2, 4, 2, },
{ 1, 2, 1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```
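The relationship between a matrix’s sum total and its Factor can be sketched as a small helper of our own; this method is not part of the article’s sample code, but illustrates how a normalising Factor is simply the reciprocal of a non-zero sum:

```csharp
// Hypothetical helper: derive a normalising Factor from a matrix.
private static double CalculateFactor(double[,] matrix)
{
    double sum = 0;

    foreach (double value in matrix)
    {
        sum += value;
    }

    // A sum of zero (common in edge detection matrices)
    // leaves the Factor at the default of 1.
    return sum != 0 ? 1.0 / sum : 1.0;
}

// For the Gaussian 3x3 matrix above the sum is 16,
// yielding a Factor of 1.0 / 16.0.
```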

Gaussian5x5BlurFilter

The Gaussian5x5BlurFilter implements a Gaussian blur through a matrix of 25 elements in a 5×5 configuration. The sum total of all elements equals 159, therefore the Factor is defined as 1.0 / 159.0. Applying this filter results in a medium level of blurring.

```public class Gaussian5x5BlurFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Gaussian5x5BlurFilter"; }
}

private double factor = 1.0 / 159.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 2, 04, 05, 04, 2, },
{ 4, 09, 12, 09, 4, },
{ 5, 12, 15, 12, 5, },
{ 4, 09, 12, 09, 4, },
{ 2, 04, 05, 04, 2, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

MotionBlurFilter

Images resulting from implementing the MotionBlurFilter exhibit a high level of blurring associated with motion/movement. This filter is a combination of left to right and right to left motion blurs. The matrix consists of 81 elements in a 9×9 configuration.

```public class MotionBlurFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "MotionBlurFilter"; }
}

private double factor = 1.0 / 18.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 1, 0, 0, 0, 0, 0, 0, 0, 1, },
{ 0, 1, 0, 0, 0, 0, 0, 1, 0, },
{ 0, 0, 1, 0, 0, 0, 1, 0, 0, },
{ 0, 0, 0, 1, 0, 1, 0, 0, 0, },
{ 0, 0, 0, 0, 1, 0, 0, 0, 0, },
{ 0, 0, 0, 1, 0, 1, 0, 0, 0, },
{ 0, 0, 1, 0, 0, 0, 1, 0, 0, },
{ 0, 1, 0, 0, 0, 0, 0, 1, 0, },
{ 1, 0, 0, 0, 0, 0, 0, 0, 1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

MotionBlurLeftToRightFilter

The MotionBlurLeftToRightFilter creates the effect of blurring as a result of left to right movement. The matrix consists of 81 elements in a 9×9 configuration.

```public class MotionBlurLeftToRightFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "MotionBlurLeftToRightFilter"; }
}

private double factor = 1.0 / 9.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 1, 0, 0, 0, 0, 0, 0, 0, 0, },
{ 0, 1, 0, 0, 0, 0, 0, 0, 0, },
{ 0, 0, 1, 0, 0, 0, 0, 0, 0, },
{ 0, 0, 0, 1, 0, 0, 0, 0, 0, },
{ 0, 0, 0, 0, 1, 0, 0, 0, 0, },
{ 0, 0, 0, 0, 0, 1, 0, 0, 0, },
{ 0, 0, 0, 0, 0, 0, 1, 0, 0, },
{ 0, 0, 0, 0, 0, 0, 0, 1, 0, },
{ 0, 0, 0, 0, 0, 0, 0, 0, 1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

MotionBlurRightToLeftFilter

The MotionBlurRightToLeftFilter creates the effect of blurring as a result of right to left movement. The matrix consists of 81 elements in a 9×9 configuration.

```public class MotionBlurRightToLeftFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "MotionBlurRightToLeftFilter"; }
}

private double factor = 1.0 / 9.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 0, 0, 0, 0, 0, 0, 0, 0, 1, },
{ 0, 0, 0, 0, 0, 0, 0, 1, 0, },
{ 0, 0, 0, 0, 0, 0, 1, 0, 0, },
{ 0, 0, 0, 0, 0, 1, 0, 0, 0, },
{ 0, 0, 0, 0, 1, 0, 0, 0, 0, },
{ 0, 0, 0, 1, 0, 0, 0, 0, 0, },
{ 0, 0, 1, 0, 0, 0, 0, 0, 0, },
{ 0, 1, 0, 0, 0, 0, 0, 0, 0, },
{ 1, 0, 0, 0, 0, 0, 0, 0, 0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

Soften Filter

The SoftenFilter can be used to smooth or soften an image. The matrix consists of 9 elements in a 3×3 configuration.

```public class SoftenFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "SoftenFilter"; }
}

private double factor = 1.0 / 8.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 1, 1, 1, },
{ 1, 1, 1, },
{ 1, 1, 1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

Sharpen Filters

Sharpening an image does not add additional detail to the image, but rather adds emphasis to existing image details. Sharpening is sometimes referred to as improving image crispness.

SharpenFilter

This filter is intended as a general usage sharpening filter. In a variety of scenarios this filter should provide a reasonable level of sharpening, depending on source image quality.

```public class SharpenFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "SharpenFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1, -1, -1, },
{ -1,  9, -1, },
{ -1, -1, -1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

Sharpen3x3Filter

The Sharpen3x3Filter results in a medium level of sharpening, less intense when compared to the SharpenFilter discussed previously.

```public class Sharpen3x3Filter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Sharpen3x3Filter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { {  0, -1,  0, },
{ -1,  5, -1, },
{  0, -1,  0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

Sharpen3x3FactorFilter

The Sharpen3x3FactorFilter provides a level of sharpening similar to the Sharpen3x3Filter explored previously. Both filters define a 9 element 3×3 matrix. The filters differ in regards to Factor values: the Sharpen3x3Filter matrix values equate to a sum total of 1, whereas the Sharpen3x3FactorFilter matrix values equate to a sum total of 3. The Sharpen3x3FactorFilter therefore defines a Factor of 1 / 3, normalising the sum total back to 1.

```public class Sharpen3x3FactorFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Sharpen3x3FactorFilter"; }
}

private double factor = 1.0 / 3.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { {  0, -2,  0, },
{ -2, 11, -2, },
{  0, -2,  0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

Sharpen5x5Filter

The Sharpen5x5Filter matrix defines 25 elements in a 5×5 configuration. The level of sharpening resulting from implementing this filter depends to a greater extent on the source image. In some scenarios result images may appear slightly softened.

```public class Sharpen5x5Filter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Sharpen5x5Filter"; }
}

private double factor = 1.0 / 8.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1, -1, -1, -1, -1, },
{ -1,  2,  2,  2, -1, },
{ -1,  2,  8,  2, -1, },
{ -1,  2,  2,  2, -1, },
{ -1, -1, -1, -1, -1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

IntenseSharpenFilter

The IntenseSharpenFilter produces result images with strongly emphasized edge lines.

```public class IntenseSharpenFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "IntenseSharpenFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 1,  1, 1, },
{ 1, -7, 1, },
{ 1,  1, 1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

Edge Detection Filters

Edge detection is the first step towards feature detection and feature extraction in image processing. Edges are generally perceived in images in areas exhibiting sudden differences in brightness.

EdgeDetectionFilter

The EdgeDetectionFilter is intended to be used as a general purpose edge detection filter, considered appropriate in the majority of scenarios in which it is applied.

```public class EdgeDetectionFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "EdgeDetectionFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1, -1, -1, },
{ -1,  8, -1, },
{ -1, -1, -1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

EdgeDetection45DegreeFilter

The EdgeDetection45DegreeFilter has the ability to detect edges at 45 degree angles more effectively than other filters.

```public class EdgeDetection45DegreeFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "EdgeDetection45DegreeFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1,  0,  0,  0,  0, },
{  0, -2,  0,  0,  0, },
{  0,  0,  6,  0,  0, },
{  0,  0,  0, -2,  0, },
{  0,  0,  0,  0, -1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

HorizontalEdgeDetectionFilter

The HorizontalEdgeDetectionFilter has the ability to detect horizontal edges more effectively than other filters.

```public class HorizontalEdgeDetectionFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "HorizontalEdgeDetectionFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { {  0,  0,  0,  0,  0, },
{  0,  0,  0,  0,  0, },
{ -1, -1,  2,  0,  0, },
{  0,  0,  0,  0,  0, },
{  0,  0,  0,  0,  0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

VerticalEdgeDetectionFilter

The VerticalEdgeDetectionFilter has the ability to detect vertical edges more effectively than other filters.

```public class VerticalEdgeDetectionFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "VerticalEdgeDetectionFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 0,  0, -1,  0,  0, },
{ 0,  0, -1,  0,  0, },
{ 0,  0,  4,  0,  0, },
{ 0,  0, -1,  0,  0, },
{ 0,  0, -1,  0,  0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

EdgeDetectionTopLeftBottomRightFilter

This filter closely resembles an emboss filter, indicating object depth whilst still providing a reasonable level of detail.

```public class EdgeDetectionTopLeftBottomRightFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "EdgeDetectionTopLeftBottomRightFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -5,  0,  0, },
{  0,  0,  0, },
{  0,  0,  5, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

Emboss Filters

Emboss filters produce result images with an emphasis on depth, based on the lines/edges expressed in an input/source image. Result images give the impression of being three dimensional to a varying extent, dependent on the details defined by the input image.

EmbossFilter

The EmbossFilter is intended as a general application emboss filter. Take note of the Bias value of 128: without a bias value, result images would be very dark or mostly black.

```public class EmbossFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "EmbossFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 128.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 2,  0,  0, },
{ 0, -1,  0, },
{ 0,  0, -1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

Emboss45DegreeFilter

The Emboss45DegreeFilter produces result images with good emphasis on 45 degree edges/lines.

```public class Emboss45DegreeFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Emboss45DegreeFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 128.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1, -1,  0, },
{ -1,  0,  1, },
{  0,  1,  1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

EmbossTopLeftBottomRightFilter

The EmbossTopLeftBottomRightFilter provides a more subtle level of embossing in result images.

```public class EmbossTopLeftBottomRightFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "EmbossTopLeftBottomRightFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 128.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1, 0, 0, },
{  0, 0, 0, },
{  0, 0, 1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

IntenseEmbossFilter

When implementing the IntenseEmbossFilter, result images provide a good level of three dimensional depth. A drawback of this filter can sometimes be noticed in a reduction of image detail.

```public class IntenseEmbossFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "IntenseEmbossFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 128.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1, -1, -1, -1,  0, },
{ -1, -1, -1,  0,  1, },
{ -1, -1,  0,  1,  1, },
{ -1,  0,  1,  1,  1, },
{  0,  1,  1,  1,  1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

High Pass

High pass filters produce result images where only the high frequency components are retained.

```public class HighPass3x3Filter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "HighPass3x3Filter"; }
}

private double factor = 1.0 / 16.0;
public override double Factor
{
get { return factor; }
}

private double bias = 128.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1, -2, -1, },
{ -2, 12, -2, },
{ -1, -2, -1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of other articles related to imaging and image processing.

Article Purpose

The objective of this article is to illustrate Image Arithmetic being implemented when blending/combining two separate images into a single result image. The types of Image Arithmetic discussed are: Average, Add, SubtractLeft, SubtractRight, Difference, Multiply, Min, Max and Amplitude.

I created the following by implementing Image Arithmetic using as input a photo of a friend’s ear and a photograph taken at a live concert performance by The Red Hot Chili Peppers.

Using the Sample Application

The Sample source code accompanying this article includes a Sample Application developed on the Windows Forms platform. The Sample Application is intended to provide an implementation of the various types of Image Arithmetic explored in this article.

The Image Arithmetic sample application allows the user to select two source/input images from the local file system. The user interface defines a ComboBox dropdown populated with entries relating to the types of Image Arithmetic.

The following is a screenshot taken whilst creating the “Red Hot Chili Peppers Concert – Side profile Ear” blended image illustrated in the first image shown in this article. Notice the stark contrast when comparing the source/input preview images. Implementing Image Arithmetic allows us to create a smoothly blended result image:

Newly created images can be saved to the local file system by clicking the ‘Save Image’ button.

Image Arithmetic

In simple terms Image Arithmetic involves performing calculations on the corresponding pixel colour components of two images. The values resulting from those calculations represent a single image which is a combination of the two original source/input images. The extent to which a source/input image is represented in the resulting image depends on the type of Image Arithmetic employed.

The ArithmeticBlend Extension method

In this article Image Arithmetic has been implemented as a single extension method targeting the Bitmap class. The ArithmeticBlend extension method expects as parameters two source/input Bitmap objects and an enum value indicating the type of Image Arithmetic to perform.

The ColorCalculationType enum defines a value for each type of Image Arithmetic supported. The definition is as follows:

```public enum ColorCalculationType
{
Add,
Average,
SubtractLeft,
SubtractRight,
Difference,
Multiply,
Min,
Max,
Amplitude
}```

It is only within the ArithmeticBlend extension method that we perform Image Arithmetic. This method accesses the underlying pixel data of each sample Bitmap and creates copies stored in byte arrays. Each element within the byte array data buffer represents a single colour component, either Alpha, Red, Green or Blue.

The following code snippet details the implementation of the ArithmeticBlend extension method:

``` public static Bitmap ArithmeticBlend(this Bitmap sourceBitmap, Bitmap blendBitmap,
ColorCalculator.ColorCalculationType calculationType)
{
BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);

BitmapData blendData = blendBitmap.LockBits(new Rectangle(0, 0,
blendBitmap.Width, blendBitmap.Height),
ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

byte[] blendBuffer = new byte[blendData.Stride * blendData.Height];
Marshal.Copy(blendData.Scan0, blendBuffer, 0, blendBuffer.Length);
blendBitmap.UnlockBits(blendData);

for (int k = 0; (k + 4 < pixelBuffer.Length) &&
(k + 4 < blendBuffer.Length); k += 4)
{
pixelBuffer[k] = ColorCalculator.Calculate(pixelBuffer[k],
blendBuffer[k], calculationType);

pixelBuffer[k + 1] = ColorCalculator.Calculate(pixelBuffer[k + 1],
blendBuffer[k + 1], calculationType);

pixelBuffer[k + 2] = ColorCalculator.Calculate(pixelBuffer[k + 2],
blendBuffer[k + 2], calculationType);
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

Marshal.Copy(pixelBuffer, 0, resultData.Scan0, pixelBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;
} ```

We access and copy the underlying pixel data of each input Bitmap by making use of the LockBits method and the Marshal.Copy method.

The method iterates both byte array data buffers simultaneously, the for loop condition taking into regard the size of both arrays. Scenarios where the array data buffers differ in size occur when the specified source images are not equal in terms of size dimensions.

Notice how each iteration increments the loop counter by four, allowing us to treat each iteration as a complete pixel value. Remember that each data buffer element represents an individual colour component. Every four elements represent a single pixel consisting of the components: Alpha, Red, Green and Blue.

Take Note: The ordering of colour components is the exact opposite of the expected order. Each pixel’s colour components are ordered: Blue, Green, Red, Alpha. Since we are iterating an entire pixel with each iteration, the for loop counter value will always equate to an element index representing the Blue colour component. In order to access the Green and Red colour components we simply add one or two respectively to the for loop counter value.
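
The index arithmetic described above can be sketched as follows. This is an illustrative snippet only; the PixelIndex helper and the example values are not part of the sample source code, and the stride is assumed to equal the number of bytes in one row of pixels:

```csharp
using System;

// For a 32 Bpp Argb Bitmap the buffer stores each pixel's components
// in the order Blue, Green, Red, Alpha. Given the stride (bytes per
// row of pixels) the Blue component of pixel (x, y) is located at the
// index below; Green, Red and Alpha follow at offsets +1, +2 and +3.
static int PixelIndex(int x, int y, int stride)
{
    return y * stride + x * 4;
}

int stride = 400;                     // e.g. a 100 pixel wide 32bpp image
int blue = PixelIndex(2, 3, stride);  // Blue component of pixel (2, 3)
int green = blue + 1;
int red = blue + 2;
int alpha = blue + 3;

Console.WriteLine(blue);   // 1208
Console.WriteLine(alpha);  // 1211
```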

The task of performing the actual arithmetic has been encapsulated within the static Calculate method, a public member of the static class ColorCalculator. The Calculate method is discussed in more detail in the following section of this article.

The final task performed by the ArithmeticBlend method involves creating a new instance of the Bitmap class, which is then populated from the resulting array data buffer modified previously.

The ColorCalculator.Calculate method

The algorithms implemented in Image Arithmetic are encapsulated within the ColorCalculator.Calculate method. When invoking this method no knowledge of the technical implementation details is required. The parameters required are two byte values, each representing a single colour component, one from each source image. The only other required parameter is an enum value of type ColorCalculationType, which indicates the type of Image Arithmetic to implement using the byte parameters as operands.

The following code snippet details the full implementation of the ColorCalculator.Calculate method:

``` public static byte Calculate(byte color1, byte color2,
ColorCalculationType calculationType)
{
byte resultValue = 0;
int intResult = 0;

if (calculationType == ColorCalculationType.Add)
{
intResult = color1 + color2;
}
else if (calculationType == ColorCalculationType.Average)
{
intResult = (color1 + color2) / 2;
}
else if (calculationType == ColorCalculationType.SubtractLeft)
{
intResult = color1 - color2;
}
else if (calculationType == ColorCalculationType.SubtractRight)
{
intResult = color2 - color1;
}
else if (calculationType == ColorCalculationType.Difference)
{
intResult = Math.Abs(color1 - color2);
}
else if (calculationType == ColorCalculationType.Multiply)
{
intResult = (int)((color1 / 255.0 * color2 / 255.0) * 255.0);
}
else if (calculationType == ColorCalculationType.Min)
{
intResult = (color1 < color2 ? color1 : color2);
}
else if (calculationType == ColorCalculationType.Max)
{
intResult = (color1 > color2 ? color1 : color2);
}
else if (calculationType == ColorCalculationType.Amplitude)
{
intResult = (int)(Math.Sqrt(color1 * color1 + color2 * color2)
/ Math.Sqrt(2.0));
}

if (intResult < 0)
{
resultValue = 0;
}
else if (intResult > 255)
{
resultValue = 255;
}
else
{
resultValue = (byte)intResult;
}

return resultValue;
}
```

The bulk of the ColorCalculator.Calculate method’s implementation is set around a series of if/else if statements evaluating the calculationType parameter passed when the method is invoked.

Colour component values can only range from 0 to 255 inclusive. Calculations performed might result in values which do not fall within this valid range. Calculated values less than zero are set to zero and values exceeding 255 are set to 255; this is sometimes referred to as clamping.
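
The clamping step can be isolated into a small helper method; the following is a minimal sketch of the same logic (the ClampToByte name is illustrative, not part of the sample source code):

```csharp
using System;

// Clamp an intermediate calculation result to the valid range of a
// colour component: values below 0 become 0, values above 255 become 255.
static byte ClampToByte(int value)
{
    if (value < 0) { return 0; }
    if (value > 255) { return 255; }
    return (byte)value;
}

Console.WriteLine(ClampToByte(300));  // 255
Console.WriteLine(ClampToByte(-12));  // 0
Console.WriteLine(ClampToByte(100));  // 100
```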

The following sections of this article provide an explanation of each type of Image Arithmetic implemented.

```if (calculationType == ColorCalculationType.Add)
{
intResult = color1 + color2;
}```

The Add algorithm is straightforward, simply adding together the two colour component values. In other words the resulting colour component is set to the sum of both source colour components, provided the total does not exceed 255.

Sample Image

Image Arithmetic: Average

```if (calculationType == ColorCalculationType.Average)
{
intResult = (color1 + color2) / 2;
}```

The Average algorithm calculates a simple average by adding together the two colour components and then dividing the result by two.

Sample Image

Image Arithmetic: SubtractLeft

```if (calculationType == ColorCalculationType.SubtractLeft)
{
intResult = color1 - color2;
}```

The SubtractLeft algorithm subtracts the value of the second colour component parameter from the first colour component parameter.

Sample Image

Image Arithmetic: SubtractRight

```if (calculationType == ColorCalculationType.SubtractRight)
{
intResult = color2 - color1;
}```

The SubtractRight algorithm, in contrast to SubtractLeft, subtracts the value of the first colour component parameter from the second colour component parameter.

Sample Image

Image Arithmetic: Difference

```if (calculationType == ColorCalculationType.Difference)
{
intResult = Math.Abs(color1 - color2);
}```

The Difference algorithm subtracts the value of the second colour component parameter from the first colour component parameter. By passing the result of the subtraction as a parameter to the Math.Abs method the algorithm ensures only calculating absolute/positive values. In other words calculating the difference in value between colour component parameters.

Sample Image

Image Arithmetic: Multiply

```if (calculationType == ColorCalculationType.Multiply)
{
intResult = (int)((color1 / 255.0 * color2 / 255.0) * 255.0);
}```

The Multiply algorithm divides each colour component parameter by 255, then multiplies the results of the two divisions, and finally multiplies that result by 255 again.
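
As a worked example of the Multiply calculation, two mid-range components of 128 each produce a darker result of 64 (the variable values are arbitrary examples):

```csharp
using System;

byte color1 = 128;
byte color2 = 128;

// Normalise both components to the range 0..1, multiply the two
// fractions, then scale back to the 0..255 range, as in the
// Multiply branch of the Calculate method.
int intResult = (int)((color1 / 255.0 * color2 / 255.0) * 255.0);

Console.WriteLine(intResult);  // 64
```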

Sample Image

Image Arithmetic: Min

```if (calculationType == ColorCalculationType.Min)
{
intResult = (color1 < color2 ? color1 : color2);
}```

The Min algorithm simply compares the two colour component parameters and returns the smallest value of the two.

Sample Image

Image Arithmetic: Max

```if (calculationType == ColorCalculationType.Max)
{
intResult = (color1 > color2 ? color1 : color2);
}```

The Max algorithm, as can be expected, will produce the exact opposite result when compared to the Min algorithm. This algorithm compares the two colour component parameters and returns the larger value of the two.

Sample Image

Image Arithmetic: Amplitude

``` else if (calculationType == ColorCalculationType.Amplitude)
{
intResult = (int)(Math.Sqrt(color1 * color1 +
color2 * color2) /
Math.Sqrt(2.0));
}
```

The Amplitude algorithm calculates the amplitude of the two colour component parameters by multiplying each colour component by itself, summing the results and taking the square root of the sum. The last step divides that result by the square root of two.
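
As a worked example, a fully intense component paired with a zero component yields roughly 180, reflecting the division by the square root of two (the variable values are arbitrary examples):

```csharp
using System;

byte color1 = 255;
byte color2 = 0;

// Square and sum the two components, take the square root, then
// divide by the square root of two so that the maximum possible
// inputs still map into the 0..255 range.
int intResult = (int)(Math.Sqrt(color1 * color1 + color2 * color2)
                      / Math.Sqrt(2.0));

Console.WriteLine(intResult);  // 180
```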

Sample Image

Article purpose

In this article we explore how to combine or blend two images by implementing various colour filters which affect how an image appears as part of the resulting blended image. The concepts detailed in this article are reinforced and easily reproduced by making use of the sample source code that accompanies this article.

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the sample Application

The provided sample source code builds a Windows Forms application which can be used to blend two separate image files read from the file system. The methods employed when applying blending operations on two images can be adjusted to a fine degree when using the sample application. In addition the sample application provides functionality allowing the user to save the resulting blended image, as well as the exact colour filters/algorithms that were applied. Filters created and saved previously can be loaded from the file system, allowing users to apply the exact same filtering without having to reconfigure it through the application front end.

Here is a screenshot of the sample application in action:

This scenario blends an image of an airplane with an image of a pathway in a forest. As a result of the colour filtering implemented when blending the two images, the output image appears to contain roughly equal elements of both inputs, without expressing all the elements of either original image.

What is Colour Filtering?

In general terms making reference to a filter would imply some or other mechanism being capable of sorting or providing selective inclusion/exclusion. There is a vast number of filter algorithms in existence, each applying some form of logic in order to achieve a desired outcome.

In C#, colour filter algorithms targeting Bitmap images have the ability to modify or manipulate the colour values expressed by an image. The extent to which colour values change, as well as the level of detail affected, is determined by the algorithms/calculations performed when a filter operation becomes active.

An image can be thought of as being similar to a two dimensional array, with each array element representing the data required to express an individual pixel. Two dimensional arrays are similar to the concept of a grid defined by rows and columns. The rows of pixels represented by an image are all required to be of equal length. When thinking in terms of a two dimensional array or grid analogy, it follows that the number of columns and rows in fact translate to an image’s width and height.

An image’s colour data amounts to more than rows and columns of pixels: each pixel contained in an image defines a colour value. In this article we focus on 32bit images of type ARGB.

Image Bit Depth and Pixel Colour components

Image bit depth is also referred to as the number of bits per pixel. Bits per pixel (Bpp) is an indicator of the amount of storage space required to store an image. A bit is the smallest unit of storage available in a storage medium; it can only hold a value of either 1 or 0.

In C# the smallest unit of storage natively expressed comes in the form of the byte integral type. A byte value occupies 8 bits of storage; it is not possible in C# to express a value as a single bit. The .net framework provides functionality enabling developers to work with the concept of bit values, which in reality occupy far more than 1 bit of memory when implemented.

If we consider as an example an image formatted as 32 bits per pixel, as the name implies, 32 bits of storage are required for each pixel contained in the image. Knowing that a byte represents 8 bits, logic dictates that 32 bits are equal to four bytes, determined by dividing 32 bits by 8. Images formatted as 32 bits per pixel can therefore be considered as being encoded as 4 bytes per pixel: a 32 Bpp image requires 4 bytes to express a single pixel.
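
The storage arithmetic described above can be sketched as follows (the width and height values are arbitrary examples, and any stride padding is ignored):

```csharp
using System;

int bitsPerPixel = 32;
int bytesPerPixel = bitsPerPixel / 8;  // 32 Bpp equates to 4 bytes per pixel

// Ignoring stride padding, the minimum raw buffer size of an image
// is simply width * height * bytes per pixel.
int width = 640;
int height = 480;
int bufferSize = width * height * bytesPerPixel;

Console.WriteLine(bytesPerPixel);  // 4
Console.WriteLine(bufferSize);     // 1228800
```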

The data represented by a pixel’s bytes

32 Bpp equates to 4 bytes per pixel, each byte in turn representing a colour component, together capable of expressing the single colour of the underlying pixel. When an image format is referred to as 32 Bpp Argb the four colour components contained in each pixel are: Red, Green, Blue and the Alpha component.

The Alpha component is an indication of a particular pixel’s transparency. Each colour component is represented by a byte value; the possible range of byte values stretches from 0 to 255 inclusive, therefore a colour component can only contain a value within this range. A colour component value is in fact a representation of that colour’s intensity associated with a single image pixel, a value of 255 indicating the highest possible intensity and 0 reflecting no intensity at all. When the Alpha component is set to a value of 255 it is an expression of no transparency. In the same fashion, when an Alpha component equates to 0 the associated pixel is considered completely transparent, thus negating whatever colour values are expressed by the remaining colour components.

Take note that not all image formats support transparency, almost similar to how certain formats can only represent grayscale or just black and white pixels. An image expressed only in terms of Red, Green and Blue colour values, containing no Alpha component, can be implemented as a 24 Bit RGB image. Each pixel’s storage requirement is then 24 bits or 3 bytes, representing colour intensity values for Red, Green and Blue, each limited to a value ranging from 0 to 255.

Manipulating a Pixel’s individual colour components

In C# it is possible to manipulate the value of a pixel’s individual colour components. Updating the value expressed by a colour component will most likely result in the colour value expressed by the pixel as a whole also changing. The resulting change affects the colour component’s intensity. As an example, if you were to double the value of the Blue colour component, the associated pixel will most likely represent a colour having twice the previous intensity of Blue.

When you manipulate colour components you are in effect applying a colour filter to the underlying image. Iterating the byte values and performing updates based on a calculation or a conditional check qualifies as an algorithm, in other words an image/colour filter.

Retrieving Image Colour components as an array of bytes

As a result of the .net Framework’s memory handling infrastructure and the implementation of the Garbage Collector, it is likely that the memory address assigned to a variable when instantiated could change during the variable’s lifetime and scope. It is the responsibility of the Garbage Collector to update a variable’s memory reference when an operation performed by the Garbage Collector results in values kept in memory moving to a new address.

When accessing an image’s underlying data expressed as a byte array we require a mechanism to signal the Garbage Collector that an additional memory reference exists, one which cannot be updated when values shift in memory to a new address.

The Bitmap class provides the LockBits method: when invoked, the byte values representing pixel data will not be moved in memory by the Garbage Collector until the values are unlocked by invoking the UnlockBits method, also defined by the Bitmap class.

The LockBits method defines a return value of type BitmapData, which contains data related to the lock operation. When invoking the UnlockBits method you are required to pass the BitmapData object returned when you invoked LockBits.

The LockBits method provides an overloaded implementation allowing the calling code to specify only a certain part of a Bitmap to lock in memory, expressed as a Rectangle.

The sample code that accompanies this article updates the entire image and therefore specifies the portion to lock as a Rectangle equal in size to the whole Bitmap.

The BitmapData class defines the Scan0 property of type IntPtr. Scan0 contains the address in memory of the Bitmap’s first pixel, in other words the starting point of a Bitmap’s underlying data.

Creating an algorithm to Filter Bitmap Colour components

The sample code is implemented to blend two images, considering one as a source or base canvas and the second as an overlay image. The algorithm used to implement a blending operation is defined as a class exposing public properties which affect how blending is achieved.

The source code that implements the algorithm functions by creating an object instance of the BitmapFilterData class and setting the associated public properties to certain values, as specified by the user through the user interface at runtime. The BitmapFilterData class can thus be considered to qualify as a dynamic algorithm. The following source code contains the definition of the BitmapFilterData class.

```[Serializable]
public class BitmapFilterData
{
private bool sourceBlueEnabled = false;
public bool SourceBlueEnabled { get { return sourceBlueEnabled; } set { sourceBlueEnabled = value; } }

private bool sourceGreenEnabled = false;
public bool SourceGreenEnabled { get { return sourceGreenEnabled; } set { sourceGreenEnabled = value; } }

private bool sourceRedEnabled = false;
public bool SourceRedEnabled { get { return sourceRedEnabled; } set { sourceRedEnabled = value; } }

private bool overlayBlueEnabled = false;
public bool OverlayBlueEnabled { get { return overlayBlueEnabled; } set { overlayBlueEnabled = value; } }

private bool overlayGreenEnabled = false;
public bool OverlayGreenEnabled { get { return overlayGreenEnabled; } set { overlayGreenEnabled = value; } }

private bool overlayRedEnabled = false;
public bool OverlayRedEnabled { get { return overlayRedEnabled; } set { overlayRedEnabled = value; } }

private float sourceBlueLevel = 1.0f;
public float SourceBlueLevel { get { return sourceBlueLevel; } set { sourceBlueLevel = value; } }

private float sourceGreenLevel = 1.0f;
public float SourceGreenLevel { get { return sourceGreenLevel; } set { sourceGreenLevel = value; } }

private float sourceRedLevel = 1.0f;
public float SourceRedLevel { get { return sourceRedLevel; } set { sourceRedLevel = value; } }

private float overlayBlueLevel = 0.0f;
public float OverlayBlueLevel { get { return overlayBlueLevel; } set { overlayBlueLevel = value; } }

private float overlayGreenLevel = 0.0f;
public float OverlayGreenLevel { get { return overlayGreenLevel; } set { overlayGreenLevel = value; } }

private float overlayRedLevel = 0.0f;
public float OverlayRedLevel { get { return overlayRedLevel; } set { overlayRedLevel = value; } }

private ColorComponentBlendType blendTypeBlue = ColorComponentBlendType.Add;
public ColorComponentBlendType BlendTypeBlue { get { return blendTypeBlue; } set { blendTypeBlue = value; } }

private ColorComponentBlendType blendTypeGreen = ColorComponentBlendType.Add;
public ColorComponentBlendType BlendTypeGreen { get { return blendTypeGreen; } set { blendTypeGreen = value; } }

private ColorComponentBlendType blendTypeRed = ColorComponentBlendType.Add;
public ColorComponentBlendType BlendTypeRed { get { return blendTypeRed; } set { blendTypeRed = value; } }

public static string XmlSerialize(BitmapFilterData filterData)
{
XmlSerializer xmlSerializer = new XmlSerializer(typeof(BitmapFilterData));

XmlWriterSettings xmlSettings = new XmlWriterSettings();
xmlSettings.Encoding = Encoding.UTF8;
xmlSettings.Indent = true;

MemoryStream memoryStream = new MemoryStream();
XmlWriter xmlWriter = XmlWriter.Create(memoryStream, xmlSettings);

xmlSerializer.Serialize(xmlWriter, filterData);
xmlWriter.Flush();

string xmlString = xmlSettings.Encoding.GetString(memoryStream.ToArray());

xmlWriter.Close();
memoryStream.Close();
memoryStream.Dispose();

return xmlString;
}

public static BitmapFilterData XmlDeserialize(string xmlString)
{
XmlSerializer xmlSerializer = new XmlSerializer(typeof(BitmapFilterData));
MemoryStream memoryStream = new MemoryStream(Encoding.UTF8.GetBytes(xmlString));

BitmapFilterData filterData = null;

memoryStream.Position = 0;

filterData = (BitmapFilterData)xmlSerializer.Deserialize(memoryStream);

memoryStream.Close();
memoryStream.Dispose();

return filterData;
}
}

public enum ColorComponentBlendType
{
Add,
Subtract,
Average,
DescendingOrder,
AscendingOrder
}```

The BitmapFilterData class defines 6 public properties indicating whether a colour component should be included when calculating that component’s new value. The properties are:

• SourceBlueEnabled
• SourceGreenEnabled
• SourceRedEnabled
• OverlayBlueEnabled
• OverlayGreenEnabled
• OverlayRedEnabled

In addition a further 6 related public properties are defined, which dictate the factor by which a colour component contributes towards the value of the resulting colour component. The properties are:

• SourceBlueLevel
• SourceGreenLevel
• SourceRedLevel
• OverlayBlueLevel
• OverlayGreenLevel
• OverlayRedLevel

Only if a colour component’s related Enabled property is set to true will the associated Level property be applicable when calculating the new value of the colour component.

The BitmapFilterData class next defines 3 public properties of type ColorComponentBlendType. This enum’s value determines the calculation performed between source and overlay colour components. Only after each colour component has been modified by applying the associated Level property factor will the calculation defined by the ColorComponentBlendType value be performed.

Source and Overlay colour components can be added together, subtracted, averaged, or compared with either the larger or the smaller value discarded. The 3 public properties that define the calculation type to be performed are:

• BlendTypeBlue
• BlendTypeGreen
• BlendTypeRed

Notice the two public methods defined by the BitmapFilterData class, XmlSerialize and XmlDeserialize. These two methods enable the calling code to Xml serialize a BitmapFilterData object to a string variable containing the object’s Xml representation, or to create an object instance based on an Xml representation.

The sample application implements Xml serialization and deserialization when saving a BitmapFilterData object to the file system and when creating a BitmapFilterData object previously saved on the file system.

Bitmap Blending Implemented as an extension method

The sample source code provides the definition for the BlendImage method, an extension method targeting the Bitmap class. This method creates a new memory Bitmap of which the colour values are calculated from a source image and an overlay image, as defined by a BitmapFilterData object instance parameter. The source code listing for the BlendImage method:

```public static Bitmap BlendImage(this Bitmap baseImage, Bitmap overlayImage, BitmapFilterData filterData)
{
BitmapData baseImageData = baseImage.LockBits(new Rectangle(0, 0, baseImage.Width, baseImage.Height),
System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
byte[] baseImageBuffer = new byte[baseImageData.Stride * baseImageData.Height];

Marshal.Copy(baseImageData.Scan0, baseImageBuffer, 0, baseImageBuffer.Length);

BitmapData overlayImageData = overlayImage.LockBits(new Rectangle(0, 0, overlayImage.Width, overlayImage.Height),
System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
byte[] overlayImageBuffer = new byte[overlayImageData.Stride * overlayImageData.Height];

Marshal.Copy(overlayImageData.Scan0, overlayImageBuffer, 0, overlayImageBuffer.Length);

float sourceBlue = 0;
float sourceGreen = 0;
float sourceRed = 0;

float overlayBlue = 0;
float overlayGreen = 0;
float overlayRed = 0;

for (int k = 0; k < baseImageBuffer.Length && k < overlayImageBuffer.Length; k += 4)
{
sourceBlue = (filterData.SourceBlueEnabled ? baseImageBuffer[k] * filterData.SourceBlueLevel : 0);
sourceGreen = (filterData.SourceGreenEnabled ? baseImageBuffer[k+1] * filterData.SourceGreenLevel : 0);
sourceRed = (filterData.SourceRedEnabled ? baseImageBuffer[k+2] * filterData.SourceRedLevel : 0);

overlayBlue = (filterData.OverlayBlueEnabled ? overlayImageBuffer[k] * filterData.OverlayBlueLevel : 0);
overlayGreen = (filterData.OverlayGreenEnabled ? overlayImageBuffer[k + 1] * filterData.OverlayGreenLevel : 0);
overlayRed = (filterData.OverlayRedEnabled ? overlayImageBuffer[k + 2] * filterData.OverlayRedLevel : 0);

baseImageBuffer[k] = CalculateColorComponentBlendValue(sourceBlue, overlayBlue, filterData.BlendTypeBlue);
baseImageBuffer[k + 1] = CalculateColorComponentBlendValue(sourceGreen, overlayGreen, filterData.BlendTypeGreen);
baseImageBuffer[k + 2] = CalculateColorComponentBlendValue(sourceRed, overlayRed, filterData.BlendTypeRed);
}

Bitmap bitmapResult = new Bitmap(baseImage.Width, baseImage.Height, PixelFormat.Format32bppArgb);
BitmapData resultImageData = bitmapResult.LockBits(new Rectangle(0, 0, bitmapResult.Width, bitmapResult.Height),
System.Drawing.Imaging.ImageLockMode.WriteOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);

Marshal.Copy(baseImageBuffer, 0, resultImageData.Scan0, baseImageBuffer.Length);

bitmapResult.UnlockBits(resultImageData);
baseImage.UnlockBits(baseImageData);
overlayImage.UnlockBits(overlayImageData);

return bitmapResult;
}```

The BlendImage method starts off by creating 2 byte arrays intended to contain the source and overlay Bitmap’s pixel data, expressed in 32 Bits per pixel Argb format. As discussed earlier, both Bitmaps’ data are locked in memory by invoking the LockBits method.

Before copying values we first need to declare a byte array to copy to. The size in bytes of a Bitmap’s raw data can be determined by multiplying Stride and Height. We obtained an instance of the BitmapData class when we invoked LockBits. The associated BitmapData object contains data about the lock operation, hence it needs to be supplied as a parameter when unlocking the bits by invoking UnlockBits.

The Stride property refers to the Bitmap’s scan width; it can also be described as the total number of colour components found within one row of pixels of a Bitmap. Note that the Stride property is rounded up to a four byte boundary; we can thus make the deduction that Stride modulus 4 will always equal 0. The source code multiplies the Stride property by the Height property. Also known as the number of scan lines, the Height property is equal to the height in pixels of the Bitmap from which the BitmapData object was derived.

Multiplying Stride and Height can be considered the same as multiplying an image’s byte depth, Width and Height.

Once the two byte arrays have been obtained from the source and overlay Bitmaps, the sample source iterates both arrays simultaneously. Notice how the code iterates through the arrays: the for loop counter is incremented by a value of 4 each time the loop executes. The for loop executes in terms of pixel values; remember that one pixel consists of 4 bytes/colour components when the image has been encoded in a 32Bit Argb format.

Within the for loop the code first calculates a value for each colour component, except the Alpha component. The calculation is based on the values defined by the BitmapFilterData object.

Did you notice that the source code assigns colour components in the order of Blue, Green, Red? An easy oversight when it comes to manipulating Bitmaps comes in the form of not remembering the correct order of colour components. The pixel format is referred to as 32bppArgb, in fact most references state Argb being Alpha, Red, Green and Blue.  The colour components representing a pixel are in fact ordered Blue, Green, Red and Alpha, the exact opposite to most naming conventions. Since the for loop’s increment statement increments by four with each loop we can access the Green, Red and Alpha components by adding the values 1, 2 or 3 to the loop’s index component, in this case defined as the variable k. Remember the colour component ordering, it can make for some interesting unintended consequences/features.

If a colour component is set to disabled in the BitmapFilterData object the new colour component will be set to a value of 0. Whilst iterating the arrays a set of calculations are also performed as defined by each colour component’s associated ColorComponentBlendType, defined by the BitmapFilterData object. The calculations are implemented by invoking the CalculateColorComponentBlendValue method.

Once all calculations are completed a new Bitmap object is created, to which the updated pixel data is copied after the newly created Bitmap has been locked in memory. Before returning the Bitmap, which now contains the blended colour component values, all Bitmaps are unlocked in memory.

Calculating Colour component Blend Values

The CalculateColorComponentBlendValue method is implemented to calculate blend values as follows:

```private static byte CalculateColorComponentBlendValue(float source, float overlay, ColorComponentBlendType blendType)
{
    float resultValue = 0;
    byte resultByte = 0;

    if (blendType == ColorComponentBlendType.Add)
    {
        resultValue = source + overlay;
    }
    else if (blendType == ColorComponentBlendType.Subtract)
    {
        resultValue = source - overlay;
    }
    else if (blendType == ColorComponentBlendType.Average)
    {
        resultValue = (source + overlay) / 2.0f;
    }
    else if (blendType == ColorComponentBlendType.AscendingOrder)
    {
        resultValue = (source > overlay ? overlay : source);
    }
    else if (blendType == ColorComponentBlendType.DescendingOrder)
    {
        resultValue = (source < overlay ? overlay : source);
    }

    if (resultValue > 255)
    {
        resultByte = 255;
    }
    else if (resultValue < 0)
    {
        resultByte = 0;
    }
    else
    {
        resultByte = (byte)resultValue;
    }

    return resultByte;
}```

Different sets of calculations are performed for each colour component based on the blend type specified by the BitmapFilterData object. It is important to note that before the method returns a result a check is performed ensuring the new colour component’s value falls within the valid range of 0 to 255. Should a value exceed 255 it is clamped to 255; likewise a negative value is clamped to 0.
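The clamp at the end of the method can also be written with Math.Min and Math.Max; the following helper (hypothetical, not part of the sample source code) behaves identically:

```csharp
using System;

public static class ColorMath
{
    // Clamp a calculated colour value to the valid byte range of 0 to 255.
    public static byte ClampToByte(float value)
    {
        return (byte)Math.Max(0f, Math.Min(255f, value));
    }
}
```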

The implementation – A Windows Forms Application

The accompanying source code defines a Windows Forms application. The ImageBlending application enables a user to specify the source and overlay input images, both of which are displayed at a scaled size on the left-hand side of the form. On the right-hand side of the form users are presented with various controls that can be used to adjust the current colour filtering algorithm.

The following screenshot shows the user specified input source and overlay images:

The next screenshot details the controls available which can be used to modify the colour filtering algorithm being applied:

Notice the checkboxes labelled Blue, Green and Red; the check state determines whether the associated colour component’s value will be used in calculations. The six trackbar controls shown above can be used to set the exact factor applied to the associated colour component when implementing the colour filter. Lastly, towards the bottom of the screen, the three comboboxes indicate the ColorComponentBlendType value implemented in the colour filter calculations.

The blended output image implementing the specified colour filter is located towards the middle of the form. Recall from the screenshots shown above that the input image is of a beach scene and the overlay a rock concert at night. The degree and intensity to which colours from both images feature in the output image is determined by the colour filter, as defined earlier through the user interface.

Towards the bottom of this part of the screen two buttons can be seen, labelled “Save Filter” and “Load Filter”. When a user creates a filter the filter values can be persisted to the file system by clicking Save Filter and specifying a file path. To implement a colour filter created and saved earlier users can click the Load Filter button and file browse to the correct file.
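The article does not reproduce the persistence code, but saving and loading a filter definition of this kind is typically a matter of Xml serialization. The sketch below uses a trimmed-down, hypothetical stand-in for the BitmapFilterData type, with only two of the properties shown in the Xml listing further on:

```csharp
using System.IO;
using System.Xml.Serialization;

// Hypothetical, trimmed-down stand-in for the article's BitmapFilterData type.
public class BitmapFilterData
{
    public bool SourceBlueEnabled { get; set; }
    public float SourceBlueLevel { get; set; }
}

public static class FilterStorage
{
    // Persist a filter definition to the file system as Xml.
    public static void SaveFilter(BitmapFilterData filter, string filePath)
    {
        using (FileStream stream = File.Create(filePath))
        {
            new XmlSerializer(typeof(BitmapFilterData)).Serialize(stream, filter);
        }
    }

    // Recreate a filter definition from a previously saved file.
    public static BitmapFilterData LoadFilter(string filePath)
    {
        using (FileStream stream = File.OpenRead(filePath))
        {
            return (BitmapFilterData)new XmlSerializer(typeof(BitmapFilterData)).Deserialize(stream);
        }
    }
}
```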

For the sake of clarity the screenshot featured below captures the entire application front end. The previous images consist of parts taken from this image:

The trackbar controls provided by the user interface enable a user to quickly and effortlessly test a colour filter. The ImageBlend application has been implemented to provide instant feedback/processing based on user input. Changing any of the colour filter values exposed by the user interface results in the colour filter being applied instantly.

When the user clicks on the Load button associated with the source image a standard Open File Dialog displays, prompting the user to select a source file. Once a source file has been selected the ImageBlend application creates a memory Bitmap from the file system image. The method in which colour filtering is implemented requires that input files have a bit depth of 32 bits and are formatted as an Argb Bitmap. If a user specifies an image that is not in a 32bppArgb format the source code will attempt to convert the input image to the correct format by making use of the LoadArgbBitmap method.

Converting to the 32BppArgb format

The sample source code defines the LoadArgbBitmap extension method, targeting the string class. In addition to converting images, this method can also resize the provided image. The LoadArgbBitmap method is invoked when a user specifies a source image and also when specifying an overlay image. In order to provide better colour filtering the source and overlay images are required to be the same size. Source images are never resized; only overlay images are resized, to match the dimensions of the specified source image. The following code snippet provides the implementation of the LoadArgbBitmap method:

```public static Bitmap LoadArgbBitmap(this string filePath, Size? imageDimensions = null)
{
    // Open the specified file and create a Bitmap from the underlying stream.
    StreamReader streamReader = new StreamReader(filePath);
    Bitmap fileBmp = (Bitmap)Bitmap.FromStream(streamReader.BaseStream);

    int width = fileBmp.Width;
    int height = fileBmp.Height;

    if (imageDimensions != null)
    {
        width = imageDimensions.Value.Width;
        height = imageDimensions.Value.Height;
    }

    if (fileBmp.PixelFormat != PixelFormat.Format32bppArgb || fileBmp.Width != width || fileBmp.Height != height)
    {
        fileBmp = GetArgbCopy(fileBmp, width, height);
    }

    return fileBmp;
}```

Images are only resized if the required size is known and if an image is not already the size specified. A scenario where the required size might not yet be known can occur when the user first selects an overlay image with no source image specified yet. Whenever the source image changes the overlay image is recreated from the file system and resized if needed.

If the specified image does not conform to the 32bppArgb pixel format the LoadArgbBitmap method invokes the GetArgbCopy method, which is defined as follows:

```private static Bitmap GetArgbCopy(Bitmap sourceImage, int width, int height)
{
    Bitmap bmpNew = new Bitmap(width, height, PixelFormat.Format32bppArgb);

    using (Graphics graphics = Graphics.FromImage(bmpNew))
    {
        graphics.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
        graphics.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBilinear;
        graphics.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
        graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;

        graphics.DrawImage(sourceImage,
            new Rectangle(0, 0, bmpNew.Width, bmpNew.Height),
            new Rectangle(0, 0, sourceImage.Width, sourceImage.Height),
            GraphicsUnit.Pixel);
        graphics.Flush();
    }

    return bmpNew;
}```

The GetArgbCopy method implements GDI+ drawing, using the Graphics class in order to resize an image.

Bitmap Blending examples

In the example above two photos were taken from the same location. The first photo was taken in the late afternoon, the second once it was dark. The resulting blended image appears to show a time of day closer to sundown than in the first photo. The darker colours from the second photo are mostly excluded by the filter; lighter elements such as the stage lighting appear pronounced as a yellow tint in the output image.

The filter values specified are listed below as Xml. To reproduce the filter you can copy the Xml markup and save it to disk, specifying the file extension *.xbmp.

```<?xml version="1.0" encoding="utf-8"?>
<BitmapFilterData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<SourceBlueEnabled>true</SourceBlueEnabled>
<SourceGreenEnabled>true</SourceGreenEnabled>
<SourceRedEnabled>true</SourceRedEnabled>
<OverlayBlueEnabled>true</OverlayBlueEnabled>
<OverlayGreenEnabled>true</OverlayGreenEnabled>
<OverlayRedEnabled>true</OverlayRedEnabled>
<SourceBlueLevel>0.5</SourceBlueLevel>
<SourceGreenLevel>0.3</SourceGreenLevel>
<SourceRedLevel>0.2</SourceRedLevel>
<OverlayBlueLevel>0.75</OverlayBlueLevel>
<OverlayGreenLevel>0.5</OverlayGreenLevel>
<OverlayRedLevel>0.6</OverlayRedLevel>
<BlendTypeBlue>Subtract</BlendTypeBlue>
</BitmapFilterData>```

Blending two night time images:

Fun Fact: Some of the photos featured in this article were taken at a live concert of the Red Hot Chili Peppers, performing at Soccer City, Johannesburg, South Africa.

Article purpose

This article will illustrate how to create deep copies of an object by making use of the NetDataContractSerializer, implemented in the form of an extension method with generic type support.

Shallow Copy and Deep Copy

When creating a copy of an object in memory, the type of copy can be described as either a shallow copy or a deep copy. The Object class defines the MemberwiseClone method, which performs a bit by bit copy of an object’s value type members. In the case of reference type members the MemberwiseClone method will create a copy of the reference, but not a copy of the object being referenced. Creating a copy of an object using the MemberwiseClone method will thus result in the copy and the original object still referencing the same member object in memory when that object is a reference type. The MemberwiseClone method performs a shallow copy when invoked.

A deep copy of an object results in the copy and the original object not referencing the same reference type member object in memory.
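The distinction can be demonstrated without any serialization. In the hypothetical sketch below the shallow copy produced by MemberwiseClone shares its reference type member with the original object, whereas a handwritten deep copy does not:

```csharp
using System;

public class Inner
{
    public int Value { get; set; }
}

public class Outer
{
    public Inner Member { get; set; } = new Inner();

    // MemberwiseClone copies the reference, not the referenced object.
    public Outer ShallowCopy()
    {
        return (Outer)MemberwiseClone();
    }

    // A deep copy also creates a new instance of the referenced object.
    public Outer DeepCopy()
    {
        return new Outer { Member = new Inner { Value = Member.Value } };
    }
}
```

Modifying Member.Value through the shallow copy also changes the original object’s member, since both reference the same Inner instance; the deep copy is unaffected.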

This article is a follow-up to a previous article on binary serialization. When using binary serialization, objects being serialized have to be decorated with a number of attributes which aid serialization and deserialization. An object’s definition has to at the very least specify the Serializable attribute; if not, attempting serialization results in a runtime exception.

The advantage of implementing deep copy operations by making use of the NetDataContractSerializer can be argued around not having to specify serialization attributes. Although, similar to other serializers, only objects that define a parameterless constructor can be serialized without specifying any additional attributes.

Example custom data type

The code snippet listed below illustrates several user/custom defined data types. Notice the complete absence of any code attributes, as usually required for successful serialization/deserialization. Also pay attention to the private member variables, one being an enum and another a user defined reference type, defined towards the end of this snippet.

For the sake of convenience I override the ToString method, returning a string representation of an object’s member values.

```public class CustomDataType
{
    private ExampleReferenceType referenceType = new ExampleReferenceType();

    private CustomEnum enumMember = CustomEnum.EnumVal1;

    public void RefreshReferenceType()
    {
        referenceType.Refresh();
    }

    private int intMember = 0;
    public int IntMember
    {
        get { return intMember; }
        set { intMember = value; }
    }

    private string stringMember = String.Empty;
    public string StringMember
    {
        get { return stringMember; }
        set { stringMember = value; }
    }

    private DateTime dateTimeMember = DateTime.MinValue;
    public DateTime DateTimeMember
    {
        get { return dateTimeMember; }
        set { dateTimeMember = value; }
    }

    public override string ToString()
    {
        return "IntMember: " + IntMember +
               ", DateTimeMember: " + DateTimeMember.ToString() +
               ", StringMember: " + stringMember + ", EnumMember: " +
               enumMember.ToString() +
               ", ReferenceType: " + referenceType.ToString();
    }

    public void SetEnumValue(CustomEnum enumValue)
    {
        enumMember = enumValue;
    }
}

public class ExampleReferenceType
{
    private DateTime createdDate = DateTime.Now;

    public void Refresh()
    {
        createdDate = DateTime.Now;
    }

    public override string ToString()
    {
        return createdDate.ToString("HH:mm:ss.fff");
    }
}

public enum CustomEnum
{
    EnumVal1 = 2,
    EnumVal2 = 4,
    EnumVal3 = 8,
    EnumVal4 = 16,
}```

The DeepCopy method – Implementation as an extension method with generic type support

Extension method architecture enables developers to create methods which, from a syntactic and implementation point of view, appear to be part of an existing data type. Extension methods create the perception of being updates or additions, literally extending a data type as the name implies. Extension methods do not require access to the source code of the particular types being extended, nor does their implementation require recompilation of the referenced types.
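As a minimal illustration of the syntax, unrelated to the DeepCopy method itself, the following hypothetical extension method appears to callers as if it were a member of the string type:

```csharp
using System;
using System.Linq;

public static class StringExtensions
{
    // The 'this' keyword on the first parameter of a static method in a
    // static class is what makes the method an extension method.
    public static string Repeat(this string value, int count)
    {
        return string.Concat(Enumerable.Repeat(value, count));
    }
}
```

With this definition in scope, `"ab".Repeat(3)` can be called as though Repeat were defined on string itself, yielding "ababab".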

This article illustrates a combined implementation of extension methods and generic types. The following code snippet provides the definition.

```public static class ExtObject
{
    public static T DeepCopy<T>(this T objectToCopy)
    {
        MemoryStream memoryStream = new MemoryStream();

        NetDataContractSerializer netFormatter =
            new NetDataContractSerializer();

        netFormatter.Serialize(memoryStream, objectToCopy);

        memoryStream.Position = 0;
        T returnValue = (T)netFormatter.Deserialize(memoryStream);

        memoryStream.Close();
        memoryStream.Dispose();

        return returnValue;
    }
}```

The DeepCopy method is defined as an extension method by virtue of being a static method of a static class and by specifying the this keyword in its parameter definition.

DeepCopy additionally defines the generic type parameter <T>, which determines the return value’s type and the type of the parameter objectToCopy.

The method body creates an instance of a MemoryStream object and an object instance of type NetDataContractSerializer. When the Serialize method is invoked the Xml representation of the objectToCopy parameter is written to the specified MemoryStream. In a similar fashion Deserialize is invoked next, reading the Xml representation from the specified MemoryStream. The object returned is cast to the same type as the object originally serialized.

Note: MSDN describes the NetDataContractSerializer as follows:

Serializes and deserializes an instance of a type into XML stream or document using the supplied .NET Framework types.

The NetDataContractSerializer differs from the DataContractSerializer in one important way: the NetDataContractSerializer includes CLR type information in the serialized XML, whereas the DataContractSerializer does not. Therefore, the NetDataContractSerializer can be used only if both the serializing and deserializing ends share the same CLR types.

In the scenario illustrated it can be considered safe to use the NetDataContractSerializer, since objects being serialized are only persisted to memory for a few milliseconds and then deserialized back to an object instance.

The implementation

The DeepCopy method illustrated above appears as a member method of the CustomDataType class created earlier.

```static void Main(string[] args)
{
    CustomDataType originalObject = new CustomDataType();
    originalObject.DateTimeMember = DateTime.Now;
    originalObject.IntMember = 42;
    originalObject.StringMember = "Some random string";

    CustomDataType deepCopyObject = originalObject.DeepCopy();
    deepCopyObject.DateTimeMember = DateTime.MinValue;
    deepCopyObject.IntMember = 123;
    deepCopyObject.StringMember = "Something else...";
    deepCopyObject.RefreshReferenceType();

    Console.WriteLine("originalObject: ");
    Console.WriteLine(originalObject.ToString());
    Console.WriteLine();

    Console.WriteLine("deepCopyObject: ");
    Console.WriteLine(deepCopyObject.ToString());
    Console.WriteLine();

    Console.WriteLine("Press any key...");
    Console.ReadKey();
}```

The code snippet listed above is a console application which implements the DeepCopy extension method on objects of type CustomDataType. Modifying the member properties of the second object instance will not result in the first object instance’s properties being modified.

Article purpose

This article will illustrate how to create deep copies of an object by making use of binary serialization implemented in the form of an extension method with generic type support.

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Shallow Copy and Deep Copy

When creating a copy of an object in memory, the type of copy can be described as either a shallow copy or a deep copy. The Object class defines the MemberwiseClone method, which performs a bit by bit copy of an object’s value type members. In the case of reference type members the MemberwiseClone method will create a copy of the reference, but not a copy of the object being referenced. Creating a copy of an object using the MemberwiseClone method will thus result in copies and the original object still referencing the same member object in memory when that object is a reference type. The MemberwiseClone method performs a shallow copy when invoked.

A deep copy of an object results in copies and the original object not referencing the same reference type member object in memory.

Example custom data type

The sample source code provided with this article defines a user defined data type, the CustomDataType class, of which the code snippet is listed below.

```[Serializable]
public class CustomDataType
{
    private int intMember = 0;
    public int IntMember
    {
        get { return intMember; }
        set { intMember = value; }
    }

    private string stringMember = String.Empty;
    public string StringMember
    {
        get { return stringMember; }
        set { stringMember = value; }
    }

    private DateTime dateTimeMember = DateTime.MinValue;
    public DateTime DateTimeMember
    {
        get { return dateTimeMember; }
        set { dateTimeMember = value; }
    }

    public override string ToString()
    {
        return "IntMember: " + IntMember +
               ", DateTimeMember: " + DateTimeMember.ToString() +
               ", StringMember: " + stringMember;
    }
}```

Notice that the CustomDataType class definition is marked with the Serializable attribute. Objects of which the type definition is not marked with the Serializable attribute cannot be serialized by the BinaryFormatter. Trying to perform serialization on objects not marked as Serializable will result in an exception being thrown.
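Whether a type carries the Serializable attribute can be verified at runtime through the Type.IsSerializable property, as this brief sketch (with two hypothetical types) shows:

```csharp
using System;

[Serializable]
public class MarkedType { }

public class UnmarkedType { }

public static class SerializableCheck
{
    // Type.IsSerializable reports whether the Serializable attribute is present.
    public static bool CanSerialize(Type type)
    {
        return type.IsSerializable;
    }
}
```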

The DeepCopy method – Implementation as an extension method with generic type support

Extension method architecture enables developers to create methods which, from a syntactic and implementation point of view, appear to be part of an existing data type. Extension methods create the perception of being updates or additions, literally extending a data type as the name implies. Extension methods do not require access to the source code of the particular types being extended, nor does their implementation require recompilation of the referenced types.

This article illustrates a combined implementation of extension methods and generic types. The following code snippet provides the definition.

```public static class ExtObject
{
    public static T DeepCopy<T>(this T objectToCopy)
    {
        MemoryStream memoryStream = new MemoryStream();
        BinaryFormatter binaryFormatter = new BinaryFormatter();
        binaryFormatter.Serialize(memoryStream, objectToCopy);

        memoryStream.Position = 0;
        T returnValue = (T)binaryFormatter.Deserialize(memoryStream);

        memoryStream.Close();
        memoryStream.Dispose();

        return returnValue;
    }
}```

The DeepCopy method is defined as an extension method by virtue of being a static method of a static class and by specifying the this keyword in its parameter definition.

DeepCopy additionally defines the generic type parameter <T>, which determines the return value’s type and the type of the parameter objectToCopy.

The method body creates an instance of a MemoryStream object and an object instance of type BinaryFormatter. When Serialize is invoked the binary representation of the objectToCopy parameter is written to the specified MemoryStream. In a similar fashion Deserialize is invoked next, reading the binary representation from the specified MemoryStream. The object returned is cast to the same type as the object originally serialized.

The implementation

The DeepCopy method illustrated above appears as a member method of the CustomDataType class created earlier.

```static void Main(string[] args)
{
    CustomDataType originalObject = new CustomDataType();
    originalObject.DateTimeMember = DateTime.Now;
    originalObject.IntMember = 42;
    originalObject.StringMember = "Some random string";

    CustomDataType deepCopyObject = originalObject.DeepCopy();
    deepCopyObject.DateTimeMember = DateTime.MinValue;
    deepCopyObject.IntMember = 123;
    deepCopyObject.StringMember = "Something else...";

    Console.WriteLine("originalObject: ");
    Console.WriteLine(originalObject.ToString());
    Console.WriteLine();

    Console.WriteLine("deepCopyObject: ");
    Console.WriteLine(deepCopyObject.ToString());
    Console.WriteLine();

    Console.WriteLine("Press any key...");
    Console.ReadKey();
}```

The code snippet listed above is a console application which implements the DeepCopy extension method on objects of type CustomDataType. Modifying the member properties of the second object instance will not result in the first object instance properties being modified.
