### Article purpose

This article is aimed at exploring the concepts of Image Erosion, Image Dilation, and Open and Closed Morphology. In addition this article extends conventional erosion and dilation implementations through partial colour variations, applying erosion and dilation to selected colour components only.

### Using the sample application

Included in this article’s sample source code you’ll find a Windows Forms based sample application. The sample application can be used to test and replicate the concepts we explore in this article.

When executing the sample application, source/input images can be selected from the local system by clicking the Load Image button. On the right-hand side of the sample application’s user interface users can adjust the provided controls in order to modify the method of filtering being implemented.

The three checkboxes labelled Red, Green and Blue determine whether the relevant colour component will be regarded or not when implementing the configured filter.

Users are required to select a morphological filter type: Dilation, Erosion, Open or Closed. The selection is expressed by means of four options respectively labelled Dilate, Erode, Open and Closed.

The only other input required from a user comes in the form of selecting the filter intensity/filter size. The dropdown indicated as Filter Size provides the user with several intensity levels ranging from 3×3 to 17×17. Note: larger filter sizes result in additional processing when implementing the filter; large images combined with large filter sizes may require significantly more processor cycles.
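The quadratic growth in processing cost follows directly from the neighbourhood size: an n×n filter inspects n² neighbours per pixel. A quick language-agnostic illustration (Python, purely for exposition):

```python
# Neighbourhood size (and therefore work per pixel) grows with the
# square of the filter dimension.
comparisons = {n: n * n for n in range(3, 18, 2)}  # 3x3 through 17x17
print(comparisons[3], comparisons[17])  # 9 289
```

Moving from the smallest to the largest supported filter size multiplies the per-pixel work by a factor of roughly 32.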

Resulting filtered images can be saved to the local file system by clicking the Save Image button. The screenshot below illustrates the Image Erosion and Dilation sample application in action:

### Mathematical Morphology

A description of Mathematical Morphology as expressed on Wikipedia:

Mathematical morphology (MM) is a theory and technique for the analysis and processing of geometrical structures, based on set theory, lattice theory, topology, and random functions. MM is most commonly applied to digital images, but it can be employed as well on graphs, surface meshes, solids, and many other spatial structures.

Topological and geometrical continuous-space concepts such as size, shape, convexity, connectivity, and geodesic distance were introduced by MM on both continuous and discrete spaces. MM is also the foundation of morphological image processing, which consists of a set of operators that transform images according to the above characterizations.

MM was originally developed for binary images, and was later extended to grayscale functions and images. The subsequent generalization to complete lattices is widely accepted today as MM’s theoretical foundation.

In this article we explore Image Erosion and Image Dilation, as well as Open and Closed Morphology. The implementation of these filters is significantly easier to grasp when compared to most formal definitions of mathematical morphology.

### Image Erosion and Dilation

Image Erosion and Image Dilation are implementations of morphological operators, a subset of Mathematical Morphology. In simpler terms Image Dilation can be defined by this quote:

Dilation is one of the two basic operators in the area of mathematical morphology, the other being erosion. It is typically applied to binary images, but there are versions that work on grayscale images. The basic effect of the operator on a binary image is to gradually enlarge the boundaries of regions of foreground pixels (i.e. white pixels, typically). Thus areas of foreground pixels grow in size while holes within those regions become smaller.

Image Erosion, being a related concept, is defined by this quote:

Erosion is one of the two basic operators in the area of mathematical morphology, the other being dilation. It is typically applied to binary images, but there are versions that work on grayscale images. The basic effect of the operator on a binary image is to erode away the boundaries of regions of foreground pixels (i.e. white pixels, typically). Thus areas of foreground pixels shrink in size, and holes within those areas become larger.

From the definitions listed above we gather that Image Dilation increases the size of edges contained in an image. In contrast, Image Erosion decreases or shrinks the size of an image’s edges.

### Open and Closed Morphology

Building upon the concepts of Image Erosion and Image Dilation, this section explores Open and Closed Morphology. A good definition of Open Morphology can be expressed as follows:

The basic effect of an opening is somewhat like erosion in that it tends to remove some of the foreground (bright) pixels from the edges of regions of foreground pixels. However it is less destructive than erosion in general. As with other morphological operators, the exact operation is determined by a structuring element. The effect of the operator is to preserve foreground regions that have a similar shape to this structuring element, or that can completely contain the structuring element, while eliminating all other regions of foreground pixels.

In turn, Closed Morphology can be defined as follows:

Closing is similar in some ways to dilation in that it tends to enlarge the boundaries of foreground (bright) regions in an image (and shrink background colour holes in such regions), but it is less destructive of the original boundary shape. As with other morphological operators, the exact operation is determined by a structuring element. The effect of the operator is to preserve background regions that have a similar shape to this structuring element, or that can completely contain the structuring element, while eliminating all other regions of background pixels.

### Implementing Image Erosion and Dilation

In this article we implement Image Erosion and Image Dilation by iterating each pixel contained within an image. The colour of each pixel is determined by taking a pixel’s neighbouring pixels into account.

When implementing Image Dilation, a pixel’s value is determined by comparing neighbouring pixels’ colour values and selecting the highest colour value expressed amongst neighbouring pixels.

In contrast, we implement Image Erosion by also inspecting neighbouring pixels’ colour values, this time selecting the lowest colour value expressed amongst neighbouring pixels.
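The per-pixel logic described above amounts to a sliding maximum (dilation) or minimum (erosion) over each pixel's neighbourhood. Although the article's implementation is C# working on raw Bitmap buffers, the idea is language-agnostic; a minimal sketch in Python, using plain nested lists as a single-channel grayscale image:

```python
def morphology(image, size, dilate=True):
    """Neighbourhood max (dilation) or min (erosion) on a 2D grayscale image.
    Border pixels are left unchanged, mirroring the article's loop bounds."""
    offset = (size - 1) // 2
    height, width = len(image), len(image[0])
    result = [row[:] for row in image]
    pick = max if dilate else min
    for y in range(offset, height - offset):
        for x in range(offset, width - offset):
            neighbours = [image[y + fy][x + fx]
                          for fy in range(-offset, offset + 1)
                          for fx in range(-offset, offset + 1)]
            result[y][x] = pick(neighbours)
    return result

# A single bright pixel in a dark image:
bright = [[0] * 5 for _ in range(5)]
bright[2][2] = 9
dilated = morphology(bright, 3, dilate=True)
print(dilated[1][1])  # 9 -- the bright pixel has spread to its neighbours
```

Dilation grows the bright region into its neighbours, while erosion with the same input removes the lone bright pixel entirely, matching the "grow" and "shrink" behaviour described in the definitions above.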

In addition to conventional erosion and dilation, the sample source code provides the ability to perform erosion and dilation targeting only specific colour components. Applying erosion or dilation to specific colours produces images which express the effects of the filter only in certain colours. Depending on the filter parameters specified, edges appear to have a coloured glow or shadow.

The sample source code provides the definition for the DilateAndErodeFilter extension method, targeting the Bitmap class. The following code snippet details the implementation of the DilateAndErodeFilter method:

```public static Bitmap DilateAndErodeFilter(
this Bitmap sourceBitmap,
int matrixSize,
MorphologyType morphType,
bool applyBlue = true,
bool applyGreen = true,
bool applyRed = true)
{
BitmapData sourceData =
sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride *
sourceData.Height];

byte[] resultBuffer = new byte[sourceData.Stride *
sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0,
pixelBuffer.Length);

sourceBitmap.UnlockBits(sourceData);

int filterOffset = (matrixSize - 1) / 2;
int calcOffset = 0;

int byteOffset = 0;

byte blue = 0;
byte green = 0;
byte red = 0;

byte morphResetValue = 0;

if (morphType == MorphologyType.Erosion)
{
morphResetValue = 255;
}

for (int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for (int offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
byteOffset = offsetY *
sourceData.Stride +
offsetX * 4;

blue = morphResetValue;
green = morphResetValue;
red = morphResetValue;

if (morphType == MorphologyType.Dilation)
{
for (int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{
calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

if (pixelBuffer[calcOffset] > blue)
{
blue = pixelBuffer[calcOffset];
}

if (pixelBuffer[calcOffset + 1] > green)
{
green = pixelBuffer[calcOffset + 1];
}

if (pixelBuffer[calcOffset + 2] > red)
{
red = pixelBuffer[calcOffset + 2];
}
}
}
}
else if (morphType == MorphologyType.Erosion)
{
for (int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{
calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

if (pixelBuffer[calcOffset] < blue)
{
blue = pixelBuffer[calcOffset];
}

if (pixelBuffer[calcOffset + 1] < green)
{
green = pixelBuffer[calcOffset + 1];
}

if (pixelBuffer[calcOffset + 2] < red)
{
red = pixelBuffer[calcOffset + 2];
}
}
}
}

if (applyBlue == false)
{
blue = pixelBuffer[byteOffset];
}

if (applyGreen == false)
{
green = pixelBuffer[byteOffset + 1];
}

if (applyRed == false)
{
red = pixelBuffer[byteOffset + 2];
}

resultBuffer[byteOffset] = blue;
resultBuffer[byteOffset + 1] = green;
resultBuffer[byteOffset + 2] = red;
resultBuffer[byteOffset + 3] = 255;
}
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
sourceBitmap.Height);

BitmapData resultData =
resultBitmap.LockBits(new Rectangle(0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0,
resultBuffer.Length);

resultBitmap.UnlockBits(resultData);

return resultBitmap;
}```

### Implementing Open and Closed Morphology

The sample source code implements Open Morphology by first implementing Image Erosion on a source image; the resulting image is then filtered by implementing Image Dilation.

In a reverse fashion, Closed Morphology is achieved by first implementing Image Dilation on a source image, which is then further filtered by implementing Image Erosion.
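The two compositions can be demonstrated on a simple 1D binary signal, where opening removes isolated foreground specks and closing fills small holes. A Python sketch (note: unlike the article's C# code, this sketch clamps the window at the signal edges rather than leaving borders untouched):

```python
def erode(signal, size=3):
    # Neighbourhood minimum; the window is clamped at the signal edges.
    off = (size - 1) // 2
    return [min(signal[max(0, i - off):i + off + 1]) for i in range(len(signal))]

def dilate(signal, size=3):
    # Neighbourhood maximum.
    off = (size - 1) // 2
    return [max(signal[max(0, i - off):i + off + 1]) for i in range(len(signal))]

def open_filter(signal):   # erosion followed by dilation
    return dilate(erode(signal))

def close_filter(signal):  # dilation followed by erosion
    return erode(dilate(signal))

noisy = [0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0]
print(open_filter(noisy))  # the isolated speck at index 2 is removed
```

Opening strips the lone foreground value at index 2 while preserving the longer runs; closing the same signal instead fills the single-entry hole at index 8.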

The sample source code defines the OpenMorphologyFilter and CloseMorphologyFilter extension methods, both targeting the Bitmap class. The implementation is as follows:

```public static Bitmap OpenMorphologyFilter(
this Bitmap sourceBitmap,
int matrixSize,
bool applyBlue = true,
bool applyGreen = true,
bool applyRed = true)
{
Bitmap resultBitmap =
sourceBitmap.DilateAndErodeFilter(
matrixSize, MorphologyType.Erosion,
applyBlue, applyGreen, applyRed);

resultBitmap =
resultBitmap.DilateAndErodeFilter(
matrixSize,
MorphologyType.Dilation,
applyBlue, applyGreen, applyRed);

return resultBitmap;
}

public static Bitmap CloseMorphologyFilter(
this Bitmap sourceBitmap,
int matrixSize,
bool applyBlue = true,
bool applyGreen = true,
bool applyRed = true)
{
Bitmap resultBitmap =
sourceBitmap.DilateAndErodeFilter(
matrixSize, MorphologyType.Dilation,
applyBlue, applyGreen, applyRed);

resultBitmap =
resultBitmap.DilateAndErodeFilter(
matrixSize,
MorphologyType.Erosion,
applyBlue, applyGreen, applyRed);

return resultBitmap;
}```

### Sample Images

The original source image used to create all of the sample images in this article has been licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic license. The original image is attributed to Kenneth Dwain Harrelson and can be downloaded from .

The Original Image

Image Dilation 3×3 Blue

Image Dilation 3×3 Blue, Green

Image Dilation 3×3 Green

Image Dilation 3×3 Red

Image Dilation 3×3 Red, Blue

Image Dilation 3×3 Red, Green, Blue

Image Dilation 13×13 Blue

Image Erosion 3×3 Green, Blue

Image Erosion 3×3 Green

Image Erosion 3×3 Red

Image Erosion 3×3 Red, Blue

Image Erosion 3×3 Red, Green

Image Erosion 3×3 Red, Green, Blue

Image Erosion 9×9 Green

Image Erosion 9×9 Red

Image Open Morphology 11×11 Green

Image Open Morphology 11×11 Green Blue

Image Open Morphology 11×11 Red

Image Open Morphology 11×11 Red, Blue

### Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to image processing, links to which you can find here:

### Article purpose

This article’s intention is to provide a discussion on the tasks involved in implementing Image Colour Averaging. Pixel colour averages are calculated from neighbouring pixels.

### Using the Sample Application

The sample source code associated with this article includes a Windows Forms based sample application. The sample application is provided with the intention of illustrating the concepts explored in this article. In addition, the sample application serves as a means of testing and replicating results.

By clicking the Load Image button users are able to select an input/source image from the local system. On the right hand side of the screen various controls enable the user to control the implementation of colour averaging. The three checkboxes labelled Red, Green and Blue relate to whether an individual colour component is to be included in calculating colour averages.

The filter intensity can be specified by selecting a filter size from the dropdown; specifying higher values will result in output images expressing greater colour averaging intensity.

Additional image filter effects can be achieved through implementing colour component shifting/swapping. When colour components are shifted left the result will be:

• Blue is set to the original value of the Red component.
• Red is set to the original value of the Green component.
• Green is set to the original value of the Blue component.

When colour components are shifted right the result will be:

• Red is set to the original value of the Blue component.
• Blue is set to the original value of the Green component.
• Green is set to the original value of the Red component.
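The two shift directions amount to cyclic rotations of the (red, green, blue) triple. A small sketch following the bullet descriptions above (function names chosen here for illustration):

```python
def shift_left(red, green, blue):
    # Blue takes Red's value, Red takes Green's, Green takes Blue's.
    return (green, blue, red)   # new (red, green, blue)

def shift_right(red, green, blue):
    # Red takes Blue's value, Blue takes Green's, Green takes Red's.
    return (blue, red, green)   # new (red, green, blue)

print(shift_left(10, 20, 30))  # (20, 30, 10)
```

Since the two shifts are inverse rotations, shifting left and then right returns the original triple.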

Resulting images can be saved by the user to the local file system by clicking the Save Image button. The following image is a screenshot of the Image Colour Average sample application in action:

### Averaging Colours

In this article and the accompanying sample source code, colour averaging is implemented on a per-pixel basis. An average colour value is calculated based on a pixel’s neighbouring pixels’ colours. Determining neighbouring pixels in the sample source code has been implemented in much the same manner as convolution filtering. The major difference compared to convolution is the absence of a fixed filter matrix/kernel.
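The averaging step itself is straightforward: sum every value in the n×n neighbourhood and divide by the neighbourhood area. A single-channel Python sketch of the idea (the article's C# version does this per colour component on the raw pixel buffer):

```python
def average_filter(image, size):
    """Replace each interior pixel with the mean of its size x size
    neighbourhood. Border pixels are left unchanged, as in the article."""
    offset = (size - 1) // 2
    height, width = len(image), len(image[0])
    result = [row[:] for row in image]
    for y in range(offset, height - offset):
        for x in range(offset, width - offset):
            total = sum(image[y + fy][x + fx]
                        for fy in range(-offset, offset + 1)
                        for fx in range(-offset, offset + 1))
            # Divide by the full neighbourhood area: size squared pixels.
            result[y][x] = total // (size * size)
    return result

spot = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(average_filter(spot, 3)[1][1])  # 1 -- the bright spot is averaged down
```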

Additional visual effects can be achieved through various options/settings implemented whilst calculating colour averages. These options include being able to specify which colour component averages to implement. Furthermore, colour components can be swapped/shifted around.

The sample source code implements the AverageColoursFilter extension method, targeting the Bitmap class. The extent or degree to which colour averaging will be evident in resulting images can be controlled by specifying different values for the matrixSize parameter. The matrixSize parameter in essence determines the number of neighbouring pixels involved in calculating an average colour.

The individual pixel colour components Red, Green and Blue can either be included or excluded when calculating averages. The three boolean method parameters applyBlue, applyGreen and applyRed determine an individual colour component’s inclusion in averaging calculations. If a colour component is excluded from averaging, the resulting image will instead express the original source/input image’s colour component.

The intensity of a specific colour component average can be applied to another colour component by means of swapping/shifting colour components, as indicated through the shiftType method parameter.

The following code snippet provides the implementation of the AverageColoursFilter extension method:

```public static Bitmap AverageColoursFilter(
this Bitmap sourceBitmap,
int matrixSize,
bool applyBlue = true,
bool applyGreen = true,
bool applyRed = true,
ColorShiftType shiftType =
ColorShiftType.None)
{
BitmapData sourceData =
sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride *
sourceData.Height];

byte[] resultBuffer = new byte[sourceData.Stride *
sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0,
pixelBuffer.Length);

sourceBitmap.UnlockBits(sourceData);

int filterOffset = (matrixSize - 1) / 2;
int calcOffset = 0;

int byteOffset = 0;

int blue = 0;
int green = 0;
int red = 0;

for (int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for (int offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
byteOffset = offsetY *
sourceData.Stride +
offsetX * 4;

blue = 0;
green = 0;
red = 0;

for (int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{
calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

blue += pixelBuffer[calcOffset];
green += pixelBuffer[calcOffset + 1];
red += pixelBuffer[calcOffset + 2];
}
}

// Divide by the number of pixels sampled: matrixSize squared.
blue = blue / (matrixSize * matrixSize);
green = green / (matrixSize * matrixSize);
red = red / (matrixSize * matrixSize);

if (applyBlue == false)
{
blue = pixelBuffer[byteOffset];
}

if (applyGreen == false)
{
green = pixelBuffer[byteOffset + 1];
}

if (applyRed == false)
{
red = pixelBuffer[byteOffset + 2];
}

if (shiftType == ColorShiftType.None)
{
resultBuffer[byteOffset] = (byte)blue;
resultBuffer[byteOffset + 1] = (byte)green;
resultBuffer[byteOffset + 2] = (byte)red;
resultBuffer[byteOffset + 3] = 255;
}
else if (shiftType == ColorShiftType.ShiftLeft)
{
resultBuffer[byteOffset] = (byte)green;
resultBuffer[byteOffset + 1] = (byte)red;
resultBuffer[byteOffset + 2] = (byte)blue;
resultBuffer[byteOffset + 3] = 255;
}
else if (shiftType == ColorShiftType.ShiftRight)
{
resultBuffer[byteOffset] = (byte)red;
resultBuffer[byteOffset + 1] = (byte)blue;
resultBuffer[byteOffset + 2] = (byte)green;
resultBuffer[byteOffset + 3] = 255;
}
}
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
sourceBitmap.Height);

BitmapData resultData =
resultBitmap.LockBits(new Rectangle(0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0,
resultBuffer.Length);

resultBitmap.UnlockBits(resultData);

return resultBitmap;
}```

The definition of the ColorShiftType enum:

```public enum ColorShiftType
{
None,
ShiftLeft,
ShiftRight
}```

### Sample Images

The original image used in generating the sample images that form part of this article has been licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic license. The original image can be downloaded from .

Original Image

Colour Average Blue Size 11

Colour Average Blue Size 11 Shift Left

Colour Average Blue Size 11 Shift Right

Colour Average Green Size 11 Shift Right

Colour Average Green, Blue Size 11

Colour Average Green, Blue Size 11 Shift Left

Colour Average Green, Blue Size 11 Shift Right

Colour Average Red Size 11

Colour Average Red Size 11 Shift Left

Colour Average Red, Blue Size 11

Colour Average Red, Blue Size 11 Shift Left

Colour Average Red, Green Size 11

Colour Average Red, Green Size 11 Shift Left

Colour Average Red, Green Size 11 Shift Right

Colour Average Red, Green, Blue Size 11

Colour Average Red, Green, Blue Size 11 Shift Left

Colour Average Red, Green, Blue Size 11 Shift Right

### Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to image processing, links to which you can find here:

### Article purpose

The purpose of this article is to explore and illustrate the concept of Image Unsharp Masking. This article implements unsharp masking in the form of a 3×3 Gaussian filter, a 5×5 Gaussian filter, a 3×3 Mean filter and a 5×5 Mean filter.

### Using the Sample Application

The sample source code associated with this article includes a Windows Forms based sample application implementing the concepts explored throughout this article.

When using the Image Unsharp Mask sample application users can select a source/input image from the local system by clicking the Load Image button. The dropdown at the bottom of the screen allows the user to select an unsharp masking variation. On the right hand side of the screen users can specify the level/intensity of the resulting image sharpening.

Clicking the Save Image button allows a user to save resulting images to the local file system. The image below is a screenshot of the Image Unsharp Mask sample application in action:

### What is Image Unsharp Masking?

A good definition of unsharp masking can be found on Wikipedia:

Unsharp masking (USM) is an image manipulation technique, often available in digital image processing software.

The "unsharp" of the name derives from the fact that the technique uses a blurred, or "unsharp", positive image to create a "mask" of the original image. The unsharped mask is then combined with the negative image, creating an image that is less blurry than the original. The resulting image, although clearer, probably loses accuracy with respect to the image’s subject. In the context of , an unsharp mask is generally a or filter that amplifies high-frequency components.

In this article we implement unsharp masking by first creating a blurred copy of a source/input image and then subtracting the blurred image from the original image, the result of which is known as the mask. Increased image sharpness is achieved by adding a factor of the mask to the original image.
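Per channel value, the operation described above reduces to sharpened = original + factor × (original − blurred), clamped to the valid byte range. A minimal Python sketch of that arithmetic (the article's C# code applies the same formula to every pixel in the buffer):

```python
def unsharp(original, blurred, factor=1.0):
    """Unsharp masking for one channel value: add a factor of the mask
    (original minus blurred) back onto the original, clamped to [0, 255]."""
    value = original + (original - blurred) * factor
    return max(0, min(255, int(value)))

print(unsharp(100, 90, 1.0))  # 110 -- the local contrast is exaggerated
```

Where the blurred copy is darker than the original (an edge), the result is pushed brighter, and vice versa; larger factors exaggerate the effect until the clamp takes over.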

### Applying a Convolution Matrix filter

The sample source code provides the definition for the ConvolutionFilter method, targeting the Bitmap class. This method is invoked when implementing image blurring. The definition of the ConvolutionFilter method is as follows:

```private static Bitmap ConvolutionFilter(Bitmap sourceBitmap,
double[,] filterMatrix,
double factor = 1,
int bias = 0,
bool grayscale = false)
{
BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);

if (grayscale == true)
{
float rgb = 0;

for (int k = 0; k < pixelBuffer.Length; k += 4)
{
rgb = pixelBuffer[k] * 0.11f;
rgb += pixelBuffer[k + 1] * 0.59f;
rgb += pixelBuffer[k + 2] * 0.3f;

pixelBuffer[k] = (byte)rgb;
pixelBuffer[k + 1] = pixelBuffer[k];
pixelBuffer[k + 2] = pixelBuffer[k];
pixelBuffer[k + 3] = 255;
}
}

double blue = 0.0;
double green = 0.0;
double red = 0.0;

int filterWidth = filterMatrix.GetLength(1);
int filterHeight = filterMatrix.GetLength(0);

int filterOffset = (filterWidth-1) / 2;
int calcOffset = 0;

int byteOffset = 0;

for (int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for (int offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
blue = 0;
green = 0;
red = 0;

byteOffset = offsetY *
sourceData.Stride +
offsetX * 4;

for (int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{

calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

blue += (double)(pixelBuffer[calcOffset]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];

green += (double)(pixelBuffer[calcOffset + 1]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];

red += (double)(pixelBuffer[calcOffset + 2]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];
}
}

blue = factor * blue + bias;
green = factor * green + bias;
red = factor * red + bias;

if (blue > 255)
{ blue = 255; }
else if (blue < 0)
{ blue = 0; }

if (green > 255)
{ green = 255; }
else if (green < 0)
{ green = 0; }

if (red > 255)
{ red = 255; }
else if (red < 0)
{ red = 0; }

resultBuffer[byteOffset] = (byte)(blue);
resultBuffer[byteOffset + 1] = (byte)(green);
resultBuffer[byteOffset + 2] = (byte)(red);
resultBuffer[byteOffset + 3] = 255;
}
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
sourceBitmap.Height);

BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;
}```

### Subtracting and Adding Images

An important step required when implementing unsharp masking comes in the form of creating a mask by subtracting a blurred image copy from the original image, and then adding a factor of the mask to the original image. In order to achieve increased performance the sample source code combines the process of creating the mask and adding the mask to the original image.

The SubtractAddFactorImage method iterates every pixel that forms part of an image. In a single step the blurred pixel is subtracted from the original pixel, multiplied by a user specified factor and then added to the original pixel. The definition of the SubtractAddFactorImage method is as follows:

```private static Bitmap SubtractAddFactorImage(
this Bitmap subtractFrom,
Bitmap subtractValue,
float factor = 1.0f)
{
BitmapData sourceData =
subtractFrom.LockBits(new Rectangle(0, 0,
subtractFrom.Width, subtractFrom.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] sourceBuffer = new byte[sourceData.Stride *
sourceData.Height];

Marshal.Copy(sourceData.Scan0, sourceBuffer, 0,
sourceBuffer.Length);

byte[] resultBuffer = new byte[sourceData.Stride *
sourceData.Height];

BitmapData subtractData =
subtractValue.LockBits(new Rectangle(0, 0,
subtractValue.Width, subtractValue.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] subtractBuffer = new byte[subtractData.Stride *
subtractData.Height];

Marshal.Copy(subtractData.Scan0, subtractBuffer, 0,
subtractBuffer.Length);

subtractFrom.UnlockBits(sourceData);
subtractValue.UnlockBits(subtractData);

double blue = 0;
double green = 0;
double red = 0;

for (int k = 0; k < resultBuffer.Length &&
k < subtractBuffer.Length; k += 4)
{
blue = sourceBuffer[k] +
(sourceBuffer[k] -
subtractBuffer[k]) * factor;

green = sourceBuffer[k + 1] +
(sourceBuffer[k + 1] -
subtractBuffer[k + 1]) * factor;

red = sourceBuffer[k + 2] +
(sourceBuffer[k + 2] -
subtractBuffer[k + 2]) * factor;

blue = (blue < 0 ? 0 : (blue > 255 ? 255 : blue));
green = (green < 0 ? 0 : (green > 255 ? 255 : green));
red = (red < 0 ? 0 : (red > 255 ? 255 : red));

resultBuffer[k] = (byte)blue;
resultBuffer[k + 1] = (byte)green;
resultBuffer[k + 2] = (byte)red;
resultBuffer[k + 3] = 255;
}

Bitmap resultBitmap = new Bitmap(subtractFrom.Width,
subtractFrom.Height);

BitmapData resultData =
resultBitmap.LockBits(new Rectangle(0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0,
resultBuffer.Length);

resultBitmap.UnlockBits(resultData);

return resultBitmap;
}```

### Matrix Definition

The image blurring filters implemented by the sample source code rely on static matrix values defined in the Matrix class. The variants of image blurring implemented are: 3×3 Gaussian, 5×5 Gaussian, 3×3 Mean and 5×5 Mean. The definition of the Matrix class is detailed by the following code snippet:

```public static class Matrix
{
public static double[,] Gaussian3x3
{
get
{
return new double[,]
{ { 1, 2, 1, },
{ 2, 4, 2, },
{ 1, 2, 1, }, };
}
}

public static double[,] Gaussian5x5Type1
{
get
{
return new double[,]
{ { 2, 04, 05, 04, 2 },
{ 4, 09, 12, 09, 4 },
{ 5, 12, 15, 12, 5 },
{ 4, 09, 12, 09, 4 },
{ 2, 04, 05, 04, 2 }, };
}
}

public static double[,] Mean3x3
{
get
{
return new double[,]
{ { 1, 1, 1, },
{ 1, 1, 1, },
{ 1, 1, 1, }, };
}
}

public static double[,] Mean5x5
{
get
{
return new double[,]
{ { 1, 1, 1, 1, 1 },
{ 1, 1, 1, 1, 1 },
{ 1, 1, 1, 1, 1 },
{ 1, 1, 1, 1, 1 },
{ 1, 1, 1, 1, 1 }, };
}
}
}```
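Notice that none of these kernels are pre-normalized; instead the convolution result is scaled by the reciprocal of the kernel's total weight (the 1/16, 1/159, 1/9 and 1/25 factors passed to ConvolutionFilter below), which keeps overall image brightness unchanged. A quick Python check of those totals:

```python
gaussian3 = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
gaussian5 = [[2, 4, 5, 4, 2], [4, 9, 12, 9, 4], [5, 12, 15, 12, 5],
             [4, 9, 12, 9, 4], [2, 4, 5, 4, 2]]
mean3 = [[1] * 3 for _ in range(3)]
mean5 = [[1] * 5 for _ in range(5)]

# The normalization factor for each kernel is 1 / (sum of its weights).
weight = lambda kernel: sum(sum(row) for row in kernel)
print(weight(gaussian3), weight(gaussian5))  # 16 159
```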

### Implementing Image Unsharpening

This article explores four variants of unsharp masking, relating to the four types of image blurring discussed in the previous section. The sample source code defines the following extension methods: UnsharpGaussian3x3, UnsharpGaussian5x5, UnsharpMean3x3 and UnsharpMean5x5. All four methods are defined as extension methods targeting the Bitmap class. When looking at the sample images in the following section you will notice the correlation between increased blur intensity and enhanced image sharpening. The definitions are as follows:

```public static Bitmap UnsharpGaussian3x3(
this Bitmap sourceBitmap,
float factor = 1.0f)
{
Bitmap blurBitmap = ExtBitmap.ConvolutionFilter(
sourceBitmap,
Matrix.Gaussian3x3,
1.0 / 16.0);

Bitmap resultBitmap =
sourceBitmap.SubtractAddFactorImage(
blurBitmap, factor);

return resultBitmap;
}

public static Bitmap UnsharpGaussian5x5(
this Bitmap sourceBitmap,
float factor = 1.0f)
{
Bitmap blurBitmap = ExtBitmap.ConvolutionFilter(
sourceBitmap,
Matrix.Gaussian5x5Type1,
1.0 / 159.0);

Bitmap resultBitmap =
sourceBitmap.SubtractAddFactorImage(
blurBitmap, factor);

return resultBitmap;
}

public static Bitmap UnsharpMean3x3(
this Bitmap sourceBitmap,
float factor = 1.0f)
{
Bitmap blurBitmap = ExtBitmap.ConvolutionFilter(
sourceBitmap,
Matrix.Mean3x3,
1.0 / 9.0);

Bitmap resultBitmap =
sourceBitmap.SubtractAddFactorImage(
blurBitmap, factor);

return resultBitmap;
}

public static Bitmap UnsharpMean5x5(
this Bitmap sourceBitmap,
float factor = 1.0f)
{
Bitmap blurBitmap = ExtBitmap.ConvolutionFilter(
sourceBitmap,
Matrix.Mean5x5,
1.0 / 25.0);

Bitmap resultBitmap =
sourceBitmap.SubtractAddFactorImage(
blurBitmap, factor);

return resultBitmap;
}```

### Sample Images

The Original Image

Unsharp Gaussian 3×3

Unsharp Gaussian 5×5

Unsharp Mean 3×3

Unsharp Mean 5×5

### Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to image processing, links to which you can find here:

### Article purpose

The objective of this article is to provide a discussion on implementing a median filter on an image. This article illustrates varying levels of filter intensity: 3×3, 5×5, 7×7, 9×9, 11×11 and 13×13.

### Using the Sample Application

The concepts explored in this article can be easily replicated by making use of the Sample Application, which forms part of the associated sample source code accompanying this article.

When using the Image Median Filter sample application you can specify an input/source image by clicking the Load Image button. The dropdown combobox towards the bottom middle part of the screen relates to the various levels of filter intensity.

If desired a user can save the resulting filtered image to the local file system by clicking the Save Image button.

The following image is a screenshot of the Image Median Filter sample application in action:

### What is a Median Filter

From Wikipedia we gain the following definition:

In signal processing, it is often desirable to be able to perform some kind of noise reduction on an image or signal. The median filter is a nonlinear digital filtering technique, often used to remove noise. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise (but see discussion below).

The main idea of the median filter is to run through the signal entry by entry, replacing each entry with the median of neighboring entries. The pattern of neighbors is called the "window", which slides, entry by entry, over the entire signal. For 1D signals, the most obvious window is just the first few preceding and following entries, whereas for 2D (or higher-dimensional) signals such as images, more complex window patterns are possible (such as "box" or "cross" patterns). Note that if the window has an odd number of entries, then the median is simple to define: it is just the middle value after all the entries in the window are sorted numerically. For an even number of entries, there is more than one possible median; see median for more details.

In simple terms, a median filter can be applied to images in order to achieve smoothing or noise reduction. The median filter, in contrast to most smoothing methods, to a degree exhibits edge preservation properties.
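Both properties, impulse-noise removal and edge preservation, are easy to see on a 1D signal. A minimal Python sketch of the sort-and-take-the-middle approach the C# code below implements per colour value:

```python
def median_filter(signal, size=3):
    """Slide a window over the signal, replacing each interior entry with
    the middle value of its sorted neighbourhood. Borders are left as-is."""
    offset = (size - 1) // 2
    result = list(signal)
    for i in range(offset, len(signal) - offset):
        window = sorted(signal[i - offset:i + offset + 1])
        result[i] = window[len(window) // 2]  # middle of the sorted window
    return result

print(median_filter([10, 10, 200, 10, 10]))  # the impulse at index 2 is removed
```

A lone outlier never reaches the middle of its sorted window, so it is discarded outright, while a clean step edge (e.g. 0s followed by 255s) passes through unchanged; a mean filter would instead smear both.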

### Applying a Median Filter

The sample source code defines the MedianFilter extension method, targeting the Bitmap class. The matrixSize parameter determines the intensity of the median filter being applied.

The MedianFilter method iterates each pixel of the source image. When iterating pixels we determine the neighbouring pixels of the pixel currently being iterated. After having built up a list of neighbouring pixels, the List is then sorted and from there we determine the middle pixel value. The final step involves assigning the determined middle pixel value to the current pixel in the resulting image, represented as an array of pixel colour component bytes.

```public static Bitmap MedianFilter(this Bitmap sourceBitmap,
                                  int matrixSize,
                                  int bias = 0,
                                  bool grayscale = false)
{
    BitmapData sourceData =
        sourceBitmap.LockBits(new Rectangle(0, 0,
                              sourceBitmap.Width, sourceBitmap.Height),
                              ImageLockMode.ReadOnly,
                              PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    if (grayscale == true)
    {
        float rgb = 0;

        for (int k = 0; k < pixelBuffer.Length; k += 4)
        {
            rgb = pixelBuffer[k] * 0.11f;
            rgb += pixelBuffer[k + 1] * 0.59f;
            rgb += pixelBuffer[k + 2] * 0.3f;

            pixelBuffer[k] = (byte)rgb;
            pixelBuffer[k + 1] = pixelBuffer[k];
            pixelBuffer[k + 2] = pixelBuffer[k];
            pixelBuffer[k + 3] = 255;
        }
    }

    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    List<int> neighbourPixels = new List<int>();
    byte[] middlePixel;

    for (int offsetY = filterOffset; offsetY <
         sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
             sourceBitmap.Width - filterOffset; offsetX++)
        {
            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            neighbourPixels.Clear();

            for (int filterY = -filterOffset;
                 filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset;
                     filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset +
                                 (filterX * 4) +
                                 (filterY * sourceData.Stride);

                    // Collect each neighbour's packed ARGB value.
                    neighbourPixels.Add(BitConverter.ToInt32(
                                        pixelBuffer, calcOffset));
                }
            }

            neighbourPixels.Sort();

            // The median is the middle value of the sorted neighbourhood.
            middlePixel = BitConverter.GetBytes(
                          neighbourPixels[neighbourPixels.Count / 2]);

            resultBuffer[byteOffset] = middlePixel[0];
            resultBuffer[byteOffset + 1] = middlePixel[1];
            resultBuffer[byteOffset + 2] = middlePixel[2];
            resultBuffer[byteOffset + 3] = middlePixel[3];
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
        resultBitmap.LockBits(new Rectangle(0, 0,
                              resultBitmap.Width, resultBitmap.Height),
                              ImageLockMode.WriteOnly,
                              PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}```

### Sample Images

The sample images illustrated in this article were rendered from the same source image, which is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic license. The original image is attributed to Luc Viatour (www.lucnix.be) and can be downloaded from Wikipedia.

The Original Source Image

Median 3×3 Filter

Median 5×5 Filter

Median 7×7 Filter

Median 9×9 Filter

Median 11×11 Filter

Median 13×13 Filter

### Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; links to them can be found here:

### Article purpose

In this article we explore the concept of Difference of Gaussians edge detection, which implements Gaussian blurring as a means of achieving image smoothing. All of the concepts explored are implemented by accessing and manipulating the raw pixel data exposed by a Bitmap; no GDI+ or conventional drawing code is required.

### Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

### Using the Sample Application

The concepts explored in this article can be easily replicated by making use of the Sample Application, which forms part of the associated sample source code accompanying this article.

When using the Difference Of Gaussians sample application you can specify an input/source image by clicking the Load Image button. The dropdown towards the bottom middle part of the screen lists the various methods discussed.

If desired a user can save the resulting image to the local file system by clicking the Save Image button.

The following image is a screenshot of the Difference Of Gaussians sample application in action:

### What is Difference of Gaussians?

Difference of Gaussians, commonly abbreviated as DoG, is a method of implementing edge detection. Central to the method is the application of Gaussian blurring.

From Wikipedia we gain the following definition:

In imaging science, Difference of Gaussians is a feature enhancement algorithm that involves the subtraction of one blurred version of an original image from another, less blurred version of the original. In the simple case of grayscale images, the blurred images are obtained by convolving the original with Gaussian kernels having differing standard deviations. Blurring an image using a Gaussian kernel suppresses only high-frequency spatial information. Subtracting one image from the other preserves spatial information that lies between the range of frequencies that are preserved in the two blurred images. Thus, the difference of Gaussians is a band-pass filter that discards all but a handful of spatial frequencies that are present in the original grayscale image.

In simple terms, Difference of Gaussians can be implemented by applying two Gaussian blurs of different intensity levels to the same source image. The resulting image is then created by subtracting the two blurred images.
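The blur-then-subtract idea can be illustrated on a simple 1D signal. This is a hypothetical sketch, separate from the sample application's code; the kernels are the 1D analogues of the Gaussian matrices used later:

```csharp
using System;

public class DoGDemo
{
    // Convolve a 1D signal with a kernel scaled by 'factor'; borders are skipped.
    public static double[] Convolve(double[] signal, double[] kernel, double factor)
    {
        int offset = (kernel.Length - 1) / 2;
        double[] result = (double[])signal.Clone();

        for (int i = offset; i < signal.Length - offset; i++)
        {
            double sum = 0;
            for (int k = -offset; k <= offset; k++)
                sum += signal[i + k] * kernel[k + offset];
            result[i] = factor * sum;
        }
        return result;
    }

    static void Main()
    {
        double[] step = { 0, 0, 0, 0, 10, 10, 10, 10 };

        // Two Gaussian approximations of differing spread.
        double[] narrow = Convolve(step, new double[] { 1, 2, 1 }, 1.0 / 4.0);
        double[] wide = Convolve(step, new double[] { 1, 4, 6, 4, 1 }, 1.0 / 16.0);

        // The difference is non-zero only around the step edge,
        // which is exactly the band-pass behaviour described above.
        for (int i = 0; i < step.Length; i++)
            Console.Write($"{narrow[i] - wide[i]:0.###} ");
    }
}
```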

### Applying a Convolution Matrix filter

In the sample source code accompanying this article, Gaussian blurring is applied by invoking the ConvolutionFilter method. This method accepts a two-dimensional array of type double representing the convolution matrix/kernel. The method is also capable of first converting the source image to grayscale, which can be specified as a method parameter. Resulting images sometimes tend to be very dark, which can be corrected by specifying a suitable bias value.

The following code snippet provides the implementation of the ConvolutionFilter method:

``` private static Bitmap ConvolutionFilter(Bitmap sourceBitmap,
double[,] filterMatrix,
double factor = 1,
int bias = 0,
bool grayscale = false )
{
BitmapData sourceData = sourceBitmap.LockBits(new Rectangle (0, 0,
sourceBitmap.Width, sourceBitmap.Height),
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);

if (grayscale == true)
{
float rgb = 0;

for (int k = 0; k < pixelBuffer.Length; k += 4)
{
rgb = pixelBuffer[k] * 0.11f;
rgb += pixelBuffer[k + 1] * 0.59f;
rgb += pixelBuffer[k + 2] * 0.3f;

pixelBuffer[k] = (byte)rgb;
pixelBuffer[k + 1] = pixelBuffer[k];
pixelBuffer[k + 2] = pixelBuffer[k];
pixelBuffer[k + 3] = 255;
}
}

double blue = 0.0;
double green = 0.0;
double red = 0.0;

int filterWidth = filterMatrix.GetLength(1);
int filterHeight = filterMatrix.GetLength(0);

int filterOffset = (filterWidth-1) / 2;
int calcOffset = 0;

int byteOffset = 0;

for (int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for (int  offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
blue = 0;
green = 0;
red = 0;

byteOffset = offsetY *
sourceData.Stride +
offsetX * 4;

for (int  filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int  filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{

calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

blue += (double)(pixelBuffer[calcOffset]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];

green += (double)(pixelBuffer[calcOffset + 1]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];

red += (double)(pixelBuffer[calcOffset + 2]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];
}
}

blue = factor * blue + bias;
green = factor * green + bias;
red = factor * red + bias;

if (blue > 255)
{ blue = 255; }
else if (blue < 0)
{ blue = 0; }

if (green > 255)
{ green = 255; }
else if (green < 0)
{ green = 0; }

if (red > 255)
{ red = 255; }
else if (red < 0)
{ red = 0; }

resultBuffer[byteOffset] = (byte)blue;
resultBuffer[byteOffset + 1] = (byte)green;
resultBuffer[byteOffset + 2] = (byte)red;
resultBuffer[byteOffset + 3] = 255;
}
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
sourceBitmap.Height);

BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;
} ```

### The Gaussian Matrix

The sample source code defines three Gaussian matrix/kernel values: a 3×3 matrix and two slightly different 5×5 matrices. The Gaussian3x3 requires a factor of 1 / 16, the Gaussian5x5Type1 a factor of 1 / 159, and the factor required by the Gaussian5x5Type2 equates to 1 / 256.

```public static class Matrix
{
public static double[,] Gaussian3x3
{
get
{
return new double[,]
{ { 1, 2, 1, },
{ 2, 4, 2, },
{ 1, 2, 1, }, };
}
}

public static double[,] Gaussian5x5Type1
{
get
{
return new double[,]
{ { 2, 04, 05, 04, 2  },
{ 4, 09, 12, 09, 4  },
{ 5, 12, 15, 12, 5  },
{ 4, 09, 12, 09, 4  },
{ 2, 04, 05, 04, 2  }, };
}
}

public static double[,] Gaussian5x5Type2
{
get
{
return new double[,]
{ {  1,   4,  6,  4,  1  },
{  4,  16, 24, 16,  4  },
{  6,  24, 36, 24,  6  },
{  4,  16, 24, 16,  4  },
{  1,   4,  6,  4,  1  }, };
}
}
}```
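Each quoted factor is simply the reciprocal of the sum of the kernel's elements (16, 159 and 256 respectively), which keeps overall image brightness unchanged. A quick sketch verifying this (the helper class is hypothetical):

```csharp
using System;

public class KernelFactorDemo
{
    // Sum every element of a 2D kernel.
    public static double KernelSum(double[,] kernel)
    {
        double sum = 0;
        foreach (double value in kernel) sum += value;
        return sum;
    }

    static void Main()
    {
        double[,] gaussian3x3 = { { 1, 2, 1 }, { 2, 4, 2 }, { 1, 2, 1 } };

        // Sum is 16, hence the 1 / 16 factor quoted above.
        Console.WriteLine(KernelSum(gaussian3x3)); // prints 16
    }
}
```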

### Subtracting Images

When implementing the Difference of Gaussians method, after having applied two varying levels of Gaussian blur the resulting images need to be subtracted. The sample source code associated with this article implements the SubtractImage extension method when subtracting images.

The following code snippet details the implementation of the SubtractImage method:

```private static void SubtractImage(this Bitmap subtractFrom,
Bitmap subtractValue,
bool invert = false,
int bias = 0)
{
BitmapData sourceData =
subtractFrom.LockBits(new Rectangle(0, 0,
subtractFrom.Width, subtractFrom.Height),
ImageLockMode.ReadWrite,
PixelFormat.Format32bppArgb);

byte[] resultBuffer = new byte[sourceData.Stride *
sourceData.Height];

Marshal.Copy(sourceData.Scan0, resultBuffer, 0,
resultBuffer.Length);

BitmapData subtractData =
subtractValue.LockBits(new Rectangle(0, 0,
subtractValue.Width, subtractValue.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] subtractBuffer = new byte[subtractData.Stride *
subtractData.Height];

Marshal.Copy(subtractData.Scan0, subtractBuffer, 0,
subtractBuffer.Length);

subtractValue.UnlockBits(subtractData);

int blue = 0;
int green = 0;
int red = 0;

for (int k = 0; k < resultBuffer.Length &&
k < subtractBuffer.Length; k += 4)
{
if (invert == true)
{
blue = 255 - resultBuffer[k] -
subtractBuffer[k] + bias;

green = 255 - resultBuffer[k + 1] -
subtractBuffer[k + 1] + bias;

red = 255 - resultBuffer[k + 2] -
subtractBuffer[k + 2] + bias;
}
else
{
blue = resultBuffer[k] -
subtractBuffer[k] + bias;

green = resultBuffer[k + 1] -
subtractBuffer[k + 1] + bias;

red = resultBuffer[k + 2] -
subtractBuffer[k + 2] + bias;
}

blue = (blue < 0 ? 0 : (blue > 255 ? 255 : blue));
green = (green < 0 ? 0 : (green > 255 ? 255 : green));
red = (red < 0 ? 0 : (red > 255 ? 255 : red));

resultBuffer[k] = (byte)blue;
resultBuffer[k + 1] = (byte)green;
resultBuffer[k + 2] = (byte)red;
resultBuffer[k + 3] = 255;
}

Marshal.Copy(resultBuffer, 0, sourceData.Scan0,
resultBuffer.Length);

subtractFrom.UnlockBits(sourceData);
}```

### Difference of Gaussians Extension methods

The sample source code implements Difference of Gaussians by means of two extension methods: DifferenceOfGaussians3x5Type1 and DifferenceOfGaussians3x5Type2. Both methods are virtually identical, the only difference being the 5×5 Gaussian matrix being implemented.

Both methods create two new images, each having a Gaussian blur of a different level of intensity applied. The two new images are subtracted in order to create a single resulting image.

The following source code snippet provides the implementation of the DifferenceOfGaussians3x5Type1 and DifferenceOfGaussians3x5Type2 extension methods:

```public static Bitmap DifferenceOfGaussians3x5Type1(
this Bitmap sourceBitmap,
bool grayscale = false,
bool invert = false,
int bias = 0)
{
Bitmap bitmap3x3 = ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian3x3, 1.0 / 16.0,
0, grayscale);

Bitmap bitmap5x5 = ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian5x5Type1, 1.0 / 159.0,
0, grayscale);

bitmap3x3.SubtractImage(bitmap5x5, invert, bias);

return bitmap3x3;
}

public static Bitmap DifferenceOfGaussians3x5Type2(
this Bitmap sourceBitmap,
bool grayscale = false,
bool invert = false,
int bias = 0)
{
Bitmap bitmap3x3 = ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian3x3, 1.0 / 16.0,
0, true );

Bitmap bitmap5x5 = ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian5x5Type2, 1.0 / 256.0,
0, true );

bitmap3x3.SubtractImage(bitmap5x5, invert, bias);

return bitmap3x3;
}```

### Sample Images

The Original Image

Difference Of Gaussians 3×5 Type1

Difference Of Gaussians 3×5 Type2

Difference Of Gaussians 3×5 Type1 Bias 128

Difference Of Gaussians 3×5 Type 2 Bias 96

### Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; links to them can be found here:

### Article Purpose

The objective of this article is to explore various edge detection algorithms. The types of edge detection discussed are: Laplacian, Laplacian of Gaussian, Sobel, Prewitt and Kirsch. All instances are implemented by means of image convolution.

### Using the Sample Application

The concepts explored in this article can be easily replicated by making use of the Sample Application, which forms part of the associated sample source code accompanying this article.

When using the Image Edge Detection sample application you can specify an input/source image by clicking the Load Image button. The dropdown towards the bottom middle part of the screen lists the various methods discussed.

If desired a user can save the resulting image to the local file system by clicking the Save Image button.

The following image is a screenshot of the Image Edge Detection sample application in action:

### Edge Detection

A good description of edge detection forms part of the relevant article on Wikipedia:

Edge detection is the name for a set of mathematical methods which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in 1D signals is known as step detection, and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction.

### Image Convolution

A good introductory article on image convolution can be found at: http://homepages.inf.ed.ac.uk/rbf/HIPR2/convolve.htm. From the article we learn the following:

Convolution is a simple mathematical operation which is fundamental to many common image processing operators. Convolution provides a way of "multiplying together" two arrays of numbers, generally of different sizes, but of the same dimensionality, to produce a third array of numbers of the same dimensionality. This can be used in image processing to implement operators whose output pixel values are simple linear combinations of certain input pixel values.

In an image processing context, one of the input arrays is normally just a graylevel image. The second array is usually much smaller, and is also two-dimensional (although it may be just a single pixel thick), and is known as the kernel.
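The quoted description can be reduced to a minimal 2D convolution over a small array. This is a hypothetical illustration, independent of the sample code that follows:

```csharp
using System;

public class ConvolveDemo
{
    // Convolve a 2D image with a 3x3 kernel; border pixels are skipped.
    public static double[,] Convolve(double[,] image, double[,] kernel)
    {
        int height = image.GetLength(0), width = image.GetLength(1);
        double[,] result = new double[height, width];

        for (int y = 1; y < height - 1; y++)
            for (int x = 1; x < width - 1; x++)
            {
                // Output pixel = linear combination of the 3x3 neighbourhood.
                double sum = 0;
                for (int ky = -1; ky <= 1; ky++)
                    for (int kx = -1; kx <= 1; kx++)
                        sum += image[y + ky, x + kx] * kernel[ky + 1, kx + 1];
                result[y, x] = sum;
            }
        return result;
    }

    static void Main()
    {
        // A uniform region produces zero response under a Laplacian kernel.
        double[,] flat = { { 5, 5, 5 }, { 5, 5, 5 }, { 5, 5, 5 } };
        double[,] laplacian = { { -1, -1, -1 }, { -1, 8, -1 }, { -1, -1, -1 } };
        Console.WriteLine(Convolve(flat, laplacian)[1, 1]); // prints 0
    }
}
```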

### Single Matrix Convolution

The sample source code implements the ConvolutionFilter method, an extension method targeting the Bitmap class. The ConvolutionFilter method is intended to apply a user-defined convolution matrix and optionally convert an image to grayscale. The implementation is as follows:

```private static Bitmap ConvolutionFilter(Bitmap sourceBitmap,
double[,] filterMatrix,
double factor = 1,
int bias = 0,
bool grayscale = false)
{
BitmapData sourceData =
sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride *
sourceData.Height];

byte[] resultBuffer = new byte[sourceData.Stride *
sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0,
pixelBuffer.Length);

sourceBitmap.UnlockBits(sourceData);

if(grayscale == true)
{
float rgb = 0;

for(int k = 0; k < pixelBuffer.Length; k += 4)
{
rgb = pixelBuffer[k] * 0.11f;
rgb += pixelBuffer[k + 1] * 0.59f;
rgb += pixelBuffer[k + 2] * 0.3f;

pixelBuffer[k] = (byte)rgb;
pixelBuffer[k + 1] = pixelBuffer[k];
pixelBuffer[k + 2] = pixelBuffer[k];
pixelBuffer[k + 3] = 255;
}
}

double blue = 0.0;
double green = 0.0;
double red = 0.0;

int filterWidth = filterMatrix.GetLength(1);
int filterHeight = filterMatrix.GetLength(0);

int filterOffset = (filterWidth-1) / 2;
int calcOffset = 0;

int byteOffset = 0;

for(int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for(int offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
blue = 0;
green = 0;
red = 0;

byteOffset = offsetY *
sourceData.Stride +
offsetX * 4;

for(int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for(int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{

calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

blue += (double)(pixelBuffer[calcOffset]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];

green += (double)(pixelBuffer[calcOffset+1]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];

red += (double)(pixelBuffer[calcOffset+2]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];
}
}

blue = factor * blue + bias;
green = factor * green + bias;
red = factor * red + bias;

if(blue > 255)
{ blue = 255;}
else if(blue < 0)
{ blue = 0;}

if(green > 255)
{ green = 255;}
else if(green < 0)
{ green = 0;}

if(red > 255)
{ red = 255;}
else if(red < 0)
{ red = 0;}

resultBuffer[byteOffset] = (byte)(blue);
resultBuffer[byteOffset + 1] = (byte)(green);
resultBuffer[byteOffset + 2] = (byte)(red);
resultBuffer[byteOffset + 3] = 255;
}
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
sourceBitmap.Height);

BitmapData resultData =
resultBitmap.LockBits(new Rectangle(0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0,
resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;
}```

### Horizontal and Vertical Matrix Convolution

The ConvolutionFilter method has been overloaded to accept two matrices, representing a vertical and a horizontal convolution kernel. The implementation is as follows:

```public static Bitmap ConvolutionFilter(this Bitmap sourceBitmap,
double[,] xFilterMatrix,
double[,] yFilterMatrix,
double factor = 1,
int bias = 0,
bool grayscale = false)
{
BitmapData sourceData =
sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride *
sourceData.Height];

byte[] resultBuffer = new byte[sourceData.Stride *
sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0,
pixelBuffer.Length);

sourceBitmap.UnlockBits(sourceData);

if (grayscale == true)
{
float rgb = 0;

for (int k = 0; k < pixelBuffer.Length; k += 4)
{
rgb = pixelBuffer[k] * 0.11f;
rgb += pixelBuffer[k + 1] * 0.59f;
rgb += pixelBuffer[k + 2] * 0.3f;

pixelBuffer[k] = (byte)rgb;
pixelBuffer[k + 1] = pixelBuffer[k];
pixelBuffer[k + 2] = pixelBuffer[k];
pixelBuffer[k + 3] = 255;
}
}

double blueX = 0.0;
double greenX = 0.0;
double redX = 0.0;

double blueY = 0.0;
double greenY = 0.0;
double redY = 0.0;

double blueTotal = 0.0;
double greenTotal = 0.0;
double redTotal = 0.0;

int filterOffset = 1;
int calcOffset = 0;

int byteOffset = 0;

for (int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for (int offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
blueX = greenX = redX = 0;
blueY = greenY = redY = 0;

blueTotal = greenTotal = redTotal = 0.0;

byteOffset = offsetY *
sourceData.Stride +
offsetX * 4;

for (int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{
calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

blueX += (double)
(pixelBuffer[calcOffset]) *
xFilterMatrix[filterY +
filterOffset,
filterX +
filterOffset];

greenX += (double)
(pixelBuffer[calcOffset + 1]) *
xFilterMatrix[filterY +
filterOffset,
filterX +
filterOffset];

redX += (double)
(pixelBuffer[calcOffset + 2]) *
xFilterMatrix[filterY +
filterOffset,
filterX +
filterOffset];

blueY += (double)
(pixelBuffer[calcOffset]) *
yFilterMatrix[filterY +
filterOffset,
filterX +
filterOffset];

greenY += (double)
(pixelBuffer[calcOffset + 1]) *
yFilterMatrix[filterY +
filterOffset,
filterX +
filterOffset];

redY += (double)
(pixelBuffer[calcOffset + 2]) *
yFilterMatrix[filterY +
filterOffset,
filterX +
filterOffset];
}
}

blueTotal = Math.Sqrt((blueX * blueX) +
(blueY * blueY));

greenTotal = Math.Sqrt((greenX * greenX) +
(greenY * greenY));

redTotal = Math.Sqrt((redX * redX) +
(redY * redY));

if (blueTotal > 255)
{ blueTotal = 255; }
else if (blueTotal < 0)
{ blueTotal = 0; }

if (greenTotal > 255)
{ greenTotal = 255; }
else if (greenTotal < 0)
{ greenTotal = 0; }

if (redTotal > 255)
{ redTotal = 255; }
else if (redTotal < 0)
{ redTotal = 0; }

resultBuffer[byteOffset] = (byte)(blueTotal);
resultBuffer[byteOffset + 1] = (byte)(greenTotal);
resultBuffer[byteOffset + 2] = (byte)(redTotal);
resultBuffer[byteOffset + 3] = 255;
}
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
sourceBitmap.Height);

BitmapData resultData =
resultBitmap.LockBits(new Rectangle(0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0,
resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;
}```

### Original Sample Image

The original source image used to create all of the sample images in this article has been licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic license. The original image is attributed to Kenneth Dwain Harrelson and can be downloaded from Wikipedia.

### Laplacian Edge Detection

The Laplacian method counts as one of the more commonly used edge detection implementations. From Wikipedia we gain the following definition:

Discrete Laplace operator is often used in image processing e.g. in edge detection and motion estimation applications. The discrete Laplacian is defined as the sum of the second derivatives and calculated as sum of differences over the nearest neighbours of the central pixel.

A number of matrix/kernel variations may be applied, with results ranging from slight to fairly pronounced. In the following sections of this article we explore two common implementations, 3×3 and 5×5.
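Following the definition quoted above, the discrete Laplacian at a single pixel can be sketched using the four-nearest-neighbour form (a hypothetical illustration, not the sample application's code):

```csharp
using System;

public class LaplacianDemo
{
    // Discrete Laplacian at (x, y): sum of differences over the
    // four nearest neighbours of the central pixel.
    public static double LaplacianAt(double[,] image, int x, int y)
    {
        return image[y - 1, x] + image[y + 1, x] +
               image[y, x - 1] + image[y, x + 1] -
               4 * image[y, x];
    }

    static void Main()
    {
        double[,] edge =
        {
            { 0, 0, 10 },
            { 0, 0, 10 },
            { 0, 0, 10 },
        };

        // Non-zero response next to the intensity step; a uniform
        // region would produce zero.
        Console.WriteLine(LaplacianAt(edge, 1, 1)); // prints 10
    }
}
```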

### Laplacian 3×3

When implementing a Laplacian 3×3 filter you will notice little difference between colour and grayscale result images.

```public static Bitmap
Laplacian3x3Filter(this Bitmap sourceBitmap,
bool grayscale = true)
{
Bitmap resultBitmap =
ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Laplacian3x3,
1.0, 0, grayscale);

return resultBitmap;
}```
```public static double[,] Laplacian3x3
{
get
{
return new double[,]
{ { -1, -1, -1, },
{ -1,  8, -1, },
{ -1, -1, -1, }, };
}
} ```

Laplacian 3×3

Laplacian 3×3 Grayscale

### Laplacian 5×5

The Laplacian 5×5 filter produces results with a noticeable difference between colour and grayscale images. The detected edges are expressed in a fair amount of fine detail, although the filter has a tendency to be sensitive to image noise.

```public static Bitmap
Laplacian5x5Filter(this Bitmap sourceBitmap,
bool grayscale = true)
{
Bitmap resultBitmap =
ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Laplacian5x5,
1.0, 0, grayscale);

return resultBitmap;
}```
```public static double[,] Laplacian5x5
{
get
{
return new double[,]
{ { -1, -1, -1, -1, -1, },
{ -1, -1, -1, -1, -1, },
{ -1, -1, 24, -1, -1, },
{ -1, -1, -1, -1, -1, },
{ -1, -1, -1, -1, -1  } };
}
}```

Laplacian 5×5

Laplacian 5×5 Grayscale

### Laplacian of Gaussian

The Laplacian of Gaussian (LoG) is a common variation of the Laplacian filter, intended to counter the noise sensitivity of the regular Laplacian filter.

LoG attempts to remove image noise by implementing smoothing by means of a Gaussian blur. In order to optimize performance we can calculate a single convolution matrix representing both a Gaussian blur and a Laplacian filter.
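The single combined matrix mentioned above can, in principle, be obtained by convolving the Laplacian kernel with the Gaussian kernel, since applying the product kernel once is equivalent to applying the two in sequence. This is a sketch of the idea; the LoG matrix used below is a standard approximation rather than this exact product:

```csharp
using System;

public class CombineKernels
{
    // Full 2D convolution of two kernels: a (3x3) * b (3x3) yields a 5x5 kernel.
    public static double[,] Combine(double[,] a, double[,] b)
    {
        int ah = a.GetLength(0), aw = a.GetLength(1);
        int bh = b.GetLength(0), bw = b.GetLength(1);
        double[,] result = new double[ah + bh - 1, aw + bw - 1];

        for (int ay = 0; ay < ah; ay++)
            for (int ax = 0; ax < aw; ax++)
                for (int by = 0; by < bh; by++)
                    for (int bx = 0; bx < bw; bx++)
                        result[ay + by, ax + bx] += a[ay, ax] * b[by, bx];
        return result;
    }

    static void Main()
    {
        double[,] gaussian = { { 1, 2, 1 }, { 2, 4, 2 }, { 1, 2, 1 } };
        double[,] laplacian = { { -1, -1, -1 }, { -1, 8, -1 }, { -1, -1, -1 } };

        double[,] log = Combine(gaussian, laplacian); // a 5x5 combined kernel

        for (int y = 0; y < 5; y++)
        {
            for (int x = 0; x < 5; x++)
                Console.Write($"{log[y, x],5}");
            Console.WriteLine();
        }
    }
}
```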

```public static Bitmap
LaplacianOfGaussian(this Bitmap sourceBitmap)
{
Bitmap resultBitmap =
ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.LaplacianOfGaussian,
1.0, 0, true);

return resultBitmap;
}```
```public static double[,] LaplacianOfGaussian
{
get
{
return new double[,]
{ {  0,  0, -1,  0,  0 },
{  0, -1, -2, -1,  0 },
{ -1, -2, 16, -2, -1 },
{  0, -1, -2, -1,  0 },
{  0,  0, -1,  0,  0 } };
}
} ```

Laplacian of Gaussian

### Laplacian (3×3) of Gaussian (3×3)

Different filter variations can be combined in an attempt to produce results best suited to the input image. In this case we first apply a 3×3 Gaussian blur, followed by a 3×3 Laplacian filter.

```public static Bitmap
Laplacian3x3OfGaussian3x3Filter(this Bitmap sourceBitmap)
{
Bitmap resultBitmap =
ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian3x3,
1.0 / 16.0, 0, true);

resultBitmap = ExtBitmap.ConvolutionFilter(resultBitmap,
Matrix.Laplacian3x3, 1.0, 0, false);

return resultBitmap;
}```
```public static double[,] Laplacian3x3
{
get
{
return new double[,]
{ { -1, -1, -1, },
{ -1,  8, -1, },
{ -1, -1, -1, }, };
}
} ```
```public static double[,] Gaussian3x3
{
get
{
return new double[,]
{ { 1, 2, 1, },
{ 2, 4, 2, },
{ 1, 2, 1, } };
}
} ```

Laplacian 3×3 Of Gaussian 3×3

### Laplacian (3×3) of Gaussian (5×5 – Type 1)

In this scenario we apply a variation of a 5×5 Gaussian blur, followed by a 3×3 Laplacian filter.

```public static Bitmap
Laplacian3x3OfGaussian5x5Filter1(this Bitmap sourceBitmap)
{
Bitmap resultBitmap =
ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian5x5Type1,
1.0 / 159.0, 0, true);

resultBitmap = ExtBitmap.ConvolutionFilter(resultBitmap,
Matrix.Laplacian3x3, 1.0, 0, false);

return resultBitmap;
}```
```public static double[,] Laplacian3x3
{
get
{
return new double[,]
{ { -1, -1, -1, },
{ -1,  8, -1, },
{ -1, -1, -1, }, };
}
} ```
```public static double[,] Gaussian5x5Type1
{
get
{
return new double[,]
{ { 2, 04, 05, 04, 2 },
{ 4, 09, 12, 09, 4 },
{ 5, 12, 15, 12, 5 },
{ 4, 09, 12, 09, 4 },
{ 2, 04, 05, 04, 2 }, };
}
} ```

Laplacian 3×3 Of Gaussian 5×5 – Type 1

### Laplacian (3×3) of Gaussian (5×5 – Type 2)

The following implementation is very similar to the previous implementation. Applying a different variation of the 5×5 Gaussian blur results in slight differences.

```public static Bitmap
Laplacian3x3OfGaussian5x5Filter2(this Bitmap sourceBitmap)
{
Bitmap resultBitmap =
ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian5x5Type2,
1.0 / 256.0, 0, true);

resultBitmap = ExtBitmap.ConvolutionFilter(resultBitmap,
Matrix.Laplacian3x3, 1.0, 0, false);

return resultBitmap;
}```
```public static double[,] Laplacian3x3
{
get
{
return new double[,]
{ { -1, -1, -1, },
{ -1,  8, -1, },
{ -1, -1, -1, }, };
}
} ```
```public static double[,] Gaussian5x5Type2
{
get
{
return new double[,]
{ {  1,   4,  6,  4,  1 },
{  4,  16, 24, 16,  4 },
{  6,  24, 36, 24,  6 },
{  4,  16, 24, 16,  4 },
{  1,   4,  6,  4,  1 }, };
}
} ```

Laplacian 3×3 Of Gaussian 5×5 – Type 2

### Laplacian (5×5) of Gaussian (3×3)

This variation of the filter implements a 3×3 Gaussian blur, followed by a 5×5 Laplacian filter. The resulting image appears significantly brighter when compared to a 3×3 Laplacian filter.

```public static Bitmap
Laplacian5x5OfGaussian3x3Filter(this Bitmap sourceBitmap)
{
Bitmap resultBitmap =
ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian3x3,
1.0 / 16.0, 0, true);

resultBitmap = ExtBitmap.ConvolutionFilter(resultBitmap,
Matrix.Laplacian5x5, 1.0, 0, false);

return resultBitmap;
}```
```public static double[,] Laplacian5x5
{
get
{
return new double[,]
{ { -1, -1, -1, -1, -1, },
{ -1, -1, -1, -1, -1, },
{ -1, -1, 24, -1, -1, },
{ -1, -1, -1, -1, -1, },
{ -1, -1, -1, -1, -1  } };
}
}```
```public static double[,] Gaussian3x3
{
get
{
return new double[,]
{ { 1, 2, 1, },
{ 2, 4, 2, },
{ 1, 2, 1, } };
}
} ```

Laplacian 5×5 Of Gaussian 3×3

### Laplacian (5×5) of Gaussian (5×5 – Type 1)

Implementing a larger Gaussian blur matrix results in a higher degree of smoothing, equating to less image noise.

```public static Bitmap
Laplacian5x5OfGaussian5x5Filter1(this Bitmap sourceBitmap)
{
Bitmap resultBitmap =
ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian5x5Type1,
1.0 / 159.0, 0, true);

resultBitmap = ExtBitmap.ConvolutionFilter(resultBitmap,
Matrix.Laplacian5x5, 1.0, 0, false);

return resultBitmap;
}```
```public static double[,] Laplacian5x5
{
get
{
return new double[,]
{ { -1, -1, -1, -1, -1, },
{ -1, -1, -1, -1, -1, },
{ -1, -1, 24, -1, -1, },
{ -1, -1, -1, -1, -1, },
{ -1, -1, -1, -1, -1  } };
}
}```
```public static double[,] Gaussian5x5Type1
{
get
{
return new double[,]
{ { 2, 04, 05, 04, 2 },
{ 4, 09, 12, 09, 4 },
{ 5, 12, 15, 12, 5 },
{ 4, 09, 12, 09, 4 },
{ 2, 04, 05, 04, 2 }, };
}
} ```

Laplacian 5×5 Of Gaussian 5×5 – Type 1

### Laplacian (5×5) of Gaussian (5×5 – Type 2)

The variation of Gaussian blur most applicable when implementing a filter depends on the level of noise expressed by a source image. In this scenario the first variation (Type 1) appears to result in less image noise.

```public static Bitmap
Laplacian5x5OfGaussian5x5Filter2(this Bitmap sourceBitmap)
{
Bitmap resultBitmap =
ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian5x5Type2,
1.0 / 256.0, 0, true);

resultBitmap =
ExtBitmap.ConvolutionFilter(resultBitmap,
Matrix.Laplacian5x5,
1.0, 0, false);

return resultBitmap;
}```
```public static double[,] Laplacian5x5
{
get
{
return new double[,]
{ { -1, -1, -1, -1, -1, },
{ -1, -1, -1, -1, -1, },
{ -1, -1, 24, -1, -1, },
{ -1, -1, -1, -1, -1, },
{ -1, -1, -1, -1, -1  } };
}
}```
```public static double[,] Gaussian5x5Type2
{
get
{
return new double[,]
{ {  1,   4,  6,  4,  1 },
{  4,  16, 24, 16,  4 },
{  6,  24, 36, 24,  6 },
{  4,  16, 24, 16,  4 },
{  1,   4,  6,  4,  1 }, };
}
} ```

Laplacian 5×5 Of Gaussian 5×5 – Type 2

### Sobel Edge Detection

Sobel edge detection is another common implementation of edge detection. We gain the following from Wikipedia:

The Sobel operator is used in image processing, particularly within edge detection algorithms. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Sobel operator is either the corresponding gradient vector or the norm of this vector. The Sobel operator is based on convolving the image with a small, separable, and integer valued filter in horizontal and vertical direction and is therefore relatively inexpensive in terms of computations. On the other hand, the gradient approximation that it produces is relatively crude, in particular for high frequency variations in the image.

Unlike the filters discussed earlier, Sobel filter results differ significantly when comparing colour and grayscale images. The Sobel filter tends to be less sensitive to image noise compared to the Laplacian filter. The detected edge lines are not as finely detailed/granular as the detected edge lines resulting from Laplacian filters.

```public static Bitmap
Sobel3x3Filter(this Bitmap sourceBitmap,
bool grayscale = true)
{
Bitmap resultBitmap =
ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Sobel3x3Horizontal,
Matrix.Sobel3x3Vertical,
1.0, 0, grayscale);

return resultBitmap;
}```
```
public static double[,] Sobel3x3Horizontal
{
get
{
return new double[,]
{ { -1,  0,  1, },
{ -2,  0,  2, },
{ -1,  0,  1, }, };
}
} ```
```public static double[,] Sobel3x3Vertical
{
get
{
return new double[,]
{ {  1,  2,  1, },
{  0,  0,  0, },
{ -1, -2, -1, }, };
}
}```

Sobel 3×3

Sobel 3×3 Grayscale

### Prewitt Edge Detection

As with the other methods of edge detection discussed in this article, the Prewitt method is also a fairly common implementation. From Wikipedia we gain the following quote:

The Prewitt operator is used in image processing, particularly within edge detection algorithms. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Prewitt operator is either the corresponding gradient vector or the norm of this vector. The Prewitt operator is based on convolving the image with a small, separable, and integer valued filter in horizontal and vertical direction and is therefore relatively inexpensive in terms of computations. On the other hand, the gradient approximation which it produces is relatively crude, in particular for high frequency variations in the image. The Prewitt operator was developed by Judith M. S. Prewitt.

In simple terms, the operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction. The result therefore shows how "abruptly" or "smoothly" the image changes at that point, and therefore how likely it is that that part of the image represents an edge, as well as how that edge is likely to be oriented. In practice, the magnitude (likelihood of an edge) calculation is more reliable and easier to interpret than the direction calculation.

Similar to the Sobel filter, resulting images express a significant difference when comparing colour and grayscale source images.

```
public static Bitmap PrewittFilter(this Bitmap sourceBitmap,
                                   bool grayscale = true)
{
    Bitmap resultBitmap =
        ExtBitmap.ConvolutionFilter(sourceBitmap,
                                    Matrix.Prewitt3x3Horizontal,
                                    Matrix.Prewitt3x3Vertical,
                                    1.0, 0, grayscale);

    return resultBitmap;
}
```
```
public static double[,] Prewitt3x3Horizontal
{
    get
    {
        return new double[,]
        { { -1,  0,  1, },
          { -1,  0,  1, },
          { -1,  0,  1, }, };
    }
}
```
```
public static double[,] Prewitt3x3Vertical
{
    get
    {
        return new double[,]
        { {  1,  1,  1, },
          {  0,  0,  0, },
          { -1, -1, -1, }, };
    }
}
```

Prewitt

Prewitt Grayscale

### Kirsch Edge Detection

The Kirsch method is often implemented in the form of Compass edge detection. In the following scenario we only implement two components: Horizontal and Vertical. Resulting images tend to have a high level of brightness.

```
public static Bitmap KirschFilter(this Bitmap sourceBitmap,
                                  bool grayscale = true)
{
    Bitmap resultBitmap =
        ExtBitmap.ConvolutionFilter(sourceBitmap,
                                    Matrix.Kirsch3x3Horizontal,
                                    Matrix.Kirsch3x3Vertical,
                                    1.0, 0, grayscale);

    return resultBitmap;
}
```
```
public static double[,] Kirsch3x3Horizontal
{
    get
    {
        return new double[,]
        { {  5,  5,  5, },
          { -3,  0, -3, },
          { -3, -3, -3, }, };
    }
}
```
```
public static double[,] Kirsch3x3Vertical
{
    get
    {
        return new double[,]
        { {  5, -3, -3, },
          {  5,  0, -3, },
          {  5, -3, -3, }, };
    }
}
```

Kirsch

Kirsch Grayscale

### Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to image processing, links to which you can find here:

### Article Purpose

This article is intended to serve as an introduction to the concepts related to creating and processing convolution filters applied to images. The filters discussed are: Blur, Gaussian Blur, Soften, Motion Blur, High Pass, Edge Detect, Sharpen and Emboss.

### Using the Sample Application

A Sample Application has been included with this article’s sample source code. Using the Sample Application users are able to select a source/input image from the local file system and select a filter to apply from a drop down. Filtered images can be saved to the local file system when a user clicks the ‘Save’ button.

The following screenshot shows the Image Convolution Filter sample application in action.

### Image Convolution

Before delving into discussions of technical implementation details it is important to have a good understanding of the concepts behind image convolution.

In relation to image processing, convolution can be considered as an algorithm applied to translate an input/source image into a result image. The algorithms applied generally accept two input values and produce a third value considered to be a modified version of one of the input values.

Convolution can be implemented to produce filters such as: Blurring, Smoothing, Edge Detection, Sharpening and Embossing. The resulting filtered image still bears a relation to the input source image.

### Convolution Matrix

In this article we will be implementing image convolution by means of a matrix/kernel representing the algorithm required to produce the resulting filtered image. A convolution matrix should be considered as a two-dimensional array or grid. The number of rows and columns are required to be equal and, furthermore, to be odd. Examples of valid dimensions could be 3×3 or 5×5. Dimensions such as 2×2 or 4×4 would not be valid. Generally the sum total of all the values expressed in a matrix equates to one, although this is not a strict requirement.

The following table represents an example convolution matrix/kernel:

```
 2   0   0
 0  -1   0
 0   0  -1
```

An important aspect to keep in mind: when implementing a convolution filter, the value of a pixel is determined by the values of the pixel’s neighbouring pixels. The values contained in a matrix represent factor values intended to be multiplied with pixel values. The centre matrix value corresponds to the pixel currently being modified; neighbouring matrix values express the factors to be applied to the corresponding neighbouring pixels.
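As a concrete illustration of the paragraph above, the following sketch applies a 3×3 factor matrix to the 3×3 neighbourhood surrounding a single pixel. This is a simplified, hypothetical helper for illustration only, not part of the sample source code:

```
public static class KernelSample
{
    // Applies a 3x3 factor matrix to the 3x3 neighbourhood of pixel values
    // surrounding the pixel being modified (neighbourhood[1, 1]).
    public static double Apply(double[,] neighbourhood, double[,] matrix)
    {
        double result = 0.0;

        for (int row = 0; row < 3; row++)
        {
            for (int col = 0; col < 3; col++)
            {
                result += neighbourhood[row, col] * matrix[row, col];
            }
        }

        return result;
    }
}
```

A useful sanity check: an identity matrix (a 1 at the centre, 0 elsewhere) returns the centre pixel unchanged.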

### The ConvolutionFilterBase class

The sample code defines the class ConvolutionFilterBase. This class is intended to represent the minimum requirements of a convolution filter. When defining a custom filter we will be inheriting from the ConvolutionFilterBase class. Because this class and its members have been defined as abstract, implementing classes are required to implement all defined members.

The following code snippet details the ConvolutionFilterBase definition:

```
public abstract class ConvolutionFilterBase
{
    public abstract string FilterName
    {
        get;
    }

    public abstract double Factor
    {
        get;
    }

    public abstract double Bias
    {
        get;
    }

    public abstract double[,] FilterMatrix
    {
        get;
    }
}
```

As to be expected, the member property FilterMatrix is intended to represent a two-dimensional array containing a convolution matrix. In some instances, when the sum total of the matrix values does not equate to 1, a filter might implement a Factor value other than the default of 1. Additionally, some filters may also require a Bias value to be added to the final result value when calculating the matrix.

### Calculating a Convolution Filter

Calculating convolution filters and creating the resulting images can be achieved by invoking the ConvolutionFilter method. This method is defined as an extension method targeting the Bitmap class. The definition of the ConvolutionFilter method is as follows:

```
public static Bitmap ConvolutionFilter<T>(this Bitmap sourceBitmap, T filter)
where T : ConvolutionFilterBase
{
BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);

sourceBitmap.UnlockBits(sourceData);

double blue = 0.0;
double green = 0.0;
double red = 0.0;

int filterWidth = filter.FilterMatrix.GetLength(1);
int filterHeight = filter.FilterMatrix.GetLength(0);

int filterOffset = (filterWidth-1) / 2;
int calcOffset = 0;

int byteOffset = 0;

for (int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for (int offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
blue = 0;
green = 0;
red = 0;

byteOffset = offsetY *
sourceData.Stride +
offsetX * 4;

for (int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{

calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

blue += (double)(pixelBuffer[calcOffset]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];

green += (double)(pixelBuffer[calcOffset + 1]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];

red += (double)(pixelBuffer[calcOffset + 2]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];
}
}

blue = filter.Factor * blue + filter.Bias;
green = filter.Factor * green + filter.Bias;
red = filter.Factor * red + filter.Bias;

if (blue > 255)
{ blue = 255; }
else if (blue < 0)
{ blue = 0; }

if (green > 255)
{ green = 255; }
else if (green < 0)
{ green = 0; }

if (red > 255)
{ red = 255; }
else if (red < 0)
{ red = 0; }

resultBuffer[byteOffset] = (byte)(blue);
resultBuffer[byteOffset + 1] = (byte)(green);
resultBuffer[byteOffset + 2] = (byte)(red);
resultBuffer[byteOffset + 3] = 255;
}
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;
}
```

The following sections provide a detailed discussion of the ConvolutionFilter method.

### ConvolutionFilter<T> – Method Signature

```
public static Bitmap ConvolutionFilter<T>(
    this Bitmap sourceBitmap,
    T filter)
    where T : ConvolutionFilterBase
```

The ConvolutionFilter method defines a generic type T constrained by the requirement to be of type ConvolutionFilterBase. The filter parameter being of generic type T has to be of type ConvolutionFilterBase or a type which inherits from the ConvolutionFilterBase class.

Notice how the sourceBitmap parameter type definition is preceded by the this keyword, indicating that the method is implemented as an extension method. Keep in mind extension methods are required to be declared as static members of a static class.

The sourceBitmap parameter represents the source/input image upon which the filter is to be applied. Note that the ConvolutionFilter method is implemented as immutable: the input parameter values are not modified; instead a new Bitmap instance is created and returned.

### ConvolutionFilter<T> – Creating the Data Buffer

```
BitmapData sourceData = sourceBitmap.LockBits
(new Rectangle(0, 0,
sourceBitmap.Width,
sourceBitmap.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride *
sourceData.Height];

byte[] resultBuffer = new byte[sourceData.Stride *
sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer,
0, pixelBuffer.Length);

sourceBitmap.UnlockBits(sourceData);
```

In order to access the underlying ARGB values from a Bitmap object we first need to lock the Bitmap into memory by invoking the LockBits method. Locking a Bitmap into memory prevents the garbage collector from moving the Bitmap object’s data to a new location in memory.

When invoking the LockBits method the source code instantiates a BitmapData object from the return value. The Stride property represents the number of bytes in a single pixel row. In this scenario the Stride property should be equal to the Bitmap’s width in pixels multiplied by four, seeing as every pixel consists of four bytes: Alpha, Red, Green and Blue.

The ConvolutionFilter method defines two byte buffers, of which the size is set to equal the size of the Bitmap’s underlying pixel data. The Scan0 property, of type IntPtr, represents the memory address of the first byte of a Bitmap’s underlying pixel buffer. Using the Marshal.Copy method we specify the starting memory address from where to start copying the Bitmap’s pixel buffer.

Important to remember is the next operation performed: invoking the UnlockBits method. Whenever a Bitmap has been locked into memory, ensure the lock is released by invoking the UnlockBits method.

### ConvolutionFilter<T> – Iterating Rows and Columns

```
double blue = 0.0;
double green = 0.0;
double red = 0.0;

int filterWidth = filter.FilterMatrix.GetLength(1);
int filterHeight = filter.FilterMatrix.GetLength(0);

int filterOffset = (filterWidth-1) / 2;
int calcOffset = 0;

int byteOffset = 0;

for (int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for (int offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
blue = 0;
green = 0;
red = 0;

byteOffset = offsetY *
sourceData.Stride +
offsetX * 4;
```

The ConvolutionFilter method employs two for loops in order to iterate each pixel represented in the ARGB data buffer. Defining two for loops to iterate a one dimensional array simplifies the concept of accessing the array in terms of rows and columns.

Note that the inner loop is limited to the width of the source/input image, in other words the number of horizontal pixels. Remember that the data buffer holds four bytes, Alpha, Red, Green and Blue, for each pixel. The inner loop therefore iterates entire pixels.

As discussed earlier, the filter matrix has to be declared as a two-dimensional array with the same odd number of rows and columns. With the current pixel corresponding to the element at the centre of the matrix, the width of the matrix less one, divided by two, equates to the index range of neighbouring pixels.

The index of the current pixel can be calculated by multiplying the current row index (offsetY) and the number of ARGB byte values per row of pixels (sourceData.Stride), to which is added the current column/pixel index (offsetX) multiplied by four.
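The index calculation described above can be expressed as a small helper (hypothetical, shown for illustration only):

```
public static class PixelIndex
{
    // Returns the buffer index of the Blue byte of the pixel at column x,
    // row y, where stride is the number of bytes per pixel row and each
    // 32bpp pixel occupies four bytes (Blue, Green, Red, Alpha).
    public static int ByteOffset(int x, int y, int stride)
    {
        return y * stride + x * 4;
    }
}
```

For a 10-pixel-wide 32bpp image the stride is 40 bytes, so the pixel at column 2, row 1 starts at byte 48.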

### ConvolutionFilter<T> – Iterating the Matrix

```
for (int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{
calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

blue += (double)(pixelBuffer[calcOffset]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];

green += (double)(pixelBuffer[calcOffset + 1]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];

red += (double)(pixelBuffer[calcOffset + 2]) *
filter.FilterMatrix[filterY + filterOffset,
filterX + filterOffset];
}
}
```

The ConvolutionFilter method iterates the two-dimensional filter matrix by implementing two for loops, iterating rows and for each row iterating columns. Both loops have been declared with a starting point equal to the negative value of half the matrix length (filterOffset). Initiating the loops with negative values simplifies implementing the concept of neighbouring pixels.

The first statement performed within the inner loop calculates the buffer index of the neighbouring pixel in relation to the current pixel. Next, the matrix value is applied as a factor to the corresponding neighbouring pixel’s individual colour components. The results are added to the running totals blue, green and red.

Since each iteration advances in terms of an entire pixel, to access individual colour components the source code adds the required colour component offset. Note: the colour components are in fact stored in reversed order: Blue, Green, Red and Alpha. In other words, a pixel’s first byte (offset 0) represents Blue, the second byte (offset 1) represents Green, the third byte (offset 2) represents Red and the last byte (offset 3) represents the Alpha component.
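The reversed byte order can be sketched as a small read helper (hypothetical, for illustration only):

```
public static class Bgra
{
    // Reads the four colour components of the pixel starting at byteOffset
    // from a 32bpp buffer; bytes are stored Blue, Green, Red, Alpha.
    public static (byte Blue, byte Green, byte Red, byte Alpha) Read(
        byte[] buffer, int byteOffset)
    {
        return (buffer[byteOffset],
                buffer[byteOffset + 1],
                buffer[byteOffset + 2],
                buffer[byteOffset + 3]);
    }
}
```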

### ConvolutionFilter<T> – Applying the Factor and Bias

```
blue = filter.Factor * blue + filter.Bias;
green = filter.Factor * green + filter.Bias;
red = filter.Factor * red + filter.Bias;

if (blue > 255)
{ blue = 255; }
else if (blue < 0)
{ blue = 0; }

if (green > 255)
{ green = 255; }
else if (green < 0)
{ green = 0; }

if (red > 255)
{ red = 255; }
else if (red < 0)
{ red = 0; }

resultBuffer[byteOffset] = (byte)(blue);
resultBuffer[byteOffset + 1] = (byte)(green);
resultBuffer[byteOffset + 2] = (byte)(red);
resultBuffer[byteOffset + 3] = 255;
```

After iterating the matrix and calculating the matrix totals of the current pixel’s Red, Green and Blue colour components, we apply the Factor and add the Bias defined by the filter parameter.

Colour components may only contain a value ranging from 0 to 255 inclusive. Before we assign the newly calculated colour component value we ensure that the value falls within the required range. Values which exceed 255 are set to 255 and values less than 0 are set to 0. Note that assignment is implemented in terms of the result buffer, the original source buffer remains unchanged.
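The clamping logic described above can be factored into a single helper (a hypothetical refactoring, equivalent to the if/else chains in the ConvolutionFilter method):

```
public static class ColourClamp
{
    // Clamps a calculated colour component to the valid byte range 0..255.
    public static byte ToByte(double value)
    {
        if (value > 255) { return 255; }
        if (value < 0) { return 0; }

        return (byte)value;
    }
}
```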

### ConvolutionFilter<T> – Returning the Result

```
Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
sourceBitmap.Height);

BitmapData resultData = resultBitmap.LockBits(
new Rectangle(0, 0,
resultBitmap.Width,
resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0,
resultBuffer.Length);

resultBitmap.UnlockBits(resultData);

return resultBitmap;
```

The final steps performed by the ConvolutionFilter method involve creating a new Bitmap object instance and copying the calculated result buffer. In a similar fashion to reading the underlying pixel data, we copy the result buffer to the new Bitmap object.

### Creating Filters

The main requirement when creating a filter is to inherit from the ConvolutionFilterBase class. The following sections of this article will discuss various filter types and variations where applicable.

To illustrate the different effects resulting from applying filters, all of the filters discussed make use of the same source image. The original image file is licensed under the Creative Commons Attribution 2.0 Generic license and can be downloaded from:

### Blur Filters

Blurring is typically used to reduce image noise and detail. The filter’s matrix size affects the level of blurring: a larger matrix results in a higher level of blurring, whereas a smaller matrix results in a lesser level of blurring.

### Blur3x3Filter

The Blur3x3Filter results in a slight to medium level of blurring. The matrix consists of 9 elements in a 3×3 configuration.

```
public class Blur3x3Filter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Blur3x3Filter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 0.0, 0.2, 0.0, },
{ 0.2, 0.2, 0.2, },
{ 0.0, 0.2, 0.0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### Blur5x5Filter

The Blur5x5Filter results in a medium level of blurring. The matrix consists of 25 elements in a 5×5 configuration. Notice the Factor of 1.0 / 13.0, which equals the sum total of the matrix elements.

```
public class Blur5x5Filter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Blur5x5Filter"; }
}

private double factor = 1.0 / 13.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 0, 0, 1, 0, 0, },
{ 0, 1, 1, 1, 0, },
{ 1, 1, 1, 1, 1, },
{ 0, 1, 1, 1, 0, },
{ 0, 0, 1, 0, 0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### Gaussian3x3BlurFilter

The Gaussian3x3BlurFilter implements a Gaussian blur through a matrix of 9 elements in a 3×3 configuration. The sum total of all elements equals 16, therefore the Factor is defined as 1.0 / 16.0. Applying this filter results in a slight to medium level of blurring.

```
public class Gaussian3x3BlurFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Gaussian3x3BlurFilter"; }
}

private double factor = 1.0 / 16.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 1, 2, 1, },
{ 2, 4, 2, },
{ 1, 2, 1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### Gaussian5x5BlurFilter

The Gaussian5x5BlurFilter implements a Gaussian blur through a matrix of 25 elements in a 5×5 configuration. The sum total of all elements equals 159, therefore the Factor is defined as 1.0 / 159.0. Applying this filter results in a medium level of blurring.

```
public class Gaussian5x5BlurFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Gaussian5x5BlurFilter"; }
}

private double factor = 1.0 / 159.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 2, 04, 05, 04, 2, },
{ 4, 09, 12, 09, 4, },
{ 5, 12, 15, 12, 5, },
{ 4, 09, 12, 09, 4, },
{ 2, 04, 05, 04, 2, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```
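The fixed 3×3 and 5×5 Gaussian matrices above are rounded integer approximations of the Gaussian function. Should larger or differently weighted kernels be required, a Gaussian matrix can also be generated programmatically. The following sketch is not part of the sample source code; it builds a normalized kernel directly, so the effective Factor is 1.0 and no separate division is needed:

```
using System;

public static class GaussianKernel
{
    // Builds a size x size Gaussian matrix for the given standard
    // deviation. Dividing every element by the running sum normalizes
    // the kernel so that all elements total 1.0.
    public static double[,] Create(int size, double sigma)
    {
        double[,] kernel = new double[size, size];
        int offset = (size - 1) / 2;
        double sum = 0.0;

        for (int y = -offset; y <= offset; y++)
        {
            for (int x = -offset; x <= offset; x++)
            {
                double value =
                    Math.Exp(-(x * x + y * y) / (2.0 * sigma * sigma));

                kernel[y + offset, x + offset] = value;
                sum += value;
            }
        }

        for (int y = 0; y < size; y++)
        {
            for (int x = 0; x < size; x++)
            {
                kernel[y, x] /= sum;
            }
        }

        return kernel;
    }
}
```

A larger sigma spreads the weights further from the centre, producing a stronger blur for the same matrix size.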

### MotionBlurFilter

By implementing the MotionBlurFilter, resulting images give the appearance of a high level of blurring associated with motion/movement. This filter is a combination of left-to-right and right-to-left motion blur. The matrix consists of 81 elements in a 9×9 configuration.

```
public class MotionBlurFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "MotionBlurFilter"; }
}

private double factor = 1.0 / 18.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 1, 0, 0, 0, 0, 0, 0, 0, 1, },
{ 0, 1, 0, 0, 0, 0, 0, 1, 0, },
{ 0, 0, 1, 0, 0, 0, 1, 0, 0, },
{ 0, 0, 0, 1, 0, 1, 0, 0, 0, },
{ 0, 0, 0, 0, 1, 0, 0, 0, 0, },
{ 0, 0, 0, 1, 0, 1, 0, 0, 0, },
{ 0, 0, 1, 0, 0, 0, 1, 0, 0, },
{ 0, 1, 0, 0, 0, 0, 0, 1, 0, },
{ 1, 0, 0, 0, 0, 0, 0, 0, 1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### MotionBlurLeftToRightFilter

The MotionBlurLeftToRightFilter creates the effect of blurring as a result of left to right movement. The matrix consists of 81 elements in a 9×9 configuration.

```
public class MotionBlurLeftToRightFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "MotionBlurLeftToRightFilter"; }
}

private double factor = 1.0 / 9.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 1, 0, 0, 0, 0, 0, 0, 0, 0, },
{ 0, 1, 0, 0, 0, 0, 0, 0, 0, },
{ 0, 0, 1, 0, 0, 0, 0, 0, 0, },
{ 0, 0, 0, 1, 0, 0, 0, 0, 0, },
{ 0, 0, 0, 0, 1, 0, 0, 0, 0, },
{ 0, 0, 0, 0, 0, 1, 0, 0, 0, },
{ 0, 0, 0, 0, 0, 0, 1, 0, 0, },
{ 0, 0, 0, 0, 0, 0, 0, 1, 0, },
{ 0, 0, 0, 0, 0, 0, 0, 0, 1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### MotionBlurRightToLeftFilter

The MotionBlurRightToLeftFilter creates the effect of blurring as a result of right to left movement. The matrix consists of 81 elements in a 9×9 configuration.

```
public class MotionBlurRightToLeftFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "MotionBlurRightToLeftFilter"; }
}

private double factor = 1.0 / 9.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 0, 0, 0, 0, 0, 0, 0, 0, 1, },
{ 0, 0, 0, 0, 0, 0, 0, 1, 0, },
{ 0, 0, 0, 0, 0, 0, 1, 0, 0, },
{ 0, 0, 0, 0, 0, 1, 0, 0, 0, },
{ 0, 0, 0, 0, 1, 0, 0, 0, 0, },
{ 0, 0, 0, 1, 0, 0, 0, 0, 0, },
{ 0, 0, 1, 0, 0, 0, 0, 0, 0, },
{ 0, 1, 0, 0, 0, 0, 0, 0, 0, },
{ 1, 0, 0, 0, 0, 0, 0, 0, 0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### Soften Filter

The SoftenFilter can be used to smooth or soften an image. The matrix consists of 9 elements in a 3×3 configuration.

```
public class SoftenFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "SoftenFilter"; }
}

private double factor = 1.0 / 8.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 1, 1, 1, },
{ 1, 1, 1, },
{ 1, 1, 1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### Sharpen Filters

Sharpening an image does not add additional detail to the image but rather adds emphasis to existing image details. Sharpening is sometimes referred to as image crispness.

### SharpenFilter

This filter is intended as a general usage sharpening filter. In a variety of scenarios this filter should provide a reasonable level of sharpening, depending on source image quality.

```
public class SharpenFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "SharpenFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1, -1, -1, },
{ -1,  9, -1, },
{ -1, -1, -1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### Sharpen3x3Filter

The Sharpen3x3Filter results in a medium level of sharpening, less intense when compared to the SharpenFilter discussed previously.

```
public class Sharpen3x3Filter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Sharpen3x3Filter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { {  0, -1,  0, },
{ -1,  5, -1, },
{  0, -1,  0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### Sharpen3x3FactorFilter

The Sharpen3x3FactorFilter provides a level of sharpening similar to the Sharpen3x3Filter explored previously. Both filters define a 9 element 3×3 matrix. The filters differ in regard to Factor values: the Sharpen3x3Filter matrix values equate to a sum total of 1, whereas the Sharpen3x3FactorFilter matrix values equate to a sum total of 3. The Sharpen3x3FactorFilter therefore defines a Factor of 1 / 3, normalizing the effective sum total to 1.

```
public class Sharpen3x3FactorFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Sharpen3x3FactorFilter"; }
}

private double factor = 1.0 / 3.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { {  0, -2,  0, },
{ -2, 11, -2, },
{  0, -2,  0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```
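The relationship between a matrix’s sum total and its Factor can be verified with a simple helper (hypothetical, for illustration): a brightness-preserving filter should yield an effective sum of approximately 1.

```
public static class MatrixCheck
{
    // Sum of all matrix elements multiplied by the filter's Factor.
    // Approximately 1.0 indicates the filter preserves overall brightness.
    public static double EffectiveSum(double[,] matrix, double factor)
    {
        double sum = 0.0;

        foreach (double value in matrix)
        {
            sum += value;
        }

        return sum * factor;
    }
}
```

For the Sharpen3x3FactorFilter matrix the elements total 3 and the Factor is 1.0 / 3.0, giving an effective sum of 1.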

### Sharpen5x5Filter

The Sharpen5x5Filter matrix defines 25 elements in a 5×5 configuration. The level of sharpening resulting from this filter depends to a greater extent on the source image. In some scenarios result images may appear slightly softened.

```
public class Sharpen5x5Filter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Sharpen5x5Filter"; }
}

private double factor = 1.0 / 8.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1, -1, -1, -1, -1, },
{ -1,  2,  2,  2, -1, },
{ -1,  2,  8,  2, -1, },
{ -1,  2,  2,  2, -1, },
{ -1, -1, -1, -1, -1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### IntenseSharpenFilter

The IntenseSharpenFilter produces result images with overly emphasized edge lines.

```
public class IntenseSharpenFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "IntenseSharpenFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 1,  1, 1, },
{ 1, -7, 1, },
{ 1,  1, 1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### Edge Detection Filters

Edge detection is the first step towards feature detection and feature extraction in image processing. Edges are generally perceived in images in areas exhibiting sudden differences in brightness.

### EdgeDetectionFilter

The EdgeDetectionFilter is intended to be used as a general purpose edge detection filter, considered appropriate in the majority of scenarios.

```
public class EdgeDetectionFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "EdgeDetectionFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1, -1, -1, },
{ -1,  8, -1, },
{ -1, -1, -1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### EdgeDetection45DegreeFilter

The EdgeDetection45DegreeFilter has the ability to detect edges at 45 degree angles more effectively than other filters.

```
public class EdgeDetection45DegreeFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "EdgeDetection45DegreeFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1,  0,  0,  0,  0, },
{  0, -2,  0,  0,  0, },
{  0,  0,  6,  0,  0, },
{  0,  0,  0, -2,  0, },
{  0,  0,  0,  0, -1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### HorizontalEdgeDetectionFilter

The HorizontalEdgeDetectionFilter has the ability to detect horizontal edges more effectively than other filters.

```
public class HorizontalEdgeDetectionFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "HorizontalEdgeDetectionFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { {  0,  0,  0,  0,  0, },
{  0,  0,  0,  0,  0, },
{ -1, -1,  2,  0,  0, },
{  0,  0,  0,  0,  0, },
{  0,  0,  0,  0,  0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### VerticalEdgeDetectionFilter

The VerticalEdgeDetectionFilter has the ability to detect vertical edges more effectively than other filters.

```
public class VerticalEdgeDetectionFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "VerticalEdgeDetectionFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 0,  0, -1,  0,  0, },
{ 0,  0, -1,  0,  0, },
{ 0,  0,  4,  0,  0, },
{ 0,  0, -1,  0,  0, },
{ 0,  0, -1,  0,  0, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### EdgeDetectionTopLeftBottomRightFilter

This filter closely resembles an emboss filter, indicating object depth whilst still providing a reasonable level of edge detail.

```
public class EdgeDetectionTopLeftBottomRightFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "EdgeDetectionTopLeftBottomRightFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 0.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -5,  0,  0, },
{  0,  0,  0, },
{  0,  0,  5, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### Emboss Filters

Emboss filters produce result images with an emphasis on depth, based on lines/edges expressed in an input/source image. Result images give the impression of being three dimensional to a varying extent, depending on the details defined by the input image.

### EmbossFilter

The EmbossFilter is intended as a general application emboss filter. Take note of the Bias value of 128: without a bias value, result images would be very dark or mostly black.

```
public class EmbossFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "EmbossFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 128.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { 2,  0,  0, },
{ 0, -1,  0, },
{ 0,  0, -1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
}
```

### Emboss45DegreeFilter

The Emboss45DegreeFilter produces result images with good emphasis on 45 degree edges/lines.

```
public class Emboss45DegreeFilter : ConvolutionFilterBase
{
public override string FilterName
{
get { return "Emboss45DegreeFilter"; }
}

private double factor = 1.0;
public override double Factor
{
get { return factor; }
}

private double bias = 128.0;
public override double Bias
{
get { return bias; }
}

private double[,] filterMatrix =
new double[,] { { -1, -1,  0, },
{ -1,  0,  1, },
{  0,  1,  1, }, };

public override double[,] FilterMatrix
{
get { return filterMatrix; }
}
} ```

### EmbossTopLeftBottomRightFilter

The EmbossTopLeftBottomRightFilter produces a more subtle level of embossing in result images.

```
public class EmbossTopLeftBottomRightFilter : ConvolutionFilterBase
{
    public override string FilterName
    {
        get { return "EmbossTopLeftBottomRightFilter"; }
    }

    private double factor = 1.0;
    public override double Factor
    {
        get { return factor; }
    }

    private double bias = 128.0;
    public override double Bias
    {
        get { return bias; }
    }

    private double[,] filterMatrix =
        new double[,] { { -1, 0, 0, },
                        {  0, 0, 0, },
                        {  0, 0, 1, }, };

    public override double[,] FilterMatrix
    {
        get { return filterMatrix; }
    }
}
```

### IntenseEmbossFilter

When implementing the IntenseEmbossFilter, result images provide a good level of three dimensional depth. A drawback of this filter can sometimes be noticed in a reduction of image detail.

```
public class IntenseEmbossFilter : ConvolutionFilterBase
{
    public override string FilterName
    {
        get { return "IntenseEmbossFilter"; }
    }

    private double factor = 1.0;
    public override double Factor
    {
        get { return factor; }
    }

    private double bias = 128.0;
    public override double Bias
    {
        get { return bias; }
    }

    private double[,] filterMatrix =
        new double[,] { { -1, -1, -1, -1,  0, },
                        { -1, -1, -1,  0,  1, },
                        { -1, -1,  0,  1,  1, },
                        { -1,  0,  1,  1,  1, },
                        {  0,  1,  1,  1,  1, }, };

    public override double[,] FilterMatrix
    {
        get { return filterMatrix; }
    }
}
```

### High Pass

High Pass filters produce result images where only high frequency components are retained.

```
public class HighPass3x3Filter : ConvolutionFilterBase
{
    public override string FilterName
    {
        get { return "HighPass3x3Filter"; }
    }

    private double factor = 1.0 / 16.0;
    public override double Factor
    {
        get { return factor; }
    }

    private double bias = 128.0;
    public override double Bias
    {
        get { return bias; }
    }

    private double[,] filterMatrix =
        new double[,] { { -1, -2, -1, },
                        { -2, 12, -2, },
                        { -1, -2, -1, }, };

    public override double[,] FilterMatrix
    {
        get { return filterMatrix; }
    }
}
```
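A quick numeric check, separate from the filter class itself, illustrates why this kernel acts as a high pass filter: its weights sum to zero, so flat (low frequency) regions produce only the Bias value of 128, and only rapid intensity changes deviate from mid-grey.

```
using System;

// Quick numeric check of the HighPass3x3Filter kernel: its weights sum to
// zero, so a flat (low frequency) region produces only the Bias value.
double[,] kernel = { { -1, -2, -1 },
                     { -2, 12, -2 },
                     { -1, -2, -1 } };
double factor = 1.0 / 16.0, bias = 128.0;

double weightSum = 0;
foreach (double w in kernel) weightSum += w;

// Response over a flat region of intensity 200: the signal cancels out.
double flatResponse = 200 * weightSum * factor + bias;

Console.WriteLine(weightSum);    // 0 - the kernel removes low frequencies
Console.WriteLine(flatResponse); // 128 - flat areas come out mid-grey
```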

### Article Purpose

The objective of this article is to illustrate Image Arithmetic being implemented when blending/combining two separate images into a single result image. The types of Image Arithmetic discussed are: Average, Add, SubtractLeft, SubtractRight, Difference, Multiply, Min, Max and Amplitude.

I created the following images by implementing Image Arithmetic, using as input a photo of a friend’s ear and a photograph taken at a live concert performance by The Red Hot Chili Peppers.

### Using the Sample Application

The sample source code accompanying this article includes a sample application. The sample application is intended to provide an implementation of the various types of Image Arithmetic explored in this article.

The Image Arithmetic sample application allows the user to select two source/input images from the local file system. The user interface defines a ComboBox dropdown populated with entries relating to the types of Image Arithmetic.

The following is a screenshot taken whilst creating the “Red Hot Chili Peppers Concert – Side profile Ear” blend illustrated in the first image shown in this article. Notice the stark contrast when comparing the source/input preview images. Implementing Image Arithmetic allows us to create a smoothly blended result image:

Newly created images can be saved to the local file system by clicking the ‘Save Image’ button.

### Image Arithmetic

In simple terms, Image Arithmetic involves performing calculations on the corresponding pixel colour components of two images. The resulting values represent a single image which is a combination of the two original source/input images. The extent to which a source/input image will be represented in the resulting image depends on the type of Image Arithmetic employed.

### The ArithmeticBlend Extension method

In this article Image Arithmetic has been implemented as a single extension method targeting the Bitmap class. The ArithmeticBlend method expects as parameters two source/input Bitmap objects and an enum value indicating the type of Image Arithmetic to perform.

The ColorCalculationType enum defines a value for each type of Image Arithmetic supported. The definition is as follows:

```
public enum ColorCalculationType
{
    Add,
    Average,
    SubtractLeft,
    SubtractRight,
    Difference,
    Multiply,
    Min,
    Max,
    Amplitude
}
```

It is only within the ArithmeticBlend method that we perform Image Arithmetic. This method accesses the underlying pixel data of each sample Bitmap and creates copies stored in byte arrays. Each element within the array data buffer represents a single colour component, either Alpha, Red, Green or Blue.

The following code snippet details the implementation of the ArithmeticBlend method:

```
public static Bitmap ArithmeticBlend(this Bitmap sourceBitmap, Bitmap blendBitmap,
                                     ColorCalculator.ColorCalculationType calculationType)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                            ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    BitmapData blendData = blendBitmap.LockBits(new Rectangle(0, 0,
                           blendBitmap.Width, blendBitmap.Height),
                           ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] blendBuffer = new byte[blendData.Stride * blendData.Height];
    Marshal.Copy(blendData.Scan0, blendBuffer, 0, blendBuffer.Length);
    blendBitmap.UnlockBits(blendData);

    for (int k = 0; (k + 4 < pixelBuffer.Length) &&
                    (k + 4 < blendBuffer.Length); k += 4)
    {
        pixelBuffer[k] = ColorCalculator.Calculate(pixelBuffer[k],
                                blendBuffer[k], calculationType);

        pixelBuffer[k + 1] = ColorCalculator.Calculate(pixelBuffer[k + 1],
                                blendBuffer[k + 1], calculationType);

        pixelBuffer[k + 2] = ColorCalculator.Calculate(pixelBuffer[k + 2],
                                blendBuffer[k + 2], calculationType);
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(pixelBuffer, 0, resultData.Scan0, pixelBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
```

We access and copy the underlying pixel data of each input image by making use of the LockBits method and the Marshal.Copy method.

The method iterates both array data buffers simultaneously, having set the for loop condition to regard the array size of both arrays. Scenarios where the array data buffers differ in size occur when the specified source images are not equal in terms of size dimensions.

Notice how each iteration increments the loop counter by four, allowing us to treat each iteration as a complete pixel value. Remember that each data buffer element represents an individual colour component. Every four elements represent a single pixel consisting of the components: Alpha, Red, Green and Blue.

Take note: the ordering of colour components is the exact opposite of the expected order. Each pixel’s colour components are ordered: Blue, Green, Red, Alpha. Since we are iterating an entire pixel with each iteration, the for loop counter value will always equate to an element index representing the Blue colour component. In order to access the Green and Red colour components we simply add one and two respectively to the for loop counter value.
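The reversed ordering can be illustrated with a small standalone snippet. The component values below are hypothetical sample data, not taken from the sample application:

```
using System;

// Illustration of the BGRA byte ordering described above. For a 32bpp ARGB
// bitmap the buffer stores each pixel as four bytes: Blue, Green, Red, Alpha.
byte[] pixelBuffer = { 10, 20, 30, 255,    // pixel 0: B=10, G=20, R=30, A=255
                       40, 50, 60, 255 };  // pixel 1: B=40, G=50, R=60, A=255

int k = 4; // the loop counter always lands on a Blue component
byte blue  = pixelBuffer[k];
byte green = pixelBuffer[k + 1];
byte red   = pixelBuffer[k + 2];
byte alpha = pixelBuffer[k + 3];

Console.WriteLine($"B={blue} G={green} R={red} A={alpha}"); // B=40 G=50 R=60 A=255
```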

The task of performing the actual arithmetic has been encapsulated within the static Calculate method, a public member of the static class ColorCalculator. The Calculate method is discussed in more detail in the following section of this article.

The final task performed by the ArithmeticBlend method involves creating a new instance of the Bitmap class, which is then updated/populated using the previously modified array data buffer.

### The ColorCalculator.Calculate method

The algorithms implemented in Image Arithmetic are encapsulated within the ColorCalculator.Calculate method. When invoking this method no knowledge of the technical implementation details is required. The parameters required are two byte values, each representing a single colour component, one from each source image. The only other required parameter is an enum value of type ColorCalculationType, which indicates the type of Image Arithmetic to implement using the parameters as operands.

The following code snippet details the full implementation of the ColorCalculator.Calculate method:

```
public static byte Calculate(byte color1, byte color2,
                             ColorCalculationType calculationType)
{
    byte resultValue = 0;
    int intResult = 0;

    if (calculationType == ColorCalculationType.Add)
    {
        intResult = color1 + color2;
    }
    else if (calculationType == ColorCalculationType.Average)
    {
        intResult = (color1 + color2) / 2;
    }
    else if (calculationType == ColorCalculationType.SubtractLeft)
    {
        intResult = color1 - color2;
    }
    else if (calculationType == ColorCalculationType.SubtractRight)
    {
        intResult = color2 - color1;
    }
    else if (calculationType == ColorCalculationType.Difference)
    {
        intResult = Math.Abs(color1 - color2);
    }
    else if (calculationType == ColorCalculationType.Multiply)
    {
        intResult = (int)((color1 / 255.0 * color2 / 255.0) * 255.0);
    }
    else if (calculationType == ColorCalculationType.Min)
    {
        intResult = (color1 < color2 ? color1 : color2);
    }
    else if (calculationType == ColorCalculationType.Max)
    {
        intResult = (color1 > color2 ? color1 : color2);
    }
    else if (calculationType == ColorCalculationType.Amplitude)
    {
        intResult = (int)(Math.Sqrt(color1 * color1 + color2 * color2)
                          / Math.Sqrt(2.0));
    }

    if (intResult < 0)
    {
        resultValue = 0;
    }
    else if (intResult > 255)
    {
        resultValue = 255;
    }
    else
    {
        resultValue = (byte)intResult;
    }

    return resultValue;
}
```

The bulk of the ColorCalculator.Calculate method’s implementation is set around a series of if/else if statements evaluating the calculationType parameter passed when the method was invoked.

Colour component values can only range from 0 to 255 inclusive. Calculations performed might result in values which do not fall within this valid range. Calculated values less than zero are set to zero and values exceeding 255 are set to 255; this is sometimes referred to as clamping.

The following sections of this article provide an explanation of each type of Image Arithmetic implemented.

### Image Arithmetic: Add

```
if (calculationType == ColorCalculationType.Add)
{
    intResult = color1 + color2;
}
```

The Add algorithm is straightforward, simply adding together the two colour component values. In other words, the resulting colour component will be set to the sum of both source colour components, provided the total does not exceed 255.

Sample Image

### Image Arithmetic: Average

```
if (calculationType == ColorCalculationType.Average)
{
    intResult = (color1 + color2) / 2;
}
```

The Average algorithm calculates a simple average by adding together the two colour components and then dividing the result by two.

Sample Image

### Image Arithmetic: SubtractLeft

```
if (calculationType == ColorCalculationType.SubtractLeft)
{
    intResult = color1 - color2;
}
```

The SubtractLeft algorithm subtracts the value of the second colour component parameter from the first colour component parameter.

Sample Image

### Image Arithmetic: SubtractRight

```
if (calculationType == ColorCalculationType.SubtractRight)
{
    intResult = color2 - color1;
}
```

The SubtractRight algorithm, in contrast to SubtractLeft, subtracts the value of the first colour component parameter from the second colour component parameter.

Sample Image

### Image Arithmetic: Difference

```
if (calculationType == ColorCalculationType.Difference)
{
    intResult = Math.Abs(color1 - color2);
}
```

The Difference algorithm subtracts the value of the second colour component parameter from the first colour component parameter. By passing the result of the subtraction to the Math.Abs method the algorithm ensures only absolute/positive values are produced. In other words, the algorithm calculates the difference in value between the two colour component parameters.

Sample Image

### Image Arithmetic: Multiply

```
if (calculationType == ColorCalculationType.Multiply)
{
    intResult = (int)((color1 / 255.0 * color2 / 255.0) * 255.0);
}
```

The Multiply algorithm divides each colour component parameter by 255, multiplies the results of the two divisions, and then multiplies that result by 255 again.

Sample Image

### Image Arithmetic: Min

```
if (calculationType == ColorCalculationType.Min)
{
    intResult = (color1 < color2 ? color1 : color2);
}
```

The Min algorithm simply compares the two colour component parameters and returns the smallest value of the two.

Sample Image

### Image Arithmetic: Max

```
if (calculationType == ColorCalculationType.Max)
{
    intResult = (color1 > color2 ? color1 : color2);
}
```

The Max algorithm, as can be expected, will produce the exact opposite result when compared to the Min algorithm. This algorithm compares the two colour component parameters and returns the larger value of the two.

Sample Image

### Image Arithmetic: Amplitude

```
if (calculationType == ColorCalculationType.Amplitude)
{
    intResult = (int)(Math.Sqrt(color1 * color1 + color2 * color2)
                      / Math.Sqrt(2.0));
}
```

The Amplitude algorithm calculates the amplitude of the two colour component parameters by multiplying each colour component parameter by itself, adding the results together and calculating the square root of the sum. The last step divides the result thus far by the square root of two.
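A few worked values help show why the division by the square root of two is needed. The local Amplitude function below simply mirrors the expression used in the Calculate method; it is a standalone check, not part of the sample application:

```
using System;

// Worked check of the Amplitude formula: sqrt(c1² + c2²) / sqrt(2).
// Without the division by sqrt(2), two full-intensity components would
// produce sqrt(2) * 255, overflowing the valid 0..255 range.
int Amplitude(byte color1, byte color2)
{
    return (int)(Math.Sqrt(color1 * color1 + color2 * color2) / Math.Sqrt(2.0));
}

Console.WriteLine(Amplitude(30, 40));  // 35 - sqrt(2500)/sqrt(2) is about 35.36, truncated
Console.WriteLine(Amplitude(255, 0)); // 180 - 255/sqrt(2), truncated
Console.WriteLine(Amplitude(0, 0));   // 0
```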

Sample Image

### Article Purpose

In this article you’ll find a discussion on the topic of blending two images into a single image. Various methods can be employed to blend images. In this scenario image blending is achieved by means of bitwise operations, implemented on the individual colour components Red, Green and Blue.

### Bitwise Operations

In this article we will be implementing the following bitwise operators:

• & Binary And
• | Binary Or
• ^ Exclusive Binary Or (XOR)

A good description of how these operators work can be found on MSDN:

The bitwise-AND operator compares each bit of its first operand to the corresponding bit of its second operand. If both bits are 1, the corresponding result bit is set to 1. Otherwise, the corresponding result bit is set to 0.

The bitwise-exclusive-OR operator compares each bit of its first operand to the corresponding bit of its second operand. If one bit is 0 and the other bit is 1, the corresponding result bit is set to 1. Otherwise, the corresponding result bit is set to 0.

The bitwise-inclusive-OR operator compares each bit of its first operand to the corresponding bit of its second operand. If either bit is 1, the corresponding result bit is set to 1. Otherwise, the corresponding result bit is set to 0.
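The per-bit behaviour described above can be demonstrated on two example colour component values. The values below are purely illustrative:

```
using System;

// The three bitwise operators applied to two example colour component
// values, shown in binary to make the per-bit comparison visible.
byte a = 0b1100_1010;  // 202
byte b = 0b1010_0110;  // 166

int andResult = a & b;  // 1 only where both bits are 1
int orResult  = a | b;  // 1 where either bit is 1
int xorResult = a ^ b;  // 1 where exactly one bit is 1

Console.WriteLine(Convert.ToString(andResult, 2).PadLeft(8, '0')); // 10000010 (130)
Console.WriteLine(Convert.ToString(orResult, 2).PadLeft(8, '0'));  // 11101110 (238)
Console.WriteLine(Convert.ToString(xorResult, 2).PadLeft(8, '0')); // 01101100 (108)
```

Applied to every colour component of every pixel, these small per-bit decisions are what produce the blended output images shown later in this article.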

### Using the sample Application

Included with this article is a Visual Studio solution containing sample source code and a sample application. The Bitwise Bitmap Blending sample application allows the user to select two input/source images from the local file system. Selected source images, once specified, are displayed as previews, with the majority of the application front end being occupied by an output image.

The following image is a screenshot of the Bitwise Bitmap Blending application in action:

If the user decides to, blended images can be saved to the local file system by clicking the Save button.

### The BitwiseBlend Extension method

The sample source code provides the definition for the BitwiseBlend extension method. The method’s declaration indicates it being an extension method targeting the Bitmap class.

The BitwiseBlend method requires four parameters: the Bitmap being blended with and three parameters, all of type BitwiseBlendType. The BitwiseBlendType enumeration defines the available blending types in regard to bitwise operations. The following code snippet provides the definition of the BitwiseBlendType enum:

```
public enum BitwiseBlendType
{
    None,
    Or,
    And,
    Xor
}
```

The three BitwiseBlendType parameters relate to a pixel’s colour components: Red, Green and Blue.

The code snippet below details the implementation of the BitwiseBlend Extension method:

```
public static Bitmap BitwiseBlend(this Bitmap sourceBitmap, Bitmap blendBitmap,
                                  BitwiseBlendType blendTypeBlue,
                                  BitwiseBlendType blendTypeGreen,
                                  BitwiseBlendType blendTypeRed)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                            ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    BitmapData blendData = blendBitmap.LockBits(new Rectangle(0, 0,
                           blendBitmap.Width, blendBitmap.Height),
                           ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] blendBuffer = new byte[blendData.Stride * blendData.Height];
    Marshal.Copy(blendData.Scan0, blendBuffer, 0, blendBuffer.Length);
    blendBitmap.UnlockBits(blendData);

    int blue = 0, green = 0, red = 0;

    for (int k = 0; (k + 4 < pixelBuffer.Length) &&
                    (k + 4 < blendBuffer.Length); k += 4)
    {
        if (blendTypeBlue == BitwiseBlendType.And)
        {
            blue = pixelBuffer[k] & blendBuffer[k];
        }
        else if (blendTypeBlue == BitwiseBlendType.Or)
        {
            blue = pixelBuffer[k] | blendBuffer[k];
        }
        else if (blendTypeBlue == BitwiseBlendType.Xor)
        {
            blue = pixelBuffer[k] ^ blendBuffer[k];
        }

        if (blendTypeGreen == BitwiseBlendType.And)
        {
            green = pixelBuffer[k + 1] & blendBuffer[k + 1];
        }
        else if (blendTypeGreen == BitwiseBlendType.Or)
        {
            green = pixelBuffer[k + 1] | blendBuffer[k + 1];
        }
        else if (blendTypeGreen == BitwiseBlendType.Xor)
        {
            green = pixelBuffer[k + 1] ^ blendBuffer[k + 1];
        }

        if (blendTypeRed == BitwiseBlendType.And)
        {
            red = pixelBuffer[k + 2] & blendBuffer[k + 2];
        }
        else if (blendTypeRed == BitwiseBlendType.Or)
        {
            red = pixelBuffer[k + 2] | blendBuffer[k + 2];
        }
        else if (blendTypeRed == BitwiseBlendType.Xor)
        {
            red = pixelBuffer[k + 2] ^ blendBuffer[k + 2];
        }

        if (blue < 0) { blue = 0; }
        else if (blue > 255) { blue = 255; }

        if (green < 0) { green = 0; }
        else if (green > 255) { green = 255; }

        if (red < 0) { red = 0; }
        else if (red > 255) { red = 255; }

        pixelBuffer[k] = (byte)blue;
        pixelBuffer[k + 1] = (byte)green;
        pixelBuffer[k + 2] = (byte)red;
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(pixelBuffer, 0, resultData.Scan0, pixelBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
```

All image manipulation tasks performed by the BitwiseBlend extension method are implemented by directly accessing a Bitmap’s underlying raw pixel data.

A Bitmap first needs to be locked in memory by invoking the LockBits method. Once the Bitmap object has been locked in memory, the method instantiates a byte array representing a pixel data buffer. Each element present in the data buffer reflects an individual colour component: Alpha, Red, Green or Blue.

Take note: Colour component ordering is opposite to the expected ordering. Colour components are ordered: Blue, Green, Red, Alpha. The short explanation for reverse ordering can be attributed to Little Endian CPU architecture and Blue being represented by the least significant bits of a pixel.

In order to perform bitwise operations on each pixel of the specified images, the sample source code employs a for loop iterating both data buffers. The possibility exists that the two Bitmaps specified might not have the same size dimensions. Notice how the for loop defines two conditional statements, preventing the loop from iterating past the bounds of the smaller data buffer.

Did you notice how the for loop increments the defined counter by four at each loop operation? The reasoning being that every four elements of the data buffer represents a pixel, being composed of: Blue, Green, Red and Alpha. Iterating four elements per iteration thus allows us to manipulate all the colour components of a pixel.

The operations performed within the for loop are fairly straightforward. The source code checks to determine which type of bitwise operation to implement per colour component. Colour components can only range from 0 to 255 inclusive; we therefore perform range checking before assigning calculated values back to the data buffer.

The final step performed involves creating a new resulting Bitmap object and populating the new Bitmap with the updated pixel data buffer.

### Sample Images

In generating the sample images two source images were specified, a sunflower and bouquet of roses. The sunflower image has been released into the public domain and can be downloaded from Wikipedia. The bouquet of roses image has been licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic license and can be downloaded from  Wikipedia.

The Original Images
The Blended Images

### Article Purpose

Adjusting the contrast of an image is a fairly common task in image processing. This article explores the steps involved in adjusting image contrast by directly manipulating image pixels.

### What is Image Contrast?

Contrast within an image results in differences in colour and brightness being perceived. The greater the difference between colours and brightness in an image, the greater the chance that image elements will be perceived as distinct.

From Wikipedia we learn the following:

Contrast is the difference in luminance and/or colour that makes an object (or its representation in an image or display) distinguishable. In visual perception of the real world, contrast is determined by the difference in the colour and brightness of the object and other objects within the same field of view. Because the human visual system is more sensitive to contrast than absolute luminance, we can perceive the world similarly regardless of the huge changes in illumination over the day or from place to place.

### Using the sample Application

The sample source code that accompanies this article includes a sample application, which can be used to implement, test and illustrate the concept of Image Contrast.

The Image Contrast sample application enables the user to load a source image from the local file system. Once a source image has been loaded, the contrast can be adjusted by dragging the contrast threshold trackbar control. Threshold values range from –100 to 100 inclusive, where positive values increase image contrast and negative values decrease image contrast. A threshold value of 0 results in no change.

The following image is a screenshot of the Image Contrast sample application in action:

### The Contrast Extension Method

The sample source code provides the definition for the Contrast extension method. The method has been defined as an extension method targeting the Bitmap class.

The following code snippet details the implementation of the Contrast extension method:

```
public static Bitmap Contrast(this Bitmap sourceBitmap, int threshold)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                            ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double contrastLevel = Math.Pow((100.0 + threshold) / 100.0, 2);

    double blue = 0;
    double green = 0;
    double red = 0;

    for (int k = 0; k + 4 < pixelBuffer.Length; k += 4)
    {
        blue = ((((pixelBuffer[k] / 255.0) - 0.5) *
                contrastLevel) + 0.5) * 255.0;

        green = ((((pixelBuffer[k + 1] / 255.0) - 0.5) *
                 contrastLevel) + 0.5) * 255.0;

        red = ((((pixelBuffer[k + 2] / 255.0) - 0.5) *
               contrastLevel) + 0.5) * 255.0;

        if (blue > 255)
        { blue = 255; }
        else if (blue < 0)
        { blue = 0; }

        if (green > 255)
        { green = 255; }
        else if (green < 0)
        { green = 0; }

        if (red > 255)
        { red = 255; }
        else if (red < 0)
        { red = 0; }

        pixelBuffer[k] = (byte)blue;
        pixelBuffer[k + 1] = (byte)green;
        pixelBuffer[k + 2] = (byte)red;
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(pixelBuffer, 0, resultData.Scan0, pixelBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
```

In order to manipulate pixel colour component values directly we first need to lock the source Bitmap into memory by invoking the LockBits method. Once the source Bitmap is locked into memory we can copy the underlying pixel buffer using the Marshal.Copy method.

Based on the value of the threshold method parameter we calculate a contrast level. The formula implemented can be expressed as:

C = ((100.0 + T) / 100.0)²

Where C represents the calculated Contrast and T represents the variable threshold.

The next step involves iterating through the buffer of colour components. Notice how each iteration modifies an entire pixel by iterating by 4. The formula used in adjusting the contrast of a pixel’s colour components can be expressed as:

B = ( ( ( (B1 / 255.0) – 0.5) * C) + 0.5) * 255.0

G = ( ( ( (G1 / 255.0) – 0.5) * C) + 0.5) * 255.0

R = ( ( ( (R1 / 255.0) – 0.5) * C) + 0.5) * 255.0

In the formula the symbols B, G and R represent the contrast adjusted colour components Blue, Green and Red. B1, G1 and R1 represents the original values of the colour components Blue, Green and Red prior to being updated. The symbol C represents the contrast level calculated earlier.
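The formulas above can be worked through numerically. The sketch below uses a hypothetical Adjust helper (not part of the sample source code) that reproduces the per-component formula for a threshold of 50:

```
using System;

// Worked example of the contrast formulas above for a threshold of 50.
int threshold = 50;
double contrastLevel = Math.Pow((100.0 + threshold) / 100.0, 2); // C = 1.5² = 2.25

// B = ((((B1 / 255.0) - 0.5) * C) + 0.5) * 255.0: values above mid-grey
// are pushed further up, values below mid-grey are pushed further down,
// which is what increases the perceived contrast.
double Adjust(byte component) =>
    ((((component / 255.0) - 0.5) * contrastLevel) + 0.5) * 255.0;

Console.WriteLine(contrastLevel); // 2.25
Console.WriteLine(Adjust(191));   // about 270.4 - clamped to 255 by the Contrast method
Console.WriteLine(Adjust(64));    // about -15.4 - clamped to 0 by the Contrast method
```

Note how both sample values fall outside the valid 0 to 255 range, which is exactly why the Contrast method performs range checking after the calculation.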

Blue, Green and Red colour component values may only range from 0 to 255 inclusive. We therefore need to test whether the newly calculated values fall within the valid range of values.

The final operation performed by the Contrast method involves copying the modified pixel buffer into a newly created Bitmap object, which will be returned to the calling code.

### Sample Images

The original source used to create the sample images in this article has been licensed under the Creative Commons Attribution 2.0 Generic license. The original image is attributed to Luc Viatour and can be downloaded from Wikipedia. Luc Viatour’s website can be viewed at: http://www.lucnix.be

## Gravatar :: Dewald Esterhuizen
