## Posts Tagged 'Image Algorithms'

### Article Purpose

This article’s objective is to illustrate concepts relating to Compass Edge Detection. The methods implemented in this article include: Prewitt, Sobel, Scharr, Kirsch and Isotropic.

Wasp: Scharr 3 x 3 x 8

### Using the Sample Application

The sample source code accompanying this article includes a Windows Forms based sample application. When using the sample application users are able to load source/input images from, and save result images to, the local file system. The user interface provides a combobox containing the supported methods of Compass Edge Detection. Selecting an item from the combobox results in the related Compass Edge Detection method being applied to the current source/input image. Supported methods are:

• Prewitt3x3x4 – 3×3 Prewitt in 4 compass directions
• Prewitt3x3x8 – 3×3 Prewitt in 8 compass directions
• Prewitt5x5x4 – 5×5 Prewitt in 4 compass directions
• Sobel3x3x4 – 3×3 Sobel in 4 compass directions
• Sobel3x3x8 – 3×3 Sobel in 8 compass directions
• Sobel5x5x4 – 5×5 Sobel in 4 compass directions
• Scharr3x3x4 – 3×3 Scharr in 4 compass directions
• Scharr3x3x8 – 3×3 Scharr in 8 compass directions
• Scharr5x5x4 – 5×5 Scharr in 4 compass directions
• Kirsch3x3x4 – 3×3 Kirsch in 4 compass directions
• Kirsch3x3x8 – 3×3 Kirsch in 8 compass directions
• Isotropic3x3x4 – 3×3 Isotropic in 4 compass directions
• Isotropic3x3x8 – 3×3 Isotropic in 8 compass directions

The following image is a screenshot of the Compass Edge Detection Sample Application in action:

Bee: Isotropic 3 x 3 x 8

### Compass Edge Detection Overview

Compass Edge Detection as a concept title can be explained through the implementation of compass directions. Compass Edge Detection can be implemented through image convolution, using multiple kernels, each suited to detecting edges in a specific direction. Often the edge directions implemented are:

• North
• North East
• East
• South East
• South
• South West
• West
• North West

Each of the compass directions listed above differs by 45 degrees. Applying a rotation of 45 degrees to an existing direction-specific kernel results in a new kernel suited to detecting edges in the next compass direction.

Various kernel types can be implemented in Compass Edge Detection. This article and accompanying sample source code implement the following types: Prewitt, Sobel, Scharr, Kirsch and Isotropic.

Praying Mantis: Sobel 3 x 3 x 8

The steps required when implementing Compass Edge Detection can be described as follows:

1. Determine the compass kernels. When a kernel suited to a specific direction is known, the kernels suited to the 7 remaining compass directions can be calculated. Rotating a kernel by 45 degrees around a central axis equates to the kernel suited to the next compass direction. As an example, if the kernel suited to detecting edges in a northerly direction were rotated clockwise by 45 degrees around a central axis, the result would be a kernel suited to detecting edges in a north-easterly direction.
2. Iterate source image pixels. Every pixel forming part of the source/input image should be iterated, implementing convolution using each of the compass kernels.
3. Determine the most responsive kernel convolution. After having applied each compass kernel to the pixel currently being iterated, the most responsive compass kernel determines the output value. In other words, after having applied convolution eight times on the same pixel, once per compass direction, the output value should be set to the highest value calculated.
4. Validate and set the output result. Ensure that the highest value returned from convolution does not equate to less than 0 or more than 255. Should a value be less than zero the result should be assigned as zero. In a similar fashion, should a value exceed 255 the result should be assigned as 255.
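The four steps listed above can be sketched as follows, in Python for illustration only. The function name, the plain nested-list image representation, and the single-channel handling are my own assumptions, not the article's C# implementation:

```python
# Illustrative compass edge detection on a single-channel image.
# `image` is a 2D list of 0-255 values; `kernels` is a list of equally
# sized square 2D kernels, one per compass direction.

def compass_edge_detect(image, kernels, factor=1.0):
    h, w = len(image), len(image[0])
    ksize = len(kernels[0])
    pad = ksize // 2
    result = [[0] * w for _ in range(h)]

    for y in range(pad, h - pad):          # step 2: iterate pixels
        for x in range(pad, w - pad):
            best = 0.0
            for k in kernels:              # step 3: most responsive kernel
                acc = 0.0
                for ky in range(ksize):
                    for kx in range(ksize):
                        acc += image[y + ky - pad][x + kx - pad] * k[ky][kx]
                best = max(best, acc)
            # step 4: clamp to the valid 0-255 byte range
            result[y][x] = min(255, max(0, int(best * factor)))
    return result
```

A vertical intensity edge produces a maximal response from the horizontal-gradient kernel, while a flat region produces zero response from every direction.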

Prewitt Compass Kernels

LadyBug: Prewitt 3 x 3 x 8

### Rotating Convolution Kernels

Convolution kernels can be rotated by implementing a matrix rotation. Repeatedly rotating a kernel by 45 degrees results in calculating 8 kernels, each suited to detecting edges in a different direction. The algorithm implemented when performing a kernel rotation can be expressed as follows:

Rotate Horizontal Algorithm: x′ = Round((x − x₀) × cos θ − (y − y₀) × sin θ) + x₀

Rotate Vertical Algorithm: y′ = Round((x − x₀) × sin θ + (y − y₀) × cos θ) + y₀

In both expressions (x₀, y₀) indicates the kernel's central element and θ the angle of rotation expressed in radians.
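The kernel rotation described above can be sketched in Python; the helper name `rotate_kernel` is hypothetical and this is only an illustration of the coordinate arithmetic, not the sample application's code:

```python
import math

# Rotate a square kernel about its central element in fixed angular steps.
# Each cell is treated as a point, rotated, then rounded back onto the grid.

def rotate_kernel(base, degrees):
    size = len(base)
    off = size // 2
    steps = int(360 / degrees)
    kernels = [[[0.0] * size for _ in range(size)] for _ in range(steps)]

    for y in range(size):
        for x in range(size):
            for step in range(steps):
                radians = math.radians(step * degrees)
                rx = round((x - off) * math.cos(radians) -
                           (y - off) * math.sin(radians)) + off
                ry = round((x - off) * math.sin(radians) +
                           (y - off) * math.cos(radians)) + off
                kernels[step][ry][rx] = base[y][x]
    return kernels
```

Rotating the horizontal-gradient Prewitt kernel by 90 degrees yields the vertical-gradient kernel, as expected for the next cardinal direction.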

I’ve published an in-depth article on rotation available here:

Butterfly: Sobel 3 x 3 x 8

### Implementing Kernel Rotation

The sample source code defines the RotateMatrix method. This method accepts as a parameter a single kernel, defined as a two-dimensional array of type double. In addition the method also expects as a parameter the degree to which the specified kernel should be rotated. The definition as follows:

```public static double[, ,] RotateMatrix(double[,] baseKernel,
                                       double degrees)
{
    double[, ,] kernel = new double[(int)(360 / degrees),
        baseKernel.GetLength(0), baseKernel.GetLength(1)];

    int xOffset = baseKernel.GetLength(1) / 2;
    int yOffset = baseKernel.GetLength(0) / 2;

    for (int y = 0; y < baseKernel.GetLength(0); y++)
    {
        for (int x = 0; x < baseKernel.GetLength(1); x++)
        {
            for (int compass = 0; compass < kernel.GetLength(0); compass++)
            {
                double radians = compass * degrees * Math.PI / 180.0;

                int resultX = (int)(Math.Round((x - xOffset) *
                              Math.Cos(radians) - (y - yOffset) *
                              Math.Sin(radians)) + xOffset);

                int resultY = (int)(Math.Round((x - xOffset) *
                              Math.Sin(radians) + (y - yOffset) *
                              Math.Cos(radians)) + yOffset);

                kernel[compass, resultY, resultX] = baseKernel[y, x];
            }
        }
    }

    return kernel;
}```

Butterfly: Prewitt 3 x 3 x 8

### Implementing Compass Edge Detection

The sample source code defines several convolution kernels which are implemented in Compass Edge Detection. The following code snippet provides the definitions of all the kernels defined:

```public static double[, ,] Prewitt3x3x4
{
get
{
double[,] baseKernel = new double[,]
{ {  -1,  0,  1,  },
{  -1,  0,  1,  },
{  -1,  0,  1,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 90);

return kernel;
}
}

public static double[, ,] Prewitt3x3x8
{
get
{
double[,] baseKernel = new double[,]
{ {  -1,  0,  1,  },
{  -1,  0,  1,  },
{  -1,  0,  1,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 45);

return kernel;
}
}

public static double[, ,] Prewitt5x5x4
{
get
{
double[,] baseKernel = new double[,]
{ {  -2, -1,  0,  1, 2,  },
{  -2, -1,  0,  1, 2,  },
{  -2, -1,  0,  1, 2,  },
{  -2, -1,  0,  1, 2,  },
{  -2, -1,  0,  1, 2,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 90);

return kernel;
}
}

public static double[, ,] Kirsch3x3x4
{
get
{
double[,] baseKernel = new double[,]
{ {  -3, -3,  5,  },
{  -3,  0,  5,  },
{  -3, -3,  5,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 90);

return kernel;
}
}

public static double[, ,] Kirsch3x3x8
{
get
{
double[,] baseKernel = new double[,]
{ {  -3, -3,  5,  },
{  -3,  0,  5,  },
{  -3, -3,  5,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 45);

return kernel;
}
}

public static double[, ,] Sobel3x3x4
{
get
{
double[,] baseKernel = new double[,]
{ {  -1,  0,  1,  },
{  -2,  0,  2,  },
{  -1,  0,  1,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 90);

return kernel;
}
}

public static double[, ,] Sobel3x3x8
{
get
{
double[,] baseKernel = new double[,]
{ {  -1,  0,  1,  },
{  -2,  0,  2,  },
{  -1,  0,  1,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 45);

return kernel;
}
}

public static double[, ,] Sobel5x5x4
{
get
{
double[,] baseKernel = new double[,]
{ {   -5,  -4,  0,   4,  5,  },
{   -8, -10,  0,  10,  8,  },
{  -10, -20,  0,  20, 10,  },
{   -8, -10,  0,  10,  8,  },
{   -5,  -4,  0,   4,  5,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 90);

return kernel;
}
}

public static double[, ,] Scharr3x3x4
{
get
{
double[,] baseKernel = new double[,]
{ {  -1,  0,  1,  },
{  -3,  0,  3,  },
{  -1,  0,  1,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 90);

return kernel;
}
}

public static double[, ,] Scharr3x3x8
{
get
{
double[,] baseKernel = new double[,]
{ {  -1,  0,  1,  },
{  -3,  0,  3,  },
{  -1,  0,  1,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 45);

return kernel;
}
}

public static double[, ,] Scharr5x5x4
{
get
{
double[,] baseKernel = new double[,]
{ {   -1,  -1,  0,   1,  1,  },
{   -2,  -2,  0,   2,  2,  },
{   -3,  -6,  0,   6,  3,  },
{   -2,  -2,  0,   2,  2,  },
{   -1,  -1,  0,   1,  1,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 90);

return kernel;
}
}

public static double[, ,] Isotropic3x3x4
{
get
{
double[,] baseKernel = new double[,]
{ {             -1,  0,             1,  },
{  -Math.Sqrt(2),  0,  Math.Sqrt(2),  },
{             -1,  0,             1,  },  };

double[, ,] kernel = RotateMatrix(baseKernel, 90);

return kernel;
}
}

public static double[, ,] Isotropic3x3x8
{
get
{
double[,] baseKernel = new double[,]
{ {             -1,  0,             1,  },
{  -Math.Sqrt(2),  0,  Math.Sqrt(2),  },
{             -1,  0,             1,  }, };

double[, ,] kernel = RotateMatrix(baseKernel, 45);

return kernel;
}
}  ```

Notice how each property invokes the RotateMatrix method discussed in the previous section.

Butterfly: Scharr 3 x 3 x 8

The CompassEdgeDetectionFilter method is defined as an extension method targeting the Bitmap class. The purpose of this method is to act as a wrapper method encapsulating the technical implementation. The definition as follows:

```public static Bitmap CompassEdgeDetectionFilter(this Bitmap sourceBitmap,
CompassEdgeDetectionType compassType)
{
Bitmap resultBitmap = null;

switch (compassType)
{
case CompassEdgeDetectionType.Sobel3x3x4:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Sobel3x3x4, 1.0 / 4.0);
} break;
case CompassEdgeDetectionType.Sobel3x3x8:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Sobel3x3x8, 1.0 / 4.0);
} break;
case CompassEdgeDetectionType.Sobel5x5x4:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Sobel5x5x4, 1.0 / 84.0);
} break;
case CompassEdgeDetectionType.Prewitt3x3x4:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Prewitt3x3x4, 1.0 / 3.0);
} break;
case CompassEdgeDetectionType.Prewitt3x3x8:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Prewitt3x3x8, 1.0 / 3.0);
} break;
case CompassEdgeDetectionType.Prewitt5x5x4:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Prewitt5x5x4, 1.0 / 15.0);
} break;
case CompassEdgeDetectionType.Scharr3x3x4:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Scharr3x3x4, 1.0 / 4.0);
} break;
case CompassEdgeDetectionType.Scharr3x3x8:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Scharr3x3x8, 1.0 / 4.0);
} break;
case CompassEdgeDetectionType.Scharr5x5x4:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Scharr5x5x4, 1.0 / 21.0);
} break;
case CompassEdgeDetectionType.Kirsch3x3x4:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Kirsch3x3x4, 1.0 / 15.0);
} break;
case CompassEdgeDetectionType.Kirsch3x3x8:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Kirsch3x3x8, 1.0 / 15.0);
} break;
case CompassEdgeDetectionType.Isotropic3x3x4:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Isotropic3x3x4, 1.0 / 3.4);
} break;
case CompassEdgeDetectionType.Isotropic3x3x8:
{
resultBitmap =
sourceBitmap.ConvolutionFilter(Matrix.Isotropic3x3x8, 1.0 / 3.4);
} break;
}

return resultBitmap;
} ```

Rose: Scharr 3 x 3 x 8

Notice from the code snippet listed above, each case statement invokes the ConvolutionFilter method. This method has been defined as an extension method targeting the Bitmap class. The ConvolutionFilter method performs the actual task of image convolution: each kernel passed as a parameter is applied, and the highest result value determines the output value. The definition as follows:
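The buffer indexing used inside ConvolutionFilter (byteOffset = offsetY × stride + offsetX × 4) can be sketched minimally in Python; the helper below is illustrative only, not part of the sample code:

```python
# For a 32bpp ARGB bitmap each pixel occupies 4 bytes (Blue, Green, Red,
# Alpha in memory), and each scan line occupies `stride` bytes, so the
# pixel at column x, row y starts at byte index y * stride + x * 4.

def pixel_offset(x, y, stride):
    return y * stride + x * 4
```

For example, with a stride of 40 bytes (a 10-pixel-wide 32bpp image), the pixel at column 3, row 2 starts at byte 92.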

```private static Bitmap ConvolutionFilter(this Bitmap sourceBitmap,
double[,,] filterMatrix,
double factor = 1,
int bias = 0)
{
BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);

double blue = 0.0;
double green = 0.0;
double red = 0.0;

double blueCompass = 0.0;
double greenCompass = 0.0;
double redCompass = 0.0;

int filterWidth = filterMatrix.GetLength(1);
int filterHeight = filterMatrix.GetLength(0);

int filterOffset = (filterWidth-1) / 2;
int calcOffset = 0;

int byteOffset = 0;

for (int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for (int offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
blue = 0;
green = 0;
red = 0;

byteOffset = offsetY *
sourceData.Stride +
offsetX * 4;

for (int compass = 0; compass <
filterMatrix.GetLength(0); compass++)
{

blueCompass = 0.0;
greenCompass = 0.0;
redCompass = 0.0;

for (int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{
calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

blueCompass += (double)(pixelBuffer[calcOffset]) *
filterMatrix[compass,
filterY + filterOffset,
filterX + filterOffset];

greenCompass += (double)(pixelBuffer[calcOffset + 1]) *
filterMatrix[compass,
filterY + filterOffset,
filterX + filterOffset];

redCompass += (double)(pixelBuffer[calcOffset + 2]) *
filterMatrix[compass,
filterY + filterOffset,
filterX + filterOffset];
}
}

blue = (blueCompass > blue ? blueCompass : blue);
green = (greenCompass > green ? greenCompass : green);
red = (redCompass > red ? redCompass : red);
}

blue = factor * blue + bias;
green = factor * green + bias;
red = factor * red + bias;

if(blue > 255)
{ blue = 255; }
else if(blue < 0)
{ blue = 0; }

if(green > 255)
{ green = 255; }
else if(green < 0)
{ green = 0; }

if(red > 255)
{ red = 255; }
else if(red < 0)
{ red = 0; }

resultBuffer[byteOffset] = (byte)(blue);
resultBuffer[byteOffset + 1] = (byte)(green);
resultBuffer[byteOffset + 2] = (byte)(red);
resultBuffer[byteOffset + 3] = 255;
}
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;
} ```

Rose: Isotropic 3 x 3 x 8

### Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following images feature as sample images:

The Original Image

Butterfly: Isotropic 3 x 3 x 4

Butterfly: Isotropic 3 x 3 x 8

Butterfly: Kirsch 3 x 3 x 4

Butterfly: Kirsch 3 x 3 x 8

Butterfly: Prewitt 3 x 3 x 4

Butterfly: Prewitt 3 x 3 x 8

Butterfly: Prewitt 5 x 5 x 4

Butterfly: Scharr 3 x 3 x 4

Butterfly: Scharr 3 x 3 x 8

Butterfly: Scharr 5 x 5 x 4

Butterfly: Sobel 3 x 3 x 4

Butterfly: Sobel 3 x 3 x 8

Butterfly: Sobel 5 x 5 x 4

### Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

### Article Purpose

This article is focussed on illustrating the steps required in performing an image shear transformation. All of the concepts explored have been implemented by means of raw pixel data processing; no conventional drawing methods, such as GDI, are required.

Rabbit: Shear X 0.4, Y 0.4

### Using the Sample Application

This article features a Windows Forms based sample application which is included as part of the accompanying sample source code. The concepts explored in this article can be illustrated in a practical implementation using the sample application.

The sample application enables a user to load source/input images from the local file system when clicking the Load Image button. In addition users are also able to save output result images to the local file system by clicking the Save Image button.

Image shearing can be applied to either X or Y pixel coordinates, or to both X and Y. When using the sample application the user has the option of adjusting the shear factors, as indicated on the user interface by the numeric up/down controls labelled Shear X and Shear Y.

The following image is a screenshot of the Image Transform Shear Sample Application in action:

Rabbit: Shear X -0.5, Y -0.25

### Image Shear Transformation

A good definition of the term shear transformation can be found in the Wikipedia article:

In plane geometry, a shear mapping is a linear map that displaces each point in a fixed direction, by an amount proportional to its signed distance from a line that is parallel to that direction.[1] This type of mapping is also called shear transformation, transvection, or just shearing.

A shear transformation can be applied as a horizontal shear, a vertical shear, or as both. The algorithms implemented when performing a shear transformation can be expressed as follows:

Horizontal Shear Algorithm: Shear(x) = x + σ × y − (W × σ) / 2

Vertical Shear Algorithm: Shear(y) = y + σ × x − (H × σ) / 2

The algorithm description:

• Shear(x) : The result of a horizontal shear transformation – the calculated X-coordinate representing a sheared pixel coordinate.
• Shear(y) : The result of a vertical shear transformation – the calculated Y-coordinate representing a sheared pixel coordinate.
• σ : The lower case version of the Greek alphabet letter Sigma – represents the shear factor.
• x : The X-coordinate originating from the source/input image – the horizontal coordinate value intended to be sheared.
• y : The Y-coordinate originating from the source/input image – the vertical coordinate value intended to be sheared.
• H : Source image height in pixels.
• W : Source image width in pixels.

Note: When performing a shear transformation implementing both the horizontal and vertical planes, each coordinate plane can be calculated using a different shearing factor.

The algorithms have been adapted in order to implement a middle-pixel offset: the product of the related plane boundary (W or H) and the specified shearing factor is divided by two and subtracted from the calculated coordinate.
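The adapted shear mapping can be sketched in Python; the helper name and signature below are illustrative, not taken from the sample code:

```python
# Shear a coordinate (x, y) using the adapted algorithm: subtracting the
# offset of width * shear / 2 (or height * shear / 2) keeps the sheared
# image centred rather than pushed off one edge.

def shear_point(x, y, shear_x, shear_y, width, height):
    offset_x = round(width * shear_x / 2.0)
    offset_y = round(height * shear_y / 2.0)
    result_x = round(x + shear_x * y) - offset_x
    result_y = round(y + shear_y * x) - offset_y
    return result_x, result_y
```

With both shear factors set to zero the mapping reduces to the identity, mirroring the sample application's behaviour when a factor of zero is supplied.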

Rabbit: Shear X 1.0, Y 0.1

### Implementing a Shear Transformation

The sample source code performs shear transformations through the implementation of the ShearXY and ShearImage extension methods.

The ShearXY extension method targets the Point structure. The algorithms discussed in the previous sections have been implemented in this function from a C# perspective. The definition as illustrated by the following code snippet:

```public static Point ShearXY(this Point source, double shearX,
double shearY,
int offsetX,
int offsetY)
{
Point result = new Point();

result.X = (int)(Math.Round(source.X + shearX * source.Y));
result.X -= offsetX;

result.Y = (int)(Math.Round(source.Y + shearY * source.X));
result.Y -= offsetY;

return result;
} ```

Rabbit: Shear X 0.0, Y 0.5

The ShearImage extension method targets the Bitmap class. This method expects as parameter values a horizontal and a vertical shearing factor. Providing a shearing factor of zero results in no shearing being implemented in the corresponding direction. The definition as follows:

```public static Bitmap ShearImage(this Bitmap sourceBitmap,
double shearX,
double shearY)
{
BitmapData sourceData =
sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride *
sourceData.Height];

byte[] resultBuffer = new byte[sourceData.Stride *
sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0,
pixelBuffer.Length);

sourceBitmap.UnlockBits(sourceData);

int xOffset = (int)Math.Round(sourceBitmap.Width *
shearX / 2.0);

int yOffset = (int)Math.Round(sourceBitmap.Height *
shearY / 2.0);

int sourceXY = 0;
int resultXY = 0;

Point sourcePoint = new Point();
Point resultPoint = new Point();

Rectangle imageBounds = new Rectangle(0, 0,
sourceBitmap.Width,
sourceBitmap.Height);

for (int row = 0; row < sourceBitmap.Height; row++)
{
for (int col = 0; col < sourceBitmap.Width; col++)
{
sourceXY = row * sourceData.Stride + col * 4;

sourcePoint.X = col;
sourcePoint.Y = row;

if (sourceXY >= 0 &&
sourceXY + 3 < pixelBuffer.Length)
{
resultPoint = sourcePoint.ShearXY(shearX,
shearY, xOffset, yOffset);

resultXY = resultPoint.Y * sourceData.Stride +
resultPoint.X * 4;

if (imageBounds.Contains(resultPoint) &&
resultXY >= 0)
{
if (resultXY + 7 < resultBuffer.Length)
{
resultBuffer[resultXY + 4] =
pixelBuffer[sourceXY];

resultBuffer[resultXY + 5] =
pixelBuffer[sourceXY + 1];

resultBuffer[resultXY + 6] =
pixelBuffer[sourceXY + 2];

resultBuffer[resultXY + 7] = 255;
}

if (resultXY - 4 >= 0)
{
resultBuffer[resultXY - 4] =
pixelBuffer[sourceXY];

resultBuffer[resultXY - 3] =
pixelBuffer[sourceXY + 1];

resultBuffer[resultXY - 2] =
pixelBuffer[sourceXY + 2];

resultBuffer[resultXY - 1] = 255;
}

if (resultXY + 3 < resultBuffer.Length)
{
resultBuffer[resultXY] =
pixelBuffer[sourceXY];

resultBuffer[resultXY + 1] =
pixelBuffer[sourceXY + 1];

resultBuffer[resultXY + 2] =
pixelBuffer[sourceXY + 2];

resultBuffer[resultXY + 3] = 255;
}
}
}
}
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
sourceBitmap.Height);

BitmapData resultData =
resultBitmap.LockBits(new Rectangle(0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0,
resultBuffer.Length);

resultBitmap.UnlockBits(resultData);

return resultBitmap;
} ```

Rabbit: Shear X 0.5, Y 0.0

### Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction.

The sample images featuring an Eastern Cottontail Rabbit have been released into the public domain by the image's author. The original image can be downloaded from .

The sample images featuring a Mountain Cottontail Rabbit are in the public domain in the United States because the original image is a work prepared by an officer or employee of the United States Government as part of that person’s official duties under the terms of Title 17, Chapter 1, Section 105 of the US Code. The original image can be downloaded from .

Rabbit: Shear X 1.0, Y 0.0

Rabbit: Shear X 0.5, Y 0.1

Rabbit: Shear X -0.5, Y -0.25

Rabbit: Shear X -0.5, Y 0.0

Rabbit: Shear X 0.25, Y 0.0

Rabbit: Shear X 0.50, Y 0.0

Rabbit: Shear X 0.0, Y 0.5

Rabbit: Shear X 0.0, Y 0.25

Rabbit: Shear X 0.0, Y 1.0

### Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

### Article Purpose

The intention of this article is to explain and illustrate the various possible combinations that can be implemented when swapping the underlying colour channels related to a Bitmap image. The concepts explained can easily be replicated by making use of the included sample application.

### Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

### Using the sample Application

The sample application associated with this article allows the user to select a source image and apply a colour shifting option. The user is provided with the option to save the resulting new image to disk. Below is a screenshot of the Bitmap ARGB Swapping application in action:

The scenario illustrated above shows an image of flowers being transformed by swapping the underlying colour channels. In this case the ShiftLeft algorithm had been applied. The original image, which is licensed to permit reproduction, can be downloaded from Wikipedia.

### Types of Colour Swapping

The sample source code defines the type ColorSwapType, an enum which represents the possible combinations of colour channel swapping that can be applied to a Bitmap. The source code extract below provides the definition of the ColorSwapType enum:

```public enum ColorSwapType
{
ShiftRight,
ShiftLeft,
SwapBlueAndRed,
SwapBlueAndGreen,
SwapRedAndGreen,
}```

When directly manipulating a Bitmap object’s pixel values an important detail should be noted: Bitmap colour channels in memory are represented in the order Blue, Green, Red and Alpha, despite being commonly referred to by the abbreviation ARGB!

The following list describes each colour swapping type’s outcome:

• ShiftRight: Starting at Blue, each colour’s value is set to the colour channel to the right. The value of Blue is applied to Red, Red’s original value applied to Green, Green’s original value applied to Blue.
• ShiftLeft: Starting at Blue, each colour’s value is set to the colour channel to the left. The value of Blue is applied to Green, Green’s original value applied to Red, Red’s original value applied to Blue.
• SwapBlueAndRed: The value of the Blue channel is applied to the Red channel and the original value of the Red channel is then applied to the Blue channel. The value of the Green channel remains unchanged.
• SwapBlueAndGreen: The value of the Blue channel is applied to the Green channel and the original value of the Green channel is then applied to the Blue channel. The value of the Red  channel remains unchanged.
• SwapRedAndGreen: The value of the Red channel is applied to the Green channel and the original value of the Green channel is then applied to the Red channel. The value of the Blue channel remains unchanged.
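Each swap type can be sketched on a single blue/green/red triple, here in illustrative Python; the channel order mirrors the in-memory Bitmap layout noted above, and the helper itself is hypothetical:

```python
# Apply one of the colour-swap types to a single BGR pixel.
# Returns a (blue, green, red) tuple in the same in-memory order.

def swap_channels(b, g, r, swap_type):
    swaps = {
        "ShiftRight":       (g, r, b),  # Blue->Red, Red->Green, Green->Blue
        "ShiftLeft":        (r, b, g),  # Blue->Green, Green->Red, Red->Blue
        "SwapBlueAndRed":   (r, g, b),  # Green unchanged
        "SwapBlueAndGreen": (g, b, r),  # Red unchanged
        "SwapRedAndGreen":  (b, r, g),  # Blue unchanged
    }
    return swaps[swap_type]
```

Note that applying ShiftRight followed by ShiftLeft returns a pixel to its original values, since the two rotations are inverses of each other.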

### The Colour Swap Filter

The sample source code defines the ColorSwapFilter class. This class provides several member properties, which in combination represent the options involved in applying a colour swap filter. The source code snippet below provides the definition of the ColorSwapFilter type:

```public class ColorSwapFilter
{
private ColorSwapType swapType = ColorSwapType.ShiftRight;
public ColorSwapType SwapType
{
get{ return swapType;}
set{ swapType = value;}
}

private bool swapHalfColorValues = false;
public bool SwapHalfColorValues
{
get{ return swapHalfColorValues;}
set{ swapHalfColorValues = value;}
}

private bool invertColorsWhenSwapping = false;
public bool InvertColorsWhenSwapping
{
get{ return invertColorsWhenSwapping;}
set{ invertColorsWhenSwapping = value;}
}

public enum ColorSwapType
{
ShiftRight,
ShiftLeft,
SwapBlueAndRed,
SwapBlueAndGreen,
SwapRedAndGreen,
}
}```

The member properties defined by the ColorSwapFilter class:

• Implementing the ColorSwapType discussed earlier, the SwapType member property defines the type of colour channel swapping to apply.
• Before swapping colour channel values, colour values can be inverted depending on whether InvertColorsWhenSwapping is set to true.
• In order to reduce the intensity of the resulting image, the SwapHalfColorValues property should be set to true. The end result is that destination colour channels are set to 50% of the relevant source colour channel values.

### Applying the Colour Swap Filter

The sample source code accompanying this article defines the SwapColorsCopy method, an extension method targeting the Bitmap class. When invoking the SwapColorsCopy extension method, the calling code is required to specify an input image and an instance of the ColorSwapFilter class. By virtue of being an extension method, the input/source image is specified by the Bitmap object instance invoking the SwapColorsCopy method.

The source code listing below provides the definition of the SwapColorsCopy extension method.

```public static Bitmap SwapColorsCopy(this Bitmap originalImage, ColorSwapFilter swapFilterData)
{
BitmapData sourceData = originalImage.LockBits
(new Rectangle(0, 0, originalImage.Width, originalImage.Height),
ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);
originalImage.UnlockBits(sourceData);

byte sourceBlue = 0, resultBlue = 0,
sourceGreen = 0, resultGreen = 0,
sourceRed = 0, resultRed = 0;
byte byte2 = 2, maxValue = 255;

for (int k = 0; k < resultBuffer.Length; k += 4)
{
sourceBlue = resultBuffer[k];
sourceGreen = resultBuffer[k + 1];
sourceRed = resultBuffer[k + 2];

if (swapFilterData.InvertColorsWhenSwapping == true)
{
sourceBlue = (byte)(maxValue - sourceBlue);
sourceGreen = (byte)(maxValue - sourceGreen);
sourceRed = (byte)(maxValue - sourceRed);
}

if (swapFilterData.SwapHalfColorValues == true)
{
sourceBlue = (byte)(sourceBlue / byte2);
sourceGreen = (byte)(sourceGreen / byte2);
sourceRed = (byte)(sourceRed / byte2);
}

switch (swapFilterData.SwapType)
{
case ColorSwapFilter.ColorSwapType.ShiftRight:
{
resultBlue = sourceGreen;
resultRed = sourceBlue;
resultGreen = sourceRed;

break;
}
case ColorSwapFilter.ColorSwapType.ShiftLeft:
{
resultBlue = sourceRed;
resultRed = sourceGreen;
resultGreen = sourceBlue;

break;
}
case ColorSwapFilter.ColorSwapType.SwapBlueAndRed:
{
resultBlue = sourceRed;
resultRed = sourceBlue;

break;
}
case ColorSwapFilter.ColorSwapType.SwapBlueAndGreen:
{
resultBlue = sourceGreen;
resultGreen = sourceBlue;

break;
}
case ColorSwapFilter.ColorSwapType.SwapRedAndGreen:
{
resultRed = sourceGreen;
resultGreen = sourceRed;

break;
}
}

resultBuffer[k] = resultBlue;
resultBuffer[k + 1] = resultGreen;
resultBuffer[k + 2] = resultRed;
}

Bitmap resultBitmap = new Bitmap(originalImage.Width, originalImage.Height,
PixelFormat.Format32bppArgb);

BitmapData resultData = resultBitmap.LockBits
(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;
}```

Due to the architecture and implementation of the .NET Framework, when manipulating a Bitmap object’s underlying colour values we need to ensure that the relevant data buffer is locked in memory. Invoking the Bitmap class’ LockBits method prevents the Garbage Collector from shifting and updating memory references. Once a Bitmap’s underlying pixel buffer has been locked in memory the source code creates a data buffer of type byte array and then copies the Bitmap’s underlying pixel buffer data.

```BitmapData sourceData = originalImage.LockBits
(new Rectangle(0, 0, originalImage.Width, originalImage.Height),
ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);
originalImage.UnlockBits(sourceData);```

The sample source code next iterates the pixel buffer array. Notice how the for loop increments by 4 with each loop. Every four elements of the data buffer in combination represents one pixel, each colour channel expressed as a value ranging from 0 to 255 inclusive.

`for (int k = 0; k < resultBuffer.Length; k += 4)`

If required, each colour channel will first be inverted by subtracting its value from 255.

```if (swapFilterData.InvertColorsWhenSwapping == true)
{
sourceBlue = (byte)(maxValue - sourceBlue);
sourceGreen = (byte)(maxValue - sourceGreen);
sourceRed = (byte)(maxValue - sourceRed);
}```

When the supplied ColorSwapFilter object method parameter defines SwapHalfColorValues as true the source colour value will be divided by 2.

```if (swapFilterData.SwapHalfColorValues == true)
{
sourceBlue = (byte)(sourceBlue / byte2);
sourceGreen = (byte)(sourceGreen / byte2);
sourceRed = (byte)(sourceRed / byte2);
}
```

The next section implements a switch statement, each case implementing the required colour channel swap algorithm. The last step expressed as part of the for loop assigns the newly manipulated values to the data buffer.

The SwapColorsCopy extension method can be described as immutable in the sense that the input image remains unchanged; instead a copy of the input data is manipulated and returned. Following the data buffer iteration, the sample source creates a new instance of the Bitmap class and locks it into memory by invoking the LockBits method. By invoking the Marshal.Copy method the source code copies the data buffer to the underlying pixel buffer associated with the newly created Bitmap object.

``` Bitmap resultBitmap = new Bitmap(originalImage.Width, originalImage.Height,
PixelFormat.Format32bppArgb);

BitmapData resultData = resultBitmap.LockBits
(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;```

### The implementation: a Windows Forms Application

The sample source code accompanying this article defines a Windows Forms application, the intention of which being to illustrate a test implementation. The following series of images were created using the sample application:

The source/input image is licenced under the ; the original image can be downloaded from Wikipedia.

The Original Image

The ShiftLeft Colour Swapping algorithm:

Inverted:

The ShiftRight Colour Swapping algorithm:

Inverted:

The SwapBlueAndGreen Colour Swapping algorithm:

Inverted:

The SwapBlueAndRed Colour Swapping algorithm:

Inverted:

The SwapRedAndGreen Colour Swapping algorithm:

Inverted:

### Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; you can find URL links to them here:

### Article Purpose

This article is aimed at detailing how to implement the process of substituting the colour values that form part of an image. Colour substitution is implemented by means of a threshold value. By implementing a threshold, a range of similar colours can be substituted.

### Using the Sample Application

The provided sample source code builds a Windows Forms application which can be used to test/implement the concepts described in this article. The sample application enables the user to load an image file from the file system; the user can then specify the colour to replace, the replacement colour and the threshold to apply. The following image is a screenshot of the sample application in action.

The scenario detailed in the above screenshot shows the sample application being used to create an image where the sky has more of a bluish hue when compared to the original image.

Notice how the replacement colour does not simply appear as a solid colour applied throughout. The replacement colour is implemented matching the intensity of the colour being substituted.

The colour filter options:

The colour to replace was taken from the original image; the replacement colour is specified through a colour picker dialog. When a user clicks on either image displayed, the colour of the pixel clicked on sets the value of the replacement colour. By adjusting the threshold value the user can specify how wide or narrow the range of colours to replace should be: the higher the threshold value, the wider the range of colours that will be replaced.

The resulting image can be saved by clicking the “Save Result” button. In order to apply another colour substitution on the resulting image click the button labelled “Set Result as Source”.

### Colour Substitution Filter Data

The sample source code provides the definition for the ColorSubstitutionFilter class. The purpose of this class is to contain data required when applying colour substitution. The ColorSubstitutionFilter class is defined as follows:

```public class ColorSubstitutionFilter
{
    private int thresholdValue = 10;
    public int ThresholdValue
    {
        get { return thresholdValue; }
        set { thresholdValue = value; }
    }

    private Color sourceColor = Color.White;
    public Color SourceColor
    {
        get { return sourceColor; }
        set { sourceColor = value; }
    }

    private Color newColor = Color.White;
    public Color NewColor
    {
        get { return newColor; }
        set { newColor = value; }
    }
}```

To implement a colour substitution filter we first have to create an object instance of type ColorSubstitutionFilter. A colour substitution requires specifying a SourceColor, which is the colour to replace/substitute, and a NewColor, which defines the colour that will replace the SourceColor. Also required is a ThresholdValue, which determines a range of colours based on the SourceColor.
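For readers more at home in Python, the filter data class can be mirrored as a small dataclass. This is an illustrative analogue for the article's C# class, not part of the sample source code; RGB tuples stand in for the .NET Color type, and the defaults match the C# definition.

```python
from dataclasses import dataclass

@dataclass
class ColorSubstitutionFilter:
    """Python analogue of the article's C# ColorSubstitutionFilter."""
    threshold_value: int = 10               # matches the C# default of 10
    source_color: tuple = (255, 255, 255)   # (R, G, B); Color.White default
    new_color: tuple = (255, 255, 255)      # (R, G, B); Color.White default
```

An instance created without arguments carries the same defaults as the C# class, and individual values can be overridden at construction time.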

### Colour Substitution implemented as an Extension method

The sample source code defines the ColorSubstitution extension method, which targets the Bitmap class. Invoking ColorSubstitution requires passing a parameter of type ColorSubstitutionFilter, which defines how colour substitution is to be implemented. The following code snippet contains the definition of the ColorSubstitution method.

```public static Bitmap ColorSubstitution(this Bitmap sourceBitmap, ColorSubstitutionFilter filterData)
{
    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height, PixelFormat.Format32bppArgb);

    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height),
                                                  ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height),
                                                  ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    byte[] resultBuffer = new byte[resultData.Stride * resultData.Height];
    Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);

    sourceBitmap.UnlockBits(sourceData);

    byte sourceRed = 0, sourceGreen = 0, sourceBlue = 0, sourceAlpha = 0;
    int resultRed = 0, resultGreen = 0, resultBlue = 0;

    byte newRedValue = filterData.NewColor.R;
    byte newGreenValue = filterData.NewColor.G;
    byte newBlueValue = filterData.NewColor.B;

    byte redFilter = filterData.SourceColor.R;
    byte greenFilter = filterData.SourceColor.G;
    byte blueFilter = filterData.SourceColor.B;

    byte minValue = 0;
    byte maxValue = 255;

    for (int k = 0; k < resultBuffer.Length; k += 4)
    {
        sourceAlpha = resultBuffer[k + 3];

        if (sourceAlpha != 0)
        {
            sourceBlue = resultBuffer[k];
            sourceGreen = resultBuffer[k + 1];
            sourceRed = resultBuffer[k + 2];

            if ((sourceBlue < blueFilter + filterData.ThresholdValue &&
                 sourceBlue > blueFilter - filterData.ThresholdValue) &&

                (sourceGreen < greenFilter + filterData.ThresholdValue &&
                 sourceGreen > greenFilter - filterData.ThresholdValue) &&

                (sourceRed < redFilter + filterData.ThresholdValue &&
                 sourceRed > redFilter - filterData.ThresholdValue))
            {
                resultBlue = blueFilter - sourceBlue + newBlueValue;

                if (resultBlue > maxValue)
                { resultBlue = maxValue; }
                else if (resultBlue < minValue)
                { resultBlue = minValue; }

                resultGreen = greenFilter - sourceGreen + newGreenValue;

                if (resultGreen > maxValue)
                { resultGreen = maxValue; }
                else if (resultGreen < minValue)
                { resultGreen = minValue; }

                resultRed = redFilter - sourceRed + newRedValue;

                if (resultRed > maxValue)
                { resultRed = maxValue; }
                else if (resultRed < minValue)
                { resultRed = minValue; }

                resultBuffer[k] = (byte)resultBlue;
                resultBuffer[k + 1] = (byte)resultGreen;
                resultBuffer[k + 2] = (byte)resultRed;
                resultBuffer[k + 3] = sourceAlpha;
            }
        }
    }

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}```

The ColorSubstitution method can be labelled as immutable due to its implementation. Being immutable implies that the source/input data will not be modified; instead a new Bitmap instance will be created, reflecting the source data as modified by the operations performed in the particular method.

The first statement defined in the ColorSubstitution method body instantiates a new Bitmap, matching the size dimensions of the source Bitmap object. Next the method invokes the LockBits method on the source and result Bitmap instances. When invoking LockBits the underlying data representing a Bitmap will be locked in memory. Being locked in memory can also be described as signalling the Garbage Collector not to move the locked data around in memory. Invoking UnlockBits results in the Garbage Collector functioning as per normal, moving data in memory and updating the relevant memory references when required.

The source code continues by copying all the bytes representing the source Bitmap to an array of bytes that represents the resulting Bitmap. At this stage the source and result Bitmaps are exactly identical and as yet unmodified. In order to determine which pixels, based on colour, should be modified, the source code iterates through the byte array associated with the result Bitmap.

Notice how the for loop increments by 4 with each iteration. The underlying data represents a 32 bits per pixel Argb Bitmap, which equates to 8 bits/1 byte representing an individual colour component, either Alpha, Red, Green or Blue. Defining the for loop to increment by 4 results in each iteration advancing 4 bytes or 32 bits, in essence 1 pixel.

Within the for loop we determine if the colour expressed by the current pixel adjusted by the threshold value forms part of the colour range that should be updated. It is important to remember that an individual colour component is a byte value and can only be set to a value between 0 and 255 inclusive.
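The per-pixel decision can be condensed into a short Python sketch. This is an illustrative analogue of the C# loop above, not the article's code; a BGRA byte buffer is assumed, and the strict comparisons against `filter ± threshold` are expressed equivalently as `abs(source - filter) < threshold`.

```python
def substitute_color(buffer, source_rgb, new_rgb, threshold):
    """Apply the article's substitution rule to a BGRA byte buffer.

    A pixel is replaced only when each of its B, G and R components lies
    strictly within `threshold` of the matching source-colour component.
    The replacement preserves the pixel's intensity offset from the
    source colour, clamped to the valid byte range 0..255.
    """
    result = bytearray(buffer)
    src_r, src_g, src_b = source_rgb
    new_r, new_g, new_b = new_rgb
    for k in range(0, len(result), 4):
        if result[k + 3] == 0:        # fully transparent pixel: skip
            continue
        b, g, r = result[k], result[k + 1], result[k + 2]
        if (abs(b - src_b) < threshold and
                abs(g - src_g) < threshold and
                abs(r - src_r) < threshold):
            # offset the new colour by the pixel's distance from the
            # source colour, then clamp to the 0..255 byte range
            result[k]     = max(0, min(255, src_b - b + new_b))
            result[k + 1] = max(0, min(255, src_g - g + new_g))
            result[k + 2] = max(0, min(255, src_r - r + new_r))
    return bytes(result)
```

A pixel exactly equal to the source colour comes out as exactly the new colour, while nearby shades come out as correspondingly shifted shades of the new colour, which is why the substituted regions keep their original shading.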

### The Implementation

The ColorSubstitution method is implemented by the sample source code through a Windows Forms application. The ColorSubstitution method requires that the specified source Bitmap be formatted as a 32bpp Argb Bitmap. When the user loads a source image from the file system the sample application attempts to convert the selected file by invoking the Format32bppArgbCopy extension method, which targets the Bitmap class. The definition is as follows:

```public static Bitmap Format32bppArgbCopy(this Bitmap sourceBitmap)
{
    Bitmap copyBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height, PixelFormat.Format32bppArgb);

    using (Graphics graphicsObject = Graphics.FromImage(copyBitmap))
    {
        graphicsObject.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
        graphicsObject.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        graphicsObject.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
        graphicsObject.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;

        graphicsObject.DrawImage(sourceBitmap, new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height),
                                 new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), GraphicsUnit.Pixel);
    }

    return copyBitmap;
}```

### Colour Substitution Examples

The following section illustrates a few examples of colour substitution result images. The source image features Bellis perennis, also known as the common European Daisy (see Wikipedia). The image file is licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license. The original image can be downloaded here. The following image is a scaled-down version of the original:

Light Blue Colour Substitution

| Colour Component | Source Colour | Substitute Colour |
| --- | --- | --- |
| Red | 255 | 121 |
| Green | 223 | 188 |
| Blue | 224 | 255 |

Medium Blue Colour Substitution

| Colour Component | Source Colour | Substitute Colour |
| --- | --- | --- |
| Red | 255 | 34 |
| Green | 223 | 34 |
| Blue | 224 | 255 |

Medium Green Colour Substitution

| Colour Component | Source Colour | Substitute Colour |
| --- | --- | --- |
| Red | 255 | 0 |
| Green | 223 | 128 |
| Blue | 224 | 0 |

Purple Colour Substitution

| Colour Component | Source Colour | Substitute Colour |
| --- | --- | --- |
| Red | 255 | 128 |
| Green | 223 | 0 |
| Blue | 224 | 255 |

### Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; you can find URL links to them here:
