Posts Tagged 'Edge Detection'



C# How to: Weighted Difference of Gaussians

Article Purpose

It is the purpose of this article to illustrate the concept of a Weighted Difference of Gaussians. This article extends the conventional implementation of Difference of Gaussians edge detection algorithms through the application of equally sized Gaussian kernels differing only by a weight factor.

Frog: Kernel 5×5, Weight1 0.1, Weight2 2.1

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

This article relies on a sample application included as part of the accompanying sample source code. The sample application serves as a practical implementation of the concepts explored throughout this article.

The sample application user interface enables the user to configure and control the implementation of a Weighted Difference of Gaussians filter. The configuration options exposed through the sample application’s user interface can be detailed as follows:

  • Load/Save Images – When executing the sample application users are able to load source/input images from the local file system through clicking the Load Image button. If desired, the sample application enables users to save resulting images to the local file system through clicking the Save Image button.
  • Kernel Size – This option relates to the size of the Gaussian kernels that are to be implemented when performing edge detection through convolution. Smaller kernels are faster to compute and generally result in edges detected in the source/input image being expressed through thinner gradient edges. Larger kernels become computationally expensive as sizes increase. In addition, the edges detected in source/input images will generally be expressed as thicker gradient edges in resulting images.
  • Weight Values – The sample application calculates Gaussian kernels and in doing so implements a weight factor. A weight factor determines the blur intensity observed in result images after having applied Gaussian blur. Higher weight factors result in a more intense level of blurring being applied. As expected, lower weight factor values result in a less intense level of blurring being applied. If the value of the first weight factor exceeds the value of the second weight factor, resulting images will be generated with a Black background and edges being indicated in White. In a similar fashion, when the second weight factor value exceeds that of the first weight factor, resulting images will be generated with a White background and edges being indicated in Black. This inversion follows from the threshold implemented later in this article: the per-pixel difference between the two blur results is compared against the difference between the first and second weight values, which is negative whenever the second weight exceeds the first, causing most pixels to pass the threshold. The greater the difference between the first and second weight factor values, the greater the degree of image noise removal. When weight factor values only differ slightly, resulting images may be prone to image noise.

The following image is a screenshot of the Weighted Difference of Gaussians sample application in action:

Weighted Difference Of Gaussians Sample Application

Frog: Kernel 5×5, Weight1 1.8, Weight2 0.1

Gaussian Blur

The Gaussian blur algorithm can be described as one of the most popular and widely implemented methods of image blurring. From Wikipedia we gain the following excerpt:

A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function. It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales.

Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. This is also known as a two-dimensional Weierstrass transform.

Take Note: The Gaussian blur algorithm has the attribute of smoothing image detail/definition whilst also having an edge preservation attribute. When applying a Gaussian blur to an image, a level of detail/definition will be blurred/smoothed away, done in a fashion that tends to preserve edges.

Frog: Kernel 5×5, Weight1 2.7, Weight2 0.1

Difference of Gaussians Edge Detection

Difference of Gaussians edge detection refers to a specific method of image edge detection. Difference of Gaussians, commonly abbreviated as DoG, functions through the implementation of Gaussian blurring.

A clear and concise description can be found on the Wikipedia Article Page:

In imaging science, difference of Gaussians is a feature enhancement algorithm that involves the subtraction of one blurred version of an original image from another, less blurred version of the original. In the simple case of grayscale images, the blurred images are obtained by convolving the original grayscale images with Gaussian kernels having differing standard deviations. Blurring an image using a Gaussian kernel suppresses only high-frequency spatial information. Subtracting one image from the other preserves spatial information that lies between the range of frequencies that are preserved in the two blurred images. Thus, the difference of Gaussians is a band-pass filter that discards all but a handful of spatial frequencies that are present in the original grayscale image.

In a conventional sense Difference of Gaussians involves applying Gaussian blur to two images created as copies of the original source/input image. There must be a difference in the size of the kernels implemented when applying the blur. A typical example would be applying a 3×3 kernel on one copy whilst applying a 5×5 kernel on the other copy. The final step requires creating a result image populated by subtracting the two blurred copies. The values obtained from subtraction represent the edges forming part of the source/input image.
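For contrast, the conventional approach can be sketched as follows. Note that GaussianBlur and Subtract are hypothetical helper methods serving only to illustrate the sequence of operations; they do not form part of the accompanying sample source code:

// Conventional Difference of Gaussians: blur two copies of the source
// image using kernels of different sizes, then subtract the results.
Bitmap blurred3x3 = GaussianBlur(sourceBitmap, 3);
Bitmap blurred5x5 = GaussianBlur(sourceBitmap, 5);
Bitmap edges = Subtract(blurred3x3, blurred5x5);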

This article extends beyond the conventional method of implementing Difference of Gaussians. The implementation illustrated in this article retains the core concept of subtracting values which have been blurred to different intensities. The implementation method explored here differs from the conventional method in the sense that the implemented kernels do not differ in size. Both kernels are in fact required to have the same size dimensions.

The implemented kernels are equal in terms of their size dimensions, although kernel values differ. Expressed from another angle: equally sized kernels of which one represents a more intense level of blurring than the other. A kernel’s resulting blur intensity is determined by the weight factor implemented when calculating the kernel values.
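In terms of the sample source code, the two kernels could be created through the GaussianCalculator.Calculate method detailed later in this article, with only the weight parameter differing:

// Two kernels of identical size; only the weight factor differs.
// The larger weight produces the more intensely blurred result.
double[,] strongBlurKernel = GaussianCalculator.Calculate(5, 2.1);
double[,] weakBlurKernel = GaussianCalculator.Calculate(5, 0.1);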

Frog: Kernel 5×5, Weight1 3.7, Weight2 0.2

The advantages of implementing equally sized kernels can be described as follows:

Single Convolution implementation: Convolution involves executing several nested code loops. Application performance can be severely negatively impacted when executing large nested loops. The conventional method of implementing Difference of Gaussians generally involves having to implement two instances of convolution, once per blurred image copy. The method implemented in this article executes the convolution code loops only once. Considering the kernels are equal in size, both can be iterated within the same set of loops.

Eliminating Image subtraction: In conventional implementations, images expressing differing intensity levels of Gaussian blur have to be subtracted. The implementation method described in this article eliminates the need to perform image subtraction. When applying convolution using both kernels simultaneously, the two results obtained, one from each kernel, can be subtracted and assigned to the result image. In addition, calculating both results at the same time eliminates the need to create two temporary image copies. The essence of this single-pass approach is sketched below.
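The single-pass approach reduces to accumulating both kernel responses within the same neighbourhood loop and subtracting the two sums per pixel. The following minimal sketch operates on a plain grayscale byte array; the method name and parameters are illustrative only and do not appear in the sample source code:

// Sketch: single-pass Difference of Gaussians over a grayscale buffer.
// Both kernels must share the same dimensions. Neighbours falling
// outside the buffer are clamped to the nearest valid index.
private static byte[] SinglePassDoG(byte[] gray, int width,
                                    double[,] kernel1, double[,] kernel2,
                                    double threshold)
{
    byte[] result = new byte[gray.Length];
    int offset = kernel1.GetLength(0) / 2;

    for (int i = 0; i < gray.Length; i++)
    {
        double sum1 = 0, sum2 = 0;

        for (int fy = -offset; fy <= offset; fy++)
        {
            for (int fx = -offset; fx <= offset; fx++)
            {
                int sample = i + fx + fy * width;
                sample = Math.Max(0, Math.Min(gray.Length - 1, sample));

                // One neighbourhood read feeds both accumulators
                sum1 += gray[sample] * kernel1[fy + offset, fx + offset];
                sum2 += gray[sample] * kernel2[fy + offset, fx + offset];
            }
        }

        result[i] = (byte)(sum1 - sum2 >= threshold ? 255 : 0);
    }

    return result;
}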

Frog: Kernel 5×5, Weight1 2.4, Weight2 0.3

Difference of Gaussians Edge Detection Required Steps

When implementing Difference of Gaussians edge detection several steps are required; those steps are detailed as follows:

  1. Calculate Kernels – Before implementing convolution two Gaussian kernels have to be calculated. The calculated kernels are required to be of equal size, differing in blur intensity. The sample application allows the user to configure blur intensity through updating the weight values, expressed as Weight 1 and Weight 2.
  2. Convert Source Image to Grayscale – Applying convolution on grayscale images outperforms convolution on RGB images. When converting an RGB pixel to a grayscale pixel, colour components are combined to form a single gray level intensity. In other words, a grayscale image consists of a third of the number of bytes when compared to the RGB image from which it had been rendered. In the case of an ARGB image the derived grayscale image will be expressed in 25% of the number of bytes forming part of the source ARGB image. When applying convolution, the required number of processor cycles increases as the byte count increases. The conversion is sketched after this list.
  3. Perform Convolution Implementing Thresholds – Using the newly created grayscale image, perform convolution for both kernels calculated in the first step. The result value equates to subtracting the two results obtained from convolution. If the result value exceeds the difference between the first and second weight value, the resulting pixel should be set to White; if not, set the result pixel to Black.
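The grayscale conversion referenced in step 2 can be sketched as follows, using the same luminance weights implemented in the sample source code (0.11 blue, 0.59 green, 0.3 red). The helper name is illustrative; in the sample source code this conversion is performed inline within the DifferenceOfGaussianFilter method:

// Convert a 32bpp ARGB pixel buffer to one luminance byte per pixel.
// The alpha channel is ignored.
private static byte[] ToGrayscaleBuffer(byte[] argbBuffer)
{
    byte[] grayscaleBuffer = new byte[argbBuffer.Length / 4];

    for (int src = 0, dst = 0; src < argbBuffer.Length; src += 4, dst++)
    {
        grayscaleBuffer[dst] = (byte)(argbBuffer[src] * 0.11 +
                                      argbBuffer[src + 1] * 0.59 +
                                      argbBuffer[src + 2] * 0.3);
    }

    return grayscaleBuffer;
}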

Frog: Kernel 5×5, Weight1 2.1, Weight2 0.5

Calculating Gaussian Convolution Kernels

The sample application implements Gaussian kernel calculations. The kernels implemented in convolution are calculated at runtime, as opposed to being hard coded. Being able to dynamically construct kernels has the advantage of providing a greater degree of runtime control regarding the application of Gaussian blur.

Several steps are involved in calculating Gaussian kernels. The first required step is to determine the kernel size and weight. The size and weight factor of a kernel comprise the two configurable values implemented when calculating Gaussian kernels. In the case of this article and the sample application those values will be configured by the user through the sample application’s user interface.

The formula implemented in calculating Gaussian kernels can be expressed as follows:

G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))

The formula contains a number of symbols, which define how the filter will be implemented. The symbols forming part of the formula are described in the following list:

  • G(x, y) – A value calculated using the Gaussian kernel formula. This value forms part of a kernel, representing a single element.
  • π – Pi, one of the better known members of the Greek alphabet. The mathematical constant approximately equal to 3.14159.
  • σ – The lower case version of the Greek letter Sigma. This symbol represents the standard deviation, implemented in this article as the weight factor value specified by the user.
  • e – The formula references a lower case e symbol. The symbol represents Euler’s number, a mathematical constant equating to approximately 2.71828182846.
  • x, y – The variables referenced as x and y relate to coordinates within a kernel, expressed relative to the kernel’s centre. y represents the vertical offset or row and x represents the horizontal offset or column.

Note: The formula’s implementation expects x and y to equal zero when representing the coordinates of the element located in the middle of the kernel.

Frog: Kernel 7×7, Weight1 0.1, Weight2 2.0

Implementing Gaussian Kernel Calculations

The sample application defines the GaussianCalculator.Calculate method. This method accepts two parameters, kernel size and kernel weight. The following code snippet details the implementation:

public static double[,] Calculate(int length, double weight)
{
    double[,] kernel = new double[length, length];
    double sumTotal = 0;

    int kernelRadius = length / 2;
    double distance = 0;

    double calculatedEuler = 1.0 / (2.0 * Math.PI * Math.Pow(weight, 2));

    for (int filterY = -kernelRadius; filterY <= kernelRadius; filterY++)
    {
        for (int filterX = -kernelRadius; filterX <= kernelRadius; filterX++)
        {
            // Squared distance from the kernel centre, scaled by 2 * sigma^2
            distance = ((filterX * filterX) + (filterY * filterY)) /
                       (2 * (weight * weight));

            kernel[filterY + kernelRadius, filterX + kernelRadius] =
                calculatedEuler * Math.Exp(-distance);

            sumTotal += kernel[filterY + kernelRadius, filterX + kernelRadius];
        }
    }

    // Normalize the kernel so that all elements sum to 1
    for (int y = 0; y < length; y++)
    {
        for (int x = 0; x < length; x++)
        {
            kernel[y, x] = kernel[y, x] * (1.0 / sumTotal);
        }
    }

    return kernel;
}
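To illustrate the values produced, calculating a 3×3 kernel with a weight of 1.0 yields the following normalized elements (rounded to four decimal places; together they sum to 1):

// Example: GaussianCalculator.Calculate(3, 1.0) yields approximately
//
//   0.0751  0.1238  0.0751
//   0.1238  0.2042  0.1238
//   0.0751  0.1238  0.0751
//
double[,] kernel = GaussianCalculator.Calculate(3, 1.0);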

Frog: Kernel 3×3, Weight1 0.1, Weight2 1.8

Implementing Difference of Gaussians Edge Detection

The sample source code defines the DifferenceOfGaussianFilter method. This method has been defined as an extension method targeting the Bitmap class. The following code snippet provides the implementation:

public static Bitmap DifferenceOfGaussianFilter(this Bitmap sourceBitmap,
                                                int matrixSize, double weight1,
                                                double weight2)
{
    // kernel1 always receives the larger of the two weight values
    double[,] kernel1 = GaussianCalculator.Calculate(matrixSize,
                        (weight1 > weight2 ? weight1 : weight2));

    double[,] kernel2 = GaussianCalculator.Calculate(matrixSize,
                        (weight1 > weight2 ? weight2 : weight1));

    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                            ImageLockMode.ReadOnly,
                            PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] grayscaleBuffer = new byte[sourceData.Width * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double rgb = 0;

    // Convert the 32bpp ARGB source to one luminance byte per pixel
    for (int source = 0, dst = 0;
         source < pixelBuffer.Length && dst < grayscaleBuffer.Length;
         source += 4, dst++)
    {
        rgb = pixelBuffer[source] * 0.11f;
        rgb += pixelBuffer[source + 1] * 0.59f;
        rgb += pixelBuffer[source + 2] * 0.3f;

        grayscaleBuffer[dst] = (byte)rgb;
    }

    double color1 = 0.0;
    double color2 = 0.0;

    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0;

    for (int source = 0, dst = 0;
         source < grayscaleBuffer.Length && dst + 4 < resultBuffer.Length;
         source++, dst += 4)
    {
        color1 = 0;
        color2 = 0;

        // Convolve both kernels within the same neighbourhood loop
        for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
        {
            for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
            {
                calcOffset = source + filterX + (filterY * sourceBitmap.Width);

                // Clamp neighbours that fall outside the buffer
                calcOffset = (calcOffset < 0 ? 0 :
                             (calcOffset >= grayscaleBuffer.Length ?
                              grayscaleBuffer.Length - 1 : calcOffset));

                color1 += grayscaleBuffer[calcOffset] *
                          kernel1[filterY + filterOffset, filterX + filterOffset];

                color2 += grayscaleBuffer[calcOffset] *
                          kernel2[filterY + filterOffset, filterX + filterOffset];
            }
        }

        // Subtract the two blur results and threshold against the
        // difference between the two weight values
        color1 = color1 - color2;
        color1 = (color1 >= weight1 - weight2 ? 255 : 0);

        resultBuffer[dst] = (byte)color1;
        resultBuffer[dst + 1] = (byte)color1;
        resultBuffer[dst + 2] = (byte)color1;
        resultBuffer[dst + 3] = 255;
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
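The filter can be invoked as a standard extension method; the file names below are placeholders:

// Example usage, matching the first frog sample's configuration:
// kernel size 5, Weight 1 = 0.1, Weight 2 = 2.1
Bitmap sourceBitmap = new Bitmap("input.png");
Bitmap resultBitmap = sourceBitmap.DifferenceOfGaussianFilter(5, 0.1, 2.1);
resultBitmap.Save("result.png", ImageFormat.Png);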

Frog: Kernel 3×3, Weight1 2.1, Weight2 0.7

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following image files feature as sample images:

Panamanian Golden Frog

Dendropsophus Microcephalus

Tyler’s Tree Frog

Mimic Poison Frog

Phyllobates Terribilis

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:


C# How to: Compass Edge Detection

Article Purpose

This article’s objective is to illustrate concepts relating to Compass Edge Detection. The methods implemented in this article include: Prewitt, Sobel, Scharr, Kirsch and Isotropic.

Wasp: Scharr 3 x 3 x 8

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

The sample source code accompanying this article includes a Windows Forms based sample application. When using the sample application users are able to load source/input images from and save result images to the local file system. The user interface provides a ComboBox which contains the supported methods of Compass Edge Detection. Selecting an item from the ComboBox results in the related Compass Edge Detection method being applied to the current source/input image. Supported methods are:

  • Prewitt3x3x4 – 3×3 Prewitt kernel in 4 compass directions
  • Prewitt3x3x8 – 3×3 Prewitt kernel in 8 compass directions
  • Prewitt5x5x4 – 5×5 Prewitt kernel in 4 compass directions
  • Sobel3x3x4 – 3×3 Sobel kernel in 4 compass directions
  • Sobel3x3x8 – 3×3 Sobel kernel in 8 compass directions
  • Sobel5x5x4 – 5×5 Sobel kernel in 4 compass directions
  • Scharr3x3x4 – 3×3 Scharr kernel in 4 compass directions
  • Scharr3x3x8 – 3×3 Scharr kernel in 8 compass directions
  • Scharr5x5x4 – 5×5 Scharr kernel in 4 compass directions
  • Kirsch3x3x4 – 3×3 Kirsch kernel in 4 compass directions
  • Kirsch3x3x8 – 3×3 Kirsch kernel in 8 compass directions
  • Isotropic3x3x4 – 3×3 Isotropic kernel in 4 compass directions
  • Isotropic3x3x8 – 3×3 Isotropic kernel in 8 compass directions
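Each item listed above corresponds to a member of the CompassEdgeDetectionType enumeration consumed by the filter method detailed later in this article. Its definition is not reproduced here, but based on the switch statement implementation it amounts to the following:

public enum CompassEdgeDetectionType
{
    Prewitt3x3x4, Prewitt3x3x8, Prewitt5x5x4,
    Sobel3x3x4, Sobel3x3x8, Sobel5x5x4,
    Scharr3x3x4, Scharr3x3x8, Scharr5x5x4,
    Kirsch3x3x4, Kirsch3x3x8,
    Isotropic3x3x4, Isotropic3x3x8
}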

The following image is a screenshot of the Compass Edge Detection Sample Application in action:

Compass Edge Detection Sample Application

Bee: Isotropic 3 x 3 x 8

Compass Edge Detection Overview

Compass Edge Detection as a concept title can be explained through the implementation of compass directions. Compass Edge Detection can be implemented through convolution, using multiple kernels, each suited to detecting edges in a specific direction. Often the edge directions implemented are:

  • North
  • North East
  • East
  • South East
  • South
  • South West
  • West
  • North West

Adjacent compass directions listed above differ by 45 degrees. Applying a rotation of 45 degrees to an existing direction-specific kernel results in a new kernel suited to detecting edges in the next compass direction.

Various kernel types can be implemented in Compass Edge Detection. This article and accompanying sample source code implement the following kernel types: Prewitt, Sobel, Scharr, Kirsch and Isotropic.

Praying Mantis: Sobel 3 x 3 x 8

The steps required when implementing Compass Edge Detection can be described as follows:

  1. Determine the compass kernels. When a kernel suited to a specific direction is known, the kernels suited to the 7 remaining compass directions can be calculated. Rotating a kernel by 45 degrees around a central axis equates to the kernel suited to the next compass direction. As an example, if the kernel suited to detecting edges in a northerly direction were rotated clockwise by 45 degrees around a central axis, the result would be a kernel suited to detecting edges in a north-easterly direction.
  2. Iterate source image pixels. Every pixel forming part of the source/input image should be iterated, implementing convolution using each of the compass kernels.
  3. Determine the most responsive kernel convolution. After having applied each compass kernel to the pixel currently being iterated, the most responsive compass kernel determines the output value. In other words, after having applied convolution eight times on the same pixel, once per compass direction, the output value should be set to the highest value calculated.
  4. Validate and set output result. Ensure that the highest value returned from convolution does not equate to less than 0 or more than 255. Should a value be less than zero the result should be assigned as zero. In a similar fashion, should a value exceed 255 the result should be assigned as 255. A minimal sketch of steps 3 and 4 follows this list.
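The sketch below assumes the per-direction convolution results for a single pixel have already been calculated; the helper name is illustrative and does not appear in the sample source code:

// Given the convolution result for each compass direction, the output
// value is the strongest response, clamped to the valid byte range.
private static byte StrongestResponse(double[] compassResponses)
{
    double max = compassResponses[0];

    for (int i = 1; i < compassResponses.Length; i++)
    {
        max = Math.Max(max, compassResponses[i]);
    }

    return (byte)Math.Max(0, Math.Min(255, max));
}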

Prewitt Compass Kernels

LadyBug: Prewitt 3 x 3 x 8

Rotating Convolution Kernels

Convolution kernels can be rotated by implementing a matrix rotation. Repeatedly rotating a kernel by 45 degrees results in calculating 8 kernels, each suited to a different compass direction. The algorithm implemented when performing a matrix rotation can be expressed as follows:

Rotate Horizontal Algorithm:

resultX = round((x − xOffset) · cos θ − (y − yOffset) · sin θ) + xOffset

Rotate Vertical Algorithm:

resultY = round((x − xOffset) · sin θ + (y − yOffset) · cos θ) + yOffset

Here (xOffset, yOffset) denotes the kernel centre and θ the rotation angle expressed in radians, matching the RotateMatrix implementation detailed below.

I’ve published an in-depth article on matrix rotation, available here:

Butterfly: Sobel 3 x 3 x 8

Implementing Kernel Rotation

The sample source code defines the RotateMatrix method. This method accepts as a parameter a single kernel, defined as a two dimensional array of type double. In addition the method also expects as a parameter the degrees by which the specified kernel should be rotated. The definition as follows:

public static double[, ,] RotateMatrix(double[,] baseKernel,
                                       double degrees)
{
    double[, ,] kernel = new double[(int)(360 / degrees),
        baseKernel.GetLength(0), baseKernel.GetLength(1)];

    int xOffset = baseKernel.GetLength(1) / 2;
    int yOffset = baseKernel.GetLength(0) / 2;

    for (int y = 0; y < baseKernel.GetLength(0); y++)
    {
        for (int x = 0; x < baseKernel.GetLength(1); x++)
        {
            for (int compass = 0; compass < kernel.GetLength(0); compass++)
            {
                double radians = compass * degrees * Math.PI / 180.0;

                // Rotate the current coordinate around the kernel centre
                int resultX = (int)(Math.Round((x - xOffset) * Math.Cos(radians) -
                              (y - yOffset) * Math.Sin(radians)) + xOffset);

                int resultY = (int)(Math.Round((x - xOffset) * Math.Sin(radians) +
                              (y - yOffset) * Math.Cos(radians)) + yOffset);

                kernel[compass, resultY, resultX] = baseKernel[y, x];
            }
        }
    }

    return kernel;
}
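As a worked example, passing the horizontal Prewitt kernel through RotateMatrix with 45 degree steps produces eight kernels; the second of these (kernels[1]) is the base kernel rotated into the diagonal direction:

// kernels[0] is the base kernel, kernels[1] its 45 degree rotation:
//
//   base:  -1  0  1      45 degrees:  -1 -1  0
//          -1  0  1                   -1  0  1
//          -1  0  1                    0  1  1
//
double[,] prewitt = { { -1, 0, 1 }, { -1, 0, 1 }, { -1, 0, 1 } };
double[, ,] kernels = RotateMatrix(prewitt, 45);  // 360 / 45 = 8 kernels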

Butterfly: Prewitt 3 x 3 x 8

Implementing Compass Edge Detection

The sample source code defines several kernel properties which are implemented in performing Compass Edge Detection. The following code snippet provides the definition of all the kernels defined:

public static double[, ,] Prewitt3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -1,  0,  1, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Prewitt3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -1,  0,  1, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 45);
    }
}

public static double[, ,] Prewitt5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -2, -1,  0,  1,  2, },
          { -2, -1,  0,  1,  2, },
          { -2, -1,  0,  1,  2, },
          { -2, -1,  0,  1,  2, },
          { -2, -1,  0,  1,  2, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Kirsch3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -3, -3,  5, },
          { -3,  0,  5, },
          { -3, -3,  5, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Kirsch3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -3, -3,  5, },
          { -3,  0,  5, },
          { -3, -3,  5, }, };

        return RotateMatrix(baseKernel, 45);
    }
}

public static double[, ,] Sobel3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -2,  0,  2, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Sobel3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -2,  0,  2, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 45);
    }
}

public static double[, ,] Sobel5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { {  -5,  -4,   0,   4,   5, },
          {  -8, -10,   0,  10,   8, },
          { -10, -20,   0,  20,  10, },
          {  -8, -10,   0,  10,   8, },
          {  -5,  -4,   0,   4,   5, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Scharr3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -3,  0,  3, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Scharr3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -3,  0,  3, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 45);
    }
}

public static double[, ,] Scharr5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1, -1,  0,  1,  1, },
          { -2, -2,  0,  2,  2, },
          { -3, -6,  0,  6,  3, },
          { -2, -2,  0,  2,  2, },
          { -1, -1,  0,  1,  1, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Isotropic3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { {            -1,  0,            1, },
          { -Math.Sqrt(2),  0, Math.Sqrt(2), },
          {            -1,  0,            1, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Isotropic3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { {            -1,  0,            1, },
          { -Math.Sqrt(2),  0, Math.Sqrt(2), },
          {            -1,  0,            1, }, };

        return RotateMatrix(baseKernel, 45);
    }
}

Notice how each property invokes the RotateMatrix method discussed in the previous section.

Butterfly: Scharr 3 x 3 x 8

The CompassEdgeDetectionFilter method is defined as an extension method targeting the Bitmap class. The purpose of this method is to act as a wrapper method encapsulating the technical implementation. The definition as follows:

public static Bitmap CompassEdgeDetectionFilter(this Bitmap sourceBitmap,
                                    CompassEdgeDetectionType compassType)
{
    Bitmap resultBitmap = null;

    switch (compassType)
    {
        case CompassEdgeDetectionType.Sobel3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Sobel3x3x4, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Sobel3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Sobel3x3x8, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Sobel5x5x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Sobel5x5x4, 1.0 / 84.0);
            break;
        case CompassEdgeDetectionType.Prewitt3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Prewitt3x3x4, 1.0 / 3.0);
            break;
        case CompassEdgeDetectionType.Prewitt3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Prewitt3x3x8, 1.0 / 3.0);
            break;
        case CompassEdgeDetectionType.Prewitt5x5x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Prewitt5x5x4, 1.0 / 15.0);
            break;
        case CompassEdgeDetectionType.Scharr3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Scharr3x3x4, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Scharr3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Scharr3x3x8, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Scharr5x5x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Scharr5x5x4, 1.0 / 21.0);
            break;
        case CompassEdgeDetectionType.Kirsch3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Kirsch3x3x4, 1.0 / 15.0);
            break;
        case CompassEdgeDetectionType.Kirsch3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Kirsch3x3x8, 1.0 / 15.0);
            break;
        case CompassEdgeDetectionType.Isotropic3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Isotropic3x3x4, 1.0 / 3.4);
            break;
        case CompassEdgeDetectionType.Isotropic3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Isotropic3x3x8, 1.0 / 3.4);
            break;
    }

    return resultBitmap;
}
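Invoking the wrapper requires a single call; the file names below are placeholders:

// Example usage: Scharr kernels applied in 8 compass directions
Bitmap sourceBitmap = new Bitmap("input.png");
Bitmap resultBitmap = sourceBitmap.CompassEdgeDetectionFilter(
                          CompassEdgeDetectionType.Scharr3x3x8);
resultBitmap.Save("result.png", ImageFormat.Png);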

Rose: Scharr 3 x 3 x 8

Notice from the code snippet listed above, each case statement invokes the ConvolutionFilter method. This method has been defined as an extension method targeting the Bitmap class. The ConvolutionFilter method performs the actual task of convolution. This method applies each kernel passed as a parameter; the highest result value will be determined as the output value. The definition as follows:

private static Bitmap ConvolutionFilter(this Bitmap sourceBitmap,
                                        double[,,] filterMatrix,
                                        double factor = 1,
                                        int bias = 0)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                             sourceBitmap.Width, sourceBitmap.Height),
                             ImageLockMode.ReadOnly,
                             PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double blue = 0.0, green = 0.0, red = 0.0;
    double blueCompass = 0.0, greenCompass = 0.0, redCompass = 0.0;

    // filterMatrix is indexed [compass, row, column]
    int filterWidth = filterMatrix.GetLength(2);
    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0; green = 0; red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            // Apply every compass kernel, keeping only the strongest response
            for (int compass = 0; compass < filterMatrix.GetLength(0); compass++)
            {
                blueCompass = 0.0; greenCompass = 0.0; redCompass = 0.0;

                for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
                {
                    for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                    {
                        calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                        blueCompass += pixelBuffer[calcOffset] *
                            filterMatrix[compass, filterY + filterOffset, filterX + filterOffset];

                        greenCompass += pixelBuffer[calcOffset + 1] *
                            filterMatrix[compass, filterY + filterOffset, filterX + filterOffset];

                        redCompass += pixelBuffer[calcOffset + 2] *
                            filterMatrix[compass, filterY + filterOffset, filterX + filterOffset];
                    }
                }

                blue = (blueCompass > blue ? blueCompass : blue);
                green = (greenCompass > green ? greenCompass : green);
                red = (redCompass > red ? redCompass : red);
            }

            blue = factor * blue + bias;
            green = factor * green + bias;
            red = factor * red + bias;

            // Clamp each channel to the valid byte range
            if (blue > 255) { blue = 255; } else if (blue < 0) { blue = 0; }
            if (green > 255) { green = 255; } else if (green < 0) { green = 0; }
            if (red > 255) { red = 255; } else if (red < 0) { red = 0; }

            resultBuffer[byteOffset] = (byte)blue;
            resultBuffer[byteOffset + 1] = (byte)green;
            resultBuffer[byteOffset + 2] = (byte)red;
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                             resultBitmap.Width, resultBitmap.Height),
                             ImageLockMode.WriteOnly,
                             PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Rose: Isotropic 3 x 3 x 8

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following image files feature as sample images:

The Original Image

Butterfly: Isotropic 3 x 3 x 4

Butterfly: Isotropic 3 x 3 x 8

Butterfly: Kirsch 3 x 3 x 4

Butterfly: Kirsch 3 x 3 x 8

Butterfly: Prewitt 3 x 3 x 4

Butterfly: Prewitt 3 x 3 x 8

Butterfly: Prewitt 5 x 5 x 4

Butterfly: Scharr 3 x 3 x 4

Butterfly: Scharr 3 x 3 x 8

Butterfly: Scharr 5 x 5 x 4

Butterfly: Sobel 3 x 3 x 4

Butterfly: Sobel 3 x 3 x 8

Butterfly: Sobel 5 x 5 x 4

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

