Archive for the 'Image Convolution' Category

C# How to: Fuzzy Blur Filter

Article Purpose

This article serves to illustrate the concepts involved in implementing a Fuzzy Blur Filter. This filter results in rendering non-photorealistic images which express a certain artistic effect.

Frog: Filter Size 19×19

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

The sample source code accompanying this article includes a Windows Forms based test application. The concepts explored throughout this article can be replicated/tested using the sample application.

When executing the sample application the user interface exposes a number of configurable options:

  • Loading and Saving Images – Users are able to load source/input images from the local system by clicking the Load Image button. Clicking the Save Image button allows users to save filter result images.
  • Filter Size – The specified filter size affects the filter intensity. Smaller filter sizes result in less blurry images being rendered, whereas larger filter sizes result in more blurry images being rendered.
  • Edge Factors – The contrast of fuzzy edges expressed in resulting images depends on the specified edge factor values. Values less than one result in detected image edges being darkened and values greater than one result in detected image edges being lightened.

The following image is a screenshot of the Fuzzy Blur Filter sample application in action:

Fuzzy Blur Filter Sample Application

Frog: Filter Size 9×9

Fuzzy Blur Overview

The Fuzzy Blur Filter relies on the interference of image noise when performing edge detection in order to create a fuzzy effect. In addition, the fuzzy effect results from performing a Mean filter blur.

The steps involved in performing a Fuzzy Blur Filter can be described as follows:

  1. Edge Detection and Enhancement – Using the first edge factor specified, enhance image edges by performing Boolean Edge detection. Being sensitive to image noise, a fair amount of detected edges will actually be image noise in addition to actual image edges.
  2. Mean Filter Blur – Using the edge enhanced image created in the previous step, perform a Mean filter blur. The enhanced edges will be blurred since a Mean filter does not have edge preservation properties. The size of the Mean filter implemented depends on a user specified value.
  3. Edge Detection and Enhancement – Using the blurred image created in the previous step, once again perform Boolean Edge detection, enhancing detected edges according to the second edge factor specified.

Frog: Filter Size 9×9

Mean Filter

A Mean filter blur, also known as a Box blur, can be performed through image convolution. The size of the matrix/kernel implemented when performing convolution will be determined through user input.

Every matrix/kernel element should be set to one. The resulting value should be multiplied by a factor value equating to one divided by the number of matrix/kernel elements. As an example, a matrix/kernel of size 3×3 can be expressed as follows:

Factor: 1.0 / 9.0

1  1  1
1  1  1
1  1  1

An alternative expression can also be:

1/9  1/9  1/9
1/9  1/9  1/9
1/9  1/9  1/9
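The kernel values described above need not be hard coded. As a minimal sketch, not part of the sample download, a mean kernel of any size and its accompanying factor value could be constructed at runtime as follows (the method name CreateMeanKernel is hypothetical):

private static double[,] CreateMeanKernel(int size, out double factor)
{
    double[,] kernel = new double[size, size];

    // Every kernel element is set to one.
    for (int y = 0; y < size; y++)
    {
        for (int x = 0; x < size; x++)
        {
            kernel[y, x] = 1;
        }
    }

    // The factor equates to one divided by the number of kernel elements.
    factor = 1.0 / (size * size);

    return kernel;
}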

Frog: Filter Size 9×9

Boolean Edge Detection without a local threshold

When performing Boolean Edge Detection a local threshold should normally be implemented in order to exclude image noise. In this article we rely on the interference of image noise in order to render a fuzzy effect. By not implementing a local threshold when performing Boolean Edge detection the sample source code ensures sufficient interference from image noise.

The steps involved in performing Boolean Edge Detection without a local threshold can be described as follows:

  1. Calculate Neighbourhood Mean – Iterate each pixel forming part of the source/input image. Using a 3×3 matrix size, calculate the mean value of the neighbourhood surrounding the pixel currently being iterated.
  2. Create Mean comparison Matrix – Once again using a 3×3 matrix size, compare each neighbourhood pixel to the newly calculated mean value. Create a temporary 3×3 size matrix, each element’s value being the result of mean comparison. Should the value expressed by a neighbourhood pixel exceed the mean value the corresponding temporary matrix element should be set to one. When the calculated mean value exceeds the value of a neighbourhood pixel the corresponding temporary matrix element should be set to zero.
  3. Compare Edge Masks – Using sixteen predefined edge masks, compare the temporary matrix created in the previous step to each edge mask. If the temporary matrix matches one of the predefined edge masks, multiply the specified edge factor to the pixel currently being iterated.

Note: A detailed article on Boolean Edge detection implementing a local threshold can be found here:

Frog: Filter Size 9×9

The sixteen predefined edge masks each represent an image edge in a different direction. The predefined edge masks can be expressed as:

Boolean Edge Masks

Frog: Filter Size 13×13

Implementing a Mean Filter

The sample source code defines the MeanFilter method, an extension method targeting the Bitmap class. The definition is listed as follows:

private static Bitmap MeanFilter(this Bitmap sourceBitmap, 
                                 int meanSize)
{
    byte[] pixelBuffer = sourceBitmap.GetByteArray(); 
    byte[] resultBuffer = new byte[pixelBuffer.Length];

    double blue = 0.0, green = 0.0, red = 0.0;
    double factor = 1.0 / (meanSize * meanSize);

    int imageStride = sourceBitmap.Width * 4;
    int filterOffset = meanSize / 2;
    int calcOffset = 0, filterY = 0, filterX = 0;

    for (int k = 0; k + 4 < pixelBuffer.Length; k += 4)
    {
        blue = 0; green = 0; red = 0;
        filterY = -filterOffset;
        filterX = -filterOffset;

        while (filterY <= filterOffset)
        {
            calcOffset = k + (filterX * 4) + (filterY * imageStride);

            calcOffset = (calcOffset < 0 ? 0 :
                         (calcOffset >= pixelBuffer.Length - 2 ?
                          pixelBuffer.Length - 3 : calcOffset));

            blue += pixelBuffer[calcOffset];
            green += pixelBuffer[calcOffset + 1];
            red += pixelBuffer[calcOffset + 2];

            filterX++;

            if (filterX > filterOffset)
            {
                filterX = -filterOffset;
                filterY++;
            }
        }

        resultBuffer[k] = ClipByte(factor * blue);
        resultBuffer[k + 1] = ClipByte(factor * green);
        resultBuffer[k + 2] = ClipByte(factor * red);
        resultBuffer[k + 3] = 255;
    }

    return resultBuffer.GetImage(sourceBitmap.Width, sourceBitmap.Height);
}
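A hypothetical invocation of the MeanFilter method could resemble the following. Since MeanFilter has been declared private, the call below assumes execution from within the class defining the extension method; the file name frog.png is merely illustrative:

Bitmap sourceBitmap = new Bitmap("frog.png");

// Apply a 9x9 mean filter blur.
Bitmap blurredBitmap = sourceBitmap.MeanFilter(9);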

Frog: Filter Size 19×19

Implementing Boolean Edge Detection

Boolean Edge detection is performed in the sample source code through the implementation of the BooleanEdgeDetectionFilter method. This method has been defined as an extension method targeting the Bitmap class.

The following code snippet provides the definition of the BooleanEdgeDetectionFilter method:

public static Bitmap BooleanEdgeDetectionFilter( 
       this Bitmap sourceBitmap, float edgeFactor) 
{
    byte[] pixelBuffer = sourceBitmap.GetByteArray(); 
    byte[] resultBuffer = new byte[pixelBuffer.Length]; 
    Buffer.BlockCopy(pixelBuffer, 0, resultBuffer, 
                     0, pixelBuffer.Length); 

    List<string> edgeMasks = GetBooleanEdgeMasks();
    int imageStride = sourceBitmap.Width * 4;
    int matrixMean = 0, pixelTotal = 0;
    int filterY = 0, filterX = 0, calcOffset = 0;
    string matrixPattern = String.Empty;

    for (int k = 0; k + 4 < pixelBuffer.Length; k += 4)
    {
        matrixPattern = String.Empty;
        matrixMean = 0; pixelTotal = 0;
        filterY = -1; filterX = -1;

        while (filterY < 2)
        {
            calcOffset = k + (filterX * 4) + (filterY * imageStride);

            calcOffset = (calcOffset < 0 ? 0 :
                         (calcOffset >= pixelBuffer.Length - 2 ?
                          pixelBuffer.Length - 3 : calcOffset));

            matrixMean += pixelBuffer[calcOffset];
            matrixMean += pixelBuffer[calcOffset + 1];
            matrixMean += pixelBuffer[calcOffset + 2];

            filterX += 1;

            if (filterX > 1)
            {
                filterX = -1;
                filterY += 1;
            }
        }

        matrixMean = matrixMean / 9;

        filterY = -1; filterX = -1;

        while (filterY < 2)
        {
            calcOffset = k + (filterX * 4) + (filterY * imageStride);

            calcOffset = (calcOffset < 0 ? 0 :
                         (calcOffset >= pixelBuffer.Length - 2 ?
                          pixelBuffer.Length - 3 : calcOffset));

            pixelTotal = pixelBuffer[calcOffset];
            pixelTotal += pixelBuffer[calcOffset + 1];
            pixelTotal += pixelBuffer[calcOffset + 2];

            matrixPattern += (pixelTotal > matrixMean ? "1" : "0");

            filterX += 1;

            if (filterX > 1)
            {
                filterX = -1;
                filterY += 1;
            }
        }

        if (edgeMasks.Contains(matrixPattern))
        {
            resultBuffer[k] = ClipByte(resultBuffer[k] * edgeFactor);
            resultBuffer[k + 1] = ClipByte(resultBuffer[k + 1] * edgeFactor);
            resultBuffer[k + 2] = ClipByte(resultBuffer[k + 2] * edgeFactor);
        }
    }

    return resultBuffer.GetImage(sourceBitmap.Width, sourceBitmap.Height);
}
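A hypothetical usage example follows, assuming the sample project’s extension methods are in scope and the file name frog.png is illustrative. An edge factor of 0.5 darkens detected edges; values greater than one would lighten them:

Bitmap sourceBitmap = new Bitmap("frog.png");

// Darken detected edges by multiplying edge pixels by 0.5.
Bitmap edgeBitmap = sourceBitmap.BooleanEdgeDetectionFilter(0.5f);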

Frog: Filter Size 13×13

The predefined edge masks implemented in mean comparison have been wrapped by the GetBooleanEdgeMasks method. The definition is as follows:

public static List<string> GetBooleanEdgeMasks() 
{
    List<string> edgeMasks = new List<string>(); 

edgeMasks.Add("011011011"); edgeMasks.Add("000111111"); edgeMasks.Add("110110110"); edgeMasks.Add("111111000"); edgeMasks.Add("011011001"); edgeMasks.Add("100110110"); edgeMasks.Add("111011000"); edgeMasks.Add("111110000"); edgeMasks.Add("111011001"); edgeMasks.Add("100110111"); edgeMasks.Add("001011111"); edgeMasks.Add("111110100"); edgeMasks.Add("000011111"); edgeMasks.Add("000110111"); edgeMasks.Add("001011011"); edgeMasks.Add("110110100");
return edgeMasks; }

Frog: Filter Size 19×19

Implementing a Fuzzy Blur Filter

The FuzzyEdgeBlurFilter method serves as the implementation of a Fuzzy Blur Filter. As discussed earlier, a Fuzzy Blur Filter involves enhancing image edges through Boolean Edge detection, performing a Mean filter blur and then once again performing Boolean Edge detection. This method has been defined as an extension method targeting the Bitmap class.

The following code snippet provides the definition of the FuzzyEdgeBlurFilter method:

public static Bitmap FuzzyEdgeBlurFilter(this Bitmap sourceBitmap,  
                                         int filterSize,  
                                         float edgeFactor1,  
                                         float edgeFactor2) 
{
    return  
    sourceBitmap.BooleanEdgeDetectionFilter(edgeFactor1). 
    MeanFilter(filterSize).BooleanEdgeDetectionFilter(edgeFactor2); 
}
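A hypothetical end to end usage of the filter, assuming the extension methods above are in scope; file names are illustrative:

Bitmap sourceBitmap = new Bitmap("frog.png");

// 9x9 mean filter, darkening edges before the blur (0.5)
// and lightening edges after the blur (1.5).
Bitmap fuzzyBitmap = sourceBitmap.FuzzyEdgeBlurFilter(9, 0.5f, 1.5f);

fuzzyBitmap.Save("frog_fuzzy.png", System.Drawing.Imaging.ImageFormat.Png);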

Frog: Filter Size 3×3

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following images feature as sample images:

Litoria_tyleri

Schrecklicherpfeilgiftfrosch-01

Dendropsophus_microcephalus_-_calling_male_(Cope,_1886)

Atelopus_zeteki1

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Weighted Difference of Gaussians

Article Purpose

It is the purpose of this article to illustrate the concept of Weighted Difference of Gaussians edge detection. This article extends the conventional implementation of Difference of Gaussians algorithms through the application of equally sized Gaussian kernels only differing by a weight factor.

Frog: Kernel 5×5, Weight1 0.1, Weight2 2.1

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

This article relies on a sample application included as part of the accompanying sample source code. The sample application serves as a practical implementation of the concepts explored throughout this article.

The sample application user interface enables the user to configure and control the implementation of a Weighted Difference of Gaussians filter. The configuration options exposed through the sample application’s user interface can be detailed as follows:

  • Load/Save Images – When executing the sample application users are able to load source/input images from the local system through clicking the Load Image button. If desired, the sample application enables users to save resulting images to the local file system through clicking the Save Image button.
  • Kernel Size – This option relates to the size of the Gaussian kernel that is to be implemented when performing edge detection through convolution. Smaller kernels are faster to compute and generally result in edges detected in the source/input image being expressed through thinner gradient edges. Larger kernels can be computationally expensive to compute as kernel sizes increase. In addition, the edges detected in source/input images will generally be expressed as thicker gradient edges in resulting images.
  • Weight Values – The sample application calculates Gaussian kernels and in doing so implements a weight factor. A weight factor determines the blur intensity observed in result images after having applied Gaussian blurring. Higher weight factors result in a more intense level of blurring being applied. As expected, lower weight factor values result in a less intense level of blurring being applied. If the value of the first weight factor exceeds the value of the second weight factor, resulting images will be generated with a black background and edges being indicated in white. In a similar fashion, when the second weight factor value exceeds that of the first weight factor, resulting images will be generated with a white background and edges being indicated in black. The greater the difference between the first and second weight factor values, the greater the degree of image noise removal. When weight factor values only differ slightly, resulting images may be prone to image noise.

The following image is a screenshot of the Weighted Difference of Gaussians sample application in action:

Weighted Difference Of Gaussians Sample Application

Frog: Kernel 5×5, Weight1 1.8, Weight2 0.1

Gaussian Blur

The Gaussian blur algorithm can be described as one of the most popular and widely implemented methods of image blurring. From Wikipedia we gain the following excerpt:

A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function. It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales.

Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. This is also known as a two-dimensional Weierstrass transform.

Take Note: The Gaussian blur algorithm has the attribute of smoothing image detail/definition whilst also having an edge preservation attribute. When applying a Gaussian blur to an image a level of detail/definition will be blurred/smoothed away, done in a fashion that would exclude/preserve edges.

Frog: Kernel 5×5, Weight1 2.7, Weight2 0.1

Difference of Gaussians Edge Detection

Difference of Gaussians refers to a specific method of edge detection. Difference of Gaussians, commonly abbreviated as DoG, functions through the implementation of Gaussian blurring.

A clear and concise description can be found on the Wikipedia Article Page:

In imaging science, difference of Gaussians is a feature enhancement algorithm that involves the subtraction of one blurred version of an original image from another, less blurred version of the original. In the simple case of grayscale images, the blurred images are obtained by convolving the original grayscale images with Gaussian kernels having differing standard deviations. Blurring an image using a Gaussian kernel suppresses only high-frequency spatial information. Subtracting one image from the other preserves spatial information that lies between the range of frequencies that are preserved in the two blurred images. Thus, the difference of Gaussians is a band-pass filter that discards all but a handful of spatial frequencies that are present in the original grayscale image.

In a conventional sense Difference of Gaussians involves applying Gaussian blur to two images created as copies of the original source/input image. There must be a difference in the size of the Gaussian kernels implemented when applying the blurs. A typical example would be applying a 3×3 kernel on one copy whilst applying a 5×5 kernel on another copy. The final step requires creating a result image populated by subtracting the two blurred copies. The results obtained from subtraction represent the edges forming part of the source/input image.

This article extends beyond the conventional method of implementing Difference of Gaussians. The implementation illustrated in this article retains the core concept of subtracting values which have been blurred to different intensities. The implementation method explored here differs from the conventional method in the sense that the implemented Gaussian kernels do not differ in size. Both kernels are in fact required to have the same size dimensions.

The implemented kernels are equal in terms of their size dimensions, although kernel values differ. Expressed from another angle: equally sized kernels of which one represents a more intense level of blurring than the other. A resulting blur intensity can be determined by the weight factor implemented when calculating the kernel values.

Frog: Kernel 5×5, Weight1 3.7, Weight2 0.2

The advantages of implementing equally sized kernels can be described as follows:

Single Convolution implementation: Convolution involves executing several nested code loops. Application performance can be severely negatively impacted when executing large nested loops. The conventional method of implementing Difference of Gaussians generally involves having to implement convolution twice, once per blurred image copy. The method implemented in this article executes the code loops related to convolution only once. Considering the kernels are equal in size, both can be iterated within the same set of loops.

Eliminating Image subtraction: In conventional implementations, images expressing differing intensity levels of Gaussian blur have to be subtracted. The implementation method described in this article eliminates the need to perform image subtraction. When applying convolution using both kernels simultaneously, the two results obtained, one from each kernel, can be subtracted and assigned to the result pixel. In addition, calculating both results at the same time further eliminates the need to create two temporary source image copies.

Frog: Kernel 5×5, Weight1 2.4, Weight2 0.3

Difference of Gaussians Edge Detection Required Steps

When implementing Difference of Gaussians edge detection several steps are required. Those steps are detailed as follows:

  1. Calculate Kernels – Before implementing convolution two Gaussian kernels have to be calculated. The calculated kernels are required to be of equal size and differ in blur intensity. The sample application allows the user to configure kernel intensity through updating the weight values, expressed as Weight 1 and Weight 2.
  2. Convert Source Image to Grayscale – Applying convolution on grayscale images outperforms convolution on RGB images. When converting an RGB pixel to a grayscale pixel, colour components are combined to form a single gray level intensity. In other words, a grayscale image consists of a third of the number of bytes when compared to the RGB image from which the grayscale image had been rendered. In the case of an ARGB image the derived grayscale image will be expressed in 25% of the number of bytes forming part of the source ARGB image. When applying convolution the number of processor cycles increases as the byte count increases. A sketch of this conversion follows the list.
  3. Perform Convolution Implementing Thresholds – Using the newly created grayscale image, perform convolution for both kernels calculated in the first step. The result value equates to subtracting the two results obtained from convolution. If the result value exceeds the difference between the first and second weight value, the resulting pixel should be set to white; if not, set the result pixel to black.
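The grayscale conversion referred to in the second step combines a pixel’s colour components into a single intensity, using the 0.11 / 0.59 / 0.3 colour weights implemented later in the sample source code. A minimal sketch, with the hypothetical helper name ToGrayscale:

private static byte ToGrayscale(byte blue, byte green, byte red)
{
    // Combine the colour components into a single gray level intensity.
    return (byte)(blue * 0.11 + green * 0.59 + red * 0.3);
}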

Frog: Kernel 5×5, Weight1 2.1, Weight2 0.5

Calculating Gaussian Convolution Kernels

The sample application implements Gaussian kernel calculations. The kernels implemented in convolution are calculated at runtime, as opposed to being hard coded. Being able to dynamically construct kernels has the advantage of providing a greater degree of control at runtime regarding kernel application.

Several steps are involved in calculating Gaussian kernels. The first required step is to determine the kernel size and weight. The size and weight factor of a kernel comprise the two configurable values implemented when calculating kernel values. In the case of this article and the sample application those values will be configured by the user through the sample application’s user interface.

The formula implemented in calculating Gaussian kernel values can be expressed as follows:

G(x, y) = ( 1 / (2πσ²) ) e^( -(x² + y²) / (2σ²) )

The formula contains a number of symbols, which define how the filter will be implemented. The symbols forming part of the formula are described in the following list:

  • G(x, y) – A kernel value calculated using the Gaussian kernel formula. This value forms part of a Gaussian kernel, representing a single kernel element.
  • π – Pi, one of the better known members of the Greek alphabet. The mathematical constant approximately equal to 3.14159, often approximated as 22 / 7.
  • σ – The lower case version of the Greek letter Sigma. This symbol represents the kernel weight, a threshold or factor value as specified by the user.
  • e – The formula references a lower case e symbol. The symbol represents Euler’s number, a mathematical constant equating to approximately 2.71828182846.
  • x, y – The variables referenced as x and y relate to pixel coordinates within a kernel. y represents the vertical offset or row and x represents the horizontal offset or column.

Note: The formula’s implementation expects x and y to equal zero when representing the coordinates of the element located in the middle of the kernel.

Frog: Kernel 7×7, Weight1 0.1, Weight2 2.0

Implementing Gaussian Kernel Calculations

The sample application defines the GaussianCalculator.Calculate method. This method accepts two parameters, kernel size and kernel weight. The following code snippet details the implementation:

public static double[,] Calculate(int length, double weight)
{
    double[,] kernel = new double[length, length];
    double sumTotal = 0;

    int kernelRadius = length / 2;
    double distance = 0;

    double calculatedEuler = 1.0 / (2.0 * Math.PI * Math.Pow(weight, 2));

    for (int filterY = -kernelRadius; filterY <= kernelRadius; filterY++)
    {
        for (int filterX = -kernelRadius; filterX <= kernelRadius; filterX++)
        {
            distance = ((filterX * filterX) + (filterY * filterY)) /
                       (2 * (weight * weight));

            kernel[filterY + kernelRadius, filterX + kernelRadius] =
                calculatedEuler * Math.Exp(-distance);

            sumTotal += kernel[filterY + kernelRadius, filterX + kernelRadius];
        }
    }

    // Normalise the kernel so that all elements sum to one.
    for (int y = 0; y < length; y++)
    {
        for (int x = 0; x < length; x++)
        {
            kernel[y, x] = kernel[y, x] * (1.0 / sumTotal);
        }
    }

    return kernel;
}
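Since the final loop normalises the kernel, the calculated elements should sum to approximately one. A hypothetical sanity check:

double[,] kernel = GaussianCalculator.Calculate(5, 1.5);

double sum = 0;

foreach (double value in kernel)
{
    sum += value;
}

// Expected to print a value of approximately 1.0.
Console.WriteLine(sum);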

Frog: Kernel 3×3, Weight1 0.1, Weight2 1.8

Implementing Difference of Gaussians Edge Detection

The sample source code defines the DifferenceOfGaussianFilter method. This method has been defined as an extension method targeting the Bitmap class. The following code snippet provides the implementation:

public static Bitmap DifferenceOfGaussianFilter(this Bitmap sourceBitmap,  
                                                int matrixSize, double weight1, 
                                                double weight2) 
{
    double[,] kernel1 =  
    GaussianCalculator.Calculate(matrixSize,  
    (weight1 > weight2 ? weight1 : weight2)); 

    double[,] kernel2 = GaussianCalculator.Calculate(matrixSize,
                        (weight1 > weight2 ? weight2 : weight1));

    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                            ImageLockMode.ReadOnly,
                            PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] grayscaleBuffer = new byte[sourceData.Width * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double rgb = 0;

    for (int source = 0, dst = 0;
         source < pixelBuffer.Length && dst < grayscaleBuffer.Length;
         source += 4, dst++)
    {
        rgb = pixelBuffer[source] * 0.11f;
        rgb += pixelBuffer[source + 1] * 0.59f;
        rgb += pixelBuffer[source + 2] * 0.3f;

        grayscaleBuffer[dst] = (byte)rgb;
    }

    double color1 = 0.0;
    double color2 = 0.0;

    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0;

    for (int source = 0, dst = 0;
         source < grayscaleBuffer.Length && dst + 4 < resultBuffer.Length;
         source++, dst += 4)
    {
        color1 = 0;
        color2 = 0;

        for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
        {
            for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
            {
                calcOffset = source + filterX + (filterY * sourceBitmap.Width);

                calcOffset = (calcOffset < 0 ? 0 :
                             (calcOffset >= grayscaleBuffer.Length ?
                              grayscaleBuffer.Length - 1 : calcOffset));

                color1 += grayscaleBuffer[calcOffset] *
                          kernel1[filterY + filterOffset, filterX + filterOffset];

                color2 += grayscaleBuffer[calcOffset] *
                          kernel2[filterY + filterOffset, filterX + filterOffset];
            }
        }

        color1 = color1 - color2;
        color1 = (color1 >= weight1 - weight2 ? 255 : 0);

        resultBuffer[dst] = (byte)color1;
        resultBuffer[dst + 1] = (byte)color1;
        resultBuffer[dst + 2] = (byte)color1;
        resultBuffer[dst + 3] = 255;
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
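A hypothetical invocation, with an illustrative file name; since the first weight exceeds the second, edges will be indicated in white on a black background:

Bitmap sourceBitmap = new Bitmap("frog.png");

// 5x5 kernels; weight1 > weight2 yields white edges on black.
Bitmap edgeBitmap = sourceBitmap.DifferenceOfGaussianFilter(5, 1.8, 0.1);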

Frog: Kernel 3×3, Weight1 2.1, Weight2 0.7

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following image files feature as sample images:

Panamanian Golden Frog

Dendropsophus Microcephalus

Tyler’s Tree Frog

Mimic Poison Frog

Phyllobates Terribilis

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Compass Edge Detection

Article Purpose

This article’s objective is to illustrate concepts relating to Compass Edge Detection. The methods implemented in this article include: Prewitt, Sobel, Scharr, Kirsch and Isotropic.

Wasp: Scharr 3 x 3 x 8

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

The sample source code accompanying this article includes a Windows Forms based sample application. When using the sample application users are able to load source/input images from and save result images to the local system. The user interface provides a dropdown which contains the supported methods of Compass Edge Detection. Selecting an item from the dropdown results in the related Compass Edge Detection method being applied to the current source/input image. Supported methods are:

  • Prewitt3x3x4 – 3×3 Prewitt kernel in 4 compass directions
  • Prewitt3x3x8 – 3×3 Prewitt kernel in 8 compass directions
  • Prewitt5x5x4 – 5×5 Prewitt kernel in 4 compass directions
  • Sobel3x3x4 – 3×3 Sobel kernel in 4 compass directions
  • Sobel3x3x8 – 3×3 Sobel kernel in 8 compass directions
  • Sobel5x5x4 – 5×5 Sobel kernel in 4 compass directions
  • Scharr3x3x4 – 3×3 Scharr kernel in 4 compass directions
  • Scharr3x3x8 – 3×3 Scharr kernel in 8 compass directions
  • Scharr5x5x4 – 5×5 Scharr kernel in 4 compass directions
  • Kirsch3x3x4 – 3×3 Kirsch kernel in 4 compass directions
  • Kirsch3x3x8 – 3×3 Kirsch kernel in 8 compass directions
  • Isotropic3x3x4 – 3×3 Isotropic kernel in 4 compass directions
  • Isotropic3x3x8 – 3×3 Isotropic kernel in 8 compass directions

The following image is a screenshot of the Compass Edge Detection Sample Application in action:

Compass Edge Detection Sample Application

Bee: Isotropic 3 x 3 x 8

Compass Edge Detection Overview

Compass Edge Detection as a concept title can be explained through the implementation of compass directions. Compass Edge Detection can be implemented through convolution, using multiple kernels, each suited to detecting edges in a specific direction. Often the edge directions implemented are:

  • North
  • North East
  • East
  • South East
  • South
  • South West
  • West
  • North West

Each of the compass directions listed above differs by 45 degrees. Applying a rotation of 45 degrees to an existing direction specific kernel results in a new kernel suited to detecting edges in the next compass direction.

Various kernel types can be implemented in Compass Edge Detection. This article and accompanying sample source code implement the following kernel types: Prewitt, Sobel, Scharr, Kirsch and Isotropic.

Praying Mantis: Sobel 3 x 3 x 8

The steps required when implementing Compass Edge Detection can be described as follows:

  1. Determine the compass kernels. When a kernel suited to a specific direction is known, the kernels suited to the 7 remaining compass directions can be calculated. Rotating a kernel by 45 degrees around a central axis equates to the kernel suited to the next compass direction. As an example, if the kernel suited to detect edges in a northerly direction were to be rotated clockwise by 45 degrees around a central axis, the result would be a kernel suited to detect edges in a north easterly direction.
  2. Iterate source image pixels. Every pixel forming part of the source/input image should be iterated, implementing convolution using each of the compass kernels.
  3. Determine the most responsive kernel convolution. After having applied each compass kernel to the pixel currently being iterated, the most responsive compass kernel determines the output value. In other words, after having applied convolution eight times on the same pixel using each compass direction kernel, the output value should be set to the highest value calculated.
  4. Validate and set output result. Ensure that the highest value returned from convolution does not equate to less than 0 or more than 255. Should a value be less than zero the result should be assigned as zero. In a similar fashion, should a value exceed 255 the result should be assigned as 255.

Prewitt Compass Kernels

LadyBug: Prewitt 3 x 3 x 8

Rotating Convolution Kernels

Convolution kernels can be rotated by implementing a matrix rotation. Repeatedly rotating a kernel by 45 degrees results in calculating 8 kernels, each suited to a different compass direction. The algorithm implemented when performing a matrix rotation can be expressed as follows:

Rotate Horizontal Algorithm:

resultX = Round( (x - xOffset) cos θ - (y - yOffset) sin θ ) + xOffset

Rotate Vertical Algorithm:

resultY = Round( (x - xOffset) sin θ + (y - yOffset) cos θ ) + yOffset

In both expressions xOffset and yOffset reference the kernel’s middle element and θ the rotation angle expressed in radians.

I’ve published an in-depth article on matrix rotation available here:

Butterfly: Sobel 3 x 3 x 8

Implementing Kernel Rotation

The sample source code defines the RotateMatrix method. This method accepts as parameter a single kernel, defined as a two dimensional array of type double. In addition the method also expects as a parameter the degree to which the specified kernel should be rotated. The definition is as follows:

public static double[, ,] RotateMatrix(double[,] baseKernel,
                                       double degrees)
{
    double[, ,] kernel = new double[(int)(360 / degrees),
        baseKernel.GetLength(0), baseKernel.GetLength(1)];

    int xOffset = baseKernel.GetLength(1) / 2;
    int yOffset = baseKernel.GetLength(0) / 2;

    for (int y = 0; y < baseKernel.GetLength(0); y++)
    {
        for (int x = 0; x < baseKernel.GetLength(1); x++)
        {
            for (int compass = 0; compass < kernel.GetLength(0); compass++)
            {
                double radians = compass * degrees * Math.PI / 180.0;

                int resultX = (int)(Math.Round((x - xOffset) * Math.Cos(radians) -
                                               (y - yOffset) * Math.Sin(radians)) + xOffset);

                int resultY = (int)(Math.Round((x - xOffset) * Math.Sin(radians) +
                                               (y - yOffset) * Math.Cos(radians)) + yOffset);

                kernel[compass, resultY, resultX] = baseKernel[y, x];
            }
        }
    }

    return kernel;
}
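As a quick illustration, rotating a 3×3 Prewitt base kernel by 45 degrees yields 360 / 45 = 8 kernels, one per compass direction. The call below assumes execution from within the class defining RotateMatrix:

double[,] baseKernel = new double[,]
{ { -1,  0,  1, },
  { -1,  0,  1, },
  { -1,  0,  1, }, };

double[, ,] compassKernels = RotateMatrix(baseKernel, 45);

// Prints 8: one kernel per compass direction.
Console.WriteLine(compassKernels.GetLength(0));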

Butterfly: Prewitt 3 x 3 x 8

Implementing Compass Edge Detection

The sample source code defines several kernel properties which are implemented in convolution. The following code snippet provides the definition of all the kernels defined:

public static double[, ,] Prewitt3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -1,  0,  1, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Prewitt3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -1,  0,  1, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 45);
    }
}

public static double[, ,] Prewitt5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -2, -1,  0,  1,  2, },
          { -2, -1,  0,  1,  2, },
          { -2, -1,  0,  1,  2, },
          { -2, -1,  0,  1,  2, },
          { -2, -1,  0,  1,  2, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Kirsch3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -3, -3,  5, },
          { -3,  0,  5, },
          { -3, -3,  5, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Kirsch3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -3, -3,  5, },
          { -3,  0,  5, },
          { -3, -3,  5, }, };

        return RotateMatrix(baseKernel, 45);
    }
}

public static double[, ,] Sobel3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -2,  0,  2, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Sobel3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -2,  0,  2, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 45);
    }
}

public static double[, ,] Sobel5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { {  -5,  -4,   0,   4,   5, },
          {  -8, -10,   0,  10,   8, },
          { -10, -20,   0,  20,  10, },
          {  -8, -10,   0,  10,   8, },
          {  -5,  -4,   0,   4,   5, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Scharr3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -3,  0,  3, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Scharr3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,  0,  1, },
          { -3,  0,  3, },
          { -1,  0,  1, }, };

        return RotateMatrix(baseKernel, 45);
    }
}

public static double[, ,] Scharr5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1, -1,  0,  1,  1, },
          { -2, -2,  0,  2,  2, },
          { -3, -6,  0,  6,  3, },
          { -2, -2,  0,  2,  2, },
          { -1, -1,  0,  1,  1, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Isotropic3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,             0,  1, },
          { -Math.Sqrt(2),  0,  Math.Sqrt(2), },
          { -1,             0,  1, }, };

        return RotateMatrix(baseKernel, 90);
    }
}

public static double[, ,] Isotropic3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1,             0,  1, },
          { -Math.Sqrt(2),  0,  Math.Sqrt(2), },
          { -1,             0,  1, }, };

        return RotateMatrix(baseKernel, 45);
    }
}

Notice how each property invokes the RotateMatrix method discussed in the previous section.

Butterfly: Scharr 3 x 3 x 8

The CompassEdgeDetectionFilter method is defined as an extension method targeting the Bitmap class. The purpose of this method is to act as a wrapper method encapsulating the technical implementation. The definition is as follows:

public static Bitmap CompassEdgeDetectionFilter(this Bitmap sourceBitmap,  
                                    CompassEdgeDetectionType compassType) 
{ 
    Bitmap resultBitmap = null; 

    switch (compassType)
    {
        case CompassEdgeDetectionType.Sobel3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Sobel3x3x4, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Sobel3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Sobel3x3x8, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Sobel5x5x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Sobel5x5x4, 1.0 / 84.0);
            break;
        case CompassEdgeDetectionType.Prewitt3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Prewitt3x3x4, 1.0 / 3.0);
            break;
        case CompassEdgeDetectionType.Prewitt3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Prewitt3x3x8, 1.0 / 3.0);
            break;
        case CompassEdgeDetectionType.Prewitt5x5x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Prewitt5x5x4, 1.0 / 15.0);
            break;
        case CompassEdgeDetectionType.Scharr3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Scharr3x3x4, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Scharr3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Scharr3x3x8, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Scharr5x5x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Scharr5x5x4, 1.0 / 21.0);
            break;
        case CompassEdgeDetectionType.Kirsch3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Kirsch3x3x4, 1.0 / 15.0);
            break;
        case CompassEdgeDetectionType.Kirsch3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Kirsch3x3x8, 1.0 / 15.0);
            break;
        case CompassEdgeDetectionType.Isotropic3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Isotropic3x3x4, 1.0 / 3.4);
            break;
        case CompassEdgeDetectionType.Isotropic3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Isotropic3x3x8, 1.0 / 3.4);
            break;
    }

    return resultBitmap;
}
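A hypothetical usage of the wrapper method, with an illustrative file name:

Bitmap sourceBitmap = new Bitmap("butterfly.png");

// Apply 3x3 Scharr kernels in 8 compass directions.
Bitmap edgeBitmap = sourceBitmap.CompassEdgeDetectionFilter(
                        CompassEdgeDetectionType.Scharr3x3x8);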

Rose: Scharr 3 x 3 x 8

Notice from the code snippet listed above, each case statement invokes the ConvolutionFilter method. This method has been defined as an extension method targeting the Bitmap class. The ConvolutionFilter method performs the actual task of convolution, implementing each kernel passed as a parameter; the highest result value will be determined as the output value. The definition is as follows:

private static Bitmap ConvolutionFilter(this Bitmap sourceBitmap,  
                                     double[,,] filterMatrix,  
                                           double factor = 1,  
                                                int bias = 0)  
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, 
                             sourceBitmap.Width, sourceBitmap.Height), 
                                               ImageLockMode.ReadOnly,  
                                         PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double blue = 0.0, green = 0.0, red = 0.0;
    double blueCompass = 0.0, greenCompass = 0.0, redCompass = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0; green = 0; red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int compass = 0; compass < filterMatrix.GetLength(0); compass++)
            {
                blueCompass = 0.0; greenCompass = 0.0; redCompass = 0.0;

                for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
                {
                    for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                    {
                        calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                        blueCompass += (double)(pixelBuffer[calcOffset]) *
                            filterMatrix[compass, filterY + filterOffset, filterX + filterOffset];

                        greenCompass += (double)(pixelBuffer[calcOffset + 1]) *
                            filterMatrix[compass, filterY + filterOffset, filterX + filterOffset];

                        redCompass += (double)(pixelBuffer[calcOffset + 2]) *
                            filterMatrix[compass, filterY + filterOffset, filterX + filterOffset];
                    }
                }

                blue = (blueCompass > blue ? blueCompass : blue);
                green = (greenCompass > green ? greenCompass : green);
                red = (redCompass > red ? redCompass : red);
            }

            blue = factor * blue + bias;
            green = factor * green + bias;
            red = factor * red + bias;

            if (blue > 255) { blue = 255; } else if (blue < 0) { blue = 0; }
            if (green > 255) { green = 255; } else if (green < 0) { green = 0; }
            if (red > 255) { red = 255; } else if (red < 0) { red = 0; }

            resultBuffer[byteOffset] = (byte)blue;
            resultBuffer[byteOffset + 1] = (byte)green;
            resultBuffer[byteOffset + 2] = (byte)red;
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Rose: Isotropic 3 x 3 x 8

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following image files feature as sample images:

The Original Image

Butterfly: Isotropic 3 x 3 x 4

Butterfly: Isotropic 3 x 3 x 8

Butterfly: Kirsch 3 x 3 x 4

Butterfly: Kirsch 3 x 3 x 8

Butterfly: Prewitt 3 x 3 x 4

Butterfly: Prewitt 3 x 3 x 8

Butterfly: Prewitt 5 x 5 x 4

Butterfly: Scharr 3 x 3 x 4

Butterfly: Scharr 3 x 3 x 8

Butterfly: Scharr 5 x 5 x 4

Butterfly: Sobel 3 x 3 x 4

Butterfly: Sobel 3 x 3 x 8

Butterfly: Sobel 5 x 5 x 4

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Image Blur

Article Purpose

This article serves to provide an introduction and discussion relating to image blur methods and techniques. The Image Blur methods covered in this article include: Mean filter blur, Gaussian blur, Median filter blur and Motion blur.

Daisy: Mean 9×9

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

This article is accompanied by a sample application, intended to provide a means of testing and replicating topics discussed in this article. The sample application is a Windows Forms based application of which the user interface enables the user to select an image blur type to implement.

When clicking the Load Image button users are able to browse the local file system in order to select source/input images. In addition users are also able to save blurred result images when clicking the Save Image button and browsing the local file system.

Daisy: Mean 7×7

The sample application provides the user with the ability to select the method of image blurring to implement. The dropdown located on the right-hand side of the user interface lists all of the supported methods of image blurring. When a user selects an item from the dropdown, the associated blur method will be implemented on the preview image.

The image below is a screenshot of the Image Blur Filter sample application in action:

Image Blur Filter Sample Application

Image Blur Overview

The process of image blurring can be regarded as reducing the sharpness or crispness defined by an image. Image blurring results in image detail/edges being perceived as less distinct. Images are often blurred as a method of smoothing an image.

Images perceived as too crisp/sharp can be softened by applying a variety of image blurring techniques and intensity levels. Often images are smoothed/blurred in order to remove/reduce image noise. In edge detection implementations better results are often achieved when first implementing image noise reduction through smoothing/blurring. Image blurring can even be implemented in a fashion where results reflect image sharpening, a method known as unsharp masking.

In this article and the accompanying sample source code all supported methods of image blurring have been implemented through convolution, with the exception of the Median filter. Each of the supported convolution methods in essence only represents a different convolution kernel. The technique capable of achieving optimal results will to varying degrees be dependent on the features present in the specified source/input image. Each method provides a different set of desired properties and compromises. In the following sections an overview of each method will be discussed.

Daisy: Mean 9×9

Mean Filter/Box Blur

The Mean filter, also sometimes referred to as a Box blur, represents a fairly simplistic implementation and definition. A definition can be found on Wikipedia as follows:

A box blur is an image filter in which each pixel in the resulting image has a value equal to the average value of its neighbouring pixels in the input image. It is a form of low-pass ("blurring") filter and is a convolution.

Due to its property of using equal weights it can be implemented using a much simpler accumulation algorithm which is significantly faster than using a sliding window algorithm.

Mean filter as a title relates to all weight values in a convolution kernel being equal, hence the alternate title of Box blur. In most cases a Mean filter kernel will only contain the value one. When performing convolution implementing a Mean filter, the factor value equates to one being divided by the sum of all kernel values.

The following is an example of a 5×5 convolution kernel:

Factor: 1.0 / 25.0

1  1  1  1  1
1  1  1  1  1
1  1  1  1  1
1  1  1  1  1
1  1  1  1  1

The kernel consists of 25 elements, therefore the factor value equates to one divided by twenty five.

The Mean filter blur does not result in the same level of smoothing achieved by other image blur methods. The method can also be susceptible to directional artefacts.

Daisy: Mean 5×5

Gaussian Blur

The Gaussian blur method of image blurring is a popular and often implemented filter. In contrast to the Mean filter blur, the Gaussian blur method produces resulting images appearing to contain a more uniform level of smoothing. When implementing image noise reduction, a Gaussian blur is often applied to source/input images. The Gaussian blur has a good level of edge preservation, hence being used in edge detection operations.

From Wikipedia we gain the following excerpt:

A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function. It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales

A potential drawback to implementing a Gaussian blur results from the filter being computationally intensive. The following represents a 5×5 Gaussian blur kernel. The sum total of all elements in the kernel equates to 159, therefore a factor value of 1.0 / 159.0 will be implemented.

Factor: 1.0 / 159.0

2   4   5   4   2
4   9  12   9   4
5  12  15  12   5
4   9  12   9   4
2   4   5   4   2

Daisy: Gaussian 5×5

Median Filter Blur

The Median filter is classified as a non-linear filter. In contrast to the other methods of image blurring discussed in this article, the Median filter implementation does not involve convolution or a predefined matrix kernel. The following definition can be found on Wikipedia:

In signal processing, it is often desirable to be able to perform some kind of noise reduction on an image or signal. The median filter is a nonlinear digital filtering technique, often used to remove noise. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise.

Daisy: Median 7×7

As the name implies, the Median filter operates by calculating the median value of a pixel group, also referred to as a window. Calculating a median value involves a number of steps. The required steps are listed as follows:

  1. Iterate each pixel that forms part of the source/input image.
  2. In relation to the pixel currently being iterated, determine neighbouring pixels located within the bounds defined by the window size. The window location should be offset in order to align the window’s middle pixel and the pixel currently being iterated.
  3. Neighbouring pixels located within the bounds defined by the window should be added to a one dimensional neighbourhood array. Once all values have been added, the array should be sorted by value.
  4. The pixel value located at the middle of the sorted neighbourhood array qualifies as the median value, as illustrated in the sketch following this list. The newly determined median value should be assigned to the pixel currently being iterated.
  5. Repeat the steps listed above until all pixels within the source/input image have been iterated.
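As a minimal illustration of steps 3 and 4, consider a 3×3 window flattened into a nine element array; after sorting, the median sits at the middle index. The values below are arbitrary:

int[] neighbourhood = { 90, 12, 45, 200, 48, 51, 30, 77, 61 };

Array.Sort(neighbourhood);

// Nine elements: the middle of the sorted array is index 9 / 2 = 4.
int median = neighbourhood[neighbourhood.Length / 2];

// Prints 51.
Console.WriteLine(median);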

Similar to the Gaussian blur filter, the Median filter has the ability to smooth images whilst providing edge preservation. Depending on the window size implemented and the physical dimensions of the input/source image, the Median filter can be computationally expensive.

Daisy: Median 9×9

Motion Blur

The sample source code implements Motion blur filters. Motion blur in the traditional sense has been associated with photography and video capturing. Motion blur can often be observed in scenarios where rapid movements are being captured to photographs or video recordings. When recording a single frame, rapid movements could result in the scene changing before the frame capture has completed.

Motion blur can be synthetically imitated through the implementation of digital Motion blur filters. The size of the kernel provided when implementing a Motion blur filter affects the filter intensity perceived in result images. Relating to Motion blur filters, the size of the kernel specified influences the perception of how rapidly movement had occurred to have blurred the resulting image. Larger kernels produce the appearance of more rapid motion, whereas smaller kernels result in less rapid motion being perceived.

Daisy: Motion Blur 7×7 135 Degrees

Depending on the kernel specified, the ability exists to create the appearance of movement having occurred in a certain direction. The sample source code implements Motion blur filters at 45 degrees, 135 degrees and in both directions simultaneously.

The kernels listed below represent 5×5 Motion blur filters occurring at 45 degrees and 135 degrees:

Motion Blur 5×5 at 45 Degrees:

0  0  0  0  1
0  0  0  1  0
0  0  1  0  0
0  1  0  0  0
1  0  0  0  0

Motion Blur 5×5 at 135 Degrees:

1  0  0  0  0
0  1  0  0  0
0  0  1  0  0
0  0  0  1  0
0  0  0  0  1
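A motion blur kernel at 135 degrees amounts to ones along the main diagonal. As a minimal sketch, not part of the sample download, such a kernel and its factor value could be generated for any size (the method name CreateMotionBlurKernel135 is hypothetical):

private static double[,] CreateMotionBlurKernel135(int size, out double factor)
{
    double[,] kernel = new double[size, size];

    // Ones along the top-left to bottom-right diagonal produce a
    // blur streak at 135 degrees.
    for (int i = 0; i < size; i++)
    {
        kernel[i, i] = 1;
    }

    // The factor equates to one divided by the sum of all kernel values.
    factor = 1.0 / size;

    return kernel;
}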

Image Blur Implementation

The sample source code implements all of the concepts explored throughout this article. The source code definition can be grouped into 4 sections: ImageBlurFilter method, ConvolutionFilter method, MedianFilter method and the Matrix class. The following article sections relate to the 4 main source code sections.

The ImageBlurFilter method has the purpose of invoking the correct blur filter method with the relevant method parameters. This method acts as a wrapper providing the technical implementation details required when performing a specified blur filter.

The definition of the ImageBlurFilter method is as follows:

 public static Bitmap ImageBlurFilter(this Bitmap sourceBitmap,  
                                             BlurType blurType) 
{  
     Bitmap resultBitmap = null; 

    switch (blurType)
    {
        case BlurType.Mean3x3:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean3x3, 1.0 / 9.0, 0);
            break;
        case BlurType.Mean5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean5x5, 1.0 / 25.0, 0);
            break;
        case BlurType.Mean7x7:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean7x7, 1.0 / 49.0, 0);
            break;
        case BlurType.Mean9x9:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean9x9, 1.0 / 81.0, 0);
            break;
        case BlurType.GaussianBlur3x3:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.GaussianBlur3x3, 1.0 / 16.0, 0);
            break;
        case BlurType.GaussianBlur5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.GaussianBlur5x5, 1.0 / 159.0, 0);
            break;
        case BlurType.MotionBlur5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur5x5, 1.0 / 10.0, 0);
            break;
        case BlurType.MotionBlur5x5At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur5x5At45Degrees, 1.0 / 5.0, 0);
            break;
        case BlurType.MotionBlur5x5At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur5x5At135Degrees, 1.0 / 5.0, 0);
            break;
        case BlurType.MotionBlur7x7:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur7x7, 1.0 / 14.0, 0);
            break;
        case BlurType.MotionBlur7x7At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur7x7At45Degrees, 1.0 / 7.0, 0);
            break;
        case BlurType.MotionBlur7x7At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur7x7At135Degrees, 1.0 / 7.0, 0);
            break;
        case BlurType.MotionBlur9x9:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur9x9, 1.0 / 18.0, 0);
            break;
        case BlurType.MotionBlur9x9At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur9x9At45Degrees, 1.0 / 9.0, 0);
            break;
        case BlurType.MotionBlur9x9At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur9x9At135Degrees, 1.0 / 9.0, 0);
            break;
        case BlurType.Median3x3:
            resultBitmap = sourceBitmap.MedianFilter(3);
            break;
        case BlurType.Median5x5:
            resultBitmap = sourceBitmap.MedianFilter(5);
            break;
        case BlurType.Median7x7:
            resultBitmap = sourceBitmap.MedianFilter(7);
            break;
        case BlurType.Median9x9:
            resultBitmap = sourceBitmap.MedianFilter(9);
            break;
        case BlurType.Median11x11:
            resultBitmap = sourceBitmap.MedianFilter(11);
            break;
    }

    return resultBitmap;
}
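A hypothetical invocation of the wrapper, with an illustrative file name:

Bitmap sourceBitmap = new Bitmap("daisy.png");

// Apply a 5x5 Gaussian blur through the wrapper method.
Bitmap blurredBitmap = sourceBitmap.ImageBlurFilter(BlurType.GaussianBlur5x5);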

Daisy: Motion Blur 9×9

The Matrix class serves as a collection of various convolution kernel definitions. The Matrix class and all public properties are defined as static. The definition of the Matrix class is as follows:

     public static class Matrix 
    {  
         public static double[,] Mean3x3 
         {  
             get 
             {  
                 return new double[,]   
                { {  1, 1, 1, },  
                  {  1, 1, 1, },  
                  {  1, 1, 1, }, }; 
             }  
         }  

    public static double[,] Mean5x5
    {
        get
        {
            return new double[,]
            { { 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1 }, };
        }
    }

    public static double[,] Mean7x7
    {
        get
        {
            return new double[,]
            { { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 }, };
        }
    }

    public static double[,] Mean9x9
    {
        get
        {
            return new double[,]
            { { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 }, };
        }
    }

    public static double[,] GaussianBlur3x3
    {
        get
        {
            return new double[,]
            { { 1, 2, 1, },
              { 2, 4, 2, },
              { 1, 2, 1, }, };
        }
    }

    public static double[,] GaussianBlur5x5
    {
        get
        {
            return new double[,]
            { { 2, 04, 05, 04, 2 },
              { 4, 09, 12, 09, 4 },
              { 5, 12, 15, 12, 5 },
              { 4, 09, 12, 09, 4 },
              { 2, 04, 05, 04, 2 }, };
        }
    }

    public static double[,] MotionBlur5x5
    {
        get
        {
            return new double[,]
            { { 1, 0, 0, 0, 1 },
              { 0, 1, 0, 1, 0 },
              { 0, 0, 1, 0, 0 },
              { 0, 1, 0, 1, 0 },
              { 1, 0, 0, 0, 1 }, };
        }
    }

    public static double[,] MotionBlur5x5At45Degrees
    {
        get
        {
            return new double[,]
            { { 0, 0, 0, 0, 1 },
              { 0, 0, 0, 1, 0 },
              { 0, 0, 1, 0, 0 },
              { 0, 1, 0, 0, 0 },
              { 1, 0, 0, 0, 0 }, };
        }
    }

    public static double[,] MotionBlur5x5At135Degrees
    {
        get
        {
            return new double[,]
            { { 1, 0, 0, 0, 0 },
              { 0, 1, 0, 0, 0 },
              { 0, 0, 1, 0, 0 },
              { 0, 0, 0, 1, 0 },
              { 0, 0, 0, 0, 1 }, };
        }
    }

    public static double[,] MotionBlur7x7
    {
        get
        {
            return new double[,]
            { { 1, 0, 0, 0, 0, 0, 1 },
              { 0, 1, 0, 0, 0, 1, 0 },
              { 0, 0, 1, 0, 1, 0, 0 },
              { 0, 0, 0, 1, 0, 0, 0 },
              { 0, 0, 1, 0, 1, 0, 0 },
              { 0, 1, 0, 0, 0, 1, 0 },
              { 1, 0, 0, 0, 0, 0, 1 }, };
        }
    }

    public static double[,] MotionBlur7x7At45Degrees
    {
        get
        {
            return new double[,]
            { { 0, 0, 0, 0, 0, 0, 1 },
              { 0, 0, 0, 0, 0, 1, 0 },
              { 0, 0, 0, 0, 1, 0, 0 },
              { 0, 0, 0, 1, 0, 0, 0 },
              { 0, 0, 1, 0, 0, 0, 0 },
              { 0, 1, 0, 0, 0, 0, 0 },
              { 1, 0, 0, 0, 0, 0, 0 }, };
        }
    }

    public static double[,] MotionBlur7x7At135Degrees
    {
        get
        {
            return new double[,]
            { { 1, 0, 0, 0, 0, 0, 0 },
              { 0, 1, 0, 0, 0, 0, 0 },
              { 0, 0, 1, 0, 0, 0, 0 },
              { 0, 0, 0, 1, 0, 0, 0 },
              { 0, 0, 0, 0, 1, 0, 0 },
              { 0, 0, 0, 0, 0, 1, 0 },
              { 0, 0, 0, 0, 0, 0, 1 }, };
        }
    }

    public static double[,] MotionBlur9x9
    {
        get
        {
            return new double[,]
            { { 1, 0, 0, 0, 0, 0, 0, 0, 1, },
              { 0, 1, 0, 0, 0, 0, 0, 1, 0, },
              { 0, 0, 1, 0, 0, 0, 1, 0, 0, },
              { 0, 0, 0, 1, 0, 1, 0, 0, 0, },
              { 0, 0, 0, 0, 1, 0, 0, 0, 0, },
              { 0, 0, 0, 1, 0, 1, 0, 0, 0, },
              { 0, 0, 1, 0, 0, 0, 1, 0, 0, },
              { 0, 1, 0, 0, 0, 0, 0, 1, 0, },
              { 1, 0, 0, 0, 0, 0, 0, 0, 1, }, };
        }
    }

    public static double[,] MotionBlur9x9At45Degrees
    {
        get
        {
            return new double[,]
            { { 0, 0, 0, 0, 0, 0, 0, 0, 1, },
              { 0, 0, 0, 0, 0, 0, 0, 1, 0, },
              { 0, 0, 0, 0, 0, 0, 1, 0, 0, },
              { 0, 0, 0, 0, 0, 1, 0, 0, 0, },
              { 0, 0, 0, 0, 1, 0, 0, 0, 0, },
              { 0, 0, 0, 1, 0, 0, 0, 0, 0, },
              { 0, 0, 1, 0, 0, 0, 0, 0, 0, },
              { 0, 1, 0, 0, 0, 0, 0, 0, 0, },
              { 1, 0, 0, 0, 0, 0, 0, 0, 0, }, };
        }
    }

    public static double[,] MotionBlur9x9At135Degrees
    {
        get
        {
            return new double[,]
            { { 1, 0, 0, 0, 0, 0, 0, 0, 0, },
              { 0, 1, 0, 0, 0, 0, 0, 0, 0, },
              { 0, 0, 1, 0, 0, 0, 0, 0, 0, },
              { 0, 0, 0, 1, 0, 0, 0, 0, 0, },
              { 0, 0, 0, 0, 1, 0, 0, 0, 0, },
              { 0, 0, 0, 0, 0, 1, 0, 0, 0, },
              { 0, 0, 0, 0, 0, 0, 1, 0, 0, },
              { 0, 0, 0, 0, 0, 0, 0, 1, 0, },
              { 0, 0, 0, 0, 0, 0, 0, 0, 1, }, };
        }
    }
}

Daisy: Median 7×7

Daisy Median 7x7

The MedianFilter extension method targets the Bitmap class. The MedianFilter method applies a median filter of the specified matrix size (window size) to the specified source Bitmap, returning a new Bitmap representing the filtered image.

The definition of the MedianFilter extension method is as follows:

 public static Bitmap MedianFilter(this Bitmap sourceBitmap, 
                                   int matrixSize) 
{ 
     BitmapData sourceData = 
                sourceBitmap.LockBits(new Rectangle(0, 0, 
                sourceBitmap.Width, sourceBitmap.Height), 
                ImageLockMode.ReadOnly, 
                PixelFormat.Format32bppArgb); 

     byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
     byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

     Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
     sourceBitmap.UnlockBits(sourceData);

     int filterOffset = (matrixSize - 1) / 2;
     int calcOffset = 0;
     int byteOffset = 0;

     List<int> neighbourPixels = new List<int>();
     byte[] middlePixel;

     for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
     {
         for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
         {
             byteOffset = offsetY * sourceData.Stride + offsetX * 4;
             neighbourPixels.Clear();

             for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
             {
                 for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                 {
                     calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);
                     neighbourPixels.Add(BitConverter.ToInt32(pixelBuffer, calcOffset));
                 }
             }

             neighbourPixels.Sort();

             // The median is the middle element of the sorted neighbourhood.
             middlePixel = BitConverter.GetBytes(neighbourPixels[neighbourPixels.Count / 2]);

             resultBuffer[byteOffset] = middlePixel[0];
             resultBuffer[byteOffset + 1] = middlePixel[1];
             resultBuffer[byteOffset + 2] = middlePixel[2];
             resultBuffer[byteOffset + 3] = middlePixel[3];
         }
     }

     Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

     BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                             resultBitmap.Width, resultBitmap.Height),
                             ImageLockMode.WriteOnly,
                             PixelFormat.Format32bppArgb);

     Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
     resultBitmap.UnlockBits(resultData);

     return resultBitmap;
 }
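
As a usage illustration, not part of the original listing, applying a 7×7 median filter to a loaded image might be expressed as follows; the file names are hypothetical:

Bitmap sourceBitmap = new Bitmap("daisy.jpg");

// MedianFilter is an extension method, invoked directly on the Bitmap.
Bitmap medianBitmap = sourceBitmap.MedianFilter(7);

medianBitmap.Save("daisy_median_7x7.png");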

Daisy: Motion Blur 9×9

Daisy Motion Blur 9x9

The sample source code performs image convolution by invoking the ConvolutionFilter extension method.

The definition of the ConvolutionFilter extension method is as follows:

private static Bitmap ConvolutionFilter(this Bitmap sourceBitmap, 
                                          double[,] filterMatrix, 
                                               double factor = 1, 
                                                    int bias = 0) 
{ 
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, 
                             sourceBitmap.Width, sourceBitmap.Height), 
                                               ImageLockMode.ReadOnly, 
                                         PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double blue = 0.0;
    double green = 0.0;
    double red = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0;
            green = 0;
            red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                    blue += (double)(pixelBuffer[calcOffset]) * filterMatrix[filterY + filterOffset, filterX + filterOffset];
                    green += (double)(pixelBuffer[calcOffset + 1]) * filterMatrix[filterY + filterOffset, filterX + filterOffset];
                    red += (double)(pixelBuffer[calcOffset + 2]) * filterMatrix[filterY + filterOffset, filterX + filterOffset];
                }
            }

            blue = factor * blue + bias;
            green = factor * green + bias;
            red = factor * red + bias;

            blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
            green = (green > 255 ? 255 : (green < 0 ? 0 : green));
            red = (red > 255 ? 255 : (red < 0 ? 0 : red));

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
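
As a usage illustration, not part of the original listing, the following sketch applies a 5×5 mean blur. Note that ConvolutionFilter is declared private, so a call of this form is assumed to originate from within the same class; the variable names are hypothetical:

// The factor 1.0 / 25.0 normalizes the twenty five kernel elements,
// each of which equals one, preserving overall image brightness.
Bitmap meanBlurBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean5x5, 1.0 / 25.0);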

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction.

The sample images featuring a yellow daisy are licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license and the original can be downloaded from Wikimedia.org.

The sample images featuring a white daisy are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and the original can be downloaded from Wikipedia.

The sample images featuring a pink daisy are licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license and the original can be downloaded from Wikipedia.

The sample images featuring a purple daisy are licensed under the Creative Commons Attribution-ShareAlike 3.0 License and the original can be downloaded from Wikipedia.

The Original Image

Purple_osteospermum

Daisy: Gaussian 3×3

Daisy Gaussian 3x3

Daisy: Gaussian 5×5

Daisy Gaussian 5x5

Daisy: Mean 3×3

Daisy Mean 3x3

Daisy: Mean 5×5

Daisy Mean 5x5

Daisy: Mean 7×7

Daisy Mean 7x7

Daisy: Mean 9×9

Daisy Mean 9x9

Daisy: Median 3×3

Daisy Median 3x3

Daisy: Median 5×5

Daisy Median 5x5

Daisy: Median 7×7

Daisy Median 7x7

Daisy: Median 9×9

Daisy Median 9x9

Daisy: Median 11×11

Daisy Median 11x11

Daisy: Motion Blur 5×5

Daisy Motion Blur 5x5

Daisy: Motion Blur 5×5 45 Degrees

Daisy Motion Blur 5x5 45 Degrees

Daisy: Motion Blur 5×5 135 Degrees

Daisy Motion Blur 5x5 135 Degrees

Daisy: Motion Blur 7×7

Daisy Motion Blur 7x7

Daisy: Motion Blur 7×7 45 Degrees

Daisy Motion Blur 7x7 45 Degrees

Daisy: Motion Blur 7×7 135 Degrees

Daisy Motion Blur 7x7 135 Degrees

Daisy: Motion Blur 9×9

Daisy Motion Blur 9x9

Daisy: Motion Blur 9×9 45 Degrees

Daisy Motion Blur 9x9 45 Degrees

Daisy: Motion Blur 9×9 135 Degrees

Daisy Motion Blur 9x9 135 Degrees

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images, links to which you can find here:

C# How to: Sharpen Edge Detection

Article Purpose

It is the objective of this article to explore and provide a discussion based on the concept of edge detection through means of image sharpening. Illustrated are various methods of image sharpening, in addition to a median filter implemented in noise reduction.

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

The sample source code accompanying this article includes a Windows Forms based Sample Application. The concepts illustrated throughout this article can easily be tested and replicated by making use of the Sample Application.

The Sample Application exposes seven main areas of functionality:

  • Loading input/source images.
  • Saving image result.
  • Sharpen Filters
  • Median Filter Size
  • Threshold value
  • Grayscale Source
  • Mono Output

When using the Sample Application users are able to select input/source images from the local file system by clicking the Load Image button. If desired, users may save result images to the local file system by clicking the Save Image button.

The sample source code and sample application implement various methods of image sharpening. Each method of image sharpening results in varying degrees of edge detection. Some methods are more effective than others, the method being implemented serving as a primary factor influencing results. The effectiveness of the selected method is also reliant on the input/source image provided. The sample application implements the following image sharpening methods:

  • Sharpen5To4
  • Sharpen7To1
  • Sharpen9To1
  • Sharpen12To1
  • Sharpen24To1
  • Sharpen48To1
  • Sharpen10To8
  • Sharpen11To8
  • Sharpen821

Image noise is regarded as a common problem relating to edge detection. Often image noise will be incorrectly detected as forming part of an edge within an image. The sample source code implements a median filter in order to counteract image noise. The size/intensity of the median filter applied can be specified via the control labelled Median Filter Size.

The Threshold value configured through the sample application’s user interface has a two-fold implementation. In a scenario where output images are created in a black and white format the Threshold value will be implemented to determine whether a pixel should be either black or white. When output images are created in full colour the Threshold value will be added to each pixel, acting as a bias value.

In some scenarios edge detection can be achieved more effectively when specifying grayscale format source/input images. The purpose of the control labelled Grayscale Source is to convert source/input images to a grayscale format before implementing edge detection.

The control labelled Mono Output, when selected, has the effect of producing result images in a black and white format.

The image below is a screenshot of the Sharpen Edge Detection sample application in action:

Sharpen Edge Detection Sample Application

Edge Detection through Image Sharpening

The sample source code performs edge detection on source/input images by means of image sharpening. The steps performed can be broken down into the following items (a condensed sketch of this pipeline follows the list below):

  1. Median Filter – If specified, apply a median filter to the input/source image. A median filter results in smoothing an image, through which image noise can be reduced. Image smoothing/blurring often results in reducing image details/edges, but the median filter is well suited to smoothing away image noise whilst preserving edges. When performing edge detection the median filter therefore functions as an ideal method of reducing image noise whilst not negatively impacting edge detection tasks.
  2. Grayscale Conversion – If specified, convert the source/input image to grayscale by iterating each pixel that forms part of the image. Each pixel’s colour components are calculated by multiplying by factor values: Red x 0.3, Green x 0.59, Blue x 0.11.
  3. Convolution – Using the specified sharpening kernel, iterate each pixel forming part of the source/input image, performing convolution on each pixel colour channel.
  4. Mono Output – If the output has been specified as Mono, the middle pixel value calculated through convolution should be multiplied by the specified factor value. Each colour component should then be compared to the specified threshold value and be assigned as either black or white.
  5. Colour Output – If the output has not been specified as Mono, the middle pixel value calculated through convolution should be multiplied by the factor value, to which the threshold/bias value should be added. The value of each colour component will be set to the result of subtracting the calculated convolution/factor/bias value from the pixel’s original colour component value. In other words, perform image sharpening through convolution, applying a factor and bias, the result of which is then subtracted from the original source/input image.
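
The following is a minimal sketch of the pipeline described above, expressed in terms of the public SharpenEdgeDetect method defined later in this section; the file names are hypothetical:

// A 3x3 median filter reduces noise, grayscale conversion is applied,
// and the result is thresholded to black and white (mono) output.
Bitmap source = new Bitmap("butterflies.jpg");

Bitmap edges = source.SharpenEdgeDetect(SharpenType.Sharpen24To1,
                                        bias: 0,
                                        grayscale: true,
                                        mono: true,
                                        medianFilterSize: 3);

edges.Save("butterflies_edges.png");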

Implementing Sharpen Edge Detection

The sample source code achieves edge detection through image sharpening by implementing three methods: MedianFilter and two overloaded methods titled SharpenEdgeDetect.

The MedianFilter method is defined as an extension method targeting the Bitmap class. The definition is as follows:

 public static Bitmap MedianFilter(this Bitmap sourceBitmap, 
                                   int matrixSize) 
{ 
     BitmapData sourceData = 
                sourceBitmap.LockBits(new Rectangle(0, 0, 
                sourceBitmap.Width, sourceBitmap.Height), 
                ImageLockMode.ReadOnly, 
                PixelFormat.Format32bppArgb); 

     byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
     byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

     Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
     sourceBitmap.UnlockBits(sourceData);

     int filterOffset = (matrixSize - 1) / 2;
     int calcOffset = 0;
     int byteOffset = 0;

     List<int> neighbourPixels = new List<int>();
     byte[] middlePixel;

     for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
     {
         for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
         {
             byteOffset = offsetY * sourceData.Stride + offsetX * 4;
             neighbourPixels.Clear();

             for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
             {
                 for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                 {
                     calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);
                     neighbourPixels.Add(BitConverter.ToInt32(pixelBuffer, calcOffset));
                 }
             }

             neighbourPixels.Sort();

             // The median is the middle element of the sorted neighbourhood.
             middlePixel = BitConverter.GetBytes(neighbourPixels[neighbourPixels.Count / 2]);

             resultBuffer[byteOffset] = middlePixel[0];
             resultBuffer[byteOffset + 1] = middlePixel[1];
             resultBuffer[byteOffset + 2] = middlePixel[2];
             resultBuffer[byteOffset + 3] = middlePixel[3];
         }
     }

     Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

     BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                             resultBitmap.Width, resultBitmap.Height),
                             ImageLockMode.WriteOnly,
                             PixelFormat.Format32bppArgb);

     Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
     resultBitmap.UnlockBits(resultData);

     return resultBitmap;
 }

The public implementation of the SharpenEdgeDetect extension method has the purpose of translating user specified options into the relevant method calls to the private implementation of the SharpenEdgeDetect extension method. The public implementation of the SharpenEdgeDetect method is as follows:

public static Bitmap SharpenEdgeDetect(this Bitmap sourceBitmap, 
                                            SharpenType sharpen, 
                                                   int bias = 0, 
                                         bool grayscale = false, 
                                              bool mono = false, 
                                       int medianFilterSize = 0) 
{ 
    Bitmap resultBitmap = null; 

    if (medianFilterSize == 0)
    {
        resultBitmap = sourceBitmap;
    }
    else
    {
        resultBitmap = sourceBitmap.MedianFilter(medianFilterSize);
    }

    switch (sharpen)
    {
        case SharpenType.Sharpen7To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen7To1, 1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen9To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen9To1, 1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen12To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen12To1, 1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen24To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen24To1, 1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen48To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen48To1, 1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen5To4:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen5To4, 1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen10To8:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen10To8, 1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen11To8:
            // Kernel elements sum to 3; the factor normalizes the result.
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen11To8, 1.0 / 3.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen821:
            // Kernel elements sum to 8; the factor normalizes the result.
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen821, 1.0 / 8.0, bias, grayscale, mono);
            break;
    }

    return resultBitmap;
}

The Matrix class provides the definition of static pre-defined convolution kernel values. The definition is as follows:

public static class Matrix   
{
    public static double[,] Sharpen7To1 
    {
        get   
        { 
            return new double[,]   
            {  { 1,  1,  1, },  
               { 1, -7,  1, },   
               { 1,  1,  1, }, }; 
        }  
    }  

public static double[,] Sharpen9To1 { get { return new double[,] { { -1, -1, -1, }, { -1, 9, -1, }, { -1, -1, -1, }, }; } }
public static double[,] Sharpen12To1 { get { return new double[,] { { -1, -1, -1, }, { -1, 12, -1, }, { -1, -1, -1, }, }; } }
public static double[,] Sharpen24To1 { get { return new double[,] { { -1, -1, -1, -1, -1, }, { -1, -1, -1, -1, -1, }, { -1, -1, 24, -1, -1, }, { -1, -1, -1, -1, -1, }, { -1, -1, -1, -1, -1, }, }; } }
public static double[,] Sharpen48To1 { get { return new double[,] { { -1, -1, -1, -1, -1, -1, -1, }, { -1, -1, -1, -1, -1, -1, -1, }, { -1, -1, -1, -1, -1, -1, -1, }, { -1, -1, -1, 48, -1, -1, -1, }, { -1, -1, -1, -1, -1, -1, -1, }, { -1, -1, -1, -1, -1, -1, -1, }, { -1, -1, -1, -1, -1, -1, -1, }, }; } }
public static double[,] Sharpen5To4 { get { return new double[,] { { 0, -1, 0, }, { -1, 5, -1, }, { 0, -1, 0, }, }; } }
public static double[,] Sharpen10To8 { get { return new double[,] { { 0, -2, 0, }, { -2, 10, -2, }, { 0, -2, 0, }, }; } }
public static double[,] Sharpen11To8 { get { return new double[,] { { 0, -2, 0, }, { -2, 11, -2, }, { 0, -2, 0, }, }; } }
public static double[,] Sharpen821 { get { return new double[,] { { -1, -1, -1, -1, -1, }, { -1, 2, 2, 2, -1, }, { -1, 2, 8, 2, -1, }, { -1, 2, 2, 2, -1, }, { -1, -1, -1, -1, -1, }, }; } } }
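
As a quick sanity check, not part of the original sample code, the sum of a kernel’s elements indicates the factor required to preserve overall image brightness, which explains the fractional factors passed for Sharpen11To8 and Sharpen821 above:

double sum = 0;

// foreach iterates every element of a two-dimensional array.
foreach (double element in Matrix.Sharpen11To8)
{
    sum += element;
}

// The elements sum to 3 (a centre of 11 minus four weights of 2),
// hence the 1.0 / 3.0 factor.
Console.WriteLine(sum);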

The private implementation of the SharpenEdgeDetect extension method performs image sharpening through convolution, and then performs edge detection through subtraction. The definition is as follows:

private static Bitmap SharpenEdgeDetect(this Bitmap sourceBitmap, 
                                          double[,] filterMatrix, 
                                               double factor = 1, 
                                                    int bias = 0, 
                                          bool grayscale = false, 
                                               bool mono = false) 
{ 
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, 
                             sourceBitmap.Width, sourceBitmap.Height), 
                                               ImageLockMode.ReadOnly, 
                                         PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    if (grayscale == true)
    {
        // Scale each colour component by its luminance weighting factor.
        for (int pixel = 0; pixel < pixelBuffer.Length; pixel += 4)
        {
            pixelBuffer[pixel] = (byte)(pixelBuffer[pixel] * 0.11f);
            pixelBuffer[pixel + 1] = (byte)(pixelBuffer[pixel + 1] * 0.59f);
            pixelBuffer[pixel + 2] = (byte)(pixelBuffer[pixel + 2] * 0.3f);
        }
    }

    double blue = 0.0;
    double green = 0.0;
    double red = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0;
            green = 0;
            red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                    blue += (double)(pixelBuffer[calcOffset]) * filterMatrix[filterY + filterOffset, filterX + filterOffset];
                    green += (double)(pixelBuffer[calcOffset + 1]) * filterMatrix[filterY + filterOffset, filterX + filterOffset];
                    red += (double)(pixelBuffer[calcOffset + 2]) * filterMatrix[filterY + filterOffset, filterX + filterOffset];
                }
            }

            if (mono == true)
            {
                // Subtract the sharpened value from the original pixel,
                // then threshold each colour component to black or white.
                blue = pixelBuffer[byteOffset] - factor * blue;
                green = pixelBuffer[byteOffset + 1] - factor * green;
                red = pixelBuffer[byteOffset + 2] - factor * red;

                blue = (blue > bias ? 255 : 0);
                green = (green > bias ? 255 : 0);
                red = (red > bias ? 255 : 0);
            }
            else
            {
                // Subtract the sharpened value from the original pixel,
                // add the bias and clamp to the valid byte range.
                blue = pixelBuffer[byteOffset] - factor * blue + bias;
                green = pixelBuffer[byteOffset + 1] - factor * green + bias;
                red = pixelBuffer[byteOffset + 2] - factor * red + bias;

                blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
                green = (green > 255 ? 255 : (green < 0 ? 0 : green));
                red = (red > 255 ? 255 : (red < 0 ? 0 : red));
            }

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
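
As a usage illustration, not part of the original listing, a black and white edge map can be produced by invoking the private overload directly, assuming the call originates from within the same class; the variable names are hypothetical:

// Sharpen9To1 convolution, subtraction against the original pixel
// values, then per-channel thresholding to black or white.
Bitmap monoEdges = sourceBitmap.SharpenEdgeDetect(Matrix.Sharpen9To1,
                                                  1.0, bias: 0,
                                                  grayscale: false,
                                                  mono: true);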

Sample Images

The sample image used in this article is in the public domain because its copyright has expired. This applies to Australia, the European Union and those countries with a copyright term of life of the author plus 70 years. The original image can be downloaded from Wikipedia.

The Original Image

NovaraExpZoologischeTheilLepidopteraAtlasTaf53

Sharpen5To4, Median 0, Threshold 0

Sharpen5To4 Median 0 Threshold 0

Sharpen5To4, Median 0, Threshold 0, Mono

Sharpen5To4 Median 0 Threshold 0 Mono

Sharpen7To1, Median 0, Threshold 0

Sharpen7To1 Median 0 Threshold 0

Sharpen7To1, Median 0, Threshold 0, Mono

Sharpen7To1 Median 0 Threshold 0 Mono

Sharpen9To1, Median 0, Threshold 0

Sharpen9To1 Median 0 Threshold 0

Sharpen9To1, Median 0, Threshold 0, Mono

Sharpen9To1 Median 0 Threshold 0 Mono

Sharpen10To8, Median 0, Threshold 0

Sharpen10To8 Median 0 Threshold 0

Sharpen10To8, Median 0, Threshold 0, Mono

Sharpen10To8 Median 0 Threshold 0 Mono

Sharpen11To8, Median 0, Threshold 0

Sharpen11To8 Median 0 Threshold 0

Sharpen11To8, Median 0, Threshold 0, Grayscale, Mono

Sharpen11To8 Median 0 Threshold 0 Grayscale Mono

Sharpen12To1, Median 0, Threshold 0

Sharpen12To1 Median 0 Threshold 0

Sharpen12To1, Median 0, Threshold 0, Mono

Sharpen12To1 Median 0 Threshold 0 Mono

Sharpen24To1, Median 0, Threshold 0

Sharpen24To1 Median 0 Threshold 0

Sharpen24To1, Median 0, Threshold 0, Grayscale, Mono

Sharpen24To1 Median 0 Threshold 0 Grayscale Mono

Sharpen24To1, Median 0, Threshold 0, Mono

Sharpen24To1 Median 0 Threshold 0 Mono

Sharpen24To1, Median 0, Threshold 21, Grayscale, Mono

Sharpen24To1 Median 0 Threshold 21 Grayscale Mono

Sharpen48To1, Median 0, Threshold 0

Sharpen48To1 Median 0 Threshold 0

Sharpen48To1, Median 0, Threshold 0, Grayscale, Mono

Sharpen48To1 Median 0 Threshold 0 Grayscale Mono

Sharpen48To1, Median 0, Threshold 0, Mono

Sharpen48To1 Median 0 Threshold 0 Mono

Sharpen48To1, Median 0, Threshold 226, Mono

Sharpen48To1 Median 0 Threshold 226 Mono

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images, links to which you can find here:

