This article provides a detailed discussion of implementing image edge detection through subtracting pixel neighbourhood maximum and minimum values. Additional concepts illustrated in this article include implementing a median filter and RGB grayscale conversion.
Frog Filter 3×3 Smoothed
This article is accompanied by a sample source code Visual Studio project which is available for download here.
This article’s accompanying sample source code includes a Windows Forms based sample application. The sample application provides an implementation of the concepts explored by this article. Concepts discussed can be easily replicated and tested by using the sample application.
Source/input image files can be specified from the local file system by clicking the Load Image button. Users also have the option to save resulting filtered images by clicking the Save Image button.
The sample application user interface enables the user to specify three filter configuration values. These values serve as input parameters to the Min/Max Edge Detection Filter and can be described as follows:
The following image represents a screenshot of the Min/Max Edge Detection Sample application in action.
The method of edge detection illustrated in this article can be classified as a variation of commonly implemented edge detection methods. Image edges expressed within a source image can be determined through the presence of sudden and significant changes in gradient levels that occur within a small/limited perimeter.
As a means to determine gradient level changes the Min/Max Edge Detection algorithm performs pixel neighbourhood inspection, comparing maximum and minimum colour channel values. Should the difference between maximum and minimum colour values be significant, it would be an indication of a significant change in gradient level within the pixel neighbourhood being inspected.
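To make the idea concrete, here is a minimal Python sketch (the article's sample code is C#; the neighbourhood values are invented for illustration) of the min/max measure applied to one 3×3 neighbourhood of a single colour channel:

```python
# One 3x3 neighbourhood of a single colour channel; the bright centre
# pixel (200) against a dark surround represents a sudden gradient change.
neighbourhood = [
    [12, 14, 13],
    [11, 200, 15],
    [13, 12, 14],
]

# Flatten the neighbourhood and subtract the minimum from the maximum;
# a large difference suggests an edge within the neighbourhood.
values = [v for row in neighbourhood for v in row]
edge_strength = max(values) - min(values)
```

A flat region would yield a difference near zero, whereas this neighbourhood produces a large value, flagging a probable edge.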
Image noise represents interference in relation to regular gradient level expression. Image noise does not signal the presence of an image edge, although could potentially result in incorrectly determining image edge presence. Image noise and the negative impact thereof can be significantly reduced when applying image smoothing, also sometimes referred to as image blur. The Min/Max Edge Detection algorithm makes provision for optional image smoothing implemented in the form of a median filter.
The following sections provide more detail regarding the concepts introduced in this section, pixel neighbourhood and median filter.
A pixel neighbourhood refers to a set of pixels related through pixel location coordinates. The width and height of a pixel neighbourhood must be equal; in other words, a pixel neighbourhood can only be square. Additionally, the width/height of a pixel neighbourhood must be an odd value: only then can the neighbourhood have an exact center pixel, and the pixel being inspected will always be located at that exact center. Each pixel represented in an image has a different set of neighbours; some neighbours overlap, but no two pixels have exactly the same neighbours. A pixel's neighbouring pixels can be determined by considering the pixel to be at the center of a block of pixels extending (size - 1) / 2 pixels in the horizontal, vertical and diagonal directions.
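The neighbourhood coordinate calculation described above can be sketched in Python (illustrative only; the article's sample code is C#, and the function name here is invented):

```python
def neighbourhood_coords(x, y, filter_size):
    """Return the coordinates of the neighbourhood centred on (x, y)."""
    # filter_size must be odd so the inspected pixel sits at the exact centre.
    offset = (filter_size - 1) // 2
    return [(x + dx, y + dy)
            for dy in range(-offset, offset + 1)
            for dx in range(-offset, offset + 1)]

# A 3x3 neighbourhood centred on pixel (5, 5).
coords = neighbourhood_coords(5, 5, 3)
```

For a 3×3 neighbourhood this yields nine coordinates, with the inspected pixel itself at the centre of the set.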
In context of this article and the Min/Max Edge Detection filter, median filtering has been implemented as a means to reduce source image noise. From the Median Wikipedia page we gain the following quote:
In statistics and probability theory, the median is the number separating the higher half of a data sample, a population, or a probability distribution, from the lower half. The median of a finite list of numbers can be found by arranging all the observations from lowest value to highest value and picking the middle one (e.g., the median of {3, 3, 5, 9, 11} is 5).
The application of a median filter is based on the pixel neighbourhood concept discussed earlier. The implementation steps required when applying a median filter can be described as follows:
The median filter should not be confused with the mean filter. A median will always be a midpoint value from a sorted value range, whereas a mean value is equal to the calculated average of a value range. The median filter has the characteristic of reducing image noise whilst still preserving image edges. The mean filter will also reduce image noise, but will do so through generalized image blurring, also referred to as box blur, which does not preserve image edges.
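The distinction can be demonstrated with a short Python sketch (illustrative; the neighbourhood values are invented): a single noisy pixel shifts the mean considerably while leaving the median almost untouched:

```python
# A 3x3 neighbourhood flattened to a list; one noisy outlier pixel (250).
values = [12, 14, 11, 250, 13, 12, 15, 14, 13]

# Mean: the average of all values, pulled upwards by the outlier.
mean = sum(values) / len(values)

# Median: the midpoint of the sorted values, robust to the outlier.
median = sorted(values)[len(values) // 2]
```

The median stays close to the true local intensity, which is why the median filter suppresses noise while preserving edges.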
Note that when applying a median filter to RGB colour images median values need to be determined per individual colour channel.
Image edge detection based on a min/max approach requires relatively few steps, which can be combined in source code implementations to be more efficient from a computational/processing perspective. A higher level logical definition of the steps required can be described as follows:
The source code implementation of the Min/Max Edge Detection Filter declares two methods, a median filter method and an edge detection method. A median filter and an edge detection filter cannot be processed simultaneously. When applying a median filter, the median value of a pixel neighbourhood determined from a source image should be expressed in a separate result image. The original source image should not be altered whilst inspecting pixel neighbourhoods and calculating median values. Only once all pixel values in the result image have been set can the result image serve as the source image to an edge detection filter method.
The following code snippet provides the source code definition of the MedianFilter method.
private static byte[] MedianFilter(this byte[] pixelBuffer, int imageWidth, int imageHeight, int filterSize)
{
    byte[] resultBuffer = new byte[pixelBuffer.Length];
    int filterOffset = (filterSize - 1) / 2;
    int calcOffset = 0;
    int stride = imageWidth * pixelByteCount;
    int byteOffset = 0;

    var neighbourCount = filterSize * filterSize;
    int medianIndex = neighbourCount / 2;

    var blueNeighbours = new byte[neighbourCount];
    var greenNeighbours = new byte[neighbourCount];
    var redNeighbours = new byte[neighbourCount];

    for (int offsetY = filterOffset; offsetY < imageHeight - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < imageWidth - filterOffset; offsetX++)
        {
            byteOffset = offsetY * stride + offsetX * pixelByteCount;

            for (int filterY = -filterOffset, neighbour = 0; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++, neighbour++)
                {
                    calcOffset = byteOffset + (filterX * pixelByteCount) + (filterY * stride);

                    blueNeighbours[neighbour] = pixelBuffer[calcOffset];
                    greenNeighbours[neighbour] = pixelBuffer[calcOffset + greenOffset];
                    redNeighbours[neighbour] = pixelBuffer[calcOffset + redOffset];
                }
            }

            Array.Sort(blueNeighbours);
            Array.Sort(greenNeighbours);
            Array.Sort(redNeighbours);

            resultBuffer[byteOffset] = blueNeighbours[medianIndex];
            resultBuffer[byteOffset + greenOffset] = greenNeighbours[medianIndex];
            resultBuffer[byteOffset + redOffset] = redNeighbours[medianIndex];
            resultBuffer[byteOffset + alphaOffset] = maxByteValue;
        }
    }

    return resultBuffer;
}
Notice the definition of three separate byte arrays, each intended to represent a pixel neighbourhood’s pixel values related to a specific colour channel. Each neighbourhood colour channel byte array needs to be sorted according to value. The value located at the array index exactly halfway from the start and the end of the array represents the median value. When a median value has been determined, the result buffer pixel related to the source buffer pixel in terms of XY Location needs to be set.
The sample source code defines two overloaded versions of an edge detection method. The first version is defined as an extension method targeting the Bitmap class. A FilterSize parameter is the only required parameter, intended to specify pixel neighbourhood width/height. In addition, when invoking this method optional parameters may be specified. When image noise reduction should be implemented the smoothNoise parameter should be defined as true. If resulting images are required in grayscale the last parameter, grayscale, should reflect true. The following code snippet provides the definition of the MinMaxEdgeDetection method.
public static Bitmap MinMaxEdgeDetection(this Bitmap sourceBitmap, int filterSize, bool smoothNoise = false, bool grayscale = false)
{
    return sourceBitmap.ToPixelBuffer()
                       .MinMaxEdgeDetection(sourceBitmap.Width, sourceBitmap.Height,
                                            filterSize, smoothNoise, grayscale)
                       .ToBitmap(sourceBitmap.Width, sourceBitmap.Height);
}
The MinMaxEdgeDetection method as expressed above essentially acts as a wrapper method, invoking the overloaded version of this method, performing mapping between Bitmap objects and byte array pixel buffers.
An overloaded version of the MinMaxEdgeDetection method performs all of the tasks required in edge detection through means of minimum/maximum pixel neighbourhood value subtraction. The method definition is provided in the following code snippet.
private static byte[] MinMaxEdgeDetection(this byte[] sourceBuffer, int imageWidth, int imageHeight, int filterSize, bool smoothNoise = false, bool grayscale = false)
{
    byte[] pixelBuffer = sourceBuffer;

    if (smoothNoise)
    {
        pixelBuffer = sourceBuffer.MedianFilter(imageWidth, imageHeight, filterSize);
    }

    byte[] resultBuffer = new byte[pixelBuffer.Length];
    int filterOffset = (filterSize - 1) / 2;
    int calcOffset = 0;
    int stride = imageWidth * pixelByteCount;
    int byteOffset = 0;

    byte minBlue = 0, minGreen = 0, minRed = 0;
    byte maxBlue = 0, maxGreen = 0, maxRed = 0;

    for (int offsetY = filterOffset; offsetY < imageHeight - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < imageWidth - filterOffset; offsetX++)
        {
            byteOffset = offsetY * stride + offsetX * pixelByteCount;

            minBlue = maxByteValue; minGreen = maxByteValue; minRed = maxByteValue;
            maxBlue = minByteValue; maxGreen = minByteValue; maxRed = minByteValue;

            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * pixelByteCount) + (filterY * stride);

                    minBlue = Math.Min(pixelBuffer[calcOffset], minBlue);
                    maxBlue = Math.Max(pixelBuffer[calcOffset], maxBlue);
                    minGreen = Math.Min(pixelBuffer[calcOffset + greenOffset], minGreen);
                    maxGreen = Math.Max(pixelBuffer[calcOffset + greenOffset], maxGreen);
                    minRed = Math.Min(pixelBuffer[calcOffset + redOffset], minRed);
                    maxRed = Math.Max(pixelBuffer[calcOffset + redOffset], maxRed);
                }
            }

            if (grayscale)
            {
                resultBuffer[byteOffset] = ByteVal((maxBlue - minBlue) * 0.114 +
                                                   (maxGreen - minGreen) * 0.587 +
                                                   (maxRed - minRed) * 0.299);
                resultBuffer[byteOffset + greenOffset] = resultBuffer[byteOffset];
                resultBuffer[byteOffset + redOffset] = resultBuffer[byteOffset];
                resultBuffer[byteOffset + alphaOffset] = maxByteValue;
            }
            else
            {
                resultBuffer[byteOffset] = (byte)(maxBlue - minBlue);
                resultBuffer[byteOffset + greenOffset] = (byte)(maxGreen - minGreen);
                resultBuffer[byteOffset + redOffset] = (byte)(maxRed - minRed);
                resultBuffer[byteOffset + alphaOffset] = maxByteValue;
            }
        }
    }

    return resultBuffer;
}
As discussed earlier, image noise reduction, if required, should be the first task performed. Based on the smoothNoise parameter value the method applies a median filter to the source image buffer.
When iterating a pixel neighbourhood a comparison is performed between the currently iterated neighbouring pixel’s value and the previously determined minimum and maximum values.
When the grayscale method parameter reflects true, a grayscale algorithm is applied to the difference between the determined maximum and minimum pixel neighbourhood values.
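As an illustrative Python sketch of that grayscale conversion (the weights 0.299, 0.587 and 0.114 match the C# snippet above; the function name and tuple layout are invented for illustration):

```python
def gray_edge(max_rgb, min_rgb):
    """Combine per-channel max/min differences into one grayscale value."""
    (max_r, max_g, max_b), (min_r, min_g, min_b) = max_rgb, min_rgb
    # Weighted sum of the channel differences using luminance weights.
    value = ((max_b - min_b) * 0.114 +
             (max_g - min_g) * 0.587 +
             (max_r - min_r) * 0.299)
    # Clamp to the valid byte range before casting.
    return min(255, max(0, int(value)))
```

Because the three weights sum to 1.0, an equal difference on all three channels maps to that same difference in the grayscale output.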
Should the grayscale method parameter reflect false, grayscale algorithm logic will not execute. Instead, the result obtained from subtracting the determined minimum from the maximum value is assigned to the relevant pixel and colour channel on the result buffer image.
This article features several sample images provided as examples. All sample images were created using the sample application. All of the original source images used in generating sample images have been licensed by their respective authors to allow for reproduction here. The following section lists each original source image and related license and copyright details.
Red-eyed Tree Frog (Agalychnis callidryas), photographed near Playa Jaco in Costa Rica © 2007 Careyjamesbalboa (Carey James Balboa) has been released into the public domain by the author.
Yellow-Banded Poison Dart Frog © 2013 H. Krisp is used here under a Creative Commons Attribution 3.0 Unported license.
Green and Black Poison Dart Frog © 2011 H. Krisp is used here under a Creative Commons Attribution 3.0 Unported license.
Atelopus certus calling male © 2010 Brian Gratwicke is used here under a Creative Commons Attribution 2.0 Generic license.
Tyler’s Tree Frog (Litoria tyleri) © 2006 LiquidGhoul has been released into the public domain by the author.
Dendropsophus microcephalus © 2010 Brian Gratwicke is used here under a Creative Commons Attribution 2.0 Generic license.
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images of which you can find URL links here:
This article explores image edge detection implemented through computing pixel neighbourhood standard deviation on RGB colour images. The main sections of this article consist of a detailed explanation of the concepts underlying the standard deviation edge detection algorithm and an in-depth discussion of a practical implementation through source code.
Butterfly Filter 3×3 Factor 5.0
This article is accompanied by a sample source code Visual Studio project which is available for download here.
This article’s accompanying sample source code includes a Windows Forms based sample application. The sample application provides an implementation of the concepts explored by this article. Concepts discussed can be easily replicated and tested by using the sample application.
Source/input image files can be specified from the local file system by clicking the Load Image button. Users also have the option to save resulting filtered images by clicking the Save Image button.
The sample application user interface exposes three filter configuration values to the end user in the form of predefined filter size values, a grayscale output flag and a variance factor. End users can configure whether filtered result images should express image edges using source image colour values or in grayscale. The filter size value specified by the user determines the number of pixels included when calculating standard deviation values.
Filter size has a direct correlation to the extent to which gradient edges will be represented in resulting images. Faint edge values require larger filter size values in order to be expressed in a resulting output image. Larger filter size values require additional computation and thus take longer to complete than smaller filter size values.
The following screenshot captures the Standard Deviation Edge Detection sample application in action.
Image edge detection can be achieved through a variety of methods, each associated with particular benefits and trade-offs. This article is focussed on image edge detection through implementing standard deviation calculations on a pixel neighbourhood.
A pixel neighbourhood refers to a set of pixels related through pixel location coordinates. The width and height of a pixel neighbourhood must be equal; in other words, a pixel neighbourhood can only be square. Additionally, the width/height of a pixel neighbourhood must be an odd value: only then can the neighbourhood have an exact center pixel, and the pixel being inspected will always be located at that exact center. Each pixel represented in an image has a different set of neighbours; some neighbours overlap, but no two pixels have exactly the same neighbours. A pixel's neighbouring pixels can be determined by considering the pixel to be at the center of a block of pixels extending (size - 1) / 2 pixels in the horizontal, vertical and diagonal directions.
Butterfly Filter 3×3 Factor 5
Mean value calculation forms a core part in calculating standard deviation. The mean value from a set of values could be considered equivalent to the value set’s average value. The average of a set of values can be calculated as the sum total of all the values in a set, divided by the number of values in the set.
From the Standard Deviation Wikipedia page we gain the following quote:
In statistics, the standard deviation (SD, also represented by the Greek letter sigma, σ for the population standard deviation or s for the sample standard deviation) is a measure that is used to quantify the amount of variation or dispersion of a set of data values. A standard deviation close to 0 indicates that the data points tend to be very close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the data points are spread out over a wider range of values
A pixel neighbourhood’s standard deviation can indicate whether a significant change in image gradient is present in a pixel neighbourhood. A large standard deviation value is an indication that the neighbourhood’s pixel values could be spread far from the calculated mean. Inversely, a small standard deviation will indicate that the neighbourhood’s pixel values are closer to the calculated mean. A sudden change in image gradient will equate to a large standard deviation.
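A minimal Python sketch of the population standard deviation calculation (illustrative only; the article's sample code is C#):

```python
def std_dev(values):
    """Population standard deviation of a pixel neighbourhood."""
    mean = sum(values) / len(values)                           # step 1: mean
    variance = sum((v - mean) ** 2 for v in values) / len(values)  # step 2: variance
    return variance ** 0.5                                     # step 3: square root
```

A flat neighbourhood yields a deviation of zero, whereas a neighbourhood straddling a hard black-to-white edge yields a large deviation.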
Steps required in calculating standard deviation can be described as follows:
The standard deviation edge detection algorithm is based on the concept of standard deviation, providing additional capabilities. The algorithm allows for a more prominent expression of variance through means of a variance factor. Calculated variance values can be increased or decreased when implementing a variance factor. When variances are less significant, resulting images will express gradient edges at faint/low intensity levels. Providing a variance factor will result in output images expressing gradient edges at a higher intensity.
Variance factor and filter size should not be confused. When source gradient edges are expressed at low intensities, higher filter sizes would result in those low intensity source edges to be expressed in resulting images. In a scenario where high intensity gradient edges from a source image are expressed in resulting images at low intensities, a higher variance factor would increase resulting image gradient edge intensity.
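As an illustrative Python sketch (the function name is invented), squaring the factor once up front scales the final standard deviation linearly, matching the varianceFactor * varianceFactor assignment in the C# listing later in the article:

```python
def scaled_std_dev(values, variance_factor=1.0):
    """Standard deviation with variance scaled by a squared factor."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    # Squaring the factor here means the square root below scales the
    # resulting standard deviation linearly by variance_factor.
    variance *= variance_factor * variance_factor
    return variance ** 0.5
```

With the default factor of 1.0 the result is the plain standard deviation; a factor of 2.0 doubles the expressed edge intensity.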
The following list provides a summary of the steps required to implement the standard deviation edge detection algorithm:
It is important to note that the steps as described above should be applied per individual colour channel, Red, Green and Blue.
Butterfly Filter 3×3 Factor 4.5
The sample source code that accompanies this article provides a public extension method targeting the Bitmap class. A private overloaded implementation of the StandardDeviationEdgeDetection method performs the bulk of the required functionality. The following code snippet illustrates the public overloaded version of the StandardDeviationEdgeDetection method:
public static Bitmap StandardDeviationEdgeDetection(this Bitmap sourceBuffer, int filterSize, float varianceFactor = 1.0f, bool grayscaleOutput = true)
{
    return sourceBuffer.ToPixelBuffer()
                       .StandardDeviationEdgeDetection(sourceBuffer.Width, sourceBuffer.Height,
                                                       filterSize, varianceFactor, grayscaleOutput)
                       .ToBitmap(sourceBuffer.Width, sourceBuffer.Height);
}
The StandardDeviationEdgeDetection method accepts four parameters; the first, a Bitmap, signals that the method is an extension method targeting the Bitmap class. A brief description of the other parameters follows:
Butterfly Filter 3×3 Factor 4
The following code snippet provides the private implementation of the StandardDeviationEdgeDetection method, which performs all of the tasks required to implement the standard deviation edge detection algorithm.
private static byte[] StandardDeviationEdgeDetection(this byte[] pixelBuffer, int imageWidth, int imageHeight, int filterSize, float varianceFactor = 1.0f, bool grayscaleOutput = true)
{
    byte[] resultBuffer = new byte[pixelBuffer.Length];
    int filterOffset = (filterSize - 1) / 2;
    int calcOffset = 0;
    int stride = imageWidth * pixelByteCount;
    int byteOffset = 0;

    var neighbourCount = filterSize * filterSize;
    var blueNeighbours = new int[neighbourCount];
    var greenNeighbours = new int[neighbourCount];
    var redNeighbours = new int[neighbourCount];

    double resetValue = 0;
    double meanBlue = 0, meanGreen = 0, meanRed = 0;
    double varianceBlue = 0, varianceGreen = 0, varianceRed = 0;

    varianceFactor = varianceFactor * varianceFactor;

    for (int offsetY = filterOffset; offsetY < imageHeight - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < imageWidth - filterOffset; offsetX++)
        {
            byteOffset = offsetY * stride + offsetX * pixelByteCount;

            meanBlue = resetValue; meanGreen = resetValue; meanRed = resetValue;
            varianceBlue = resetValue; varianceGreen = resetValue; varianceRed = resetValue;

            for (int filterY = -filterOffset, neighbour = 0; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++, neighbour++)
                {
                    calcOffset = byteOffset + (filterX * pixelByteCount) + (filterY * stride);

                    blueNeighbours[neighbour] = pixelBuffer[calcOffset];
                    greenNeighbours[neighbour] = pixelBuffer[calcOffset + 1];
                    redNeighbours[neighbour] = pixelBuffer[calcOffset + 2];
                }
            }

            meanBlue = blueNeighbours.Average();
            meanGreen = greenNeighbours.Average();
            meanRed = redNeighbours.Average();

            for (int n = 0; n < neighbourCount; n++)
            {
                varianceBlue += SquareNumber(blueNeighbours[n] - meanBlue);
                varianceGreen += SquareNumber(greenNeighbours[n] - meanGreen);
                varianceRed += SquareNumber(redNeighbours[n] - meanRed);
            }

            varianceBlue = varianceBlue / neighbourCount * varianceFactor;
            varianceGreen = varianceGreen / neighbourCount * varianceFactor;
            varianceRed = varianceRed / neighbourCount * varianceFactor;

            if (grayscaleOutput)
            {
                var pixelValue = ByteVal(ByteVal(Math.Sqrt(varianceBlue)) |
                                         ByteVal(Math.Sqrt(varianceGreen)) |
                                         ByteVal(Math.Sqrt(varianceRed)));

                resultBuffer[byteOffset] = pixelValue;
                resultBuffer[byteOffset + 1] = pixelValue;
                resultBuffer[byteOffset + 2] = pixelValue;
                resultBuffer[byteOffset + 3] = Byte.MaxValue;
            }
            else
            {
                resultBuffer[byteOffset] = ByteVal(Math.Sqrt(varianceBlue));
                resultBuffer[byteOffset + 1] = ByteVal(Math.Sqrt(varianceGreen));
                resultBuffer[byteOffset + 2] = ByteVal(Math.Sqrt(varianceRed));
                resultBuffer[byteOffset + 3] = Byte.MaxValue;
            }
        }
    }

    return resultBuffer;
}
This method features several for loops, resulting in each image pixel being iterated. Notice how the two innermost loops declare negative initializer values. In order to determine a pixel's neighbourhood, the pixel should be considered as being located at the exact center of the neighbourhood. Negative initializer values enable the code to determine neighbouring pixels located to the left of and above the pixel being iterated.
A pixel neighbourhood needs to be determined in terms of each colour channel, Red, Green and Blue. The pixel neighbourhood of each colour channel must be averaged individually. Logically it follows that pixel neighbourhood variance should also be calculated per colour channel.
The method signature declares the varianceFactor parameter as optional, with a default value of 1.0. Should a variance factor not be required, the default value of 1.0 results in no change to the calculated variance value.
When grayscale output has been configured the resulting output pixel will express the same value on all three colour channels. The grayscale value will be calculated through the application of a bitwise OR operation, applied to the standard deviation of each colour channel. The square root of a pixel neighbourhood’s variance provides the standard deviation value for that pixel neighbourhood.
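The grayscale combination described above can be sketched in Python (illustrative only; the clamping mirrors the ByteVal method, and the function name is invented):

```python
def gray_from_deviations(sd_blue, sd_green, sd_red):
    """Combine per-channel standard deviations with a bitwise OR."""
    clamp = lambda v: min(255, max(0, int(v)))
    # Each standard deviation is clamped to a byte, OR-ed together,
    # and the combined result clamped again, as in the C# listing.
    return clamp(clamp(sd_blue) | clamp(sd_green) | clamp(sd_red))
```

Note that a bitwise OR is not an average: the result is at least as large as the largest clamped channel deviation, which tends to brighten detected edges.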
If grayscale output has not been configured, each resulting pixel colour channel will be assigned the standard deviation of the related colour channel on the source pixel.
private const byte maxByteValue = Byte.MaxValue;
private const byte minByteValue = Byte.MinValue;

public static byte ByteVal(int val)
{
    if (val < minByteValue) { return minByteValue; }
    else if (val > maxByteValue) { return maxByteValue; }
    else { return (byte)val; }
}

// An equivalent overload accepting double is required by callers such as
// ByteVal(Math.Sqrt(varianceBlue)) in the method above.
public static byte ByteVal(double val)
{
    if (val < minByteValue) { return minByteValue; }
    else if (val > maxByteValue) { return maxByteValue; }
    else { return (byte)val; }
}
The StandardDeviationEdgeDetection method reflects several references to the ByteVal method, as illustrated in the code snippet above. Casting double and int values to byte values could result in values exceeding the upper and lower bounds allowed by the byte type. The ByteVal method tests whether a value would exceed upper and lower bounds, when determined to do so the resulting value is assigned either the upper inclusive bound or lower inclusive bound value, depending on the bound being exceeded.
Bee Filter 3×3 Factor 5
This article features several sample images provided as examples. All sample images were created using the sample application. All of the original source images used in generating sample images have been licensed by their respective authors to allow for reproduction here. The following section lists each original source image and related license and copyright details.
Viceroy (Limenitis archippus), Mer Bleue Conservation Area, Ottawa, Ontario © 2008 D. Gordon E. Robertson is used here under a Creative Commons Attribution-Share Alike 3.0 Unported license.
Old World Swallowtail on Buddleja davidii © 2008 Thomas Bresson is used here under a Creative Commons Attribution 2.0 Generic license.
Cethosia cyane butterfly © 2006 Airbete is used here under a Creative Commons Attribution-Share Alike 3.0 Unported license.
“Weiße Baumnymphe (Idea leuconoe) fotografiert im Schmetterlingshaus des Maximilianpark Hamm” © 2009 Steffen Flor is used here under a Creative Commons Attribution-Share Alike 3.0 Unported license.
"Dark Blue Tiger tirumala septentrionis by kadavoor" © 2010 Jeevan Jose, Kerala, India is used here under a Creative Commons Attribution-ShareAlike 4.0 International License
"Common Lime Butterfly Papilio demoleus by Kadavoor" © 2010 Jeevan Jose, Kerala, India is used here under a Creative Commons Attribution-ShareAlike 4.0 International License
Syrphidae, Knüllwald, Hessen, Deutschland © 2007 Fritz Geller-Grimm is used here under a Creative Commons Attribution-Share Alike 3.0 Unported license
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images of which you can find URL links here:
This article explores the process of implementing an Image Distortion Blur filter. This image filter is classified as a non-photo realistic image filter, primarily implemented in rendering artistic effects.
Flower: Distortion Factor 15
This article is accompanied by a sample source code Visual Studio project which is available for download here.
Flower: Distortion Factor 10
The sample source code that accompanies this article includes a Windows Forms based sample application. The concepts explored in this article have all been implemented as part of the sample application. From an end user perspective the following configurable options are available:
The following image is a screenshot of the Image Distortion Blur sample application:
In this article and the accompanying sample source code images are distorted through slightly adjusting each individual pixel’s coordinates. The direction and distance by which pixel coordinates are adjusted differ per pixel as a result of being randomly selected. The maximum distance offset applied depends on the user specified Distortion Factor. Once all pixel coordinates have been updated, implementing a Median Filter provides smoothing and an image blur effect.
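A hypothetical Python sketch of the per-pixel random offset selection (the article's sample code is C#; the function name and seed here are invented for illustration):

```python
import random

def distorted_offset(x, y, distort_factor, rng):
    """Randomly displace a pixel coordinate by up to the distortion factor."""
    # Offsets fall in [-distort_factor - 1, distort_factor], mirroring
    # distortFactor - rand.Next(0, (distortFactor + 1) * 2) in the C# code.
    factor_max = (distort_factor + 1) * 2
    dx = distort_factor - rng.randrange(factor_max)
    dy = distort_factor - rng.randrange(factor_max)
    return x + dx, y + dy

rng = random.Random(42)
new_x, new_y = distorted_offset(10, 10, 5, rng)
```

Each pixel receives an independently drawn offset, which is what produces the noise-like scatter the subsequent median filter then smooths.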
Applying an Image Distortion Filter requires implementing the following steps:
Applying a Median Filter is the final step in implementing an Image Distortion Blur filter. Median Filters are often implemented to reduce image noise. The method of image distortion illustrated in this article produces artefacts similar in appearance to image noise, and a Median Filter is implemented in order to soften their appearance.
A Median Filter can be applied through implementing the following steps:
The sample source code defines the DistortionBlurFilter method, an extension method targeting the Bitmap class. The following code snippet illustrates the implementation:
public static Bitmap DistortionBlurFilter(this Bitmap sourceBitmap, int distortFactor)
{
    byte[] pixelBuffer = sourceBitmap.GetByteArray();
    byte[] resultBuffer = sourceBitmap.GetByteArray();

    int imageStride = sourceBitmap.Width * 4;
    int calcOffset = 0, filterY = 0, filterX = 0;
    int factorMax = (distortFactor + 1) * 2;
    Random rand = new Random();

    for (int k = 0; k + 4 < pixelBuffer.Length; k += 4)
    {
        filterY = distortFactor - rand.Next(0, factorMax);
        filterX = distortFactor - rand.Next(0, factorMax);

        if (filterX * 4 + (k % imageStride) < imageStride &&
            filterX * 4 + (k % imageStride) > 0)
        {
            calcOffset = k + filterY * imageStride + 4 * filterX;

            if (calcOffset >= 0 && calcOffset + 4 < resultBuffer.Length)
            {
                resultBuffer[calcOffset] = pixelBuffer[k];
                resultBuffer[calcOffset + 1] = pixelBuffer[k + 1];
                resultBuffer[calcOffset + 2] = pixelBuffer[k + 2];
            }
        }
    }

    return resultBuffer.GetImage(sourceBitmap.Width, sourceBitmap.Height).MedianFilter(3);
}
The MedianFilter extension method targets the Bitmap class. The implementation is as follows:
public static Bitmap MedianFilter(this Bitmap sourceBitmap, int matrixSize)
{
    byte[] pixelBuffer = sourceBitmap.GetByteArray();
    byte[] resultBuffer = new byte[pixelBuffer.Length];
    byte[] middlePixel;

    int imageStride = sourceBitmap.Width * 4;
    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0, filterY = 0, filterX = 0;
    List<int> neighbourPixels = new List<int>();

    for (int k = 0; k + 4 < pixelBuffer.Length; k += 4)
    {
        filterY = -filterOffset;
        filterX = -filterOffset;
        neighbourPixels.Clear();

        while (filterY <= filterOffset)
        {
            calcOffset = k + (filterX * 4) + (filterY * imageStride);

            if (calcOffset > 0 && calcOffset + 4 < pixelBuffer.Length)
            {
                neighbourPixels.Add(BitConverter.ToInt32(pixelBuffer, calcOffset));
            }

            filterX++;

            if (filterX > filterOffset)
            {
                filterX = -filterOffset;
                filterY++;
            }
        }

        neighbourPixels.Sort();

        // The median is the middle element of the sorted neighbourhood.
        middlePixel = BitConverter.GetBytes(neighbourPixels[neighbourPixels.Count / 2]);

        resultBuffer[k] = middlePixel[0];
        resultBuffer[k + 1] = middlePixel[1];
        resultBuffer[k + 2] = middlePixel[2];
        resultBuffer[k + 3] = middlePixel[3];
    }

    return resultBuffer.GetImage(sourceBitmap.Width, sourceBitmap.Height);
}
Flower: Distortion Factor 25
This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following images feature as sample images:
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images of which you can find URL links here:
This article serves to illustrate the concepts involved in implementing a Fuzzy Blur Filter. This filter results in rendering non-photo realistic images which express a certain artistic effect.
Frog: Filter Size 19×19
This article is accompanied by a sample source code Visual Studio project which is available for download here.
The sample source code accompanying this article includes a Windows Forms based test application. The concepts explored throughout this article can be replicated/tested using the sample application.
When executing the sample application the user interface exposes a number of configurable options:
The following image is a screenshot of the Fuzzy Blur Filter sample application in action:
Frog: Filter Size 9×9
The Fuzzy Blur Filter relies on the interference of image noise when performing edge detection in order to create a fuzzy effect. In addition image blurring results from performing a mean filter.
The steps involved in performing a Fuzzy Blur Filter can be described as follows:
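The three stages can be sketched as follows. Note that this is a simplified, conceptual Python illustration operating on a grayscale image, not the accompanying C# sample code; the `emphasise_edges` helper is a stand-in for the full Boolean Edge Detection step discussed later in this article.

```python
# Conceptual sketch of the Fuzzy Blur pipeline on a grayscale image
# (a simplified Python illustration; the article's sample code
# operates on 32-bit ARGB pixel buffers in C#).

def mean_filter(img, size):
    """Box blur: each pixel becomes the average of its size x size
    neighbourhood, with out-of-bounds indices clamped to the edge."""
    h, w = len(img), len(img[0])
    r = size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    total += img[yy][xx]
            out[y][x] = total // (size * size)
    return out

def emphasise_edges(img, factor):
    """Stand-in for Boolean Edge Detection: scale any pixel whose
    horizontal neighbours differ, clipping to the 0..255 byte range."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(1, w - 1):
            if img[y][x - 1] != img[y][x + 1]:
                out[y][x] = min(255, int(img[y][x] * factor))
    return out

def fuzzy_blur(img, size, factor1, factor2):
    # Edge emphasis -> mean blur -> edge emphasis, as described above.
    return emphasise_edges(
        mean_filter(emphasise_edges(img, factor1), size), factor2)
```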
Frog: Filter Size 9×9
A Mean Filter Blur, also known as a Box Blur, can be performed through image convolution. The size of the matrix/kernel implemented when performing image convolution will be determined through user input.
Every matrix/kernel element should be set to one. The resulting sum should be multiplied by a factor equating to one divided by the number of matrix/kernel elements. As an example, a matrix/kernel size of 3×3 can be expressed as follows:
An alternative expression can also be:
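Both expressions describe the same 3×3 mean kernel, which in matrix notation reads:

```latex
\frac{1}{9}
\begin{bmatrix}
1 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{bmatrix}
=
\begin{bmatrix}
\tfrac{1}{9} & \tfrac{1}{9} & \tfrac{1}{9} \\
\tfrac{1}{9} & \tfrac{1}{9} & \tfrac{1}{9} \\
\tfrac{1}{9} & \tfrac{1}{9} & \tfrac{1}{9}
\end{bmatrix}
```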
Frog: Filter Size 9×9
When performing Boolean Edge Detection a local threshold is usually implemented in order to exclude image noise. In this article, however, we rely on the interference of image noise in order to render a fuzzy image effect. By omitting the local threshold when performing Boolean Edge Detection, the sample source code ensures sufficient interference from image noise.
The steps involved in performing Boolean Edge Detection without a local threshold can be described as follows:
Note: A detailed article on Boolean Edge detection implementing a local threshold can be found here: C# How to: Boolean Edge Detection
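The mean comparison at the heart of the algorithm can be sketched for a single neighbourhood. The following Python snippet is a simplified grayscale illustration, not the C# sample code; the mask values match those defined by GetBooleanEdgeMasks later in this article.

```python
# Sketch of the Boolean Edge Detection test for a single 3x3
# neighbourhood of grayscale intensities (the C# implementation sums
# the blue, green and red channel values of each pixel instead).

EDGE_MASKS = {
    "011011011", "000111111", "110110110", "111111000",
    "011011001", "100110110", "111011000", "111110000",
    "111011001", "100110111", "001011111", "111110100",
    "000011111", "000110111", "001011011", "110110100",
}

def is_edge(neighbourhood):
    """neighbourhood: 3x3 list of intensities. Each pixel above the
    neighbourhood mean maps to '1', else '0'; the resulting pattern
    indicates an edge if it matches one of the sixteen masks."""
    values = [v for row in neighbourhood for v in row]
    mean = sum(values) / len(values)
    pattern = "".join("1" if v > mean else "0" for v in values)
    return pattern in EDGE_MASKS
```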
Frog: Filter Size 9×9
The sixteen predefined edge masks each represent an image edge in a different direction. The predefined edge masks can be expressed as:
Frog: Filter Size 13×13
The sample source code defines the MeanFilter method, an extension method targeting the Bitmap class. The definition is listed as follows:
private static Bitmap MeanFilter(this Bitmap sourceBitmap, int meanSize) { byte[] pixelBuffer = sourceBitmap.GetByteArray(); byte[] resultBuffer = new byte[pixelBuffer.Length];
double blue = 0.0, green = 0.0, red = 0.0; double factor = 1.0 / (meanSize * meanSize);
int imageStride = sourceBitmap.Width * 4; int filterOffset = meanSize / 2; int calcOffset = 0, filterY = 0, filterX = 0;
for (int k = 0; k + 4 < pixelBuffer.Length; k += 4) { blue = 0; green = 0; red = 0; filterY = -filterOffset; filterX = -filterOffset;
while (filterY <= filterOffset) { calcOffset = k + (filterX * 4) + (filterY * imageStride);
calcOffset = (calcOffset < 0 ? 0 : (calcOffset >= pixelBuffer.Length - 2 ? pixelBuffer.Length - 3 : calcOffset));
blue += pixelBuffer[calcOffset]; green += pixelBuffer[calcOffset + 1]; red += pixelBuffer[calcOffset + 2];
filterX++;
if (filterX > filterOffset) { filterX = -filterOffset; filterY++; } }
resultBuffer[k] = ClipByte(factor * blue); resultBuffer[k + 1] = ClipByte(factor * green); resultBuffer[k + 2] = ClipByte(factor * red); resultBuffer[k + 3] = 255; }
return resultBuffer.GetImage(sourceBitmap.Width, sourceBitmap.Height); }
Frog: Filter Size 19×19
Boolean Edge detection is performed in the sample source code through the implementation of the BooleanEdgeDetectionFilter method. This method has been defined as an extension method targeting the Bitmap class.
The following code snippet provides the definition of the BooleanEdgeDetectionFilter extension method:
public static Bitmap BooleanEdgeDetectionFilter( this Bitmap sourceBitmap, float edgeFactor) { byte[] pixelBuffer = sourceBitmap.GetByteArray(); byte[] resultBuffer = new byte[pixelBuffer.Length]; Buffer.BlockCopy(pixelBuffer, 0, resultBuffer, 0, pixelBuffer.Length);
List<string> edgeMasks = GetBooleanEdgeMasks(); int imageStride = sourceBitmap.Width * 4; int matrixMean = 0, pixelTotal = 0; int filterY = 0, filterX = 0, calcOffset = 0; string matrixPatern = String.Empty;
for (int k = 0; k + 4 < pixelBuffer.Length; k += 4) { matrixPatern = String.Empty; matrixMean = 0; pixelTotal = 0; filterY = -1; filterX = -1;
while (filterY < 2) { calcOffset = k + (filterX * 4) + (filterY * imageStride);
calcOffset = (calcOffset < 0 ? 0 : (calcOffset >= pixelBuffer.Length - 2 ? pixelBuffer.Length - 3 : calcOffset)); matrixMean += pixelBuffer[calcOffset]; matrixMean += pixelBuffer[calcOffset + 1]; matrixMean += pixelBuffer[calcOffset + 2];
filterX += 1;
if (filterX > 1) { filterX = -1; filterY += 1; } }
matrixMean = matrixMean / 9; filterY = -1; filterX = -1;
while (filterY < 2) { calcOffset = k + (filterX * 4) + (filterY * imageStride);
calcOffset = (calcOffset < 0 ? 0 : (calcOffset >= pixelBuffer.Length - 2 ? pixelBuffer.Length - 3 : calcOffset));
pixelTotal = pixelBuffer[calcOffset]; pixelTotal += pixelBuffer[calcOffset + 1]; pixelTotal += pixelBuffer[calcOffset + 2]; matrixPatern += (pixelTotal > matrixMean ? "1" : "0"); filterX += 1;
if (filterX > 1) { filterX = -1; filterY += 1; } }
if (edgeMasks.Contains(matrixPatern)) { resultBuffer[k] = ClipByte(resultBuffer[k] * edgeFactor);
resultBuffer[k + 1] = ClipByte(resultBuffer[k + 1] * edgeFactor);
resultBuffer[k + 2] = ClipByte(resultBuffer[k + 2] * edgeFactor); } }
return resultBuffer.GetImage(sourceBitmap.Width, sourceBitmap.Height); }
Frog: Filter Size 13×13
The predefined edge masks used in the mean comparison have been wrapped by the GetBooleanEdgeMasks method. The definition is as follows:
public static List<string> GetBooleanEdgeMasks() { List<string> edgeMasks = new List<string>();
edgeMasks.Add("011011011"); edgeMasks.Add("000111111"); edgeMasks.Add("110110110"); edgeMasks.Add("111111000"); edgeMasks.Add("011011001"); edgeMasks.Add("100110110"); edgeMasks.Add("111011000"); edgeMasks.Add("111110000"); edgeMasks.Add("111011001"); edgeMasks.Add("100110111"); edgeMasks.Add("001011111"); edgeMasks.Add("111110100"); edgeMasks.Add("000011111"); edgeMasks.Add("000110111"); edgeMasks.Add("001011011"); edgeMasks.Add("110110100");
return edgeMasks; }
Frog: Filter Size 19×19
The FuzzyEdgeBlurFilter method serves as the implementation of a Fuzzy Blur Filter. As discussed earlier, a Fuzzy Blur Filter involves enhancing image edges through Boolean Edge Detection, performing a Mean Filter blur and then performing Boolean Edge Detection once again. This method has been defined as an extension method targeting the Bitmap class.
The following code snippet provides the definition of the FuzzyEdgeBlurFilter method:
public static Bitmap FuzzyEdgeBlurFilter(this Bitmap sourceBitmap, int filterSize, float edgeFactor1, float edgeFactor2) { return sourceBitmap.BooleanEdgeDetectionFilter(edgeFactor1). MeanFilter(filterSize).BooleanEdgeDetectionFilter(edgeFactor2); }
Frog: Filter Size 3×3
This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following images feature as sample images:
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images of which you can find URL links here:
This article explores Abstract Colour Image filters as a process of Non-photo Realistic Image Rendering. The output images produced reflects a variety of artistic effects.
Colour Values | Red, Blue | Filter Size | 9 |
Edge Tracing | Black | Edge Threshold | 55 |
This article is accompanied by a sample source code Visual Studio project which is available for download here.
Colour Values | Blue | Filter Size | 9 |
Edge Tracing | Double Intensity | Edge Threshold | 60 |
The sample source code that accompanies this article includes a Windows Forms based sample application. The concepts discussed in this article have been illustrated through the implementation of the sample application.
When executing the sample application the user interface exposes several configurable options, described as follows:
Colour Values | Green | Filter Size | 9 |
Edge Tracing | Double Intensity | Edge Threshold | 60 |
Colour Values | Red, Blue | Filter Size | 9 |
Edge Tracing | Double Intensity | Edge Threshold | 55 |
The following image is a screenshot of the Image Abstract Colour Filter sample application in action:
The Abstract Colour Filter explored in this article can be considered a non-photo realistic filter. As the title implies, non-photo realistic filters transform an input image, usually a photograph, producing a result image which visibly lacks the aspects of realism expressed in the input image. In most scenarios the objective of non-photo realistic filters can be described as using photographic images in rendering images having an animated appearance.
Colour Values | Blue | Filter Size | 11 |
Edge Tracing | Double Intensity | Edge Threshold | 60 |
Colour Values | Red | Filter Size | 11 |
Edge Tracing | Double Intensity | Edge Threshold | 60 |
The Abstract Colour Filter can be broken down into two main components: Colour Averaging and Image Edge Detection. Through implementing a variety of colour averaging algorithms, resulting images express abstract yet uniform colours. Abstract colours result in output images no longer appearing photo realistic; instead output images appear unconventionally augmented/artistic.
Output images express a lesser degree of image definition and detail, when compared to input images. In some scenarios output images might not be easily recognisable. In order to retain some image detail, edge/boundary detail detected from input images will be emphasised in result images.
Colour Values | Green | Filter Size | 11 |
Edge Tracing | Double Intensity | Edge Threshold | 60 |
Colour Values | Green, Blue | Filter Size | 11 |
Edge Tracing | Double Intensity | Edge Threshold | 75 |
The steps required when applying an Abstract Colour Filter can be described as follows:
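As a conceptual illustration of these steps, the following Python sketch combines neighbourhood averaging with gradient based edge detection on a grayscale image, tracing detected edges in black. It is a simplified stand-in for the per-channel C# implementation, which also supports colour shifting and several edge tracing modes.

```python
# Conceptual sketch of the Abstract Colour Filter pipeline on a
# grayscale image: average neighbourhood values, detect edges on the
# original image, then darken edge pixels in the averaged result.

def average_filter(img, size):
    """Neighbourhood average; border pixels are left unmodified."""
    h, w = len(img), len(img[0])
    r = size // 2
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            total = sum(img[y + dy][x + dx]
                        for dy in range(-r, r + 1)
                        for dx in range(-r, r + 1))
            out[y][x] = total // (size * size)
    return out

def gradient_edges(img, threshold):
    """Mark pixels whose horizontal or vertical gradient meets the
    threshold, mirroring the gradient based edge detection step."""
    h, w = len(img), len(img[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            horizontal = abs(img[y][x - 1] - img[y][x + 1])
            vertical = abs(img[y - 1][x] - img[y + 1][x])
            edges[y][x] = horizontal >= threshold or vertical >= threshold
    return edges

def abstract_filter(img, size, threshold):
    averaged = average_filter(img, size)
    edges = gradient_edges(img, threshold)
    return [[0 if edges[y][x] else averaged[y][x]
             for x in range(len(img[0]))]
            for y in range(len(img))]
```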
Colour Values | Red, Green | Filter Size | 11 |
Edge Tracing | Double Intensity | Edge Threshold | 75 |
Colour Values | Red | Filter Size | 11 |
Edge Tracing | Black | Edge Threshold | 60 |
In the sample source code neighbourhood colour averaging has been implemented through the definition of the AverageColoursFilter extension method. This method creates a new image using the source image as input. The following code snippet provides the definition:
public static Bitmap AverageColoursFilter(this Bitmap sourceBitmap, int matrixSize, bool applyBlue = true, bool applyGreen = true, bool applyRed = true, ColorShiftType shiftType = ColorShiftType.None) { byte[] pixelBuffer = sourceBitmap.GetByteArray(); byte[] resultBuffer = new byte[pixelBuffer.Length];
int calcOffset = 0; int byteOffset = 0; int blue = 0; int green = 0; int red = 0; int filterOffset = (matrixSize - 1) / 2;
for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++) { for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++) { byteOffset = offsetY * sourceBitmap.Width*4 + offsetX * 4;
blue = 0; green = 0; red = 0;
for (int filterY = -filterOffset; filterY <= filterOffset; filterY++) { for (int filterX = -filterOffset; filterX <= filterOffset; filterX++) { calcOffset = byteOffset + (filterX * 4) + (filterY * sourceBitmap.Width * 4);
blue += pixelBuffer[calcOffset]; green += pixelBuffer[calcOffset + 1]; red += pixelBuffer[calcOffset + 2]; } }
blue = blue / matrixSize; green = green / matrixSize; red = red / matrixSize;
if (applyBlue == false ) { blue = pixelBuffer[byteOffset]; }
if (applyGreen == false ) { green = pixelBuffer[byteOffset + 1]; }
if (applyRed == false ) { red = pixelBuffer[byteOffset + 2]; }
if (shiftType == ColorShiftType.None) { resultBuffer[byteOffset] = (byte)blue; resultBuffer[byteOffset + 1] = (byte)green; resultBuffer[byteOffset + 2] = (byte)red; resultBuffer[byteOffset + 3] = 255; } else if (shiftType == ColorShiftType.ShiftLeft) { resultBuffer[byteOffset] = (byte)green; resultBuffer[byteOffset + 1] = (byte)red; resultBuffer[byteOffset + 2] = (byte)blue; resultBuffer[byteOffset + 3] = 255; } else if (shiftType == ColorShiftType.ShiftRight) { resultBuffer[byteOffset] = (byte)red; resultBuffer[byteOffset + 1] = (byte)blue; resultBuffer[byteOffset + 2] = (byte)green; resultBuffer[byteOffset + 3] = 255; } } }
Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);
return resultBitmap; }
Colour Values | Green, Blue | Filter Size | 17 |
Edge Tracing | Black | Edge Threshold | 85 |
When applying an Abstract Colours Filter, one of the required steps involves image edge detection. The sample source code implements gradient based edge detection through the definition of the GradientBasedEdgeDetectionFilter method. This method has been defined as an extension method targeting the Bitmap class. The definition is as follows:
public static Bitmap GradientBasedEdgeDetectionFilter( this Bitmap sourceBitmap, byte threshold = 0) { BitmapData sourceData = sourceBitmap.LockBits(new Rectangle (0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height]; byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length); sourceBitmap.UnlockBits(sourceData);
int sourceOffset = 0, gradientValue = 0; bool exceedsThreshold = false;
for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++) { for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++) { sourceOffset = offsetY * sourceData.Stride + offsetX * 4; gradientValue = 0; exceedsThreshold = true;
// Horizontal Gradient CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4, ref gradientValue, threshold, 2); // Vertical Gradient exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride, sourceOffset + sourceData.Stride, ref gradientValue, threshold, 2);
if (exceedsThreshold == false) { gradientValue = 0;
// Horizontal Gradient exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4, ref gradientValue, threshold);
if (exceedsThreshold == false) { gradientValue = 0;
// Vertical Gradient exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride, sourceOffset + sourceData.Stride, ref gradientValue, threshold);
if (exceedsThreshold == false) { gradientValue = 0;
// Diagonal Gradient : NW-SE CheckThreshold(pixelBuffer, sourceOffset - 4 - sourceData.Stride, sourceOffset + 4 + sourceData.Stride, ref gradientValue, threshold, 2); // Diagonal Gradient : NE-SW exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride + 4, sourceOffset - 4 + sourceData.Stride, ref gradientValue, threshold, 2);
if (exceedsThreshold == false) { gradientValue = 0;
// Diagonal Gradient : NW-SE exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - 4 - sourceData.Stride, sourceOffset + 4 + sourceData.Stride, ref gradientValue, threshold);
if (exceedsThreshold == false) { gradientValue = 0;
// Diagonal Gradient : NE-SW exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride + 4, sourceOffset + sourceData.Stride - 4, ref gradientValue, threshold); } } } } }
resultBuffer[sourceOffset] = (byte)(exceedsThreshold ? 255 : 0); resultBuffer[sourceOffset + 1] = resultBuffer[sourceOffset]; resultBuffer[sourceOffset + 2] = resultBuffer[sourceOffset]; resultBuffer[sourceOffset + 3] = 255; } }
Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length); resultBitmap.UnlockBits(resultData);
return resultBitmap; }
Colour Values | Red, Green | Filter Size | 5 |
Edge Tracing | Black | Edge Threshold | 85 |
The AbstractColorsFilter method serves as a means of combining an average colour image and an edge detected image. This extension method targets the Bitmap class. The following code snippet details the definition:
public static Bitmap AbstractColorsFilter(this Bitmap sourceBitmap, int matrixSize, byte edgeThreshold, bool applyBlue = true, bool applyGreen = true, bool applyRed = true, EdgeTracingType edgeType = EdgeTracingType.Black, ColorShiftType shiftType = ColorShiftType.None) { Bitmap edgeBitmap = sourceBitmap.GradientBasedEdgeDetectionFilter(edgeThreshold);
Bitmap colorBitmap = sourceBitmap.AverageColoursFilter(matrixSize, applyBlue, applyGreen, applyRed, shiftType);
byte[] edgeBuffer = edgeBitmap.GetByteArray(); byte[] colorBuffer = colorBitmap.GetByteArray(); byte[] resultBuffer = colorBitmap.GetByteArray();
for (int k = 0; k + 4 < edgeBuffer.Length; k += 4) { if (edgeBuffer[k] == 255) { switch (edgeType) { case EdgeTracingType.Black: resultBuffer[k] = 0; resultBuffer[k+1] = 0; resultBuffer[k+2] = 0; break; case EdgeTracingType.White: resultBuffer[k] = 255; resultBuffer[k+1] = 255; resultBuffer[k+2] = 255; break; case EdgeTracingType.HalfIntensity: resultBuffer[k] = ClipByte(resultBuffer[k] * 0.5); resultBuffer[k + 1] = ClipByte(resultBuffer[k + 1] * 0.5); resultBuffer[k + 2] = ClipByte(resultBuffer[k + 2] * 0.5); break; case EdgeTracingType.DoubleIntensity: resultBuffer[k] = ClipByte(resultBuffer[k] * 2); resultBuffer[k + 1] = ClipByte(resultBuffer[k + 1] * 2); resultBuffer[k + 2] = ClipByte(resultBuffer[k + 2] * 2); break; case EdgeTracingType.ColorInversion: resultBuffer[k] = ClipByte(255 - resultBuffer[k]); resultBuffer[k+1] = ClipByte(255 - resultBuffer[k+1]); resultBuffer[k+2] = ClipByte(255 - resultBuffer[k+2]); break; } } }
Bitmap resultBitmap = new Bitmap (sourceBitmap.Width, sourceBitmap.Height);
BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);
return resultBitmap; }
Colour Values | Red, Green | Filter Size | 17 |
Edge Tracing | Double Intensity | Edge Threshold | 85 |
This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following images feature as sample images:
Lung Oyster (Pleurotus pulmonarius), Småland, Sweden.
Attributed to: Jörg Hempel. This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Germany license.
The following series of images represent additional filter results.
Colour Values | Red | Filter Size | 9 |
Edge Tracing | Double Intensity | Edge Threshold | 60 |
Colour Values | Green, Blue | Filter Size | 9 |
Edge Tracing | Double Intensity | Edge Threshold | 60 |
Colour Values | Green, Blue | Filter Size | 9 |
Edge Tracing | Black | Edge Threshold | 60 |
Colour Values | Red, Green, Blue | Filter Size | 9 |
Edge Tracing | Double Intensity | Edge Threshold | 55 |
Colour Values | Red, Green, Blue | Filter Size | 11 |
Edge Tracing | Black | Edge Threshold | 75 |
Colour Values | Red, Blue | Filter Size | 11 |
Edge Tracing | Double Intensity | Edge Threshold | 75 |
Colour Values | Green | Filter Size | 17 |
Edge Tracing | Black | Edge Threshold | 85 |
Colour Values | Red | Filter Size | 17 |
Edge Tracing | Black | Edge Threshold | 85 |
Colour Values | Red, Green, Blue | Filter Size | 5 |
Edge Tracing | Half Intensity | Edge Threshold | 85 |
Colour Values | Blue | Filter Size | 5 |
Edge Tracing | Double Intensity | Edge Threshold | 75 |
Colour Values | Green | Filter Size | 5 |
Edge Tracing | Double Intensity | Edge Threshold | 75 |
Colour Values | Red | Filter Size | 5 |
Edge Tracing | Double Intensity | Edge Threshold | 75 |
Colour Values | Red, Green, Blue | Filter Size | 5 |
Edge Tracing | Black | Edge Threshold | 75 |
Colour Values | Red, Green, Blue | Filter Size | 9 |
Edge Tracing | Black | Edge Threshold | 55 |
Colour Values | Red, Green, Blue | Filter Size | 3 |
Edge Tracing | Black | Edge Threshold | 75 |
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images of which you can find URL links here:
This article explores various image processing concepts, which feature in combination when implementing Image Boundary Extraction. Concepts covered within this article include: Morphological Image Erosion and Image Dilation, Image Addition and Subtraction, Boundary Sharpening, Boundary Tracing and Boundary Extraction.
Parrot: Boundary Extraction, 3×3, Red, Green, Blue
This article is accompanied by a sample source code Visual Studio project which is available for download here.
This article’s accompanying sample source code includes the definition of a sample application. The sample application serves as an implementation of the concepts discussed in this article. In using the sample application concepts can be easily tested and replicated.
The sample application has been defined as a Windows Forms application. The user interface enables the user to configure several options which influence the output produced from image filtering processes. The following section describes the options available to a user when executing the sample application:
The following image is a screenshot of the Image Boundary Extraction sample application in action:
Parrot: Boundary Extraction, 3×3, Green
Image Boundary Extraction can be considered a method of Image Edge Detection. In contrast to more commonly implemented gradient based edge detection methods, Image Boundary Extraction originates from Morphological Image Filters.
When drawing a comparison, Image Boundary Extraction and Morphological Edge Detection express strong similarities. Morphological Edge Detection results from the difference between image erosion and image dilation. Considered from a different point of view, creating one image expressing thicker edges and another image expressing thinner edges provides the means to calculate the difference in edges.
Image Boundary Extraction implements the same concept as Morphological Edge Detection: calculating the difference between two renditions of the same image which differ only in how strongly image edges are expressed. Image Boundary Extraction relies on calculating the difference between either image erosion and the source image, or image dilation and the source image. The difference between image erosion and image dilation will in most cases be greater than the difference between either operation and the source image. Consequently Image Boundary Extraction expresses a smaller difference than Morphological Edge Detection, which can be observed in boundaries being rendered as finer/thinner lines.
Difference of Gaussians is another method of edge detection which functions along the same basis. Edges are determined by calculating the difference between two images, each having been filtered from the same source image, using a Gaussian blur of differing intensity levels.
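The dilation-minus-source relationship can be demonstrated in a few lines. The following Python sketch is a simplified grayscale illustration of the concept, not the C# sample code, which applies the same idea per colour channel:

```python
# Sketch of grayscale Boundary Extraction: dilate with a 3x3
# structuring element (neighbourhood maximum), then subtract the
# source image. Only pixels adjacent to an intensity step survive,
# yielding thin boundary lines.

def dilate3x3(img):
    """Morphological dilation: each pixel becomes the maximum of its
    3x3 neighbourhood (clipped at the image borders)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(img[yy][xx]
                            for yy in range(max(y - 1, 0), min(y + 2, h))
                            for xx in range(max(x - 1, 0), min(x + 2, w)))
    return out

def boundary_extraction(img):
    dilated = dilate3x3(img)
    return [[dilated[y][x] - img[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
```

On a uniform image the dilation equals the source, so the extracted boundary is zero everywhere; boundaries appear only where gradient levels change.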
Parrot: Boundary Extraction, 3×3, Red, Green, Blue
The concept of Boundary Sharpening refers to enhancing or sharpening the boundaries or edges expressed in a source/input image. Boundaries can be easily determined or extracted as discussed earlier when exploring Boundary Extraction.
The steps involved in performing Boundary Sharpening can be described as follows:
Parrot: Boundary Extraction, 3×3, Red, Green, Blue
Boundary Tracing refers to applying image filters which result in image edges/boundaries appearing darker or more pronounced. This type of filter also relies on Boundary Extraction.
Boundary Tracing can be implemented in two steps, described as follows:
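Both operations reduce to clipped per-channel addition and subtraction. The following Python sketch illustrates the idea on grayscale values; `boundary` is assumed to be the output of a boundary extraction step such as dilation minus the source:

```python
# Sketch of Boundary Sharpening (add the extracted boundary back to
# the source, brightening edges) and Boundary Tracing (subtract it,
# darkening edges), with results clipped to the 0..255 byte range.
# Simplified grayscale illustration of the C# AddImage/SubtractImage
# approach.

def add_clipped(a, b):
    return min(a + b, 255)

def subtract_clipped(a, b):
    return max(a - b, 0)

def boundary_sharpen(img, boundary):
    return [[add_clipped(img[y][x], boundary[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]

def boundary_trace(img, boundary):
    return [[subtract_clipped(img[y][x], boundary[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```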
Parrot: Boundary Extraction, 3×3, Red, Green, Blue
The accompanying sample source code defines the MorphologyOperation method, defined as an extension method targeting the Bitmap class. In terms of parameters this method expects a two dimensional array representing a structuring element. The other required parameter represents an enumeration value indicating which Morphological Operation to perform, either erosion or dilation.
The following code snippet provides the definition in full:
private static Bitmap MorphologyOperation(this Bitmap sourceBitmap, bool[,] se, MorphologyOperationType morphType, bool applyBlue = true, bool applyGreen = true, bool applyRed = true) { BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);
int filterOffset = (se.GetLength(0) - 1) / 2; int calcOffset = 0, byteOffset = 0; byte blueErode = 0, greenErode = 0, redErode = 0; byte blueDilate = 0, greenDilate = 0, redDilate = 0;
for (int offsetY = 0; offsetY < sourceBitmap.Height - filterOffset; offsetY++) { for (int offsetX = 0; offsetX < sourceBitmap.Width - filterOffset; offsetX++) { byteOffset = offsetY * sourceData.Stride + offsetX * 4;
blueErode = 255; greenErode = 255; redErode = 255; blueDilate = 0; greenDilate = 0; redDilate = 0;
for (int filterY = -filterOffset; filterY <= filterOffset; filterY++) { for (int filterX = -filterOffset; filterX <= filterOffset; filterX++) { if (se[filterY + filterOffset, filterX + filterOffset] == true) { calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);
calcOffset = (calcOffset < 0 ? 0 : (calcOffset >= pixelBuffer.Length - 2 ? pixelBuffer.Length - 3 : calcOffset));
blueDilate = (pixelBuffer[calcOffset] > blueDilate ? pixelBuffer[calcOffset] : blueDilate);
greenDilate = (pixelBuffer[calcOffset + 1] > greenDilate ? pixelBuffer[calcOffset + 1] : greenDilate);
redDilate = (pixelBuffer[calcOffset + 2] > redDilate ? pixelBuffer[calcOffset + 2] : redDilate);
blueErode = (pixelBuffer[calcOffset] < blueErode ? pixelBuffer[calcOffset] : blueErode);
greenErode = (pixelBuffer[calcOffset + 1] < greenErode ? pixelBuffer[calcOffset + 1] : greenErode);
redErode = (pixelBuffer[calcOffset + 2] < redErode ? pixelBuffer[calcOffset + 2] : redErode); } } }
blueErode = (applyBlue ? blueErode : pixelBuffer[byteOffset]); blueDilate = (applyBlue ? blueDilate : pixelBuffer[byteOffset]);
greenErode = (applyGreen ? greenErode : pixelBuffer[byteOffset + 1]); greenDilate = (applyGreen ? greenDilate : pixelBuffer[byteOffset + 1]);
redErode = (applyRed ? redErode : pixelBuffer[byteOffset + 2]); redDilate = (applyRed ? redDilate : pixelBuffer[byteOffset + 2]);
if (morphType == MorphologyOperationType.Erosion) { resultBuffer[byteOffset] = blueErode; resultBuffer[byteOffset + 1] = greenErode; resultBuffer[byteOffset + 2] = redErode; } else if (morphType == MorphologyOperationType.Dilation) { resultBuffer[byteOffset] = blueDilate; resultBuffer[byteOffset + 1] = greenDilate; resultBuffer[byteOffset + 2] = redDilate; }
resultBuffer[byteOffset + 3] = 255; } }
Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height); BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);
return resultBitmap; }
Parrot: Boundary Extraction, 3×3, Red, Green
The sample source code encapsulates the process of combining two separate images through means of addition. The AddImage method serves as a single declaration of image addition functionality. This method has been defined as an extension method targeting the Bitmap class. Boundary Sharpen filtering implements image addition.
The following code snippet provides the definition of the AddImage extension method:
private static Bitmap AddImage(this Bitmap sourceBitmap, Bitmap addBitmap) { BitmapData sourceData = sourceBitmap.LockBits(new Rectangle (0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);
sourceBitmap.UnlockBits(sourceData);
BitmapData addData = addBitmap.LockBits(new Rectangle(0, 0, addBitmap.Width, addBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
byte[] addBuffer = new byte[addData.Stride * addData.Height];
Marshal.Copy(addData.Scan0, addBuffer, 0, addBuffer.Length);
addBitmap.UnlockBits(addData);
for (int k = 0; k + 4 < resultBuffer.Length && k + 4 < addBuffer.Length; k += 4) { resultBuffer[k] = AddColors(resultBuffer[k], addBuffer[k]); resultBuffer[k + 1] = AddColors(resultBuffer[k + 1], addBuffer[k + 1]); resultBuffer[k + 2] = AddColors(resultBuffer[k + 2], addBuffer[k + 2]); resultBuffer[k + 3] = 255; }
Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);
return resultBitmap; }
private static byte AddColors(byte color1, byte color2) { int result = color1 + color2;
return (byte)(result > 255 ? 255 : result); }
Parrot: Boundary Extraction, 3×3, Red, Green, Blue
In a similar fashion to the AddImage method, the sample code defines the SubtractImage method. By definition this method serves as an extension method targeting the Bitmap class. Image subtraction has been implemented in Boundary Extraction and Boundary Tracing.
The definition of the SubtractImage method is listed as follows:
private static Bitmap SubtractImage(this Bitmap sourceBitmap, Bitmap subtractBitmap) { BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);
sourceBitmap.UnlockBits(sourceData);
BitmapData subtractData = subtractBitmap.LockBits(new Rectangle(0, 0, subtractBitmap.Width, subtractBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
byte[] subtractBuffer = new byte[subtractData.Stride * subtractData.Height];
Marshal.Copy(subtractData.Scan0, subtractBuffer, 0, subtractBuffer.Length);
subtractBitmap.UnlockBits(subtractData);
for (int k = 0; k + 4 < resultBuffer.Length && k + 4 < subtractBuffer.Length; k += 4) { resultBuffer[k] = SubtractColors(resultBuffer[k], subtractBuffer[k]);
resultBuffer[k + 1] = SubtractColors(resultBuffer[k + 1], subtractBuffer[k + 1]);
resultBuffer[k + 2] = SubtractColors(resultBuffer[k + 2], subtractBuffer[k + 2]);
resultBuffer[k + 3] = 255; }
Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);
return resultBitmap; }
private static byte SubtractColors(byte color1, byte color2) { int result = (int)color1 - (int)color2;
return (byte)(result < 0 ? 0 : result); }
Parrot: Boundary Extraction, 3×3, Green
In the sample source code, Image Boundary Extraction can be achieved by invoking the BoundaryExtraction method. Defined as an extension method, the BoundaryExtraction method targets the Bitmap class.
As discussed earlier, this method performs Boundary Extraction through subtracting the source image from a dilated copy of the source image.
The following code snippet details the definition of the BoundaryExtraction method:
private static Bitmap BoundaryExtraction(this Bitmap sourceBitmap, bool[,] se, bool applyBlue = true, bool applyGreen = true, bool applyRed = true) { Bitmap resultBitmap = sourceBitmap.MorphologyOperation(se, MorphologyOperationType.Dilation, applyBlue, applyGreen, applyRed);
resultBitmap = resultBitmap.SubtractImage(sourceBitmap);
return resultBitmap; }
Parrot: Boundary Extraction, 3×3, Red, Blue
Boundary Sharpening in the sample source code has been implemented through the definition of the BoundarySharpen method. The BoundarySharpen extension method targets the Bitmap class. The following code snippet provides the definition:
private static Bitmap BoundarySharpen(this Bitmap sourceBitmap, bool[,] se, bool applyBlue = true, bool applyGreen = true, bool applyRed = true) { Bitmap resultBitmap = sourceBitmap.BoundaryExtraction(se, applyBlue, applyGreen, applyRed);
resultBitmap = sourceBitmap.MorphologyOperation(se, MorphologyOperationType.Dilation, applyBlue, applyGreen, applyRed).AddImage(resultBitmap);
return resultBitmap; }
Parrot: Boundary Extraction, 3×3, Green
Boundary Tracing has been defined through the BoundaryTrace extension method, which targets the Bitmap class. Similar to the BoundarySharpen method, this method performs Boundary Extraction, the result of which is subtracted from the original source image. Subtracting image boundaries/edges results in those boundaries/edges being darkened, or traced. The definition of the BoundaryTrace extension method is detailed as follows:
private static Bitmap BoundaryTrace(this Bitmap sourceBitmap, bool[,] se, bool applyBlue = true, bool applyGreen = true, bool applyRed = true) { Bitmap resultBitmap = sourceBitmap.BoundaryExtraction(se, applyBlue, applyGreen, applyRed);
resultBitmap = sourceBitmap.SubtractImage(resultBitmap);
return resultBitmap; }
Parrot: Boundary Extraction, 3×3, Green, Blue
The BoundaryExtractionFilter method is the only method defined as publicly accessible. Following convention, this method’s definition signals the method as an extension method targeting the Bitmap class. This method is intended to act as a wrapper method: a single method capable of performing Boundary Extraction, Boundary Sharpening and Boundary Tracing, depending on method parameters.
The definition of the BoundaryExtractionFilter method is detailed by the following code snippet:
public static Bitmap BoundaryExtractionFilter(this Bitmap sourceBitmap, bool[,] se, BoundaryExtractionFilterType filterType, bool applyBlue = true, bool applyGreen = true, bool applyRed = true) { Bitmap resultBitmap = null;
if (filterType == BoundaryExtractionFilterType.BoundaryExtraction) { resultBitmap = sourceBitmap.BoundaryExtraction(se, applyBlue, applyGreen, applyRed); } else if (filterType == BoundaryExtractionFilterType.BoundarySharpen) { resultBitmap = sourceBitmap.BoundarySharpen(se, applyBlue, applyGreen, applyRed); } else if (filterType == BoundaryExtractionFilterType.BoundaryTrace) { resultBitmap = sourceBitmap.BoundaryTrace(se, applyBlue, applyGreen, applyRed); }
return resultBitmap; }
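As a usage sketch, assuming the extension methods above and the System.Drawing namespaces are in scope, a 3×3 structuring element with every position enabled could be applied through the wrapper as follows (the file paths are illustrative placeholders):

```csharp
// A 3x3 structuring element with every position enabled.
bool[,] structuringElement =
{
    { true, true, true },
    { true, true, true },
    { true, true, true }
};

// Load a source image; the file path shown is a placeholder.
Bitmap sourceBitmap = new Bitmap("input.png");

// Apply Boundary Extraction through the public wrapper method.
Bitmap resultBitmap = sourceBitmap.BoundaryExtractionFilter(
    structuringElement, BoundaryExtractionFilterType.BoundaryExtraction);

resultBitmap.Save("boundary.png", ImageFormat.Png);
```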
Parrot: Boundary Extraction, 3×3, Red, Green, Blue
This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following images feature as sample images:
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images of which you can find URL links here:
It is the purpose of this article to illustrate the concept of Difference of Gaussians Edge Detection. This article extends the conventional implementation of Difference of Gaussian algorithms through the application of equally sized matrix kernels only differing by a weight factor.
Frog: Kernel 5×5, Weight1 0.1, Weight2 2.1
This article is accompanied by a sample source code Visual Studio project which is available for download here.
This article relies on a sample application included as part of the accompanying sample source code. The sample application serves as a practical implementation of the concepts explored throughout this article.
The sample application user interface enables the user to configure and control the implementation of a Difference of Gaussians Edge Detection Image filter. The configuration options exposed through the sample application’s user interface can be detailed as follows:
The following image is a screenshot of the Weighted Difference of Gaussians sample application in action:
Frog: Kernel 5×5, Weight1 1.8, Weight2 0.1
The Gaussian Blur algorithm can be described as one of the most popular and widely implemented methods of image blurring. From Wikipedia we gain the following excerpt:
A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function. It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales.
Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. This is also known as a two-dimensional Weierstrass transform.
Take Note: The Gaussian Blur algorithm has the attribute of smoothing image detail/definition whilst also having an edge preservation attribute. When applying a Gaussian Blur to an image a level of image detail/definition will be blurred/smoothed away, done in a fashion that would exclude/preserve image gradient edges.
Frog: Kernel 5×5, Weight1 2.7, Weight2 0.1
Difference of Gaussians refers to a specific method of image edge detection. Difference of Gaussians, commonly abbreviated as DoG, functions through the implementation of Gaussian Image Blurring.
A clear and concise description can be found on the Difference of Gaussians Wikipedia Article Page:
In imaging science, difference of Gaussians is a feature enhancement algorithm that involves the subtraction of one blurred version of an original image from another, less blurred version of the original. In the simple case of grayscale images, the blurred images are obtained by convolving the original grayscale images with Gaussian kernels having differing standard deviations. Blurring an image using a Gaussian kernel suppresses only high-frequency spatial information. Subtracting one image from the other preserves spatial information that lies between the range of frequencies that are preserved in the two blurred images. Thus, the difference of Gaussians is a band-pass filter that discards all but a handful of spatial frequencies that are present in the original grayscale image.
In a conventional sense Difference of Gaussians involves applying Gaussian Blurring to images created as copies of the original source/input image. There must be a difference in the size of the kernels implemented when applying image convolution. A typical example would be applying a 3×3 Gaussian blur to one image copy whilst applying a 5×5 Gaussian blur to another image copy. The final step requires creating a result image by subtracting the two blurred image copies. The results obtained from subtraction represent the edges forming part of the source/input image.
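The conventional process can be sketched as follows. Note that GaussianBlurFilter is a hypothetical helper standing in for any Gaussian Blur convolution implementation, and SubtractImage refers to the image subtraction approach shown earlier in this series:

```csharp
// Conventional Difference of Gaussians: blur two copies of the source
// image with differently sized kernels, then subtract the results.
// GaussianBlurFilter is a hypothetical helper, not this article's code.
Bitmap blurred3x3 = sourceBitmap.GaussianBlurFilter(3);
Bitmap blurred5x5 = sourceBitmap.GaussianBlurFilter(5);

// Pixels differing between the two blur intensities indicate edges.
Bitmap edgeBitmap = blurred5x5.SubtractImage(blurred3x3);
```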
This article extends beyond the conventional method of implementing Difference of Gaussians Edge Detection. The implementation illustrated in this article retains the core concept of subtracting values which have been blurred to different intensities. The implementation method explored here differs from the conventional method in the sense that the matrix kernels implemented do not differ in size. Both matrix kernels are in fact required to have the same size dimensions.
The matrix kernels implemented are equal in terms of their size dimensions, although kernel values differ. Expressed from another angle: equally sized matrix kernels, of which one represents a more intense level of blurring than the other. A matrix kernel’s resulting blur intensity is determined by the weight factor implemented when calculating the kernel values.
Frog: Kernel 5×5, Weight1 3.7, Weight2 0.2
The advantages of implementing equally sized kernels can be described as follows:
Single Convolution implementation: Image Convolution involves executing several nested code loops. Application performance can be severely negatively impacted when executing large nested loops. The conventional method of implementing Difference of Gaussians generally involves having to implement two instances of image convolution, one per image copy. The Difference of Gaussians method implemented in this article executes the code loops related to image convolution only once. Considering the kernels are equal in size, both can be iterated within the same set of loops.
Eliminating Image subtraction: In conventional Difference of Gaussians implementations, images expressing differing intensity levels of Gaussian Blurring have to be subtracted. The Difference of Gaussians implementation method described in this article eliminates the need to perform image subtraction. When applying image convolution using both kernels simultaneously, the two results obtained, one from each kernel, can be subtracted and assigned to the result image. In addition, calculating both kernel results at the same time removes the need to create two temporary source image copies.
Frog: Kernel 5×5, Weight1 2.4, Weight2 0.3
When implementing Difference of Gaussians Edge Detection several steps are required; those steps are detailed as follows:
Frog: Kernel 5×5, Weight1 2.1, Weight2 0.5
The sample application implements Gaussian Blur Kernel calculations. The matrix kernels implemented in image convolution are calculated at runtime, as opposed to being hard coded. Being able to dynamically construct convolution kernels has the advantage of providing a greater degree of runtime control over how image convolution is applied.
Several steps are involved in calculating Gaussian Blur Matrix Kernels. The first required step is to determine the Matrix Kernel Size and Weight. The size and weight factor of a matrix kernel comprise the two configurable values implemented when calculating Gaussian Blur Kernels. In the case of this article and the sample application those values will be configured by the user through the sample application’s user interface.
The formula implemented in calculating Gaussian Blur Kernels can be expressed as follows:
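The two-dimensional Gaussian function, reconstructed here in standard notation to match the calculation performed by the sample code, is:

```latex
G(x, y) = \frac{1}{2 \pi \sigma^{2}} \, e^{-\frac{x^{2} + y^{2}}{2 \sigma^{2}}}
```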
The formula contains a number of symbols, which define how the filter will be implemented. The symbols forming part of the Gaussian Kernel formula are described in the following list:
Note: The formula’s implementation expects x and y to equal zero values when representing the coordinates of the pixel located in the middle of the kernel.
Frog: Kernel 7×7, Weight1 0.1, Weight2 2.0
The sample application defines the GaussianCalculator.Calculate method. This method accepts two parameters, kernel size and kernel weight. The following code snippet details the implementation:
public static double[,] Calculate(int length, double weight) { double[,] kernel = new double[length, length]; double sumTotal = 0;
int kernelRadius = length / 2; double distance = 0;
double calculatedEuler = 1.0 / (2.0 * Math.PI * Math.Pow(weight, 2));
for (int filterY = -kernelRadius; filterY <= kernelRadius; filterY++) { for (int filterX = -kernelRadius; filterX <= kernelRadius; filterX++) { distance = ((filterX * filterX) + (filterY * filterY)) / (2 * (weight * weight));
kernel[filterY + kernelRadius, filterX + kernelRadius] = calculatedEuler * Math.Exp(-distance);
sumTotal += kernel[filterY + kernelRadius, filterX + kernelRadius]; } }
for (int y = 0; y < length; y++) { for (int x = 0; x < length; x++) { kernel[y, x] = kernel[y, x] * (1.0 / sumTotal); } }
return kernel; }
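A quick way to validate the Calculate method: a Gaussian kernel is normalised, so regardless of size and weight its elements should sum to approximately 1. The following sketch assumes the GaussianCalculator class above is in scope:

```csharp
// Calculate a 5x5 Gaussian kernel with a weight (sigma) of 1.4.
double[,] kernel = GaussianCalculator.Calculate(5, 1.4);

// Sum all kernel elements; normalisation means the total is ~1.0.
double sumTotal = 0;

for (int y = 0; y < 5; y++)
{
    for (int x = 0; x < 5; x++)
    {
        sumTotal += kernel[y, x];
    }
}

Console.WriteLine(sumTotal); // approximately 1.0
```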
Frog: Kernel 3×3, Weight1 0.1, Weight2 1.8
The sample source code defines the DifferenceOfGaussianFilter method. This method has been defined as an extension method targeting the Bitmap class. The following code snippet provides the implementation:
public static Bitmap DifferenceOfGaussianFilter(this Bitmap sourceBitmap, int matrixSize, double weight1, double weight2) { double[,] kernel1 = GaussianCalculator.Calculate(matrixSize, (weight1 > weight2 ? weight1 : weight2));
double[,] kernel2 = GaussianCalculator.Calculate(matrixSize, (weight1 > weight2 ? weight2 : weight1));
BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height]; byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height]; byte[] grayscaleBuffer = new byte[sourceData.Width * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length); sourceBitmap.UnlockBits(sourceData);
double rgb = 0;
for (int source = 0, dst = 0; source < pixelBuffer.Length && dst < grayscaleBuffer.Length; source += 4, dst++) { rgb = pixelBuffer[source] * 0.11f; rgb += pixelBuffer[source + 1] * 0.59f; rgb += pixelBuffer[source + 2] * 0.3f;
grayscaleBuffer[dst] = (byte)rgb; }
double color1 = 0.0; double color2 = 0.0;
int filterOffset = (matrixSize - 1) / 2; int calcOffset = 0;
for (int source = 0, dst = 0; source < grayscaleBuffer.Length && dst + 4 <= resultBuffer.Length; source++, dst += 4) { color1 = 0; color2 = 0;
for (int filterY = -filterOffset; filterY <= filterOffset; filterY++) { for (int filterX = -filterOffset; filterX <= filterOffset; filterX++) { calcOffset = source + (filterX) + (filterY * sourceBitmap.Width);
calcOffset = (calcOffset < 0 ? 0 : (calcOffset >= grayscaleBuffer.Length ? grayscaleBuffer.Length - 1 : calcOffset));
color1 += (grayscaleBuffer[calcOffset]) * kernel1[filterY + filterOffset, filterX + filterOffset];
color2 += (grayscaleBuffer[calcOffset]) * kernel2[filterY + filterOffset, filterX + filterOffset]; } }
color1 = color1 - color2; color1 = (color1 >= weight1 - weight2 ? 255 : 0);
resultBuffer[dst] = (byte)color1; resultBuffer[dst + 1] = (byte)color1; resultBuffer[dst + 2] = (byte)color1; resultBuffer[dst + 3] = 255; }
Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length); resultBitmap.UnlockBits(resultData);
return resultBitmap; }
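Invoking the filter is then a single call. The parameter values below mirror the figure captions featured in this article, and the file paths are placeholders:

```csharp
// Load a source image (placeholder path).
Bitmap sourceBitmap = new Bitmap("frog.png");

// 5x5 kernels; the method internally assigns the larger weight to kernel1.
Bitmap edgeBitmap = sourceBitmap.DifferenceOfGaussianFilter(5, 0.1, 2.1);

edgeBitmap.Save("frog_edges.png", ImageFormat.Png);
```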
Frog: Kernel 3×3, Weight1 2.1, Weight2 0.7
This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following image files feature as sample images:
Panamanian Golden Frog
Dendropsophus Microcephalus
Tyler’s Tree Frog
Mimic Poison Frog
Phyllobates Terribilis
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images of which you can find URL links here:
This article explores the concept of rendering ASCII Art from source images. Beyond exploring concepts this article also provides a practical implementation of all the steps required in creating an Image ASCII Filter.
Sir Tim Berners-Lee: 2 Pixels Per Character, 12 Characters, Font Size 4, Zoom 100
This article is accompanied by a sample source code Visual Studio project which is available for download here.
The sample source code that accompanies this article includes a sample application. The concepts illustrated in this article can be tested and replicated using the sample application.
The sample application user interface implements a variety of functionality which can be described as follows:
The following image is a screenshot of the Image ASCII Art sample application in action:
Image ASCII Art Sample Application
Anders Hejlsberg: 1 Pixel Per Character, 24 Characters, Font Size 6, Zoom 20
ASCII Art in various forms has been part of computer culture since the pioneering days of computing. From Wikipedia we gain the following:
ASCII art is a graphic design technique that uses computers for presentation and consists of pictures pieced together from the 95 printable (from a total of 128) characters defined by the ASCII Standard from 1963 and ASCII compliant character sets with proprietary extended characters (beyond the 128 characters of standard 7-bit ASCII). The term is also loosely used to refer to text based visual art in general. ASCII art can be created with any text editor, and is often used with free-form languages. Most examples of ASCII art require a fixed-width font (non-proportional fonts, as on a traditional typewriter) such as Courier for presentation.
Among the oldest known examples of ASCII art are the creations by computer-art pioneer Kenneth Knowlton from around 1966, who was working for Bell Labs at the time.^{[1]} "Studies in Perception I" by Ken Knowlton and Leon Harmon from 1966 shows some examples of their early ASCII art.^{[2]}
One of the main reasons ASCII art was born was because early printers often lacked graphics ability and thus characters were used in place of graphic marks. Also, to mark divisions between different print jobs from different users, bulk printers often used ASCII art to print large banners, making the division easier to spot so that the results could be more easily separated by a computer operator or clerk. ASCII art was also used in early e-mail when images could not be embedded.
Bjarne Stroustrup: 1 Pixel Per Character, 12 Characters, Font Size 6, Zoom 60
This article explores the steps involved in rendering ASCII Art text from source/input images. The following sections detail the steps required to render ASCII Art text strings from source/input images:
Linus Torvalds: 1 Pixel Per Character, 16 Characters, Font Size 5, Zoom 60
When rendering high definition ASCII Art the resulting text can easily consist of several thousand characters. Attempting to display such a vast amount of text in a traditional text editor would in most scenarios be futile. An alternative method, which retains high definition whilst still being viewable, is to create an image from the rendered text and then reduce the image dimensions.
The sample code employs the following steps when converting rendered text to an image:
Alan Turing: 1 Pixel Per Character, 16 Characters, Font Size 4, Zoom 100
The sample source code implements four methods when implementing an Image ASCII Filter, the methods are:
The GenerateRandomString method, as the name implies, generates a string consisting of randomly selected characters. The number of characters contained in the string will be determined by the parameter value passed to this method. The following code snippet provides the implementation of the GenerateRandomString method:
private static string GenerateRandomString(int maxSize) { StringBuilder stringBuilder = new StringBuilder(maxSize); Random randomChar = new Random();
char charValue;
for (int k = 0; k < maxSize; k++) { charValue = (char)(Math.Floor(255 * randomChar.NextDouble() * 4));
if (stringBuilder.ToString().IndexOf(charValue) != -1) { charValue = (char)(Math.Floor((byte)charValue * randomChar.NextDouble())); }
if (Char.IsControl(charValue) == false && Char.IsPunctuation(charValue) == false && stringBuilder.ToString().IndexOf(charValue) == -1) { stringBuilder.Append(charValue); randomChar = new Random((int)((byte)charValue * (k + 1) + DateTime.Now.Ticks)); } else { randomChar = new Random((int)((byte)charValue * (k + 1) + DateTime.UtcNow.Ticks)); k -= 1; } }
return stringBuilder.ToString().RandomStringSort(); }
Sir Tim Berners-Lee: 4 Pixels Per Character, 16 Characters, Font Size 6, Zoom 100
The RandomStringSort method has been defined as an extension method targeting the string class. This method provides a means of sorting a string in a random manner, in essence shuffling a string’s characters. The definition is as follows:
public static string RandomStringSort(this string stringValue) { char[] charArray = stringValue.ToCharArray();
Random randomIndex = new Random((byte)charArray[0]); int iterator = charArray.Length;
while(iterator > 1) { iterator -= 1;
int nextIndex = randomIndex.Next(iterator + 1);
char nextValue = charArray[nextIndex]; charArray[nextIndex] = charArray[iterator]; charArray[iterator] = nextValue; }
return new string(charArray); }
Anders Hejlsberg: 3 Pixels Per Character, 12 Characters, Font Size 5, Zoom 50
The sample source code defines the GetColorCharacter method, intended to map pixels to character values. This method has been defined as a private static method accepting three colour component values. The definition is as follows:
private static string colorCharacters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
private static string GetColorCharacter(int blue, int green, int red) { string colorChar = String.Empty; int intensity = (blue + green + red) / 3 * (colorCharacters.Length - 1) / 255;
colorChar = colorCharacters.Substring(intensity, 1).ToUpper(); colorChar += colorChar.ToLower();
return colorChar; }
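As a worked example of the mapping, assuming the default 26-character set: a mid-grey pixel averages to intensity (128 + 128 + 128) / 3 = 128, which (using integer arithmetic) indexes character (128 × 25) / 255 = 12, the letter M, returned doubled in mixed case:

```csharp
// Mid-grey: average intensity 128 maps to index 12 of the 26-character
// set, producing "Mm" (the character in upper and lower case).
string colorChar = GetColorCharacter(128, 128, 128);
Console.WriteLine(colorChar); // "Mm"
```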
Bjarne Stroustrup: 1 Pixel Per Character, 12 Characters, Font Size 4, Zoom 100
The ASCIIFilter method defined by the sample source code has the task of translating source/input images into text based ASCII Art. This method has been defined as an extension method targeting the Bitmap class. The following code snippet provides the definition:
public static string ASCIIFilter(this Bitmap sourceBitmap, int pixelBlockSize, int colorCount = 0) { BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length); sourceBitmap.UnlockBits(sourceData);
StringBuilder asciiArt = new StringBuilder();
int avgBlue = 0; int avgGreen = 0; int avgRed = 0; int offset = 0;
int rows = sourceBitmap.Height / pixelBlockSize; int columns = sourceBitmap.Width / pixelBlockSize;
if (colorCount > 0) { colorCharacters = GenerateRandomString(colorCount); }
for (int y = 0; y < rows; y++) { for (int x = 0; x < columns; x++) { avgBlue = 0; avgGreen = 0; avgRed = 0;
for (int pY = 0; pY < pixelBlockSize; pY++) { for (int pX = 0; pX < pixelBlockSize; pX++) { offset = y * pixelBlockSize * sourceData.Stride + x * pixelBlockSize * 4;
offset += pY * sourceData.Stride; offset += pX * 4;
avgBlue += pixelBuffer[offset]; avgGreen += pixelBuffer[offset + 1]; avgRed += pixelBuffer[offset + 2]; } }
avgBlue = avgBlue / (pixelBlockSize * pixelBlockSize); avgGreen = avgGreen / (pixelBlockSize * pixelBlockSize); avgRed = avgRed / (pixelBlockSize * pixelBlockSize);
asciiArt.Append(GetColorCharacter(avgBlue, avgGreen, avgRed)); }
asciiArt.Append("\r\n" ); }
return asciiArt.ToString(); }
Linus Torvalds: 1 Pixel Per Character, 8 Characters, Font Size 4, Zoom 80
The sample source code uses the GDI+ Graphics class when drawing rendered ASCII Art text onto Bitmap images. The sample source code defines the TextToImage method, an extension method targeting the string class. The definition is listed as follows:
public static Bitmap TextToImage(this string text, Font font, float factor) { Bitmap textBitmap = new Bitmap(1, 1);
Graphics graphics = Graphics.FromImage(textBitmap);
int width = (int)Math.Ceiling(graphics.MeasureString(text, font).Width * factor);
int height = (int)Math.Ceiling(graphics.MeasureString(text, font).Height * factor);
graphics.Dispose();
textBitmap = new Bitmap(width, height, PixelFormat.Format32bppArgb);
graphics = Graphics.FromImage(textBitmap); graphics.Clear(Color.Black);
graphics.CompositingQuality = CompositingQuality.HighQuality; graphics.InterpolationMode = InterpolationMode.HighQualityBicubic; graphics.PixelOffsetMode = PixelOffsetMode.HighQuality; graphics.SmoothingMode = SmoothingMode.HighQuality; graphics.TextRenderingHint = TextRenderingHint.AntiAliasGridFit;
graphics.ScaleTransform(factor, factor); graphics.DrawString(text, font, Brushes.White, new PointF(0, 0));
graphics.Flush(); graphics.Dispose();
return textBitmap; }
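The two methods combine into a short pipeline. The parameter values and file paths below are illustrative:

```csharp
// Load a source image (placeholder path).
Bitmap sourceBitmap = new Bitmap("portrait.png");

// Render ASCII text: 2x2 pixel blocks, 12 randomly selected characters.
string asciiArt = sourceBitmap.ASCIIFilter(2, 12);

// Draw the rendered text onto a new Bitmap using a fixed-width font.
using (Font font = new Font("Courier New", 4))
{
    Bitmap asciiBitmap = asciiArt.TextToImage(font, 1.0f);
    asciiBitmap.Save("portrait_ascii.png", ImageFormat.Png);
}
```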
Sir Tim Berners-Lee: 1 Pixel Per Character, 32 Characters, Font Size 4, Zoom 100
This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following image files feature as sample images:
The following section lists the original image files that were used as source/input images in generating the ASCII Art images found throughout this article.
Alan Turing
Anders Hejlsberg
Bjarne Stroustrup
Linus Torvalds
Tim Berners-Lee
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images of which you can find URL links here:
This article serves to provide a detailed discussion and implementation of a Stained Glass Image Filter. Primary topics explored include creating Voronoi Diagrams and pixel coordinate distance calculations implementing the Euclidean, Manhattan and Chebyshev methods. In addition, this article explores Gradient Based Edge Detection implementing thresholds.
Zurich: Block Size 15, Factor 4, Euclidean
This article is accompanied by a sample source code Visual Studio project which is available for download here.
This article’s accompanying sample source code includes a Windows Forms based sample application. The sample application provides an implementation of the concepts explored by this article. Concepts discussed can be easily replicated and tested by using the sample application.
Source/input image files can be specified from the local file system when clicking the Load Image button. Additionally users also have the option to save resulting filtered images by clicking the Save Image button.
The sample application through its user interface allows a user to specify several filter configuration options. Two main categories of configuration options have been defined as Block Properties and Edge Properties.
Block Properties relate to the process of rendering Voronoi Diagrams. The following configuration options have been implemented:
Salzburg: Block Size 20, Factor 1, Chebyshev, Edge Threshold 2
Edge Properties relate to the implementation of Image Gradient Based Edge Detection. Edge detection is an optional filter and can be enabled/disabled through the user interface. The implementation of edge detection serves to highlight/outline regions rendered as part of a Voronoi Diagram. The configuration options implemented are:
The following image is a screenshot of the Stained Glass Image Filter sample application in action:
Locarno: Block Size 10, Factor 4, Euclidean
The Stained Glass Image Filter detailed in this article operates on the basis of implementing modifications upon a specified sample/input image, producing resulting images which resemble the appearance of stained glass artwork.
A common variant of stained glass artwork comes in the form of several individual pieces of coloured glass being combined in order to create an image. The sample source code employs a similar method of combining what appears to be non-uniform puzzle pieces. The following list provides a broad overview of the steps involved in applying a Stained Glass Image Filter:
Bad Ragaz: Block Size 10, Factor 1, Manhattan
Voronoi Diagrams represent a fairly uncomplicated concept. In contrast, the implementation of Voronoi Diagrams proves somewhat more of a challenge. From Wikipedia we gain the following definition:
In mathematics, a Voronoi diagram is a way of dividing space into a number of regions. A set of points (called seeds, sites, or generators) is specified beforehand and for each seed there will be a corresponding region consisting of all points closer to that seed than to any other. The regions are called Voronoi cells. It is dual to the Delaunay triangulation.
In this article Voronoi Diagrams are generated resulting in regions expressing random shapes. Although region shapes are randomly generated, the parameters or ranges within which random values are selected are fixed/constant. The steps required in generating a Voronoi Diagram can be detailed as follows:
The following image illustrates an example Voronoi Diagram consisting of 10 regions:
Port Edward: Block Size 10, Factor 1, Chebyshev, Edge Threshold 2
The sample source code provides three different coordinate distance calculation methods: Euclidean, Manhattan and Chebyshev. A pixel’s nearest randomly generated coordinate depends on the distance between that pixel and the random coordinate. In most instances each method of calculating distance is likely to produce different output values, which in turn influences the region to which a pixel will be associated.
The most common method of distance calculation, Euclidean distance, has been described by Wikipedia as follows:
In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" distance between two points that one would measure with a ruler, and is given by the Pythagorean formula. By using this formula as distance, Euclidean space (or even any inner product space) becomes a metric space. The associated norm is called the Euclidean norm. Older literature refers to the metric as Pythagorean metric.
When calculating Euclidean distance the algorithm implemented can be expressed as follows:
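In standard notation, the Euclidean distance between a pixel p and a random coordinate q is:

```latex
d(p, q) = \sqrt{(q_{x} - p_{x})^{2} + (q_{y} - p_{y})^{2}}
```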
Zurich: Block Size 10, Factor 1, Euclidean
As an alternative to calculating Euclidean distance, the sample source code also implements Manhattan Distance calculation. Often Manhattan Distance calculation will be referred to as City Block, Taxicab Geometry or rectilinear distance. From Wikipedia we gain the following description:
Taxicab geometry, considered by Hermann Minkowski in the 19th century, is a form of geometry in which the usual distance function or metric of Euclidean geometry is replaced by a new metric in which the distance between two points is the sum of the absolute differences of their coordinates. The taxicab metric is also known as rectilinear distance, L_{1} distance or norm (see L^{p} space), city block distance, Manhattan distance, or Manhattan length, with corresponding variations in the name of the geometry.^{[1]} The latter names allude to the grid layout of most streets on the island of Manhattan, which causes the shortest path a car could take between two intersections in the borough to have length equal to the intersections’ distance in taxicab geometry
When calculating Manhattan Distance the algorithm implemented can be expressed as follows:
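The formula image does not form part of this text; the expression referred to is the standard Manhattan distance formula, consistent with the CalculateDistanceManhattan method listed later in this article:

distance = |x1 - x2| + |y1 - y2|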
Port Edward: Block Size 10, Factor 4, Euclidean
Chebyshev Distance is a distance metric resembling the way in which a king chess piece may move on a chessboard. Wikipedia provides the following description:
In mathematics, Chebyshev distance (or Tchebychev distance), Maximum metric, or L∞ metric^{[1]} is a metric defined on a vector space where the distance between two vectors is the greatest of their differences along any coordinate dimension.^{[2]} It is named after Pafnuty Chebyshev.
It is also known as chessboard distance, since in the game of chess the minimum number of moves needed by a king to go from one square on a chessboard to another equals the Chebyshev distance between the centers of the squares, if the squares have side length one, as represented in 2-D spatial coordinates with axes aligned to the edges of the board.^{[3]} For example, the Chebyshev distance between f6 and e2 equals 4.
When calculating Chebyshev Distance the algorithm implemented can be expressed as follows:
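The formula image does not form part of this text; the expression referred to is the standard Chebyshev distance formula, consistent with the CalculateDistanceChebyshev method listed later in this article:

distance = max(|x1 - x2|, |y1 - y2|)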
Salzburg: Block Size 20, Factor 1, Chebyshev
Various methods of image edge detection can easily be implemented in C#. Each method of edge detection provides a set of benefits, usually weighed against a set of trade-offs. In this article and the accompanying sample source code the Gradient Based Edge Detection method has been implemented.
Bear in mind that every region within the rendered Voronoi Diagram expresses only a single colour, although most regions differ in the colour they express. Once all pixels have been associated with a region and all pixel colour values have been updated, the resulting image consists mostly of clearly distinguishable colour regions. A Gradient Based method of edge detection performs efficiently at detecting the edges defined between different regions.
An image gradient can be considered a difference in colour intensity relating to a specific direction. Gradient Based Edge Detection should only be applied once all other tasks related to applying the Stained Glass Filter have been completed. Reflecting the GradientBasedEdgeDetectionFilter implementation listed later in this article, the steps involved in applying Gradient Based Edge Detection can be described as follows: iterate each pixel forming part of the source/input image; calculate colour gradients in the horizontal, vertical and diagonal directions, comparing each gradient against the specified threshold value; should any gradient exceed the threshold, consider the pixel part of an edge and set its colour to the specified edge colour.
Zurich: Block Size 10, Factor 4, Chebyshev
The sample source code defines two helper classes, both implemented when applying the Stained Glass Image Filter. The Pixel class represents a single pixel in terms of an XY-coordinate and Red, Green and Blue colour channel values. The definition is as follows:
public class Pixel
{
    private int xOffset = 0;
    public int XOffset
    {
        get { return xOffset; }
        set { xOffset = value; }
    }

    private int yOffset = 0;
    public int YOffset
    {
        get { return yOffset; }
        set { yOffset = value; }
    }

    private byte blue = 0;
    public byte Blue
    {
        get { return blue; }
        set { blue = value; }
    }

    private byte green = 0;
    public byte Green
    {
        get { return green; }
        set { green = value; }
    }

    private byte red = 0;
    public byte Red
    {
        get { return red; }
        set { red = value; }
    }
}
Zurich: Block Size 10, Factor 1, Chebyshev, Edge Threshold 1
The VoronoiPoint class serves as a means of recording randomly generated coordinates and referencing a region’s associated pixels. The definition is as follows:
public class VoronoiPoint
{
    private int xOffset = 0;
    public int XOffset
    {
        get { return xOffset; }
        set { xOffset = value; }
    }

    private int yOffset = 0;
    public int YOffset
    {
        get { return yOffset; }
        set { yOffset = value; }
    }

    private int blueTotal = 0;
    public int BlueTotal
    {
        get { return blueTotal; }
        set { blueTotal = value; }
    }

    private int greenTotal = 0;
    public int GreenTotal
    {
        get { return greenTotal; }
        set { greenTotal = value; }
    }

    private int redTotal = 0;
    public int RedTotal
    {
        get { return redTotal; }
        set { redTotal = value; }
    }

    public void CalculateAverages()
    {
        if (pixelCollection.Count > 0)
        {
            blueAverage = blueTotal / pixelCollection.Count;
            greenAverage = greenTotal / pixelCollection.Count;
            redAverage = redTotal / pixelCollection.Count;
        }
    }

    private int blueAverage = 0;
    public int BlueAverage
    {
        get { return blueAverage; }
    }

    private int greenAverage = 0;
    public int GreenAverage
    {
        get { return greenAverage; }
    }

    private int redAverage = 0;
    public int RedAverage
    {
        get { return redAverage; }
    }

    private List<Pixel> pixelCollection = new List<Pixel>();
    public List<Pixel> PixelCollection
    {
        get { return pixelCollection; }
    }

    public void AddPixel(Pixel pixel)
    {
        blueTotal += pixel.Blue;
        greenTotal += pixel.Green;
        redTotal += pixel.Red;

        pixelCollection.Add(pixel);
    }
}
Zurich: Block Size 20, Factor 1, Euclidean, Edge Threshold 1
From the perspective of a consuming code base the only requirement is invoking the StainedGlassColorFilter extension method; no additional work is required from external code consumers. The StainedGlassColorFilter method has been defined as an extension method targeting the Bitmap class. The method definition is as follows:
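As an illustration, invoking the filter from consuming code could take the following shape. Note that this is a hypothetical usage sketch: the file names serve only as examples, and any of the three DistanceFormulaType values may be specified.

// Load a source image from the local file system (example path)
Bitmap sourceBitmap = new Bitmap("input.png");

// Block size 10, factor 1, Euclidean distance, edges highlighted
// with threshold 1, rendered in black
Bitmap resultBitmap = sourceBitmap.StainedGlassColorFilter(10, 1.0, DistanceFormulaType.Euclidean, true, 1, Color.Black);

// Persist the filtered result (example path)
resultBitmap.Save("result.png", ImageFormat.Png);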
public static Bitmap StainedGlassColorFilter(this Bitmap sourceBitmap, int blockSize, double blockFactor, DistanceFormulaType distanceType, bool highlightEdges, byte edgeThreshold, Color edgeColor)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int neighbourHoodTotal = 0;
    int sourceOffset = 0;
    int resultOffset = 0;
    int currentPixelDistance = 0;
    int nearestPixelDistance = 0;
    int nearestPointIndex = 0;

    Random randomizer = new Random();
    List<VoronoiPoint> randomPointList = new List<VoronoiPoint>();

    for (int row = 0; row < sourceBitmap.Height - blockSize; row += blockSize)
    {
        for (int col = 0; col < sourceBitmap.Width - blockSize; col += blockSize)
        {
            sourceOffset = row * sourceData.Stride + col * 4;
            neighbourHoodTotal = 0;

            for (int y = 0; y < blockSize; y++)
            {
                for (int x = 0; x < blockSize; x++)
                {
                    resultOffset = sourceOffset + y * sourceData.Stride + x * 4;
                    neighbourHoodTotal += pixelBuffer[resultOffset];
                    neighbourHoodTotal += pixelBuffer[resultOffset + 1];
                    neighbourHoodTotal += pixelBuffer[resultOffset + 2];
                }
            }

            randomizer = new Random(neighbourHoodTotal);

            VoronoiPoint randomPoint = new VoronoiPoint();
            randomPoint.XOffset = randomizer.Next(0, blockSize) + col;
            randomPoint.YOffset = randomizer.Next(0, blockSize) + row;

            randomPointList.Add(randomPoint);
        }
    }

    int rowOffset = 0;
    int colOffset = 0;

    for (int bufferOffset = 0; bufferOffset < pixelBuffer.Length - 4; bufferOffset += 4)
    {
        rowOffset = bufferOffset / sourceData.Stride;
        colOffset = (bufferOffset % sourceData.Stride) / 4;

        currentPixelDistance = 0;
        nearestPixelDistance = blockSize * 4;
        nearestPointIndex = 0;

        List<VoronoiPoint> pointSubset = new List<VoronoiPoint>();

        pointSubset.AddRange(from t in randomPointList
                             where rowOffset >= t.YOffset - blockSize * 2 &&
                                   rowOffset <= t.YOffset + blockSize * 2
                             select t);

        for (int k = 0; k < pointSubset.Count; k++)
        {
            if (distanceType == DistanceFormulaType.Euclidean)
            {
                currentPixelDistance = CalculateDistanceEuclidean(pointSubset[k].XOffset, colOffset, pointSubset[k].YOffset, rowOffset);
            }
            else if (distanceType == DistanceFormulaType.Manhattan)
            {
                currentPixelDistance = CalculateDistanceManhattan(pointSubset[k].XOffset, colOffset, pointSubset[k].YOffset, rowOffset);
            }
            else if (distanceType == DistanceFormulaType.Chebyshev)
            {
                currentPixelDistance = CalculateDistanceChebyshev(pointSubset[k].XOffset, colOffset, pointSubset[k].YOffset, rowOffset);
            }

            if (currentPixelDistance <= nearestPixelDistance)
            {
                nearestPixelDistance = currentPixelDistance;
                nearestPointIndex = k;

                if (nearestPixelDistance <= blockSize / blockFactor)
                {
                    break;
                }
            }
        }

        Pixel tmpPixel = new Pixel();
        tmpPixel.XOffset = colOffset;
        tmpPixel.YOffset = rowOffset;
        tmpPixel.Blue = pixelBuffer[bufferOffset];
        tmpPixel.Green = pixelBuffer[bufferOffset + 1];
        tmpPixel.Red = pixelBuffer[bufferOffset + 2];

        pointSubset[nearestPointIndex].AddPixel(tmpPixel);
    }

    for (int k = 0; k < randomPointList.Count; k++)
    {
        randomPointList[k].CalculateAverages();

        for (int i = 0; i < randomPointList[k].PixelCollection.Count; i++)
        {
            resultOffset = randomPointList[k].PixelCollection[i].YOffset * sourceData.Stride + randomPointList[k].PixelCollection[i].XOffset * 4;

            resultBuffer[resultOffset] = (byte)randomPointList[k].BlueAverage;
            resultBuffer[resultOffset + 1] = (byte)randomPointList[k].GreenAverage;
            resultBuffer[resultOffset + 2] = (byte)randomPointList[k].RedAverage;
            resultBuffer[resultOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    if (highlightEdges == true)
    {
        resultBitmap = resultBitmap.GradientBasedEdgeDetectionFilter(edgeColor, edgeThreshold);
    }

    return resultBitmap;
}
Locarno: Block Size 10, Factor 4, Euclidean, Edge Threshold 1
As mentioned earlier, this article and the accompanying sample source code support coordinate distance calculations through three different calculation methods, namely Euclidean, Manhattan and Chebyshev. The method of distance calculation implemented depends on the configuration option specified by the user.
The CalculateDistanceEuclidean method calculates distance implementing the Euclidean Distance method. To aid faster execution this method calculates the square root of a specific value only once, keeping the result in memory for subsequent lookups. The following code snippet lists the definition of the CalculateDistanceEuclidean method:
private static Dictionary<int, int> squareRoots = new Dictionary<int, int>();

private static int CalculateDistanceEuclidean(int x1, int x2, int y1, int y2)
{
    int square = (x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2);

    if (squareRoots.ContainsKey(square) == false)
    {
        squareRoots.Add(square, (int)Math.Sqrt(square));
    }

    return squareRoots[square];
}
The two other methods of calculating distance are implemented through the CalculateDistanceManhattan and CalculateDistanceChebyshev methods. The definitions are as follows:
private static int CalculateDistanceManhattan(int x1, int x2, int y1, int y2)
{
    return Math.Abs(x1 - x2) + Math.Abs(y1 - y2);
}

private static int CalculateDistanceChebyshev(int x1, int x2, int y1, int y2)
{
    return Math.Max(Math.Abs(x1 - x2), Math.Abs(y1 - y2));
}
Bad Ragaz: Block Size 12, Factor 1, Chebyshev
Notice that the very last step performed by the StainedGlassColorFilter method applies Gradient Based Edge Detection, but only when edge highlighting has been specified by the user.
The following code snippet provides the implementation of the GradientBasedEdgeDetectionFilter extension method:
public static Bitmap GradientBasedEdgeDetectionFilter(this Bitmap sourceBitmap, Color edgeColour, byte threshold = 0)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int sourceOffset = 0, gradientValue = 0;
    bool exceedsThreshold = false;

    for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++)
    {
        for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++)
        {
            sourceOffset = offsetY * sourceData.Stride + offsetX * 4;
            gradientValue = 0;
            exceedsThreshold = true;

            // Horizontal Gradient
            CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4, ref gradientValue, threshold, 2);
            // Vertical Gradient
            exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride, sourceOffset + sourceData.Stride, ref gradientValue, threshold, 2);

            if (exceedsThreshold == false)
            {
                gradientValue = 0;

                // Horizontal Gradient
                exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4, ref gradientValue, threshold);

                if (exceedsThreshold == false)
                {
                    gradientValue = 0;

                    // Vertical Gradient
                    exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride, sourceOffset + sourceData.Stride, ref gradientValue, threshold);

                    if (exceedsThreshold == false)
                    {
                        gradientValue = 0;

                        // Diagonal Gradient : NW-SE
                        CheckThreshold(pixelBuffer, sourceOffset - 4 - sourceData.Stride, sourceOffset + 4 + sourceData.Stride, ref gradientValue, threshold, 2);
                        // Diagonal Gradient : NE-SW
                        exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride + 4, sourceOffset - 4 + sourceData.Stride, ref gradientValue, threshold, 2);

                        if (exceedsThreshold == false)
                        {
                            gradientValue = 0;

                            // Diagonal Gradient : NW-SE
                            exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - 4 - sourceData.Stride, sourceOffset + 4 + sourceData.Stride, ref gradientValue, threshold);

                            if (exceedsThreshold == false)
                            {
                                gradientValue = 0;

                                // Diagonal Gradient : NE-SW
                                exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride + 4, sourceOffset + sourceData.Stride - 4, ref gradientValue, threshold);
                            }
                        }
                    }
                }
            }

            if (exceedsThreshold == true)
            {
                resultBuffer[sourceOffset] = edgeColour.B;
                resultBuffer[sourceOffset + 1] = edgeColour.G;
                resultBuffer[sourceOffset + 2] = edgeColour.R;
            }

            resultBuffer[sourceOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
Zurich: Block Size 15, Factor 1, Manhattan, Edge Threshold 1
This article features a rendered graphic illustrating an example Voronoi Diagram, which has been released into the public domain by its author, Augochy of the Wikipedia project. This applies worldwide. The original can be downloaded from Wikipedia.
All of the photos that appear in this article were taken by myself. Photos listed under Zurich, Locarno and Bad Ragaz were shot in Switzerland. The photo listed as Salzburg was shot in Austria and the photo listed under Port Edward was shot in South Africa. To fully appreciate the extent to which the images have been modified, the following section details the original photos.
Zurich, Switzerland
Salzburg, Austria
Locarno, Switzerland
Bad Ragaz, Switzerland
Port Edward, South Africa
Zurich, Switzerland
Zurich, Switzerland
Zurich, Switzerland
Bad Ragaz, Switzerland
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images of which you can find URL links here:
This article provides a discussion and implementation of Image Oil Painting Filters and related Image Cartoon Filters.
Sunflower: Oil Painting, Filter 5, Levels 30, Cartoon Threshold 30
This article is accompanied by a sample source code Visual Studio project which is available for download here.
A sample application accompanies this article, providing a visual implementation of the concepts discussed throughout this article. Source/input images can be selected from the local file system and, if desired, filter result images can be saved to the local file system.
The two main types of functionality exposed by the sample application can be described as Image Oil Painting Filters and Image Cartoon Filters. The user interface provides the following user input options:
The following image is a screenshot of the Oil Painting Cartoon Filter sample application in action:
Rose: Oil Painting, Filter 15, Levels 10
The Image Oil Painting Filter consists of two main components: colour gradients and pixel colour intensities. As implied by the title, result images produced by this filter are similar in appearance to oil paintings. Result images express a lesser degree of image detail when compared to source/input images, and tend to exhibit smaller colour ranges.
Four steps are required when implementing an Oil Painting Filter. As reflected by the OilPaintFilter implementation listed later in this article, these steps can be described as follows: iterate each pixel of the source/input image, inspecting the neighbouring pixels determined by the specified filter size; reduce each neighbouring pixel's colour intensity to fit the number of levels specified; determine the intensity level occurring most frequently amongst neighbouring pixels; and set the result pixel to the average colour of the neighbouring pixels expressing the most frequent intensity level.
Roses: Oil Painting, Filter 11, Levels 60, Cartoon Threshold 80
When calculating colour intensity reduced to fit the number of levels specified the algorithm implemented can be expressed as follows:
In the algorithm listed above the variables implemented can be explained as follows:
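The formula image does not form part of this text; reconstructed from the OilPaintFilter implementation listed later in this article, the intensity reduction and its variables can be summarised as:

intensity = round(((R + G + B) / 3) × (levels − 1) / 255)

Here R, G and B represent a pixel's colour channel values, levels represents the number of intensity levels specified by the user, and intensity represents the reduced intensity level, expressed in the range 0 to levels − 1.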
Rose: Oil Painting, Filter 15, Levels 30
A Cartoon Filter effect can be achieved by combining an Image Oil Painting filter and an Edge Detection Filter. The Oil Painting filter has the effect of creating more gradual image colour gradients, in other words reducing image edge intensity.
The steps required in implementing a Cartoon filter can be listed as follows:
Daisy: Oil Painting, Filter 7, Levels 30, Cartoon Threshold 40
In the sample source code edge detection has been implemented through Gradient Based Edge Detection. This method of edge detection compares the difference in colour gradients between a pixel’s neighbouring pixels. A pixel forms part of an edge if the difference in neighbouring pixel colour values exceeds a specified threshold value. The steps involved in Gradient Based Edge Detection are as follows:
Rose: Oil Painting, Filter 9, Levels 30
The sample source code defines the OilPaintFilter method, an extension method targeting the Bitmap class. This method determines the colour intensity occurring most frequently amongst a pixel’s neighbouring pixels. The definition is detailed as follows:
public static Bitmap OilPaintFilter(this Bitmap sourceBitmap, int levels, int filterSize)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int[] intensityBin = new int[levels];
    int[] blueBin = new int[levels];
    int[] greenBin = new int[levels];
    int[] redBin = new int[levels];

    levels = levels - 1;

    int filterOffset = (filterSize - 1) / 2;
    int byteOffset = 0;
    int calcOffset = 0;
    int currentIntensity = 0;
    int maxIntensity = 0;
    int maxIndex = 0;

    double blue = 0;
    double green = 0;
    double red = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = green = red = 0;
            currentIntensity = maxIntensity = maxIndex = 0;

            intensityBin = new int[levels + 1];
            blueBin = new int[levels + 1];
            greenBin = new int[levels + 1];
            redBin = new int[levels + 1];

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                    currentIntensity = (int)Math.Round(((double)(pixelBuffer[calcOffset] + pixelBuffer[calcOffset + 1] + pixelBuffer[calcOffset + 2]) / 3.0 * (levels)) / 255.0);

                    intensityBin[currentIntensity] += 1;
                    blueBin[currentIntensity] += pixelBuffer[calcOffset];
                    greenBin[currentIntensity] += pixelBuffer[calcOffset + 1];
                    redBin[currentIntensity] += pixelBuffer[calcOffset + 2];

                    if (intensityBin[currentIntensity] > maxIntensity)
                    {
                        maxIntensity = intensityBin[currentIntensity];
                        maxIndex = currentIntensity;
                    }
                }
            }

            blue = blueBin[maxIndex] / maxIntensity;
            green = greenBin[maxIndex] / maxIntensity;
            red = redBin[maxIndex] / maxIntensity;

            resultBuffer[byteOffset] = ClipByte(blue);
            resultBuffer[byteOffset + 1] = ClipByte(green);
            resultBuffer[byteOffset + 2] = ClipByte(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
Rose: Oil Painting, Filter 7, Levels 20, Cartoon Threshold 20
The sample source code defines the CheckThreshold method. The purpose of this method is to determine the colour difference between two pixels, comparing that difference against the specified threshold value. The following code snippet provides the implementation:
private static bool CheckThreshold(byte[] pixelBuffer, int offset1, int offset2, ref int gradientValue, byte threshold, int divideBy = 1)
{
    gradientValue += Math.Abs(pixelBuffer[offset1] - pixelBuffer[offset2]) / divideBy;
    gradientValue += Math.Abs(pixelBuffer[offset1 + 1] - pixelBuffer[offset2 + 1]) / divideBy;
    gradientValue += Math.Abs(pixelBuffer[offset1 + 2] - pixelBuffer[offset2 + 2]) / divideBy;

    return (gradientValue >= threshold);
}
Rose: Oil Painting, Filter 13, Levels 15
The GradientBasedEdgeDetectionFilter method has been defined as an extension method targeting the Bitmap class. This method iterates each pixel forming part of the source/input image. Whilst iterating pixels the GradientBasedEdgeDetectionFilter extension method determines whether the colour gradients in various directions exceed the specified threshold value. A pixel is considered part of an edge if a colour gradient exceeds the threshold value. The implementation is as follows:
public static Bitmap GradientBasedEdgeDetectionFilter(this Bitmap sourceBitmap, byte threshold = 0)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int sourceOffset = 0, gradientValue = 0;
    bool exceedsThreshold = false;

    for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++)
    {
        for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++)
        {
            sourceOffset = offsetY * sourceData.Stride + offsetX * 4;
            gradientValue = 0;
            exceedsThreshold = true;

            // Horizontal Gradient
            CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4, ref gradientValue, threshold, 2);
            // Vertical Gradient
            exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride, sourceOffset + sourceData.Stride, ref gradientValue, threshold, 2);

            if (exceedsThreshold == false)
            {
                gradientValue = 0;

                // Horizontal Gradient
                exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4, ref gradientValue, threshold);

                if (exceedsThreshold == false)
                {
                    gradientValue = 0;

                    // Vertical Gradient
                    exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride, sourceOffset + sourceData.Stride, ref gradientValue, threshold);

                    if (exceedsThreshold == false)
                    {
                        gradientValue = 0;

                        // Diagonal Gradient : NW-SE
                        CheckThreshold(pixelBuffer, sourceOffset - 4 - sourceData.Stride, sourceOffset + 4 + sourceData.Stride, ref gradientValue, threshold, 2);
                        // Diagonal Gradient : NE-SW
                        exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride + 4, sourceOffset - 4 + sourceData.Stride, ref gradientValue, threshold, 2);

                        if (exceedsThreshold == false)
                        {
                            gradientValue = 0;

                            // Diagonal Gradient : NW-SE
                            exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - 4 - sourceData.Stride, sourceOffset + 4 + sourceData.Stride, ref gradientValue, threshold);

                            if (exceedsThreshold == false)
                            {
                                gradientValue = 0;

                                // Diagonal Gradient : NE-SW
                                exceedsThreshold = CheckThreshold(pixelBuffer, sourceOffset - sourceData.Stride + 4, sourceOffset + sourceData.Stride - 4, ref gradientValue, threshold);
                            }
                        }
                    }
                }
            }

            resultBuffer[sourceOffset] = (byte)(exceedsThreshold ? 255 : 0);
            resultBuffer[sourceOffset + 1] = resultBuffer[sourceOffset];
            resultBuffer[sourceOffset + 2] = resultBuffer[sourceOffset];
            resultBuffer[sourceOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
Rose: Oil Painting, Filter 7, Levels 20, Cartoon Threshold 20
The CartoonFilter extension method serves to combine images generated by the OilPaintFilter and GradientBasedEdgeDetectionFilter methods. The CartoonFilter method is defined as an extension method targeting the Bitmap class. In this method pixels detected as forming part of an edge are set to black in Oil Painting filtered images. The definition is as follows:
public static Bitmap CartoonFilter(this Bitmap sourceBitmap, int levels, int filterSize, byte threshold)
{
    Bitmap paintFilterImage = sourceBitmap.OilPaintFilter(levels, filterSize);
    Bitmap edgeDetectImage = sourceBitmap.GradientBasedEdgeDetectionFilter(threshold);

    BitmapData paintData = paintFilterImage.LockBits(new Rectangle(0, 0, paintFilterImage.Width, paintFilterImage.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] paintPixelBuffer = new byte[paintData.Stride * paintData.Height];

    Marshal.Copy(paintData.Scan0, paintPixelBuffer, 0, paintPixelBuffer.Length);
    paintFilterImage.UnlockBits(paintData);

    BitmapData edgeData = edgeDetectImage.LockBits(new Rectangle(0, 0, edgeDetectImage.Width, edgeDetectImage.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] edgePixelBuffer = new byte[edgeData.Stride * edgeData.Height];

    Marshal.Copy(edgeData.Scan0, edgePixelBuffer, 0, edgePixelBuffer.Length);
    edgeDetectImage.UnlockBits(edgeData);

    byte[] resultBuffer = new byte[edgeData.Stride * edgeData.Height];

    for (int k = 0; k + 4 < paintPixelBuffer.Length; k += 4)
    {
        if (edgePixelBuffer[k] == 255 || edgePixelBuffer[k + 1] == 255 || edgePixelBuffer[k + 2] == 255)
        {
            resultBuffer[k] = 0;
            resultBuffer[k + 1] = 0;
            resultBuffer[k + 2] = 0;
            resultBuffer[k + 3] = 255;
        }
        else
        {
            resultBuffer[k] = paintPixelBuffer[k];
            resultBuffer[k + 1] = paintPixelBuffer[k + 1];
            resultBuffer[k + 2] = paintPixelBuffer[k + 2];
            resultBuffer[k + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
Rose: Oil Painting, Filter 9, Levels 25, Cartoon Threshold 25
This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following image files feature as sample images:
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images of which you can find URL links here:
This article’s objective is to illustrate concepts relating to Compass Edge Detection. The edge detection methods implemented in this article include: Prewitt, Sobel, Scharr, Kirsch and Isotropic.
Wasp: Scharr 3 x 3 x 8
This article is accompanied by a sample source code Visual Studio project which is available for download here.
The sample source code accompanying this article includes a Windows Forms based sample application. When using the sample application users are able to load source/input images from and save result images to the local file system. The user interface provides a combobox which contains the supported methods of Compass Edge Detection. Selecting an item from the combobox results in the related Compass Edge Detection method being applied to the current source/input image. Supported methods are:
The following image is a screenshot of the Compass Edge Detection Sample Application in action:
Bee: Isotropic 3 x 3 x 8
The concept title Compass Edge Detection can be explained through the implementation of compass directions. Compass Edge Detection can be implemented through image convolution, using multiple matrix kernels, each suited to detecting edges in a specific direction. The edge directions often implemented are:
Each of the compass directions listed above differ by 45 degrees. Applying a kernel rotation of 45 degrees to an existing direction specific edge detection kernel results in a new kernel suited to detecting edges in the next compass direction.
Various edge detection kernels can be implemented in Compass Edge Detection. This article and accompanying sample source code implements the following kernel types:
Praying Mantis: Sobel 3 x 3 x 8
The steps required when implementing Compass Edge Detection can, in broad terms, be described as follows: determine a base kernel suited to detecting edges in a single direction; rotate the base kernel, calculating a kernel for each compass direction implemented; convolve the source/input image implementing each of the direction-specific kernels; and express each result pixel in terms of the highest gradient value calculated amongst the individual kernels.
Prewitt Compass Kernels
LadyBug: Prewitt 3 x 3 x 8
Kernels can be rotated by implementing a rotate transform. Repeatedly rotating by 45 degrees results in eight kernels, each suited to a different direction. The algorithm implemented when performing a rotate transform can be expressed as follows:
Rotate Horizontal Algorithm
Rotate Vertical Algorithm
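The two formula images do not form part of this text; reconstructed to be consistent with the RotateMatrix method listed later in this article, the horizontal and vertical rotation expressions are (x0 and y0 indicating the kernel centre, θ the rotation angle in radians):

x' = (x − x0) · cos(θ) − (y − y0) · sin(θ) + x0

y' = (x − x0) · sin(θ) + (y − y0) · cos(θ) + y0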
I’ve published an in-depth article on matrix rotation available here: C# How to: Image Transform Rotate
Butterfly: Sobel 3 x 3 x 8
The sample source code defines the RotateMatrix method. This method accepts as a parameter a single kernel, defined as a two dimensional array of type double. In addition the method expects as a parameter the degrees by which the specified kernel should be rotated. The definition is as follows:
public static double[, ,] RotateMatrix(double[,] baseKernel, double degrees)
{
    double[, ,] kernel = new double[(int)(360 / degrees), baseKernel.GetLength(0), baseKernel.GetLength(1)];

    int xOffset = baseKernel.GetLength(1) / 2;
    int yOffset = baseKernel.GetLength(0) / 2;

    for (int y = 0; y < baseKernel.GetLength(0); y++)
    {
        for (int x = 0; x < baseKernel.GetLength(1); x++)
        {
            for (int compass = 0; compass < kernel.GetLength(0); compass++)
            {
                double radians = compass * degrees * Math.PI / 180.0;

                int resultX = (int)(Math.Round((x - xOffset) * Math.Cos(radians) - (y - yOffset) * Math.Sin(radians)) + xOffset);
                int resultY = (int)(Math.Round((x - xOffset) * Math.Sin(radians) + (y - yOffset) * Math.Cos(radians)) + yOffset);

                kernel[compass, resultY, resultX] = baseKernel[y, x];
            }
        }
    }

    return kernel;
}
Butterfly: Prewitt 3 x 3 x 8
The sample source code defines several kernels which are implemented in convolution. The following code snippet provides the definitions of all kernels:
public static double[,,] Prewitt3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1, 0, 1 },
          { -1, 0, 1 },
          { -1, 0, 1 } };

        double[,,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[,,] Prewitt3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1, 0, 1 },
          { -1, 0, 1 },
          { -1, 0, 1 } };

        double[,,] kernel = RotateMatrix(baseKernel, 45);

        return kernel;
    }
}

public static double[,,] Prewitt5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -2, -1, 0, 1, 2 },
          { -2, -1, 0, 1, 2 },
          { -2, -1, 0, 1, 2 },
          { -2, -1, 0, 1, 2 },
          { -2, -1, 0, 1, 2 } };

        double[,,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[,,] Kirsch3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -3, -3, 5 },
          { -3,  0, 5 },
          { -3, -3, 5 } };

        double[,,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[,,] Kirsch3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -3, -3, 5 },
          { -3,  0, 5 },
          { -3, -3, 5 } };

        double[,,] kernel = RotateMatrix(baseKernel, 45);

        return kernel;
    }
}

public static double[,,] Sobel3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1, 0, 1 },
          { -2, 0, 2 },
          { -1, 0, 1 } };

        double[,,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[,,] Sobel3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1, 0, 1 },
          { -2, 0, 2 },
          { -1, 0, 1 } };

        double[,,] kernel = RotateMatrix(baseKernel, 45);

        return kernel;
    }
}

public static double[,,] Sobel5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { {  -5,  -4, 0,  4,  5 },
          {  -8, -10, 0, 10,  8 },
          { -10, -20, 0, 20, 10 },
          {  -8, -10, 0, 10,  8 },
          {  -5,  -4, 0,  4,  5 } };

        double[,,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[,,] Scharr3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1, 0, 1 },
          { -3, 0, 3 },
          { -1, 0, 1 } };

        double[,,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[,,] Scharr3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1, 0, 1 },
          { -3, 0, 3 },
          { -1, 0, 1 } };

        double[,,] kernel = RotateMatrix(baseKernel, 45);

        return kernel;
    }
}

public static double[,,] Scharr5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1, -1, 0, 1, 1 },
          { -2, -2, 0, 2, 2 },
          { -3, -6, 0, 6, 3 },
          { -2, -2, 0, 2, 2 },
          { -1, -1, 0, 1, 1 } };

        double[,,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[,,] Isotropic3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1, 0, 1 },
          { -Math.Sqrt(2), 0, Math.Sqrt(2) },
          { -1, 0, 1 } };

        double[,,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[,,] Isotropic3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
        { { -1, 0, 1 },
          { -Math.Sqrt(2), 0, Math.Sqrt(2) },
          { -1, 0, 1 } };

        double[,,] kernel = RotateMatrix(baseKernel, 45);

        return kernel;
    }
}
Notice how each property invokes the RotateMatrix method discussed in the previous section.
Butterfly: Scharr 3 x 3 x 8
The CompassEdgeDetectionFilter method is defined as an extension method targeting the Bitmap class. The purpose of this method is to act as a wrapper method encapsulating the technical implementation. The definition as follows:
public static Bitmap CompassEdgeDetectionFilter(this Bitmap sourceBitmap,
                                                CompassEdgeDetectionType compassType)
{
    Bitmap resultBitmap = null;

    switch (compassType)
    {
        case CompassEdgeDetectionType.Sobel3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Sobel3x3x4, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Sobel3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Sobel3x3x8, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Sobel5x5x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Sobel5x5x4, 1.0 / 84.0);
            break;
        case CompassEdgeDetectionType.Prewitt3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Prewitt3x3x4, 1.0 / 3.0);
            break;
        case CompassEdgeDetectionType.Prewitt3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Prewitt3x3x8, 1.0 / 3.0);
            break;
        case CompassEdgeDetectionType.Prewitt5x5x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Prewitt5x5x4, 1.0 / 15.0);
            break;
        case CompassEdgeDetectionType.Scharr3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Scharr3x3x4, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Scharr3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Scharr3x3x8, 1.0 / 4.0);
            break;
        case CompassEdgeDetectionType.Scharr5x5x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Scharr5x5x4, 1.0 / 21.0);
            break;
        case CompassEdgeDetectionType.Kirsch3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Kirsch3x3x4, 1.0 / 15.0);
            break;
        case CompassEdgeDetectionType.Kirsch3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Kirsch3x3x8, 1.0 / 15.0);
            break;
        case CompassEdgeDetectionType.Isotropic3x3x4:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Isotropic3x3x4, 1.0 / 3.4);
            break;
        case CompassEdgeDetectionType.Isotropic3x3x8:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Isotropic3x3x8, 1.0 / 3.4);
            break;
    }

    return resultBitmap;
}
Rose: Scharr 3 x 3 x 8
Notice from the code snippet listed above that each case statement invokes the ConvolutionFilter method. This method has been defined as an extension method targeting the Bitmap class. The ConvolutionFilter extension method performs the actual task of image convolution, applying each compass kernel passed as a parameter and retaining the highest result value as the output value. The definition as follows:
private static Bitmap ConvolutionFilter(this Bitmap sourceBitmap,
                                        double[,,] filterMatrix,
                                        double factor = 1,
                                        int bias = 0)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                            ImageLockMode.ReadOnly,
                            PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double blue = 0.0;
    double green = 0.0;
    double red = 0.0;

    double blueCompass = 0.0;
    double greenCompass = 0.0;
    double redCompass = 0.0;

    // The kernel's first dimension indexes the compass directions; the
    // second and third dimensions represent the kernel rows and columns.
    int filterWidth = filterMatrix.GetLength(2);
    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0;
            green = 0;
            red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int compass = 0; compass < filterMatrix.GetLength(0); compass++)
            {
                blueCompass = 0.0;
                greenCompass = 0.0;
                redCompass = 0.0;

                for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
                {
                    for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                    {
                        calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                        blueCompass += (double)(pixelBuffer[calcOffset]) *
                                       filterMatrix[compass, filterY + filterOffset, filterX + filterOffset];

                        greenCompass += (double)(pixelBuffer[calcOffset + 1]) *
                                        filterMatrix[compass, filterY + filterOffset, filterX + filterOffset];

                        redCompass += (double)(pixelBuffer[calcOffset + 2]) *
                                      filterMatrix[compass, filterY + filterOffset, filterX + filterOffset];
                    }
                }

                // Only the strongest compass response is retained per channel.
                blue = (blueCompass > blue ? blueCompass : blue);
                green = (greenCompass > green ? greenCompass : green);
                red = (redCompass > red ? redCompass : red);
            }

            blue = factor * blue + bias;
            green = factor * green + bias;
            red = factor * red + bias;

            if (blue > 255) { blue = 255; }
            else if (blue < 0) { blue = 0; }

            if (green > 255) { green = 255; }
            else if (green < 0) { green = 0; }

            if (red > 255) { red = 255; }
            else if (red < 0) { red = 0; }

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
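The per-compass maximum selection performed inside the convolution loops can be illustrated in isolation. The sketch below, using an arbitrary grey-scale neighbourhood and the four hand-written Prewitt compass kernels, shows how only the strongest directional response survives:

```csharp
using System;

public static class CompassMaxDemo
{
    // Returns the strongest response produced by any compass kernel
    // for a single grey-scale neighbourhood window.
    public static double MaxCompassResponse(double[,,] kernels, double[,] window)
    {
        double max = 0;

        for (int compass = 0; compass < kernels.GetLength(0); compass++)
        {
            double response = 0;

            for (int y = 0; y < window.GetLength(0); y++)
                for (int x = 0; x < window.GetLength(1); x++)
                    response += window[y, x] * kernels[compass, y, x];

            if (response > max) { max = response; }
        }

        return max;
    }

    public static void Main()
    {
        // The four Prewitt compass kernels, written out by hand.
        double[,,] prewitt =
        {
            { { -1,  0,  1 }, { -1, 0,  1 }, { -1,  0,  1 } },   //   0 degrees
            { { -1, -1, -1 }, {  0, 0,  0 }, {  1,  1,  1 } },   //  90 degrees
            { {  1,  0, -1 }, {  1, 0, -1 }, {  1,  0, -1 } },   // 180 degrees
            { {  1,  1,  1 }, {  0, 0,  0 }, { -1, -1, -1 } }    // 270 degrees
        };

        // A strong vertical edge: dark on the left, bright on the right.
        double[,] window = { { 10, 10, 200 }, { 10, 10, 200 }, { 10, 10, 200 } };

        // The 0 degree kernel responds strongest; the others cancel out.
        Console.WriteLine(MaxCompassResponse(prewitt, window));   // 570
    }
}
```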
Rose: Isotropic 3 x 3 x 8
This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following images serve as the sample images featured throughout this article:
The Original Image
Butterfly: Isotropic 3 x 3 x 4
Butterfly: Isotropic 3 x 3 x 8
Butterfly: Kirsch 3 x 3 x 4
Butterfly: Kirsch 3 x 3 x 8
Butterfly: Prewitt 3 x 3 x 4
Butterfly: Prewitt 3 x 3 x 8
Butterfly: Prewitt 5 x 5 x 4
Butterfly: Scharr 3 x 3 x 4
Butterfly: Scharr 3 x 3 x 8
Butterfly: Scharr 5 x 5 x 4
Butterfly: Sobel 3 x 3 x 4
Butterfly: Sobel 3 x 3 x 8
Butterfly: Sobel 5 x 5 x 4
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images; you can find URL links to them here:
This article is focussed on illustrating the steps required in performing an image Shear Transformation. All of the concepts explored have been implemented by means of raw pixel data processing, no conventional drawing methods, such as GDI, are required.
Rabbit: Shear X 0.4, Y 0.4
This article is accompanied by a sample source code Visual Studio project which is available for download here.
This article features a Windows Forms based sample application which is included as part of the accompanying sample source code. The concepts explored in this article can be illustrated in a practical implementation using the sample application.
The sample application enables a user to load source/input images from the local file system when clicking the Load Image button. In addition users are also able to save output result images to the local file system by clicking the Save Image button.
Image Shear Transformations can be applied to either X or Y pixel coordinates, or to both X and Y simultaneously. When using the sample application the user has the option of adjusting shear factors, as indicated on the user interface by the numeric up/down controls labelled Shear X and Shear Y.
The following image is a screenshot of the Image Transform Shear Sample Application in action:
Rabbit: Shear X -0.5, Y -0.25
A good definition of the term Shear Transformation can be found on the Wikipedia topic related article:
In plane geometry, a shear mapping is a linear map that displaces each point in a fixed direction, by an amount proportional to its signed distance from a line that is parallel to that direction. This type of mapping is also called shear transformation, transvection, or just shearing.
A Shear Transformation can be applied as a horizontal shear, a vertical shear or as both. The algorithms implemented when performing a Shear Transformation can be expressed as follows:
Horizontal Shear Algorithm

x' = x + σx × y

Vertical Shear Algorithm

y' = y + σy × x

The algorithm description: x and y represent a source pixel's X and Y coordinates; x' and y' represent the resulting sheared pixel coordinates; σx and σy represent the horizontal and vertical shear factors respectively.
Note: When performing a Shear Transformation implementing both the horizontal and vertical planes, each coordinate plane can be calculated using a different shearing factor.
The algorithms have been adapted to implement a middle-pixel offset: the product of the related image plane boundary and the specified shearing factor, divided by two, is subtracted from each calculated coordinate.
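As an illustration of the shear mapping and the middle-pixel offset, the following sketch (coordinate and image dimension values chosen arbitrarily) applies the algorithms to a single point:

```csharp
using System;

public static class ShearDemo
{
    // Applies the horizontal and vertical shear algorithms to a single
    // coordinate, including the middle-pixel offsets described above.
    public static int[] Shear(int x, int y, double shearX, double shearY,
                              int width, int height)
    {
        // The offsets equal half the product of the image plane
        // boundary and the corresponding shear factor.
        int offsetX = (int)Math.Round(width * shearX / 2.0);
        int offsetY = (int)Math.Round(height * shearY / 2.0);

        int resultX = (int)Math.Round(x + shearX * y) - offsetX;
        int resultY = (int)Math.Round(y + shearY * x) - offsetY;

        return new int[] { resultX, resultY };
    }

    public static void Main()
    {
        // Shear the point (100, 50) in a 200x200 image by 0.5 horizontally:
        // x' = 100 + 0.5 * 50 - 50 = 75, while y is unaffected.
        int[] result = Shear(100, 50, 0.5, 0.0, 200, 200);

        Console.WriteLine("({0}, {1})", result[0], result[1]);   // (75, 50)
    }
}
```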
Rabbit: Shear X 1.0, Y 0.1
The sample source code performs Shear Transformations through the implementation of the extension methods ShearXY and ShearImage.
The ShearXY extension method targets the Point structure. The algorithms discussed in the previous sections have been implemented in this function from a C# perspective. The definition as illustrated by the following code snippet:
public static Point ShearXY(this Point source, double shearX, double shearY,
                                               int offsetX, int offsetY)
{
    Point result = new Point();

    result.X = (int)Math.Round(source.X + shearX * source.Y);
    result.X -= offsetX;

    result.Y = (int)Math.Round(source.Y + shearY * source.X);
    result.Y -= offsetY;

    return result;
}
Rabbit: Shear X 0.0, Y 0.5
The ShearImage extension method targets the Bitmap class. This method expects as parameter values a horizontal and a vertical shearing factor. Providing a shearing factor of zero results in no image shearing being implemented in the corresponding direction. The definition as follows:
public static Bitmap ShearImage(this Bitmap sourceBitmap, double shearX, double shearY)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                            ImageLockMode.ReadOnly,
                            PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int xOffset = (int)Math.Round(sourceBitmap.Width * shearX / 2.0);
    int yOffset = (int)Math.Round(sourceBitmap.Height * shearY / 2.0);

    int sourceXY = 0;
    int resultXY = 0;

    Point sourcePoint = new Point();
    Point resultPoint = new Point();

    Rectangle imageBounds = new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height);

    for (int row = 0; row < sourceBitmap.Height; row++)
    {
        for (int col = 0; col < sourceBitmap.Width; col++)
        {
            sourceXY = row * sourceData.Stride + col * 4;

            sourcePoint.X = col;
            sourcePoint.Y = row;

            if (sourceXY >= 0 && sourceXY + 3 < pixelBuffer.Length)
            {
                resultPoint = sourcePoint.ShearXY(shearX, shearY, xOffset, yOffset);

                resultXY = resultPoint.Y * sourceData.Stride + resultPoint.X * 4;

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    // Each source pixel is also written to the pixels on either
                    // side of the calculated result pixel, reducing the gaps
                    // left by the integer rounding of sheared coordinates.
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 4] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 5] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 6] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    if (resultXY - 4 >= 0)
                    {
                        resultBuffer[resultXY - 4] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY - 3] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY - 2] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY - 1] = 255;
                    }

                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 1] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 2] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }
            }
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
Rabbit: Shear X 0.5, Y 0.0
This article features a number of sample images. All featured images have been licensed allowing for reproduction.
The sample images featuring the image of a Desert Cottontail Rabbit are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia. The original author is attributed as Larry D. Moore.
The sample images featuring the image of a Rabbit in Snow are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia. The original author is attributed as George Tuli.
The sample images featuring the image of an Eastern Cottontail Rabbit have been released into the public domain by their author. The original image can be downloaded from Wikipedia.
The sample images featuring the image of a Mountain Cottontail Rabbit are in the public domain in the United States because the original is a work prepared by an officer or employee of the United States Government as part of that person’s official duties under the terms of Title 17, Chapter 1, Section 105 of the US Code. The original image can be downloaded from Wikipedia.
Rabbit: Shear X 1.0, Y 0.0
Rabbit: Shear X 0.5, Y 0.1
Rabbit: Shear X -0.5, Y -0.25
Rabbit: Shear X -0.5, Y 0.0
Rabbit: Shear X 0.25, Y 0.0
Rabbit: Shear X 0.50, Y 0.0
Rabbit: Shear X 0.0, Y 0.5
Rabbit: Shear X 0.0, Y 0.25
Rabbit: Shear X 0.0, Y 1.0
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images; you can find URL links to them here:
This article provides a discussion exploring the concept of image rotation as a geometric transformation. In addition to conventional image rotation this article illustrates the concept of individual colour channel rotation.
Daisy: Rotate Red 0°, Green 10°, Blue 20°
This article is accompanied by a sample source code Visual Studio project which is available for download here.
A Sample Application has been included in the sample source code that accompanies this article. The sample application serves as an implementation of the concepts discussed throughout this article. Concepts can be easily tested and replicated using the sample application.
Daisy: Rotate Red 15°, Green 5°, Blue 10°
When using the sample application users are able to load source/input images from the local file system by clicking the Load Image button. Required user input via the user interface can be found in the form of three numeric up/down controls labelled Blue, Green and Red respectively. Each control represents the degree to which the related colour component should be rotated. Possible input values range from –360 to 360. Positive values result in clockwise rotation, whereas negative values result in counter clockwise rotation. The sample application enables users to save result images to the local file system by clicking the Save Image button.
The following image is a screenshot of the Image Transform Rotate sample application in action:
A rotational transformation applied to an image from a theoretical point of view is based in Transformation Geometry. From Wikipedia we learn the following definition:
In mathematics, transformation geometry (or transformational geometry) is the name of a mathematical and pedagogic approach to the study of geometry by focusing on groups of geometric transformations, and the properties of figures that are invariant under them. It is opposed to the classical synthetic geometry approach of Euclidean geometry, that focus on geometric constructions.
Rose: Rotate Red –20°, Green 0°, Blue 20°
In this article image rotation is implemented by applying a rotation transform algorithm to the coordinates of each pixel forming part of a source/input image. In the corresponding result image, the pixel located at the calculated rotated coordinates is assigned the colour channel values of the original source pixel.
The algorithms implemented when calculating a pixel’s rotated coordinates can be expressed as follows:

x' = (x − x₀) × cos θ − (y − y₀) × sin θ + x₀

y' = (x − x₀) × sin θ + (y − y₀) × cos θ + y₀

Symbols/variables contained in the algorithms: x and y represent a source pixel's coordinates; x' and y' represent the resulting rotated coordinates; θ represents the angle of rotation, expressed in radians; x₀ and y₀ represent the coordinates of the image's middle pixel, implemented as offsets.
Butterfly: Rotate Red 10°, Green 0°, Blue 0°
In order to apply a rotation transformation each pixel forming part of the source/input image should be iterated. The algorithms expressed above should be applied to each pixel.
The pixel coordinates located at exactly the middle of an image can be calculated by dividing the image width by two for the X-coordinate and the image height by two for the Y-coordinate. The algorithms calculate the coordinates of the image's middle pixel and implement those coordinates as offsets. Implementing the pixel offsets results in images being rotated around the image’s middle, as opposed to the top left pixel (0, 0).
This article and the associated sample source code extends the concept of traditional rotation through implementing rotation on a per colour channel basis. Through user input the individual degree of rotation can be specified for each colour channel, namely Red, Green and Blue. Functionality has been implemented allowing each colour channel to be rotated to a different degree. In essence the algorithms described above have to be implemented three times per pixel iterated.
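The per-pixel coordinate calculation, including the middle-pixel offsets, can be sketched in isolation as follows (the point and image dimensions are arbitrary illustration values):

```csharp
using System;

public static class RotateDemo
{
    // Rotates a single coordinate around the image middle, mirroring the
    // algorithm described above. The angle is specified in degrees and
    // converted to radians internally.
    public static int[] Rotate(int x, int y, double degrees, int width, int height)
    {
        double radians = degrees * Math.PI / 180.0;

        // The image middle, implemented as offsets.
        int offsetX = (int)(width / 2.0);
        int offsetY = (int)(height / 2.0);

        int resultX = (int)Math.Round((x - offsetX) * Math.Cos(radians) -
                      (y - offsetY) * Math.Sin(radians)) + offsetX;
        int resultY = (int)Math.Round((x - offsetX) * Math.Sin(radians) +
                      (y - offsetY) * Math.Cos(radians)) + offsetY;

        return new int[] { resultX, resultY };
    }

    public static void Main()
    {
        // Rotate (2, 1) by 90 degrees around the middle (1, 1) of a
        // 3x3 image: the point moves one position around the centre.
        int[] result = Rotate(2, 1, 90, 3, 3);

        Console.WriteLine("({0}, {1})", result[0], result[1]);   // (1, 2)
    }
}
```

For a per-channel rotation, this same calculation simply runs three times per pixel, once for each of the Red, Green and Blue rotation angles.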
Daisy: Rotate Red 30°, Green 0°, Blue 180°
The sample source code implements a rotation transformation through the definition of two extension methods: RotateXY and RotateImage.
The RotateXY extension method targets the Point structure. This method serves as an encapsulation of the logic behind calculating a point's rotated coordinates at a specified angle, expressed in radians. The practical C# code implementation of the algorithms discussed in the previous section can be found within this method. The definition as follows:
public static Point RotateXY(this Point source, double degrees,
                                                int offsetX, int offsetY)
{
    // Note: the degrees parameter is expected to already have been
    // converted to radians; RotateImage performs the conversion.
    Point result = new Point();

    result.X = (int)(Math.Round((source.X - offsetX) * Math.Cos(degrees) -
                                (source.Y - offsetY) * Math.Sin(degrees))) + offsetX;

    result.Y = (int)(Math.Round((source.X - offsetX) * Math.Sin(degrees) +
                                (source.Y - offsetY) * Math.Cos(degrees))) + offsetY;

    return result;
}
Rose: Rotate Red –60°, Green 0°, Blue 60°
The RotateImage extension method targets the Bitmap class. This method expects three rotation degree/angle values, each corresponding to a colour channel. Positive degrees result in clockwise rotation and negative values result in counter clockwise rotation. The definition as follows:
public static Bitmap RotateImage(this Bitmap sourceBitmap, double degreesBlue,
                                 double degreesGreen, double degreesRed)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                            ImageLockMode.ReadOnly,
                            PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    // Convert to radians
    degreesBlue = degreesBlue * Math.PI / 180.0;
    degreesGreen = degreesGreen * Math.PI / 180.0;
    degreesRed = degreesRed * Math.PI / 180.0;

    // Calculate offsets in order to rotate on the image middle
    int xOffset = (int)(sourceBitmap.Width / 2.0);
    int yOffset = (int)(sourceBitmap.Height / 2.0);

    int sourceXY = 0;
    int resultXY = 0;

    Point sourcePoint = new Point();
    Point resultPoint = new Point();

    Rectangle imageBounds = new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height);

    for (int row = 0; row < sourceBitmap.Height; row++)
    {
        for (int col = 0; col < sourceBitmap.Width; col++)
        {
            sourceXY = row * sourceData.Stride + col * 4;

            sourcePoint.X = col;
            sourcePoint.Y = row;

            if (sourceXY >= 0 && sourceXY + 3 < pixelBuffer.Length)
            {
                // Calculate Blue Rotation
                resultPoint = sourcePoint.RotateXY(degreesBlue, xOffset, yOffset);

                resultXY = resultPoint.Y * sourceData.Stride + resultPoint.X * 4;

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 4] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }

                // Calculate Green Rotation
                resultPoint = sourcePoint.RotateXY(degreesGreen, xOffset, yOffset);

                resultXY = resultPoint.Y * sourceData.Stride + resultPoint.X * 4;

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 5] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 1] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }

                // Calculate Red Rotation
                resultPoint = sourcePoint.RotateXY(degreesRed, xOffset, yOffset);

                resultXY = resultPoint.Y * sourceData.Stride + resultPoint.X * 4;

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 6] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 2] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }
            }
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
Daisy: Rotate Red 15°, Green 5°, Blue 5°
This article features a number of sample images. All featured images have been licensed allowing for reproduction.
The sample images featuring an image of a yellow daisy are licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license and can be downloaded from Wikimedia.org.
The sample images featuring an image of a white daisy are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia.
The sample images featuring an image of a CPU are licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license. The original author is credited as Andrew Dunn. The original image can be downloaded from Wikipedia.
The sample images featuring an image of a rose are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic license. The original image can be downloaded from Wikipedia.
The sample images featuring an image of a butterfly are licensed under the Creative Commons Attribution 3.0 Unported license and can be downloaded from Wikimedia.org.
The Original Image
CPU: Rotate Red 90°, Green 0°, Blue –30°
CPU: Rotate Red 0°, Green 10°, Blue 0°
CPU: Rotate Red –4°, Green 4°, Blue 6°
CPU: Rotate Red 10°, Green 0°, Blue 0°
CPU: Rotate Red 10°, Green –5°, Blue 0°
CPU: Rotate Red 10°, Green 0°, Blue 10°
CPU: Rotate Red –10°, Green 10°, Blue 0°
CPU: Rotate Red 30°, Green –30°, Blue 0°
CPU: Rotate Red 40°, Green 20°, Blue 0°
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images; you can find URL links to them here:
This article serves to provide an introduction and discussion relating to Image Blurring methods and techniques. The Image Blur methods covered in this article include: Box Blur, Gaussian Blur, Mean Filter, Median Filter and Motion Blur.
Daisy: Mean 9×9
This article is accompanied by a sample source code Visual Studio project which is available for download here.
This article is accompanied by a sample application, intended to provide a means of testing and replicating topics discussed in this article. The sample application is a Windows Forms based application of which the user interface enables the user to select an Image Blur type to implement.
When clicking the Load Image button users are able to browse the local file system in order to select source/input images. In addition users are also able to save blurred result images when clicking the Save Image button and browsing the local file system.
Daisy: Mean 7×7
The sample application provides the user with the ability to select the method of image blurring to implement. The combobox dropdown located on the right-hand side of the user interface lists all of the supported methods of image blurring. When a user selects an item from the combobox, the associated blur method will be implemented on the preview image.
The image below is a screenshot of the Image Blur Filter sample application in action:
The process of image blurring can be regarded as reducing the sharpness or crispness defined by an image. Image blurring results in image detail/definition being perceived as less distinct. Images are often blurred as a method of smoothing an image.
Images perceived as too crisp/sharp can be softened by applying a variety of image blurring techniques and intensity levels. Often images are smoothed/blurred in order to remove/reduce image noise. In image edge detection implementations better results are often achieved when first implementing noise reduction through smoothing/blurring. Image blurring can even be implemented in a fashion where results reflect image edge detection, a method known as Difference of Gaussians.
In this article and the accompanying sample source code all supported methods of image blurring have been implemented through image convolution, with the exception of the Median filter. Each of the supported methods in essence only represents a different convolution matrix kernel. The image blurring technique capable of achieving optimal results will to varying degrees be dependent on the features present in the specified source/input image. Each method provides a different set of desired properties and compromises. In the following sections an overview of each method will be discussed.
Daisy: Mean 9×9
The Mean Filter, also sometimes referred to as a Box Blur, represents a fairly simplistic implementation and definition. A Mean Filter definition can be found on Wikipedia as follows:
A box blur is an image filter in which each pixel in the resulting image has a value equal to the average value of its neighbouring pixels in the input image. It is a form of low-pass ("blurring") filter and is a convolution.
Due to its property of using equal weights it can be implemented using a much simpler accumulation algorithm which is significantly faster than using a sliding window algorithm.
Mean Filter as a title relates to all weight values in a convolution kernel being equal, hence the alternate title of Box Blur. In most cases a Mean Filter matrix kernel will only contain the value one. When performing image convolution implementing a Mean Filter kernel, the factor value equates to 1 divided by the sum of all kernel values.
The following is an example of a 5×5 Mean Filter convolution kernel:
The kernel consists of 25 elements, therefore the factor value equates to one divided by twenty five.
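The factor calculation described above can be sketched as follows; the kernel is built programmatically rather than written out, since every element equals one:

```csharp
using System;

public static class MeanFilterDemo
{
    // Builds a size x size Mean Filter kernel and returns the factor,
    // calculated as one divided by the sum of all kernel elements.
    public static double MeanFilterFactor(int size)
    {
        double[,] kernel = new double[size, size];
        double kernelSum = 0;

        for (int y = 0; y < size; y++)
        {
            for (int x = 0; x < size; x++)
            {
                // Every Mean Filter weight equals one.
                kernel[y, x] = 1;
                kernelSum += kernel[y, x];
            }
        }

        return 1.0 / kernelSum;
    }

    public static void Main()
    {
        // A 5x5 kernel consists of 25 elements: the factor is 1 / 25.
        Console.WriteLine(MeanFilterFactor(5));
    }
}
```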
The Mean Filter Blur does not result in the same level of smoothing achieved by other image blur methods. The Mean Filter method can also be susceptible to directional artefacts.
Daisy Mean 5×5
The Gaussian method of image blurring is a popular and often implemented filter. In contrast to the Box Blur method, Gaussian Blurring produces resulting images that appear to contain a more uniform level of smoothing. When implementing image edge detection a Gaussian Blur is often applied to source/input images, resulting in noise reduction. The Gaussian Blur has a good level of image edge preservation, hence being used in edge detection operations.
From Wikipedia we gain the following description:
A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function. It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales
A potential drawback to implementing a Gaussian blur results from the filter being computationally intensive. The following matrix kernel represents a 5×5 Gaussian Blur. The sum total of all elements in the kernel equate to 159, therefore a factor value of 1.0 / 159.0 will be implemented.
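The kernel figure itself is not reproduced in this extract; the matrix below is a commonly published 5×5 Gaussian approximation whose elements do sum to 159, matching the 1.0 / 159.0 factor stated above:

```csharp
public static class GaussianKernelDemo
{
    // A commonly published 5x5 Gaussian Blur kernel; its elements sum
    // to 159, which matches the stated convolution factor of 1 / 159.
    public static readonly double[,] Kernel =
    {
        { 2,  4,  5,  4, 2 },
        { 4,  9, 12,  9, 4 },
        { 5, 12, 15, 12, 5 },
        { 4,  9, 12,  9, 4 },
        { 2,  4,  5,  4, 2 }
    };

    public static double KernelSum()
    {
        double sum = 0;

        foreach (double weight in Kernel) { sum += weight; }

        return sum;
    }

    public static void Main()
    {
        System.Console.WriteLine(GaussianKernelDemo.KernelSum());   // 159
        System.Console.WriteLine(1.0 / GaussianKernelDemo.KernelSum());
    }
}
```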
Daisy: Gaussian 5×5
The Median Filter is classified as a non-linear filter. In contrast to the other methods of image blurring discussed in this article the Median Filter implementation does not involve convolution or a predefined matrix kernel. The following description can be found on Wikipedia:
In signal processing, it is often desirable to be able to perform some kind of noise reduction on an image or signal. The median filter is a nonlinear digital filtering technique, often used to remove noise. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise.
Daisy: Median 7×7
As the name implies, the Median Filter operates by calculating the median value of a pixel group, also referred to as a window. Calculating a median value involves a number of steps, listed as follows:
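These steps (collect the values covered by the window, sort them, then select the middle element of the sorted list) can be sketched as follows. The sketch is in Python for illustration only; the article's implementation is the C# MedianFilter method shown later.

```python
def window_median(values):
    # 1. Collect the pixel values within the window (passed in as a list).
    # 2. Sort the collected values.
    ordered = sorted(values)
    # 3. Select the middle element of the sorted list as the median.
    return ordered[len(ordered) // 2]

# A 3x3 window containing one noisy outlier (255); the median
# discards the outlier entirely, which is why the filter removes
# salt-and-pepper noise so effectively.
window = [12, 14, 13, 255, 15, 12, 13, 14, 13]
print(window_median(window))  # 13
```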
Similar to the Gaussian Blur filter, the Median Filter has the ability to smooth image noise whilst providing edge preservation. Depending on the window size implemented and the physical dimensions of input/source images, the Median Filter can be computationally expensive.
Daisy: Median 9×9
The sample source code implements Motion Blur filters. Motion blurring in the traditional sense has been associated with photography and video capture. Motion blur can often be observed in scenarios where rapid movements are captured in photographs or video recordings. When recording a single frame, rapid movement can result in the scene changing before the frame capture has completed.
Motion blurring can be synthetically imitated through the implementation of digital Motion Blur filters. The size of the matrix kernel provided when implementing image convolution affects the filter intensity perceived in result images. In the case of Motion Blur filters, the kernel size specified in convolution influences how rapid the movement that blurred the image is perceived to have been: larger kernels produce the appearance of more rapid motion, whereas smaller kernels suggest slower motion.
Daisy: Motion Blur 7×7 135 Degrees
Depending on the kernel specified, it is possible to create the appearance of movement having occurred in a certain direction. The sample source code implements Motion Blur filters at 45 degrees, at 135 degrees and in both directions simultaneously.
The kernel listed below represents a 5×5 Motion Blur filter occurring at 45 degrees and 135 degrees:
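For reference, the 135-degree variant is a main-diagonal kernel, and its normalization factor is the reciprocal of the count of non-zero entries, matching the 1.0 / 5.0 used by the ImageBlurFilter method below. A small Python sketch (for illustration only; the article's kernels are defined in C#):

```python
# 5x5 Motion Blur kernel at 135 degrees: a main-diagonal line of ones.
motion_blur_5x5_135 = [
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
]

# The factor is one over the number of contributing (non-zero) entries,
# so the averaged result stays within the 0..255 range.
nonzero = sum(v != 0 for row in motion_blur_5x5_135 for v in row)
factor = 1.0 / nonzero
print(nonzero)  # 5
```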
The sample source code implements all of the concepts explored throughout this article. The source code definition can be grouped into four sections: the ImageBlurFilter method, the ConvolutionFilter method, the MedianFilter method and the Matrix class. The following article sections relate to these four main source code sections.
The ImageBlurFilter extension method has the purpose of invoking the correct blur filter method with the relevant method parameters. This method acts as a wrapper providing the technical implementation details required when performing a specified blur filter.
The definition of the ImageBlurFilter extension method is as follows:
public static Bitmap ImageBlurFilter(this Bitmap sourceBitmap, BlurType blurType)
{
    Bitmap resultBitmap = null;

    switch (blurType)
    {
        case BlurType.Mean3x3:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean3x3, 1.0 / 9.0, 0);
            break;
        case BlurType.Mean5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean5x5, 1.0 / 25.0, 0);
            break;
        case BlurType.Mean7x7:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean7x7, 1.0 / 49.0, 0);
            break;
        case BlurType.Mean9x9:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean9x9, 1.0 / 81.0, 0);
            break;
        case BlurType.GaussianBlur3x3:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.GaussianBlur3x3, 1.0 / 16.0, 0);
            break;
        case BlurType.GaussianBlur5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.GaussianBlur5x5, 1.0 / 159.0, 0);
            break;
        case BlurType.MotionBlur5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur5x5, 1.0 / 10.0, 0);
            break;
        case BlurType.MotionBlur5x5At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur5x5At45Degrees, 1.0 / 5.0, 0);
            break;
        case BlurType.MotionBlur5x5At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur5x5At135Degrees, 1.0 / 5.0, 0);
            break;
        case BlurType.MotionBlur7x7:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur7x7, 1.0 / 14.0, 0);
            break;
        case BlurType.MotionBlur7x7At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur7x7At45Degrees, 1.0 / 7.0, 0);
            break;
        case BlurType.MotionBlur7x7At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur7x7At135Degrees, 1.0 / 7.0, 0);
            break;
        case BlurType.MotionBlur9x9:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur9x9, 1.0 / 18.0, 0);
            break;
        case BlurType.MotionBlur9x9At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur9x9At45Degrees, 1.0 / 9.0, 0);
            break;
        case BlurType.MotionBlur9x9At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(Matrix.MotionBlur9x9At135Degrees, 1.0 / 9.0, 0);
            break;
        case BlurType.Median3x3:
            resultBitmap = sourceBitmap.MedianFilter(3);
            break;
        case BlurType.Median5x5:
            resultBitmap = sourceBitmap.MedianFilter(5);
            break;
        case BlurType.Median7x7:
            resultBitmap = sourceBitmap.MedianFilter(7);
            break;
        case BlurType.Median9x9:
            resultBitmap = sourceBitmap.MedianFilter(9);
            break;
        case BlurType.Median11x11:
            resultBitmap = sourceBitmap.MedianFilter(11);
            break;
    }

    return resultBitmap;
}
Daisy: Motion Blur 9×9
The Matrix class serves as a collection of various kernel definitions. The Matrix class and all of its public properties are defined as static. The definition of the Matrix class is as follows:
public static class Matrix
{
    public static double[,] Mean3x3
    {
        get { return new double[,]
            { { 1, 1, 1 },
              { 1, 1, 1 },
              { 1, 1, 1 } }; }
    }

    public static double[,] Mean5x5
    {
        get { return new double[,]
            { { 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1 } }; }
    }

    public static double[,] Mean7x7
    {
        get { return new double[,]
            { { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1 } }; }
    }

    public static double[,] Mean9x9
    {
        get { return new double[,]
            { { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
              { 1, 1, 1, 1, 1, 1, 1, 1, 1 } }; }
    }

    public static double[,] GaussianBlur3x3
    {
        get { return new double[,]
            { { 1, 2, 1 },
              { 2, 4, 2 },
              { 1, 2, 1 } }; }
    }

    public static double[,] GaussianBlur5x5
    {
        get { return new double[,]
            { { 2,  4,  5,  4, 2 },
              { 4,  9, 12,  9, 4 },
              { 5, 12, 15, 12, 5 },
              { 4,  9, 12,  9, 4 },
              { 2,  4,  5,  4, 2 } }; }
    }

    public static double[,] MotionBlur5x5
    {
        get { return new double[,]
            { { 1, 0, 0, 0, 1 },
              { 0, 1, 0, 1, 0 },
              { 0, 0, 1, 0, 0 },
              { 0, 1, 0, 1, 0 },
              { 1, 0, 0, 0, 1 } }; }
    }

    public static double[,] MotionBlur5x5At45Degrees
    {
        get { return new double[,]
            { { 0, 0, 0, 0, 1 },
              { 0, 0, 0, 1, 0 },
              { 0, 0, 1, 0, 0 },
              { 0, 1, 0, 0, 0 },
              { 1, 0, 0, 0, 0 } }; }
    }

    public static double[,] MotionBlur5x5At135Degrees
    {
        get { return new double[,]
            { { 1, 0, 0, 0, 0 },
              { 0, 1, 0, 0, 0 },
              { 0, 0, 1, 0, 0 },
              { 0, 0, 0, 1, 0 },
              { 0, 0, 0, 0, 1 } }; }
    }

    public static double[,] MotionBlur7x7
    {
        get { return new double[,]
            { { 1, 0, 0, 0, 0, 0, 1 },
              { 0, 1, 0, 0, 0, 1, 0 },
              { 0, 0, 1, 0, 1, 0, 0 },
              { 0, 0, 0, 1, 0, 0, 0 },
              { 0, 0, 1, 0, 1, 0, 0 },
              { 0, 1, 0, 0, 0, 1, 0 },
              { 1, 0, 0, 0, 0, 0, 1 } }; }
    }

    public static double[,] MotionBlur7x7At45Degrees
    {
        get { return new double[,]
            { { 0, 0, 0, 0, 0, 0, 1 },
              { 0, 0, 0, 0, 0, 1, 0 },
              { 0, 0, 0, 0, 1, 0, 0 },
              { 0, 0, 0, 1, 0, 0, 0 },
              { 0, 0, 1, 0, 0, 0, 0 },
              { 0, 1, 0, 0, 0, 0, 0 },
              { 1, 0, 0, 0, 0, 0, 0 } }; }
    }

    public static double[,] MotionBlur7x7At135Degrees
    {
        get { return new double[,]
            { { 1, 0, 0, 0, 0, 0, 0 },
              { 0, 1, 0, 0, 0, 0, 0 },
              { 0, 0, 1, 0, 0, 0, 0 },
              { 0, 0, 0, 1, 0, 0, 0 },
              { 0, 0, 0, 0, 1, 0, 0 },
              { 0, 0, 0, 0, 0, 1, 0 },
              { 0, 0, 0, 0, 0, 0, 1 } }; }
    }

    public static double[,] MotionBlur9x9
    {
        get { return new double[,]
            { { 1, 0, 0, 0, 0, 0, 0, 0, 1 },
              { 0, 1, 0, 0, 0, 0, 0, 1, 0 },
              { 0, 0, 1, 0, 0, 0, 1, 0, 0 },
              { 0, 0, 0, 1, 0, 1, 0, 0, 0 },
              { 0, 0, 0, 0, 1, 0, 0, 0, 0 },
              { 0, 0, 0, 1, 0, 1, 0, 0, 0 },
              { 0, 0, 1, 0, 0, 0, 1, 0, 0 },
              { 0, 1, 0, 0, 0, 0, 0, 1, 0 },
              { 1, 0, 0, 0, 0, 0, 0, 0, 1 } }; }
    }

    public static double[,] MotionBlur9x9At45Degrees
    {
        get { return new double[,]
            { { 0, 0, 0, 0, 0, 0, 0, 0, 1 },
              { 0, 0, 0, 0, 0, 0, 0, 1, 0 },
              { 0, 0, 0, 0, 0, 0, 1, 0, 0 },
              { 0, 0, 0, 0, 0, 1, 0, 0, 0 },
              { 0, 0, 0, 0, 1, 0, 0, 0, 0 },
              { 0, 0, 0, 1, 0, 0, 0, 0, 0 },
              { 0, 0, 1, 0, 0, 0, 0, 0, 0 },
              { 0, 1, 0, 0, 0, 0, 0, 0, 0 },
              { 1, 0, 0, 0, 0, 0, 0, 0, 0 } }; }
    }

    public static double[,] MotionBlur9x9At135Degrees
    {
        get { return new double[,]
            { { 1, 0, 0, 0, 0, 0, 0, 0, 0 },
              { 0, 1, 0, 0, 0, 0, 0, 0, 0 },
              { 0, 0, 1, 0, 0, 0, 0, 0, 0 },
              { 0, 0, 0, 1, 0, 0, 0, 0, 0 },
              { 0, 0, 0, 0, 1, 0, 0, 0, 0 },
              { 0, 0, 0, 0, 0, 1, 0, 0, 0 },
              { 0, 0, 0, 0, 0, 0, 1, 0, 0 },
              { 0, 0, 0, 0, 0, 0, 0, 1, 0 },
              { 0, 0, 0, 0, 0, 0, 0, 0, 1 } }; }
    }
}
Daisy: Median 7×7
The MedianFilter extension method targets the Bitmap class. The MedianFilter method applies a Median Filter using the specified Bitmap and matrix size (window size), returning a new Bitmap representing the filtered image.
The definition of the MedianFilter extension method is as follows:
public static Bitmap MedianFilter(this Bitmap sourceBitmap, int matrixSize)
{
    BitmapData sourceData = sourceBitmap.LockBits(
        new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height),
        ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    List<int> neighbourPixels = new List<int>();
    byte[] middlePixel;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            byteOffset = offsetY * sourceData.Stride + offsetX * 4;
            neighbourPixels.Clear();

            // Collect the packed 32-bit ARGB value of every pixel in the window.
            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);
                    neighbourPixels.Add(BitConverter.ToInt32(pixelBuffer, calcOffset));
                }
            }

            // Sort the collected values and select the middle element as the
            // median. Note: the median index is half the window's element count
            // (matrixSize * matrixSize / 2), not filterOffset.
            neighbourPixels.Sort();
            middlePixel = BitConverter.GetBytes(neighbourPixels[neighbourPixels.Count / 2]);

            resultBuffer[byteOffset] = middlePixel[0];
            resultBuffer[byteOffset + 1] = middlePixel[1];
            resultBuffer[byteOffset + 2] = middlePixel[2];
            resultBuffer[byteOffset + 3] = middlePixel[3];
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(
        new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height),
        ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
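Note that the method sorts packed 32-bit ARGB integers rather than sorting each colour channel independently, so the selected pixel is the median of the packed values. The same selection strategy can be sketched in Python (an illustration only, not the article's code):

```python
import struct

def packed_median(bgra_pixels):
    """Median of a window of pixels, each given as a (b, g, r, a) byte
    tuple, selected by sorting the packed little-endian 32-bit values,
    mirroring the BitConverter.ToInt32 strategy of the C# MedianFilter."""
    packed = sorted(struct.unpack("<i", bytes(p))[0] for p in bgra_pixels)
    middle = packed[len(packed) // 2]
    # Unpack the winning value back into its four channel bytes.
    return tuple(struct.pack("<i", middle))

# For grayscale pixels the packed ordering matches intensity ordering:
print(packed_median([(10, 10, 10, 255),
                     (30, 30, 30, 255),
                     (20, 20, 20, 255)]))
```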
Daisy: Motion Blur 9×9
The sample source code performs image convolution by invoking the ConvolutionFilter extension method.
The definition of the ConvolutionFilter extension method is as follows:
private static Bitmap ConvolutionFilter(this Bitmap sourceBitmap,
                                        double[,] filterMatrix,
                                        double factor = 1,
                                        int bias = 0)
{
    BitmapData sourceData = sourceBitmap.LockBits(
        new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height),
        ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double blue = 0.0;
    double green = 0.0;
    double red = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0;
            green = 0;
            red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            // Accumulate the weighted sum of each colour channel
            // across the pixel neighbourhood.
            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                    blue += (double)(pixelBuffer[calcOffset]) *
                            filterMatrix[filterY + filterOffset, filterX + filterOffset];
                    green += (double)(pixelBuffer[calcOffset + 1]) *
                             filterMatrix[filterY + filterOffset, filterX + filterOffset];
                    red += (double)(pixelBuffer[calcOffset + 2]) *
                           filterMatrix[filterY + filterOffset, filterX + filterOffset];
                }
            }

            blue = factor * blue + bias;
            green = factor * green + bias;
            red = factor * red + bias;

            // Clamp each channel to the valid byte range 0..255.
            blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
            green = (green > 255 ? 255 : (green < 0 ? 0 : green));
            red = (red > 255 ? 255 : (red < 0 ? 0 : red));

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(
        new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height),
        ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
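The per-pixel arithmetic of the method above (a weighted neighbourhood sum, followed by the factor, bias and clamping steps) can be reduced to a few lines. The following Python sketch mirrors the inner loops for a single grayscale pixel; it is an illustration only, not the article's C# implementation:

```python
def convolve_pixel(image, x, y, kernel, factor=1.0, bias=0):
    """Apply the kernel centred on (x, y) of a 2-D grayscale image,
    mirroring the inner loops of the ConvolutionFilter method."""
    offset = (len(kernel) - 1) // 2
    total = 0.0
    for ky in range(-offset, offset + 1):
        for kx in range(-offset, offset + 1):
            total += image[y + ky][x + kx] * kernel[ky + offset][kx + offset]
    value = factor * total + bias
    # Clamp to the valid byte range, as the C# code does.
    return max(0, min(255, value))

mean_3x3 = [[1, 1, 1]] * 3
image = [[90, 90, 90],
         [90, 180, 90],
         [90, 90, 90]]
# Mean blur pulls the bright centre pixel towards its neighbours:
print(convolve_pixel(image, 1, 1, mean_3x3, factor=1.0 / 9.0))  # 100.0
```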
This article features a number of sample images. All featured images have been licensed allowing for reproduction.
The sample images featuring a yellow daisy are licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license and can be downloaded from Wikimedia.org.
The sample images featuring a white daisy are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia.
The sample images featuring a pink daisy are licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license and can be downloaded from Wikipedia.
The sample images featuring a purple daisy are licensed under the Creative Commons Attribution-ShareAlike 3.0 License and can be downloaded from Wikipedia.
The Original Image
Daisy: Gaussian 3×3
Daisy: Gaussian 5×5
Daisy: Mean 3×3
Daisy: Mean 5×5
Daisy: Mean 7×7
Daisy: Mean 9×9
Daisy: Median 3×3
Daisy: Median 5×5
Daisy: Median 7×7
Daisy: Median 9×9
Daisy: Median 11×11
Daisy: Motion Blur 5×5
Daisy: Motion Blur 5×5 45 Degrees
Daisy: Motion Blur 5×5 135 Degrees
Daisy: Motion Blur 7×7
Daisy: Motion Blur 7×7 45 Degrees
Daisy: Motion Blur 7×7 135 Degrees
Daisy: Motion Blur 9×9
Daisy: Motion Blur 9×9 45 Degrees
Daisy: Motion Blur 9×9 135 Degrees
Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.
I’ve published a number of articles related to imaging and images of which you can find URL links here:
The purpose of this article is to explain and illustrate in detail the requirements involved in calculating Gaussian kernels intended for use in image convolution when implementing Gaussian Blur filters. The discussion spans from exploring concepts in theory to implementing those concepts through C# sample source code.
Ant: Gaussian Kernel 5×5 Weight 19
This article is accompanied by a sample source code Visual Studio project which is available for download here.
A sample application forms part of the accompanying sample source code, intended to implement the topics discussed and to provide the means to replicate and test the concepts being illustrated.
The sample application is a Windows Forms based application which enables users to generate/calculate Gaussian kernels. Calculation results are influenced through two user specified options: Kernel Size and Weight.
Ladybird: Gaussian Kernel 5×5 Weight 5.5
In the sample application and related sample source code, Kernel Size refers to the physical dimensions of the kernel/matrix used in convolution. Higher Kernel Size values result in output images reflecting a greater degree of blurring, whereas lower Kernel Size values result in less blurring.
In a similar fashion to the Kernel Size value, higher Weight values provided when generating a kernel result in smoother, more blurred images. Lower Weight values have the expected result of less blurring being evident in output images.
Praying Mantis: Gaussian Kernel 13×13 Weight 13
The sample application provides the user with a visual representation of the blurring implemented by the calculated kernel values. Users are able to select a source/input image from the local file system by clicking the Load Image button. When desired, users are able to save blurred/filtered images to the local file system by clicking the Save Image button.
The image below is a screenshot of the Gaussian Kernel Calculator sample application in action:
The formula implemented in calculating Gaussian kernels can be expressed in C# source code fairly easily. Once the manner in which the formula operates has been grasped, the actual code implementation becomes straightforward.
The Gaussian Kernel formula can be expressed as follows:
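The formula referenced above is the standard two-dimensional Gaussian function. Expressed in LaTeX (with the assumption, based on the surrounding text, that the article's Weight value plays the role of the standard deviation σ):

```latex
G(x, y) = \frac{1}{2\pi\sigma^{2}} \, e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}}
```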
The formula contains a number of symbols, which define how the filter will be implemented. The symbols forming part of the Gaussian Kernel formula are described in the following list:
Note: The formula’s implementation expects x and y to equal zero values when representing the coordinates of the pixel located in the middle of the