Posts Tagged 'Image Edge Detection'

C# How to: Fuzzy Blur Filter

Article Purpose

This article serves to illustrate the concepts involved in implementing a Fuzzy Blur Filter. This filter renders non-photorealistic images expressing a certain artistic effect.

Frog: Filter Size 19×19


Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

The sample source code accompanying this article includes a Windows Forms based test application. The concepts explored throughout this article can be replicated/tested using the sample application.

When executing the sample application the user interface exposes a number of configurable options:

  • Loading and Saving Images – Users are able to load source/input images from the local system by clicking the Load Image button. Clicking the Save Image button allows users to save filter result images.
  • Filter Size – The specified filter size affects the filter intensity. Smaller filter sizes result in less blurry images being rendered, whereas larger filter sizes result in more blurry images being rendered.
  • Edge Factors – The contrast of fuzzy edges expressed in resulting images depends on the specified edge factor values. Values less than one result in detected image edges being darkened and values greater than one result in detected image edges being lightened.

The following image is a screenshot of the Fuzzy Blur Filter sample application in action:

Fuzzy Blur Filter Sample Application

Frog: Filter Size 9×9


Fuzzy Blur Overview

The Fuzzy Blur Filter relies on the interference of image noise when performing edge detection in order to create a fuzzy effect. In addition, a blurring effect results from performing a Mean Filter blur.

The steps involved in performing a Fuzzy Blur Filter can be described as follows:

  1. Edge Detection and Enhancement – Using the first edge factor specified, enhance image edges by performing Boolean Edge detection. Being sensitive to image noise, a fair amount of detected edges will actually be image noise in addition to actual image edges.
  2. Mean Filter Blur – Using the edge enhanced image created in the previous step, perform a Mean Filter blur. The enhanced edges will be blurred since a Mean Filter does not have edge preservation properties. The size of the Mean Filter implemented depends on a user specified value.
  3. Edge Detection and Enhancement – Using the blurred image created in the previous step, once again perform Boolean Edge detection, enhancing detected edges according to the second edge factor specified.

Frog: Filter Size 9×9


Mean Filter

A Mean Filter blur, also known as a Box blur, can be performed through image convolution. The size of the kernel/matrix implemented when performing convolution will be determined through user input.

Every kernel/matrix element should be set to one. The resulting value should be multiplied by a factor value equating to one divided by the kernel/matrix size. As an example, a kernel/matrix size of 3×3 can be expressed as follows:

Mean Kernel

An alternative expression can also be:

Mean Kernel
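
The two expressions above are equivalent; written out, the 3×3 mean kernel equates to:

            | 1  1  1 |       | 1/9  1/9  1/9 |
    1/9  ×  | 1  1  1 |   =   | 1/9  1/9  1/9 |
            | 1  1  1 |       | 1/9  1/9  1/9 |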

Frog: Filter Size 9×9


Boolean Edge Detection without a local threshold

When performing Boolean Edge Detection a local threshold should be implemented in order to exclude image noise. In this article we rely on the interference of image noise in order to render a fuzzy effect. By not implementing a local threshold when performing Boolean Edge detection the sample source code ensures sufficient interference from image noise.

The steps involved in performing Boolean Edge Detection without a local threshold can be described as follows:

  1. Calculate Neighbourhood Mean – Iterate each pixel forming part of the source/input image. Using a 3×3 matrix size, calculate the mean value of the neighbourhood surrounding the pixel currently being iterated.
  2. Create Mean comparison Matrix – Once again using a 3×3 matrix size, compare each neighbourhood pixel to the newly calculated mean value. Create a temporary 3×3 size matrix, each element’s value being the result of mean comparison. Should the value expressed by a neighbourhood pixel exceed the mean value the corresponding temporary matrix element should be set to one. When the calculated mean value exceeds the value of a neighbourhood pixel the corresponding temporary matrix element should be set to zero.
  3. Compare Edge Masks – Using sixteen predefined edge masks, compare the temporary matrix created in the previous step to each edge mask. If the temporary matrix matches one of the predefined edge masks, multiply the specified edge factor with the pixel currently being iterated. A short worked example follows this list.
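
As a hypothetical worked example of steps 1 and 2, consider a 3×3 neighbourhood whose per-pixel colour component totals (Blue + Green + Red, as implemented in the sample code) are:

     60  150  160
     55  155  165
     50  145  170

The neighbourhood mean equates to 1110 / 9 = 123 (using integer division). Comparing each pixel total to the mean, reading left to right and top to bottom, produces the binary pattern "011011011", which matches one of the predefined edge masks listed later in this article; the pixel would therefore be multiplied by the specified edge factor.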

Note: A detailed article on Boolean Edge detection implementing a local threshold can be found here:

Frog: Filter Size 9×9


The sixteen predefined edge masks each represent an edge in a different direction. The predefined edge masks can be expressed as:

Boolean Edge Masks

Frog: Filter Size 13×13


Implementing a Mean Filter

The sample source code defines the MeanFilter method, an extension method targeting the Bitmap class. Note that GetByteArray, GetImage and ClipByte are helper methods forming part of the sample source code project. The definition is listed as follows:

private static Bitmap MeanFilter(this Bitmap sourceBitmap, 
                                 int meanSize)
{
    byte[] pixelBuffer = sourceBitmap.GetByteArray(); 
    byte[] resultBuffer = new byte[pixelBuffer.Length];

    double blue = 0.0, green = 0.0, red = 0.0; 
    double factor = 1.0 / (meanSize * meanSize); 

    int imageStride = sourceBitmap.Width * 4; 
    int filterOffset = meanSize / 2; 
    int calcOffset = 0, filterY = 0, filterX = 0; 

    for (int k = 0; k + 4 < pixelBuffer.Length; k += 4) 
    { 
        blue = 0; green = 0; red = 0; 
        filterY = -filterOffset; 
        filterX = -filterOffset; 

        while (filterY <= filterOffset) 
        { 
            calcOffset = k + (filterX * 4) + 
                         (filterY * imageStride); 

            calcOffset = (calcOffset < 0 ? 0 : 
                         (calcOffset >= pixelBuffer.Length - 2 ? 
                          pixelBuffer.Length - 3 : calcOffset)); 

            blue += pixelBuffer[calcOffset]; 
            green += pixelBuffer[calcOffset + 1]; 
            red += pixelBuffer[calcOffset + 2]; 

            filterX++; 

            if (filterX > filterOffset) 
            { 
                filterX = -filterOffset; 
                filterY++; 
            } 
        } 

        resultBuffer[k] = ClipByte(factor * blue); 
        resultBuffer[k + 1] = ClipByte(factor * green); 
        resultBuffer[k + 2] = ClipByte(factor * red); 
        resultBuffer[k + 3] = 255; 
    } 

    return resultBuffer.GetImage(sourceBitmap.Width, 
                                 sourceBitmap.Height); 
}

Frog: Filter Size 19×19


Implementing Boolean Edge Detection

Boolean Edge detection is performed in the sample source code through the implementation of the BooleanEdgeDetectionFilter method. This method has been defined as an extension method targeting the Bitmap class.

The following code snippet provides the definition of the BooleanEdgeDetectionFilter method:

public static Bitmap BooleanEdgeDetectionFilter( 
       this Bitmap sourceBitmap, float edgeFactor) 
{
    byte[] pixelBuffer = sourceBitmap.GetByteArray(); 
    byte[] resultBuffer = new byte[pixelBuffer.Length]; 
    Buffer.BlockCopy(pixelBuffer, 0, resultBuffer, 
                     0, pixelBuffer.Length); 

    List<string> edgeMasks = GetBooleanEdgeMasks(); 
    int imageStride = sourceBitmap.Width * 4; 
    int matrixMean = 0, pixelTotal = 0; 
    int filterY = 0, filterX = 0, calcOffset = 0; 
    string matrixPatern = String.Empty; 

    for (int k = 0; k + 4 < pixelBuffer.Length; k += 4) 
    { 
        matrixPatern = String.Empty; 
        matrixMean = 0; pixelTotal = 0; 
        filterY = -1; filterX = -1; 

        while (filterY < 2) 
        { 
            calcOffset = k + (filterX * 4) + 
                         (filterY * imageStride); 

            calcOffset = (calcOffset < 0 ? 0 : 
                         (calcOffset >= pixelBuffer.Length - 2 ? 
                          pixelBuffer.Length - 3 : calcOffset)); 

            matrixMean += pixelBuffer[calcOffset]; 
            matrixMean += pixelBuffer[calcOffset + 1]; 
            matrixMean += pixelBuffer[calcOffset + 2]; 

            filterX += 1; 

            if (filterX > 1) 
            { 
                filterX = -1; 
                filterY += 1; 
            } 
        } 

        matrixMean = matrixMean / 9; 
        filterY = -1; filterX = -1; 

        while (filterY < 2) 
        { 
            calcOffset = k + (filterX * 4) + 
                         (filterY * imageStride); 

            calcOffset = (calcOffset < 0 ? 0 : 
                         (calcOffset >= pixelBuffer.Length - 2 ? 
                          pixelBuffer.Length - 3 : calcOffset)); 

            pixelTotal = pixelBuffer[calcOffset]; 
            pixelTotal += pixelBuffer[calcOffset + 1]; 
            pixelTotal += pixelBuffer[calcOffset + 2]; 

            matrixPatern += (pixelTotal > matrixMean ? "1" : "0"); 

            filterX += 1; 

            if (filterX > 1) 
            { 
                filterX = -1; 
                filterY += 1; 
            } 
        } 

        if (edgeMasks.Contains(matrixPatern)) 
        { 
            resultBuffer[k] = ClipByte(resultBuffer[k] * edgeFactor); 
            resultBuffer[k + 1] = ClipByte(resultBuffer[k + 1] * edgeFactor); 
            resultBuffer[k + 2] = ClipByte(resultBuffer[k + 2] * edgeFactor); 
        } 
    } 

    return resultBuffer.GetImage(sourceBitmap.Width, 
                                 sourceBitmap.Height); 
}

Frog: Filter Size 13×13


The predefined edge masks implemented in mean comparison have been wrapped by the GetBooleanEdgeMasks method. The definition is as follows:

public static List<string> GetBooleanEdgeMasks() 
{
    List<string> edgeMasks = new List<string>(); 

edgeMasks.Add("011011011"); edgeMasks.Add("000111111"); edgeMasks.Add("110110110"); edgeMasks.Add("111111000"); edgeMasks.Add("011011001"); edgeMasks.Add("100110110"); edgeMasks.Add("111011000"); edgeMasks.Add("111110000"); edgeMasks.Add("111011001"); edgeMasks.Add("100110111"); edgeMasks.Add("001011111"); edgeMasks.Add("111110100"); edgeMasks.Add("000011111"); edgeMasks.Add("000110111"); edgeMasks.Add("001011011"); edgeMasks.Add("110110100");
return edgeMasks; }

Frog: Filter Size 19×19


Implementing a Fuzzy Blur Filter

The FuzzyEdgeBlurFilter method serves as the implementation of a Fuzzy Blur Filter. As discussed earlier, a Fuzzy Blur Filter involves enhancing image edges through Boolean Edge detection, performing a Mean Filter blur and then once again performing Boolean Edge detection. This method has been defined as an extension method targeting the Bitmap class.

The following code snippet provides the definition of the FuzzyEdgeBlurFilter method:

public static Bitmap FuzzyEdgeBlurFilter(this Bitmap sourceBitmap,  
                                         int filterSize,  
                                         float edgeFactor1,  
                                         float edgeFactor2) 
{
    return  
    sourceBitmap.BooleanEdgeDetectionFilter(edgeFactor1). 
    MeanFilter(filterSize).BooleanEdgeDetectionFilter(edgeFactor2); 
}
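
A minimal usage sketch follows, assuming the extension methods above are in scope and using hypothetical file paths and illustrative parameter values:

    // Hypothetical file paths; adjust to suit your environment. 
    Bitmap sourceBitmap = new Bitmap("frog.jpg"); 

    // Filter size 9, first edge factor 0.5 (darken edges), 
    // second edge factor 2.0 (lighten edges). 
    Bitmap resultBitmap = sourceBitmap.FuzzyEdgeBlurFilter(9, 0.5f, 2.0f); 

    resultBitmap.Save("frog_fuzzy.png", System.Drawing.Imaging.ImageFormat.Png); 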

Frog: Filter Size 3×3


Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following images feature as sample images:

Litoria_tyleri

Schrecklicherpfeilgiftfrosch-01

Dendropsophus_microcephalus_-_calling_male_(Cope,_1886)

Atelopus_zeteki1

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Stained Glass Image Filter

Article Purpose

This article serves to provide a detailed discussion and implementation of a Stained Glass Image Filter. Primary topics explored include: creating Voronoi Diagrams, and Pixel Coordinate distance calculations implementing the Euclidean, Manhattan and Chebyshev methods. In addition, this article explores Gradient Based Edge Detection implementing thresholds.

Zurich: Block Size 15, Factor 4, Euclidean


Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

This article’s accompanying sample source code includes a Windows Forms based sample application. The sample application provides an implementation of the concepts explored by this article. Concepts discussed can be easily replicated and tested by using the sample application.

Source/input images can be specified from the local system when clicking the Load Image button. Additionally, users also have the option to save resulting filtered images by clicking the Save Image button.

The sample application through its user interface allows a user to specify several filter configuration options. Two main categories of configuration options have been defined as Block Properties and Edge Properties.

Block Properties relate to the process of rendering a Voronoi Diagram. The following configuration options have been implemented:

  • Block Size – During the process of rendering a Voronoi Diagram, regions or blocks of equal shape and size have to be defined. These uniform regions/blocks form the basis of rendering uniquely shaped regions later on. The Block Size option determines the width and height of an individual region/block. Larger values result in larger non-uniform regions being rendered. Smaller values in return result in smaller non-uniform regions being rendered.
  • Distance Factor – The Distance Factor option determines the extent to which a pixel’s containing region will be calculated. Possible values range from 1 to 4 inclusive. A Distance Factor value of 4 equates to precise calculation of a pixel’s containing region, whereas a value of 1 results in containing regions often registering pixels that should be part of a neighbouring region. Values closer to 4 result in more varied region shapes. Values closer to 1 result in regions being rendered having more of a uniform shape/pattern.
  • Distance Formula – The distance between a pixel’s coordinates and a region’s outline determines whether that pixel should be considered part of a region. The sample application implements three different methods of calculating pixel distance: the Euclidean, Manhattan and Chebyshev methods. Each results in region shapes being rendered differently.

Salzburg: Block Size 20, Factor 1, Chebyshev, Edge Threshold 2 


Edge Properties relate to the implementation of Image Gradient Based Edge Detection. Edge Detection is an optional filter and can be enabled/disabled through the user interface. The implementation of Edge Detection serves to highlight/outline regions rendered as part of a Voronoi Diagram. The configuration options implemented are:

  • Highlight Edges – A Boolean value indicating whether or not Edge Detection should be applied.
  • Threshold – In calculating image gradients, a threshold value determines if a pixel forms part of an edge. Higher threshold values result in fewer edges being expressed. Lower threshold values result in more edges being expressed.
  • Colour – If a pixel has been determined as forming part of an edge, the resulting pixel colour will be determined by the colour value specified by the user.

The following image is a screenshot of the Stained Glass Image Filter sample application in action:

Stained Glass Image Filter Sample Application 

Locarno: Block Size 10, Factor 4, Euclidean


Stained Glass

The Stained Glass Image Filter detailed in this article operates on the basis of implementing modifications upon a specified sample/input image, producing resulting images which resemble the appearance of stained glass artwork.

A common variant of stained glass artwork comes in the form of several individual pieces of coloured glass being combined in order to create an image. The sample source code employs a similar method of combining what appears to be non-uniform puzzle pieces. The following list provides a broad overview of the steps involved in applying a Stained Glass Image Filter:

  1. Render a Voronoi Diagram – Through rendering a Voronoi Diagram the resulting image will be divided into a number of regions, each region being intended to represent an individual glass puzzle piece. The following section of this article provides a detailed discussion on rendering Voronoi Diagrams.
  2. Assign each Pixel to a Voronoi Diagram Region – Each pixel forming part of the source/input image should be iterated. Whilst iterating pixels, determine the region to which a pixel should be associated. A pixel should be associated to the region whose border has been determined the nearest to the pixel. In a following section of this article a detailed discussion regarding Pixel Coordinate Distance Calculations can be found.
  3. Determine each Region’s Colour Mean – Each region will only express a single colour value. A region’s colour equates to the average colour as expressed by all the pixels forming part of a region. Once the average colour value of a region has been determined every pixel forming part of that region should be set to the average colour.
  4. Implement Edge Detection – If the user configuration option indicates that Edge Detection should be implemented, apply Gradient Based Edge Detection. This method of Edge Detection is discussed in detail in a following section of this article.

Bad Ragaz: Block Size 10, Factor 1, Manhattan 


Voronoi Diagrams

Voronoi Diagrams represent a fairly uncomplicated concept. In contrast, the implementation of Voronoi Diagrams proves somewhat more of a challenge. From Wikipedia we gain the following definition:

In mathematics, a Voronoi diagram is a way of dividing space into a number of regions. A set of points (called seeds, sites, or generators) is specified beforehand and for each seed there will be a corresponding region consisting of all points closer to that seed than to any other. The regions are called Voronoi cells. It is dual to the Delaunay triangulation.

In this article Voronoi Diagrams are generated resulting in regions expressing random shapes. Although region shapes are randomly generated, the parameters or ranges within which random values are selected are fixed/constant. The steps required in generating a Voronoi Diagram can be detailed as follows:

  1. Define fixed size square regions – By making use of the user specified Block/Region Size value, group pixels together into square regions.
  2. Determine a Seed Value for Random number generation – Determine the sum total of pixel colour components of all the pixels forming part of a square region. The colour sum total value should be used as a seed value when generating random numbers in the next step (see the sketch following this list).
  3. Determine a Random XY coordinate within each square region – Generate two random numbers, specifying each region’s coordinate boundaries as minimum and maximum boundaries in generating random numbers. Keep record of every new randomly generated XY-Coordinate value.
  4. Associate Pixels and Regions – A pixel should be associated to the Random Coordinate point nearest to that pixel. Determine the Random Coordinate nearest to each pixel in the source/input image. The method implemented in calculating coordinate distance depends on the configuration value specified by the user.
  5. Set Region Colours – Each pixel forming part of the same region should be set to the same colour. The colour assigned to a region’s pixels will be determined by the average colour value of the region’s pixels.
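
A brief sketch of the seeding behaviour relied upon here: constructing System.Random with the same seed always produces the same sequence, so a region’s colour sum total deterministically fixes that region’s random point, making the rendered Voronoi Diagram repeatable for a given image. The values below are illustrative only.

    int colourSumTotal = 48213;             // hypothetical region colour sum total 

    Random first = new Random(colourSumTotal); 
    Random second = new Random(colourSumTotal); 

    // Identically seeded instances yield identical sequences. 
    int x1 = first.Next(0, 20);             // block size of 20 
    int x2 = second.Next(0, 20);            // x1 == x2 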

The following image illustrates an example Voronoi Diagram consisting of 10 regions:

2Ddim-L2norm-10site

Port Edward: Block Size 10, Factor 1, Chebyshev, Edge Threshold 2


Calculating Pixel Coordinate Distances

The sample source code provides three different coordinate distance calculation methods. The supported methods are: Euclidean, Manhattan and Chebyshev. A pixel’s nearest randomly generated coordinate depends on the distance between that pixel and the random coordinate. Each method of calculating distance in most instances would be likely to produce different output values, which in turn influences the region to which a pixel will be associated.

The most common method of distance calculation, Euclidean Distance, has been described by Wikipedia as follows:

In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" distance between two points that one would measure with a ruler, and is given by the Pythagorean formula. By using this formula as distance, Euclidean space (or even any inner product space) becomes a metric space. The associated norm is called the Euclidean norm. Older literature refers to the metric as Pythagorean metric.

When calculating Euclidean Distance the algorithm implemented can be expressed as follows:

Euclidean Distance Algorithm
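
For reference, the distance between points (x1, y1) and (x2, y2) written out:

    d = √( (x1 − x2)² + (y1 − y2)² )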

Zurich: Block Size 10, Factor 1, Euclidean


As an alternative to calculating Euclidean Distance, the sample source code also implements Manhattan Distance calculation. Often Manhattan Distance calculation will be referred to as Taxicab geometry, rectilinear distance or city block distance. From Wikipedia we gain the following definition:

Taxicab geometry, considered by Hermann Minkowski in the 19th century, is a form of geometry in which the usual distance function or metric of Euclidean geometry is replaced by a new metric in which the distance between two points is the sum of the absolute differences of their coordinates. The taxicab metric is also known as rectilinear distance, L1 distance or ℓ1 norm (see Lp space), city block distance, Manhattan distance, or Manhattan length, with corresponding variations in the name of the geometry.[1] The latter names allude to the grid layout of most streets on the island of Manhattan, which causes the shortest path a car could take between two intersections in the borough to have length equal to the intersections’ distance in taxicab geometry.

When calculating Manhattan Distance the algorithm implemented can be expressed as follows:

Manhattan Distance Algorithm
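
For reference, written out:

    d = |x1 − x2| + |y1 − y2|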

Port Edward: Block Size 10, Factor 4, Euclidean


Chebyshev Distance is a distance algorithm resembling the way in which a King chess piece may move on a chess board. The following definition we gain from Wikipedia:

In mathematics, Chebyshev distance (or Tchebychev distance), Maximum metric, or L∞ metric[1] is a metric defined on a vector space where the distance between two vectors is the greatest of their differences along any coordinate dimension.[2] It is named after Pafnuty Chebyshev.

It is also known as chessboard distance, since in the game of chess the minimum number of moves needed by a king to go from one square on a chessboard to another equals the Chebyshev distance between the centers of the squares, if the squares have side length one, as represented in 2-D spatial coordinates with axes aligned to the edges of the board.[3] For example, the Chebyshev distance between f6 and e2 equals 4.

When calculating Chebyshev Distance the algorithm implemented can be expressed as follows:

Chebyshev Distance Algorithm
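
For reference, written out:

    d = max( |x1 − x2|, |y1 − y2| )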

Salzburg: Block Size 20, Factor 1, Chebyshev


Gradient Based Edge Detection

Various methods of Edge Detection can easily be implemented in C#. Each method of Edge Detection provides a set of benefits, usually weighed against a set of trade-offs. In this article and the accompanying sample source code the Gradient Based Edge Detection method has been implemented.

Take into regard that every region within the rendered Voronoi Diagram will only express a single colour, although most regions differ in the single colour they express. Once all pixels have been associated to a region and all pixel colour values have been updated, the resulting image defines mostly clearly distinguishable colour gradients. A method of Gradient Based Edge Detection performs efficiently at detecting the edges defined between different regions.

An image gradient can be considered as a difference in colour intensity relating to a specific direction. Only once all tasks related to applying the Stained Glass Filter have been completed should Gradient Based Edge Detection be applied. The steps involved in applying Gradient Based Edge Detection can be described as follows:

  1. Iterate each pixel – Each pixel forming part of a source/input image should be iterated.
  2. Determine Horizontal and Vertical Gradients – Calculate the colour value difference between the currently iterated pixel’s left and right neighbour pixel as well as the top and bottom neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  3. Determine Horizontal Gradient – Calculate the colour value difference between the currently iterated pixel’s left and right neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  4. Determine Vertical Gradient – Calculate the colour value difference between the currently iterated pixel’s top and bottom neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  5. Determine Diagonal Gradients – Calculate the colour value difference between the currently iterated pixel’s North-Western and South-Eastern neighbour pixel as well as the North-Eastern and South-Western neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  6. Determine NW-SE Gradient – Calculate the colour value difference between the currently iterated pixel’s North-Western and South-Eastern neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  7. Determine NE-SW Gradient  – Calculate the colour value difference between the currently iterated pixel’s North-Eastern and South-Western neighbour pixel.
  8. Determine and set result pixel value – If any of the six gradients calculated exceeded the specified threshold value, set the related pixel in the resulting image to the Edge Colour specified by the user; if not, set the related pixel equal to the source pixel colour value. A short worked example follows this list.
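
As a hypothetical worked example of step 3: assume the pixel to the left expresses the colour (Blue 60, Green 50, Red 40), the pixel to the right (Blue 130, Green 110, Red 90), and a threshold of 50. The horizontal gradient equates to |60 − 130| + |50 − 110| + |40 − 90| = 70 + 60 + 50 = 180, which exceeds the threshold; the pixel would therefore be set to the specified Edge Colour.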

Zurich: Block Size 10, Factor 4, Chebyshev


Implementing a Stained Glass Image Filter

The sample source code defines two helper classes, both implemented when applying the Stained Glass Image Filter. The Pixel class represents a single pixel in terms of an XY-Coordinate and Red, Green and Blue values. The definition is as follows:

public class Pixel  
{
    private int xOffset = 0; 
    public int XOffset 
    {
        get { return xOffset; } set { xOffset = value; } 
    }

    private int yOffset = 0; 
    public int YOffset 
    {
        get { return yOffset; } set { yOffset = value; } 
    }

    private byte blue = 0; 
    public byte Blue 
    {
        get { return blue; } set { blue = value; } 
    }

    private byte green = 0; 
    public byte Green 
    {
        get { return green; } set { green = value; } 
    }

    private byte red = 0; 
    public byte Red 
    {
        get { return red; } set { red = value; } 
    } 
}

Zurich: Block Size 10, Factor 1, Chebyshev, Edge Threshold 1


The VoronoiPoint class serves as a method of recording randomly generated coordinates and referencing a region’s associated pixels. The definition is as follows:

public class VoronoiPoint 
{
    private int xOffset = 0; 
    public int XOffset 
    {
        get  { return xOffset; } set { xOffset = value; }
    }

    private int yOffset = 0; 
    public int YOffset 
    {
        get { return yOffset; } set { yOffset = value; } 
    }

    private int blueTotal = 0; 
    public int BlueTotal 
    {
        get { return blueTotal; } set { blueTotal = value; } 
    }

    private int greenTotal = 0; 
    public int GreenTotal 
    {
        get { return greenTotal; } set { greenTotal = value; } 
    }

    private int redTotal = 0; 
    public int RedTotal 
    {
        get { return redTotal; } set { redTotal = value; } 
    }

    public void CalculateAverages() 
    {
        if (pixelCollection.Count > 0) 
        {
            blueAverage = blueTotal / pixelCollection.Count; 
            greenAverage = greenTotal / pixelCollection.Count; 
            redAverage = redTotal / pixelCollection.Count; 
        }
    }

    private int blueAverage = 0; 
    public int BlueAverage 
    {
        get { return blueAverage; } 
    }

    private int greenAverage = 0; 
    public int GreenAverage 
    {
        get { return greenAverage; } 
    }

    private int redAverage = 0; 
    public int RedAverage 
    {
        get { return redAverage; } 
    }

    private List<Pixel> pixelCollection = new List<Pixel>(); 
    public List<Pixel> PixelCollection 
    {
        get { return pixelCollection; } 
    }

    public void AddPixel(Pixel pixel) 
    {
        blueTotal += pixel.Blue; 
        greenTotal += pixel.Green; 
        redTotal += pixel.Red; 

        pixelCollection.Add(pixel); 
    } 
}

Zurich: Block Size 20, Factor 1, Euclidean, Edge Threshold 1


From the perspective of a filter implementation code base the only requirement comes in the form of having to invoke the StainedGlassColorFilter extension method; no additional work is required from external code consumers. The StainedGlassColorFilter method has been defined as an extension method targeting the Bitmap class. The StainedGlassColorFilter method definition is as follows:

public static Bitmap StainedGlassColorFilter(this Bitmap sourceBitmap,  
                                             int blockSize, double blockFactor, 
                                             DistanceFormulaType distanceType, 
                                             bool highlightEdges,  
                                             byte edgeThreshold, Color edgeColor) 
{
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height]; 
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height]; 

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length); 

    sourceBitmap.UnlockBits(sourceData); 

    int neighbourHoodTotal = 0; 
    int sourceOffset = 0; 
    int resultOffset = 0; 
    int currentPixelDistance = 0; 
    int nearestPixelDistance = 0; 
    int nearesttPointIndex = 0; 

    Random randomizer = new Random(); 

    List<VoronoiPoint> randomPointList = new List<VoronoiPoint>(); 

    for (int row = 0; row < sourceBitmap.Height - blockSize; row += blockSize) 
    { 
        for (int col = 0; col < sourceBitmap.Width - blockSize; col += blockSize) 
        { 
            sourceOffset = row * sourceData.Stride + col * 4; 

            neighbourHoodTotal = 0; 

            for (int y = 0; y < blockSize; y++) 
            { 
                for (int x = 0; x < blockSize; x++) 
                { 
                    resultOffset = sourceOffset + y * sourceData.Stride + x * 4; 

                    neighbourHoodTotal += pixelBuffer[resultOffset]; 
                    neighbourHoodTotal += pixelBuffer[resultOffset + 1]; 
                    neighbourHoodTotal += pixelBuffer[resultOffset + 2]; 
                } 
            } 

            randomizer = new Random(neighbourHoodTotal); 

            VoronoiPoint randomPoint = new VoronoiPoint(); 
            randomPoint.XOffset = randomizer.Next(0, blockSize) + col; 
            randomPoint.YOffset = randomizer.Next(0, blockSize) + row; 

            randomPointList.Add(randomPoint); 
        } 
    } 

    int rowOffset = 0; 
    int colOffset = 0; 

    for (int bufferOffset = 0; bufferOffset < pixelBuffer.Length - 4; bufferOffset += 4) 
    { 
        rowOffset = bufferOffset / sourceData.Stride; 
        colOffset = (bufferOffset % sourceData.Stride) / 4; 

        currentPixelDistance = 0; 
        nearestPixelDistance = blockSize * 4; 
        nearesttPointIndex = 0; 

        List<VoronoiPoint> pointSubset = new List<VoronoiPoint>(); 

        pointSubset.AddRange(from t in randomPointList 
                             where rowOffset >= t.YOffset - blockSize * 2 && 
                                   rowOffset <= t.YOffset + blockSize * 2 
                             select t); 

        for (int k = 0; k < pointSubset.Count; k++) 
        { 
            if (distanceType == DistanceFormulaType.Euclidean) 
            { 
                currentPixelDistance = CalculateDistanceEuclidean( 
                    pointSubset[k].XOffset, colOffset, 
                    pointSubset[k].YOffset, rowOffset); 
            } 
            else if (distanceType == DistanceFormulaType.Manhattan) 
            { 
                currentPixelDistance = CalculateDistanceManhattan( 
                    pointSubset[k].XOffset, colOffset, 
                    pointSubset[k].YOffset, rowOffset); 
            } 
            else if (distanceType == DistanceFormulaType.Chebyshev) 
            { 
                currentPixelDistance = CalculateDistanceChebyshev( 
                    pointSubset[k].XOffset, colOffset, 
                    pointSubset[k].YOffset, rowOffset); 
            } 

            if (currentPixelDistance <= nearestPixelDistance) 
            { 
                nearestPixelDistance = currentPixelDistance; 
                nearesttPointIndex = k; 

                if (nearestPixelDistance <= blockSize / blockFactor) 
                { 
                    break; 
                } 
            } 
        } 

        Pixel tmpPixel = new Pixel(); 
        tmpPixel.XOffset = colOffset; 
        tmpPixel.YOffset = rowOffset; 
        tmpPixel.Blue = pixelBuffer[bufferOffset]; 
        tmpPixel.Green = pixelBuffer[bufferOffset + 1]; 
        tmpPixel.Red = pixelBuffer[bufferOffset + 2]; 

        pointSubset[nearesttPointIndex].AddPixel(tmpPixel); 
    } 

    for (int k = 0; k < randomPointList.Count; k++) 
    { 
        randomPointList[k].CalculateAverages(); 

        for (int i = 0; i < randomPointList[k].PixelCollection.Count; i++) 
        { 
            resultOffset = randomPointList[k].PixelCollection[i].YOffset * sourceData.Stride + 
                           randomPointList[k].PixelCollection[i].XOffset * 4; 

            resultBuffer[resultOffset] = (byte)randomPointList[k].BlueAverage; 
            resultBuffer[resultOffset + 1] = (byte)randomPointList[k].GreenAverage; 
            resultBuffer[resultOffset + 2] = (byte)randomPointList[k].RedAverage; 

            resultBuffer[resultOffset + 3] = 255; 
        } 
    } 

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height); 

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, 
                            resultBitmap.Width, resultBitmap.Height), 
                            ImageLockMode.WriteOnly, 
                            PixelFormat.Format32bppArgb); 

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length); 

    resultBitmap.UnlockBits(resultData); 

    if (highlightEdges == true) 
    { 
        resultBitmap = resultBitmap.GradientBasedEdgeDetectionFilter(edgeColor, edgeThreshold); 
    } 

    return resultBitmap; 
}
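
A minimal usage sketch, assuming hypothetical file paths and illustrative parameter values (block size 15, distance factor 4, Euclidean distance, no edge highlighting). DistanceFormulaType refers to the enumeration defined in the sample source code project.

    Bitmap sourceBitmap = new Bitmap("zurich.jpg");     // hypothetical path 

    Bitmap resultBitmap = sourceBitmap.StainedGlassColorFilter( 
                          15, 4.0, DistanceFormulaType.Euclidean, 
                          false, 0, Color.Black); 

    resultBitmap.Save("zurich_stainedglass.png", System.Drawing.Imaging.ImageFormat.Png); 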

Locarno: Block Size 10, Factor 4, Euclidean, Edge Threshold 1


Implementing Pixel Coordinate Distance Calculations

As mentioned earlier, this article and the accompanying sample source code support coordinate distance calculations through three different calculation methods, namely Euclidean, Manhattan and Chebyshev. The method of distance calculation implemented depends on the configuration option specified by the user.

The CalculateDistanceEuclidean method calculates distance implementing the Euclidean Distance calculation method. In order to aid faster execution this method will calculate the square root of a specific value only once. Once a square root has been calculated the result is kept in memory. The following code snippet lists the definition of the CalculateDistanceEuclidean method:

private static Dictionary <int,int> squareRoots = new Dictionary<int,int>(); 

private static int CalculateDistanceEuclidean(int x1, int x2, int y1, int y2) 
{
    int square = (x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2); 

    if (squareRoots.ContainsKey(square) == false) 
    {
        squareRoots.Add(square, (int)Math.Sqrt(square)); 
    }

    return squareRoots[square]; 
}

The two other methods of calculating distance are implemented through the CalculateDistanceManhattan and CalculateDistanceChebyshev methods. The definitions are as follows:

private static int CalculateDistanceManhattan(int x1, int x2, int y1, int y2) 
{
    return Math.Abs(x1 - x2) + Math.Abs(y1 - y2); 
}

private static int CalculateDistanceChebyshev(int x1, int x2, int y1, int y2) 
{
    return Math.Max(Math.Abs(x1 - x2), Math.Abs(y1 - y2)); 
}

Bad Ragaz: Block Size 12, Factor 1, Chebyshev


Implementing Gradient Based Edge Detection

Did you notice that the very last step performed by the StainedGlassColorFilter method involves implementing Gradient Based Edge Detection, depending on whether edge highlighting had been specified by the user?

The following code snippet provides the implementation of the GradientBasedEdgeDetectionFilter extension method:

public static Bitmap GradientBasedEdgeDetectionFilter( 
                this Bitmap sourceBitmap, 
                Color edgeColour, 
                byte threshold = 0) 
{
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle (0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height]; 
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height]; 

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length); 
    Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length); 

    sourceBitmap.UnlockBits(sourceData); 

    int sourceOffset = 0, gradientValue = 0; 
    bool exceedsThreshold = false; 

    for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++) 
    { 
        for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++) 
        { 
            sourceOffset = offsetY * sourceData.Stride + offsetX * 4; 
            gradientValue = 0; 
            exceedsThreshold = true; 

            // Horizontal Gradient 
            CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4, 
                           ref gradientValue, threshold, 2); 
            // Vertical Gradient 
            exceedsThreshold = CheckThreshold(pixelBuffer, 
                               sourceOffset - sourceData.Stride, 
                               sourceOffset + sourceData.Stride, 
                               ref gradientValue, threshold, 2); 

            if (exceedsThreshold == false) 
            { 
                gradientValue = 0; 

                // Horizontal Gradient 
                exceedsThreshold = CheckThreshold(pixelBuffer, 
                                   sourceOffset - 4, sourceOffset + 4, 
                                   ref gradientValue, threshold); 

                if (exceedsThreshold == false) 
                { 
                    gradientValue = 0; 

                    // Vertical Gradient 
                    exceedsThreshold = CheckThreshold(pixelBuffer, 
                                       sourceOffset - sourceData.Stride, 
                                       sourceOffset + sourceData.Stride, 
                                       ref gradientValue, threshold); 

                    if (exceedsThreshold == false) 
                    { 
                        gradientValue = 0; 

                        // Diagonal Gradient : NW-SE 
                        CheckThreshold(pixelBuffer, 
                                       sourceOffset - 4 - sourceData.Stride, 
                                       sourceOffset + 4 + sourceData.Stride, 
                                       ref gradientValue, threshold, 2); 
                        // Diagonal Gradient : NE-SW 
                        exceedsThreshold = CheckThreshold(pixelBuffer, 
                                           sourceOffset - sourceData.Stride + 4, 
                                           sourceOffset - 4 + sourceData.Stride, 
                                           ref gradientValue, threshold, 2); 

                        if (exceedsThreshold == false) 
                        { 
                            gradientValue = 0; 

                            // Diagonal Gradient : NW-SE 
                            exceedsThreshold = CheckThreshold(pixelBuffer, 
                                               sourceOffset - 4 - sourceData.Stride, 
                                               sourceOffset + 4 + sourceData.Stride, 
                                               ref gradientValue, threshold); 

                            if (exceedsThreshold == false) 
                            { 
                                gradientValue = 0; 

                                // Diagonal Gradient : NE-SW 
                                exceedsThreshold = CheckThreshold(pixelBuffer, 
                                                   sourceOffset - sourceData.Stride + 4, 
                                                   sourceOffset + sourceData.Stride - 4, 
                                                   ref gradientValue, threshold); 
                            } 
                        } 
                    } 
                } 
            } 

            if (exceedsThreshold == true) 
            { 
                resultBuffer[sourceOffset] = edgeColour.B; 
                resultBuffer[sourceOffset + 1] = edgeColour.G; 
                resultBuffer[sourceOffset + 2] = edgeColour.R; 
            } 

            resultBuffer[sourceOffset + 3] = 255; 
        } 
    } 

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height); 

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, 
                            resultBitmap.Width, resultBitmap.Height), 
                            ImageLockMode.WriteOnly, 
                            PixelFormat.Format32bppArgb); 

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length); 

    resultBitmap.UnlockBits(resultData); 

    return resultBitmap; 
}

Zurich: Block Size 15, Factor 1, Manhattan, Edge Threshold 1


Sample Images

This article features a rendered graphic illustrating an example Voronoi Diagram, which has been released into the public domain by its author, Augochy at the wikipedia project. This applies worldwide. The original can be downloaded from Wikipedia.

All of the photos that appear in this article were taken by myself. Photos listed under Zurich, Locarno and Bad Ragaz were shot in Switzerland. The photo listed as Salzburg had been shot in Austria and the photo listed under Port Edward had been shot in South Africa. In order to fully realize the extent to which images had been modified, the following section details the original photos.

Zurich, Switzerland


Salzburg, Austria


Locarno, Switzerland


Bad Ragaz, Switzerland


Port Edward, South Africa


Zurich, Switzerland


Zurich, Switzerland


Zurich, Switzerland


Bad Ragaz, Switzerland


Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Oil Painting and Cartoon Filter

Article Purpose

This article illustrates and provides a discussion and implementation of Oil Painting Filters and related Image Cartoon Filters.

Sunflower: Oil Painting, Filter 5, Levels 30, Cartoon Threshold 30


Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

A sample application accompanies this article. The sample application creates a visual implementation of the concepts discussed throughout this article. Source/input images can be selected from the local system and if desired filter result images can be saved to the local file system.

The two main types of functionality exposed by the sample application can be described as Image Oil Painting Filters and Image Cartoon Filters. The user interface provides the following user input options:

  • Filter Size – The number of neighbouring pixels used in calculating each individual pixel value with regard to an Oil Painting Filter. Higher filter sizes relate to a more intense Oil Painting Filter being applied. Lower filter sizes relate to a less intense Oil Painting Filter being applied.
  • Intensity Levels – Represents the number of Intensity Levels implemented when applying an Oil Painting Filter. Higher values result in a broader range of colour intensities forming part of the result image. Lower values will reduce the range of colour intensities forming part of the result image.
  • Cartoon Filter – A Boolean value indicating whether or not, in addition to an Oil Painting Filter, a Cartoon Filter should also be applied.
  • Threshold – Only applicable when applying a Cartoon Filter. This option represents the threshold value implemented in determining whether a pixel forms part of an edge. Lower values result in more edges being highlighted. Higher values result in fewer edges being highlighted.

The following image is a screenshot of the Oil Painting Cartoon Filter sample application in action:

OilPaintingCartoonFilter_SampleApplication

Rose: Oil Painting, Filter 15, Levels 10


Image Oil Painting Filter

The Image Oil Painting Filter consists of two main components: colour gradients and pixel colour intensities. As implied by the title, when implementing this filter resulting images are similar in appearance to Oil Paintings. Result images express a lesser degree of detail when compared to source/input images. This filter also tends to output images which appear to have smaller colour ranges.

Four steps are required when implementing an Oil Painting Filter, indicated as follows:

  1. Iterate each pixel – Every pixel forming part of the source/input image should be iterated. When iterating a pixel determine the neighbouring pixel values based on the specified filter size/filter range.
  2. Calculate Colour Intensity – Determine the Colour Intensity of each pixel being iterated and that of the neighbouring pixels. The neighbouring pixels included should extend to a range determined by the Filter Size specified. The calculated value should be reduced in order to match a value ranging from zero to the number of Intensity Levels specified.
  3. Determine maximum neighbourhood colour intensity – When calculating the colour intensities of a pixel neighbourhood determine the maximum intensity value. In addition, record the occurrence of each intensity level and sum each of the Red, Green and Blue pixel colour component values equating to the same intensity level.
  4. Assign the result pixel – The value assigned to the corresponding pixel in the resulting image equates to the pixel colour sum total, where those pixels expressed the same intensity level. The sum total should be averaged by dividing the colour sum total by the intensity level occurrence.

Roses: Oil Painting, Filter 11, Levels 60, Cartoon Threshold 80


When calculating colour intensity, reduced to fit the number of levels specified, the algorithm implemented can be expressed as follows:

Colour Intensity Level Algorithm
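
For reference, the algorithm depicted above can be written out as:

    I = round( ((R + G + B) / 3) × l / 255 )

Note that in the sample code implementation listed later in this article, l equates to the specified level count minus one, keeping calculated intensity values within valid array index bounds.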

In the algorithm listed above the variables implemented can be explained as follows:

  • I – Intensity: The calculated intensity value.
  • R – Red: The value of a pixel’s Red colour component.
  • G – Green: The value of a pixel’s Green colour component.
  • B – Blue: The value of a pixel’s Blue colour component.
  • l – Number of intensity levels: The maximum number of intensity levels specified.

Rose: Oil Painting, Filter 15, Levels 30


Cartoon Filter implementing Edge Detection

A Cartoon Filter effect can be achieved by combining an Image Oil Painting filter and an Edge Detection Filter. The Oil Painting filter has the effect of creating more gradual colour gradients, in other words reducing edge intensity.

The steps required in implementing a Cartoon filter can be listed as follows:

  1. Apply Oil Painting filter – Applying an Oil Painting Filter creates the perception of result images having been painted by hand.
  2. Implement Edge Detection – Using the original source/input image, create a new binary image detailing detected edges.
  3. Overlay edges on Oil Painting image – Iterate each pixel forming part of the edge detected image. If the pixel being iterated forms part of an edge, the related pixel in the Oil Painting filtered image should be set to black. Because the edge detected image was created as a binary image, a pixel forms part of an edge should that pixel equate to white.

Daisy: Oil Painting, Filter 7, Levels 30, Cartoon Threshold 40 


In the sample source code Edge Detection has been implemented through Gradient Based Edge Detection. This method of Edge Detection compares the difference in colour gradients between a pixel’s neighbouring pixels. A pixel forms part of an edge if the difference in neighbouring pixel colour values exceeds a specified threshold value. The steps involved in Gradient Based Edge Detection are as follows:

  1. Iterate each pixel – Each pixel forming part of a source/input image should be iterated.
  2. Determine Horizontal and Vertical Gradients – Calculate the colour value difference between the currently iterated pixel’s left and right neighbour pixel as well as the top and bottom neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  3. Determine Horizontal Gradient – Calculate the colour value difference between the currently iterated pixel’s left and right neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  4. Determine Vertical Gradient – Calculate the colour value difference between the currently iterated pixel’s top and bottom neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  5. Determine Diagonal Gradients – Calculate the colour value difference between the currently iterated pixel’s North-Western and South-Eastern neighbour pixel as well as the North-Eastern and South-Western neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  6. Determine NW-SE Gradient – Calculate the colour value difference between the currently iterated pixel’s North-Western and South-Eastern neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  7. Determine NE-SW Gradient  – Calculate the colour value difference between the currently iterated pixel’s North-Eastern and South-Western neighbour pixel.
  8. Determine and set result pixel value – If any of the six gradients calculated exceeded the specified threshold value set the related pixel in the resulting image to white, if not, set the related pixel to black.

Rose: Oil Painting, Filter 9, Levels 30


Implementing an Oil Painting Filter

The sample source code defines the OilPaintFilter method, an extension method targeting the Bitmap class. This method determines the maximum colour intensity from a pixel’s neighbouring pixels. The definition is detailed as follows:

public static Bitmap OilPaintFilter(this Bitmap sourceBitmap, 
                                       int levels, 
                                       int filterSize) 
{
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height]; 
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height]; 

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length); 

    sourceBitmap.UnlockBits(sourceData); 

    int[] intensityBin = new int[levels]; 
    int[] blueBin = new int[levels]; 
    int[] greenBin = new int[levels]; 
    int[] redBin = new int[levels]; 

    levels = levels - 1; 

    int filterOffset = (filterSize - 1) / 2; 
    int byteOffset = 0; 
    int calcOffset = 0; 
    int currentIntensity = 0; 
    int maxIntensity = 0; 
    int maxIndex = 0; 

    double blue = 0; 
    double green = 0; 
    double red = 0; 

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++) 
    { 
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++) 
        { 
            blue = green = red = 0; 

            currentIntensity = maxIntensity = maxIndex = 0; 

            intensityBin = new int[levels + 1]; 
            blueBin = new int[levels + 1]; 
            greenBin = new int[levels + 1]; 
            redBin = new int[levels + 1]; 

            byteOffset = offsetY * sourceData.Stride + offsetX * 4; 

            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++) 
            { 
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++) 
                { 
                    calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride); 

                    currentIntensity = (int)Math.Round(((double)(pixelBuffer[calcOffset] + 
                                       pixelBuffer[calcOffset + 1] + 
                                       pixelBuffer[calcOffset + 2]) / 3.0 * 
                                       (levels)) / 255.0); 

                    intensityBin[currentIntensity] += 1; 
                    blueBin[currentIntensity] += pixelBuffer[calcOffset]; 
                    greenBin[currentIntensity] += pixelBuffer[calcOffset + 1]; 
                    redBin[currentIntensity] += pixelBuffer[calcOffset + 2]; 

                    if (intensityBin[currentIntensity] > maxIntensity) 
                    { 
                        maxIntensity = intensityBin[currentIntensity]; 
                        maxIndex = currentIntensity; 
                    } 
                } 
            } 

            blue = blueBin[maxIndex] / maxIntensity; 
            green = greenBin[maxIndex] / maxIntensity; 
            red = redBin[maxIndex] / maxIntensity; 

            resultBuffer[byteOffset] = ClipByte(blue); 
            resultBuffer[byteOffset + 1] = ClipByte(green); 
            resultBuffer[byteOffset + 2] = ClipByte(red); 
            resultBuffer[byteOffset + 3] = 255; 
        } 
    } 

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height); 

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, 
                            resultBitmap.Width, resultBitmap.Height), 
                            ImageLockMode.WriteOnly, 
                            PixelFormat.Format32bppArgb); 

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length); 

    resultBitmap.UnlockBits(resultData); 

    return resultBitmap; 
}

Rose: Oil Painting, Filter 7, Levels 20, Cartoon Threshold 20


Implementing a Cartoon Filter using Edge Detection

The sample source code defines the CheckThreshold method. The purpose of this method is to determine the difference in colour between two pixels. In addition this method compares the colour difference against the specified threshold value. The following code snippet provides the implementation:

private static bool CheckThreshold(byte[] pixelBuffer,  
                                   int offset1, int offset2,  
                                   ref int gradientValue,  
                                   byte threshold,  
                                   int divideBy = 1) 
{ 
    gradientValue += 
    Math.Abs(pixelBuffer[offset1] - 
    pixelBuffer[offset2]) / divideBy; 

    gradientValue += 
    Math.Abs(pixelBuffer[offset1 + 1] - 
    pixelBuffer[offset2 + 1]) / divideBy; 

    gradientValue += 
    Math.Abs(pixelBuffer[offset1 + 2] - 
    pixelBuffer[offset2 + 2]) / divideBy; 

    return (gradientValue >= threshold); 
}

Rose: Oil Painting, Filter 13, Levels 15


The GradientBasedEdgeDetectionFilter method has been defined as an extension method targeting the Bitmap class. This method iterates each pixel forming part of the source/input image. Whilst iterating pixels the GradientBasedEdgeDetectionFilter method determines if the colour gradients in various directions exceed the specified threshold value. A pixel is considered part of an edge if a colour gradient exceeds the threshold value. The implementation is as follows:

public static Bitmap GradientBasedEdgeDetectionFilter( 
                        this Bitmap sourceBitmap, 
                        byte threshold = 0) 
{ 
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle (0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height]; 
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height]; 

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length); 

    sourceBitmap.UnlockBits(sourceData); 

    int sourceOffset = 0, gradientValue = 0; 
    bool exceedsThreshold = false; 

    for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++) 
    { 
        for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++) 
        { 
            sourceOffset = offsetY * sourceData.Stride + offsetX * 4; 
            gradientValue = 0; 
            exceedsThreshold = true; 

            // Horizontal Gradient 
            CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4, 
                           ref gradientValue, threshold, 2); 
            // Vertical Gradient 
            exceedsThreshold = CheckThreshold(pixelBuffer, 
                               sourceOffset - sourceData.Stride, 
                               sourceOffset + sourceData.Stride, 
                               ref gradientValue, threshold, 2); 

            if (exceedsThreshold == false) 
            { 
                gradientValue = 0; 

                // Horizontal Gradient 
                exceedsThreshold = CheckThreshold(pixelBuffer, 
                                   sourceOffset - 4, sourceOffset + 4, 
                                   ref gradientValue, threshold); 

                if (exceedsThreshold == false) 
                { 
                    gradientValue = 0; 

                    // Vertical Gradient 
                    exceedsThreshold = CheckThreshold(pixelBuffer, 
                                       sourceOffset - sourceData.Stride, 
                                       sourceOffset + sourceData.Stride, 
                                       ref gradientValue, threshold); 

                    if (exceedsThreshold == false) 
                    { 
                        gradientValue = 0; 

                        // Diagonal Gradient : NW-SE 
                        CheckThreshold(pixelBuffer, 
                                       sourceOffset - 4 - sourceData.Stride, 
                                       sourceOffset + 4 + sourceData.Stride, 
                                       ref gradientValue, threshold, 2); 
                        // Diagonal Gradient : NE-SW 
                        exceedsThreshold = CheckThreshold(pixelBuffer, 
                                           sourceOffset - sourceData.Stride + 4, 
                                           sourceOffset - 4 + sourceData.Stride, 
                                           ref gradientValue, threshold, 2); 

                        if (exceedsThreshold == false) 
                        { 
                            gradientValue = 0; 

                            // Diagonal Gradient : NW-SE 
                            exceedsThreshold = CheckThreshold(pixelBuffer, 
                                               sourceOffset - 4 - sourceData.Stride, 
                                               sourceOffset + 4 + sourceData.Stride, 
                                               ref gradientValue, threshold); 

                            if (exceedsThreshold == false) 
                            { 
                                gradientValue = 0; 

                                // Diagonal Gradient : NE-SW 
                                exceedsThreshold = CheckThreshold(pixelBuffer, 
                                                   sourceOffset - sourceData.Stride + 4, 
                                                   sourceOffset + sourceData.Stride - 4, 
                                                   ref gradientValue, threshold); 
                            } 
                        } 
                    } 
                } 
            } 

            resultBuffer[sourceOffset] = (byte)(exceedsThreshold ? 255 : 0); 
            resultBuffer[sourceOffset + 1] = resultBuffer[sourceOffset]; 
            resultBuffer[sourceOffset + 2] = resultBuffer[sourceOffset]; 
            resultBuffer[sourceOffset + 3] = 255; 
        } 
    } 

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height); 

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, 
                            resultBitmap.Width, resultBitmap.Height), 
                            ImageLockMode.WriteOnly, 
                            PixelFormat.Format32bppArgb); 

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length); 

    resultBitmap.UnlockBits(resultData); 

    return resultBitmap; 
}

Rose: Oil Painting, Filter 7, Levels 20, Cartoon Threshold 20


The CartoonFilter method serves to combine the images generated by the OilPaintFilter and GradientBasedEdgeDetectionFilter methods. The CartoonFilter method, being defined as an extension method, targets the Bitmap class. In this method pixels detected as forming part of an edge are set to black in the Oil Painting filtered image. The definition is as follows:

public static Bitmap CartoonFilter(this Bitmap sourceBitmap,
                                       int levels, 
                                       int filterSize, 
                                       byte threshold) 
{
    Bitmap paintFilterImage =  
           sourceBitmap.OilPaintFilter(levels, filterSize);

    Bitmap edgeDetectImage = 
           sourceBitmap.GradientBasedEdgeDetectionFilter(threshold); 

    BitmapData paintData = 
               paintFilterImage.LockBits(new Rectangle(0, 0, 
               paintFilterImage.Width, paintFilterImage.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] paintPixelBuffer = new byte[paintData.Stride * paintData.Height]; 

    Marshal.Copy(paintData.Scan0, paintPixelBuffer, 0, paintPixelBuffer.Length); 

    paintFilterImage.UnlockBits(paintData); 

    BitmapData edgeData = 
               edgeDetectImage.LockBits(new Rectangle(0, 0, 
               edgeDetectImage.Width, edgeDetectImage.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] edgePixelBuffer = new byte[edgeData.Stride * edgeData.Height]; 

    Marshal.Copy(edgeData.Scan0, edgePixelBuffer, 0, edgePixelBuffer.Length); 

    edgeDetectImage.UnlockBits(edgeData); 

    byte[] resultBuffer = new byte[edgeData.Stride * edgeData.Height]; 

    for (int k = 0; k + 4 < paintPixelBuffer.Length; k += 4) 
    { 
        if (edgePixelBuffer[k] == 255 || 
            edgePixelBuffer[k + 1] == 255 || 
            edgePixelBuffer[k + 2] == 255) 
        { 
            resultBuffer[k] = 0; 
            resultBuffer[k + 1] = 0; 
            resultBuffer[k + 2] = 0; 
            resultBuffer[k + 3] = 255; 
        } 
        else 
        { 
            resultBuffer[k] = paintPixelBuffer[k]; 
            resultBuffer[k + 1] = paintPixelBuffer[k + 1]; 
            resultBuffer[k + 2] = paintPixelBuffer[k + 2]; 
            resultBuffer[k + 3] = 255; 
        } 
    } 

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height); 

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, 
                            resultBitmap.Width, resultBitmap.Height), 
                            ImageLockMode.WriteOnly, 
                            PixelFormat.Format32bppArgb); 

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length); 

    resultBitmap.UnlockBits(resultData); 

    return resultBitmap; 
}
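
A minimal usage sketch, assuming hypothetical file paths and the parameter values shown in the sample image caption below (25 intensity levels, filter size 9, cartoon threshold 25):

    Bitmap sourceBitmap = new Bitmap("rose.jpg");       // hypothetical path 

    Bitmap cartoonBitmap = sourceBitmap.CartoonFilter(25, 9, 25); 

    cartoonBitmap.Save("rose_cartoon.png", System.Drawing.Imaging.ImageFormat.Png); 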

Rose: Oil Painting, Filter 9, Levels 25, Cartoon Threshold 25


Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following image files feature as sample images:

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Compass Edge Detection

Article Purpose

This article’s objective is to illustrate concepts relating to Compass Edge Detection. The edge detection methods implemented in this article include: Prewitt, Sobel, Scharr, Kirsch and Isotropic.

Wasp: Scharr 3 x 3 x 8

Wasp Scharr 3 x 3 x 8

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

The sample source code accompanying this article includes a Windows Forms based sample application. When using the sample application users are able to load source/input images from and save result images to the local system. The user interface provides a dropdown which contains the supported methods of Compass Edge Detection. Selecting an item from the dropdown results in the related Compass Edge Detection method being applied to the current source/input image. Supported methods are:

  • Prewitt3x3x4 – 3×3 Prewitt kernels in 4 compass directions
  • Prewitt3x3x8 – 3×3 Prewitt kernels in 8 compass directions
  • Prewitt5x5x4 – 5×5 Prewitt kernels in 4 compass directions
  • Sobel3x3x4 – 3×3 Sobel kernels in 4 compass directions
  • Sobel3x3x8 – 3×3 Sobel kernels in 8 compass directions
  • Sobel5x5x4 – 5×5 Sobel kernels in 4 compass directions
  • Scharr3x3x4 – 3×3 Scharr kernels in 4 compass directions
  • Scharr3x3x8 – 3×3 Scharr kernels in 8 compass directions
  • Scharr5x5x4 – 5×5 Scharr kernels in 4 compass directions
  • Kirsch3x3x4 – 3×3 Kirsch kernels in 4 compass directions
  • Kirsch3x3x8 – 3×3 Kirsch kernels in 8 compass directions
  • Isotropic3x3x4 – 3×3 Isotropic kernels in 4 compass directions
  • Isotropic3x3x8 – 3×3 Isotropic kernels in 8 compass directions

The following image is a screenshot of the Compass Edge Detection Sample Application in action:

Compass Edge Detection Sample Application

Bee: Isotropic 3 x 3 x 8

Bee Isotropic 3 x 3 x 8

Compass Edge Detection Overview

Compass Edge Detection as a concept can be explained through the implementation of compass directions. Compass Edge Detection can be implemented through convolution, using multiple kernels, each suited to detecting edges in a specific direction. Often the edge directions implemented are:

  • North
  • North East
  • East
  • South East
  • South
  • South West
  • West
  • North West

Each of the compass directions listed above differs by 45 degrees. Applying a rotation of 45 degrees to an existing direction specific kernel results in a new kernel suited to detecting edges in the next compass direction.

Various kernel types can be implemented in Compass Edge Detection. This article and accompanying sample source code implement the following kernel types: Prewitt, Sobel, Scharr, Kirsch and Isotropic.

Praying Mantis: Sobel 3 x 3 x 8

Praying Mantis Sobel 3 x 3 x 8

The steps required when implementing Compass Edge Detection can be described as follows:

  1. Determine the compass kernels. When a kernel suited to a specific direction is known, the kernels suited to the 7 remaining compass directions can be calculated. Rotating a kernel by 45 degrees around a central axis equates to the kernel suited to the next compass direction. As an example, if the kernel suited to detect edges in a northerly direction were to be rotated clockwise by 45 degrees around a central axis, the result would be a kernel suited to detecting edges in a north easterly direction.
  2. Iterate source image pixels. Every pixel forming part of the source/input image should be iterated, implementing convolution using each of the compass kernels.
  3. Determine the most responsive kernel convolution. After having applied each compass kernel to the pixel currently being iterated, the most responsive compass kernel determines the output value. In other words, after having applied convolution eight times on the same pixel using each compass kernel, the output value should be set to the highest value calculated.
  4. Validate and set output result. Ensure that the highest value returned from convolution does not equate to less than 0 or more than 255. Should a value be less than zero the result should be assigned as zero. In a similar fashion, should a value exceed 255 the result should be assigned as 255. Steps 3 and 4 are expressed as a formula directly following this list.
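
Expressed per colour channel as a formula, with K1 to Kn denoting the compass kernels, I the source/input image and ⊛ indicating convolution, the output value amounts to:

O(x, y) = min(255, max(0, factor × max over i = 1..n of (Ki ⊛ I)(x, y) + bias))

The factor and bias values referenced here correspond to the parameters implemented by the ConvolutionFilter method listed later in this article.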

Prewitt Compass Kernels

Prewitt Compass Kernels
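
For reference, assuming the common textbook formulation, two of the eight Prewitt compass kernels, North and North East (the latter resulting from a single 45 degree rotation), can be expressed as:

         |  1   1   1 |             |  0   1   1 |
  K(N) = |  0   0   0 |     K(NE) = | -1   0   1 |
         | -1  -1  -1 |             | -1  -1   0 |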

LadyBug: Prewitt 3 x 3 x 8

LadyBug Prewitt 3 x 3 x 8

Rotating Convolution Kernels

Convolution kernels can be rotated by implementing a matrix rotation. Repeatedly rotating a kernel by 45 degrees results in calculating 8 kernels, each suited to a different direction. The algorithm implemented when performing a kernel rotation can be expressed as follows:

Rotate Horizontal Algorithm

Rotate Horizontal Algorithm

Rotate Vertical Algorithm

Rotate Vertical Algorithm
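
In plain terms, with xOffset and yOffset indicating the kernel's central element and radians the rotation angle, each kernel element at (x, y) maps to the rotated position:

resultX = round((x - xOffset) × cos(radians) - (y - yOffset) × sin(radians)) + xOffset
resultY = round((x - xOffset) × sin(radians) + (y - yOffset) × cos(radians)) + yOffset

These two expressions correspond directly to the RotateMatrix implementation listed in the following section.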

I’ve published an in-depth article on matrix rotation, available here:

Butterfly: Sobel 3 x 3 x 8

Butterfly Sobel 3 x 3 x 8

Implementing Kernel Rotation

The sample source code defines the RotateMatrix method. This method accepts as parameters a single kernel, defined as a two dimensional array of type double, in addition to the degrees by which the specified kernel should be rotated. The definition as follows:

public static double[, ,] RotateMatrix(double[,] baseKernel,
                                       double degrees)
{
    double[, ,] kernel = new double[(int)(360 / degrees),
        baseKernel.GetLength(0), baseKernel.GetLength(1)];

    int xOffset = baseKernel.GetLength(1) / 2;
    int yOffset = baseKernel.GetLength(0) / 2;

    for (int y = 0; y < baseKernel.GetLength(0); y++)
    {
        for (int x = 0; x < baseKernel.GetLength(1); x++)
        {
            for (int compass = 0; compass <
                kernel.GetLength(0); compass++)
            {
                double radians = compass * degrees *
                                 Math.PI / 180.0;

                int resultX = (int)(Math.Round((x - xOffset) *
                              Math.Cos(radians) - (y - yOffset) *
                              Math.Sin(radians)) + xOffset);

                int resultY = (int)(Math.Round((x - xOffset) *
                              Math.Sin(radians) + (y - yOffset) *
                              Math.Cos(radians)) + yOffset);

                kernel[compass, resultY, resultX] = baseKernel[y, x];
            }
        }
    }

    return kernel;
}
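
As a brief illustration, assuming the RotateMatrix method is accessible in scope, all eight compass kernels can be derived from a single base kernel:

// Base kernel: 3x3 Prewitt, as implemented by the Prewitt3x3x8 property.
double[,] baseKernel = new double[,]
 { { -1,  0,  1, },
   { -1,  0,  1, },
   { -1,  0,  1, }, };

// Rotating in 45 degree increments yields 360 / 45 = 8 kernels.
double[, ,] compassKernels = RotateMatrix(baseKernel, 45);

// The first dimension indexes the compass direction.
Console.WriteLine(compassKernels.GetLength(0));    // Prints 8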

Butterfly: Prewitt 3 x 3 x 8

Butterfly Prewitt 3 x 3 x 8

Implementing Compass Edge Detection

The sample source code defines several kernel properties which are implemented in convolution. The following code snippet provides the definition of all the kernels defined:

public static double[, ,] Prewitt3x3x4 
{
    get 
    {
        double[,] baseKernel = new double[,]  
         { {  -1,  0,  1,  },  
           {  -1,  0,  1,  },  
           {  -1,  0,  1,  }, }; 

        double[, ,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[, ,] Prewitt3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -1,  0,  1,  },
           {  -1,  0,  1,  },
           {  -1,  0,  1,  }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 45);

        return kernel;
    }
}

public static double[, ,] Prewitt5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -2,  -1,  0,  1,  2,  },
           {  -2,  -1,  0,  1,  2,  },
           {  -2,  -1,  0,  1,  2,  },
           {  -2,  -1,  0,  1,  2,  },
           {  -2,  -1,  0,  1,  2,  }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[, ,] Kirsch3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -3,  -3,  5,  },
           {  -3,   0,  5,  },
           {  -3,  -3,  5,  }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[, ,] Kirsch3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -3,  -3,  5,  },
           {  -3,   0,  5,  },
           {  -3,  -3,  5,  }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 45);

        return kernel;
    }
}

public static double[, ,] Sobel3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -1,  0,  1,  },
           {  -2,  0,  2,  },
           {  -1,  0,  1,  }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[, ,] Sobel3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -1,  0,  1,  },
           {  -2,  0,  2,  },
           {  -1,  0,  1,  }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 45);

        return kernel;
    }
}

public static double[, ,] Sobel5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -5,   -4,  0,   4,   5,  },
           {  -8,  -10,  0,  10,   8,  },
           { -10,  -20,  0,  20,  10,  },
           {  -8,  -10,  0,  10,   8,  },
           {  -5,   -4,  0,   4,   5,  }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[, ,] Scharr3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -1,  0,  1,  },
           {  -3,  0,  3,  },
           {  -1,  0,  1,  }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[, ,] Scharr3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -1,  0,  1,  },
           {  -3,  0,  3,  },
           {  -1,  0,  1,  }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 45);

        return kernel;
    }
}

public static double[, ,] Scharr5x5x4
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -1,  -1,  0,  1,  1,  },
           {  -2,  -2,  0,  2,  2,  },
           {  -3,  -6,  0,  6,  3,  },
           {  -2,  -2,  0,  2,  2,  },
           {  -1,  -1,  0,  1,  1,  }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[, ,] Isotropic3x3x4
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -1,             0,  1,             },
           {  -Math.Sqrt(2),  0,  Math.Sqrt(2),  },
           {  -1,             0,  1,             }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 90);

        return kernel;
    }
}

public static double[, ,] Isotropic3x3x8
{
    get
    {
        double[,] baseKernel = new double[,]
         { {  -1,             0,  1,             },
           {  -Math.Sqrt(2),  0,  Math.Sqrt(2),  },
           {  -1,             0,  1,             }, };

        double[, ,] kernel = RotateMatrix(baseKernel, 45);

        return kernel;
    }
}

Notice how each property invokes the RotateMatrix method discussed in the previous section.

Butterfly: Scharr 3 x 3 x 8

Butterfly Scharr 3 x 3 x 8

The CompassEdgeDetectionFilter method is defined as an extension method targeting the Bitmap class. The purpose of this method is to act as a wrapper method encapsulating the technical implementation. The definition as follows:

public static Bitmap CompassEdgeDetectionFilter(this Bitmap sourceBitmap,  
                                    CompassEdgeDetectionType compassType) 
{ 
    Bitmap resultBitmap = null; 

    switch (compassType)
    {
        case CompassEdgeDetectionType.Sobel3x3x4:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Sobel3x3x4, 1.0 / 4.0);
        } break;
        case CompassEdgeDetectionType.Sobel3x3x8:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Sobel3x3x8, 1.0 / 4.0);
        } break;
        case CompassEdgeDetectionType.Sobel5x5x4:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Sobel5x5x4, 1.0 / 84.0);
        } break;
        case CompassEdgeDetectionType.Prewitt3x3x4:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Prewitt3x3x4, 1.0 / 3.0);
        } break;
        case CompassEdgeDetectionType.Prewitt3x3x8:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Prewitt3x3x8, 1.0 / 3.0);
        } break;
        case CompassEdgeDetectionType.Prewitt5x5x4:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Prewitt5x5x4, 1.0 / 15.0);
        } break;
        case CompassEdgeDetectionType.Scharr3x3x4:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Scharr3x3x4, 1.0 / 4.0);
        } break;
        case CompassEdgeDetectionType.Scharr3x3x8:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Scharr3x3x8, 1.0 / 4.0);
        } break;
        case CompassEdgeDetectionType.Scharr5x5x4:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Scharr5x5x4, 1.0 / 21.0);
        } break;
        case CompassEdgeDetectionType.Kirsch3x3x4:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Kirsch3x3x4, 1.0 / 15.0);
        } break;
        case CompassEdgeDetectionType.Kirsch3x3x8:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Kirsch3x3x8, 1.0 / 15.0);
        } break;
        case CompassEdgeDetectionType.Isotropic3x3x4:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Isotropic3x3x4, 1.0 / 3.4);
        } break;
        case CompassEdgeDetectionType.Isotropic3x3x8:
        {
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Isotropic3x3x8, 1.0 / 3.4);
        } break;
    }

    return resultBitmap;
}
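
A minimal usage sketch of the wrapper method, assuming hypothetical file names and the System.Drawing.Imaging namespace:

Bitmap sourceBitmap = new Bitmap("butterfly.png");

// Apply 3x3 Sobel kernels in 8 compass directions.
Bitmap edgeBitmap = sourceBitmap.CompassEdgeDetectionFilter(
                    CompassEdgeDetectionType.Sobel3x3x8);

edgeBitmap.Save("butterfly_sobel_3x3x8.png", ImageFormat.Png);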

Rose: Scharr 3 x 3 x 8

Rose Scharr 3 x 3 x 8

Notice from the code snippet listed above, each case statement invokes the ConvolutionFilter method. This method has been defined as an extension method targeting the Bitmap class. The ConvolutionFilter method performs the actual task of convolution, implementing each kernel passed as a parameter; the highest result value calculated will be determined as the output value. The definition as follows:

private static Bitmap ConvolutionFilter(this Bitmap sourceBitmap,  
                                     double[,,] filterMatrix,  
                                           double factor = 1,  
                                                int bias = 0)  
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, 
                             sourceBitmap.Width, sourceBitmap.Height), 
                                               ImageLockMode.ReadOnly,  
                                         PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double blue = 0.0;
    double green = 0.0;
    double red = 0.0;

    double blueCompass = 0.0;
    double greenCompass = 0.0;
    double redCompass = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY <
        sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
            sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0;
            green = 0;
            red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int compass = 0; compass <
                filterMatrix.GetLength(0); compass++)
            {
                blueCompass = 0.0;
                greenCompass = 0.0;
                redCompass = 0.0;

                for (int filterY = -filterOffset;
                    filterY <= filterOffset; filterY++)
                {
                    for (int filterX = -filterOffset;
                        filterX <= filterOffset; filterX++)
                    {
                        calcOffset = byteOffset + (filterX * 4) +
                                     (filterY * sourceData.Stride);

                        blueCompass += (double)(pixelBuffer[calcOffset]) *
                                       filterMatrix[compass,
                                       filterY + filterOffset,
                                       filterX + filterOffset];

                        greenCompass += (double)(pixelBuffer[calcOffset + 1]) *
                                        filterMatrix[compass,
                                        filterY + filterOffset,
                                        filterX + filterOffset];

                        redCompass += (double)(pixelBuffer[calcOffset + 2]) *
                                      filterMatrix[compass,
                                      filterY + filterOffset,
                                      filterX + filterOffset];
                    }
                }

                // Retain only the most responsive compass direction.
                blue = (blueCompass > blue ? blueCompass : blue);
                green = (greenCompass > green ? greenCompass : green);
                red = (redCompass > red ? redCompass : red);
            }

            blue = factor * blue + bias;
            green = factor * green + bias;
            red = factor * red + bias;

            if (blue > 255) { blue = 255; }
            else if (blue < 0) { blue = 0; }

            if (green > 255) { green = 255; }
            else if (green < 0) { green = 0; }

            if (red > 255) { red = 255; }
            else if (red < 0) { red = 0; }

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Rose: Isotropic 3 x 3 x 8

Rose Isotropic 3 x 3 x 8

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following image files feature as sample images:

The Original Image

Original Image

Butterfly: Isotropic 3 x 3 x 4

Butterfly Isotropic 3 x 3 x 4

Butterfly: Isotropic 3 x 3 x 8

Butterfly Isotropic 3 x 3 x 8

Butterfly: Kirsch 3 x 3 x 4

Butterfly Kirsch 3 x 3 x 4

Butterfly: Kirsch 3 x 3 x 8

Butterfly Kirsch 3 x 3 x 8

Butterfly: Prewitt 3 x 3 x 4

Butterfly Prewitt 3 x 3 x 4

Butterfly: Prewitt 3 x 3 x 8

Butterfly Prewitt 3 x 3 x 8

Butterfly: Prewitt 5 x 5 x 4

Butterfly Prewitt 5 x 5 x 4

Butterfly: Scharr 3 x 3 x 4

Butterfly Scharr 3 x 3 x 4

Butterfly: Scharr 3 x 3 x 8

Butterfly Scharr 3 x 3 x 8

Butterfly: Scharr 5 x 5 x 4

Butterfly Scharr 5 x 5 x 4

Butterfly: Sobel 3 x 3 x 4

Butterfly Sobel 3 x 3 x 4

Butterfly: Sobel 3 x 3 x 8

Butterfly Sobel 3 x 3 x 8

Butterfly: Sobel 5 x 5 x 4

Butterfly Sobel 5 x 5 x 4

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Sharpen Edge Detection

Article Purpose

It is the objective of this article to explore and provide a discussion based on the concept of edge detection through means of image sharpening. Illustrated are various methods of image sharpening and, in addition, a Median Filter implemented in noise reduction.

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

The sample source code accompanying this article includes a Windows Forms based Sample Application. The concepts illustrated throughout this article can easily be tested and replicated by making use of the Sample Application.

The Sample Application exposes seven main areas of functionality:

  • Loading input/source images.
  • Saving image result.
  • Sharpen Filters
  • Median Filter Size
  • Threshold value
  • Grayscale Source
  • Mono Output

When using the Sample application users are able to select input/source images from the local file system by clicking the Load Image button. If desired, users may save result images to the local file system by clicking the Save Image button.

The sample source code and sample application implement various methods of image sharpening. Each method of sharpening results in varying degrees of edge detection. Some methods are more effective than others. The sharpening method being implemented serves as a primary factor influencing results; the effectiveness of the selected method is reliant on the input/source images provided. The sample application implements the following sharpening methods:

  • Sharpen5To4
  • Sharpen7To1
  • Sharpen9To1
  • Sharpen12To1
  • Sharpen24To1
  • Sharpen48To1
  • Sharpen10To8
  • Sharpen11To8
  • Sharpen821

Image noise is regarded as a common problem relating to edge detection. Often noise will be incorrectly detected as forming part of an edge within an image. The sample source code implements a Median Filter in order to counteract image noise. The size/intensity of the Median Filter applied can be specified via the control labelled Median Filter Size.

The Threshold value configured through the sample application’s user interface has a two-fold implementation. In a scenario where output images are created in a black and white format, the Threshold value will be implemented to determine whether a pixel should be either black or white. When output images are created as full colour, the Threshold value will be added to each pixel, acting as a bias value.

In some scenarios edge detection can be achieved more effectively when specifying grayscale format source/input images. The purpose of the option labelled Grayscale Source is to convert source/input images to a grayscale format before implementing image sharpening.

The option labelled Mono Output, when selected, has the effect of producing result images in a black and white format.

The image below is a screenshot of the Sharpen Edge Detection sample application in action:

Sharpen Edge Detection Sample Application

Edge Detection through Image Sharpening

The sample source code performs edge detection on source/input images by means of image sharpening. The steps performed can be broken down to the following items:

  1. If specified, apply a Median Filter to the input/source image. A Median Filter results in smoothing an image. Image noise can be reduced when implementing a Median Filter. Image smoothing/blurring often results in reducing image details/edges. The Median Filter is well suited to smoothing away image noise whilst implementing edge preservation. When performing edge detection the Median Filter functions as an ideal method of reducing image noise whilst not negatively impacting the edge detection task.
  2. If specified, convert the source/input image to grayscale by iterating each pixel that forms part of the image. Each pixel’s colour components are calculated by multiplying by the factor values: Red x 0.3, Green x 0.59, Blue x 0.11.
  3. Using the specified sharpening kernel, iterate each pixel forming part of the source/input image, performing convolution on each pixel colour channel.
  4. If the output has been specified as Mono, the middle pixel value calculated in convolution should be multiplied with the specified factor value. Each colour component should be compared to the specified threshold value and be assigned as either black or white.
  5. If the output has not been specified as Mono, the middle pixel value calculated in convolution should be multiplied with the factor value, to which the threshold/bias value should be added. The value of each colour component will be set to the result of subtracting the calculated convolution/factor/bias value from the pixel’s original colour component value. In other words, perform image sharpening through convolution, applying a factor and bias, and subtract the result from the original source/input image, as expressed in the formula following this list.
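
Per colour channel, with K denoting the selected sharpening kernel, I the source image and ⊛ indicating convolution, the colour (non-Mono) case of step 5 can be summarised as a sketch matching the private SharpenEdgeDetect implementation listed later in this article:

O(x, y) = clamp(I(x, y) - factor × (K ⊛ I)(x, y) + bias, 0, 255)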

Implementing Sharpen Edge Detection

The sample source code achieves edge detection through image sharpening by implementing three methods: MedianFilter and two overloaded methods titled SharpenEdgeDetect.

The MedianFilter method is defined as an extension method targeting the Bitmap class. The definition as follows:

 public static Bitmap MedianFilter(this Bitmap sourceBitmap, 
                                   int matrixSize) 
{ 
     BitmapData sourceData = 
                sourceBitmap.LockBits(new Rectangle(0, 0, 
                sourceBitmap.Width, sourceBitmap.Height), 
                ImageLockMode.ReadOnly, 
                PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    List<int> neighbourPixels = new List<int>();
    byte[] middlePixel;

    for (int offsetY = filterOffset; offsetY <
        sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
            sourceBitmap.Width - filterOffset; offsetX++)
        {
            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            neighbourPixels.Clear();

            for (int filterY = -filterOffset;
                filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset;
                    filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) +
                                 (filterY * sourceData.Stride);

                    neighbourPixels.Add(BitConverter.ToInt32(
                                        pixelBuffer, calcOffset));
                }
            }

            // Sort the neighbourhood and select the median value:
            // the middle element of the sorted list.
            neighbourPixels.Sort();

            middlePixel = BitConverter.GetBytes(
                          neighbourPixels[neighbourPixels.Count / 2]);

            resultBuffer[byteOffset] = middlePixel[0];
            resultBuffer[byteOffset + 1] = middlePixel[1];
            resultBuffer[byteOffset + 2] = middlePixel[2];
            resultBuffer[byteOffset + 3] = middlePixel[3];
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
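
A brief usage sketch of the MedianFilter method follows; the file name is hypothetical and matrix sizes are expected to be odd numbers, 3 equating to a 3×3 neighbourhood:

Bitmap sourceBitmap = new Bitmap("input.png");

// Smooth away image noise using a 3x3 median neighbourhood
// whilst preserving edges.
Bitmap filteredBitmap = sourceBitmap.MedianFilter(3);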

The public implementation of the SharpenEdgeDetect method has the purpose of translating user specified options into the relevant method calls to the private implementation of the SharpenEdgeDetect method. The public implementation of the SharpenEdgeDetect method as follows:

public static Bitmap SharpenEdgeDetect(this Bitmap sourceBitmap, 
                                            SharpenType sharpen, 
                                                   int bias = 0, 
                                         bool grayscale = false, 
                                              bool mono = false, 
                                       int medianFilterSize = 0) 
{ 
    Bitmap resultBitmap = null; 

    if (medianFilterSize == 0)
    {
        resultBitmap = sourceBitmap;
    }
    else
    {
        resultBitmap = sourceBitmap.MedianFilter(medianFilterSize);
    }

    switch (sharpen)
    {
        case SharpenType.Sharpen7To1:
        {
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                Matrix.Sharpen7To1, 1.0, bias, grayscale, mono);
        } break;
        case SharpenType.Sharpen9To1:
        {
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                Matrix.Sharpen9To1, 1.0, bias, grayscale, mono);
        } break;
        case SharpenType.Sharpen12To1:
        {
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                Matrix.Sharpen12To1, 1.0, bias, grayscale, mono);
        } break;
        case SharpenType.Sharpen24To1:
        {
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                Matrix.Sharpen24To1, 1.0, bias, grayscale, mono);
        } break;
        case SharpenType.Sharpen48To1:
        {
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                Matrix.Sharpen48To1, 1.0, bias, grayscale, mono);
        } break;
        case SharpenType.Sharpen5To4:
        {
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                Matrix.Sharpen5To4, 1.0, bias, grayscale, mono);
        } break;
        case SharpenType.Sharpen10To8:
        {
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                Matrix.Sharpen10To8, 1.0, bias, grayscale, mono);
        } break;
        case SharpenType.Sharpen11To8:
        {
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                Matrix.Sharpen11To8, 3.0 / 1.0, bias, grayscale, mono);
        } break;
        case SharpenType.Sharpen821:
        {
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                Matrix.Sharpen821, 8.0 / 1.0, bias, grayscale, mono);
        } break;
    }

    return resultBitmap;
}
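
Tying the configurable options together, the public SharpenEdgeDetect method can be invoked as in the following sketch; the file name and parameter values are illustrative only:

Bitmap sourceBitmap = new Bitmap("input.png");

// Sharpen9To1 kernel, zero threshold/bias, grayscale source,
// mono (black and white) output, 3x3 median filter pre-pass.
Bitmap edgeBitmap = sourceBitmap.SharpenEdgeDetect(
                    SharpenType.Sharpen9To1,
                    bias: 0,
                    grayscale: true,
                    mono: true,
                    medianFilterSize: 3);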

The Matrix class provides the definitions of the static pre-defined kernel values. The definition as follows:

public static class Matrix   
{
    public static double[,] Sharpen7To1 
    {
        get   
        { 
            return new double[,]   
            {  { 1,  1,  1, },  
               { 1, -7,  1, },   
               { 1,  1,  1, }, }; 
        }  
    }  

    public static double[,] Sharpen9To1
    {
        get
        {
            return new double[,]
            { { -1, -1, -1, },
              { -1,  9, -1, },
              { -1, -1, -1, }, };
        }
    }

    public static double[,] Sharpen12To1
    {
        get
        {
            return new double[,]
            { { -1, -1, -1, },
              { -1, 12, -1, },
              { -1, -1, -1, }, };
        }
    }

    public static double[,] Sharpen24To1
    {
        get
        {
            return new double[,]
            { { -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, },
              { -1, -1, 24, -1, -1, },
              { -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, }, };
        }
    }

    public static double[,] Sharpen48To1
    {
        get
        {
            return new double[,]
            { { -1, -1, -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, -1, -1, },
              { -1, -1, -1, 48, -1, -1, -1, },
              { -1, -1, -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, -1, -1, }, };
        }
    }

    public static double[,] Sharpen5To4
    {
        get
        {
            return new double[,]
            { {  0, -1,  0, },
              { -1,  5, -1, },
              {  0, -1,  0, }, };
        }
    }

    public static double[,] Sharpen10To8
    {
        get
        {
            return new double[,]
            { {  0, -2,  0, },
              { -2, 10, -2, },
              {  0, -2,  0, }, };
        }
    }

    public static double[,] Sharpen11To8
    {
        get
        {
            return new double[,]
            { {  0, -2,  0, },
              { -2, 11, -2, },
              {  0, -2,  0, }, };
        }
    }

    public static double[,] Sharpen821
    {
        get
        {
            return new double[,]
            { { -1, -1, -1, -1, -1, },
              { -1,  2,  2,  2, -1, },
              { -1,  2,  8,  2, -1, },
              { -1,  2,  2,  2, -1, },
              { -1, -1, -1, -1, -1, }, };
        }
    }
}
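
A property worth noting: kernels whose elements sum to 1, such as Sharpen5To4, Sharpen7To1 and Sharpen9To1, preserve overall image brightness under straightforward convolution, whereas kernels summing to other values scale brightness accordingly. The following sketch, an illustrative helper not forming part of the sample source code, verifies a kernel's sum:

static double KernelSum(double[,] kernel)
{
    double sum = 0;

    for (int y = 0; y < kernel.GetLength(0); y++)
    {
        for (int x = 0; x < kernel.GetLength(1); x++)
        {
            sum += kernel[y, x];
        }
    }

    return sum;
}

// Prints 1 for the Sharpen9To1 kernel: 9 - 8 = 1.
Console.WriteLine(KernelSum(Matrix.Sharpen9To1));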

The private implementation of the SharpenEdgeDetect method performs edge detection through convolution and then performs image subtraction. The definition as follows:

private static Bitmap SharpenEdgeDetect(this Bitmap sourceBitmap, 
                                          double[,] filterMatrix, 
                                               double factor = 1, 
                                                    int bias = 0, 
                                          bool grayscale = false, 
                                               bool mono = false) 
{ 
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, 
                             sourceBitmap.Width, sourceBitmap.Height), 
                                               ImageLockMode.ReadOnly, 
                                         PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    if (grayscale == true)
    {
        for (int pixel = 0; pixel < pixelBuffer.Length; pixel += 4)
        {
            pixelBuffer[pixel] = (byte)(pixelBuffer[pixel] * 0.11f);
            pixelBuffer[pixel + 1] = (byte)(pixelBuffer[pixel + 1] * 0.59f);
            pixelBuffer[pixel + 2] = (byte)(pixelBuffer[pixel + 2] * 0.3f);
        }
    }

    double blue = 0.0;
    double green = 0.0;
    double red = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY <
        sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
            sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0;
            green = 0;
            red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int filterY = -filterOffset;
                filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset;
                    filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) +
                                 (filterY * sourceData.Stride);

                    blue += (double)(pixelBuffer[calcOffset]) *
                            filterMatrix[filterY + filterOffset,
                                         filterX + filterOffset];

                    green += (double)(pixelBuffer[calcOffset + 1]) *
                             filterMatrix[filterY + filterOffset,
                                          filterX + filterOffset];

                    red += (double)(pixelBuffer[calcOffset + 2]) *
                           filterMatrix[filterY + filterOffset,
                                        filterX + filterOffset];
                }
            }

            if (mono == true)
            {
                // Subtract the convolution result from the original
                // pixel value, then threshold each channel to
                // either black or white.
                blue = pixelBuffer[byteOffset] - factor * blue;
                green = pixelBuffer[byteOffset + 1] - factor * green;
                red = pixelBuffer[byteOffset + 2] - factor * red;

                blue = (blue > bias ? 255 : 0);
                green = (green > bias ? 255 : 0);
                red = (red > bias ? 255 : 0);
            }
            else
            {
                // Subtract the convolution result from the original
                // pixel value, add the bias and clamp to 0..255.
                blue = pixelBuffer[byteOffset] - factor * blue + bias;
                green = pixelBuffer[byteOffset + 1] - factor * green + bias;
                red = pixelBuffer[byteOffset + 2] - factor * red + bias;

                blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
                green = (green > 255 ? 255 : (green < 0 ? 0 : green));
                red = (red > 255 ? 255 : (red < 0 ? 0 : red));
            }

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Sample Images

The sample image used in this article is in the public domain because its copyright has expired. This applies to Australia, the European Union and those countries with a copyright term of life of the author plus 70 years. The original image can be downloaded from Wikipedia.

The Original Image

NovaraExpZoologischeTheilLepidopteraAtlasTaf53

Sharpen5To4, Median 0, Threshold 0

Sharpen5To4 Median 0 Threshold 0

Sharpen5To4, Median 0, Threshold 0, Mono

Sharpen5To4 Median 0 Threshold 0 Mono

Sharpen7To1, Median 0, Threshold 0

Sharpen7To1 Median 0 Threshold 0

Sharpen7To1, Median 0, Threshold 0, Mono

Sharpen7To1 Median 0 Threshold 0 Mono

Sharpen9To1, Median 0, Threshold 0

Sharpen9To1 Median 0 Threshold 0

Sharpen9To1, Median 0, Threshold 0, Mono

Sharpen9To1 Median 0 Threshold 0 Mono

Sharpen10To8, Median 0, Threshold 0

Sharpen10To8 Median 0 Threshold 0

Sharpen10To8, Median 0, Threshold 0, Mono

Sharpen10To8 Median 0 Threshold 0 Mono

Sharpen11To8, Median 0, Threshold 0

Sharpen11To8 Median 0 Threshold 0

Sharpen11To8, Median 0, Threshold 0, Grayscale, Mono

Sharpen11To8 Median 0 Threshold 0 Grayscale Mono

Sharpen12To1, Median 0, Threshold 0

Sharpen12To1 Median 0 Threshold 0

Sharpen12To1, Median 0, Threshold 0, Mono

Sharpen12To1 Median 0 Threshold 0 Mono

Sharpen24To1, Median 0, Threshold 0

Sharpen24To1 Median 0 Threshold 0

Sharpen24To1, Median 0, Threshold 0, Grayscale, Mono

Sharpen24To1 Median 0 Threshold 0 Grayscale Mono

Sharpen24To1, Median 0, Threshold 0, Mono

Sharpen24To1 Median 0 Threshold 0 Mono

Sharpen24To1, Median 0, Threshold 21, Grayscale, Mono

Sharpen24To1 Median 0 Threshold 21 Grayscale Mono

Sharpen48To1, Median 0, Threshold 0

Sharpen48To1 Median 0 Threshold 0

Sharpen48To1, Median 0, Threshold 0, Grayscale, Mono

Sharpen48To1 Median 0 Threshold 0 Grayscale Mono

Sharpen48To1, Median 0, Threshold 0, Mono

Sharpen48To1 Median 0 Threshold 0 Mono

Sharpen48To1, Median 0, Threshold 226, Mono

Sharpen48To1 Median 0 Threshold 226 Mono

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

