Search Results for 'image filters'



C# How to: Stained Glass Image Filter

Article Purpose

This article serves to provide a detailed discussion and implementation of a Stained Glass Image Filter. Primary topics explored include creating Voronoi diagrams and implementing pixel coordinate distance calculations through the Euclidean, Manhattan and Chebyshev methods. In addition, this article explores Gradient Based Edge Detection implementing thresholds.

Zurich: Block Size 15, Factor 4, Euclidean

Zurich Block Size 15 Factor 4 Euclidean

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the Sample Application

This article’s accompanying sample source code includes a Windows Forms based sample application. The sample application provides an implementation of the concepts explored by this article. Concepts discussed can be easily replicated and tested by using the sample application.

Source/input images can be specified from the local system when clicking the Load Image button. Additionally users also have the option to save resulting filtered images by clicking the Save Image button.

The sample application through its user interface allows a user to specify several filter configuration options. Two main categories of configuration options have been defined as Block Properties and Edge Properties.

Block Properties relate to the process of rendering a Voronoi diagram. The following configuration options have been implemented:

  • Block Size – During the process of rendering a Voronoi diagram, regions or blocks of equal shape and size have to be defined. These uniform regions/blocks form the basis of rendering uniquely shaped regions later on. The Block Size option determines the width and height of an individual region/block. Larger values result in larger non-uniform regions being rendered; smaller values in return result in smaller non-uniform regions being rendered.
  • Distance Factor – The Distance Factor option determines the precision with which a pixel’s containing region will be calculated. Possible values range from 1 to 4 inclusive. A Distance Factor value of 4 equates to precise calculation of a pixel’s containing region, whereas a value of 1 results in containing regions often registering pixels that should be part of a neighbouring region. Values closer to 4 result in more varied region shapes; values closer to 1 result in regions being rendered having more of a uniform shape/pattern.
  • Distance Formula – The distance between a pixel’s coordinates and a region’s randomly generated point determines whether that pixel should be considered part of the region. The sample application implements three different methods of calculating pixel distance: the Euclidean, Manhattan and Chebyshev methods. Each results in region shapes being rendered differently.

Salzburg: Block Size 20, Factor 1, Chebyshev, Edge Threshold 2 

Salzburg Block Size 20 Factor 1 Chebyshev Edge Threshold 2

Edge Properties relate to the implementation of Image Gradient Based Edge Detection. Edge detection is an optional filter and can be enabled/disabled through the user interface. The implementation of edge detection serves to highlight/outline regions rendered as part of a Voronoi diagram. The configuration options implemented are:

  • Highlight Edges – A Boolean value indicating whether or not edge detection should be applied.
  • Threshold – In calculating image gradients a threshold value determines whether a pixel forms part of an edge. Higher threshold values result in fewer edges being expressed; lower threshold values result in more edges being expressed.
  • Colour – If a pixel has been determined as forming part of an edge, the resulting pixel colour will be determined by the colour value specified by the user.

The following image is a screenshot of the Stained Glass Image Filter sample application in action:

Stained Glass Image Filter Sample Application 

Locarno: Block Size 10, Factor 4, Euclidean

Locarno Block Size 10 Factor 4 Euclidean

Stained Glass

The Stained Glass Image Filter detailed in this article operates on the basis of implementing modifications upon a specified sample/input image, producing resulting images which resemble the appearance of stained glass artwork.

A common variant of stained glass artwork comes in the form of several individual pieces of coloured glass being combined in order to create an image. The sample source code employs a similar method, combining what appear to be non-uniform puzzle pieces. The following list provides a broad overview of the steps involved in applying a Stained Glass Image Filter:

  1. Render a Voronoi Diagram – Through rendering a Voronoi diagram the resulting image will be divided into a number of regions, each region being intended to represent an individual glass puzzle piece. The following section of this article provides a detailed discussion on rendering Voronoi diagrams.
  2. Assign each Pixel to a Voronoi Diagram Region – Each pixel forming part of the source/input image should be iterated. Whilst iterating pixels, determine the region to which a pixel should be associated. A pixel should be associated to the region whose randomly generated point has been determined the nearest to the pixel. In a following section of this article a detailed discussion regarding Pixel Coordinate Distance Calculations can be found.
  3. Determine each Region’s Colour Mean – Each region will only express a single colour value. A region’s colour equates to the average colour as expressed by all the pixels forming part of that region. Once the average colour value of a region has been determined every pixel forming part of that region should be set to the average colour.
  4. Implement Edge Detection – If the user configuration option indicates that edge detection should be implemented, apply Gradient Based Edge Detection. This method of edge detection is discussed in detail in a following section of this article.

Bad Ragaz: Block Size 10, Factor 1, Manhattan 

Bad Ragaz Block Size 10 Factor 1 Manhattan

Voronoi Diagrams

Voronoi diagrams represent a fairly uncomplicated concept. In contrast, the implementation of Voronoi diagrams proves somewhat more of a challenge. From Wikipedia we gain the following definition:

In mathematics, a Voronoi diagram is a way of dividing space into a number of regions. A set of points (called seeds, sites, or generators) is specified beforehand and for each seed there will be a corresponding region consisting of all points closer to that seed than to any other. The regions are called Voronoi cells. It is dual to the Delaunay triangulation.

In this article Voronoi diagrams are generated resulting in regions expressing random shapes. Although region shapes are randomly generated, the parameters or ranges within which random values are selected are fixed/constant. The steps required in generating a Voronoi diagram can be detailed as follows:

  1. Define fixed size square regions – By making use of the user specified Block/Region Size value, group pixels together into square regions.
  2. Determine a Seed Value for Random number generation – Determine the sum total of pixel colour components of all the pixels forming part of a square region. The colour sum total value should be used as a seed value when generating random numbers in the next step.
  3. Determine a Random XY coordinate within each square region – Generate two random numbers, specifying each region’s coordinate boundaries as minimum and maximum boundaries in generating random numbers. Keep record of every new randomly generated XY-Coordinate value.
  4. Associate Pixels and Regions – A pixel should be associated to the Random Coordinate point nearest to that pixel. Determine the Random Coordinate nearest to each pixel in the source/input image. The method implemented in calculating coordinate distance depends on the configuration value specified by the user.
  5. Set Region Colours – Each pixel forming part of the same region should be set to the same colour. The colour assigned to a region’s pixels will be determined by the average colour value of the region’s pixels.

The following image illustrates an example Voronoi diagram consisting of 10 regions:

2Ddim-L2norm-10site

Port Edward: Block Size 10, Factor 1, Chebyshev, Edge Threshold 2

Port Edward Block Size 10 Factor 1 Chebyshev Edge Threshold 2

Calculating Pixel Coordinate Distances

The sample source code provides three different coordinate distance calculation methods: Euclidean, Manhattan and Chebyshev. A pixel’s nearest randomly generated coordinate depends on the distance between that pixel and the random coordinate. Each method of calculating distance in most instances is likely to produce different output values, which in turn influences the region to which a pixel will be associated.

The most common method of distance calculation, Euclidean distance, has been described by Wikipedia as follows:

In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" distance between two points that one would measure with a ruler, and is given by the Pythagorean formula. By using this formula as distance, Euclidean space (or even any inner product space) becomes a metric space. The associated norm is called the Euclidean norm. Older literature refers to the metric as Pythagorean metric.

When calculating Euclidean distance the algorithm implemented can be expressed as follows:

Euclidean Distance Algorithm
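Expressed in conventional notation, the distance computed by the CalculateDistanceEuclidean method listed later, between a pixel coordinate (x1, y1) and a random point (x2, y2), equates to:

$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$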

Zurich: Block Size 10, Factor 1, Euclidean

Zurich Block Size 10 Factor 1 Euclidean

As an alternative to calculating Euclidean distance, the sample source code also implements Manhattan distance calculation. Often Manhattan distance calculation will be referred to as taxicab distance, rectilinear distance or city block distance. From Wikipedia we gain the following definition:

Taxicab geometry, considered by Hermann Minkowski in the 19th century, is a form of geometry in which the usual distance function or metric of Euclidean geometry is replaced by a new metric in which the distance between two points is the sum of the absolute differences of their coordinates. The taxicab metric is also known as rectilinear distance, L1 distance or ℓ1 norm (see Lp space), city block distance, Manhattan distance, or Manhattan length, with corresponding variations in the name of the geometry.[1] The latter names allude to the grid layout of most streets on the island of Manhattan, which causes the shortest path a car could take between two intersections in the borough to have length equal to the intersections’ distance in taxicab geometry.

When calculating Manhattan distance the algorithm implemented can be expressed as follows:

Manhattan Distance Algorithm
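In conventional notation, matching the CalculateDistanceManhattan method listed later, the distance equates to the sum of the absolute coordinate differences:

$d = |x_1 - x_2| + |y_1 - y_2|$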

Port Edward: Block Size 10, Factor 4, Euclidean

Port Edward Block Size 10 Factor 4 Euclidean

Chebyshev distance is a distance algorithm resembling the way in which a king chess piece may move on a chessboard. The following definition we gain from Wikipedia:

In mathematics, Chebyshev distance (or Tchebychev distance), Maximum metric, or L∞ metric[1] is a metric defined on a vector space where the distance between two vectors is the greatest of their differences along any coordinate dimension.[2] It is named after Pafnuty Chebyshev.

It is also known as chessboard distance, since in the game of chess the minimum number of moves needed by a king to go from one square on a chessboard to another equals the Chebyshev distance between the centers of the squares, if the squares have side length one, as represented in 2-D spatial coordinates with axes aligned to the edges of the board.[3] For example, the Chebyshev distance between f6 and e2 equals 4.

When calculating Chebyshev distance the algorithm implemented can be expressed as follows:

Chebyshev Distance Algorithm
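In conventional notation, matching the CalculateDistanceChebyshev method listed later, the distance equates to the greatest of the absolute coordinate differences:

$d = \max(|x_1 - x_2|, |y_1 - y_2|)$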

Salzburg: Block Size 20, Factor 1, Chebyshev

Salzburg Block Size 20 Factor 1 Chebyshev

Gradient Based Edge Detection

Various methods of edge detection can easily be implemented in C#. Each method of edge detection provides a set of benefits, usually weighed against a set of trade-offs. In this article and the accompanying sample source code the Gradient Based Edge Detection method has been implemented.

Take into regard that every region within the rendered Voronoi diagram will only express a single colour, although most regions differ in the single colour they express. Once all pixels have been associated to a region and all pixel colour values have been updated, the resulting image defines mostly clearly distinguishable colour gradients. A method of Gradient Based Edge Detection performs efficiently at detecting the edges defined between different regions.

An image gradient can be considered a difference in colour intensity relating to a specific direction. Only once all tasks related to applying the Stained Glass Filter have been completed should Gradient Based Edge Detection be applied. The steps involved in applying Gradient Based Edge Detection can be described as follows:

  1. Iterate each pixel – Each pixel forming part of a source/input image should be iterated.
  2. Determine Horizontal and Vertical Gradients – Calculate the colour value difference between the currently iterated pixel’s left and right neighbouring pixels as well as its top and bottom neighbouring pixels. If the combined gradient exceeds the specified threshold continue to step 8.
  3. Determine Horizontal Gradient – Calculate the colour value difference between the currently iterated pixel’s left and right neighbouring pixels. If the gradient exceeds the specified threshold continue to step 8.
  4. Determine Vertical Gradient – Calculate the colour value difference between the currently iterated pixel’s top and bottom neighbouring pixels. If the gradient exceeds the specified threshold continue to step 8.
  5. Determine Diagonal Gradients – Calculate the colour value difference between the currently iterated pixel’s North-Western and South-Eastern neighbouring pixels as well as its North-Eastern and South-Western neighbouring pixels. If the combined gradient exceeds the specified threshold continue to step 8.
  6. Determine NW-SE Gradient – Calculate the colour value difference between the currently iterated pixel’s North-Western and South-Eastern neighbouring pixels. If the gradient exceeds the specified threshold continue to step 8.
  7. Determine NE-SW Gradient – Calculate the colour value difference between the currently iterated pixel’s North-Eastern and South-Western neighbouring pixels.
  8. Determine and set result pixel value – If any of the six gradients calculated exceeds the specified threshold value, set the related pixel in the resulting image to the Edge Colour specified by the user; if not, set the related pixel equal to the source pixel colour value.

Zurich: Block Size 10, Factor 4, Chebyshev

Zurich Block Size 10 Factor 4 Chebyshev 

Implementing a Stained Glass Image Filter

The sample source code defines two helper classes, both implemented when applying the Stained Glass Image Filter. The Pixel class represents a single pixel in terms of an XY-Coordinate and Red, Green and Blue values. The definition as follows:

public class Pixel
{
    private int xOffset = 0;
    public int XOffset
    {
        get { return xOffset; }
        set { xOffset = value; }
    }

    private int yOffset = 0;
    public int YOffset
    {
        get { return yOffset; }
        set { yOffset = value; }
    }

    private byte blue = 0;
    public byte Blue
    {
        get { return blue; }
        set { blue = value; }
    }

    private byte green = 0;
    public byte Green
    {
        get { return green; }
        set { green = value; }
    }

    private byte red = 0;
    public byte Red
    {
        get { return red; }
        set { red = value; }
    }
}

Zurich: Block Size 10, Factor 1, Chebyshev, Edge Threshold 1

Zurich Block Size 10 Factor 1 Chebyshev Edge Threshold 1

The VoronoiPoint class serves as a method of recording randomly generated coordinates and referencing a region’s associated pixels. The definition as follows:

public class VoronoiPoint
{
    private int xOffset = 0;
    public int XOffset
    {
        get { return xOffset; }
        set { xOffset = value; }
    }

    private int yOffset = 0;
    public int YOffset
    {
        get { return yOffset; }
        set { yOffset = value; }
    }

    private int blueTotal = 0;
    public int BlueTotal
    {
        get { return blueTotal; }
        set { blueTotal = value; }
    }

    private int greenTotal = 0;
    public int GreenTotal
    {
        get { return greenTotal; }
        set { greenTotal = value; }
    }

    private int redTotal = 0;
    public int RedTotal
    {
        get { return redTotal; }
        set { redTotal = value; }
    }

    private int blueAverage = 0;
    public int BlueAverage
    {
        get { return blueAverage; }
    }

    private int greenAverage = 0;
    public int GreenAverage
    {
        get { return greenAverage; }
    }

    private int redAverage = 0;
    public int RedAverage
    {
        get { return redAverage; }
    }

    private List<Pixel> pixelCollection = new List<Pixel>();
    public List<Pixel> PixelCollection
    {
        get { return pixelCollection; }
    }

    // Accumulates colour totals as pixels are associated to this region.
    public void AddPixel(Pixel pixel)
    {
        blueTotal += pixel.Blue;
        greenTotal += pixel.Green;
        redTotal += pixel.Red;

        pixelCollection.Add(pixel);
    }

    // Calculates the region's mean colour from the accumulated totals.
    public void CalculateAverages()
    {
        if (pixelCollection.Count > 0)
        {
            blueAverage = blueTotal / pixelCollection.Count;
            greenAverage = greenTotal / pixelCollection.Count;
            redAverage = redTotal / pixelCollection.Count;
        }
    }
}

Zurich: Block Size 20, Factor 1, Euclidean, Edge Threshold 1

Zurich Block Size 20 Factor 1 Euclidean Edge Threshold 1

From the perspective of a filter implementation code base the only requirement comes in the form of having to invoke the StainedGlassColorFilter method; no additional work is required from external code consumers. The StainedGlassColorFilter method has been defined as an extension method targeting the Bitmap class. The StainedGlassColorFilter method definition as follows:

public static Bitmap StainedGlassColorFilter(this Bitmap sourceBitmap,  
                                             int blockSize, double blockFactor, 
                                             DistanceFormulaType distanceType, 
                                             bool highlightEdges,  
                                             byte edgeThreshold, Color edgeColor) 
{
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int neighbourhoodTotal = 0;
    int sourceOffset = 0;
    int resultOffset = 0;
    int currentPixelDistance = 0;
    int nearestPixelDistance = 0;
    int nearestPointIndex = 0;

    Random randomizer = new Random();
    List<VoronoiPoint> randomPointList = new List<VoronoiPoint>();

    // Determine a random point within each fixed size square block,
    // seeding the random number generator with the block's colour sum total.
    for (int row = 0; row < sourceBitmap.Height - blockSize; row += blockSize)
    {
        for (int col = 0; col < sourceBitmap.Width - blockSize; col += blockSize)
        {
            sourceOffset = row * sourceData.Stride + col * 4;
            neighbourhoodTotal = 0;

            for (int y = 0; y < blockSize; y++)
            {
                for (int x = 0; x < blockSize; x++)
                {
                    resultOffset = sourceOffset + y * sourceData.Stride + x * 4;
                    neighbourhoodTotal += pixelBuffer[resultOffset];
                    neighbourhoodTotal += pixelBuffer[resultOffset + 1];
                    neighbourhoodTotal += pixelBuffer[resultOffset + 2];
                }
            }

            randomizer = new Random(neighbourhoodTotal);

            VoronoiPoint randomPoint = new VoronoiPoint();
            randomPoint.XOffset = randomizer.Next(0, blockSize) + col;
            randomPoint.YOffset = randomizer.Next(0, blockSize) + row;

            randomPointList.Add(randomPoint);
        }
    }

    int rowOffset = 0;
    int colOffset = 0;

    // Associate each pixel with its nearest random point.
    for (int bufferOffset = 0; bufferOffset < pixelBuffer.Length - 4; bufferOffset += 4)
    {
        rowOffset = bufferOffset / sourceData.Stride;
        colOffset = (bufferOffset % sourceData.Stride) / 4;

        currentPixelDistance = 0;
        nearestPixelDistance = blockSize * 4;
        nearestPointIndex = 0;

        // Only consider points within two block sizes vertically.
        List<VoronoiPoint> pointSubset = new List<VoronoiPoint>();
        pointSubset.AddRange(from t in randomPointList
                             where rowOffset >= t.YOffset - blockSize * 2 &&
                                   rowOffset <= t.YOffset + blockSize * 2
                             select t);

        for (int k = 0; k < pointSubset.Count; k++)
        {
            if (distanceType == DistanceFormulaType.Euclidean)
            {
                currentPixelDistance =
                    CalculateDistanceEuclidean(pointSubset[k].XOffset, colOffset,
                                               pointSubset[k].YOffset, rowOffset);
            }
            else if (distanceType == DistanceFormulaType.Manhattan)
            {
                currentPixelDistance =
                    CalculateDistanceManhattan(pointSubset[k].XOffset, colOffset,
                                               pointSubset[k].YOffset, rowOffset);
            }
            else if (distanceType == DistanceFormulaType.Chebyshev)
            {
                currentPixelDistance =
                    CalculateDistanceChebyshev(pointSubset[k].XOffset, colOffset,
                                               pointSubset[k].YOffset, rowOffset);
            }

            if (currentPixelDistance <= nearestPixelDistance)
            {
                nearestPixelDistance = currentPixelDistance;
                nearestPointIndex = k;

                // The Distance Factor determines how precisely a point has to
                // match before the search is cut short.
                if (nearestPixelDistance <= blockSize / blockFactor)
                {
                    break;
                }
            }
        }

        Pixel tmpPixel = new Pixel();
        tmpPixel.XOffset = colOffset;
        tmpPixel.YOffset = rowOffset;
        tmpPixel.Blue = pixelBuffer[bufferOffset];
        tmpPixel.Green = pixelBuffer[bufferOffset + 1];
        tmpPixel.Red = pixelBuffer[bufferOffset + 2];

        pointSubset[nearestPointIndex].AddPixel(tmpPixel);
    }

    // Set every pixel within a region to the region's average colour.
    for (int k = 0; k < randomPointList.Count; k++)
    {
        randomPointList[k].CalculateAverages();

        for (int i = 0; i < randomPointList[k].PixelCollection.Count; i++)
        {
            resultOffset = randomPointList[k].PixelCollection[i].YOffset *
                           sourceData.Stride +
                           randomPointList[k].PixelCollection[i].XOffset * 4;

            resultBuffer[resultOffset] = (byte)randomPointList[k].BlueAverage;
            resultBuffer[resultOffset + 1] = (byte)randomPointList[k].GreenAverage;
            resultBuffer[resultOffset + 2] = (byte)randomPointList[k].RedAverage;
            resultBuffer[resultOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    // Optionally outline the rendered regions.
    if (highlightEdges == true)
    {
        resultBitmap = resultBitmap.GradientBasedEdgeDetectionFilter(edgeColor, edgeThreshold);
    }

    return resultBitmap;
}
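The following is a minimal usage example; the file names serve only as placeholders:

Bitmap inputImage = new Bitmap("input.png");

// Block size 15, distance factor 4, Euclidean distance, with
// black edge highlighting at threshold 2.
Bitmap stainedGlassImage = inputImage.StainedGlassColorFilter(
    15, 4, DistanceFormulaType.Euclidean, true, 2, Color.Black);

stainedGlassImage.Save("stainedglass.png");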

Locarno: Block Size 10, Factor 4, Euclidean, Edge Threshold 1

Locarno Block Size 10 Factor 4 Euclidean Edge Threshold 1

Implementing Pixel Coordinate Distance Calculations

As mentioned earlier, this article and the accompanying sample source code support coordinate distance calculations through three different calculation methods, namely Euclidean, Manhattan and Chebyshev. The method of distance calculation implemented depends on the configuration option specified by the user.

The CalculateDistanceEuclidean method calculates distance implementing the Euclidean distance calculation method. In order to aid faster execution this method will calculate the square root of a specific value only once; once a square root has been calculated the result is kept in memory. The following code snippet lists the definition of the CalculateDistanceEuclidean method:

private static Dictionary<int, int> squareRoots = new Dictionary<int, int>();

private static int CalculateDistanceEuclidean(int x1, int x2, int y1, int y2)
{
    int square = (x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2);

    if (squareRoots.ContainsKey(square) == false)
    {
        squareRoots.Add(square, (int)Math.Sqrt(square));
    }

    return squareRoots[square];
}

The two other methods of calculating distance are implemented through the CalculateDistanceManhattan and CalculateDistanceChebyshev methods. The definition as follows:

private static int CalculateDistanceManhattan(int x1, int x2, int y1, int y2) 
{
    return Math.Abs(x1 - x2) + Math.Abs(y1 - y2); 
}

private static int CalculateDistanceChebyshev(int x1, int x2, int y1, int y2)
{
    return Math.Max(Math.Abs(x1 - x2), Math.Abs(y1 - y2));
}

Bad Ragaz: Block Size 12, Factor 1, Chebyshev

Bad Ragaz Block Size 12 Factor 1 Chebyshev

Implementing Gradient Based Edge Detection

Notice that the very last step performed by the StainedGlassColorFilter method involves implementing Gradient Based Edge Detection, depending on whether edge highlighting had been specified by the user.

The following code snippet provides the implementation of the GradientBasedEdgeDetectionFilter extension method:

public static Bitmap GradientBasedEdgeDetectionFilter( 
                this Bitmap sourceBitmap, 
                Color edgeColour, 
                byte threshold = 0) 
{
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle (0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int sourceOffset = 0, gradientValue = 0;
    bool exceedsThreshold = false;

    for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++)
    {
        for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++)
        {
            sourceOffset = offsetY * sourceData.Stride + offsetX * 4;
            gradientValue = 0;
            exceedsThreshold = true;

            // Horizontal Gradient
            CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4,
                           ref gradientValue, threshold, 2);
            // Vertical Gradient
            exceedsThreshold = CheckThreshold(pixelBuffer,
                           sourceOffset - sourceData.Stride,
                           sourceOffset + sourceData.Stride,
                           ref gradientValue, threshold, 2);

            if (exceedsThreshold == false)
            {
                gradientValue = 0;

                // Horizontal Gradient
                exceedsThreshold = CheckThreshold(pixelBuffer,
                               sourceOffset - 4, sourceOffset + 4,
                               ref gradientValue, threshold);

                if (exceedsThreshold == false)
                {
                    gradientValue = 0;

                    // Vertical Gradient
                    exceedsThreshold = CheckThreshold(pixelBuffer,
                                   sourceOffset - sourceData.Stride,
                                   sourceOffset + sourceData.Stride,
                                   ref gradientValue, threshold);

                    if (exceedsThreshold == false)
                    {
                        gradientValue = 0;

                        // Diagonal Gradient : NW-SE
                        CheckThreshold(pixelBuffer,
                                   sourceOffset - 4 - sourceData.Stride,
                                   sourceOffset + 4 + sourceData.Stride,
                                   ref gradientValue, threshold, 2);
                        // Diagonal Gradient : NE-SW
                        exceedsThreshold = CheckThreshold(pixelBuffer,
                                   sourceOffset - sourceData.Stride + 4,
                                   sourceOffset - 4 + sourceData.Stride,
                                   ref gradientValue, threshold, 2);

                        if (exceedsThreshold == false)
                        {
                            gradientValue = 0;

                            // Diagonal Gradient : NW-SE
                            exceedsThreshold = CheckThreshold(pixelBuffer,
                                       sourceOffset - 4 - sourceData.Stride,
                                       sourceOffset + 4 + sourceData.Stride,
                                       ref gradientValue, threshold);

                            if (exceedsThreshold == false)
                            {
                                gradientValue = 0;

                                // Diagonal Gradient : NE-SW
                                exceedsThreshold = CheckThreshold(pixelBuffer,
                                           sourceOffset - sourceData.Stride + 4,
                                           sourceOffset + sourceData.Stride - 4,
                                           ref gradientValue, threshold);
                            }
                        }
                    }
                }
            }

            if (exceedsThreshold == true)
            {
                resultBuffer[sourceOffset] = edgeColour.B;
                resultBuffer[sourceOffset + 1] = edgeColour.G;
                resultBuffer[sourceOffset + 2] = edgeColour.R;
            }

            resultBuffer[sourceOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
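Note that the CheckThreshold helper method invoked above does not form part of this extract. A minimal sketch consistent with the manner in which it is invoked, assuming the gradient accumulates the absolute per-channel differences between the two specified pixels, could be expressed as follows:

private static bool CheckThreshold(byte[] pixelBuffer,
                                   int offset1, int offset2,
                                   ref int gradientValue,
                                   byte threshold,
                                   int divideBy = 1)
{
    // Accumulate the absolute difference of the Blue, Green
    // and Red colour components of the two specified pixels.
    gradientValue += Math.Abs(pixelBuffer[offset1] - pixelBuffer[offset2]);
    gradientValue += Math.Abs(pixelBuffer[offset1 + 1] - pixelBuffer[offset2 + 1]);
    gradientValue += Math.Abs(pixelBuffer[offset1 + 2] - pixelBuffer[offset2 + 2]);

    // The divisor compensates when two directions are being
    // combined into a single gradient value.
    return (gradientValue / divideBy) >= threshold;
}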

Zurich: Block Size 15, Factor 1, Manhattan, Edge Threshold 1

Zurich Block Size 15 Factor 1 Manhattan Edge Threshold 1

Sample Images

This article features a rendered graphic illustrating an example Voronoi diagram which has been released into the public domain by its author, Augochy, at the Wikipedia project. This applies worldwide. The original image can be downloaded from Wikipedia.

All of the photos that appear in this article were taken by myself. Photos listed under Zurich, Locarno and Bad Ragaz were shot in Switzerland. The photo listed as Salzburg was shot in Austria and the photo listed under Port Edward was shot in South Africa. In order to fully realize the extent to which the photos had been modified, the following section details the original photos.

Zurich, Switzerland

Zurich, Switzerland

Salzburg, Austria

Salzburg, Austria

Locarno, Switzerland

Locarno, Switzerland

Bad Ragaz, Switzerland

Bad Ragaz, Switzerland

Port Edward, South Africa

Port Edward, South Africa

Zurich, Switzerland

Zurich, Switzerland

Zurich, Switzerland

Zurich, Switzerland

Zurich, Switzerland

Zurich, Switzerland

Bad Ragaz, Switzerland

Bad Ragaz, Switzerland

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Image Transform Shear

Article Purpose

This article is focussed on illustrating the steps required in performing an Image Shear Transformation. All of the concepts explored have been implemented by means of raw pixel data processing; no conventional drawing methods, such as GDI, are required.

Rabbit: Shear X 0.4, Y 0.4

Rabbit Shear X 0.4, Y 0.4

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

This article features a Windows Forms based sample application which is included as part of the accompanying sample source code. The concepts explored in this article can be illustrated in a practical implementation using the sample application.

The sample application enables a user to load source/input images from the local system when clicking the Load Image button. In addition users are also able to save output result images to the local file system by clicking the Save Image button.

Image shearing can be applied to either X or Y, or both X and Y pixel coordinates. When using the sample application the user has the option of adjusting shear factors, as indicated on the user interface by the numeric up/down controls labelled Shear X and Shear Y.

The following image is a screenshot of the Image Transform Shear Sample Application in action:

Image Transform Shear Sample Application

Rabbit: Shear X -0.5, Y -0.25

Rabbit Shear X -0.5, Y -0.25

Image Shear Transformation

A good definition of the term shear mapping can be found on Wikipedia:

In plane geometry, a shear mapping is a linear map that displaces each point in a fixed direction, by an amount proportional to its signed distance from a line that is parallel to that direction.[1] This type of mapping is also called shear transformation, transvection, or just shearing

A shear transformation can be applied as a horizontal shear, a vertical shear or as both. The algorithms implemented when performing a shear transformation can be expressed as follows:

Horizontal Shear Algorithm

Horizontal Shear Algorithm

Vertical Shear Algorithm

Vertical Shear Algorithm
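Matching the ShearXY and ShearImage implementations shown later, with the middle pixel offsets included and where σx and σy denote the per-plane shear factors, the two mappings can be written as:

$Shear(x) = x + \sigma_x y - \frac{W \sigma_x}{2}$

$Shear(y) = y + \sigma_y x - \frac{H \sigma_y}{2}$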

The algorithm description:

  • Shear(x) : The result of a horizontal shear – the calculated X-Coordinate representing a sheared pixel coordinate.
  • Shear(y) : The result of a vertical shear – the calculated Y-Coordinate representing a sheared pixel coordinate.
  • σ : The lower case version of the Greek alphabet letter Sigma – represents the Shear Factor.
  • x : The X-Coordinate originating from the source/input image – the horizontal coordinate value intended to be sheared.
  • y : The Y-Coordinate originating from the source/input image – the vertical coordinate value intended to be sheared.
  • H : Source image height in pixels.
  • W : Source image width in pixels.

Note: When performing a shear transformation implementing both the horizontal and vertical planes, each coordinate plane can be calculated using a different shearing factor.

The algorithms have been adapted in order to implement a middle pixel offset, achieved by subtracting half of the product of the related plane boundary and the specified shearing factor.

Rabbit: Shear X 1.0, Y 0.1

Rabbit Shear X 1.0, Y 0.1

Implementing a Shear Transformation

The sample source code performs shear transformations through the implementation of the ShearXY and ShearImage extension methods.

The ShearXY extension method targets the Point structure. The algorithms discussed in the previous sections have been implemented in this function from a C# perspective. The definition as illustrated by the following code snippet:

public static Point ShearXY(this Point source, double shearX, 
                                               double shearY, 
                                               int offsetX,  
                                               int offsetY) 
{
    Point result = new Point(); 

    result.X = (int)(Math.Round(source.X + shearX * source.Y));
    result.X -= offsetX;

    result.Y = (int)(Math.Round(source.Y + shearY * source.X));
    result.Y -= offsetY;

    return result;
}

Rabbit: Shear X 0.0, Y 0.5

Rabbit Shear X 0.0, Y 0.5

The ShearImage extension method targets the Bitmap class. This method expects as parameter values a horizontal and a vertical shearing factor. Providing a shearing factor of zero results in no shearing being implemented in the corresponding direction. The definition as follows:

public static Bitmap ShearImage(this Bitmap sourceBitmap, 
                               double shearX, 
                               double shearY) 
{ 
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    // Calculate middle pixel offsets so shearing occurs around the image middle.
    int xOffset = (int)Math.Round(sourceBitmap.Width * shearX / 2.0);
    int yOffset = (int)Math.Round(sourceBitmap.Height * shearY / 2.0);

    int sourceXY = 0;
    int resultXY = 0;

    Point sourcePoint = new Point();
    Point resultPoint = new Point();

    Rectangle imageBounds = new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height);

    for (int row = 0; row < sourceBitmap.Height; row++)
    {
        for (int col = 0; col < sourceBitmap.Width; col++)
        {
            sourceXY = row * sourceData.Stride + col * 4;

            sourcePoint.X = col;
            sourcePoint.Y = row;

            if (sourceXY >= 0 && sourceXY + 3 < pixelBuffer.Length)
            {
                resultPoint = sourcePoint.ShearXY(shearX, shearY, xOffset, yOffset);
                resultXY = resultPoint.Y * sourceData.Stride + resultPoint.X * 4;

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    // The source pixel is also copied to the result pixel's
                    // left and right neighbours, reducing gaps produced by
                    // rounding the calculated coordinates.
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 4] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 5] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 6] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    if (resultXY - 4 >= 0)
                    {
                        resultBuffer[resultXY - 4] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY - 3] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY - 2] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY - 1] = 255;
                    }

                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 1] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 2] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }
            }
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
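The following is a minimal usage example; the file names serve only as placeholders:

Bitmap inputImage = new Bitmap("input.png");

// Shear by a factor of 0.5 on the X plane and 0.25 on the Y plane.
Bitmap shearedImage = inputImage.ShearImage(0.5, 0.25);

shearedImage.Save("sheared.png");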

Rabbit: Shear X 0.5, Y 0.0

Rabbit Shear X 0.5, Y 0.0

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction.

The sample images featuring the image of a Desert Cottontail Rabbit are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia. The original author is attributed as Larry D. Moore.

The sample images featuring the image of a Rabbit in Snow are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia. The original author is attributed as George Tuli.

The sample images featuring the image of an Eastern Cottontail Rabbit have been released into the public domain by their author. The original image can be downloaded from .

The sample images featuring the image of a Mountain Cottontail Rabbit are in the public domain in the United States because the original is a work prepared by an officer or employee of the United States Government as part of that person’s official duties under the terms of Title 17, Chapter 1, Section 105 of the US Code. The original image can be downloaded from .

Rabbit: Shear X 1.0, Y 0.0

Rabbit Shear X 1.0, Y 0.0

Rabbit: Shear X 0.5, Y 0.1

Rabbit Shear X 0.5, Y 0.1

Rabbit: Shear X -0.5, Y -0.25

Rabbit Shear X -0.5, Y -0.25

Rabbit: Shear X -0.5, Y 0.0

Rabbit Shear X -0.5, Y 0.0

Rabbit: Shear X 0.25, Y 0.0

Rabbit Shear X 0.25, Y 0.0

Rabbit: Shear X 0.50, Y 0.0

Rabbit Shear X 0.50, Y 0.0

Rabbit: Shear X 0.0, Y 0.5

Rabbit Shear X 0.0, Y 0.5

Rabbit: Shear X 0.0, Y 0.25

Rabbit Shear X 0.0, Y 0.25

Rabbit: Shear X 0.0, Y 1.0

Rabbit Shear X 0.0, Y 1.0

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Image Transform Rotate

Article Purpose

This article provides a discussion exploring the concept of rotation as an image transformation. In addition to conventional rotation this article illustrates the concept of individual colour channel rotation.

Daisy: Rotate Red 0°, Green 10°, Blue 20°

Daisy Rotate Red 0 Green 10 Blue 20 

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the Sample Application

A Sample Application has been included in the sample source code that accompanies this article. The sample application serves as an implementation of the concepts discussed throughout this article. Concepts can be easily tested and replicated using the sample application.

Daisy: Rotate Red 15°, Green 5°, Blue 10°

Daisy Rotate Red 15 Green 5 Blue 10

When using the sample application users are able to load source/input images from the local system by clicking the Load Image button. Required user input via the user interface can be found in the form of three numeric up/down controls labelled Blue, Green and Red respectively. Each control represents the degree to which the related colour component should be rotated. Possible input values range from –360 to 360. Positive values result in clockwise rotation, whereas negative values result in counter clockwise rotation. The sample application enables users to save result images to the local file system by clicking the Save Image button.

The following image is a screenshot of the Image Transform Rotate sample application in action:

Image Transform Rotate Sample Application

Image Rotation Transformation

A rotation transformation applied to an image from a theoretical point of view is based in transformation geometry. From Wikipedia we learn the following definition:

In mathematics, transformation geometry (or transformational geometry) is the name of a mathematical and pedagogic approach to the study of geometry by focusing on groups of geometric transformations, and the properties of figures that are invariant under them. It is opposed to the classical synthetic geometry approach of Euclidean geometry, that focuses on geometric constructions.

Rose: Rotate Red –20°, Green 0°, Blue 20°

Rose Rotate Red -20 Green 0 Blue 20

In this article rotation is implemented through applying a set algorithm to the coordinates of each pixel forming part of a source/input image. In the corresponding result image the pixel located at the calculated rotated coordinates will be assigned the colour channel values of the original pixel.

The algorithms implemented when calculating a pixel’s rotated coordinates can be expressed as follows:

RotateX_Algorithm

RotateY_Algorithm
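Matching the RotateXY implementation shown later, where the image middle serves as the point of rotation, the two algorithms can be written as:

$R(x) = (x - \frac{W}{2}) \cos \alpha - (y - \frac{H}{2}) \sin \alpha + \frac{W}{2}$

$R(y) = (x - \frac{W}{2}) \sin \alpha + (y - \frac{H}{2}) \cos \alpha + \frac{H}{2}$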

Symbols/variables contained in the algorithms:

  • R (x) : The result of rotating a pixel’s x-coordinate.
  • R (y) : The result of rotating a pixel’s y-coordinate.
  • x : The source pixel’s x-coordinate.
  • y : The source pixel’s y-coordinate.
  • W : The width in pixels of the source image.
  • H : The height in pixels of the source image.
  • α : The lower case Greek alphabet letter alpha. The value represented by alpha reflects the degree of rotation.

Butterfly: Rotate Red 10°, Green 0°, Blue 0°

Butterfly Rotate Red 10 Green 0 Blue 0

In order to apply a rotation transformation each pixel forming part of the source/input image should be iterated. The algorithms expressed above should be applied to each pixel.

The pixel coordinates located at exactly the middle of an image can be calculated through dividing the width by a factor of two in regards to the X-coordinate. The Y-coordinate can be calculated through dividing the height also by a factor of two. The algorithms calculate the coordinates of the middle pixel and implement those coordinates as offsets. Implementing the pixel offsets results in images being rotated around the image’s middle, as opposed to the top left pixel (0,0).

This article and the associated sample source code extends the concept of traditional rotation through implementing rotation on a per colour channel basis. Through user input the individual degree of rotation can be specified for each colour channel, namely Red, Green and Blue. Functionality has been implemented allowing each colour channel to be rotated to a different degree. In essence the algorithms described above have to be implemented three times per pixel iterated.

Daisy: Rotate Red 30°, Green 0°, Blue 180°

Daisy Rotate Red 30 Green 0 Blue 180 

Implementing a Rotation Transformation

The sample source code implements a rotation transformation through the implementation of two extension methods: RotateXY and RotateImage.

The RotateXY extension method targets the Point structure. This method serves as an encapsulation of the logic behind calculating rotated coordinates at a specified angle. The practical C# code implementation of the algorithms discussed in the previous section can be found within this method. The definition as follows:

public static Point RotateXY(this Point source, double degrees,
                                       int offsetX, int offsetY)
{ 
   Point result = new Point();
 
   result.X = (int)(Math.Round((source.X - offsetX) *
              Math.Cos(degrees) - (source.Y - offsetY) *
              Math.Sin(degrees))) + offsetX;

   result.Y = (int)(Math.Round((source.X - offsetX) *
              Math.Sin(degrees) + (source.Y - offsetY) *
              Math.Cos(degrees))) + offsetY;

   return result;
}

Rose: Rotate Red –60°, Green 0°, Blue 60°

Rose Rotate Red -60 Green 0 Blue 60

The RotateImage extension method targets the Bitmap class. This method expects three rotation degree/angle values, each corresponding to a colour channel. Positive degrees result in clockwise rotation and negative values result in counter clockwise rotation. The definition as follows:

public static Bitmap RotateImage(this Bitmap sourceBitmap,  
                                       double degreesBlue, 
                                      double degreesGreen, 
                                        double degreesRed) 
{ 
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    // Convert to radians.
    degreesBlue = degreesBlue * Math.PI / 180.0;
    degreesGreen = degreesGreen * Math.PI / 180.0;
    degreesRed = degreesRed * Math.PI / 180.0;

    // Calculate offsets in order to rotate on the image middle.
    int xOffset = (int)(sourceBitmap.Width / 2.0);
    int yOffset = (int)(sourceBitmap.Height / 2.0);

    int sourceXY = 0;
    int resultXY = 0;

    Point sourcePoint = new Point();
    Point resultPoint = new Point();

    Rectangle imageBounds = new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height);

    for (int row = 0; row < sourceBitmap.Height; row++)
    {
        for (int col = 0; col < sourceBitmap.Width; col++)
        {
            sourceXY = row * sourceData.Stride + col * 4;

            sourcePoint.X = col;
            sourcePoint.Y = row;

            if (sourceXY >= 0 && sourceXY + 3 < pixelBuffer.Length)
            {
                // Calculate Blue channel rotation. The channel value is also
                // copied to the neighbouring result pixel, reducing gaps
                // produced by rounding the calculated coordinates.
                resultPoint = sourcePoint.RotateXY(degreesBlue, xOffset, yOffset);
                resultXY = (int)(Math.Round((resultPoint.Y * sourceData.Stride) +
                                            (resultPoint.X * 4.0)));

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 4] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }

                // Calculate Green channel rotation.
                resultPoint = sourcePoint.RotateXY(degreesGreen, xOffset, yOffset);
                resultXY = (int)(Math.Round((resultPoint.Y * sourceData.Stride) +
                                            (resultPoint.X * 4.0)));

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 5] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 1] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }

                // Calculate Red channel rotation.
                resultPoint = sourcePoint.RotateXY(degreesRed, xOffset, yOffset);
                resultXY = (int)(Math.Round((resultPoint.Y * sourceData.Stride) +
                                            (resultPoint.X * 4.0)));

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 6] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 2] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }
            }
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
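The following is a minimal usage example; the file names serve only as placeholders:

Bitmap inputImage = new Bitmap("input.png");

// Rotate the blue channel by 180 degrees and the red channel by
// 30 degrees, leaving the green channel unrotated.
Bitmap rotatedImage = inputImage.RotateImage(180, 0, 30);

rotatedImage.Save("rotated.png");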

Daisy: Rotate Red 15°, Green 5°, Blue 5°

Daisy Rotate Red 15 Green 5 Blue 5

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction.

The sample images featuring an image of a yellow daisy are licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license and can be downloaded from Wikimedia.org.

The sample images featuring an image of a white daisy are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia.

The sample images featuring an image of a CPU are licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license. The original author is credited as Andrew Dunn. The original image can be downloaded from .

The sample images featuring an image of a rose are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic licenses. The original image can be downloaded from .

The sample images featuring an image of a butterfly are licensed under the Creative Commons Attribution 3.0 Unported license and can be downloaded from Wikimedia.org.

The Original Image

Intel_80486DX2_bottom

CPU: Rotate Red 90°, Green 0°, Blue –30°

CPU Rotate Red 90 Green 0 Blue -30

CPU: Rotate Red 0°, Green 10°, Blue 0°

CPU Rotate Red 0 Green 10 Blue 0

CPU: Rotate Red –4°, Green 4°, Blue 6°

CPU Rotate Red -4 Green 4 Blue 6

CPU: Rotate Red 10°, Green 0°, Blue 0°

CPU Rotate Red 10 Green 0 Blue 0

CPU: Rotate Red 10°, Green –5°, Blue 0°

CPU Rotate Red 10 Green -5 Blue 0

CPU: Rotate Red 10°, Green 0°, Blue 10°

CPU Rotate Red 10 Green 0 Blue 10

CPU: Rotate Red –10°, Green 10°, Blue 0°

CPU Rotate Red -10 Green 10 Blue 0

CPU: Rotate Red 30°, Green –30°, Blue 0°

CPU Rotate Red 30 Green -30 Blue 0

CPU: Rotate Red 40°, Green 20°, Blue 0°

CPU Rotate Red 40 Green 20 Blue 0

CPU: Rotate Red 60°, Green 30°, Blue 0°

CPU Rotate Red 60 Green 30 Blue 0

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Image Blur

Article Purpose

This article serves to provide an introduction and discussion relating to image blurring methods and techniques. The Image Blur methods covered in this article include: the Mean filter/Box blur, the Gaussian blur, the Median filter blur and the Motion Blur filter.

Daisy: Mean 9×9

Daisy Mean 9x9

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the Sample Application

This article is accompanied by a sample application, intended to provide a means of testing and replicating topics discussed in this article. The sample application is a Windows Forms based application of which the user interface enables the user to select an image blur type to implement.

When clicking the Load Image button users are able to browse the local file system in order to select source/input images. In addition users are also able to save blurred result images when clicking the Save Image button and browsing the local file system.

Daisy: Mean 7×7

Daisy Mean 7x7

The sample application provides the user with the ability to select the method of image blur to implement. The dropdown located on the right-hand side of the user interface lists all of the supported blur methods. When a user selects an item from the dropdown, the associated blur method will be implemented on the preview image.

The image below is a screenshot of the Image Blur Filter sample application in action:

Image Blur Filter Sample Application

Image Blur Overview

The process of image blurring can be regarded as reducing the sharpness or crispness defined by an image. Blurring results in detail/edges being perceived as less distinct. Images are often blurred as a method of smoothing an image.

Images perceived as too crisp/sharp can be softened by applying a variety of blur techniques and intensity levels. Often images are smoothed/blurred in order to remove/reduce image noise. In edge detection implementations better results are often achieved when the source image has first been smoothed/blurred. Blurring can even be implemented in a fashion where results reflect image sharpening, a method known as unsharp masking.

In this article and the accompanying sample source code all supported methods of image blurring have been implemented through convolution, with the exception of the Median filter. Each of the supported methods in essence only represents a different convolution kernel. The technique capable of achieving optimal results will to varying degrees be dependent on the features present in the specified source/input image. Each method provides a different set of desired properties and compromises. In the following sections an overview of each method will be discussed.

Daisy: Mean 9×9

Daisy Mean 9x9

Mean Filter/Box Blur

The Mean filter, also sometimes referred to as a Box blur, represents a fairly simplistic implementation and definition. A definition can be found on Wikipedia as follows:

A box blur is an image filter in which each pixel in the resulting image has a value equal to the average value of its neighbouring pixels in the input image. It is a form of low-pass ("blurring") filter and is a convolution filter.

Due to its property of using equal weights it can be implemented using a much simpler accumulation algorithm which is significantly faster than using a sliding window algorithm.

Mean filter as a title relates to all weight values in a convolution kernel being equal, hence the alternate title of Box blur. In most cases a Mean filter kernel will only contain the value one. When performing convolution implementing a Mean filter, the factor value equates to one divided by the sum of all kernel values.

The following is an example of a 5×5 convolution kernel:

Mean Filter Blur 5x5 Kernel

The kernel consists of 25 elements, therefore the factor value equates to one divided by twenty five.
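Written out in full, the 5×5 Mean filter kernel and its factor, as also defined by the Matrix.Mean5x5 property listed later, take the form:

$\frac{1}{25} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix}$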

The Mean filter blur does not result in the same level of smoothing achieved by other methods. The method can also be susceptible to directional artefacts.

Daisy: Mean 5×5

Daisy Mean 5x5

Gaussian Blur

The Gaussian blur method is a popular and often implemented filter. In contrast to the Mean filter, the Gaussian blur method produces resulting images appearing to contain a more uniform level of smoothing. When implementing edge detection a Gaussian blur is often applied to source/input images, resulting in noise reduction. The Gaussian blur has a good level of edge preservation, hence being used in edge detection operations.

From Wikipedia we gain the following definition:

A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function. It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales

A potential drawback to implementing a Gaussian blur results from the filter being computationally intensive. The following represents a 5×5 Gaussian kernel. The sum total of all elements in the kernel equates to 159, therefore a factor value of 1.0 / 159.0 will be implemented.

Gaussian Blur 5x5 Kernel
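Given that its elements sum to 159, the kernel referenced above is in all likelihood the widely used 5×5 Gaussian kernel also found in common Canny edge detection implementations:

$\frac{1}{159} \begin{bmatrix} 2 & 4 & 5 & 4 & 2 \\ 4 & 9 & 12 & 9 & 4 \\ 5 & 12 & 15 & 12 & 5 \\ 4 & 9 & 12 & 9 & 4 \\ 2 & 4 & 5 & 4 & 2 \end{bmatrix}$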

Daisy: Gaussian 5×5

Daisy Gaussian 5x5

Median Filter Blur

The Median filter blur is classified as a non-linear filter. In contrast to the other methods of image blurring discussed in this article the Median filter implementation does not involve convolution or a predefined matrix kernel. The following definition can be found on Wikipedia:

In signal processing, it is often desirable to be able to perform some kind of noise reduction on an image or signal. The median filter is a nonlinear digital filtering technique, often used to remove noise. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise.

Daisy: Median 7×7

Daisy Median 7x7

As the name implies, the Median filter operates by calculating the median value of a pixel group, also referred to as a window. Calculating a median value involves a number of steps. The required steps are listed as follows, with a minimal code sketch following the list:

  1. Iterate each pixel that forms part of the source/input image.
  2. In relation to the pixel currently being iterated determine neighbouring pixels located within the bounds defined by the window size. The window location should be offset in order to align the window’s middle pixel and the pixel currently being iterated.
  3. Neighbouring pixels located within the bounds defined by the window should be added to a one dimensional neighbourhood array. Once all values have been added, the array should be sorted by value.
  4. The pixel value located at the middle of the sorted neighbourhood array qualifies as the median value. The newly determined median value should be assigned to the pixel currently being iterated.
  5. Repeat the steps listed above until all pixels within the source/input image have been iterated.
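The MedianFilter extension method invoked later by ImageBlurFilter does not form part of this extract. The following is a minimal sketch of the per-pixel median calculation described above, assuming the 32bpp ARGB buffer layout used elsewhere in this article and a caller that keeps the window within the image bounds; the method name and parameters are illustrative:

private static byte MedianOfWindow(byte[] pixelBuffer, int stride,
                                   int x, int y, int windowSize,
                                   int channelOffset)
{
    int radius = windowSize / 2;
    List<byte> neighbourhood = new List<byte>();

    // Gather the channel values of all pixels within the window.
    for (int offsetY = -radius; offsetY <= radius; offsetY++)
    {
        for (int offsetX = -radius; offsetX <= radius; offsetX++)
        {
            int offset = (y + offsetY) * stride +
                         (x + offsetX) * 4 + channelOffset;

            neighbourhood.Add(pixelBuffer[offset]);
        }
    }

    // The middle element of the sorted neighbourhood is the median.
    neighbourhood.Sort();

    return neighbourhood[neighbourhood.Count / 2];
}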

Similar to the Gaussian blur filter the Median filter has the ability to smooth images whilst providing edge preservation. Depending on the window size implemented and the physical dimensions of the input/source image the Median filter can be computationally expensive.

Daisy: Median 9×9

Daisy Median 9x9

Motion Blur

The sample source code implements Motion Blur filters. Motion blur in the traditional sense is associated with photography and video capturing. Motion blur can often be observed in scenarios where rapid movements are being captured to photographs or video recordings. When recording a single frame, rapid movements could result in the scene changing before the frame capture has completed.

Motion blur can be synthetically imitated through the implementation of digital Motion Blur filters. The size of the kernel provided when implementing a Motion Blur filter affects the blur intensity perceived in result images. The size of the specified kernel influences the perception and appearance of how rapidly movement had occurred to have blurred the resulting image. Larger kernels produce the appearance of more rapid motion, whereas smaller kernels result in less rapid motion being perceived.

Daisy: Motion Blur 7×7 135 Degrees

Daisy Motion Blur 7x7 135 Degrees

Depending on the kernel specified the ability exists to create the appearance of movement having occurred in a certain direction. The sample source code implements Motion Blur filters at 45 degrees, 135 degrees and in both directions simultaneously.

The kernels listed below represent a 5×5 Motion Blur filter occurring at 45 degrees and 135 degrees:

MotionBlur5x5
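The MotionBlur5x5At45Degrees and MotionBlur5x5At135Degrees kernels are not listed in this extract. Consistent with the factor values implemented later in ImageBlurFilter (1/5 for a single direction, 1/10 when both diagonals are combined), each kernel plausibly consists of five ones placed along one of the two diagonals:

$\begin{bmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$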

Image Blur Implementation

The sample source code implements all of the concepts explored throughout this article. The source code definition can be grouped into 4 sections: ImageBlurFilter method, ConvolutionFilter method, MedianFilter method and the Matrix class. The following article sections relate to the 4 main source code sections.

The ImageBlurFilter method has the purpose of invoking the correct blur filter method with the relevant method parameters. This method acts as a wrapper providing the technical implementation details required when performing a specified blur filter.

The definition of the ImageBlurFilter method as follows:

public static Bitmap ImageBlurFilter(this Bitmap sourceBitmap,
                                     BlurType blurType)
{
    Bitmap resultBitmap = null;

    switch (blurType)
    {
        case BlurType.Mean3x3:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Mean3x3, 1.0 / 9.0, 0);
            break;
        case BlurType.Mean5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Mean5x5, 1.0 / 25.0, 0);
            break;
        case BlurType.Mean7x7:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Mean7x7, 1.0 / 49.0, 0);
            break;
        case BlurType.Mean9x9:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Mean9x9, 1.0 / 81.0, 0);
            break;
        case BlurType.GaussianBlur3x3:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.GaussianBlur3x3, 1.0 / 16.0, 0);
            break;
        case BlurType.GaussianBlur5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.GaussianBlur5x5, 1.0 / 159.0, 0);
            break;
        case BlurType.MotionBlur5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur5x5, 1.0 / 10.0, 0);
            break;
        case BlurType.MotionBlur5x5At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur5x5At45Degrees, 1.0 / 5.0, 0);
            break;
        case BlurType.MotionBlur5x5At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur5x5At135Degrees, 1.0 / 5.0, 0);
            break;
        case BlurType.MotionBlur7x7:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur7x7, 1.0 / 14.0, 0);
            break;
        case BlurType.MotionBlur7x7At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur7x7At45Degrees, 1.0 / 7.0, 0);
            break;
        case BlurType.MotionBlur7x7At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur7x7At135Degrees, 1.0 / 7.0, 0);
            break;
        case BlurType.MotionBlur9x9:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur9x9, 1.0 / 18.0, 0);
            break;
        case BlurType.MotionBlur9x9At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur9x9At45Degrees, 1.0 / 9.0, 0);
            break;
        case BlurType.MotionBlur9x9At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur9x9At135Degrees, 1.0 / 9.0, 0);
            break;
        case BlurType.Median3x3:
            resultBitmap = sourceBitmap.MedianFilter(3);
            break;
        case BlurType.Median5x5:
            resultBitmap = sourceBitmap.MedianFilter(5);
            break;
        case BlurType.Median7x7:
            resultBitmap = sourceBitmap.MedianFilter(7);
            break;
        case BlurType.Median9x9:
            resultBitmap = sourceBitmap.MedianFilter(9);
            break;
        case BlurType.Median11x11:
            resultBitmap = sourceBitmap.MedianFilter(11);
            break;
    }

    return resultBitmap;
}
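A usage sketch follows; the file names are hypothetical, while the BlurType values correspond to the cases handled above:

 Bitmap source = new Bitmap("input.png");
 Bitmap blurred = source.ImageBlurFilter(BlurType.GaussianBlur5x5);
 blurred.Save("blurred.png", System.Drawing.Imaging.ImageFormat.Png);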

Daisy: Motion Blur 9×9

Daisy Motion Blur 9x9

The Matrix class serves as a collection of various convolution matrix/kernel definitions. The Matrix class and all public properties are defined as static. The definition of the Matrix class is as follows:

     public static class Matrix 
    {  
         public static double[,] Mean3x3 
         {  
             get 
             {  
                 return new double[,]   
                { {  1, 1, 1, },  
                  {  1, 1, 1, },  
                  {  1, 1, 1, }, }; 
             }  
         }  

     public static double[,] Mean5x5
     {
         get
         {
             return new double[,]
             { { 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1 }, };
         }
     }

     public static double[,] Mean7x7
     {
         get
         {
             return new double[,]
             { { 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1 }, };
         }
     }

     public static double[,] Mean9x9
     {
         get
         {
             return new double[,]
             { { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
               { 1, 1, 1, 1, 1, 1, 1, 1, 1 }, };
         }
     }

     public static double[,] GaussianBlur3x3
     {
         get
         {
             return new double[,]
             { { 1, 2, 1, },
               { 2, 4, 2, },
               { 1, 2, 1, }, };
         }
     }

     public static double[,] GaussianBlur5x5
     {
         get
         {
             return new double[,]
             { { 2, 04, 05, 04, 2 },
               { 4, 09, 12, 09, 4 },
               { 5, 12, 15, 12, 5 },
               { 4, 09, 12, 09, 4 },
               { 2, 04, 05, 04, 2 }, };
         }
     }

     public static double[,] MotionBlur5x5
     {
         get
         {
             return new double[,]
             { { 1, 0, 0, 0, 1 },
               { 0, 1, 0, 1, 0 },
               { 0, 0, 1, 0, 0 },
               { 0, 1, 0, 1, 0 },
               { 1, 0, 0, 0, 1 }, };
         }
     }

     public static double[,] MotionBlur5x5At45Degrees
     {
         get
         {
             return new double[,]
             { { 0, 0, 0, 0, 1 },
               { 0, 0, 0, 1, 0 },
               { 0, 0, 1, 0, 0 },
               { 0, 1, 0, 0, 0 },
               { 1, 0, 0, 0, 0 }, };
         }
     }

     public static double[,] MotionBlur5x5At135Degrees
     {
         get
         {
             return new double[,]
             { { 1, 0, 0, 0, 0 },
               { 0, 1, 0, 0, 0 },
               { 0, 0, 1, 0, 0 },
               { 0, 0, 0, 1, 0 },
               { 0, 0, 0, 0, 1 }, };
         }
     }

     public static double[,] MotionBlur7x7
     {
         get
         {
             return new double[,]
             { { 1, 0, 0, 0, 0, 0, 1 },
               { 0, 1, 0, 0, 0, 1, 0 },
               { 0, 0, 1, 0, 1, 0, 0 },
               { 0, 0, 0, 1, 0, 0, 0 },
               { 0, 0, 1, 0, 1, 0, 0 },
               { 0, 1, 0, 0, 0, 1, 0 },
               { 1, 0, 0, 0, 0, 0, 1 }, };
         }
     }

     public static double[,] MotionBlur7x7At45Degrees
     {
         get
         {
             return new double[,]
             { { 0, 0, 0, 0, 0, 0, 1 },
               { 0, 0, 0, 0, 0, 1, 0 },
               { 0, 0, 0, 0, 1, 0, 0 },
               { 0, 0, 0, 1, 0, 0, 0 },
               { 0, 0, 1, 0, 0, 0, 0 },
               { 0, 1, 0, 0, 0, 0, 0 },
               { 1, 0, 0, 0, 0, 0, 0 }, };
         }
     }

     public static double[,] MotionBlur7x7At135Degrees
     {
         get
         {
             return new double[,]
             { { 1, 0, 0, 0, 0, 0, 0 },
               { 0, 1, 0, 0, 0, 0, 0 },
               { 0, 0, 1, 0, 0, 0, 0 },
               { 0, 0, 0, 1, 0, 0, 0 },
               { 0, 0, 0, 0, 1, 0, 0 },
               { 0, 0, 0, 0, 0, 1, 0 },
               { 0, 0, 0, 0, 0, 0, 1 }, };
         }
     }

     public static double[,] MotionBlur9x9
     {
         get
         {
             return new double[,]
             { { 1, 0, 0, 0, 0, 0, 0, 0, 1, },
               { 0, 1, 0, 0, 0, 0, 0, 1, 0, },
               { 0, 0, 1, 0, 0, 0, 1, 0, 0, },
               { 0, 0, 0, 1, 0, 1, 0, 0, 0, },
               { 0, 0, 0, 0, 1, 0, 0, 0, 0, },
               { 0, 0, 0, 1, 0, 1, 0, 0, 0, },
               { 0, 0, 1, 0, 0, 0, 1, 0, 0, },
               { 0, 1, 0, 0, 0, 0, 0, 1, 0, },
               { 1, 0, 0, 0, 0, 0, 0, 0, 1, }, };
         }
     }

     public static double[,] MotionBlur9x9At45Degrees
     {
         get
         {
             return new double[,]
             { { 0, 0, 0, 0, 0, 0, 0, 0, 1, },
               { 0, 0, 0, 0, 0, 0, 0, 1, 0, },
               { 0, 0, 0, 0, 0, 0, 1, 0, 0, },
               { 0, 0, 0, 0, 0, 1, 0, 0, 0, },
               { 0, 0, 0, 0, 1, 0, 0, 0, 0, },
               { 0, 0, 0, 1, 0, 0, 0, 0, 0, },
               { 0, 0, 1, 0, 0, 0, 0, 0, 0, },
               { 0, 1, 0, 0, 0, 0, 0, 0, 0, },
               { 1, 0, 0, 0, 0, 0, 0, 0, 0, }, };
         }
     }

     public static double[,] MotionBlur9x9At135Degrees
     {
         get
         {
             return new double[,]
             { { 1, 0, 0, 0, 0, 0, 0, 0, 0, },
               { 0, 1, 0, 0, 0, 0, 0, 0, 0, },
               { 0, 0, 1, 0, 0, 0, 0, 0, 0, },
               { 0, 0, 0, 1, 0, 0, 0, 0, 0, },
               { 0, 0, 0, 0, 1, 0, 0, 0, 0, },
               { 0, 0, 0, 0, 0, 1, 0, 0, 0, },
               { 0, 0, 0, 0, 0, 0, 1, 0, 0, },
               { 0, 0, 0, 0, 0, 0, 0, 1, 0, },
               { 0, 0, 0, 0, 0, 0, 0, 0, 1, }, };
         }
     }
 }
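Notice that each matrix is defined using unscaled integer coefficients; brightness is preserved by the factor parameter passed to ConvolutionFilter, which in most cases equals the reciprocal of the sum of the matrix elements (for example 1.0 / 16.0 for GaussianBlur3x3). A minimal sketch, assuming only the Matrix definitions above, for deriving such a factor:

 private static double NormalisationFactor(double[,] kernel)
 {
     double sum = 0;

     // Sum every coefficient; dividing by this total preserves brightness.
     foreach (double element in kernel)
     {
         sum += element;
     }

     return 1.0 / sum;
 }

Note that the hard-coded factors used by ImageBlurFilter do not always match this value exactly: the combined motion blur matrices, whose coefficients sum to an odd count (for example 9 for MotionBlur5x5), are paired with slightly larger divisors such as 1.0 / 10.0, marginally darkening the result.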

Daisy: Median 7×7

Daisy Median 7x7

The MedianFilter extension method targets the Bitmap class. The MedianFilter method applies a median filter using the specified source Bitmap and matrix size (window size), returning a new Bitmap representing the filtered image.

The definition of the MedianFilter method is as follows:

public static Bitmap MedianFilter(this Bitmap sourceBitmap,
                                  int matrixSize)
{
    BitmapData sourceData =
               sourceBitmap.LockBits(new Rectangle(0, 0,
               sourceBitmap.Width, sourceBitmap.Height),
               ImageLockMode.ReadOnly,
               PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    List<int> neighbourPixels = new List<int>();
    byte[] middlePixel;

    for (int offsetY = filterOffset; offsetY <
         sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
             sourceBitmap.Width - filterOffset; offsetX++)
        {
            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            neighbourPixels.Clear();

            // Gather the window's pixels as packed 32-bit ARGB values.
            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) +
                                 (filterY * sourceData.Stride);

                    neighbourPixels.Add(BitConverter.ToInt32(
                                        pixelBuffer, calcOffset));
                }
            }

            // The median sits at the middle of the sorted neighbourhood.
            neighbourPixels.Sort();
            middlePixel = BitConverter.GetBytes(
                          neighbourPixels[neighbourPixels.Count / 2]);

            resultBuffer[byteOffset] = middlePixel[0];
            resultBuffer[byteOffset + 1] = middlePixel[1];
            resultBuffer[byteOffset + 2] = middlePixel[2];
            resultBuffer[byteOffset + 3] = middlePixel[3];
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
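A usage sketch: applying a 5×5 median filter requires only the window size (the value chosen is illustrative):

 Bitmap denoised = sourceBitmap.MedianFilter(5);

Note that neighbouring pixels are compared as packed 32-bit ARGB integers, meaning the median is selected over whole pixel values rather than per colour component; a per-channel median would require a separate sorted neighbourhood array for each of the Red, Green and Blue components.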

Daisy: Motion Blur 9×9

Daisy Motion Blur 9x9

The sample source code performs convolution filtering by invoking the ConvolutionFilter method.

The definition of the ConvolutionFilter method is as follows:

private static Bitmap ConvolutionFilter(this Bitmap sourceBitmap,
                                        double[,] filterMatrix,
                                        double factor = 1,
                                        int bias = 0)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                            ImageLockMode.ReadOnly,
                            PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double blue = 0.0;
    double green = 0.0;
    double red = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY <
         sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
             sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0;
            green = 0;
            red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            // Multiply each neighbouring pixel by the matching kernel element.
            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) +
                                 (filterY * sourceData.Stride);

                    blue += (double)(pixelBuffer[calcOffset]) *
                            filterMatrix[filterY + filterOffset,
                                         filterX + filterOffset];

                    green += (double)(pixelBuffer[calcOffset + 1]) *
                             filterMatrix[filterY + filterOffset,
                                          filterX + filterOffset];

                    red += (double)(pixelBuffer[calcOffset + 2]) *
                           filterMatrix[filterY + filterOffset,
                                        filterX + filterOffset];
                }
            }

            // Apply the normalisation factor and bias, then clamp to [0, 255].
            blue = factor * blue + bias;
            green = factor * green + bias;
            red = factor * red + bias;

            blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
            green = (green > 255 ? 255 : (green < 0 ? 0 : green));
            red = (red > 255 ? 255 : (red < 0 ? 0 : red));

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
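Since ConvolutionFilter is declared private, consuming code reaches it through ImageBlurFilter; within the defining class a direct call would look as follows (a sketch using the Gaussian matrix and the matching factor from the switch statement above):

 Bitmap smoothed = sourceBitmap.ConvolutionFilter(Matrix.GaussianBlur3x3, 1.0 / 16.0, 0);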

Sample Images

This article features a number of sample images. All featured images have been licensed under terms allowing reproduction.

The sample images featuring a yellow daisy are licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license and can be downloaded from Wikimedia.org.

The sample images featuring a white daisy are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia.

The sample images featuring a pink daisy are licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license and can be downloaded from Wikipedia.

The sample images featuring a purple daisy are licensed under the Creative Commons Attribution-ShareAlike 3.0 License and can be downloaded from Wikipedia.

The Original Image

Purple_osteospermum

Daisy: Gaussian 3×3

Daisy Gaussian 3x3

Daisy: Gaussian 5×5

Daisy Gaussian 5x5

Daisy: Mean 3×3

Daisy Mean 3x3

Daisy: Mean 5×5

Daisy Mean 5x5

Daisy: Mean 7×7

Daisy Mean 7x7

Daisy: Mean 9×9

Daisy Mean 9x9

Daisy: Median 3×3

Daisy Median 3x3

Daisy: Median 5×5

Daisy Median 5x5

Daisy: Median 7×7

Daisy Median 7x7

Daisy: Median 9×9

Daisy Median 9x9

Daisy: Median 11×11

Daisy Median 11x11

Daisy: Motion Blur 5×5

Daisy Motion Blur 5x5

Daisy: Motion Blur 5×5 45 Degrees

Daisy Motion Blur 5x5 45 Degrees

Daisy: Motion Blur 5×5 135 Degrees

Daisy Motion Blur 5x5 135 Degrees

Daisy: Motion Blur 7×7

Daisy Motion Blur 7x7

Daisy: Motion Blur 7×7 45 Degrees

Daisy Motion Blur 7x7 45 Degree

Daisy: Motion Blur 7×7 135 Degrees

Daisy Motion Blur 7x7 135 Degrees

Daisy: Motion Blur 9×9

Daisy Motion Blur 9x9

Daisy: Motion Blur 9×9 45 Degrees

Daisy Motion Blur 9x9 45 Degrees

Daisy: Motion Blur 9×9 135 Degrees

Daisy Motion Blur 9x9 135 Degrees

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation, please share in the comments section.

I’ve published a number of articles related to imaging and image processing; URL links can be found here:

C# How to: Image Cartoon Effect

Article purpose

In this article we explore the tasks related to creating a Cartoon Effect from images which reflect real-world, non-animated scenarios. When applying a Cartoon Effect it becomes possible, with relative ease, to create images appearing to have originated from a drawing/animation.

Cartoon version of Steve Ballmer: Low Pass 3×3, Threshold 65.

Low Pass 3x3 Threshold 65

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download.

CPU: Gaussian 7×7, Threshold 84

Gaussian 7x7 Threshold 84 CPU

Using the Sample Application

A Sample Application has been included as part of the sample source code accompanying this article. The Sample Application is a Windows Forms based application which enables a user to specify a source/input image and apply various methods of implementing the Cartoon Effect. In addition, users are able to save generated images to the local system.

When using the Sample Application, click the Load Image button to load image files from the local file system. On the right-hand side of the Sample Application’s user interface, users are provided with two configuration options: Smoothing Filter and Threshold.

Rose: Gaussian 3×3 Threshold 28.

Gaussian 3x3 Threshold 28

In this article and the accompanying sample source code, image detail and definition are reduced by means of image smoothing filters. Several smoothing options are available to the user; the following section serves as a discussion of each option.

None – When specifying the Smoothing Filter option None, no smoothing operations will be performed on the source/input image.

Gaussian 3×3 – Gaussian filters can be very effective at removing noise and smoothing an image background, whilst still preserving the edges expressed in the sample/input image. A matrix/kernel of 3×3 dimensions results in slight blurring.

Gaussian 5×5 – A Gaussian blur operation implemented by making use of a matrix/kernel defined with dimensions of 5×5. A slightly larger matrix/kernel results in an increased level of blurring being expressed by the output image. A greater level of blurring equates to a larger degree of noise reduction/removal.

Rose: Gaussian 7×7 Threshold 48.

Gaussian 7x7 Threshold 48 

Gaussian 7×7 – As can be expected, when specifying a matrix/kernel conforming to 7×7 size dimensions, an even more intense level of blurring can be detected in result images. Notice how increased levels of blurring negatively affect the process of edge detection. Consider the following: in a scenario where too many elements are being detected as part of an edge as a result of image noise, specifying a higher level of blurring should reduce the edges being detected. The reasoning can be explained in terms of reducing noise/detail: higher levels of blurring result in a greater degree of detail/definition reduction. Lower definition images are less likely to express the same level of detected edges when compared to higher definition images.

CPU: Median 3×3, Threshold 96.

Median 3x3 Threshold 96 CPU

Median 3×3 – When applying a median filter to an image, the resulting image should express a lesser degree of noise. In other words, the median filter can be considered well suited to performing noise reduction. Also note that a median filter, under certain conditions, has the ability to preserve the edges contained in an image. In the following section we explore the importance of edge detection in achieving a Cartoon Effect. Important concepts to take note of: the median filter, when implemented on an image, performs noise reduction whilst preserving edges. In relation, edge detection represents a core concept/task when creating a Cartoon Effect. The median filter’s edge preservation property complements the process of edge detection. When an image contains a low level of noise, the Median 3×3 Filter could be considered.

Median 5×5 – The 5×5 dimension implementation of the median filter results in images which exhibit a higher degree of smoothing and a lesser expression of noise. If the Median 3×3 Filter fails to provide adequate levels of smoothing and noise reduction, the Median 5×5 Filter could be implemented.

Cartoon version of Steve Ballmer: Sharpen 3×3, Threshold 80.

Sharpen 3x3 Threshold 80

Median 7×7 – The last median filter variation implemented by the sample source code conforms to a 7×7 size dimension. This filter variation results in a high level of image smoothing. The trade-off for more effective noise reduction will be expressed in result images appearing extremely smooth, in some scenarios perhaps overly so.

Mean 3×3 – The Mean Filter provides a different implementation towards achieving image smoothing and noise reduction.

Mean 5×5 – The 5×5 dimension Mean Filter variation serves as a more intense version of the Mean 3×3 Filter. Depending on the level and type of image noise, a Mean Filter could prove a more efficient implementation in comparison to a Median Filter.

Low Pass 3×3 – In much the same fashion as Gaussian and Mean Filters, a Low Pass Filter achieves smoothing and noise reduction. Notice when comparing Gaussian, Mean and Low Pass Filtering that the differences observed in output results are only slight. The most effective filter to apply should be seen as dependent on the characteristics of the input/source image.

CPU: Gaussian 3×3, Threshold 92.

Gaussian 3x3 Threshold 92 CPU 

Low Pass 5×5 – This filter variation, being of a larger dimension, serves as a more intense implementation of the Low Pass 3×3 Filter.

Sharpen 3×3 – In certain scenarios the input/source image may already be smoothed/blurred to such an extent that the edge detection process performs below expectation. Edge detection can be improved when applying a Sharpen Filter.

Threshold values specified by the user through the user interface serve the purpose of enabling the user to finely control the extent/intensity of edges being detected. Implementing a higher Threshold value will result in fewer edges being detected. In order to reduce the level of image noise being detected as false edges, the Threshold value should be increased. When too few edges are being detected, the Threshold value should be decreased.

The following image is a screenshot of the Image Cartoon Effect Sample Application in action:

Image Cartoon Effect Sample Application

Explanation of the Cartoon Effect

The Cartoon Effect can be characterised as an image filter producing result images which appear similar to the input/source images, with the exception of having an animated appearance.

The Cartoon Effect consists of reducing image detail/definition whilst at the same time performing edge detection. The resulting smoothed image and the edges detected in the source/input image should then be combined, with detected edges being expressed in the colour black. The final image reflects an appearance similar to that of an animated/artist-drawn image.

Various methods of reducing detail/definition are supported in the sample source code, most of which consist of implementing image smoothing. The configurable methods implemented correspond to the smoothing filter options discussed in the previous section.

Rose: Mean 5×5 Threshold 37.

Mean 5x5 Threshold 37 

All of the filter methods listed above are implemented by means of convolution. The size dimensions listed for each filter option relate to the dimensions of the convolution matrix/kernel being implemented by a filter.

When applying a convolution filter, the intensity/extent will be determined by the size dimensions of the matrix/kernel implemented. Smaller matrix/kernel dimensions result in a filter being applied to a lesser extent. Larger matrix/kernel dimensions will result in the filter effect being more evident, being applied to a greater extent. Noise reduction will be achieved when implementing a smoothing filter.
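As a concrete example, a Mean 3×3 filter with factor 1/9 replaces each pixel with the average of its 3×3 neighbourhood: for neighbourhood values 10, 20, 30, 40, 50, 60, 70, 80 and 90, the output value is (10 + 20 + 30 + 40 + 50 + 60 + 70 + 80 + 90) / 9 = 450 / 9 = 50.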

The Sample Source code implements Gradient Based Edge Detection using the original source/input image, therefore not being influenced by any smoothing operations. I have published a separate, in-depth article on the topic of Gradient Based Edge Detection.

Rose: Median 3×3 Threshold 37.

Median 3x3 Threshold 37

The Sample source code implements Gradient Based Edge Detection by means of iterating each pixel that forms part of the sample/input image. Whilst iterating pixels, the sample code calculates various gradients from the current pixel’s neighbouring pixels, on a per colour component basis (Red, Green and Blue). Referring to neighbouring pixels, calculations involve the value of each of the pixels surrounding the pixel currently being iterated. Neighbouring pixel calculations are better known as matrix/window/kernel operations.

Note: Do not confuse convolution and the method in which we iterate and calculate gradients. Although both methods have various aspects in common, convolution is regarded as linear filter processing, whereas our method qualifies as a non-linear filter.

We calculate various gradients, which are to be compared against the user specified global threshold value. If a calculated gradient value exceeds the value of the user specified threshold, the pixel currently being iterated will be considered part of an edge.

The first gradients to be calculated involve the pixels directly above, below, left and right of the current pixel. A gradient will be calculated for each colour component. The gradient values being calculated can be considered as an indicator reflecting the rate of change. If the sum total of the calculated gradients exceeds that of the global threshold, the pixel will be considered as forming part of an edge.

When the comparison between the threshold value and the total gradient value reflects in favour of the threshold, the next set of gradients will be calculated. This process of calculating gradients will continue either until a gradient value exceeds the threshold or all gradients have been calculated.

If a pixel was detected as forming part of an edge, the pixel’s colour will be set to black. In the case of non-edge pixels, the original colour components from the source/input image will be used in setting the current pixel’s value.
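A minimal sketch of the first gradient test described above, for a single colour component; the buffer layout and variable names follow the CartoonEffectFilter implementation shown later (stride being the width of a buffer row in bytes, 4 being the width of one pixel):

 int gradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4])
              + Math.Abs(pixelBuffer[byteOffset - stride] - pixelBuffer[byteOffset + stride]);

 // In the full implementation the Blue, Green and Red gradients are summed
 // before the comparison against the user specified threshold.
 bool formsPartOfEdge = gradient > threshold;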

Rose: Gaussian 3×3 Threshold 28

Gaussian 3x3 Threshold 28

Implementing Cartoon Effects

The sample source code implementation can be divided into five distinct components: the Cartoon Effect Filter, a smoothing helper method, a median filter implementation, a convolution implementation and the collection of pre-defined matrix/kernel values.

The sample source code defines the MedianFilter extension method, targeting the Bitmap class. The following code snippet provides the definition:

public static Bitmap MedianFilter(this Bitmap sourceBitmap,
                                  int matrixSize)
{
    BitmapData sourceData =
               sourceBitmap.LockBits(new Rectangle(0, 0,
               sourceBitmap.Width, sourceBitmap.Height),
               ImageLockMode.ReadOnly,
               PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    List<int> neighbourPixels = new List<int>();
    byte[] middlePixel;

    for (int offsetY = filterOffset; offsetY <
         sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
             sourceBitmap.Width - filterOffset; offsetX++)
        {
            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            neighbourPixels.Clear();

            // Gather the window's pixels as packed 32-bit ARGB values.
            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) +
                                 (filterY * sourceData.Stride);

                    neighbourPixels.Add(BitConverter.ToInt32(
                                        pixelBuffer, calcOffset));
                }
            }

            // The median sits at the middle of the sorted neighbourhood.
            neighbourPixels.Sort();
            middlePixel = BitConverter.GetBytes(
                          neighbourPixels[neighbourPixels.Count / 2]);

            resultBuffer[byteOffset] = middlePixel[0];
            resultBuffer[byteOffset + 1] = middlePixel[1];
            resultBuffer[byteOffset + 2] = middlePixel[2];
            resultBuffer[byteOffset + 3] = middlePixel[3];
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

The SmoothingFilterType enum, defined by the sample source code, serves as a strongly typed definition of the collection of implemented smoothing filters. The definition is as follows:

 public enum SmoothingFilterType  
 {
     None, 
     Gaussian3x3, 
     Gaussian5x5, 
     Gaussian7x7, 
     Median3x3, 
     Median5x5, 
     Median7x7, 
     Median9x9, 
     Mean3x3, 
     Mean5x5, 
     LowPass3x3, 
     LowPass5x5, 
     Sharpen3x3, 
 } 
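A usage sketch: the enum value is passed to the SmoothingFilter method defined later in this section (the option chosen here is illustrative):

 Bitmap smoothed = sourceBitmap.SmoothingFilter(SmoothingFilterType.Median3x3);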

The Matrix class contains the definition of all the two dimensional matrix/kernel values implemented when performing convolution. The definition is as follows:

public static class Matrix 
{ 
    public static double[,] Gaussian3x3 
    { 
        get 
        {
            return new double[,]   
             { { 1, 2, 1, },  
               { 2, 4, 2, },  
               { 1, 2, 1, }, }; 
        } 
    }
 
    public static double[,] Gaussian5x5 
    {
        get 
        { 
            return new double[,]   
             { { 2, 04, 05, 04, 2  },  
               { 4, 09, 12, 09, 4  },  
               { 5, 12, 15, 12, 5  }, 
               { 4, 09, 12, 09, 4  }, 
               { 2, 04, 05, 04, 2  }, }; 
        } 
    } 
 
    public static double[,] Gaussian7x7 
    {
        get 
        { 
            return new double[,]   
             { { 1,  1,  2,  2,  2,  1,  1, },  
               { 1,  2,  2,  4,  2,  2,  1, },  
               { 2,  2,  4,  8,  4,  2,  2, },  
               { 2,  4,  8, 16,  8,  4,  2, },  
               { 2,  2,  4,  8,  4,  2,  2, },  
               { 1,  2,  2,  4,  2,  2,  1, },  
               { 1,  1,  2,  2,  2,  1,  1, }, }; 
        } 
    } 
 
    public static double[,] Mean3x3 
    { 
        get 
        { 
            return new double[,]   
             { { 1, 1, 1, },  
               { 1, 1, 1, },  
               { 1, 1, 1, }, }; 
        } 
    } 
 
    public static double[,] Mean5x5 
    { 
        get 
        { 
            return new double[,]   
             { { 1, 1, 1, 1, 1, },  
               { 1, 1, 1, 1, 1, },  
               { 1, 1, 1, 1, 1, },  
               { 1, 1, 1, 1, 1, },  
               { 1, 1, 1, 1, 1, }, }; 
        } 
    } 
 
    public static double[,] LowPass3x3 
    { 
        get 
        { 
            return new double[,]   
             { { 1, 2, 1, },  
               { 2, 4, 2, },   
               { 1, 2, 1, }, }; 
        }
    } 
 
    public static double[,] LowPass5x5 
    { 
        get 
        { 
            return new double[,]   
             { { 1, 1,  1, 1, 1, },  
               { 1, 4,  4, 4, 1, },  
               { 1, 4, 12, 4, 1, },  
               { 1, 4,  4, 4, 1, },  
               { 1, 1,  1, 1, 1, }, }; 
        }
    }
 
    public static double[,] Sharpen3x3 
    { 
        get 
        {
            return new double[,]   
             { { -1, -2, -1, },  
               {  2,  4,  2, },   
               {  1,  2,  1, }, }; 
        }
    } 
} 

Rose: Low Pass 3×3 Threshold 61

Low Pass 3x3 Threshold 61

The SmoothingFilter extension method targets the Bitmap class. This method implements image smoothing. The primary task performed by the SmoothingFilter involves translating filter options into the correct method calls. The definition is as follows:

public static Bitmap SmoothingFilter(this Bitmap sourceBitmap,
                                     SmoothingFilterType smoothFilter =
                                     SmoothingFilterType.None)
{
    Bitmap inputBitmap = null;

    switch (smoothFilter)
    {
        case SmoothingFilterType.None:
            inputBitmap = sourceBitmap;
            break;
        case SmoothingFilterType.Gaussian3x3:
            inputBitmap = sourceBitmap.ConvolutionFilter(
                          Matrix.Gaussian3x3, 1.0 / 16.0, 0);
            break;
        case SmoothingFilterType.Gaussian5x5:
            inputBitmap = sourceBitmap.ConvolutionFilter(
                          Matrix.Gaussian5x5, 1.0 / 159.0, 0);
            break;
        case SmoothingFilterType.Gaussian7x7:
            inputBitmap = sourceBitmap.ConvolutionFilter(
                          Matrix.Gaussian7x7, 1.0 / 136.0, 0);
            break;
        case SmoothingFilterType.Median3x3:
            inputBitmap = sourceBitmap.MedianFilter(3);
            break;
        case SmoothingFilterType.Median5x5:
            inputBitmap = sourceBitmap.MedianFilter(5);
            break;
        case SmoothingFilterType.Median7x7:
            inputBitmap = sourceBitmap.MedianFilter(7);
            break;
        case SmoothingFilterType.Median9x9:
            inputBitmap = sourceBitmap.MedianFilter(9);
            break;
        case SmoothingFilterType.Mean3x3:
            inputBitmap = sourceBitmap.ConvolutionFilter(
                          Matrix.Mean3x3, 1.0 / 9.0, 0);
            break;
        case SmoothingFilterType.Mean5x5:
            inputBitmap = sourceBitmap.ConvolutionFilter(
                          Matrix.Mean5x5, 1.0 / 25.0, 0);
            break;
        case SmoothingFilterType.LowPass3x3:
            inputBitmap = sourceBitmap.ConvolutionFilter(
                          Matrix.LowPass3x3, 1.0 / 16.0, 0);
            break;
        case SmoothingFilterType.LowPass5x5:
            inputBitmap = sourceBitmap.ConvolutionFilter(
                          Matrix.LowPass5x5, 1.0 / 60.0, 0);
            break;
        case SmoothingFilterType.Sharpen3x3:
            inputBitmap = sourceBitmap.ConvolutionFilter(
                          Matrix.Sharpen3x3, 1.0 / 8.0, 0);
            break;
    }

    return inputBitmap;
}
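As with the blur filters earlier, most of the hard-coded factors equal the reciprocal of the sum of the corresponding matrix’s coefficients (1.0 / 16.0 for Gaussian3x3, 1.0 / 60.0 for LowPass5x5, 1.0 / 8.0 for Sharpen3x3), which preserves overall image brightness; Gaussian7x7 is a slight exception, its coefficients summing to 140 rather than 136.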

The ConvolutionFilter extension method, which targets the Bitmap class, implements convolution. The definition is as follows:

private static Bitmap ConvolutionFilter(this Bitmap sourceBitmap,
                                        double[,] filterMatrix,
                                        double factor = 1,
                                        int bias = 0)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                            ImageLockMode.ReadOnly,
                            PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double blue = 0.0;
    double green = 0.0;
    double red = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY <
         sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
             sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0;
            green = 0;
            red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            // Multiply each neighbouring pixel by the matching kernel element.
            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) +
                                 (filterY * sourceData.Stride);

                    blue += (double)(pixelBuffer[calcOffset]) *
                            filterMatrix[filterY + filterOffset,
                                         filterX + filterOffset];

                    green += (double)(pixelBuffer[calcOffset + 1]) *
                             filterMatrix[filterY + filterOffset,
                                          filterX + filterOffset];

                    red += (double)(pixelBuffer[calcOffset + 2]) *
                           filterMatrix[filterY + filterOffset,
                                        filterX + filterOffset];
                }
            }

            // Apply the normalisation factor and bias, then clamp to [0, 255].
            blue = factor * blue + bias;
            green = factor * green + bias;
            red = factor * red + bias;

            blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
            green = (green > 255 ? 255 : (green < 0 ? 0 : green));
            red = (red > 255 ? 255 : (red < 0 ? 0 : red));

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Cartoon version of Steve Ballmer: Sharpen 3×3 Threshold 80

Sharpen 3x3 Threshold 80

The CartoonEffectFilter extension method targets the Bitmap class. This method defines all the tasks required in order to implement a Cartoon Filter. From an implementation point of view, consuming code is only required to invoke this method; no additional method calls are required. The definition is as follows:

public static Bitmap CartoonEffectFilter(
                                this Bitmap sourceBitmap,
                                byte threshold = 0,
                                SmoothingFilterType smoothFilter
                                = SmoothingFilterType.None)
{
    // Reduce detail/definition first by smoothing the source image.
    sourceBitmap = sourceBitmap.SmoothingFilter(smoothFilter);

    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                            ImageLockMode.ReadOnly,
                            PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int byteOffset = 0;
    int blueGradient, greenGradient, redGradient = 0;
    double blue = 0, green = 0, red = 0;

    bool exceedsThreshold = false;

    for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++)
    {
        for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++)
        {
            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            // First test: horizontal and vertical gradients combined.
            blueGradient = Math.Abs(pixelBuffer[byteOffset - 4] -
                                    pixelBuffer[byteOffset + 4]);
            blueGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] -
                                     pixelBuffer[byteOffset + sourceData.Stride]);
            byteOffset++;

            greenGradient = Math.Abs(pixelBuffer[byteOffset - 4] -
                                     pixelBuffer[byteOffset + 4]);
            greenGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] -
                                      pixelBuffer[byteOffset + sourceData.Stride]);
            byteOffset++;

            redGradient = Math.Abs(pixelBuffer[byteOffset - 4] -
                                   pixelBuffer[byteOffset + 4]);
            redGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] -
                                    pixelBuffer[byteOffset + sourceData.Stride]);

            if (blueGradient + greenGradient + redGradient > threshold)
            {
                exceedsThreshold = true;
            }
            else
            {
                byteOffset -= 2;

                // Second test: horizontal gradient only.
                blueGradient = Math.Abs(pixelBuffer[byteOffset - 4] -
                                        pixelBuffer[byteOffset + 4]);
                byteOffset++;

                greenGradient = Math.Abs(pixelBuffer[byteOffset - 4] -
                                         pixelBuffer[byteOffset + 4]);
                byteOffset++;

                redGradient = Math.Abs(pixelBuffer[byteOffset - 4] -
                                       pixelBuffer[byteOffset + 4]);

                if (blueGradient + greenGradient + redGradient > threshold)
                {
                    exceedsThreshold = true;
                }
                else
                {
                    byteOffset -= 2;

                    // Third test: vertical gradient only.
                    blueGradient = Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] -
                                            pixelBuffer[byteOffset + sourceData.Stride]);
                    byteOffset++;

                    greenGradient = Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] -
                                             pixelBuffer[byteOffset + sourceData.Stride]);
                    byteOffset++;

                    redGradient = Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] -
                                           pixelBuffer[byteOffset + sourceData.Stride]);

                    if (blueGradient + greenGradient + redGradient > threshold)
                    {
                        exceedsThreshold = true;
                    }
                    else
                    {
                        byteOffset -= 2;

                        // Fourth test: both diagonal gradients combined.
                        blueGradient = Math.Abs(pixelBuffer[byteOffset - 4 - sourceData.Stride] -
                                                pixelBuffer[byteOffset + 4 + sourceData.Stride]);
                        blueGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride + 4] -
                                                 pixelBuffer[byteOffset + sourceData.Stride - 4]);
                        byteOffset++;

                        greenGradient = Math.Abs(pixelBuffer[byteOffset - 4 - sourceData.Stride] -
                                                 pixelBuffer[byteOffset + 4 + sourceData.Stride]);
                        greenGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride + 4] -
                                                  pixelBuffer[byteOffset + sourceData.Stride - 4]);
                        byteOffset++;

                        redGradient = Math.Abs(pixelBuffer[byteOffset - 4 - sourceData.Stride] -
                                               pixelBuffer[byteOffset + 4 + sourceData.Stride]);
                        redGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride + 4] -
                                                pixelBuffer[byteOffset + sourceData.Stride - 4]);

                        exceedsThreshold = blueGradient + greenGradient +
                                           redGradient > threshold;
                    }
                }
            }

            byteOffset -= 2;

            // Edge pixels become black; other pixels keep their original colour.
            if (exceedsThreshold)
            {
                blue = 0;
                green = 0;
                red = 0;
            }
            else
            {
                blue = pixelBuffer[byteOffset];
                green = pixelBuffer[byteOffset + 1];
                red = pixelBuffer[byteOffset + 2];
            }

            blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
            green = (green > 255 ? 255 : (green < 0 ? 0 : green));
            red = (red > 255 ? 255 : (red < 0 ? 0 : red));

            resultBuffer[byteOffset] = (byte)blue;
            resultBuffer[byteOffset + 1] = (byte)green;
            resultBuffer[byteOffset + 2] = (byte)red;
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
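A usage sketch: a single extension method call produces the cartoon effect (the threshold and smoothing option chosen here are illustrative, as is the output file name):

 Bitmap cartoon = sourceBitmap.CartoonEffectFilter(80, SmoothingFilterType.Gaussian3x3);
 cartoon.Save("cartoon.png", System.Drawing.Imaging.ImageFormat.Png);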

Sample Images

The sample image used in this article which features Bill Gates has been licensed under the Creative Commons Attribution 2.0 Generic license and is available for download.

The sample image featuring Steve Ballmer has been licensed under the Creative Commons Attribution 2.0 Generic license and is available for download.

The sample image featuring an Amber Flush rose has been licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic licenses and is available for download.

The sample image featuring a computer processor has been licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license and is available for download. The original author is attributed as Andrew Dunn (http://www.andrewdunnphoto.com/).

The Original Image

BillGates2012

No Smoothing, Threshold 100

No Smoothing Threshold 100 Gates

Gaussian 3×3, Threshold 73

Gaussian 3x3 Threshold 73 Gates

Gaussian 5×5, Threshold 78

Gaussian 5x5 Threshold 78 Gates

Gaussian 7×7, Threshold 84

Gaussian 7x7 Threshold 84 Gates

Low Pass 3×3, Threshold 72

LowPass 3x3 Threshold 72 Gates

Low Pass 5×5, Threshold 81

LowPass 5x5 Threshold 81 Gates

Mean 3×3, Threshold 79

Mean 3x3 Threshold 79 Gates

Mean 5×5, Threshold 80

Mean 5x5 Threshold 80 Gates

Median 3×3, Threshold 85

Median 3x3 Threshold 85 Gates

Median 5×5, Threshold 105

Median 5x5 Threshold 105 Gates

Median 7×7, Threshold 127

Median 7x7 Threshold 127 Gates

Median 9×9, Threshold 154

Median 9x9 Threshold 154 Gates

Sharpen 3×3, Threshold 114

Sharpen 3x3 Threshold 114 Gates

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation, please share in the comments section.

I’ve published a number of articles related to imaging and image processing; URL links can be found here:
