Archive for the 'Image Arithmetic' Category



C# How to: Image ASCII Art

Article Purpose

This article explores the concept of rendering ASCII art from source images. Beyond exploring concepts this article also provides a practical implementation of all the steps required in creating an Image ASCII Filter.

Sir Tim Berners-Lee: 2 Pixels Per Character, 12 Characters,  Font Size 4, Zoom 100


Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the Sample Application

The sample source code that accompanies this article includes a sample application. The concepts illustrated in this article can be tested and replicated using the sample application.

The sample application user interface implements a variety of functionality which can be described as follows:

  • Loading/Saving Images – Users are able to load source/input images from the local system through clicking the Load Image button. Rendered ASCII art can be saved as an image file when clicking the Save Image button.
  • Pixels Per Character – This configuration option determines the number of pixels represented by a single character. Lower values result in better detail/definition and a larger proportional output. Higher values result in less detail/definition and a smaller proportional output.
  • Character Count – The number of unique characters to be rendered can be adjusted through updating this value. This value can be considered similar to the number of shades of gray in a grayscale image.
  • Font Size – This option determines the Font Size related to the rendered text.
  • Zoom Level – Configure this value in order to apply a scaling level when rendering text.
  • Copy to Clipboard – Copy the current ASCII art to the Windows Clipboard, in Rich Text Format.

The following image is a screenshot of the Image ASCII Art sample application in action:

Image ASCII Art Sample Application


Anders Hejlsberg: 1 Pixel Per Character, 24 Characters, Font Size 6, Zoom 20


Converting Pixels to Characters

ASCII art in various forms has been part of computer culture since the pioneering days of computing. From Wikipedia we gain the following:

ASCII art is a graphic design technique that uses computers for presentation and consists of pictures pieced together from the 95 printable (from a total of 128) characters defined by the ASCII Standard from 1963 and ASCII compliant character sets with proprietary extended characters (beyond the 128 characters of standard 7-bit ASCII). The term is also loosely used to refer to text based visual art in general. ASCII art can be created with any text editor, and is often used with free-form languages. Most examples of ASCII art require a fixed-width font (non-proportional fonts, as on a traditional typewriter) such as Courier for presentation.

Among the oldest known examples of ASCII art are the creations by computer-art pioneer Kenneth Knowlton from around 1966, who was working for Bell Labs at the time.[1] "Studies in Perception I" by Ken Knowlton and Leon Harmon from 1966 shows some examples of their early ASCII art.[2]

One of the main reasons ASCII art was born was because early printers often lacked graphics ability and thus characters were used in place of graphic marks. Also, to mark divisions between different print jobs from different users, bulk printers often used ASCII art to print large banners, making the division easier to spot so that the results could be more easily separated by a computer operator or clerk. ASCII art was also used in early e-mail when images could not be embedded.

Bjarne Stroustrup: 1 Pixel Per Character, 12 Characters, Font Size 6, Zoom 60


This article explores the steps involved in rendering text representing images, implementing source/input images in rendering text representations. The following sections detail the steps required to render text from source/input images:

  1. Generate Random Characters – Generate a string consisting of random characters. The number of characters will be determined through user input relating to the Character Count option. When generating the random string ensure that all characters added to the string are unique. In addition avoid adding control characters or punctuation characters. Control characters are non-visible characters such as Start of Text, Beep, New Line or Carriage Return. Most punctuation characters occupy a lot less screen space compared to regular alphabet characters.
  2. Determine Row and Column Count – Rows and Columns, in terms of the Pixels Per Character option, indicate the ratio between pixels and characters. The number of rows equates to the image height in pixels divided by the Pixels Per Character value. The number of columns equates to the image width in pixels divided by the Pixels Per Character value.
  3. Iterate Rows/Columns and Determine Colour Averages – Iterate pixels in terms of a rows and columns grid strategy. Calculate the sum total of each grid region’s colour values. Calculate the average/mean colour value through dividing the colour sum total by the Pixels Per Character value squared.
  4. Map Average Colour Intensity to a Character – Using the average colour values calculated in the previous step, calculate a colour intensity value ranging between 0 and the number of randomly generated characters. The intensity value should be implemented as an array index in accessing the string of random characters. All of the pixels included in calculating an average value should be represented by the random character located at the index equating to the colour average intensity value.
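
Steps 3 and 4 can be sketched in a few lines of C#. The following is a minimal, self-contained illustration only; the character string and the sample average colour values are assumptions, not the sample application’s actual data:

```csharp
using System;

// Stand-in for the randomly generated character string (step 1); in the
// sample application this would come from GenerateRandomString.
string randomCharacters = "#@XO+=-. ";

// Sample average colour values for a single grid region (step 3).
int avgBlue = 110, avgGreen = 120, avgRed = 120;

// Step 4: map the average intensity (0..255) onto an index into the string.
int intensity = (avgBlue + avgGreen + avgRed) / 3
                * (randomCharacters.Length - 1) / 255;

char rendered = randomCharacters[intensity];
Console.WriteLine(rendered);
```

With these sample values the intensity works out to 3, so the grid region is rendered as the character at index 3.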

Linus Torvalds: 1 Pixel Per Character, 16 Characters, Font Size 5, Zoom 60


Converting Text to an Image

When rendering high definition ASCII art the resulting text can easily consist of several thousand characters. Attempting to display such a vast number of characters in a traditional text editor would in most scenarios be futile. An alternative method of retaining a high definition whilst still being viewable can be achieved through creating an image from the rendered text and then reducing the image dimensions.

The sample code employs the following steps when converting rendered text to an image:

  1. Determine Required Image Dimensions – Determine the image dimensions required to fit the rendered text.
  2. Create a new Image and set the background colour – After having determined the required dimensions create a new image consisting of those dimensions. Set every pixel in the new image to Black.
  3. Draw Rendered Text – The rendered text should be drawn on the new image in plain White.
  4. Resize Image – In order to ensure more manageable dimensions resize the image with a specified resize factor.

Alan Turing: 1 Pixel Per Character, 16 Characters, Font Size 4, Zoom 100


Implementing an Image ASCII Filter

The sample source code implements four methods in the process of applying an Image ASCII Filter:

  • ASCIIFilter
  • GenerateRandomString
  • RandomStringSort
  • GetColorCharacter

The GenerateRandomString method, as the name implies, generates a string consisting of randomly selected characters. The number of characters contained in the string will be determined by the parameter value passed to this method. The following code snippet provides the implementation of the GenerateRandomString method:

private static string GenerateRandomString(int maxSize) 
{
    StringBuilder stringBuilder = new StringBuilder(maxSize); 
    Random randomChar = new Random(); 

    char charValue;

    for (int k = 0; k < maxSize; k++)
    {
        charValue = (char)(Math.Floor(255 * randomChar.NextDouble() * 4));

        if (stringBuilder.ToString().IndexOf(charValue) != -1)
        {
            charValue = (char)(Math.Floor((byte)charValue * randomChar.NextDouble()));
        }

        if (Char.IsControl(charValue) == false &&
            Char.IsPunctuation(charValue) == false &&
            stringBuilder.ToString().IndexOf(charValue) == -1)
        {
            stringBuilder.Append(charValue);
            randomChar = new Random((int)((byte)charValue * (k + 1) + DateTime.Now.Ticks));
        }
        else
        {
            randomChar = new Random((int)((byte)charValue * (k + 1) + DateTime.UtcNow.Ticks));
            k -= 1;
        }
    }

    return stringBuilder.ToString().RandomStringSort();
}

Sir Tim Berners-Lee: 4 Pixels Per Character, 16 Characters, Font Size 6, Zoom 100


The RandomStringSort method has been defined as an extension method targeting the string class. This method provides a means of sorting a string in a random manner, in essence shuffling a string’s characters. The definition is as follows:

public static string RandomStringSort(this string stringValue) 
{
    char[] charArray = stringValue.ToCharArray(); 

    Random randomIndex = new Random((byte)charArray[0]);
    int iterator = charArray.Length;

    while (iterator > 1)
    {
        iterator -= 1;

        int nextIndex = randomIndex.Next(iterator + 1);

        char nextValue = charArray[nextIndex];
        charArray[nextIndex] = charArray[iterator];
        charArray[iterator] = nextValue;
    }

    return new string(charArray);
}

Anders Hejlsberg: 3 Pixels Per Character, 12 Characters, Font Size 5, Zoom 50


The sample source code defines the GetColorCharacter method, intended to map pixel colour values to character values. This method has been defined as a private static method. The definition is as follows:

private static string colorCharacters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; 

private static string GetColorCharacter(int blue, int green, int red)
{
    string colorChar = String.Empty;
    int intensity = (blue + green + red) / 3 * (colorCharacters.Length - 1) / 255;

    colorChar = colorCharacters.Substring(intensity, 1).ToUpper();
    colorChar += colorChar.ToLower();

    return colorChar;
}
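
As a quick sanity check of the integer arithmetic above, consider an average colour of red 200, green 100 and blue 60 against the default 26-letter character string; the colour values are arbitrary sample data:

```csharp
using System;

string colorCharacters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";

int blue = 60, green = 100, red = 200;

// (360 / 3) * 25 / 255 = 120 * 25 / 255 = 3000 / 255 = 11 with integer division
int intensity = (blue + green + red) / 3 * (colorCharacters.Length - 1) / 255;

// The method returns the character twice, upper case followed by lower case,
// presumably to compensate for characters being taller than they are wide.
string colorChar = colorCharacters.Substring(intensity, 1).ToUpper();
colorChar += colorChar.ToLower();

Console.WriteLine(colorChar);
```

For this sample colour the intensity index is 11, yielding the character pair "Ll".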

Bjarne Stroustrup: 1 Pixel Per Character, 12 Characters, Font Size 4, Zoom 100


The ASCIIFilter method defined by the sample source code has the task of translating source/input images into text based ASCII art. This method has been defined as an extension method targeting the Bitmap class. The following code snippet provides the definition:

public static string ASCIIFilter(this Bitmap sourceBitmap, int pixelBlockSize,  
                                                           int colorCount = 0) 
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle (0, 0, 
                            sourceBitmap.Width, sourceBitmap.Height), 
                                              ImageLockMode.ReadOnly, 
                                        PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    StringBuilder asciiArt = new StringBuilder();

    int avgBlue = 0;
    int avgGreen = 0;
    int avgRed = 0;
    int offset = 0;

    int rows = sourceBitmap.Height / pixelBlockSize;
    int columns = sourceBitmap.Width / pixelBlockSize;

    if (colorCount > 0)
    {
        colorCharacters = GenerateRandomString(colorCount);
    }

    for (int y = 0; y < rows; y++)
    {
        for (int x = 0; x < columns; x++)
        {
            avgBlue = 0;
            avgGreen = 0;
            avgRed = 0;

            for (int pY = 0; pY < pixelBlockSize; pY++)
            {
                for (int pX = 0; pX < pixelBlockSize; pX++)
                {
                    offset = y * pixelBlockSize * sourceData.Stride +
                             x * pixelBlockSize * 4;

                    offset += pY * sourceData.Stride;
                    offset += pX * 4;

                    avgBlue += pixelBuffer[offset];
                    avgGreen += pixelBuffer[offset + 1];
                    avgRed += pixelBuffer[offset + 2];
                }
            }

            avgBlue = avgBlue / (pixelBlockSize * pixelBlockSize);
            avgGreen = avgGreen / (pixelBlockSize * pixelBlockSize);
            avgRed = avgRed / (pixelBlockSize * pixelBlockSize);

            asciiArt.Append(GetColorCharacter(avgBlue, avgGreen, avgRed));
        }

        asciiArt.Append("\r\n");
    }

    return asciiArt.ToString();
}

Linus Torvalds: 1 Pixel Per Character, 8 Characters, Font Size 4, Zoom 80


Implementing Text to Image Functionality

The sample source code implements the GDI+ Graphics class when drawing rendered text onto a Bitmap. The sample source code defines the TextToImage method, an extension method extending the string class. The definition is listed as follows:

public static Bitmap TextToImage(this string text, Font font,  
                                                float factor) 
{
    Bitmap textBitmap = new Bitmap(1, 1); 

    Graphics graphics = Graphics.FromImage(textBitmap);

    int width = (int)Math.Ceiling(graphics.MeasureString(text, font).Width * factor);
    int height = (int)Math.Ceiling(graphics.MeasureString(text, font).Height * factor);

    graphics.Dispose();

    textBitmap = new Bitmap(width, height, PixelFormat.Format32bppArgb);

    graphics = Graphics.FromImage(textBitmap);
    graphics.Clear(Color.Black);

    graphics.CompositingQuality = CompositingQuality.HighQuality;
    graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
    graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
    graphics.SmoothingMode = SmoothingMode.HighQuality;
    graphics.TextRenderingHint = TextRenderingHint.AntiAliasGridFit;

    graphics.ScaleTransform(factor, factor);
    graphics.DrawString(text, font, Brushes.White, new PointF(0, 0));

    graphics.Flush();
    graphics.Dispose();

    return textBitmap;
}

Sir Tim Berners-Lee: 1 Pixel Per Character, 32 Characters, Font Size 4, Zoom 100


Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction.

The following section lists the original image files that were used as source/input images in generating the ASCII art images found throughout this article.

Alan Turing


Anders Hejlsberg


Bjarne Stroustrup


Linus Torvalds


Tim Berners-Lee


Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images, the links to which can be found here:


C# How to: Stained Glass Image Filter

Article Purpose

This article serves to provide a detailed discussion and implementation of a Stained Glass Image Filter. Primary topics explored include: creating Voronoi Diagrams, and Pixel Coordinate distance calculations implementing the Euclidean, Manhattan and Chebyshev methods. In addition, this article explores Gradient Based Edge Detection implementing threshold values.

Zurich: Block Size 15, Factor 4, Euclidean


Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the Sample Application

This article’s accompanying sample source code includes a sample application. The sample application provides an implementation of the concepts explored by this article. Concepts discussed can be easily replicated and tested by using the sample application.

Source/input images can be specified from the local system when clicking the Load Image button. Additionally users also have the option to save resulting filtered images by clicking the Save Image button.

The sample application through its user interface allows a user to specify several filter configuration options. Two main categories of configuration options have been defined as Block Properties and Edge Properties.

Block Properties relate to the process of rendering a Voronoi Diagram. The following configuration options have been implemented:

  • Block Size – During the process of rendering a Voronoi Diagram regions or blocks of equal shape and size have to be defined. These uniform regions/blocks form the basis of rendering uniquely shaped regions later on. The Block Size option determines the width and height of an individual region/block. Larger values result in larger non-uniform regions being rendered. Smaller values in return result in smaller non-uniform regions being rendered.
  • Distance Factor – The Distance Factor option determines the extent to which a pixel’s containing region will be calculated. Possible values range from 1 to 4 inclusive. A Distance Factor value of 4 equates to precise calculation of a pixel’s containing region, whereas a value of 1 results in containing regions often registering pixels that should be part of a neighbouring region. Values closer to 4 result in more varied region shapes. Values closer to 1 result in regions being rendered having more of a uniform shape/pattern.
  • Distance Formula – The distance between a pixel’s coordinates and a region’s outline determines whether that pixel should be considered part of a region. The sample application implements three different methods of calculating pixel distance: Euclidean, Manhattan and Chebyshev. Each method results in region shapes being rendered differently.

Salzburg: Block Size 20, Factor 1, Chebyshev, Edge Threshold 2 


Edge Properties relate to the implementation of Image Gradient Based Edge Detection. Edge Detection is an optional filter and can be enabled/disabled through the user interface. The implementation of Edge Detection serves to highlight/outline regions rendered as part of a Voronoi Diagram. The configuration options implemented are:

  • Highlight Edges – Boolean value indicating whether or not Edge Detection should be applied.
  • Threshold – In calculating image gradients a threshold value determines if a pixel forms part of an edge. Higher threshold values result in fewer edges being expressed. Lower threshold values result in more edges being expressed.
  • Colour – If a pixel has been determined as forming part of an edge, the resulting pixel colour will be determined by the colour value specified by the user.

The following image is a screenshot of the Stained Glass Image Filter sample application in action:

Stained Glass Image Filter Sample Application 

Locarno: Block Size 10, Factor 4, Euclidean


Stained Glass

The Stained Glass Image Filter detailed in this article operates on the basis of implementing modifications upon a specified sample/input image, producing a resulting image which resembles the appearance of stained glass artwork.

A common variant of stained glass artwork comes in the form of several individual pieces of coloured glass being combined in order to create an image. The sample source code employs a similar method of combining what appears to be non-uniform puzzle pieces. The following list provides a broad overview of the steps involved in applying a Stained Glass Image Filter:

  1. Render a Voronoi Diagram – Through rendering a Voronoi Diagram the resulting image will be divided into a number of regions. Each region is intended to represent an individual glass puzzle piece. The following section of this article provides a detailed discussion on rendering Voronoi Diagrams.
  2. Assign each Pixel to a Voronoi Diagram Region – Each pixel forming part of the source/input image should be iterated. Whilst iterating pixels determine the region to which a pixel should be associated. A pixel should be associated to the region whose border has been determined the nearest to the pixel. In a following section of this article a detailed discussion regarding Pixel Coordinate Distance Calculations can be found.
  3. Determine each Region’s Colour Mean – Each region will only express a single colour value. A region’s colour equates to the average colour as expressed by all the pixels forming part of that region. Once the average colour value of a region has been determined every pixel forming part of that region should be set to the average colour.
  4. Implement Edge Detection – If the user configuration option indicates that Edge Detection should be implemented, apply Gradient Based Edge Detection. This method of Edge Detection has been discussed in detail in a following section of this article.

Bad Ragaz: Block Size 10, Factor 1, Manhattan 


Voronoi Diagrams

Voronoi Diagrams represent a fairly uncomplicated concept. In contrast, the implementation of Voronoi Diagrams proves somewhat more of a challenge. From Wikipedia we gain the following definition:

In mathematics, a Voronoi diagram is a way of dividing space into a number of regions. A set of points (called seeds, sites, or generators) is specified beforehand and for each seed there will be a corresponding region consisting of all points closer to that seed than to any other. The regions are called Voronoi cells. It is dual to the Delaunay triangulation.

In this article Voronoi Diagrams are generated resulting in regions expressing random shapes. Although region shapes are randomly generated, the parameters or ranges within which random values are selected are fixed/constant. The steps required in generating a Voronoi Diagram can be detailed as follows:

  1. Define fixed size square regions – By making use of the user specified Block/Region Size value, group pixels together into square regions.
  2. Determine a Seed Value for Random number generation – Determine the sum total of pixel colour components of all the pixels forming part of a square region. The colour sum total value should be used as a seed value when generating random numbers in the next step.
  3. Determine a Random XY coordinate within each square region – Generate two random numbers, specifying each region’s coordinate boundaries as minimum and maximum boundaries in generating random numbers. Keep record of every new randomly generated XY-Coordinate value.
  4. Associate Pixels and Regions – A pixel should be associated to the Random Coordinate point nearest to that pixel. Determine the Random Coordinate nearest to each pixel in the source/input image. The method implemented in calculating coordinate distance depends on the configuration value specified by the user.
  5. Set Region Colours – Each pixel forming part of the same region should be set to the same colour. The colour assigned to a region’s pixels will be determined by the average colour value of the region’s pixels.
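
Steps 1 to 3 can be sketched as a small, self-contained C# example. The image dimensions, block size and per-block colour totals below are assumptions used purely for illustration:

```csharp
using System;
using System.Collections.Generic;

int width = 40, height = 40, blockSize = 10;
var seedPoints = new List<(int X, int Y)>();

for (int row = 0; row < height; row += blockSize)
{
    for (int col = 0; col < width; col += blockSize)
    {
        // Stand-in for the block's colour sum total (step 2); the sample
        // source code totals the colour components of every pixel in the block.
        int colourTotal = row * width + col;

        // Seed the random generator per block, then pick one random
        // coordinate bounded by the block's region (step 3).
        var random = new Random(colourTotal);
        seedPoints.Add((random.Next(0, blockSize) + col,
                        random.Next(0, blockSize) + row));
    }
}

Console.WriteLine(seedPoints.Count);
```

A 40×40 image divided into 10×10 blocks yields 16 seed points, one per block, each guaranteed to fall inside its own block.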

The following image illustrates an example consisting of 10 regions:

2Ddim-L2norm-10site

Port Edward: Block Size 10, Factor 1, Chebyshev, Edge Threshold 2


Calculating Pixel Coordinate Distances

The sample source code provides three different coordinate distance calculation methods. The supported methods are: Euclidean, Manhattan and Chebyshev. A pixel’s nearest randomly generated coordinate depends on the distance between that pixel and the random coordinate. Each method of calculating distance would in most instances produce different output values, which in turn influences the region to which a pixel will be associated.

The most common method of distance calculation, Euclidean distance, has been described by Wikipedia as follows:

In mathematics, the Euclidean distance or Euclidean metric is the "ordinary" distance between two points that one would measure with a ruler, and is given by the Pythagorean formula. By using this formula as distance, Euclidean space (or even any inner product space) becomes a metric space. The associated norm is called the Euclidean norm. Older literature refers to the metric as Pythagorean metric.

When calculating Euclidean distance the algorithm implemented can be expressed as follows:

Euclidean Distance Algorithm: distance = √((x₁ − x₂)² + (y₁ − y₂)²)

Zurich: Block Size 10, Factor 1, Euclidean


As an alternative to calculating Euclidean distance, the sample source code also implements Manhattan distance calculation. Manhattan distance is often also referred to as Taxicab distance, rectilinear distance or city block distance. From Wikipedia we gain the following definition:

Taxicab geometry, considered by Hermann Minkowski in the 19th century, is a form of geometry in which the usual distance function or metric of Euclidean geometry is replaced by a new metric in which the distance between two points is the sum of the absolute differences of their coordinates. The taxicab metric is also known as rectilinear distance, L1 distance or \ell_1 norm (see Lp space), city block distance, Manhattan distance, or Manhattan length, with corresponding variations in the name of the geometry.[1] The latter names allude to the grid layout of most streets on the island of Manhattan, which causes the shortest path a car could take between two intersections in the borough to have length equal to the intersections’ distance in taxicab geometry

When calculating Manhattan distance the algorithm implemented can be expressed as follows:

Manhattan Distance Algorithm: distance = |x₁ − x₂| + |y₁ − y₂|

Port Edward: Block Size 10, Factor 4, Euclidean


Chebyshev distance is a distance algorithm resembling the way in which a King chess piece may move on a chess board. The following definition we gain from Wikipedia:

In mathematics, Chebyshev distance (or Tchebychev distance), Maximum metric, or L∞ metric[1] is a metric defined on a vector space where the distance between two vectors is the greatest of their differences along any coordinate dimension.[2] It is named after Pafnuty Chebyshev.

It is also known as chessboard distance, since in the game of chess the minimum number of moves needed by a king to go from one square on a chessboard to another equals the Chebyshev distance between the centers of the squares, if the squares have side length one, as represented in 2-D spatial coordinates with axes aligned to the edges of the board.[3] For example, the Chebyshev distance between f6 and e2 equals 4.

When calculating Chebyshev distance the algorithm implemented can be expressed as follows:

Chebyshev Distance Algorithm: distance = max(|x₁ − x₂|, |y₁ − y₂|)
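
The three distance helpers invoked later by the StainedGlassColorFilter method are not listed in this article. The sketches below assume they implement the standard formulas under the names referenced by the filter code; the bodies are illustrative assumptions, not the article’s exact implementation:

```csharp
using System;

int CalculateDistanceEuclidean(int x1, int x2, int y1, int y2)
{
    // Straight-line distance, truncated to an integer.
    return (int)Math.Sqrt(Math.Pow(x1 - x2, 2) + Math.Pow(y1 - y2, 2));
}

int CalculateDistanceManhattan(int x1, int x2, int y1, int y2)
{
    // Sum of the absolute coordinate differences.
    return Math.Abs(x1 - x2) + Math.Abs(y1 - y2);
}

int CalculateDistanceChebyshev(int x1, int x2, int y1, int y2)
{
    // Greatest of the absolute coordinate differences.
    return Math.Max(Math.Abs(x1 - x2), Math.Abs(y1 - y2));
}

int euclidean = CalculateDistanceEuclidean(0, 3, 0, 4);
int manhattan = CalculateDistanceManhattan(0, 3, 0, 4);
int chebyshev = CalculateDistanceChebyshev(0, 3, 0, 4);

Console.WriteLine($"{euclidean} {manhattan} {chebyshev}");
```

For the same pair of points the three methods return 5, 7 and 4 respectively, which illustrates why the chosen formula changes the shape of the rendered regions.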

Salzburg: Block Size 20, Factor 1, Chebyshev


Gradient Based Edge Detection

Various methods of Edge Detection can easily be implemented in C#. Each method of Edge Detection provides a set of benefits, usually weighed against a set of trade-offs. In this article and the accompanying sample source code the Gradient Based Edge Detection method has been implemented.

Take into regard that every region within the rendered Voronoi Diagram will only express a single colour, although most regions differ in the single colour they express. Once all pixels have been associated to a region and all pixel colour values have been updated the resulting image defines mostly clearly distinguishable colour gradients. A Gradient Based method of Edge Detection performs efficiently at detecting the edges defined between different regions.

An edge can be considered as a difference in colour intensity relating to a specific direction. Only once all other tasks related to applying the Stained Glass Filter have been completed should Gradient Based Edge Detection be applied. The steps involved in applying Gradient Based Edge Detection can be described as follows:

  1. Iterate each pixel – Each pixel forming part of a source/input image should be iterated.
  2. Determine Horizontal and Vertical Gradients – Calculate the colour value difference between the currently iterated pixel’s left and right neighbour pixel as well as the top and bottom neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  3. Determine Horizontal Gradient – Calculate the colour value difference between the currently iterated pixel’s left and right neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  4. Determine Vertical Gradient – Calculate the colour value difference between the currently iterated pixel’s top and bottom neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  5. Determine Diagonal Gradients – Calculate the colour value difference between the currently iterated pixel’s North-Western and South-Eastern neighbour pixel as well as the North-Eastern and South-Western neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  6. Determine NW-SE Gradient – Calculate the colour value difference between the currently iterated pixel’s North-Western and South-Eastern neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  7. Determine NE-SW Gradient  – Calculate the colour value difference between the currently iterated pixel’s North-Eastern and South-Western neighbour pixel.
  8. Determine and set result pixel value – If any of the six gradients calculated exceeded the specified threshold value set the related pixel in the resulting image to the Edge Colour specified by the user, if not, set the related pixel equal to the source pixel colour value.
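
The gradient tests above can be illustrated on a single pixel of a small grayscale sample. The 3×3 values and the threshold below are assumptions, and real code would test the blue, green and red channels of a pixel buffer rather than a single grayscale value:

```csharp
using System;

int[,] gray =
{
    { 10, 10, 10 },
    { 10, 10, 200 },
    { 10, 10, 10 },
};

int x = 1, y = 1, threshold = 50;

// Horizontal and vertical gradients around the centre pixel (steps 2-4).
int horizontal = Math.Abs(gray[y, x - 1] - gray[y, x + 1]);
int vertical   = Math.Abs(gray[y - 1, x] - gray[y + 1, x]);

// Diagonal gradients, NW-SE and NE-SW (steps 5-7).
int nwSe = Math.Abs(gray[y - 1, x - 1] - gray[y + 1, x + 1]);
int neSw = Math.Abs(gray[y - 1, x + 1] - gray[y + 1, x - 1]);

// Step 8: any gradient exceeding the threshold marks the pixel as an edge.
bool isEdge = horizontal > threshold || vertical > threshold ||
              nwSe > threshold || neSw > threshold;

Console.WriteLine(isEdge);
```

Here the horizontal gradient is 190, well above the threshold of 50, so the centre pixel would be set to the user-specified Edge Colour.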

Zurich: Block Size 10, Factor 4, Chebyshev


Implementing a Stained Glass Image Filter

The sample source code defines two helper classes, both implemented when applying the Stained Glass Image Filter. The Pixel class represents a single pixel in terms of an XY-Coordinate and Red, Green and Blue values. The definition is as follows:

public class Pixel  
{
    private int xOffset = 0; 
    public int XOffset 
    {
        get { return xOffset; } set { xOffset = value; } 
    }

    private int yOffset = 0;
    public int YOffset
    {
        get { return yOffset; } set { yOffset = value; }
    }

    private byte blue = 0;
    public byte Blue
    {
        get { return blue; } set { blue = value; }
    }

    private byte green = 0;
    public byte Green
    {
        get { return green; } set { green = value; }
    }

    private byte red = 0;
    public byte Red
    {
        get { return red; } set { red = value; }
    }
}

Zurich: Block Size 10, Factor 1, Chebyshev, Edge Threshold 1


The VoronoiPoint class serves as a method of recording randomly generated coordinates and referencing a region’s associated pixels. The definition is as follows:

public class VoronoiPoint 
{
    private int xOffset = 0; 
    public int XOffset 
    {
        get  { return xOffset; } set { xOffset = value; }
    }

    private int yOffset = 0;
    public int YOffset
    {
        get { return yOffset; } set { yOffset = value; }
    }

    private int blueTotal = 0;
    public int BlueTotal
    {
        get { return blueTotal; } set { blueTotal = value; }
    }

    private int greenTotal = 0;
    public int GreenTotal
    {
        get { return greenTotal; } set { greenTotal = value; }
    }

    private int redTotal = 0;
    public int RedTotal
    {
        get { return redTotal; } set { redTotal = value; }
    }

    private int blueAverage = 0;
    public int BlueAverage
    {
        get { return blueAverage; }
    }

    private int greenAverage = 0;
    public int GreenAverage
    {
        get { return greenAverage; }
    }

    private int redAverage = 0;
    public int RedAverage
    {
        get { return redAverage; }
    }

    private List<Pixel> pixelCollection = new List<Pixel>();
    public List<Pixel> PixelCollection
    {
        get { return pixelCollection; }
    }

    public void CalculateAverages()
    {
        if (pixelCollection.Count > 0)
        {
            blueAverage = blueTotal / pixelCollection.Count;
            greenAverage = greenTotal / pixelCollection.Count;
            redAverage = redTotal / pixelCollection.Count;
        }
    }

    public void AddPixel(Pixel pixel)
    {
        blueTotal += pixel.Blue;
        greenTotal += pixel.Green;
        redTotal += pixel.Red;

        pixelCollection.Add(pixel);
    }
}

Zurich: Block Size 20, Factor 1, Euclidean, Edge Threshold 1


From the perspective of a filter implementation code base the only requirement comes in the form of having to invoke the StainedGlassColorFilter method; no additional work is required from external code consumers. The StainedGlassColorFilter method has been defined as an extension method targeting the Bitmap class. The StainedGlassColorFilter method definition is as follows:

public static Bitmap StainedGlassColorFilter(this Bitmap sourceBitmap,  
                                             int blockSize, double blockFactor, 
                                             DistanceFormulaType distanceType, 
                                             bool highlightEdges,  
                                             byte edgeThreshold, Color edgeColor) 
{
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int neighbourHoodTotal = 0;
    int sourceOffset = 0;
    int resultOffset = 0;
    int currentPixelDistance = 0;
    int nearestPixelDistance = 0;
    int nearestPointIndex = 0;

    Random randomizer = new Random();
    List<VoronoiPoint> randomPointList = new List<VoronoiPoint>();

    for (int row = 0; row < sourceBitmap.Height - blockSize; row += blockSize)
    {
        for (int col = 0; col < sourceBitmap.Width - blockSize; col += blockSize)
        {
            sourceOffset = row * sourceData.Stride + col * 4;
            neighbourHoodTotal = 0;

            for (int y = 0; y < blockSize; y++)
            {
                for (int x = 0; x < blockSize; x++)
                {
                    resultOffset = sourceOffset + y * sourceData.Stride + x * 4;

                    neighbourHoodTotal += pixelBuffer[resultOffset];
                    neighbourHoodTotal += pixelBuffer[resultOffset + 1];
                    neighbourHoodTotal += pixelBuffer[resultOffset + 2];
                }
            }

            randomizer = new Random(neighbourHoodTotal);

            VoronoiPoint randomPoint = new VoronoiPoint();
            randomPoint.XOffset = randomizer.Next(0, blockSize) + col;
            randomPoint.YOffset = randomizer.Next(0, blockSize) + row;

            randomPointList.Add(randomPoint);
        }
    }

    int rowOffset = 0;
    int colOffset = 0;

    for (int bufferOffset = 0; bufferOffset < pixelBuffer.Length - 4; bufferOffset += 4)
    {
        rowOffset = bufferOffset / sourceData.Stride;
        colOffset = (bufferOffset % sourceData.Stride) / 4;

        currentPixelDistance = 0;
        nearestPixelDistance = blockSize * 4;
        nearestPointIndex = 0;

        List<VoronoiPoint> pointSubset = new List<VoronoiPoint>();

        pointSubset.AddRange(from t in randomPointList
                             where rowOffset >= t.YOffset - blockSize * 2 &&
                                   rowOffset <= t.YOffset + blockSize * 2
                             select t);

        for (int k = 0; k < pointSubset.Count; k++)
        {
            if (distanceType == DistanceFormulaType.Euclidean)
            {
                currentPixelDistance = CalculateDistanceEuclidean(pointSubset[k].XOffset, colOffset, pointSubset[k].YOffset, rowOffset);
            }
            else if (distanceType == DistanceFormulaType.Manhattan)
            {
                currentPixelDistance = CalculateDistanceManhattan(pointSubset[k].XOffset, colOffset, pointSubset[k].YOffset, rowOffset);
            }
            else if (distanceType == DistanceFormulaType.Chebyshev)
            {
                currentPixelDistance = CalculateDistanceChebyshev(pointSubset[k].XOffset, colOffset, pointSubset[k].YOffset, rowOffset);
            }

            if (currentPixelDistance <= nearestPixelDistance)
            {
                nearestPixelDistance = currentPixelDistance;
                nearestPointIndex = k;

                if (nearestPixelDistance <= blockSize / blockFactor)
                {
                    break;
                }
            }
        }

        Pixel tmpPixel = new Pixel();
        tmpPixel.XOffset = colOffset;
        tmpPixel.YOffset = rowOffset;
        tmpPixel.Blue = pixelBuffer[bufferOffset];
        tmpPixel.Green = pixelBuffer[bufferOffset + 1];
        tmpPixel.Red = pixelBuffer[bufferOffset + 2];

        pointSubset[nearestPointIndex].AddPixel(tmpPixel);
    }

    for (int k = 0; k < randomPointList.Count; k++)
    {
        randomPointList[k].CalculateAverages();

        for (int i = 0; i < randomPointList[k].PixelCollection.Count; i++)
        {
            resultOffset = randomPointList[k].PixelCollection[i].YOffset * sourceData.Stride +
                           randomPointList[k].PixelCollection[i].XOffset * 4;

            resultBuffer[resultOffset] = (byte)randomPointList[k].BlueAverage;
            resultBuffer[resultOffset + 1] = (byte)randomPointList[k].GreenAverage;
            resultBuffer[resultOffset + 2] = (byte)randomPointList[k].RedAverage;
            resultBuffer[resultOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    if (highlightEdges == true)
    {
        resultBitmap = resultBitmap.GradientBasedEdgeDetectionFilter(edgeColor, edgeThreshold);
    }

    return resultBitmap;
}

Locarno: Block Size 10, Factor 4, Euclidean, Edge Threshold 1


Implementing Pixel Coordinate Distance Calculations

As mentioned earlier, this article and the accompanying sample source code support coordinate distance calculations through three different calculation methods, namely Euclidean distance, Manhattan distance and Chebyshev distance. The method of distance calculation implemented depends on the configuration option specified by the user.

The CalculateDistanceEuclidean method calculates distance implementing the Euclidean distance calculation. In order to aid faster execution this method will calculate the square root of a specific value only once. Once a square root has been calculated the result is kept in memory. The following code snippet lists the definition of the CalculateDistanceEuclidean method:

private static Dictionary <int,int> squareRoots = new Dictionary<int,int>(); 

private static int CalculateDistanceEuclidean(int x1, int x2, int y1, int y2)
{
    int square = (x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2);

    if (squareRoots.ContainsKey(square) == false)
    {
        squareRoots.Add(square, (int)Math.Sqrt(square));
    }

    return squareRoots[square];
}

The two other methods of calculating distance are implemented through the CalculateDistanceManhattan and CalculateDistanceChebyshev methods. Their definitions are as follows:

private static int CalculateDistanceManhattan(int x1, int x2, int y1, int y2) 
{
    return Math.Abs(x1 - x2) + Math.Abs(y1 - y2); 
}

private static int CalculateDistanceChebyshev(int x1, int x2, int y1, int y2)
{
    return Math.Max(Math.Abs(x1 - x2), Math.Abs(y1 - y2));
}
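The difference between the three metrics is easiest to see on a concrete coordinate pair. The following is a quick numeric sketch (in Python for brevity, not part of the sample project); the function names mirror the C# methods above:

```python
import math

def euclidean(x1, x2, y1, y2):
    # Straight-line distance, truncated to int as in the C# code.
    return int(math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2))

def manhattan(x1, x2, y1, y2):
    # Sum of horizontal and vertical displacement.
    return abs(x1 - x2) + abs(y1 - y2)

def chebyshev(x1, x2, y1, y2):
    # Largest single-axis displacement.
    return max(abs(x1 - x2), abs(y1 - y2))

# Between the pixel offsets (0, 0) and (3, 4):
print(euclidean(0, 3, 0, 4))   # 5
print(manhattan(0, 3, 0, 4))   # 7
print(chebyshev(0, 3, 0, 4))   # 4
```

Manhattan yields the largest value and Chebyshev the smallest for the same pair of points, which is why the choice of metric visibly changes the shape of the rendered Voronoi cells.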

Bad Ragaz: Block Size 12, Factor 1, Chebyshev


Implementing Gradient Based Edge Detection

Notice that the very last step performed by the StainedGlassColorFilter method involves implementing Gradient Based Edge Detection, applied only when edge highlighting had been specified by the user.

The following code snippet provides the implementation of the GradientBasedEdgeDetectionFilter extension method:

public static Bitmap GradientBasedEdgeDetectionFilter( 
                this Bitmap sourceBitmap, 
                Color edgeColour, 
                byte threshold = 0) 
{
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle (0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int sourceOffset = 0, gradientValue = 0;
    bool exceedsThreshold = false;

    for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++)
    {
        for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++)
        {
            sourceOffset = offsetY * sourceData.Stride + offsetX * 4;
            gradientValue = 0;
            exceedsThreshold = true;

            // Horizontal Gradient
            CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4,
                           ref gradientValue, threshold, 2);
            // Vertical Gradient
            exceedsThreshold = CheckThreshold(pixelBuffer,
                           sourceOffset - sourceData.Stride,
                           sourceOffset + sourceData.Stride,
                           ref gradientValue, threshold, 2);

            if (exceedsThreshold == false)
            {
                gradientValue = 0;

                // Horizontal Gradient
                exceedsThreshold = CheckThreshold(pixelBuffer,
                               sourceOffset - 4, sourceOffset + 4,
                               ref gradientValue, threshold);

                if (exceedsThreshold == false)
                {
                    gradientValue = 0;

                    // Vertical Gradient
                    exceedsThreshold = CheckThreshold(pixelBuffer,
                                   sourceOffset - sourceData.Stride,
                                   sourceOffset + sourceData.Stride,
                                   ref gradientValue, threshold);

                    if (exceedsThreshold == false)
                    {
                        gradientValue = 0;

                        // Diagonal Gradient : NW-SE
                        CheckThreshold(pixelBuffer,
                                       sourceOffset - 4 - sourceData.Stride,
                                       sourceOffset + 4 + sourceData.Stride,
                                       ref gradientValue, threshold, 2);
                        // Diagonal Gradient : NE-SW
                        exceedsThreshold = CheckThreshold(pixelBuffer,
                                       sourceOffset - sourceData.Stride + 4,
                                       sourceOffset - 4 + sourceData.Stride,
                                       ref gradientValue, threshold, 2);

                        if (exceedsThreshold == false)
                        {
                            gradientValue = 0;

                            // Diagonal Gradient : NW-SE
                            exceedsThreshold = CheckThreshold(pixelBuffer,
                                           sourceOffset - 4 - sourceData.Stride,
                                           sourceOffset + 4 + sourceData.Stride,
                                           ref gradientValue, threshold);

                            if (exceedsThreshold == false)
                            {
                                gradientValue = 0;

                                // Diagonal Gradient : NE-SW
                                exceedsThreshold = CheckThreshold(pixelBuffer,
                                               sourceOffset - sourceData.Stride + 4,
                                               sourceOffset + sourceData.Stride - 4,
                                               ref gradientValue, threshold);
                            }
                        }
                    }
                }
            }

            if (exceedsThreshold == true)
            {
                resultBuffer[sourceOffset] = edgeColour.B;
                resultBuffer[sourceOffset + 1] = edgeColour.G;
                resultBuffer[sourceOffset + 2] = edgeColour.R;
            }

            resultBuffer[sourceOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Zurich: Block Size 15, Factor 1, Manhattan, Edge Threshold 1


Sample Images

This article features a rendered graphic illustrating an example which has been released into the public domain by its author, Augochy at the wikipedia project. This applies worldwide. The original can be downloaded from .

All of the photos that appear in this article were taken by myself. Photos listed under Zurich, Locarno and Bad Ragaz were shot in Switzerland. The photo listed as Salzburg was shot in Austria and the photo listed under Port Edward was shot in South Africa. In order to fully realize the extent to which the source photos had been modified, the following section details the original photos.

Zurich, Switzerland

Salzburg, Austria

Locarno, Switzerland

Bad Ragaz, Switzerland

Port Edward, South Africa

Zurich, Switzerland

Zurich, Switzerland

Zurich, Switzerland

Bad Ragaz, Switzerland

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Oil Painting and Cartoon Filter

Article Purpose

This article illustrates and provides a discussion and implementation of Oil Painting Filters and related Image Cartoon Filters.

Sunflower: Oil Painting, Filter 5, Levels 30, Cartoon Threshold 30


Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download .

Using the Sample Application

A sample application accompanies this article. The sample application creates a visual implementation of the concepts discussed throughout this article. Source/input images can be selected from the local system and, if desired, filter result images can be saved to the local file system.

The two main types of functionality exposed by the sample application can be described as Image Oil Painting Filters and Image Cartoon Filters. The user interface provides the following user input options:

  • Filter Size – The number of neighbouring pixels used in calculating each individual pixel value in regards to an Oil Painting Filter. Higher Filter sizes relate to a more intense Oil Painting Filter being applied. Lower Filter sizes relate to less intense Oil Painting Filters being applied.
  • Intensity Levels – Represents the number of Intensity Levels implemented when applying an Oil Painting Filter. Higher values result in a broader range of colour intensities forming part of the result image. Lower values will reduce the range of colour intensities forming part of the result image.
  • Cartoon Filter – A Boolean value indicating whether a Cartoon Filter should be applied in addition to the Oil Painting Filter.
  • Threshold – Only applicable when applying a Cartoon Filter. This option represents the threshold value implemented in determining whether a pixel forms part of an edge. Lower values result in more edges being highlighted. Higher values result in fewer edges being highlighted.

The following image is a screenshot of the Oil Painting Cartoon Filter sample application in action:

Oil Painting Cartoon Filter Sample Application

Rose: Oil Painting, Filter 15, Levels 10


Image Oil Painting Filter

The Image Oil Painting Filter consists of two main components: colour gradients and pixel colour intensities. As implied by the title, images resulting from this filter are similar in appearance to oil paintings. Result images express a lesser degree of detail when compared to source/input images. This filter also tends to output images which appear to have smaller colour ranges.

Four steps are required when implementing an Oil Painting Filter, indicated as follows:

  1. Iterate each pixel – Every pixel forming part of the source/input should be iterated. When iterating a pixel determine the neighbouring pixel values based on the specified filter size/filter range.
  2. Calculate Colour Intensity -  Determine the Colour Intensity of each pixel being iterated and that of the neighbouring pixels. The neighbouring pixels included should extend to a range determined by the Filter Size specified. The calculated value should be reduced in order to match a value ranging from zero to the number of Intensity Levels specified.
  3. Determine maximum neighbourhood colour intensity – When calculating the colour intensities of a pixel neighbourhood determine the maximum intensity value. In addition, record the occurrence of each intensity level and sum each of the Red, Green and Blue pixel colour component values equating to the same intensity level.
  4. Assign the result pixel – The value assigned to the corresponding pixel in the resulting image equates to the colour sum total of the pixels expressing the same intensity level. The sum total should be averaged by dividing the colour sum total by the intensity level occurrence count.
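The four steps above can be sketched per pixel as follows. This is an illustrative Python sketch under simplifying assumptions, not the sample project's code: the neighbourhood arrives as a list of (r, g, b) tuples rather than being read from a locked bitmap, and the `(levels - 1)` scaling mirrors the C# code's `levels = levels - 1` step.

```python
def oil_paint_pixel(neighbourhood, levels):
    """Return the oil-paint colour for one pixel, given its
    neighbourhood as (r, g, b) tuples and an intensity level count."""
    intensity_bin = [0] * levels
    r_bin = [0] * levels
    g_bin = [0] * levels
    b_bin = [0] * levels

    for r, g, b in neighbourhood:
        # Step 2: reduce the averaged brightness to one of `levels` levels.
        level = round((r + g + b) / 3.0 * (levels - 1) / 255.0)
        intensity_bin[level] += 1
        r_bin[level] += r
        g_bin[level] += g
        b_bin[level] += b

    # Step 3: the most frequently occurring intensity level wins.
    max_level = max(range(levels), key=lambda i: intensity_bin[i])
    count = intensity_bin[max_level]

    # Step 4: average the colour sums recorded at that level.
    return (r_bin[max_level] // count,
            g_bin[max_level] // count,
            b_bin[max_level] // count)

# A uniform neighbourhood maps to its own colour:
print(oil_paint_pixel([(100, 100, 100)] * 9, 30))  # (100, 100, 100)
```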

Roses: Oil Painting, Filter 11, Levels 60, Cartoon Threshold 80


When calculating a colour intensity reduced to fit the number of levels specified, the algorithm implemented can be expressed as follows:

Colour Intensity Level Algorithm

I = round( ((R + G + B) ÷ 3) × l ÷ 255 )

In the algorithm listed above the variables implemented can be explained as follows:

  • I – Intensity: The calculated intensity value.
  • R – Red: The value of a pixel’s Red colour component.
  • G – Green: The value of a pixel’s Green colour component.
  • B – Blue: The value of a pixel’s Blue colour component.
  • l – Number of intensity levels: The maximum number of intensity levels specified.
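As a worked example of the intensity level calculation, the following Python sketch plugs in a few hypothetical pixel values (the function and values are illustrative, not part of the sample source code):

```python
def intensity_level(r, g, b, levels):
    # I = round( ((R + G + B) / 3) * l / 255 )
    return round((r + g + b) / 3.0 * levels / 255.0)

print(intensity_level(0, 0, 0, 30))        # 0  -- black maps to the lowest level
print(intensity_level(255, 255, 255, 30))  # 30 -- white maps to the highest level
print(intensity_level(128, 64, 192, 30))   # 15 -- average brightness 128 lands mid-range
```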

Rose: Oil Painting, Filter 15, Levels 30


Cartoon Filter implementing Edge Detection

A Cartoon Filter effect can be achieved by combining an Image Oil Painting filter and an Edge Detection Filter. The Oil Painting filter has the effect of creating more gradual colour gradients, in other words reducing edge intensity.

The steps required in implementing a Cartoon filter can be listed as follows:

  1. Apply Oil Painting filter – Applying an Oil Painting Filter creates the perception of the result image having been painted by hand.
  2. Implement Edge Detection – Using the original source/input image create a new binary image detailing edges.
  3. Overlay edges on Oil Painting image – Iterate each pixel forming part of the edge detected image. If the pixel being iterated forms part of an edge, the related pixel in the Oil Painting filtered image should be set to black. Because the edge detected image was created as a binary image, a pixel forms part of an edge should that pixel equate to white.
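The overlay rule in step 3 reduces to a per-pixel choice, sketched here in Python (illustrative only; pixels are (R, G, B) tuples and the binary edge image contributes a value of 0 or 255):

```python
def cartoon_pixel(paint_pixel, edge_value):
    # White in the binary edge image marks an edge -> draw it black;
    # otherwise keep the oil-paint colour unchanged.
    return (0, 0, 0) if edge_value == 255 else paint_pixel

print(cartoon_pixel((200, 150, 90), 255))  # (0, 0, 0)
print(cartoon_pixel((200, 150, 90), 0))    # (200, 150, 90)
```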

Daisy: Oil Painting, Filter 7, Levels 30, Cartoon Threshold 40 


In the sample source code edge detection has been implemented through Gradient Based Edge Detection. This method of edge detection compares the difference in colour gradients between a pixel’s neighbouring pixels. A pixel forms part of an edge if the difference in neighbouring pixel colour values exceeds a specified threshold value. The steps involved in Gradient Based Edge Detection are as follows:

  1. Iterate each pixel – Each pixel forming part of a source/input should be iterated.
  2. Determine Horizontal and Vertical Gradients – Calculate the colour value difference between the currently iterated pixel’s left and right neighbour pixel as well as the top and bottom neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  3. Determine Horizontal Gradient – Calculate the colour value difference between the currently iterated pixel’s left and right neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  4. Determine Vertical Gradient – Calculate the colour value difference between the currently iterated pixel’s top and bottom neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  5. Determine Diagonal Gradients – Calculate the colour value difference between the currently iterated pixel’s North-Western and South-Eastern neighbour pixel as well as the North-Eastern and South-Western neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  6. Determine NW-SE Gradient – Calculate the colour value difference between the currently iterated pixel’s North-Western and South-Eastern neighbour pixel. If the gradient exceeds the specified threshold continue to step 8.
  7. Determine NE-SW Gradient  – Calculate the colour value difference between the currently iterated pixel’s North-Eastern and South-Western neighbour pixel.
  8. Determine and set result pixel value – If any of the six gradients calculated exceeded the specified threshold value set the related pixel in the resulting image to white, if not, set the related pixel to black.
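The cascade of checks in steps 2 to 8 can be sketched as follows. This Python sketch is illustrative, not the sample code: it works on single-channel brightness values (the real filter sums the differences of all three colour channels), with neighbours passed in as a direction-to-brightness mapping:

```python
def is_edge(n, threshold):
    """n maps neighbour directions ('left', 'right', 'top', 'bottom',
    'nw', 'se', 'ne', 'sw') to brightness values; returns True when
    any of the six gradient checks meets the threshold."""
    h = abs(n['left'] - n['right'])      # horizontal gradient
    v = abs(n['top'] - n['bottom'])      # vertical gradient
    d1 = abs(n['nw'] - n['se'])          # NW-SE diagonal gradient
    d2 = abs(n['ne'] - n['sw'])          # NE-SW diagonal gradient

    checks = [h // 2 + v // 2,   # step 2: horizontal + vertical combined
              h,                 # step 3: horizontal alone
              v,                 # step 4: vertical alone
              d1 // 2 + d2 // 2, # step 5: both diagonals combined
              d1,                # step 6: NW-SE alone
              d2]                # step 7: NE-SW alone
    return any(g >= threshold for g in checks)

flat = dict(left=90, right=90, top=90, bottom=90, nw=90, se=90, ne=90, sw=90)
step = dict(flat, right=255)   # a sharp brightness step to the right

print(is_edge(flat, 50))  # False -- uniform region, no gradient
print(is_edge(step, 50))  # True  -- horizontal gradient exceeds threshold
```

The halving of the combined checks mirrors the `divideBy` parameter of the C# CheckThreshold method, which averages the two directions being summed.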

Rose: Oil Painting, Filter 9, Levels 30


Implementing an Oil Painting Filter

The sample source code defines the OilPaintFilter method, an extension method targeting the Bitmap class. This method determines the maximum colour intensity from a pixel’s neighbouring pixels. The definition is detailed as follows:

public static Bitmap OilPaintFilter(this Bitmap sourceBitmap, 
                                       int levels, 
                                       int filterSize) 
{
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int[] intensityBin = new int[levels];
    int[] blueBin = new int[levels];
    int[] greenBin = new int[levels];
    int[] redBin = new int[levels];

    levels = levels - 1;

    int filterOffset = (filterSize - 1) / 2;
    int byteOffset = 0;
    int calcOffset = 0;
    int currentIntensity = 0;
    int maxIntensity = 0;
    int maxIndex = 0;

    double blue = 0;
    double green = 0;
    double red = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = green = red = 0;
            currentIntensity = maxIntensity = maxIndex = 0;

            intensityBin = new int[levels + 1];
            blueBin = new int[levels + 1];
            greenBin = new int[levels + 1];
            redBin = new int[levels + 1];

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                    currentIntensity = (int)Math.Round(((double)(pixelBuffer[calcOffset] +
                                       pixelBuffer[calcOffset + 1] +
                                       pixelBuffer[calcOffset + 2]) / 3.0 * levels) / 255.0);

                    intensityBin[currentIntensity] += 1;
                    blueBin[currentIntensity] += pixelBuffer[calcOffset];
                    greenBin[currentIntensity] += pixelBuffer[calcOffset + 1];
                    redBin[currentIntensity] += pixelBuffer[calcOffset + 2];

                    if (intensityBin[currentIntensity] > maxIntensity)
                    {
                        maxIntensity = intensityBin[currentIntensity];
                        maxIndex = currentIntensity;
                    }
                }
            }

            blue = blueBin[maxIndex] / maxIntensity;
            green = greenBin[maxIndex] / maxIntensity;
            red = redBin[maxIndex] / maxIntensity;

            resultBuffer[byteOffset] = ClipByte(blue);
            resultBuffer[byteOffset + 1] = ClipByte(green);
            resultBuffer[byteOffset + 2] = ClipByte(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Rose: Oil Painting, Filter 7, Levels 20, Cartoon Threshold 20


Implementing a Cartoon Filter using Edge Detection

The sample source code defines the CheckThreshold method. The purpose of this method is to determine the difference in colour between two pixels. In addition this method compares the colour difference against the specified threshold value. The following code snippet provides the implementation:

private static bool CheckThreshold(byte[] pixelBuffer,  
                                   int offset1, int offset2,  
                                   ref int gradientValue,  
                                   byte threshold,  
                                   int divideBy = 1) 
{ 
    gradientValue += Math.Abs(pixelBuffer[offset1] -
                     pixelBuffer[offset2]) / divideBy;

    gradientValue += Math.Abs(pixelBuffer[offset1 + 1] -
                     pixelBuffer[offset2 + 1]) / divideBy;

    gradientValue += Math.Abs(pixelBuffer[offset1 + 2] -
                     pixelBuffer[offset2 + 2]) / divideBy;

    return (gradientValue >= threshold);
}

Rose: Oil Painting, Filter 13, Levels 15


The GradientBasedEdgeDetectionFilter method has been defined as an extension method targeting the Bitmap class. This method iterates each pixel forming part of the source/input image. Whilst iterating pixels the GradientBasedEdgeDetectionFilter method determines if the colour gradients in various directions exceed the specified threshold value. A pixel is considered as part of an edge if a colour gradient exceeds the threshold value. The implementation is as follows:

public static Bitmap GradientBasedEdgeDetectionFilter( 
                        this Bitmap sourceBitmap, 
                        byte threshold = 0) 
{ 
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle (0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int sourceOffset = 0, gradientValue = 0;
    bool exceedsThreshold = false;

    for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++)
    {
        for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++)
        {
            sourceOffset = offsetY * sourceData.Stride + offsetX * 4;
            gradientValue = 0;
            exceedsThreshold = true;

            // Horizontal Gradient
            CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4,
                           ref gradientValue, threshold, 2);
            // Vertical Gradient
            exceedsThreshold = CheckThreshold(pixelBuffer,
                           sourceOffset - sourceData.Stride,
                           sourceOffset + sourceData.Stride,
                           ref gradientValue, threshold, 2);

            if (exceedsThreshold == false)
            {
                gradientValue = 0;

                // Horizontal Gradient
                exceedsThreshold = CheckThreshold(pixelBuffer,
                               sourceOffset - 4, sourceOffset + 4,
                               ref gradientValue, threshold);

                if (exceedsThreshold == false)
                {
                    gradientValue = 0;

                    // Vertical Gradient
                    exceedsThreshold = CheckThreshold(pixelBuffer,
                                   sourceOffset - sourceData.Stride,
                                   sourceOffset + sourceData.Stride,
                                   ref gradientValue, threshold);

                    if (exceedsThreshold == false)
                    {
                        gradientValue = 0;

                        // Diagonal Gradient : NW-SE
                        CheckThreshold(pixelBuffer,
                                       sourceOffset - 4 - sourceData.Stride,
                                       sourceOffset + 4 + sourceData.Stride,
                                       ref gradientValue, threshold, 2);
                        // Diagonal Gradient : NE-SW
                        exceedsThreshold = CheckThreshold(pixelBuffer,
                                       sourceOffset - sourceData.Stride + 4,
                                       sourceOffset - 4 + sourceData.Stride,
                                       ref gradientValue, threshold, 2);

                        if (exceedsThreshold == false)
                        {
                            gradientValue = 0;

                            // Diagonal Gradient : NW-SE
                            exceedsThreshold = CheckThreshold(pixelBuffer,
                                           sourceOffset - 4 - sourceData.Stride,
                                           sourceOffset + 4 + sourceData.Stride,
                                           ref gradientValue, threshold);

                            if (exceedsThreshold == false)
                            {
                                gradientValue = 0;

                                // Diagonal Gradient : NE-SW
                                exceedsThreshold = CheckThreshold(pixelBuffer,
                                               sourceOffset - sourceData.Stride + 4,
                                               sourceOffset + sourceData.Stride - 4,
                                               ref gradientValue, threshold);
                            }
                        }
                    }
                }
            }

            resultBuffer[sourceOffset] = (byte)(exceedsThreshold ? 255 : 0);
            resultBuffer[sourceOffset + 1] = resultBuffer[sourceOffset];
            resultBuffer[sourceOffset + 2] = resultBuffer[sourceOffset];
            resultBuffer[sourceOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Rose: Oil Painting, Filter 7, Levels 20, Cartoon Threshold 20


The CartoonFilter method serves to combine the images generated by the OilPaintFilter and GradientBasedEdgeDetectionFilter methods. The CartoonFilter method has been defined as an extension method targeting the Bitmap class. In this method pixels detected as forming part of an edge are set to black in the Oil Painting filtered image. The definition is as follows:

public static Bitmap CartoonFilter(this Bitmap sourceBitmap,
                                       int levels, 
                                       int filterSize, 
                                       byte threshold) 
{
    Bitmap paintFilterImage =  
           sourceBitmap.OilPaintFilter(levels, filterSize);

    Bitmap edgeDetectImage =
           sourceBitmap.GradientBasedEdgeDetectionFilter(threshold);

    BitmapData paintData = paintFilterImage.LockBits(new Rectangle(0, 0,
                           paintFilterImage.Width, paintFilterImage.Height),
                           ImageLockMode.ReadOnly,
                           PixelFormat.Format32bppArgb);

    byte[] paintPixelBuffer = new byte[paintData.Stride * paintData.Height];

    Marshal.Copy(paintData.Scan0, paintPixelBuffer, 0, paintPixelBuffer.Length);
    paintFilterImage.UnlockBits(paintData);

    BitmapData edgeData = edgeDetectImage.LockBits(new Rectangle(0, 0,
                          edgeDetectImage.Width, edgeDetectImage.Height),
                          ImageLockMode.ReadOnly,
                          PixelFormat.Format32bppArgb);

    byte[] edgePixelBuffer = new byte[edgeData.Stride * edgeData.Height];

    Marshal.Copy(edgeData.Scan0, edgePixelBuffer, 0, edgePixelBuffer.Length);
    edgeDetectImage.UnlockBits(edgeData);

    byte[] resultBuffer = new byte[edgeData.Stride * edgeData.Height];

    for (int k = 0; k + 4 < paintPixelBuffer.Length; k += 4)
    {
        if (edgePixelBuffer[k] == 255 ||
            edgePixelBuffer[k + 1] == 255 ||
            edgePixelBuffer[k + 2] == 255)
        {
            resultBuffer[k] = 0;
            resultBuffer[k + 1] = 0;
            resultBuffer[k + 2] = 0;
            resultBuffer[k + 3] = 255;
        }
        else
        {
            resultBuffer[k] = paintPixelBuffer[k];
            resultBuffer[k + 1] = paintPixelBuffer[k + 1];
            resultBuffer[k + 2] = paintPixelBuffer[k + 2];
            resultBuffer[k + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Rose: Oil Painting, Filter 9, Levels 25, Cartoon Threshold 25


Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following image files feature as sample images:

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Image Transform Shear

Article Purpose

This article is focussed on illustrating the steps required in performing an Image Shear Transformation. All of the concepts explored have been implemented by means of raw pixel data processing; no conventional drawing methods, such as GDI, are required.

Rabbit: Shear X 0.4, Y 0.4


Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

This article features a sample application which is included as part of the accompanying sample source code. The concepts explored in this article can be illustrated in a practical implementation using the sample application.

The sample application enables a user to load source/input images from the local system by clicking the Load Image button. In addition users are also able to save output result images to the local file system by clicking the Save Image button.

Image shearing can be applied to either X or Y, or both X and Y pixel coordinates. When using the sample application the user has the option of adjusting Shear factors, as indicated on the user interface by the numeric up/down controls labelled Shear X and Shear Y.

The following image is a screenshot of the Image Transform Shear Sample Application in action:

Image Transform Shear Sample Application

Rabbit: Shear X -0.5, Y -0.25

Rabbit Shear X -0.5, Y -0.25

Image Shear Transformation

A good definition of the term Shear Mapping can be found on Wikipedia:

In plane geometry, a shear mapping is a linear map that displaces each point in a fixed direction, by an amount proportional to its signed distance from a line that is parallel to that direction.[1] This type of mapping is also called shear transformation, transvection, or just shearing.

A shear transformation can be applied as a horizontal shear, a vertical shear or as both. The algorithms implemented when performing a shear transformation can be expressed as follows:

Horizontal Shear Algorithm

Shear(x) = x + σ · y

Vertical Shear Algorithm

Shear(y) = y + σ · x

The algorithm description:

  • Shear(x) : The result of a horizontal shear – the calculated X-Coordinate representing a sheared coordinate.
  • Shear(y) : The result of a vertical shear – the calculated Y-Coordinate representing a sheared coordinate.
  • σ : The lower case version of the Greek alphabet letter Sigma – represents the Shear Factor.
  • x : The X-Coordinate originating from the source/input image – the horizontal coordinate value intended to be sheared.
  • y : The Y-Coordinate originating from the source/input image – the vertical coordinate value intended to be sheared.
  • H : Source height in pixels.
  • W : Source width in pixels.

Note: When performing a shear transformation implementing both the horizontal and vertical planes, each coordinate plane can be calculated using a different shearing factor.

The algorithms have been adapted in order to implement a middle pixel offset by means of subtracting the product of the related plane boundary and the specified Shearing Factor, which will then be divided by a factor of two.
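Putting the shear formulas and the middle pixel offset together, the per-point arithmetic can be sketched as follows. This Python sketch mirrors the arithmetic of the sample code's ShearXY and ShearImage methods but is illustrative only:

```python
def shear_point(x, y, shear_x, shear_y, width, height):
    """Shear(x) = x + sx*y and Shear(y) = y + sy*x, re-centred by
    subtracting half of the plane boundary times the shear factor."""
    offset_x = round(width * shear_x / 2.0)
    offset_y = round(height * shear_y / 2.0)
    return (round(x + shear_x * y) - offset_x,
            round(y + shear_y * x) - offset_y)

# For a 100x100 image sheared horizontally by 0.4, the centre pixel
# stays put while the top and bottom rows lean in opposite directions:
print(shear_point(50, 50, 0.4, 0.0, 100, 100))  # (50, 50)
print(shear_point(50, 0, 0.4, 0.0, 100, 100))   # (30, 0)
print(shear_point(50, 99, 0.4, 0.0, 100, 100))  # (70, 99)
```

Without the offset the entire image would drift in the shear direction; subtracting half the maximum displacement keeps the result centred on the canvas.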

Rabbit: Shear X 1.0, Y 0.1

Rabbit Shear X 1.0, Y 0.1

Implementing a Shear Transformation

The sample source code performs shear transformations through the implementation of the ShearXY and ShearImage extension methods.

The ShearXY extension method targets the Point structure. The algorithms discussed in the previous sections have been implemented in this function from a C# perspective. The definition as illustrated by the following code snippet:

public static Point ShearXY(this Point source, double shearX, 
                                               double shearY, 
                                               int offsetX,  
                                               int offsetY) 
{
    Point result = new Point(); 

    result.X = (int)Math.Round(source.X + shearX * source.Y);
    result.X -= offsetX;

    result.Y = (int)Math.Round(source.Y + shearY * source.X);
    result.Y -= offsetY;

    return result;
}

Rabbit: Shear X 0.0, Y 0.5

Rabbit Shear X 0.0, Y 0.5

The ShearImage extension method targets the Bitmap class. This method expects as parameter values a horizontal and a vertical shearing factor. Providing a shearing factor of zero results in no shearing being applied in the corresponding direction. The definition as follows:

public static Bitmap ShearImage(this Bitmap sourceBitmap, 
                               double shearX, 
                               double shearY) 
{ 
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int xOffset = (int)Math.Round(sourceBitmap.Width * shearX / 2.0);
    int yOffset = (int)Math.Round(sourceBitmap.Height * shearY / 2.0);

    int sourceXY = 0;
    int resultXY = 0;

    Point sourcePoint = new Point();
    Point resultPoint = new Point();

    Rectangle imageBounds = new Rectangle(0, 0, 
                            sourceBitmap.Width, sourceBitmap.Height);

    for (int row = 0; row < sourceBitmap.Height; row++)
    {
        for (int col = 0; col < sourceBitmap.Width; col++)
        {
            sourceXY = row * sourceData.Stride + col * 4;

            sourcePoint.X = col;
            sourcePoint.Y = row;

            if (sourceXY >= 0 && sourceXY + 3 < pixelBuffer.Length)
            {
                resultPoint = sourcePoint.ShearXY(shearX, shearY, 
                                                  xOffset, yOffset);

                resultXY = resultPoint.Y * sourceData.Stride + 
                           resultPoint.X * 4;

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    // Copy the pixel to the right of the sheared
                    // coordinate, reducing gaps caused by rounding
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 4] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 5] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 6] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    // Copy the pixel to the left of the sheared coordinate
                    if (resultXY - 4 >= 0)
                    {
                        resultBuffer[resultXY - 4] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY - 3] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY - 2] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY - 1] = 255;
                    }

                    // Copy the sheared pixel itself
                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 1] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 2] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }
            }
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = 
               resultBitmap.LockBits(new Rectangle(0, 0, 
               resultBitmap.Width, resultBitmap.Height), 
               ImageLockMode.WriteOnly, 
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Rabbit: Shear X 0.5, Y 0.0

Rabbit Shear X 0.5, Y 0.0

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction.

The sample images featuring the image of a Desert Cottontail Rabbit are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia. The original author is attributed as Larry D. Moore.

The sample images featuring the image of a Rabbit in Snow are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia. The original author is attributed as George Tuli.

The sample images featuring the image of an Eastern Cottontail Rabbit have been released into the public domain by their author. The original image can be downloaded from .

The sample images featuring the image of a Mountain Cottontail Rabbit are in the public domain in the United States because the original is a work prepared by an officer or employee of the United States Government as part of that person’s official duties under the terms of Title 17, Chapter 1, Section 105 of the US Code. The original image can be downloaded from .

Rabbit: Shear X 1.0, Y 0.0

Rabbit Shear X 1.0, Y 0.0

Rabbit: Shear X 0.5, Y 0.1

Rabbit Shear X 0.5, Y 0.1

Rabbit: Shear X -0.5, Y -0.25

Rabbit Shear X -0.5, Y -0.25

Rabbit: Shear X -0.5, Y 0.0

Rabbit Shear X -0.5, Y 0.0

Rabbit: Shear X 0.25, Y 0.0

Rabbit Shear X 0.25, Y 0.0

Rabbit: Shear X 0.50, Y 0.0

Rabbit Shear X 0.50, Y 0.0

Rabbit: Shear X 0.0, Y 0.5

Rabbit Shear X 0.0, Y 0.5

Rabbit: Shear X 0.0, Y 0.25

Rabbit Shear X 0.0, Y 0.25

Rabbit: Shear X 0.0, Y 1.0

Rabbit Shear X 0.0, Y 1.0

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Calculating Gaussian Kernels

Article Purpose

The purpose of this article is to explain and illustrate in detail the requirements involved in calculating Gaussian Kernels intended for use in image convolution when implementing Gaussian Blur filters. This article’s discussion spans from exploring concepts in theory and continues on to implement concepts through C# sample source code.

Ant: Gaussian Kernel 5×5 Weight 19

Ant Gaussian Kernel 5x5 Weight 19

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here

Calculating Gaussian Kernels Sample Source code

Using the Sample Application

A Sample Application forms part of the accompanying sample source code, intended to implement the topics discussed and also provides the means to replicate and test the concepts being illustrated.

The sample application is a Windows Forms based application which provides functionality enabling users to generate/calculate Gaussian Kernels. Calculation results are influenced through user specified options in the form of: Kernel Size and Weight.

Ladybird: Gaussian Kernel 5×5 Weight 5.5

Gaussian Kernel 5x5 Weight 5.5

In the sample application and related sample source code when referring to Kernel Size, a reference is being made relating to the physical size dimensions of the kernel/matrix used in convolution. When higher values are specified in setting the Kernel Size, the resulting output image will reflect a greater degree of blurring. Kernel Sizes being specified as lower values result in the output image reflecting a lesser degree of blurring.

In a similar fashion to the Kernel size value, the Weight value provided when generating a Kernel results in smoother/more blurred images when specified as higher values. Lower values assigned to the Weight value has the expected result of less blurring being evident in output images.

Prey Mantis: Gaussian Kernel 13×13 Weight 13

Prey Mantis Gaussian Kernel 13x13 Weight 13

The sample application has the ability to provide the user with a visual representation of the blurring implemented by the calculated kernel values. Users are able to select a source/input image from the local file system by clicking the Load Image button. When desired, users are able to save blurred/filtered images to the local file system by clicking the Save Image button.

The image below is a screenshot of the Gaussian Kernel Calculator sample application in action:

Gaussian Kernel Calculator Sample Application

Calculating Gaussian Convolution Kernels

The formula implemented in calculating Gaussian Kernels can be implemented in C# source code fairly easily. Once the method in which the formula operates has been grasped, the actual code implementation becomes straightforward.

The Gaussian Kernel formula can be expressed as follows:

G(x, y) = (1 / (2πσ²)) · e^(-(x² + y²) / (2σ²))

The formula contains a number of symbols, which define how the filter will be implemented. The symbols forming part of the Gaussian Kernel formula are described in the following list:

  • G(x, y) – A value calculated using the Gaussian Kernel formula. This value forms part of a Kernel, representing a single element.
  • π – Pi, one of the better known members of the Greek alphabet. The mathematical constant representing the ratio of a circle’s circumference to its diameter, approximately 3.14159.
  • σ – The lower case version of the Greek alphabet letter Sigma. This symbol simply represents a threshold or factor value, as specified by the user.
  • e – The formula references a lower case e symbol. The symbol represents Euler’s number. The value of Euler’s number has been defined as a mathematical constant equating to approximately 2.71828182846.
  • x, y – The variables referenced as x and y relate to pixel coordinates within an image. y represents the vertical offset or row, and x represents the horizontal offset or column.

Note: The formula’s implementation expects x and y to equal zero values when representing the coordinates of the pixel located in the middle of the kernel.

Ladybird: Gaussian Kernel 13×13 Weight 9.5

Gaussian Kernel 13x13 Weight 9.5

When calculating the kernel elements, the coordinate values expressed by x and y should reflect the signed distance in pixels from the middle pixel. The middle pixel itself has coordinates of zero; pixels to the left or above have negative coordinates, and pixels to the right or below have positive coordinates.

In order to gain a better grasp on the Gaussian kernel formula we can implement the formula in steps. If we were to create a 3×3 kernel and specified a weighting value of 5.5 our calculations can start off as indicated by the following illustration:

Gaussian Kernel Formula

The formula has been implemented on each element forming part of the kernel, 9 values in total. Coordinate values have now been replaced with actual values, differing for each position/element. Note that zero raised to the power of two equates to zero. In the illustration above the zero-valued exponents have been left in their exponential form to ease initial understanding, as opposed to simplifying them and potentially causing confusion. The following image illustrates the calculated values of each kernel element:

Gaussian Kernel Values non summed

Ant: Gaussian Kernel 9×9 Weight 19

Ant Gaussian Kernel 9x9 Weight 19

An important requirement to take note of at this point being that the sum total of all the elements contained as part of a kernel/matrix must equate to one. Looking at our calculated results that is not the case. The kernel needs to be modified in order to satisfy the requirement of having a sum total value of 1 when adding together all the elements of the kernel.

At this point the sum total of the kernel equates to 0.046322548968. We can correct the kernel values, ensuring the sum total of all kernel elements equate to 1. The kernel values should be updated by multiplying each element by one divided by the current kernel sum. In other words each item should be multiplied by:

1.0 / 0.046322548968

After updating the kernel by multiplying each element with the values mentioned above, the result as follows:

Calculated Gaussian Kernel Values

We have now successfully calculated a 3×3 Gaussian Blur kernel matrix which implements a weight value of 5.5. Implementing the Gaussian blur has the following effect:
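The worked example above can be replicated in a few lines of C#. The snippet below is a stand-alone sketch of the same calculation and does not form part of the sample source code:

```csharp
using System;

// 3x3 kernel with a weight (sigma) of 5.5, replicating the worked example
double weight = 5.5;
double calculatedEuler = 1.0 / (2.0 * Math.PI * Math.Pow(weight, 2));

double[,] kernel = new double[3, 3];
double sumTotal = 0;

for (int y = -1; y <= 1; y++)
{
    for (int x = -1; x <= 1; x++)
    {
        double distance = (x * x + y * y) / (2 * weight * weight);
        kernel[y + 1, x + 1] = calculatedEuler * Math.Exp(-distance);
        sumTotal += kernel[y + 1, x + 1];
    }
}

Console.WriteLine(sumTotal); // roughly 0.046322548968, as calculated above

// Normalise: multiply each element by 1.0 / sumTotal so the kernel sums to 1
for (int y = 0; y < 3; y++)
    for (int x = 0; x < 3; x++)
        kernel[y, x] *= 1.0 / sumTotal;
```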

Rose: Gaussian Kernel 3×3 Weight 5.5

Rose Gaussian Kernel 3x3 Weight 5.5

The Original Image

Rose_Amber_Flush_20070601

The calculated Gaussian Kernel can now be implemented when performing image convolution.

Implementing Gaussian Kernel Calculations

In this section of the article we will be exploring how to implement Gaussian Blur kernel calculations in terms of C# code. The sample source code defines the static MatrixCalculator class, exposing the static Calculate method. All of the formula calculation tasks discussed in the previous section have been implemented within this method.

As parameter values the method expects a value indicating the kernel size and a value representing the Weight value. The Calculate method returns a two dimensional array of type double. The return value array represents the calculated kernel.

The definition of the MatrixCalculator.Calculate method as follows:

public static double[,] Calculate(int length, double weight) 
{
    double[,] Kernel = new double[length, length]; 
    double sumTotal = 0; 

    int kernelRadius = length / 2; 
    double distance = 0; 

    double calculatedEuler = 1.0 /  
    (2.0 * Math.PI * Math.Pow(weight, 2)); 

    for (int filterY = -kernelRadius; 
         filterY <= kernelRadius; filterY++) 
    {
        for (int filterX = -kernelRadius; 
            filterX <= kernelRadius; filterX++) 
        {
            distance = ((filterX * filterX) +  
                       (filterY * filterY)) /  
                       (2 * (weight * weight)); 

            Kernel[filterY + kernelRadius,  
                   filterX + kernelRadius] =  
                   calculatedEuler * Math.Exp(-distance); 

            sumTotal += Kernel[filterY + kernelRadius,  
                               filterX + kernelRadius]; 
        } 
    } 

    for (int y = 0; y < length; y++) 
    { 
        for (int x = 0; x < length; x++) 
        { 
            Kernel[y, x] = Kernel[y, x] *  
                           (1.0 / sumTotal); 
        } 
    } 

    return Kernel; 
}
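Regardless of the size and weight specified, a correctly calculated kernel should sum to one, be symmetric, and peak at its middle element. The stand-alone sketch below, a local-function copy of the same calculation, checks these properties for a 5×5 kernel with a weight of 9.25:

```csharp
using System;

// Local-function copy of the kernel calculation, for verification purposes
double[,] Calculate(int length, double weight)
{
    double[,] kernel = new double[length, length];
    double sumTotal = 0;
    int kernelRadius = length / 2;
    double calculatedEuler = 1.0 / (2.0 * Math.PI * Math.Pow(weight, 2));

    for (int filterY = -kernelRadius; filterY <= kernelRadius; filterY++)
    {
        for (int filterX = -kernelRadius; filterX <= kernelRadius; filterX++)
        {
            double distance = ((filterX * filterX) + (filterY * filterY)) /
                              (2 * (weight * weight));

            kernel[filterY + kernelRadius, filterX + kernelRadius] =
                calculatedEuler * Math.Exp(-distance);

            sumTotal += kernel[filterY + kernelRadius, filterX + kernelRadius];
        }
    }

    for (int y = 0; y < length; y++)
        for (int x = 0; x < length; x++)
            kernel[y, x] *= 1.0 / sumTotal;

    return kernel;
}

double[,] result = Calculate(5, 9.25);

double sum = 0;
foreach (double element in result) { sum += element; }

Console.WriteLine(sum);                          // ~1.0
Console.WriteLine(result[2, 2] > result[0, 0]);  // the middle element is the largest
```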

Ladybird: Gaussian Kernel 19×19 Weight 9.5

Gaussian Kernel 19x19 Weight 9.5

The sample source code provides the definition of the ConvolutionFilter extension method, targeting the Bitmap class. This method accepts as a parameter a two dimensional array representing the matrix kernel to implement when performing image convolution. The matrix kernel value passed to this function originates from the calculated Gaussian kernel.

Detailed below is the definition of the ConvolutionFilter extension method:

public static Bitmap ConvolutionFilter(this Bitmap sourceBitmap,  
                                         double[,] filterMatrix,  
                                              double factor = 1,  
                                                   int bias = 0)  
{ 
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, 
                            sourceBitmap.Width, sourceBitmap.Height), 
                                              ImageLockMode.ReadOnly,  
                                        PixelFormat.Format32bppArgb); 

   
    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height]; 
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height]; 

   
    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length); 
    sourceBitmap.UnlockBits(sourceData); 

   
    double blue = 0.0; 
    double green = 0.0; 
    double red = 0.0; 

   
    int filterWidth = filterMatrix.GetLength(1); 
    int filterHeight = filterMatrix.GetLength(0); 

   
    int filterOffset = (filterWidth-1) / 2; 
    int calcOffset = 0; 

   
    int byteOffset = 0; 

   
    for (int offsetY = filterOffset; offsetY <  
        sourceBitmap.Height - filterOffset; offsetY++) 
    {
        for (int offsetX = filterOffset; offsetX <  
            sourceBitmap.Width - filterOffset; offsetX++) 
        {
            blue = 0; 
            green = 0; 
            red = 0; 

   
            byteOffset = offsetY *  
                         sourceData.Stride +  
                         offsetX * 4; 

   
            for (int filterY = -filterOffset;  
                filterY <= filterOffset; filterY++) 
            { 
                for (int filterX = -filterOffset; 
                    filterX <= filterOffset; filterX++) 
                { 

   
                    calcOffset = byteOffset +  
                                 (filterX * 4) +  
                                 (filterY * sourceData.Stride); 

   
                    blue += (double)(pixelBuffer[calcOffset]) * 
                            filterMatrix[filterY + filterOffset,  
                                                filterX + filterOffset]; 

                    green += (double)(pixelBuffer[calcOffset + 1]) * 
                             filterMatrix[filterY + filterOffset,  
                                                filterX + filterOffset]; 

                    red += (double)(pixelBuffer[calcOffset + 2]) * 
                           filterMatrix[filterY + filterOffset,  
                                              filterX + filterOffset]; 
                } 
            } 

   
            blue = factor * blue + bias; 
            green = factor * green + bias; 
            red = factor * red + bias; 

   
            blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue)); 
            green = (green > 255 ? 255 : (green < 0 ? 0 : green)); 
            red = (red > 255 ? 255 : (red < 0 ? 0 : red)); 

   
            resultBuffer[byteOffset] = (byte)(blue); 
            resultBuffer[byteOffset + 1] = (byte)(green); 
            resultBuffer[byteOffset + 2] = (byte)(red); 
            resultBuffer[byteOffset + 3] = 255; 
        }
    }

   
    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height); 

   
    BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0, 
                             resultBitmap.Width, resultBitmap.Height), 
                                              ImageLockMode.WriteOnly, 
                                         PixelFormat.Format32bppArgb); 

   
    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length); 
    resultBitmap.UnlockBits(resultData); 

   
    return resultBitmap; 
}

Ant: Gaussian Kernel 7×7 Weight 19

Ant Gaussian Kernel 7x7 Weight 19

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction.

The sample images featuring an image of a praying mantis are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia.

The sample images featuring an image of an ant have been released into the public domain by their author, Sean.hoyland. This applies worldwide. In some countries this may not be legally possible; if so: Sean.hoyland grants anyone the right to use this work for any purpose, without any conditions, unless such conditions are required by law. The original image can be downloaded from Wikipedia.

The sample images featuring an image of a ladybird (ladybug or lady beetle) are licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license and can be downloaded from Wikipedia.

The sample images featuring an image of a wasp are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikimedia.org.

The Original Image

1280px-Gemeiner_Widderbock_4483

Wasp Gaussian Kernel 3×3 Weight 9.25

Wasp Gaussian Kernel 3x3 Weight 9.25

Wasp Gaussian Kernel 5×5 Weight 9.25

Wasp Gaussian Kernel 5x5 Weight 9.25

Wasp Gaussian Kernel 7×7 Weight 9.25

Wasp Gaussian Kernel 7x7 Weight 9.25

Wasp Gaussian Kernel 9×9 Weight 9.25

Wasp Gaussian Kernel 9x9 Weight 9.25

Wasp Gaussian Kernel 11×11 Weight 9.25

Wasp Gaussian Kernel 11x11 Weight 9.25

Wasp Gaussian Kernel 13×13 Weight 9.25

Wasp Gaussian Kernel 13x13 Weight 9.25

Wasp Gaussian Kernel 15×15 Weight 9.25

Wasp Gaussian Kernel 15x15 Weight 9.25

Wasp Gaussian Kernel 17×17 Weight 9.25

Wasp Gaussian Kernel 17x17 Weight 9.25

Wasp Gaussian Kernel 19×19 Weight 9.25

Wasp Gaussian Kernel 19x19 Weight 9.25

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

Dewald Esterhuizen
