Archive for the 'Image Transform' Category

C# How to: Min/Max Edge Detection

Article Purpose

This article serves as a detailed discussion on implementing edge detection through maximum and minimum value subtraction. Additional concepts illustrated in this article include implementing a median filter and RGB to grayscale conversion.

Frog Filter 3×3 Smoothed

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Min Max Edge Detection Sample Source Code

Using the Sample Application

This article’s accompanying sample source code includes a Windows Forms based sample application. The sample application provides an implementation of the concepts explored by this article. Concepts discussed can be easily replicated and tested by using the sample application.

Source/input files can be specified from the local system when clicking the Load Image button. Additionally, users have the option to save resulting filtered images by clicking the Save Image button.

The sample application user interface enables the user to specify three filter configuration values. These values serve as input parameters to the Min/Max Edge Detection Filter and can be described as follows:

  • Filter Size – Determines the number of surrounding pixels to consider when calculating the minimum and maximum pixel values. This value equates to the size of a pixel’s neighbourhood of pixels. When gradient edges expressed in a source image require a higher or lower level of expression in result images, the filter size value should be adjusted. Higher Filter Size values result in gradient edges being expressed at greater intensity levels in resulting images. Inversely, lower Filter Size values deliver the opposite result of gradient edges being expressed at lesser intensity levels.
  • Smooth Noise – Image noise, when present in source images, can to varying degrees affect the Min/Max Edge Detection Filter’s accuracy in calculating gradient edges. In order to reduce the negative effects of source image noise an image smoothing filter may be implemented. Smoothing out image noise requires additional filter processing and therefore additional computation time. If source images reflect minor or no image noise, additional image smoothing may be excluded to reduce filter processing duration.
  • Grayscale – When required, result images can be expressed in grayscale through configuring this value.

The following image represents a screenshot of the Min/Max Edge Detection Sample application in action.

Min Max Edge Detection Sample Application

Min/Max

The method of edge detection illustrated in this article can be classified as a variation of commonly implemented edge detection methods. Edges expressed within a source image can be determined through the presence of sudden and significant changes in gradient levels that occur within a small/limited perimeter.

As a means to determine gradient level changes the Min/Max Edge Detection algorithm performs pixel neighbourhood inspection, comparing maximum and minimum colour channel values. Should the difference between maximum and minimum colour values be significant, it would be an indication of a significant change in gradient level within the pixel neighbourhood being inspected.

Image noise represents interference in relation to regular gradient level expression. Image noise does not signal the presence of an image edge, although it could potentially result in incorrectly determining image edge presence. Image noise and its negative impact can be significantly reduced when applying image smoothing, also sometimes referred to as image blurring. The Min/Max Edge Detection algorithm makes provision for optional image smoothing implemented in the form of a median filter.

The following sections provide more detail regarding the concepts introduced in this section: pixel neighbourhoods and the median filter.

Frog Filter 3×3 Smoothed

Pixel Neighbourhood

A pixel neighbourhood refers to a set of pixels, all of which are related through location coordinates. The width and height of a pixel neighbourhood must be equal; in other words, a pixel neighbourhood can only be square. Additionally, the width/height of a pixel neighbourhood must be an uneven value. When inspecting a pixel’s neighbouring pixels, the pixel being inspected will always be located at the exact center of the neighbourhood. Only when a pixel neighbourhood’s width/height is an uneven value can such a neighbourhood have an exact center pixel. Each pixel represented in an image has a different set of neighbours, some neighbours overlap, but no two pixels have the exact same neighbours. A pixel’s neighbouring pixels can be determined when considering the pixel to be at the center of a block of pixels, extending half the neighbourhood size less one in horizontal, vertical and diagonal directions.
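
To make the coordinate arithmetic concrete, the following sketch (a hypothetical helper, not part of the sample download) computes the inclusive bounds of a pixel’s neighbourhood:

// Hypothetical helper: computes the inclusive coordinate bounds of a pixel's
// neighbourhood. Assumes size is an odd value and ignores image border clamping.
private static void NeighbourhoodBounds(int x, int y, int size,
    out int minX, out int minY, out int maxX, out int maxY)
{
    // Half the neighbourhood size, less one: e.g. size 3 yields offset 1.
    int offset = (size - 1) / 2;

    minX = x - offset;  maxX = x + offset;
    minY = y - offset;  maxY = y + offset;
}

For a 3×3 neighbourhood centred on (10, 10) the bounds evaluate to (9, 9) through (11, 11), a block of nine pixels with the inspected pixel at its exact center.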

Median Filter

In the context of this article and the Min/Max Edge Detection filter, median filtering has been implemented as a means to reduce source image noise. From Wikipedia we gain the following quote:

In statistics and probability theory, the median is the number separating the higher half of a data sample, a population, or a probability distribution, from the lower half. The median of a finite list of numbers can be found by arranging all the observations from lowest value to highest value and picking the middle one (e.g., the median of {3, 3, 5, 9, 11} is 5).

Frog Filter 3×3 Smoothed

The application of a median filter is based on the concept of a pixel neighbourhood, as discussed earlier. The implementation steps required when applying a median filter can be described as follows:

  1. Iterate every pixel. The pixel neighbourhood of each pixel in a source image needs to be determined and inspected.
  2. Order/Sort pixel neighbourhood values. Once a pixel neighbourhood for a specific pixel has been determined, the values expressed by all the pixels in that neighbourhood need to be sorted or ordered according to value.
  3. Determine midpoint value. In relation to the sorted values, the value positioned exactly halfway between the first and last value needs to be determined. As an example, if a pixel neighbourhood contains a total of nine pixels, the midpoint would be at position number five, four positions from both the first and last value. The midpoint value in a sorted range of neighbourhood pixel values is the median value of that pixel neighbourhood’s values.
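
As a brief illustration of the three steps, consider a single 3×3 neighbourhood for one colour channel; the sketch below is independent of the sample code’s buffer layout:

// Nine neighbourhood values for one colour channel of a 3x3 neighbourhood.
byte[] neighbourhood = { 12, 200, 45, 45, 90, 13, 255, 90, 88 };

Array.Sort(neighbourhood);                              // step 2: order by value
byte median = neighbourhood[neighbourhood.Length / 2];  // step 3: midpoint, index 4

Sorted, the values read 12, 13, 45, 45, 88, 90, 90, 200, 255; the midpoint value, and therefore the median, is 88.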

The median filter should not be confused with the mean filter. A median will always be a midpoint value from a sorted value range, whereas a mean value is equal to the calculated average of a value range. The median filter has the characteristic of reducing image noise whilst still preserving image edges. The mean filter will also reduce image noise, but will do so through generalized image blurring, also referred to as image smoothing, which does not preserve image edges.

Note that when applying a median filter to RGB colour images, median values need to be determined per individual colour channel.

Frog Filter 3×3 Smoothed

Min/Max Edge Detection Algorithm

Edge detection based on a min/max approach requires relatively few steps, which can be combined in source code implementations to be more efficient from a computational/processing perspective. A higher level logical definition of the steps required can be described as follows:

  1. Image Noise Reduction – If image noise reduction is required, apply a median filter to the source image.
  2. Iterate through all of the pixels contained within the source image.
  3. For each pixel being iterated, determine the neighbouring pixels. The neighbourhood size will be determined by the specified filter size.
  4. Determine the Minimum and Maximum pixel value expressed within the pixel neighbourhood.
  5. Subtract the Minimum from the Maximum value and assign the result to the pixel currently being iterated.
  6. Apply Grayscale conversion to the pixel currently being iterated, only if grayscale conversion had been configured.

Implementing a Min/Max Edge Detection Filter

The source code implementation of the Min/Max Edge Detection Filter declares two methods, a median filter method and an edge detection method. A median filter and edge detection filter cannot be processed simultaneously. When applying a median filter, the median value of a pixel neighbourhood determined from a source image should be expressed in a separate result image. The original source image should not be altered whilst inspecting pixel neighbourhoods and calculating median values. Only once all pixel values in the result image have been set can the result image serve as a source image to an edge detection filter method.

The following code snippet provides the source code definition of the MedianFilter method.

private static byte[] MedianFilter(this byte[] pixelBuffer,
                                    int imageWidth,
                                    int imageHeight,
                                    int filterSize)
{
    byte[] resultBuffer = new byte[pixelBuffer.Length];

    int filterOffset = (filterSize - 1) / 2;
    int calcOffset = 0;
    int stride = imageWidth * pixelByteCount;

    int byteOffset = 0;
    var neighbourCount = filterSize * filterSize;
    int medianIndex = neighbourCount / 2;

    var blueNeighbours = new byte[neighbourCount];
    var greenNeighbours = new byte[neighbourCount];
    var redNeighbours = new byte[neighbourCount];

    for (int offsetY = filterOffset; offsetY <
        imageHeight - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
            imageWidth - filterOffset; offsetX++)
        {
            byteOffset = offsetY *
                            stride +
                            offsetX * pixelByteCount;

            for (int filterY = -filterOffset, neighbour = 0;
                filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset;
                    filterX <= filterOffset; filterX++, neighbour++)
                {
                    calcOffset = byteOffset +
                                    (filterX * pixelByteCount) +
                                    (filterY * stride);

                    blueNeighbours[neighbour] = pixelBuffer[calcOffset];
                    greenNeighbours[neighbour] = pixelBuffer[calcOffset + greenOffset];
                    redNeighbours[neighbour] = pixelBuffer[calcOffset + redOffset];
                }
            }

            Array.Sort(blueNeighbours);
            Array.Sort(greenNeighbours);
            Array.Sort(redNeighbours);

            resultBuffer[byteOffset] = blueNeighbours[medianIndex];
            resultBuffer[byteOffset + greenOffset] = greenNeighbours[medianIndex];
            resultBuffer[byteOffset + redOffset] = redNeighbours[medianIndex];
            resultBuffer[byteOffset + alphaOffset] = maxByteValue;
        }
    }

    return resultBuffer;
}

Notice the definition of three separate arrays, each intended to represent a pixel neighbourhood’s pixel values related to a specific colour channel. Each neighbourhood colour channel byte array needs to be sorted according to value. The value located at the array index exactly halfway from the start and the end of the array represents the median value. When a median value has been determined, the result buffer pixel related to the source buffer pixel in terms of XY location needs to be set.

Frog Filter 3×3 Smoothed

The sample source code defines two overloaded versions of an edge detection method. The first version is defined as an extension method targeting the Bitmap class. A FilterSize parameter is the only required parameter, intended to specify the pixel neighbourhood width/height. In addition, when invoking this method two optional parameters may be specified. When image noise reduction should be implemented the smoothNoise parameter should be defined as true. If resulting images are required in grayscale the last parameter, grayscale, should reflect true. The following code snippet provides the definition of the MinMaxEdgeDetection method.

public static Bitmap MinMaxEdgeDetection(this Bitmap sourceBitmap,
                                            int filterSize, 
                                            bool smoothNoise = false, 
                                            bool grayscale = false)
{
    return sourceBitmap.ToPixelBuffer()
                        .MinMaxEdgeDetection(sourceBitmap.Width, 
                                            sourceBitmap.Height, 
                                            filterSize,
                                            smoothNoise,
                                            grayscale)
                        .ToBitmap(sourceBitmap.Width, 
                                    sourceBitmap.Height);
}

The MinMaxEdgeDetection method as expressed above essentially acts as a wrapper method, invoking the overloaded version of this method and performing mapping between Bitmap objects and byte array pixel buffers.
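
The listings in this article reference several helpers defined elsewhere in the sample project: the pixel layout constants, the ByteVal clamping method and the ToPixelBuffer/ToBitmap mapping methods. Their exact definitions are not reproduced here; for a 32 bits-per-pixel BGRA buffer they might plausibly be defined as follows (assuming the usual System.Drawing and System.Runtime.InteropServices usings):

// Assumed constants for a 32 bits-per-pixel BGRA pixel layout.
private const int pixelByteCount = 4;               // bytes per pixel
private const int greenOffset = 1;                  // green follows blue
private const int redOffset = 2;
private const int alphaOffset = 3;
private const byte minByteValue = byte.MinValue;    // 0
private const byte maxByteValue = byte.MaxValue;    // 255

// Assumed helper: clamps a floating point result to the 0..255 byte range.
private static byte ByteVal(double value)
{
    return (byte)(value < 0 ? 0 : (value > 255 ? 255 : value));
}

// Assumed helper: copies a Bitmap's pixel data into a byte array.
private static byte[] ToPixelBuffer(this Bitmap sourceBitmap)
{
    BitmapData sourceData = sourceBitmap.LockBits(
        new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height),
        ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    return pixelBuffer;
}

// Assumed helper: copies a byte array into a new 32bpp Bitmap.
private static Bitmap ToBitmap(this byte[] resultBuffer, int width, int height)
{
    Bitmap resultBitmap = new Bitmap(width, height);

    BitmapData resultData = resultBitmap.LockBits(
        new Rectangle(0, 0, width, height),
        ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}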

An overloaded version of the MinMaxEdgeDetection method performs all of the tasks required in detecting edges through means of minimum/maximum value subtraction. The method definition is provided by the following code snippet.

private static byte[] MinMaxEdgeDetection(this byte[] sourceBuffer,
                                          int imageWidth,
                                          int imageHeight,
                                          int filterSize,
                                          bool smoothNoise = false,
                                          bool grayscale = false)
{
    byte[] pixelBuffer = sourceBuffer;

    if (smoothNoise)
    {
        pixelBuffer = sourceBuffer.MedianFilter(imageWidth, 
                                                imageHeight, 
                                                filterSize);
    }

    byte[] resultBuffer = new byte[pixelBuffer.Length];

    int filterOffset = (filterSize - 1) / 2;
    int calcOffset = 0;
    int stride = imageWidth * pixelByteCount;

    int byteOffset = 0;

    byte minBlue = 0, minGreen = 0, minRed = 0;
    byte maxBlue = 0, maxGreen = 0, maxRed = 0;

    for (int offsetY = filterOffset; offsetY <
        imageHeight - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
            imageWidth - filterOffset; offsetX++)
        {
            byteOffset = offsetY *
                            stride +
                            offsetX * pixelByteCount;

            minBlue = maxByteValue;
            minGreen = maxByteValue;
            minRed = maxByteValue;

            maxBlue = minByteValue;
            maxGreen = minByteValue;
            maxRed = minByteValue;

            for (int filterY = -filterOffset;
                filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset;
                    filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset +
                                    (filterX * pixelByteCount) +
                                    (filterY * stride);

                    minBlue = Math.Min(pixelBuffer[calcOffset], minBlue);
                    maxBlue = Math.Max(pixelBuffer[calcOffset], maxBlue);

                    minGreen = Math.Min(pixelBuffer[calcOffset + greenOffset], minGreen);
                    maxGreen = Math.Max(pixelBuffer[calcOffset + greenOffset], maxGreen);

                    minRed = Math.Min(pixelBuffer[calcOffset + redOffset], minRed);
                    maxRed = Math.Max(pixelBuffer[calcOffset + redOffset], maxRed);
                }
            }

            if (grayscale)
            {
                resultBuffer[byteOffset] = ByteVal((maxBlue - minBlue) * 0.114 + 
                                                    (maxGreen - minGreen) * 0.587 + 
                                                    (maxRed - minRed) * 0.299);

                resultBuffer[byteOffset + greenOffset] = resultBuffer[byteOffset];
                resultBuffer[byteOffset + redOffset] = resultBuffer[byteOffset];
                resultBuffer[byteOffset + alphaOffset] = maxByteValue;
            }
            else
            {
                resultBuffer[byteOffset] = (byte)(maxBlue - minBlue);
                resultBuffer[byteOffset + greenOffset] = (byte)(maxGreen - minGreen);
                resultBuffer[byteOffset + redOffset] = (byte)(maxRed - minRed);
                resultBuffer[byteOffset + alphaOffset] = maxByteValue;
            }
        }
    }

    return resultBuffer;
}

As discussed earlier, image noise reduction, if required, should be the first task performed. Based on the smoothNoise parameter value the method applies a median filter to the source image buffer.

When iterating a pixel neighbourhood, a comparison is performed between the currently iterated neighbouring pixel’s value and the previously determined minimum and maximum values.

When the grayscale method parameter reflects true, a grayscale algorithm is applied to the difference between the determined maximum and minimum values.

Should the grayscale method parameter reflect false, grayscale algorithm logic will not execute. Instead, the result obtained from subtracting the determined minimum from the maximum value is assigned to the relevant pixel and colour channel in the result buffer image.
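
From calling code, applying the filter reduces to a single extension method invocation; the file names below are placeholders:

// Load a source image, apply a 5x5 min/max edge detection filter with
// noise smoothing and grayscale output, then save the result.
using (Bitmap source = new Bitmap("input.png"))
using (Bitmap result = source.MinMaxEdgeDetection(5,
                              smoothNoise: true,
                              grayscale: true))
{
    result.Save("edges.png");
}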

Frog Filter 3×3 Smoothed

Sample Images

This article features several sample images provided as examples. All sample images were created using the sample application. All of the original source images used in generating sample images have been licensed by their respective authors to allow for reproduction here. The following section lists each original source image and related license and copyright details.

Red-eyed Tree Frog (Agalychnis callidryas), photographed near Playa Jaco in Costa Rica © 2007 Careyjamesbalboa (Carey James Balboa) has been released into the public domain by the author.

Red_eyed_tree_frog_edit2

Yellow-Banded Poison Dart Frog © 2013 H. Krisp  is used here under a Creative Commons Attribution 3.0 Unported license.

Bumblebee_Poison_Frog_Dendrobates_leucomelas

Green and Black Poison Dart Frog © 2011 H. Krisp  is used here under a Creative Commons Attribution 3.0 Unported license.

Dendrobates-auratus-goldbaumsteiger

Atelopus certus calling male © 2010 Brian Gratwicke  is used here under a Creative Commons Attribution 2.0 Generic license.

Atelopus_certus_calling_male_edit

Tyler’s Tree Frog (Litoria tyleri) © 2006 LiquidGhoul  has been released into the public domain by the author.

Litoria_tyleri

Dendropsophus microcephalus © 2010 Brian Gratwicke  is used here under a Creative Commons Attribution 2.0 Generic license.

Dendropsophus_microcephalus_-_calling_male_(Cope,_1886)

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Image Distortion Blur

Article Purpose

This article explores the process of implementing an Image Distortion Blur filter. This image filter is classified as a non-photo realistic image filter, primarily implemented in rendering artistic effects.

Flower: Distortion Factor 15

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Flower: Distortion Factor 10

Using the Sample Application

The sample source code that accompanies this article includes a Windows Forms based sample application. The concepts explored in this article have all been implemented as part of the sample application. From an end user perspective the following configurable options are available:

  • Load/Save Images – Clicking the Load Image button allows a user to specify a source/input image. If desired, output filtered images can be saved to the local system by clicking the Save Image button.
  • Distortion Factor – The level or intensity of distortion applied when implementing the filter can be specified when adjusting the Distortion Factor through the user interface. Lower factor values result in less distortion being evident in resulting images. Specifying higher factor values results in more intense distortion being applied.

The following image is a screenshot of the Image Distortion Blur sample application:

ImageDistortionBlur_SampleApplication

Flower: Distortion Factor 10

Image Distortion

In this article and the accompanying sample source code, images are distorted through slightly adjusting each individual pixel’s coordinates. The direction and distance by which coordinates are adjusted differ per pixel as a result of being randomly selected. The maximum distance offset applied depends on the user specified Distortion Factor. Once all coordinates have been updated, implementing a median filter provides smoothing and a blurring effect.

Applying an Image Distortion Filter requires implementing the following steps:

  1. Iterate Pixels – Each pixel forming part of the source/input image should be iterated.
  2. Calculate new Coordinates – For every pixel being iterated generate two random values representing XY-coordinate offsets to be applied to a pixel’s current coordinates. Offset values can equate to less than zero in order to represent coordinates above or to the left of the current pixel.
  3. Apply Median Filter – The newly offset pixels will appear somewhat speckled in the resulting image. Applying a median filter reduces the speckled appearance whilst retaining a distortion effect.

Flower: Distortion Factor 10

Median Filter

Applying a median filter is the final step required when implementing an Image Distortion Blur filter. Median filters are often implemented in reducing image noise. The method of image distortion illustrated in this article expresses similarities when compared to image noise. In order to soften the appearance of image noise we implement a median filter.

A median filter can be applied through implementing the following steps:

  1. Iterate Pixels – Each pixel forming part of the source/input image should be iterated.
  2. Inspect Pixel Neighbourhood – Each neighbouring pixel in relation to the pixel currently being iterated should be added to a temporary collection.
  3. Determine Neighbourhood Median – Once all neighbourhood pixels have been added to a temporary collection, sort the collection by pixel value. The element value located at the middle of the collection represents the neighbourhood’s median value.

Flower: Distortion Factor 10

Flower: Distortion Factor 15

Implementing Image Distortion

The sample source code defines the DistortionBlurFilter method, an extension method targeting the Bitmap class. The following code snippet illustrates the implementation:

public static Bitmap DistortionBlurFilter( 
         this Bitmap sourceBitmap, int distortFactor) 
{
    byte[] pixelBuffer = sourceBitmap.GetByteArray(); 
    byte[] resultBuffer = sourceBitmap.GetByteArray(); 

    int imageStride = sourceBitmap.Width * 4;
    int calcOffset = 0, filterY = 0, filterX = 0;
    int factorMax = (distortFactor + 1) * 2;
    Random rand = new Random();

    for (int k = 0; k + 4 < pixelBuffer.Length; k += 4)
    {
        filterY = distortFactor - rand.Next(0, factorMax);
        filterX = distortFactor - rand.Next(0, factorMax);

        if (filterX * 4 + (k % imageStride) < imageStride &&
            filterX * 4 + (k % imageStride) > 0)
        {
            calcOffset = k + filterY * imageStride + 4 * filterX;

            if (calcOffset >= 0 && calcOffset + 4 < resultBuffer.Length)
            {
                resultBuffer[calcOffset] = pixelBuffer[k];
                resultBuffer[calcOffset + 1] = pixelBuffer[k + 1];
                resultBuffer[calcOffset + 2] = pixelBuffer[k + 2];
            }
        }
    }

    return resultBuffer.GetImage(sourceBitmap.Width,
                                 sourceBitmap.Height).MedianFilter(3);
}
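
The GetByteArray and GetImage helper methods invoked above are defined elsewhere in the sample project. Based on how they are used, a minimal sketch might look as follows, assuming 32bpp ARGB pixel data and the usual System.Drawing and System.Runtime.InteropServices usings:

// Assumed helper: copies a Bitmap's pixel data into a byte array.
private static byte[] GetByteArray(this Bitmap sourceBitmap)
{
    BitmapData sourceData = sourceBitmap.LockBits(
        new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height),
        ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    return pixelBuffer;
}

// Assumed helper: copies a byte array into a new 32bpp Bitmap.
private static Bitmap GetImage(this byte[] resultBuffer, int width, int height)
{
    Bitmap resultBitmap = new Bitmap(width, height);

    BitmapData resultData = resultBitmap.LockBits(
        new Rectangle(0, 0, width, height),
        ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}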

Flower: Distortion Factor 15

Implementing a Median Filter

The MedianFilter extension method targets the Bitmap class. The implementation is as follows:

public static Bitmap MedianFilter(this Bitmap sourceBitmap, 
                                  int matrixSize) 
{ 
    byte[] pixelBuffer = sourceBitmap.GetByteArray(); 
    byte[] resultBuffer = new byte[pixelBuffer.Length]; 
    byte[] middlePixel; 

    int imageStride = sourceBitmap.Width * 4;
    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0, filterY = 0, filterX = 0;
    List<int> neighbourPixels = new List<int>();

    for (int k = 0; k + 4 < pixelBuffer.Length; k += 4)
    {
        filterY = -filterOffset;
        filterX = -filterOffset;
        neighbourPixels.Clear();

        while (filterY <= filterOffset)
        {
            calcOffset = k + (filterX * 4) + (filterY * imageStride);

            if (calcOffset > 0 && calcOffset + 4 < pixelBuffer.Length)
            {
                neighbourPixels.Add(BitConverter.ToInt32(pixelBuffer, calcOffset));
            }

            filterX++;

            if (filterX > filterOffset)
            {
                filterX = -filterOffset;
                filterY++;
            }
        }

        neighbourPixels.Sort();

        // The median is the midpoint of the sorted neighbourhood collection.
        middlePixel = BitConverter.GetBytes(
                      neighbourPixels[neighbourPixels.Count / 2]);

        resultBuffer[k] = middlePixel[0];
        resultBuffer[k + 1] = middlePixel[1];
        resultBuffer[k + 2] = middlePixel[2];
        resultBuffer[k + 3] = middlePixel[3];
    }

    return resultBuffer.GetImage(sourceBitmap.Width, sourceBitmap.Height);
}
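
With both methods in place, applying the distortion filter from calling code reduces to a single extension method invocation; the file names below are placeholders:

// Apply the distortion blur with a distortion factor of 15 and save the result.
using (Bitmap source = new Bitmap("flower.png"))
using (Bitmap distorted = source.DistortionBlurFilter(15))
{
    distorted.Save("flower_distorted.png");
}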

Flower: Distortion Factor 25

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following images feature as sample images:

674px-Lil_chalcedonicum_01EB_Griechenland_Hrisomiglia_17_07_01

683px-Lil_carniolicum_subsp_ponticum_01EB_Tuerkei_Ikizdere_02_07_93

1022px-LiliumSargentiae

1024px-Lilium_longiflorum_(Easter_Lily)

1280px-LiliumBulbiferumCroceumBologna

LiliumSuperbum1

Orange_Lilium_-_Relic38_-_Ontario_Canada

White_and_yellow_flower

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Image Abstract Colours Filter

Article Purpose

This article explores Abstract Colour Image filters as a process of Non-photo Realistic Image Rendering. The output produced reflects a variety of artistic effects.

Colour Values Red, Blue Filter Size 9
Edge Tracing Black Edge Threshold 55

Mushroom: Red - Blue, Filter Size 9, Edge Tracing Black, Edge Threshold 55

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Colour Values Blue Filter Size 9
Edge Tracing Double Intensity Edge Threshold 60

Mushroom: Blue, Filter Size 9, Color Shift Right, Edge Tracing Double Intensity, Edge Threshold 60

Using the Sample Application

The sample source code that accompanies this article includes a Windows Forms based sample application. The concepts discussed in this article have been illustrated through the implementation of the sample application.

When executing the sample application, through the user interface several options are made available to the user, described as follows:

  • Load/Save Images – Source/input images may be loaded from the local system through clicking the Load Image button. If desired by the user, resulting output images can be saved to the local file system through clicking the Save Image button.
  • Colour Channels – The colour values Red, Green and Blue can be updated or remain unchanged when applying a filter, indicated through the associated checkboxes.
  • Filter Size – The level of filter intensity depends on the size of the filter. Larger filter sizes result in more intense filtering being applied. Smaller filter sizes result in less intense filtering being applied.
  • Colour Shift Type – Colour intensity values can be swapped around through selecting the Colour Shift type: Shift Left and Shift Right.
  • Edge Tracing Type – When applying a filter the edges forming part of the original source/input image will be expressed as part of the output image. The method in which edges are indicated/highlighted will be determined through the type of Edge Tracing specified when applying the filter. Supported types of Edge Tracing include: Black, White, Half Intensity, Double Intensity and Inverted.
  • Edge Threshold – Edges forming part of the source/input image are determined through means of implementing a threshold value. Lower threshold values result in more emphasized edges expressed within resulting images. Higher threshold values reduce edge emphasis in resulting images.


Colour Values Green Filter Size 9
Edge Tracing Double Intensity Edge Threshold 60

Mushroom: Green, Filter Size 9, Color Shift Left, Edge Tracing Double Intensity, Edge Threshold 60

Colour Values Red, Blue Filter Size 9
Edge Tracing Double Intensity Edge Threshold 55

Mushroom: Red - Blue, Filter Size 9, Color Shift Right, Edge Tracing Double Intensity, Edge Threshold 55

The following image is a screenshot of the Image Abstract Colour Filter sample application in action:

Image Abstract Colour Filter Sample Application

Abstracting Image Colours

The Abstract Colour Filter explored in this article can be considered a non-photo realistic filter. As the title implies, non-photo realistic filters transform an input image, usually a photograph, producing a result image which visibly lacks the aspects of realism expressed in the input image. In most scenarios the objective of non-photo realistic filters can be described as using photographic images in rendering images having an animated appearance.

Colour Values Blue Filter Size 11
Edge Tracing Double Intensity Edge Threshold 60

Mushroom: Blue, Filter Size 11, Color Shift Left, Edge Tracing Double Intensity, Edge Threshold 60

Colour Values Red Filter Size 11
Edge Tracing Double Intensity Edge Threshold 60

Mushroom: Red, Filter Size 11, Color Shift Left, Edge Tracing Double Intensity, Edge Threshold 60

The Abstract Colour Filter can be broken down into two main components: Colour Averaging and Edge Detection. Through implementing a variety of colour averaging algorithms resulting images express abstract yet uniform colours. Abstract colours result in output images no longer appearing photo realistic; instead output images appear unconventionally augmented/artistic.

Output images express a lesser degree of definition and detail when compared to input images. In some scenarios output images might not be easily recognisable. In order to retain some detail, edge/boundary detail detected from the input image will be emphasised in the result image.

Colour Values Green Filter Size 11
Edge Tracing Double Intensity Edge Threshold 60

Mushroom: Green, Filter Size 11, Color Shift Left, Edge Tracing, Double Intensity, Edge Threshold 60

Colour Values Green, Blue Filter Size 11
Edge Tracing Double Intensity Edge Threshold 75

Mushroom: Green - Blue, Filter Size 11, Color Shift None, Edge Tracing Double Intensity, Edge Threshold 75

The steps required when applying an Abstract Colour Filter can be described as follows:

  1. Perform Edge Detection – Using the source/input image perform gradient based edge detection, producing a binary image expressing edges in the foreground/as white.
  2. Calculate Neighbourhood Colour Averages – Iterate each pixel forming part of the input image, inspecting the pixel’s neighbouring pixels. Calculate the sum total and average of each colour channel, Red, Green and Blue. The value of the pixel currently being iterated should be set according to the neighbourhood average.
  3. Trace Edges within Colour Averages – Simultaneously iterate the colour average image and the edge detected image. If the pixel being iterated forms part of an edge, update the corresponding pixel in the average colour image, depending on the specified method of Edge Tracing.


Colour Values Red, Green Filter Size 11
Edge Tracing Double Intensity Edge Threshold 75

Mushroom: Red - Green, Filter Size 11, Color Shift None, Edge Tracing Double Intensity, Edge Threshold 75

Colour Values Red Filter Size 11
Edge Tracing Black Edge Threshold 60

Mushroom: Red, Filter Size 11, Color Shift Left, Edge Tracing Black, Edge Threshold 60

Implementing Pixel Neighbourhood Colour Averaging

In the sample source code neighbourhood colour averaging has been implemented through the definition of the AverageColoursFilter extension method. This method creates a new Bitmap using the source Bitmap as input. The following code snippet provides the definition:

public static Bitmap AverageColoursFilter(this Bitmap sourceBitmap,  
                                          int matrixSize,   
                                          bool applyBlue = true, 
                                          bool applyGreen = true, 
                                          bool applyRed = true, 
                                          ColorShiftType shiftType = 
                                          ColorShiftType.None)  
{
    byte[] pixelBuffer = sourceBitmap.GetByteArray(); 
    byte[] resultBuffer = new byte[pixelBuffer.Length]; 

    int calcOffset = 0;
    int byteOffset = 0;
    int blue = 0, green = 0, red = 0;
    int filterOffset = (matrixSize - 1) / 2;

    for (int offsetY = filterOffset; offsetY <
        sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
            sourceBitmap.Width - filterOffset; offsetX++)
        {
            byteOffset = offsetY * sourceBitmap.Width * 4 +
                         offsetX * 4;

            blue = 0;
            green = 0;
            red = 0;

            for (int filterY = -filterOffset;
                filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset;
                    filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset +
                                 (filterX * 4) +
                                 (filterY * sourceBitmap.Width * 4);

                    blue += pixelBuffer[calcOffset];
                    green += pixelBuffer[calcOffset + 1];
                    red += pixelBuffer[calcOffset + 2];
                }
            }

            blue = blue / matrixSize;
            green = green / matrixSize;
            red = red / matrixSize;

            if (applyBlue == false)
            {
                blue = pixelBuffer[byteOffset];
            }

            if (applyGreen == false)
            {
                green = pixelBuffer[byteOffset + 1];
            }

            if (applyRed == false)
            {
                red = pixelBuffer[byteOffset + 2];
            }

            if (shiftType == ColorShiftType.None)
            {
                resultBuffer[byteOffset] = (byte)blue;
                resultBuffer[byteOffset + 1] = (byte)green;
                resultBuffer[byteOffset + 2] = (byte)red;
                resultBuffer[byteOffset + 3] = 255;
            }
            else if (shiftType == ColorShiftType.ShiftLeft)
            {
                resultBuffer[byteOffset] = (byte)green;
                resultBuffer[byteOffset + 1] = (byte)red;
                resultBuffer[byteOffset + 2] = (byte)blue;
                resultBuffer[byteOffset + 3] = 255;
            }
            else if (shiftType == ColorShiftType.ShiftRight)
            {
                resultBuffer[byteOffset] = (byte)red;
                resultBuffer[byteOffset + 1] = (byte)blue;
                resultBuffer[byteOffset + 2] = (byte)green;
                resultBuffer[byteOffset + 3] = 255;
            }
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
Colour Values Green, Blue Filter Size 17
Edge Tracing Black Edge Threshold 85

Mushroom: Green - Blue, Filter Size 17, Color Shift Left, Edge Tracing Black, Edge Threshold 85

Implementing Gradient Based Edge Detection

When applying an Abstract Colours Filter, one of the required steps involve . The sample source code implements   through the definition of the GradientBasedEdgeDetectionFilter method. This method has been defined as an targeting the class. The as follows:

public static Bitmap GradientBasedEdgeDetectionFilter( 
                        this Bitmap sourceBitmap, 
                        byte threshold = 0) 
{
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle (0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int sourceOffset = 0, gradientValue = 0;
    bool exceedsThreshold = false;

    for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++)
    {
        for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++)
        {
            sourceOffset = offsetY * sourceData.Stride + offsetX * 4;
            gradientValue = 0;
            exceedsThreshold = true;

            // Horizontal Gradient
            CheckThreshold(pixelBuffer, sourceOffset - 4, sourceOffset + 4,
                           ref gradientValue, threshold, 2);
            // Vertical Gradient
            exceedsThreshold = CheckThreshold(pixelBuffer,
                sourceOffset - sourceData.Stride,
                sourceOffset + sourceData.Stride,
                ref gradientValue, threshold, 2);

            if (exceedsThreshold == false)
            {
                gradientValue = 0;

                // Horizontal Gradient
                exceedsThreshold = CheckThreshold(pixelBuffer,
                    sourceOffset - 4, sourceOffset + 4,
                    ref gradientValue, threshold);

                if (exceedsThreshold == false)
                {
                    gradientValue = 0;

                    // Vertical Gradient
                    exceedsThreshold = CheckThreshold(pixelBuffer,
                        sourceOffset - sourceData.Stride,
                        sourceOffset + sourceData.Stride,
                        ref gradientValue, threshold);

                    if (exceedsThreshold == false)
                    {
                        gradientValue = 0;

                        // Diagonal Gradient : NW-SE
                        CheckThreshold(pixelBuffer,
                            sourceOffset - 4 - sourceData.Stride,
                            sourceOffset + 4 + sourceData.Stride,
                            ref gradientValue, threshold, 2);
                        // Diagonal Gradient : NE-SW
                        exceedsThreshold = CheckThreshold(pixelBuffer,
                            sourceOffset - sourceData.Stride + 4,
                            sourceOffset - 4 + sourceData.Stride,
                            ref gradientValue, threshold, 2);

                        if (exceedsThreshold == false)
                        {
                            gradientValue = 0;

                            // Diagonal Gradient : NW-SE
                            exceedsThreshold = CheckThreshold(pixelBuffer,
                                sourceOffset - 4 - sourceData.Stride,
                                sourceOffset + 4 + sourceData.Stride,
                                ref gradientValue, threshold);

                            if (exceedsThreshold == false)
                            {
                                gradientValue = 0;

                                // Diagonal Gradient : NE-SW
                                exceedsThreshold = CheckThreshold(pixelBuffer,
                                    sourceOffset - sourceData.Stride + 4,
                                    sourceOffset + sourceData.Stride - 4,
                                    ref gradientValue, threshold);
                            }
                        }
                    }
                }
            }

            resultBuffer[sourceOffset] = (byte)(exceedsThreshold ? 255 : 0);
            resultBuffer[sourceOffset + 1] = resultBuffer[sourceOffset];
            resultBuffer[sourceOffset + 2] = resultBuffer[sourceOffset];
            resultBuffer[sourceOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
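
The CheckThreshold method invoked above is defined elsewhere in the sample project. Judging from how it is called, with two buffer offsets, a gradient accumulator passed by reference, a threshold and an optional divisor, a plausible sketch is the following; this is an assumption, not the sample download’s exact definition:

// Assumed implementation: accumulates the absolute difference between two
// pixels into gradientValue and reports whether the (optionally divided)
// running total exceeds the threshold.
private static bool CheckThreshold(byte[] pixelBuffer,
                                   int offset1, int offset2,
                                   ref int gradientValue,
                                   byte threshold,
                                   int divideBy = 1)
{
    gradientValue += Math.Abs(pixelBuffer[offset1] - pixelBuffer[offset2]);

    return (gradientValue / divideBy) > threshold;
}
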
Colour Values Red, Green Filter Size 5
Edge Tracing Black Edge Threshold 85

Mushroom: Red - Green - Blue, Filter Size 5, Color Shift Right, Edge Tracing Black, Edge Threshold 85

Implementing an Abstract Colour Filter

The AbstractColorsFilter method serves as a means of combining an average colour image and an edge detected image. This extension method targets the Bitmap class. The following code snippet details the definition:

public static Bitmap AbstractColorsFilter(this Bitmap sourceBitmap, 
                                          int matrixSize, 
                                          byte edgeThreshold, 
                                          bool applyBlue = true, 
                                          bool applyGreen = true, 
                                          bool applyRed = true, 
                                          EdgeTracingType edgeType =  
                                          EdgeTracingType.Black, 
                                          ColorShiftType shiftType = 
                                          ColorShiftType.None) 
{ 
    Bitmap edgeBitmap =  
    sourceBitmap.GradientBasedEdgeDetectionFilter(edgeThreshold); 

    Bitmap colorBitmap =
           sourceBitmap.AverageColoursFilter(matrixSize,
                        applyBlue, applyGreen,
                        applyRed, shiftType);

    byte[] edgeBuffer = edgeBitmap.GetByteArray();
    byte[] colorBuffer = colorBitmap.GetByteArray();
    byte[] resultBuffer = colorBitmap.GetByteArray();

    for (int k = 0; k + 4 < edgeBuffer.Length; k += 4)
    {
        if (edgeBuffer[k] == 255)
        {
            switch (edgeType)
            {
                case EdgeTracingType.Black:
                    resultBuffer[k] = 0;
                    resultBuffer[k + 1] = 0;
                    resultBuffer[k + 2] = 0;
                    break;
                case EdgeTracingType.White:
                    resultBuffer[k] = 255;
                    resultBuffer[k + 1] = 255;
                    resultBuffer[k + 2] = 255;
                    break;
                case EdgeTracingType.HalfIntensity:
                    resultBuffer[k] = ClipByte(resultBuffer[k] * 0.5);
                    resultBuffer[k + 1] = ClipByte(resultBuffer[k + 1] * 0.5);
                    resultBuffer[k + 2] = ClipByte(resultBuffer[k + 2] * 0.5);
                    break;
                case EdgeTracingType.DoubleIntensity:
                    resultBuffer[k] = ClipByte(resultBuffer[k] * 2);
                    resultBuffer[k + 1] = ClipByte(resultBuffer[k + 1] * 2);
                    resultBuffer[k + 2] = ClipByte(resultBuffer[k + 2] * 2);
                    break;
                case EdgeTracingType.ColorInversion:
                    resultBuffer[k] = ClipByte(255 - resultBuffer[k]);
                    resultBuffer[k + 1] = ClipByte(255 - resultBuffer[k + 1]);
                    resultBuffer[k + 2] = ClipByte(255 - resultBuffer[k + 2]);
                    break;
            }
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
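
The ClipByte helper referenced above is not shown in the listing. A minimal sketch, assuming it simply clamps an arithmetic result to the 0..255 byte range:

// Assumed helper: clamps an arithmetic result to the 0..255 byte range.
private static byte ClipByte(double color)
{
    return (byte)(color < 0 ? 0 : (color > 255 ? 255 : color));
}
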
Colour Values Red, Green Filter Size 17
Edge Tracing Double Intensity Edge Threshold 85

Mushroom: Red - Green, Filter Size 17, Color Shift None, Edge Tracing Double Intensity, Edge Threshold 85

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following images feature as sample images:

1280px-Mycena_atkinsoniana_60804

1024px-Lactarius_indigo_48568

Amanita_muscaria_(fly_agaric)

683px-Pleurotus_pulmonarius_LC0228

Additional Filter Result Images

The following series of images represent additional filter results.

Colour Values Red Filter Size 9
Edge Tracing Double Intensity Edge Threshold 60

Mushroom: Red, Filter Size 9, Color Shift Right, Edge Tracing Double, Edge Threshold 60

Colour Values Green, Blue Filter Size 9
Edge Tracing Double Intensity Edge Threshold 60

Mushroom: Green - Blue, Filter Size 9, Color Shift Right, Edge Tracing Double Intensity, Edge Threshold 60

Colour Values Green, Blue Filter Size 9
Edge Tracing Black Edge Threshold 60

Mushroom: Green - Blue, Filter Size 9, Color Shift Left, Edge Tracing Black, Edge Threshold 60

Colour Values Red, Green, Blue Filter Size 9
Edge Tracing Double Intensity Edge Threshold 55

Mushroom: Red - Green - Blue, Filter Size 9, Color Shift None, Edge Tracing Double Intensity, Edge Threshold 55

Colour Values Red, Green, Blue Filter Size 11
Edge Tracing Black Edge Threshold 75

Mushroom: Red - Green - Blue, Filter Size 11, Color Shift None, Edge Tracing Black, Edge Threshold 75

Colour Values Red, Blue Filter Size 11
Edge Tracing Double Intensity Edge Threshold 75

Mushroom: Red - Blue, Filter Size 11, Color Shift None, Edge Tracing Double Intensity, Edge Threshold 75

Colour Values Green Filter Size 17
Edge Tracing Black Edge Threshold 85

Mushroom: Green, Filter Size 17, Color Shift None, Edge Tracing Black, Edge Threshold 85

Colour Values Red Filter Size 17
Edge Tracing Black Edge Threshold 85

Mushroom: Red, Filter Size 17, Color Shift None, Edge Tracing Black, Edge Threshold 85

Colour Values Red, Green, Blue Filter Size 5
Edge Tracing Half Edge Edge Threshold 85

Mushroom: Red - Green - Blue, Filter Size 5, Color Shift None, Edge Tracing Half Edge, Threshold 85

Colour Values Blue Filter Size 5
Edge Tracing Double Intensity Edge Threshold 75

Mushroom: Blue, FilterSize 5, Color Shift None, Edge Tracing Double Intensity, Edge Threshold 75

Colour Values Green Filter Size 5
Edge Tracing Double Intensity Edge Threshold 75

Mushroom: Green, Filter Size 5, Color Shift None, Edge Tracing Double Intensity, Edge Threshold 75

Colour Values Red Filter Size 5
Edge Tracing Double Intensity Edge Threshold 75

Mushroom: Red, Filter Size 5, Color Shift None, Edge Tracing Double Intensity, Edge Threshold 75

Colour Values Red, Green, Blue Filter Size 5
Edge Tracing Black Edge Threshold 75

Mushroom: Red - Green - Blue, Filter Size 3, Color Shift Left, Edge Tracing Black, Edge Threshold 75

Colour Values Red, Green, Blue Filter Size 9
Edge Tracing Black Edge Threshold 55

Mushroom: Red - Green - Blue, Filter Size 3, Color Shift None, Edge Tracing Black, Edge Threshold 75

Colour Values Red, Green, Blue Filter Size 3
Edge Tracing Black Edge Threshold 75

Mushroom: Red - Green - Blue, Filter Size 3, Color Shift Right, Edge Tracing Black, Edge Threshold 75

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Image Boundary Extraction

Article Purpose

This article explores various concepts which feature in combination when implementing Image Boundary Extraction. Concepts covered within this article include: Morphological Erosion and Dilation, Image Addition and Subtraction, Boundary Sharpening, Boundary Tracing and Boundary Extraction.

Parrot: Boundary Extraction, 3×3, Red, Green, Blue

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the Sample Application

This article’s accompanying sample source code includes the definition of a sample application. The sample application serves as an implementation of the concepts discussed in this article. In using the sample application concepts can be easily tested and replicated.

The sample application has been defined as a Windows Forms application. The user interface enables the user to configure several options which influence the output produced from filtering processes. The following section describes the options available to a user when executing the sample application:

  • Loading and Saving files – Users can specify source/input images through clicking the Load Image button. If desired, resulting filtered images can be saved to the local system when clicking the Save Image button.
  • Filter Type – The types of filters implemented represent variations on Image Boundary Extraction. The supported filter types are: Conventional Boundary Extraction, Boundary Sharpening and Boundary Tracing.
  • Filter Size – Filter intensity/strength will mostly be reliant on the filter size implemented. A filter size represents the number of neighbouring pixels examined when applying filters.
  • Colours Applied – The sample source code and sample application provide functionality allowing a filter to only affect user specified colour components. Colour components are represented in the form of an RGB colour scheme. The inclusion or exclusion of the colour components Red, Green and Blue will be determined through user configuration.
  • Structuring Element – As mentioned, the Filter Size option determines the size of the neighbourhood examined. The structuring element’s setup determines the neighbouring pixels within the neighbourhood size bounds that should be used as input when calculating filter results.

The following is a screenshot of the Image Boundary Extraction sample application in action:

Image Boundary Extraction Sample Application

Parrot: Boundary Extraction, 3×3, Green

Morphological Boundary Extraction

Image Boundary Extraction can be considered a method of edge detection. In contrast to more commonly implemented gradient based edge detection methods, Image Boundary Extraction originates from Morphological Image Filters.

When drawing a comparison, Image Boundary Extraction and morphological edge detection express strong similarities. Morphological edge detection results from the difference between image erosion and image dilation. Considered from a different point of view, creating one image expressing thicker edges and another image expressing thinner edges provides the means to calculate the difference in edges.

Image Boundary Extraction implements the same concept as morphological edge detection. The base concept can be regarded as calculating the difference between two images which render the same image, but express a difference in edge intensity. Image Boundary Extraction relies on calculating the difference between either a dilated image and the source image, or an eroded image and the source image. In most cases the difference between a dilated image and an eroded image is greater than the difference between a dilated or eroded image and the source image. The result of Image Boundary Extraction representing less of a difference can be observed in Image Boundary Extraction being expressed in finer/smaller width lines.

The Difference of Gaussians is another method of edge detection which functions along the same basis. Edges are determined by calculating the difference between two images, each having been filtered from the same source image using Gaussian blurs of differing intensity levels.

Parrot: Boundary Extraction, 3×3, Red, Green, Blue

Boundary Sharpening

The concept of Boundary Sharpening refers to enhancing or sharpening the boundaries or edges expressed in a source/input image. Boundaries can be easily determined or extracted, as discussed earlier when exploring Boundary Extraction.

The steps involved in performing Boundary Sharpening can be described as follows:

  1. Extract Boundaries – Determine boundaries by performing image dilation and calculating the difference between the dilated image and the source image.
  2. Match Source Edges and Extracted Boundaries – The boundaries extracted in the previous step represent the difference between a dilated image and the original source image. Ensure that extracted boundaries match the source image through performing image dilation on a copy of the source/input image.
  3. Emphasise Extracted Boundaries in Source Image – Perform image addition using the extracted boundaries and the dilated copy of the source image.

Parrot: Boundary Extraction, 3×3, Red, Green, Blue

Boundary Tracing

Boundary Tracing refers to applying filters which result in image edges/boundaries appearing darker or more pronounced. This type of filter also relies on Boundary Extraction.

Boundary Tracing can be implemented in two steps, described as follows:

  1. Extract Boundaries – Determine boundaries by performing image dilation and calculating the difference between the dilated image and the source image.
  2. Emphasise Extracted Boundaries in Source Image – Subtract the extracted boundaries from the original source image.

Parrot: Boundary Extraction, 3×3, Red, Green, Blue

Implementing Morphological Erosion and Dilation

The accompanying sample source code defines the MorphologyOperation method, defined as an extension method targeting the Bitmap class. In terms of parameters this method expects a two dimensional boolean array representing a structuring element. The other required parameter represents an enum value indicating which morphological operation to perform, either erosion or dilation.

The following code snippet provides the definition in full:

private static Bitmap MorphologyOperation(this Bitmap sourceBitmap,
                                          bool[,] se,
                                          MorphologyOperationType morphType,
                                          bool applyBlue = true,
                                          bool applyGreen = true,
                                          bool applyRed = true)
{ 
    BitmapData sourceData =
               sourceBitmap.LockBits(new Rectangle(0, 0,
               sourceBitmap.Width, sourceBitmap.Height),
               ImageLockMode.ReadOnly,
               PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);
    int filterOffset = (se.GetLength(0) - 1) / 2;
    int calcOffset = 0, byteOffset = 0;
    byte blueErode = 0, greenErode = 0, redErode = 0;
    byte blueDilate = 0, greenDilate = 0, redDilate = 0;

    for (int offsetY = 0; offsetY <
        sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = 0; offsetX <
            sourceBitmap.Width - filterOffset; offsetX++)
        {
            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            blueErode = 255; greenErode = 255; redErode = 255;
            blueDilate = 0; greenDilate = 0; redDilate = 0;

            for (int filterY = -filterOffset;
                filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset;
                    filterX <= filterOffset; filterX++)
                {
                    if (se[filterY + filterOffset,
                           filterX + filterOffset] == true)
                    {
                        calcOffset = byteOffset + (filterX * 4) +
                                     (filterY * sourceData.Stride);

                        calcOffset = (calcOffset < 0 ? 0 :
                                     (calcOffset >= pixelBuffer.Length + 2 ?
                                      pixelBuffer.Length - 3 : calcOffset));

                        blueDilate = (pixelBuffer[calcOffset] > blueDilate ?
                                      pixelBuffer[calcOffset] : blueDilate);

                        greenDilate = (pixelBuffer[calcOffset + 1] > greenDilate ?
                                       pixelBuffer[calcOffset + 1] : greenDilate);

                        redDilate = (pixelBuffer[calcOffset + 2] > redDilate ?
                                     pixelBuffer[calcOffset + 2] : redDilate);

                        blueErode = (pixelBuffer[calcOffset] < blueErode ?
                                     pixelBuffer[calcOffset] : blueErode);

                        greenErode = (pixelBuffer[calcOffset + 1] < greenErode ?
                                      pixelBuffer[calcOffset + 1] : greenErode);

                        redErode = (pixelBuffer[calcOffset + 2] < redErode ?
                                    pixelBuffer[calcOffset + 2] : redErode);
                    }
                }
            }

            blueErode = (applyBlue ? blueErode : pixelBuffer[byteOffset]);
            blueDilate = (applyBlue ? blueDilate : pixelBuffer[byteOffset]);

            greenErode = (applyGreen ? greenErode : pixelBuffer[byteOffset + 1]);
            greenDilate = (applyGreen ? greenDilate : pixelBuffer[byteOffset + 1]);

            redErode = (applyRed ? redErode : pixelBuffer[byteOffset + 2]);
            redDilate = (applyRed ? redDilate : pixelBuffer[byteOffset + 2]);

            if (morphType == MorphologyOperationType.Erosion)
            {
                resultBuffer[byteOffset] = blueErode;
                resultBuffer[byteOffset + 1] = greenErode;
                resultBuffer[byteOffset + 2] = redErode;
            }
            else if (morphType == MorphologyOperationType.Dilation)
            {
                resultBuffer[byteOffset] = blueDilate;
                resultBuffer[byteOffset + 1] = greenDilate;
                resultBuffer[byteOffset + 2] = redDilate;
            }

            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
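
The MorphologyOperationType enum and the structuring element parameter are not shown in the listing above. A minimal sketch of both, assuming a simple square structuring element in which every position participates:

private enum MorphologyOperationType
{
    Erosion,
    Dilation
}

// A 3x3 square structuring element: every neighbour participates.
bool[,] structuringElement =
{
    { true, true, true },
    { true, true, true },
    { true, true, true }
};

// Example invocation: dilate all three colour channels.
// Bitmap dilated = sourceBitmap.MorphologyOperation(structuringElement,
//                      MorphologyOperationType.Dilation);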

Parrot: Boundary Extraction, 3×3, Red, Green

Implementing Image Addition

The sample source code encapsulates the process of combining two separate images through means of addition. The AddImage method serves as a single declaration of image addition functionality. This method has been defined as an extension method targeting the Bitmap class. Boundary Sharpen filtering implements image addition.

The following code snippet provides the definition of the AddImage method:

private static Bitmap AddImage(this Bitmap sourceBitmap,
                               Bitmap addBitmap)
{
    BitmapData sourceData =
               sourceBitmap.LockBits(new Rectangle (0, 0,
               sourceBitmap.Width, sourceBitmap.Height),
               ImageLockMode.ReadOnly,
               PixelFormat.Format32bppArgb);

    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
    Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    BitmapData addData =
               addBitmap.LockBits(new Rectangle(0, 0,
               addBitmap.Width, addBitmap.Height),
               ImageLockMode.ReadOnly,
               PixelFormat.Format32bppArgb);

    byte[] addBuffer = new byte[addData.Stride * addData.Height];
    Marshal.Copy(addData.Scan0, addBuffer, 0, addBuffer.Length);
    addBitmap.UnlockBits(addData);

    for (int k = 0; k + 4 < resultBuffer.Length &&
        k + 4 < addBuffer.Length; k += 4)
    {
        resultBuffer[k] = AddColors(resultBuffer[k], addBuffer[k]);
        resultBuffer[k + 1] = AddColors(resultBuffer[k + 1], addBuffer[k + 1]);
        resultBuffer[k + 2] = AddColors(resultBuffer[k + 2], addBuffer[k + 2]);
        resultBuffer[k + 3] = 255;
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

private static byte AddColors(byte color1, byte color2)
{
    int result = color1 + color2;

    return (byte)(result < 0 ? 0 : (result > 255 ? 255 : result));
}

Parrot: Boundary Extraction, 3×3, Red, Green, Blue

Implementing Image Subtraction

In a similar fashion to the AddImage method, the sample code defines the SubtractImage method. By definition this method serves as an extension method targeting the Bitmap class. Image subtraction has been implemented in Boundary Extraction and Boundary Tracing.

The definition of the SubtractImage method is listed as follows:

private static Bitmap SubtractImage(this Bitmap sourceBitmap,  
                                         Bitmap subtractBitmap) 
{
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);
sourceBitmap.UnlockBits(sourceData);
    BitmapData subtractData =
               subtractBitmap.LockBits(new Rectangle(0, 0,
               subtractBitmap.Width, subtractBitmap.Height),
               ImageLockMode.ReadOnly,
               PixelFormat.Format32bppArgb);

    byte[] subtractBuffer = new byte[subtractData.Stride * subtractData.Height];
    Marshal.Copy(subtractData.Scan0, subtractBuffer, 0, subtractBuffer.Length);
    subtractBitmap.UnlockBits(subtractData);

    // Subtract the two buffers channel by channel, 4 bytes (BGRA) per pixel.
    for (int k = 0; k + 4 <= resultBuffer.Length &&
                    k + 4 <= subtractBuffer.Length; k += 4)
    {
        resultBuffer[k] = SubtractColors(resultBuffer[k], subtractBuffer[k]);
        resultBuffer[k + 1] = SubtractColors(resultBuffer[k + 1], subtractBuffer[k + 1]);
        resultBuffer[k + 2] = SubtractColors(resultBuffer[k + 2], subtractBuffer[k + 2]);
        resultBuffer[k + 3] = 255;
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                            resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly,
                            PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
private static byte SubtractColors(byte color1, byte color2)
{
    int result = (int)color1 - (int)color2;

    return (byte)(result < 0 ? 0 : result);
}

Parrot: Boundary Extraction, 3×3, Green


Implementing Image Boundary Extraction

In the sample source code processing Image Boundary Extraction can be achieved by invoking the BoundaryExtraction method. Defined as an extension method, the BoundaryExtraction method targets the Bitmap class.

As discussed earlier, this method performs Boundary Extraction through subtracting the source image from a dilated copy of the source image.

The following code snippet details the definition of the BoundaryExtraction method:

private static Bitmap
BoundaryExtraction(this Bitmap sourceBitmap, 
                   bool[,] se, bool applyBlue = true, 
                   bool applyGreen = true, bool applyRed = true) 
{
    Bitmap resultBitmap = 
           sourceBitmap.MorphologyOperation(se,  
           MorphologyOperationType.Dilation, applyBlue,  
                                  applyGreen, applyRed); 

    resultBitmap = resultBitmap.SubtractImage(sourceBitmap);

    return resultBitmap;
}

Parrot: Boundary Extraction, 3×3, Red, Blue


Implementing Image Boundary Sharpening

Boundary Sharpening in the sample source code has been implemented through the definition of the BoundarySharpen method. The BoundarySharpen extension method targets the Bitmap class. The following code snippet provides the definition:

private static Bitmap 
BoundarySharpen(this Bitmap sourceBitmap, 
                bool[,] se, bool applyBlue = true, 
                bool applyGreen = true, bool applyRed = true) 
{
    Bitmap resultBitmap = 
           sourceBitmap.BoundaryExtraction(se, applyBlue, 
                                           applyGreen, applyRed); 

    resultBitmap = sourceBitmap.MorphologyOperation(se,
                   MorphologyOperationType.Dilation, applyBlue,
                   applyGreen, applyRed).AddImage(resultBitmap);

    return resultBitmap;
}
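
In effect the sharpened result equates to a dilated copy of the source image with the extracted boundaries added back onto it, expressing edges at a brighter intensity than the surrounding gradients.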

Parrot: Boundary Extraction, 3×3, Green


Implementing Image Boundary Tracing

Boundary Tracing has been defined through the BoundaryTrace extension method, which targets the Bitmap class. Similar to the BoundarySharpen method this method performs Boundary Extraction, the result of which is subtracted from the original source image. Subtracting boundaries/edges results in those boundaries/edges being darkened, or traced. The definition of the BoundaryTrace method is detailed as follows:

private static Bitmap
BoundaryTrace(this Bitmap sourceBitmap, 
              bool[,] se, bool applyBlue = true, 
              bool applyGreen = true, bool applyRed = true) 
{
    Bitmap resultBitmap =
    sourceBitmap.BoundaryExtraction(se, applyBlue,  
                                    applyGreen, applyRed); 

    resultBitmap = sourceBitmap.SubtractImage(resultBitmap);

    return resultBitmap;
}

Parrot: Boundary Extraction, 3×3, Green, Blue


Implementing a Wrapper Method

The BoundaryExtractionFilter method is the only method defined as publicly accessible. Following convention, this method’s definition signals the method as an extension method targeting the Bitmap class. This method has the intention of acting as a wrapper method, a single method capable of performing Boundary Extraction, Boundary Sharpening and Boundary Tracing, depending on method parameters.

The definition of the BoundaryExtractionFilter method is detailed by the following code snippet:

public static Bitmap
BoundaryExtractionFilter(this Bitmap sourceBitmap, 
                         bool[,] se, BoundaryExtractionFilterType  
                         filterType, bool applyBlue = true, 
                         bool applyGreen = true, bool applyRed = true) 
{
    Bitmap resultBitmap = null; 

    if (filterType == BoundaryExtractionFilterType.BoundaryExtraction)
    {
        resultBitmap = sourceBitmap.BoundaryExtraction(se, applyBlue,
                                                       applyGreen, applyRed);
    }
    else if (filterType == BoundaryExtractionFilterType.BoundarySharpen)
    {
        resultBitmap = sourceBitmap.BoundarySharpen(se, applyBlue,
                                                    applyGreen, applyRed);
    }
    else if (filterType == BoundaryExtractionFilterType.BoundaryTrace)
    {
        resultBitmap = sourceBitmap.BoundaryTrace(se, applyBlue,
                                                  applyGreen, applyRed);
    }

    return resultBitmap;
}
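
By way of illustration, the following snippet shows how the wrapper might be invoked. The 3×3 structuring element, variable names and file paths are assumptions made for this example and do not form part of the sample source code:

bool[,] structuringElement =
{
    { true, true, true },
    { true, true, true },
    { true, true, true }
};

// Hypothetical source file, loaded from the local system.
Bitmap sourceBitmap = new Bitmap("parrot.png");

Bitmap tracedBitmap = sourceBitmap.BoundaryExtractionFilter(
                      structuringElement,
                      BoundaryExtractionFilterType.BoundaryTrace);

tracedBitmap.Save("parrot_boundary_trace.png", ImageFormat.Png);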

Parrot: Boundary Extraction, 3×3, Red, Green, Blue


Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following images feature as sample images:

1280px-Ara_macao_-Diergaarde_Blijdorp_-flying-8a

Ara_macao_-flying_away-8a

Ara_ararauna_Luc_Viatour

1280px-Macaws_at_Seaport_Village_-USA-8a

Ara_macao_-on_a_small_bicycle-8

Psarisomus_dalhousiae_-_Kaeng_Krachan

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Image ASCII Art

Article Purpose

This article explores the concept of rendering ASCII art from source images. Beyond exploring concepts this article also provides a practical implementation of all the steps required in creating an ASCII Filter.

Sir Tim Berners-Lee: 2 Pixels Per Character, 12 Characters,  Font Size 4, Zoom 100


Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the Sample Application

The sample source code that accompanies this article includes a sample application. The concepts illustrated in this article can be tested and replicated using the sample application.

The sample application user interface implements a variety of functionality which can be described as follows:

  • Loading/Saving Images – Users are able to load source/input images from the local system through clicking the Load Image button. Rendered images can be saved as an image file when clicking the Save Image button.
  • Pixels Per Character – This configuration option determines the number of pixels represented by a single character. Lower values result in better detail/definition and a larger proportional output. Higher values result in less detail/definition and a smaller proportional output.
  • Character Count – The number of unique characters to be rendered can be adjusted through updating this value. This value can be considered similar to the number of shades of gray in a grayscale image.
  • Font Size – This option determines the font size applied to the rendered text.
  • Zoom Level – Configure this value in order to apply a scaling level when rendering text.
  • Copy to Clipboard – Copy the current rendered text to the Windows Clipboard, in Rich Text Format.

The following image is a screenshot of the Image ASCII Art sample application in action:

Image ASCII Art Sample Application


Anders Hejlsberg: 1 Pixel Per Character, 24 Characters, Font Size 6, Zoom 20


Converting Pixels to Characters

ASCII art in various forms has been part of computer culture since the pioneering days of computing. From Wikipedia we gain the following definition:

ASCII art is a graphic design technique that uses computers for presentation and consists of pictures pieced together from the 95 printable (from a total of 128) characters defined by the ASCII Standard from 1963 and ASCII compliant character sets with proprietary extended characters (beyond the 128 characters of standard 7-bit ASCII). The term is also loosely used to refer to text based visual art in general. ASCII art can be created with any text editor, and is often used with free-form languages. Most examples of ASCII art require a fixed-width font (non-proportional fonts, as on a traditional typewriter) such as Courier for presentation.

Among the oldest known examples of ASCII art are the creations by computer-art pioneer Kenneth Knowlton from around 1966, who was working for Bell Labs at the time.[1] "Studies in Perception I" by Ken Knowlton and Leon Harmon from 1966 shows some examples of their early ASCII art.[2]

One of the main reasons ASCII art was born was because early printers often lacked graphics ability and thus characters were used in place of graphic marks. Also, to mark divisions between different print jobs from different users, bulk printers often used ASCII art to print large banners, making the division easier to spot so that the results could be more easily separated by a computer operator or clerk. ASCII art was also used in early e-mail when images could not be embedded.

Bjarne Stroustrup: 1 Pixel Per Character, 12 Characters, Font Size 6, Zoom 60


This article explores the steps involved in rendering text representing images, implementing source/input images in rendering text representations. The following section details the steps required to render text from source/input images:

  1. Generate Random Characters – Generate a string consisting of random characters. The number of characters will be determined through user input relating to the Character Count option. When generating the random string ensure that all characters added to the string are unique. In addition avoid adding control characters or punctuation characters. Control characters are non-visible characters such as Start of Text, Beep, New Line or Carriage Return. Most punctuation characters occupy a lot less screen space compared to regular alphabet characters.
  2. Determine Row and Column Count – Rows and columns in terms of the Pixels Per Character option indicate the ratio between pixels and characters. The number of rows equates to the image height in pixels divided by the Pixels Per Character value. The number of columns equates to the image width in pixels divided by the Pixels Per Character value.
  3. Iterate Rows/Columns and Determine Colour Averages – Iterate pixels in terms of a rows and columns grid strategy. Calculate the sum total of each grid region’s colour values. Calculate the average/mean colour value through dividing the colour sum total by the Pixels Per Character value squared.
  4. Map Average Colour Intensity to a Character – Using the average colour values calculated in the previous step, calculate a colour intensity value ranging between 0 and the number of randomly generated characters. The intensity value should be implemented as an array index in accessing the string of random characters. All of the pixels included in calculating an average value will be represented by the random character located at the index equating to the colour average intensity value, as sketched directly after this list.
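
As a minimal arithmetic sketch of the final step, assume a pixel block averages to an intensity of 150 and that 12 random characters were generated; the variable names and character string below are illustrative only:

// Average colour intensity of a pixel block, range 0..255.
int averageIntensity = 150;

// Hypothetical string of 12 randomly generated characters.
string randomCharacters = "KWROBTXEMNPA";

// Scale the 0..255 average onto the 0..11 index range.
int index = averageIntensity * (randomCharacters.Length - 1) / 255;  // 6

char mappedCharacter = randomCharacters[index];  // 'X'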

Linus Torvalds: 1 Pixel Per Character, 16 Characters, Font Size 5, Zoom 60


Converting Text to an Image

When rendering high definition images the resulting text can easily consist of several thousand characters. Attempting to display such a vast number of characters in a traditional text editor would in most scenarios be futile. An alternative method of retaining a high level of definition whilst still being viewable can be achieved through creating an image from the rendered text and then reducing the image dimensions.

The sample code employs the following steps when converting rendered text to an image:

  1. Determine Required Image Dimensions – Determine the dimensions required to fit the rendered text.
  2. Create a new Image and set the background colour – After having determined the required dimensions create a new image of those dimensions. Set every pixel in the new image to black.
  3. Draw Rendered Text – The rendered text should be drawn on the new image in plain white.
  4. Resize Image – In order to ensure more manageable dimensions resize the image by a specified factor.

Alan Turing: 1 Pixel Per Character, 16 Characters, Font Size 4, Zoom 100


Implementing an Image ASCII Filter

The sample source code implements four methods in creating an ASCII Filter, namely:

  • ASCIIFilter
  • GenerateRandomString
  • RandomStringSort
  • GetColorCharacter

The GenerateRandomString method, as the name implies, generates a string consisting of randomly selected characters. The number of characters contained in the string will be determined by the parameter value passed to this method. The following code snippet provides the implementation of the GenerateRandomString method:

private static string GenerateRandomString(int maxSize)
{
    StringBuilder stringBuilder = new StringBuilder(maxSize);
    Random randomChar = new Random();

    char charValue;

    for (int k = 0; k < maxSize; k++)
    {
        charValue = (char)(Math.Floor(255 * randomChar.NextDouble() * 4));

        // If the character already features in the string,
        // scale it down to another candidate value.
        if (stringBuilder.ToString().IndexOf(charValue) != -1)
        {
            charValue = (char)(Math.Floor((byte)charValue *
                                          randomChar.NextDouble()));
        }

        // Only unique, visible, non-punctuation characters are appended;
        // otherwise re-seed and retry the current iteration.
        if (Char.IsControl(charValue) == false &&
            Char.IsPunctuation(charValue) == false &&
            stringBuilder.ToString().IndexOf(charValue) == -1)
        {
            stringBuilder.Append(charValue);
            randomChar = new Random((int)((byte)charValue *
                                    (k + 1) + DateTime.Now.Ticks));
        }
        else
        {
            randomChar = new Random((int)((byte)charValue *
                                    (k + 1) + DateTime.UtcNow.Ticks));
            k -= 1;
        }
    }

    return stringBuilder.ToString().RandomStringSort();
}
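
Notice that the method re-seeds its Random instance on every iteration and repeats an iteration whenever a duplicate, control or punctuation character is produced. Uniqueness is therefore guaranteed, at the cost of a non-deterministic number of iterations.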

Sir Tim Berners-Lee: 4 Pixels Per Character, 16 Characters, Font Size 6, Zoom 100


The RandomStringSort method has been defined as an extension method targeting the String class. This method provides a means of sorting a string in a random manner, in essence shuffling a string’s characters. The definition is as follows:

public static string RandomStringSort(this string stringValue)
{
    char[] charArray = stringValue.ToCharArray();

    Random randomIndex = new Random((byte)charArray[0]);
    int iterator = charArray.Length;

    // Fisher-Yates style shuffle: swap each element with a randomly
    // selected element at an equal or lower index.
    while (iterator > 1)
    {
        iterator -= 1;

        int nextIndex = randomIndex.Next(iterator + 1);

        char nextValue = charArray[nextIndex];
        charArray[nextIndex] = charArray[iterator];
        charArray[iterator] = nextValue;
    }

    return new string(charArray);
}

Anders Hejlsberg: 3 Pixels Per Character, 12 Characters, Font Size 5, Zoom 50


The sample source code defines the GetColorCharacter method, intended to map pixel colour values to characters. The definition is as follows:

private static string colorCharacters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";

private static string GetColorCharacter(int blue, int green, int red)
{
    string colorChar = String.Empty;

    // Map the average pixel intensity (0..255) onto an index
    // within the available character set.
    int intensity = (blue + green + red) / 3 *
                    (colorCharacters.Length - 1) / 255;

    colorChar = colorCharacters.Substring(intensity, 1).ToUpper();
    colorChar += colorChar.ToLower();

    return colorChar;
}
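
Notice that GetColorCharacter returns the selected character twice, once in upper case and once in lower case. Since most fixed-width characters render taller than they are wide, emitting two characters per pixel block presumably helps preserve the source image’s aspect ratio in the rendered text.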

Bjarne Stroustrup: 1 Pixel Per Character, 12 Characters, Font Size 4, Zoom 100


The ASCIIFilter method defined by the sample source code has the task of translating source/input images into text-based ASCII art. This method has been defined as an extension method targeting the Bitmap class. The following code snippet provides the definition:

public static string ASCIIFilter(this Bitmap sourceBitmap, int pixelBlockSize,
                                                           int colorCount = 0)
{
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
                            sourceBitmap.Width, sourceBitmap.Height),
                                              ImageLockMode.ReadOnly,
                                        PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    StringBuilder asciiArt = new StringBuilder();

    int avgBlue = 0, avgGreen = 0, avgRed = 0, offset = 0;

    int rows = sourceBitmap.Height / pixelBlockSize;
    int columns = sourceBitmap.Width / pixelBlockSize;

    if (colorCount > 0)
    {
        colorCharacters = GenerateRandomString(colorCount);
    }

    for (int y = 0; y < rows; y++)
    {
        for (int x = 0; x < columns; x++)
        {
            avgBlue = 0;
            avgGreen = 0;
            avgRed = 0;

            // Sum the colour channels of every pixel in the current block.
            for (int pY = 0; pY < pixelBlockSize; pY++)
            {
                for (int pX = 0; pX < pixelBlockSize; pX++)
                {
                    offset = y * pixelBlockSize * sourceData.Stride +
                             x * pixelBlockSize * 4;

                    offset += pY * sourceData.Stride;
                    offset += pX * 4;

                    avgBlue += pixelBuffer[offset];
                    avgGreen += pixelBuffer[offset + 1];
                    avgRed += pixelBuffer[offset + 2];
                }
            }

            // Average each channel over the block.
            avgBlue = avgBlue / (pixelBlockSize * pixelBlockSize);
            avgGreen = avgGreen / (pixelBlockSize * pixelBlockSize);
            avgRed = avgRed / (pixelBlockSize * pixelBlockSize);

            asciiArt.Append(GetColorCharacter(avgBlue, avgGreen, avgRed));
        }

        asciiArt.Append("\r\n");
    }

    return asciiArt.ToString();
}

Linus Torvalds: 1 Pixel Per Character, 8 Characters, Font Size 4, Zoom 80


Implementing Text to Image Functionality

The sample source code implements the GDI+ Graphics class when drawing rendered text onto a Bitmap. The sample source code defines the TextToImage method, an extension method extending the String class. The definition is listed as follows:

public static Bitmap TextToImage(this string text, Font font,
                                                float factor)
{
    Bitmap textBitmap = new Bitmap(1, 1);

    // Measure the rendered text using a temporary 1x1 bitmap.
    Graphics graphics = Graphics.FromImage(textBitmap);

    int width = (int)Math.Ceiling(
                graphics.MeasureString(text, font).Width * factor);

    int height = (int)Math.Ceiling(
                 graphics.MeasureString(text, font).Height * factor);

    graphics.Dispose();

    textBitmap = new Bitmap(width, height, PixelFormat.Format32bppArgb);

    graphics = Graphics.FromImage(textBitmap);
    graphics.Clear(Color.Black);

    graphics.CompositingQuality = CompositingQuality.HighQuality;
    graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
    graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
    graphics.SmoothingMode = SmoothingMode.HighQuality;
    graphics.TextRenderingHint = TextRenderingHint.AntiAliasGridFit;

    graphics.ScaleTransform(factor, factor);
    graphics.DrawString(text, font, Brushes.White, new PointF(0, 0));

    graphics.Flush();
    graphics.Dispose();

    return textBitmap;
}
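
Tying the methods together, a hypothetical end-to-end invocation could resemble the following; the file paths, font selection and parameter values are assumptions made for this example and do not form part of the sample source code:

// Load a hypothetical source image from the local system.
Bitmap sourceBitmap = new Bitmap("input.png");

// Render 2x2 pixel blocks to characters, using 12 random characters.
string asciiArt = sourceBitmap.ASCIIFilter(2, 12);

// Draw the rendered text using a fixed-width font, scaled to 60%.
using (Font font = new Font("Courier New", 4))
{
    Bitmap asciiBitmap = asciiArt.TextToImage(font, 0.6f);
    asciiBitmap.Save("output.png", ImageFormat.Png);
}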

Sir Tim Berners-Lee: 1 Pixel Per Character, 32 Characters, Font Size 4, Zoom 100


Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following section lists the original image files that were used as source/input images in generating the images found throughout this article.

Alan Turing

Anders Hejlsberg

Bjarne Stroustrup

Linus Torvalds

Tim Berners-Lee

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

