C# How to: Image Edge Detection

Article Purpose

The objective of this article is to explore various edge detection algorithms. The types of edge detection discussed are: Laplacian, Laplacian of Gaussian, Sobel, Prewitt and Kirsch. All instances are implemented by means of image convolution.

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download.

Using the Sample Application

The concepts explored in this article can be easily replicated by making use of the Sample Application, which forms part of the associated sample source code accompanying this article.

When using the Image Edge Detection sample application you can specify an input/source image by clicking the Load Image button. The dropdown towards the bottom middle part of the screen lists the various edge detection methods discussed in this article.

If desired a user can save the resulting image to the local file system by clicking the Save Image button.

The following image is screenshot of the Image Edge Detection sample application in action:

Image Edge Detection Sample Application

Edge Detection

A good description of edge detection forms part of the Wikipedia article on Edge detection:

Edge detection is the name for a set of mathematical methods which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in 1D signals is known as step detection and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision and computer vision, particularly in the areas of feature detection and feature extraction.

Image Convolution

A good introduction article to image convolution can be found at: http://homepages.inf.ed.ac.uk/rbf/HIPR2/convolve.htm. From the article we learn the following:

Convolution is a simple mathematical operation which is fundamental to many common image processing operators. Convolution provides a way of `multiplying together’ two arrays of numbers, generally of different sizes, but of the same dimensionality, to produce a third array of numbers of the same dimensionality. This can be used in image processing to implement operators whose output pixel values are simple linear combinations of certain input pixel values.

In an image processing context, one of the input arrays is normally just a graylevel image. The second array is usually much smaller, and is also two-dimensional (although it may be just a single pixel thick), and is known as the kernel.
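To make the weighted-sum idea concrete, the following minimal sketch, which is not part of the accompanying sample project, convolves a single 3×3 pixel neighbourhood with a 3×3 kernel. The kernel and pixel values are made-up illustrative numbers.

// Illustrative only: convolve a single 3x3 neighbourhood with a 3x3 kernel.
// The kernel and pixel values are example numbers, not taken from the sample project.
private static double ConvolveSinglePixel()
{
    double[,] kernel =
    {
        { 1, 2, 1 },
        { 2, 4, 2 },
        { 1, 2, 1 }
    };

    double[,] neighbourhood =
    {
        { 10,  10, 10 },
        { 10, 200, 10 },
        { 10,  10, 10 }
    };

    double sum = 0;

    for (int y = 0; y < 3; y++)
    {
        for (int x = 0; x < 3; x++)
        {
            sum += neighbourhood[y, x] * kernel[y, x];
        }
    }

    // Scale by 1/16, the sum of the kernel elements, so that a completely
    // flat neighbourhood would keep its original brightness.
    return sum * (1.0 / 16.0);
}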

Single Matrix Convolution

The sample source code implements the ConvolutionFilter method, which operates on Bitmap objects. The ConvolutionFilter method is intended to apply a user defined kernel matrix and optionally convert an image to grayscale. The factor and bias parameters respectively scale and offset the summed result, which the Gaussian-based filters later in this article use to normalise kernels whose elements do not sum to one. Note that pixels closer to the image edge than the kernel radius are not processed and remain unset in the result. The implementation is as follows:

private static Bitmap ConvolutionFilter(Bitmap sourceBitmap, 
                                     double[,] filterMatrix, 
                                          double factor = 1, 
                                               int bias = 0, 
                                     bool grayscale = false) 
{
    BitmapData sourceData = 
                   sourceBitmap.LockBits(new Rectangle(0, 0,
                   sourceBitmap.Width, sourceBitmap.Height),
                                     ImageLockMode.ReadOnly, 
                                PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    if (grayscale == true)
    {
        float rgb = 0;

        for (int k = 0; k < pixelBuffer.Length; k += 4)
        {
            rgb = pixelBuffer[k] * 0.11f;
            rgb += pixelBuffer[k + 1] * 0.59f;
            rgb += pixelBuffer[k + 2] * 0.3f;

            pixelBuffer[k] = (byte)rgb;
            pixelBuffer[k + 1] = pixelBuffer[k];
            pixelBuffer[k + 2] = pixelBuffer[k];
            pixelBuffer[k + 3] = 255;
        }
    }

    double blue = 0.0, green = 0.0, red = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0, byteOffset = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0; green = 0; red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                    blue += (double)(pixelBuffer[calcOffset]) * filterMatrix[filterY + filterOffset, filterX + filterOffset];
                    green += (double)(pixelBuffer[calcOffset + 1]) * filterMatrix[filterY + filterOffset, filterX + filterOffset];
                    red += (double)(pixelBuffer[calcOffset + 2]) * filterMatrix[filterY + filterOffset, filterX + filterOffset];
                }
            }

            blue = factor * blue + bias;
            green = factor * green + bias;
            red = factor * red + bias;

            if (blue > 255) { blue = 255; } else if (blue < 0) { blue = 0; }
            if (green > 255) { green = 255; } else if (green < 0) { green = 0; }
            if (red > 255) { red = 255; } else if (red < 0) { red = 0; }

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Horizontal and Vertical Matrix Convolution

The ConvolutionFilter method has been overloaded to accept two kernel matrices, representing a vertical and a horizontal filter. The two filter responses are combined per colour channel by calculating the gradient magnitude, that is the square root of the sum of the squared horizontal and vertical results. The implementation is as follows:

public static Bitmap ConvolutionFilter(this Bitmap sourceBitmap,
                                        double[,] xFilterMatrix,
                                        double[,] yFilterMatrix,
                                              double factor = 1,
                                                   int bias = 0,
                                         bool grayscale = false)
{
    BitmapData sourceData = 
                   sourceBitmap.LockBits(new Rectangle(0, 0,
                   sourceBitmap.Width, sourceBitmap.Height),
                                     ImageLockMode.ReadOnly,
                                PixelFormat.Format32bppArgb);

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    if (grayscale == true)
    {
        float rgb = 0;

        for (int k = 0; k < pixelBuffer.Length; k += 4)
        {
            rgb = pixelBuffer[k] * 0.11f;
            rgb += pixelBuffer[k + 1] * 0.59f;
            rgb += pixelBuffer[k + 2] * 0.3f;

            pixelBuffer[k] = (byte)rgb;
            pixelBuffer[k + 1] = pixelBuffer[k];
            pixelBuffer[k + 2] = pixelBuffer[k];
            pixelBuffer[k + 3] = 255;
        }
    }

    double blueX = 0.0, greenX = 0.0, redX = 0.0;
    double blueY = 0.0, greenY = 0.0, redY = 0.0;
    double blueTotal = 0.0, greenTotal = 0.0, redTotal = 0.0;

    int filterOffset = 1;
    int calcOffset = 0, byteOffset = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            blueX = greenX = redX = 0;
            blueY = greenY = redY = 0;
            blueTotal = greenTotal = redTotal = 0.0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                    blueX += (double)(pixelBuffer[calcOffset]) * xFilterMatrix[filterY + filterOffset, filterX + filterOffset];
                    greenX += (double)(pixelBuffer[calcOffset + 1]) * xFilterMatrix[filterY + filterOffset, filterX + filterOffset];
                    redX += (double)(pixelBuffer[calcOffset + 2]) * xFilterMatrix[filterY + filterOffset, filterX + filterOffset];

                    blueY += (double)(pixelBuffer[calcOffset]) * yFilterMatrix[filterY + filterOffset, filterX + filterOffset];
                    greenY += (double)(pixelBuffer[calcOffset + 1]) * yFilterMatrix[filterY + filterOffset, filterX + filterOffset];
                    redY += (double)(pixelBuffer[calcOffset + 2]) * yFilterMatrix[filterY + filterOffset, filterX + filterOffset];
                }
            }

            blueTotal = Math.Sqrt((blueX * blueX) + (blueY * blueY));
            greenTotal = Math.Sqrt((greenX * greenX) + (greenY * greenY));
            redTotal = Math.Sqrt((redX * redX) + (redY * redY));

            if (blueTotal > 255) { blueTotal = 255; } else if (blueTotal < 0) { blueTotal = 0; }
            if (greenTotal > 255) { greenTotal = 255; } else if (greenTotal < 0) { greenTotal = 0; }
            if (redTotal > 255) { redTotal = 255; } else if (redTotal < 0) { redTotal = 0; }

            resultBuffer[byteOffset] = (byte)(blueTotal);
            resultBuffer[byteOffset + 1] = (byte)(greenTotal);
            resultBuffer[byteOffset + 2] = (byte)(redTotal);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Original Sample Image

The original source image used to create all of the sample images in this article has been licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic license. The original image is attributed to Kenneth Dwain Harrelson and can be downloaded from Wikipedia.

Monarch_In_May

Laplacian Edge Detection

The Laplacian method of edge detection counts as one of the commonly used implementations. From Wikipedia we gain the following definition:

Discrete Laplace operator is often used in image processing e.g. in edge detection and motion estimation applications. The discrete Laplacian is defined as the sum of the second derivatives and calculated as sum of differences over the nearest neighbours of the central pixel.

A number of kernel matrix variations may be applied, with results ranging from slight to fairly pronounced edge detection. In the following sections of this article we explore two common Laplacian kernel sizes, 3×3 and 5×5.

Laplacian 3×3

When implementing a 3×3 Laplacian filter you will notice little difference between colour and grayscale result images. The 3×3 kernel defined below expresses the quoted definition directly: eight times the centre pixel less the sum of its eight neighbours.

public static Bitmap 
Laplacian3x3Filter(this Bitmap sourceBitmap, 
                      bool grayscale = true)
{
    Bitmap resultBitmap = 
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                                Matrix.Laplacian3x3,
                                  1.0, 0, grayscale);

    return resultBitmap;
}
public static double[,] Laplacian3x3
{ 
   get   
   { 
       return new double[,]
       { { -1, -1, -1, },  
         { -1,  8, -1, },  
         { -1, -1, -1, }, }; 
   } 
} 
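As a usage illustration, the extension methods above can be invoked on any Bitmap instance. The following minimal sketch is not part of the sample project; the file paths are hypothetical placeholders, and the System.Drawing, System.Drawing.Imaging and ExtBitmap namespaces are assumed to be in scope.

// Hypothetical usage; "input.jpg" and "edges.png" are placeholder file paths and
// this helper is not part of the accompanying sample project.
public static void SaveLaplacianEdges()
{
    using (Bitmap sourceBitmap = new Bitmap("input.jpg"))
    using (Bitmap edgeBitmap = sourceBitmap.Laplacian3x3Filter(grayscale: true))
    {
        edgeBitmap.Save("edges.png", ImageFormat.Png);
    }
}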

Laplacian 3×3

Laplacian 3x3

Laplacian 3×3 Grayscale

Laplacian 3x3 Grayscale

Laplacian 5×5

The 5×5 Laplacian filter produces result images with a noticeable difference between colour and grayscale implementations. The detected edges are expressed in a fair amount of fine detail, although the filter has a tendency to be sensitive to image noise.

public static Bitmap 
Laplacian5x5Filter(this Bitmap sourceBitmap, 
                      bool grayscale = true)
{
    Bitmap resultBitmap =
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                                Matrix.Laplacian5x5,
                                  1.0, 0, grayscale);

    return resultBitmap;
}
public static double[,] Laplacian5x5 
{ 
    get   
    { 
       return new double[,]
       { { -1, -1, -1, -1, -1, },  
         { -1, -1, -1, -1, -1, },  
         { -1, -1, 24, -1, -1, },  
         { -1, -1, -1, -1, -1, },  
         { -1, -1, -1, -1, -1  } }; 
    } 
}

Laplacian 5×5

Laplacian 5x5

Laplacian 5×5 Grayscale

Laplacian 5x5 Grayscale

Laplacian of Gaussian

The Laplacian of Gaussian (LoG) filter is a common variation of the Laplacian filter. LoG is intended to counter the noise sensitivity of the regular Laplacian filter.

LoG attempts to remove image noise by implementing smoothing by means of a Gaussian blur. In order to optimize performance we can calculate a single matrix representing a Gaussian blur and a Laplacian filter combined, so that only one convolution pass is required.
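The LaplacianOfGaussian matrix used below is supplied as a ready-made kernel. Purely as an illustration of how such a combined kernel could be derived, the following sketch, which is not part of the sample project, convolves one kernel with another to produce a single larger kernel; convolving a 3×3 Gaussian kernel with a 3×3 Laplacian kernel in this way would yield a 5×5 kernel, the same size as the matrix defined below.

// Illustrative sketch: convolving two kernels into one combined kernel.
// Not part of the sample project; a Gaussian kernel convolved with a
// Laplacian kernel yields a Laplacian of Gaussian style kernel.
private static double[,] CombineKernels(double[,] first, double[,] second)
{
    int rows = first.GetLength(0) + second.GetLength(0) - 1;
    int cols = first.GetLength(1) + second.GetLength(1) - 1;

    double[,] combined = new double[rows, cols];

    for (int fy = 0; fy < first.GetLength(0); fy++)
    {
        for (int fx = 0; fx < first.GetLength(1); fx++)
        {
            for (int sy = 0; sy < second.GetLength(0); sy++)
            {
                for (int sx = 0; sx < second.GetLength(1); sx++)
                {
                    combined[fy + sy, fx + sx] += first[fy, fx] * second[sy, sx];
                }
            }
        }
    }

    return combined;
}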

public static Bitmap 
LaplacianOfGaussian(this Bitmap sourceBitmap)
{
    Bitmap resultBitmap =
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                         Matrix.LaplacianOfGaussian, 
                                       1.0, 0, true);

    return resultBitmap;
}
public static double[,] LaplacianOfGaussian
{ 
    get   
    { 
        return new double[,]
        { {  0,  0, -1,  0,  0 },  
          {  0, -1, -2, -1,  0 },  
          { -1, -2, 16, -2, -1 }, 
          {  0, -1, -2, -1,  0 }, 
          {  0,  0, -1,  0,  0 } };
    } 
} 

Laplacian of Gaussian

Laplacian Of Gaussian

Laplacian (3×3) of Gaussian (3×3)

Different kernel variations can be combined in an attempt to produce results best suited to the input image. In this case we first apply a 3×3 Gaussian blur, followed by a 3×3 Laplacian filter. Note the factor of 1.0 / 16.0 passed to the Gaussian pass: the Gaussian3x3 kernel elements sum to 16, so the factor normalises the kernel and preserves overall image brightness.

public static Bitmap 
Laplacian3x3OfGaussian3x3Filter(this Bitmap sourceBitmap)
{
    Bitmap resultBitmap =
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                                 Matrix.Gaussian3x3,
                                1.0 / 16.0, 0, true);

    resultBitmap = ExtBitmap.ConvolutionFilter(resultBitmap, 
                                 Matrix.Laplacian3x3, 
                                        1.0, 0, false);

    return resultBitmap;
}
public static double[,] Laplacian3x3
{ 
   get   
   { 
       return new double[,]
       { { -1, -1, -1, },  
         { -1,  8, -1, },  
         { -1, -1, -1, }, }; 
   } 
} 
public static double[,] Gaussian3x3
{ 
   get   
   { 
       return new double[,]
       { { 1, 2, 1, },  
         { 2, 4, 2, },  
         { 1, 2, 1, } }; 
   } 
} 

Laplacian 3×3 Of Gaussian 3×3

Laplacian 3x3 Of Gaussian 3x3

Laplacian (3×3) of Gaussian (5×5 – Type 1)

In this scenario we apply a variation of a 5×5 Gaussian blur, followed by a 3×3 Laplacian filter. The factor of 1.0 / 159.0 again normalises the Gaussian kernel, whose elements sum to 159.

public static Bitmap 
Laplacian3x3OfGaussian5x5Filter1(this Bitmap sourceBitmap)
{
    Bitmap resultBitmap = 
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                            Matrix.Gaussian5x5Type1,
                               1.0 / 159.0, 0, true);

    resultBitmap = ExtBitmap.ConvolutionFilter(resultBitmap, 
                                 Matrix.Laplacian3x3, 
                                        1.0, 0, false);

    return resultBitmap;
}
public static double[,] Laplacian3x3
{ 
   get   
   { 
       return new double[,]
       { { -1, -1, -1, },  
         { -1,  8, -1, },  
         { -1, -1, -1, }, }; 
   } 
} 
public static double[,] Gaussian5x5Type1 
{ 
   get   
   { 
       return new double[,]   
       { { 2, 04, 05, 04, 2 },  
         { 4, 09, 12, 09, 4 },  
         { 5, 12, 15, 12, 5 }, 
         { 4, 09, 12, 09, 4 }, 
         { 2, 04, 05, 04, 2 }, }; 
   } 
} 

Laplacian 3×3 Of Gaussian 5×5 – Type 1

Laplacian 3x3 Of Gaussian 5x5 Type1

Laplacian (3×3) of Gaussian (5×5 – Type 2)

The following implementation is very similar to the previous implementation. Applying a different variation of the 5×5 Gaussian blur kernel, normalised here by a factor of 1.0 / 256.0, results in slight differences.

public static Bitmap 
Laplacian3x3OfGaussian5x5Filter2(this Bitmap sourceBitmap)
{
    Bitmap resultBitmap = 
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                            Matrix.Gaussian5x5Type2,
                               1.0 / 256.0, 0, true);

    resultBitmap = ExtBitmap.ConvolutionFilter(resultBitmap, 
                                 Matrix.Laplacian3x3, 
                                        1.0, 0, false);

    return resultBitmap;
}
public static double[,] Laplacian3x3
{ 
   get   
   { 
       return new double[,]
       { { -1, -1, -1, },  
         { -1,  8, -1, },  
         { -1, -1, -1, }, }; 
   } 
} 
public static double[,] Gaussian5x5Type2 
{ 
   get   
   {
       return new double[,]  
       { {  1,   4,  6,  4,  1 },  
         {  4,  16, 24, 16,  4 },  
         {  6,  24, 36, 24,  6 }, 
         {  4,  16, 24, 16,  4 }, 
         {  1,   4,  6,  4,  1 }, }; 
   }
} 

Laplacian 3×3 Of Gaussian 5×5 – Type 2

Laplacian 3x3 Of Gaussian 5x5 Type2

Laplacian (5×5) of Gaussian (3×3)

This variation of the LoG filter implements a 3×3 Gaussian blur, followed by a 5×5 Laplacian filter. The resulting image appears significantly brighter when compared to a 3×3 Laplacian of Gaussian.

public static Bitmap 
Laplacian5x5OfGaussian3x3Filter(this Bitmap sourceBitmap)
{
    Bitmap resultBitmap = 
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                                 Matrix.Gaussian3x3,
                                1.0 / 16.0, 0, true);

    resultBitmap = ExtBitmap.ConvolutionFilter(resultBitmap, 
                                 Matrix.Laplacian5x5, 
                                        1.0, 0, false);

    return resultBitmap;
}
public static double[,] Laplacian5x5 
{ 
    get   
    { 
       return new double[,]
       { { -1, -1, -1, -1, -1, },  
         { -1, -1, -1, -1, -1, },  
         { -1, -1, 24, -1, -1, },  
         { -1, -1, -1, -1, -1, },  
         { -1, -1, -1, -1, -1  } }; 
    } 
}
public static double[,] Gaussian3x3
{ 
   get   
   { 
       return new double[,]
       { { 1, 2, 1, },  
         { 2, 4, 2, },  
         { 1, 2, 1, } }; 
   } 
} 

Laplacian 5×5 Of Gaussian 3×3

Laplacian 5x5 Of Gaussian 3x3

Laplacian (5×5) of Gaussian (5×5 – Type 1)

Implementing a larger Gaussian blur kernel results in a higher degree of smoothing, equating to less image noise.

public static Bitmap 
Laplacian5x5OfGaussian5x5Filter1(this Bitmap sourceBitmap)
{
    Bitmap resultBitmap = 
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                            Matrix.Gaussian5x5Type1,
                               1.0 / 159.0, 0, true);

    resultBitmap = ExtBitmap.ConvolutionFilter(resultBitmap, 
                                 Matrix.Laplacian5x5, 
                                        1.0, 0, false);

    return resultBitmap;
}
public static double[,] Laplacian5x5 
{ 
    get   
    { 
       return new double[,]
       { { -1, -1, -1, -1, -1, },  
         { -1, -1, -1, -1, -1, },  
         { -1, -1, 24, -1, -1, },  
         { -1, -1, -1, -1, -1, },  
         { -1, -1, -1, -1, -1  } }; 
    } 
}
public static double[,] Gaussian5x5Type1 
{ 
   get   
   { 
       return new double[,]   
       { { 2, 04, 05, 04, 2 },  
         { 4, 09, 12, 09, 4 },  
         { 5, 12, 15, 12, 5 }, 
         { 4, 09, 12, 09, 4 }, 
         { 2, 04, 05, 04, 2 }, }; 
   } 
} 

Laplacian 5×5 Of Gaussian 5×5 – Type 1

Laplacian 5x5 Of Gaussian 5x5 Type1

Laplacian (5×5) of Gaussian (5×5 – Type 2)

The Gaussian blur variation most applicable when implementing a LoG filter depends on the level of noise expressed by a source image. In this scenario the first variation (Type 1) appears to result in less noise.

public static Bitmap 
Laplacian5x5OfGaussian5x5Filter2(this Bitmap sourceBitmap)
{
    Bitmap resultBitmap = 
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                            Matrix.Gaussian5x5Type2, 
                               1.0 / 256.0, 0, true);

    resultBitmap = ExtBitmap.ConvolutionFilter(resultBitmap, 
                                 Matrix.Laplacian5x5, 
                                        1.0, 0, false);

    return resultBitmap;
}
public static double[,] Laplacian5x5 
{ 
    get   
    { 
       return new double[,]
       { { -1, -1, -1, -1, -1, },  
         { -1, -1, -1, -1, -1, },  
         { -1, -1, 24, -1, -1, },  
         { -1, -1, -1, -1, -1, },  
         { -1, -1, -1, -1, -1  } }; 
    } 
}
public static double[,] Gaussian5x5Type2 
{ 
   get   
   {
       return new double[,]  
       { {  1,   4,  6,  4,  1 },  
         {  4,  16, 24, 16,  4 },  
         {  6,  24, 36, 24,  6 }, 
         {  4,  16, 24, 16,  4 }, 
         {  1,   4,  6,  4,  1 }, }; 
   }
} 

Laplacian 5×5 Of Gaussian 5×5 – Type 2

Laplacian 5x5 Of Gaussian 5x5 Type2

Sobel Edge Detection

Sobel edge detection is another common implementation of edge detection. We gain the following definition from Wikipedia:

The Sobel operator is used in image processing, particularly within edge detection algorithms. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Sobel operator is either the corresponding gradient vector or the norm of this vector. The Sobel operator is based on convolving the image with a small, separable, and integer valued filter in horizontal and vertical direction and is therefore relatively inexpensive in terms of computations. On the other hand, the gradient approximation that it produces is relatively crude, in particular for high frequency variations in the image.

Unlike the Laplacian filters discussed earlier, Sobel filter results differ significantly when comparing colour and grayscale images. The Sobel filter tends to be less sensitive to noise compared to the Laplacian filter. The detected edge lines are not as finely detailed/granular as the edge lines resulting from Laplacian filters.

public static Bitmap 
Sobel3x3Filter(this Bitmap sourceBitmap, 
                  bool grayscale = true)
{
    Bitmap resultBitmap =
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                          Matrix.Sobel3x3Horizontal, 
                            Matrix.Sobel3x3Vertical, 
                                  1.0, 0, grayscale);

    return resultBitmap;
}
 
public static double[,] Sobel3x3Horizontal
{ 
   get   
   {
       return new double[,]  
       { { -1,  0,  1, },  
         { -2,  0,  2, },  
         { -1,  0,  1, }, }; 
   } 
} 
public static double[,] Sobel3x3Vertical 
{ 
   get   
   { 
       return new double[,]  
       { {  1,  2,  1, },  
         {  0,  0,  0, },  
         { -1, -2, -1, }, }; 
   } 
}

Sobel 3×3

Sobel 3x3

Sobel 3×3 Grayscale

Sobel 3x3 Grayscale

Prewitt Edge Detection

As with the other methods of edge detection discussed in this article, the Prewitt method is also a fairly common implementation. From Wikipedia we gain the following quote:

The Prewitt operator is used in image processing, particularly within edge detection algorithms. Technically, it is a discrete differentiation operator, computing an approximation of the gradient of the image intensity function. At each point in the image, the result of the Prewitt operator is either the corresponding gradient vector or the norm of this vector. The Prewitt operator is based on convolving the image with a small, separable, and integer valued filter in horizontal and vertical direction and is therefore relatively inexpensive in terms of computations. On the other hand, the gradient approximation which it produces is relatively crude, in particular for high frequency variations in the image. The Prewitt operator was developed by Judith M. S. Prewitt.

In simple terms, the operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction. The result therefore shows how "abruptly" or "smoothly" the image changes at that point, and therefore how likely it is that that part of the image represents an edge, as well as how that edge is likely to be oriented. In practice, the magnitude (likelihood of an edge) calculation is more reliable and easier to interpret than the direction calculation.

Similar to the Sobel filter, resulting images express a significant difference when comparing colour and grayscale implementations.

public static Bitmap 
PrewittFilter(this Bitmap sourceBitmap, 
                 bool grayscale = true)
{
    Bitmap resultBitmap =
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                        Matrix.Prewitt3x3Horizontal, 
                          Matrix.Prewitt3x3Vertical, 
                                  1.0, 0, grayscale);

    return resultBitmap;
}
public static double[,] Prewitt3x3Horizontal 
{ 
   get   
   { 
       return new double[,]  
       { { -1,  0,  1, },  
         { -1,  0,  1, },  
         { -1,  0,  1, }, }; 
   } 
} 
  
public static double[,] Prewitt3x3Vertical 
{ 
   get   
   { 
       return new double[,]  
       { {  1,  1,  1, },  
         {  0,  0,  0, },  
         { -1, -1, -1, }, }; 
   }
}

Prewitt

Prewitt

Prewitt Grayscale

Prewitt Grayscale

Kirsch Edge Detection

The Kirsch method is often implemented in the form of compass edge detection, in which a single kernel is rotated through the eight compass directions and the strongest response per pixel is kept; a sketch of the kernel rotation follows the kernel definitions below. In the following scenario we only implement two components, horizontal and vertical, combined as a gradient magnitude by the two-kernel ConvolutionFilter overload. Resulting images tend to have a high level of brightness.

public static Bitmap 
KirschFilter(this Bitmap sourceBitmap, 
                bool grayscale = true)
{
    Bitmap resultBitmap =
           ExtBitmap.ConvolutionFilter(sourceBitmap, 
                         Matrix.Kirsch3x3Horizontal, 
                           Matrix.Kirsch3x3Vertical, 
                                  1.0, 0, grayscale);

    return resultBitmap;
}
public static double[,] Kirsch3x3Horizontal 
{ 
   get   
   {
       return new double[,]  
       { {  5,  5,  5, },  
         { -3,  0, -3, },  
         { -3, -3, -3, }, }; 
   } 
} 
public static double[,] Kirsch3x3Vertical
{ 
   get   
   { 
       return new double[,]  
       { {  5, -3, -3, },  
         {  5,  0, -3, },  
         {  5, -3, -3, }, }; 
   } 
}
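For completeness, the following sketch, which is not part of the sample project, illustrates how the full compass variant could derive the remaining kernels: the outer ring of a 3×3 kernel is stepped around its centre in 45 degree increments, after which the per-pixel maximum over all eight responses would be taken.

// Illustrative sketch of compass kernel rotation; not part of the sample project.
// Rotates the outer ring of a 3x3 kernel one 45-degree step clockwise around
// the centre element, turning one compass kernel into the next direction.
private static double[,] RotateKernel45(double[,] kernel)
{
    // The eight outer cells of a 3x3 kernel, listed clockwise from the top-left.
    int[,] ring =
    {
        { 0, 0 }, { 0, 1 }, { 0, 2 }, { 1, 2 },
        { 2, 2 }, { 2, 1 }, { 2, 0 }, { 1, 0 }
    };

    double[,] rotated = new double[3, 3];
    rotated[1, 1] = kernel[1, 1];

    for (int i = 0; i < 8; i++)
    {
        int next = (i + 1) % 8;
        rotated[ring[next, 0], ring[next, 1]] = kernel[ring[i, 0], ring[i, 1]];
    }

    return rotated;
}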

Kirsch

Kirsch 

Kirsch Grayscale

Kirsch Grayscale

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; you can find links to them here:

C# How to: Bitmap Colour Substitution implementing thresholds

Article Purpose

This article is aimed at detailing how to implement the process of substituting the colour values that form part of an image. Colour substitution is implemented by means of a threshold value. By implementing a threshold, a range of similar colours can be substituted.

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

The provided sample source code builds a Windows Forms application which can be used to test/implement the concepts described in this article. The sample application enables the user to load an image file from the file system; the user can then specify the colour to replace, the replacement colour and the threshold to apply. The following image is a screenshot of the sample application in action.

BitmapColourSubstitution_Scaled

The scenario detailed in the above screenshot shows the sample application being used to create an image where the sky has more of a bluish hue when compared to the original image.

Notice how the replacement colour does not simply appear as a solid colour applied throughout. The replacement colour is implemented matching the intensity of the colour being substituted.

The colour filter options:

FilterOptions

The colour to replace was taken from the original image; the replacement colour is specified through a colour picker dialog. When a user clicks on either image displayed, the colour of the pixel clicked on sets the value of the replacement colour. By adjusting the threshold value the user can specify how wide or narrow the range of colours to replace should be. The higher the threshold value, the wider the range of colours that will be replaced.

The resulting image can be saved by clicking the “Save Result” button. In order to apply another colour substitution on the resulting image click the button labelled “Set Result as Source”.

Colour Substitution Filter Data

The sample source code provides the definition for the ColorSubstitutionFilter class. The purpose of this class is to contain data required when applying colour substitution. The ColorSubstitutionFilter class is defined as follows:

public class ColorSubstitutionFilter
{
    private int thresholdValue = 10;
    public int ThresholdValue
    {
        get { return thresholdValue; }
        set { thresholdValue = value; }
    }

    private Color sourceColor = Color.White;
    public Color SourceColor
    {
        get { return sourceColor; }
        set { sourceColor = value; }
    }

    private Color newColor = Color.White;
    public Color NewColor
    {
        get { return newColor; }
        set { newColor = value; }
    }
}

To implement a colour substitution filter we first have to create an object instance of type ColorSubstitutionFilter. A colour substitution requires specifying a SourceColor, which is the colour to replace/substitute, and a NewColor, which defines the colour that will replace the SourceColor. Also required is a ThresholdValue, which determines a range of colours based on the SourceColor.
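As a brief illustration, a filter definition could be set up as follows before being passed to the ColorSubstitution method described in the next section. The colour values match the light blue example shown later in this article; the threshold value is chosen arbitrarily for illustration.

// Example filter data; Color.FromArgb takes alpha, red, green, blue.
ColorSubstitutionFilter filterData = new ColorSubstitutionFilter();

filterData.SourceColor = Color.FromArgb(255, 255, 223, 224);  // colour to replace
filterData.NewColor = Color.FromArgb(255, 121, 188, 255);     // replacement colour
filterData.ThresholdValue = 30;                               // arbitrary illustrative threshold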

Colour Substitution implemented as an Extension method

The sample source code defines the ColorSubstitution extension method, which targets the Bitmap class. Invoking the ColorSubstitution method requires passing a parameter of type ColorSubstitutionFilter, which defines how colour substitution is to be implemented. The following code snippet contains the definition of the ColorSubstitution method.

public static Bitmap ColorSubstitution(this Bitmap sourceBitmap, ColorSubstitutionFilter filterData)
{
    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height, PixelFormat.Format32bppArgb);

    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    byte[] resultBuffer = new byte[resultData.Stride * resultData.Height];
    Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);

    sourceBitmap.UnlockBits(sourceData);

    byte sourceRed = 0, sourceGreen = 0, sourceBlue = 0, sourceAlpha = 0;
    int resultRed = 0, resultGreen = 0, resultBlue = 0;

    byte newRedValue = filterData.NewColor.R;
    byte newGreenValue = filterData.NewColor.G;
    byte newBlueValue = filterData.NewColor.B;

    byte redFilter = filterData.SourceColor.R;
    byte greenFilter = filterData.SourceColor.G;
    byte blueFilter = filterData.SourceColor.B;

    byte minValue = 0;
    byte maxValue = 255;

    for (int k = 0; k < resultBuffer.Length; k += 4)
    {
        sourceAlpha = resultBuffer[k + 3];

        if (sourceAlpha != 0)
        {
            sourceBlue = resultBuffer[k];
            sourceGreen = resultBuffer[k + 1];
            sourceRed = resultBuffer[k + 2];

            if ((sourceBlue < blueFilter + filterData.ThresholdValue &&
                 sourceBlue > blueFilter - filterData.ThresholdValue) &&
                (sourceGreen < greenFilter + filterData.ThresholdValue &&
                 sourceGreen > greenFilter - filterData.ThresholdValue) &&
                (sourceRed < redFilter + filterData.ThresholdValue &&
                 sourceRed > redFilter - filterData.ThresholdValue))
            {
                resultBlue = blueFilter - sourceBlue + newBlueValue;

                if (resultBlue > maxValue) { resultBlue = maxValue; }
                else if (resultBlue < minValue) { resultBlue = minValue; }

                resultGreen = greenFilter - sourceGreen + newGreenValue;

                if (resultGreen > maxValue) { resultGreen = maxValue; }
                else if (resultGreen < minValue) { resultGreen = minValue; }

                resultRed = redFilter - sourceRed + newRedValue;

                if (resultRed > maxValue) { resultRed = maxValue; }
                else if (resultRed < minValue) { resultRed = minValue; }

                resultBuffer[k] = (byte)resultBlue;
                resultBuffer[k + 1] = (byte)resultGreen;
                resultBuffer[k + 2] = (byte)resultRed;
                resultBuffer[k + 3] = sourceAlpha;
            }
        }
    }

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

The ColorSubstitution method can be labelled as immutable due to its implementation. Being immutable implies that the source/input data will not be modified; instead a new Bitmap instance will be created, reflecting the source data as modified by the operations performed in the particular method.

The first statement defined in the ColorSubstitution method body instantiates a new Bitmap, matching the size dimensions of the source Bitmap object. Next the method invokes the LockBits method on the source and result Bitmap instances. When invoking LockBits the underlying data representing a Bitmap will be locked in memory. Being locked in memory can also be described as signalling the Garbage Collector not to move the locked data around in memory. Invoking UnlockBits results in the Garbage Collector functioning as per normal, moving data in memory and updating the relevant memory references when required.

The source code continues by copying all the bytes representing the source Bitmap to an array of bytes that represents the resulting Bitmap. At this stage the source and result Bitmaps are exactly identical and as yet unmodified. In order to determine which pixels should be modified based on colour, the source code iterates through the byte array associated with the result Bitmap.

Notice how the for loop increments by 4 with each iteration. The underlying data represents a 32 bits per pixel ARGB Bitmap, which equates to 8 bits/1 byte for each individual colour component: Alpha, Red, Green or Blue. In memory the components of each pixel are stored in the order Blue, Green, Red, Alpha. Defining the for loop to increment by 4 results in each iteration advancing 4 bytes, or 32 bits, in essence 1 pixel.

Within the for loop we determine whether the colour expressed by the current pixel, adjusted by the threshold value, forms part of the colour range that should be updated. As an example, with a threshold value of 10 and a source colour red component of 100, only pixels with red components between 91 and 109 can qualify, provided the green and blue components also fall within their equivalent ranges. It is important to remember that an individual colour component is a byte value and can only be set to a value between 0 and 255 inclusive.

The Implementation

The ColorSubstitution method is implemented by the sample source code through a Windows Forms application. The ColorSubstitution method requires that the source Bitmap specified must be formatted as a 32 Bpp ARGB Bitmap. When the user loads a source image from the file system the sample application attempts to convert the selected file by invoking the Format32bppArgbCopy extension method, which targets the Bitmap class. The definition is as follows:

public static Bitmap Format32bppArgbCopy(this Bitmap sourceBitmap)
{
    Bitmap copyBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height, PixelFormat.Format32bppArgb);

    using (Graphics graphicsObject = Graphics.FromImage(copyBitmap))
    {
        graphicsObject.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
        graphicsObject.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        graphicsObject.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
        graphicsObject.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;

        graphicsObject.DrawImage(sourceBitmap, new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height), GraphicsUnit.Pixel);
    }

    return copyBitmap;
}
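Putting the pieces together, a minimal usage sketch might look as follows, using a ColorSubstitutionFilter instance such as the one set up earlier. This helper is not part of the sample project; the file paths are hypothetical placeholders, and it simply mirrors the load, convert, substitute and save sequence described above.

// Hypothetical end-to-end usage; "flowers.jpg" and "flowers_blue.png" are
// placeholder paths. Requires System.Drawing and System.Drawing.Imaging.
public static void SubstituteColourExample(ColorSubstitutionFilter filterData)
{
    using (Bitmap loadedBitmap = new Bitmap("flowers.jpg"))
    using (Bitmap sourceBitmap = loadedBitmap.Format32bppArgbCopy())
    using (Bitmap resultBitmap = sourceBitmap.ColorSubstitution(filterData))
    {
        resultBitmap.Save("flowers_blue.png", ImageFormat.Png);
    }
}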

Colour Substitution Examples

The following section illustrates a few examples of colour substitution result images. The source image features Bellis perennis, also known as the common European daisy (see Wikipedia). The image file is licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license. The original image can be downloaded here. The following image is a scaled down version of the original:

Bellis_perennis_white_(aka)_scaled

Light Blue Colour Substitution

Colour Component   Source Colour   Substitute Colour
Red                255             121
Green              223             188
Blue               224             255

Daisy_light_blue

Medium Blue Colour Substitution

Colour Component   Source Colour   Substitute Colour
Red                255             34
Green              223             34
Blue               224             255

Daisy_medium_blue

Medium Green Colour Substitution

Colour Component   Source Colour   Substitute Colour
Red                255             0
Green              223             128
Blue               224             0

Daisy_medium_green

Purple Colour Substitution

Colour Component   Source Colour   Substitute Colour
Red                255             128
Green              223             0
Blue               224             255

Daisy_purple

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; you can find links to them here:

