## Posts Tagged 'Difference of Gaussians'

### Article Purpose

It is the purpose of this article to illustrate the concept of Difference of Gaussians edge detection. This article extends the conventional implementation of the Difference of Gaussians algorithm through the application of equally sized convolution kernels, differing only by a weight factor.

Frog: Kernel 5×5, Weight1 0.1, Weight2 2.1

### Using the Sample Application

This article relies on a sample application included as part of the accompanying sample source code. The sample application serves as a practical implementation of the concepts explored throughout this article.

The sample application user interface enables the user to configure and control the implementation of a filter. The configuration options exposed through the sample application’s user interface can be detailed as follows:

• Load/Save Images – When executing the sample application users are able to load source/input images from the local file system by clicking the Load Image button. If desired, the sample application enables users to save resulting images to the local file system by clicking the Save Image button.
• Kernel Size – This option relates to the size of the convolution kernel that is to be implemented when performing edge detection through image convolution. Smaller kernels are faster to compute and generally result in edges detected in the source/input image being expressed through thinner gradient edges. Larger kernels become computationally expensive as sizes increase. In addition, the edges detected in a source/input image will generally be expressed as thicker gradient edges in the resulting image.
• Weight Values – The sample application calculates Gaussian convolution kernels and in doing so implements a weight factor. A weight factor determines the blur intensity observed in a result image after having applied Gaussian blur. Higher weight factors result in a more intense level of blurring being applied. As expected, lower weight factor values result in a less intense level of blurring being applied. If the value of the first weight factor exceeds the value of the second weight factor, resulting images will be generated with a black background and edges indicated in white. In a similar fashion, when the second weight factor value exceeds that of the first weight factor, resulting images will be generated with a white background and edges indicated in black. The greater the difference between the first and second weight factor values, the greater the degree of image noise removal. When weight factor values differ only slightly, resulting images may be prone to image noise.

The following image is a screenshot of the Weighted Difference of Gaussians sample application in action:

Frog: Kernel 5×5, Weight1 1.8, Weight2 0.1

### Gaussian Blur

The Gaussian blur algorithm can be described as one of the most popular and widely implemented methods of image blurring. From Wikipedia we gain the following excerpt:

A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function. It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales.

Mathematically, applying a Gaussian blur to an image is the same as convolving the image with a Gaussian function. This is also known as a two-dimensional Weierstrass transform.

Take Note: The Gaussian blur algorithm has the attribute of smoothing image detail/definition whilst also having an edge preservation attribute. When applying a Gaussian blur to an image, a level of detail/definition will be blurred/smoothed away in a fashion that excludes/preserves edges.
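As a minimal standalone illustration (not part of the sample application), the following sketch applies the well-known 3×3 Gaussian kernel, normalised by 1/16, to a plain grayscale intensity array; border pixels are clamped to the nearest edge:

```csharp
using System;

class GaussianBlurSketch
{
    // Blurs a grayscale image (values 0..255) with the 3x3 Gaussian
    // kernel { {1,2,1}, {2,4,2}, {1,2,1} }, normalised by 1/16.
    public static double[,] Blur3x3(double[,] image)
    {
        int[,] kernel = { { 1, 2, 1 }, { 2, 4, 2 }, { 1, 2, 1 } };
        int h = image.GetLength(0), w = image.GetLength(1);
        double[,] result = new double[h, w];

        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                double sum = 0;
                for (int ky = -1; ky <= 1; ky++)
                {
                    for (int kx = -1; kx <= 1; kx++)
                    {
                        // Clamp samples that fall outside the image.
                        int sy = Math.Min(h - 1, Math.Max(0, y + ky));
                        int sx = Math.Min(w - 1, Math.Max(0, x + kx));
                        sum += image[sy, sx] * kernel[ky + 1, kx + 1];
                    }
                }
                result[y, x] = sum / 16.0;  // kernel weights total 16
            }
        }
        return result;
    }

    static void Main()
    {
        // Blurring a constant image leaves it unchanged, since the
        // normalised kernel weights sum to 1.
        double[,] flat = { { 50, 50, 50 }, { 50, 50, 50 }, { 50, 50, 50 } };
        Console.WriteLine(Blur3x3(flat)[1, 1]);  // 50
    }
}
```

A single bright pixel, by contrast, is spread across its neighbours, which is the smoothing behaviour described above.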

Frog: Kernel 5×5, Weight1 2.7, Weight2 0.1

### Difference of Gaussians Edge Detection

Difference of Gaussians edge detection refers to a specific method of image edge detection. Difference of Gaussians, commonly abbreviated as DoG, functions through the implementation of Gaussian blurring.

A clear and concise description can be found on the Wikipedia Article Page:

In imaging science, difference of Gaussians is a feature enhancement algorithm that involves the subtraction of one blurred version of an original image from another, less blurred version of the original. In the simple case of grayscale images, the blurred images are obtained by convolving the original grayscale images with Gaussian kernels having differing standard deviations. Blurring an image using a Gaussian kernel suppresses only high-frequency spatial information. Subtracting one image from the other preserves spatial information that lies between the range of frequencies that are preserved in the two blurred images. Thus, the difference of Gaussians is a band-pass filter that discards all but a handful of spatial frequencies that are present in the original grayscale image.

In a conventional sense, Difference of Gaussians involves applying Gaussian blur to two images created as copies of the original source/input image. There must be a difference in the size of the convolution kernels implemented when applying the blur. A typical example would be applying a 3×3 kernel to one copy whilst applying a 5×5 kernel to the other copy. The final step requires creating a result image populated by subtracting the two blurred copies. The result obtained from subtraction represents the edges forming part of the source/input image.

This article extends beyond the conventional method of implementing Difference of Gaussians. The implementation illustrated in this article retains the core concept of subtracting image values which have been blurred to different intensities. The implementation method explored here differs from the conventional method in the sense that the implemented convolution kernels do not differ in size. Both kernels are in fact required to have the same size dimensions.

The implemented kernels are equal in terms of their size dimensions, although their values differ. Expressed from another angle: equally sized kernels of which one represents a more intense level of blurring than the other. A kernel's resulting blur intensity is determined by the weight factor implemented when calculating the kernel values.

Frog: Kernel 5×5, Weight1 3.7, Weight2 0.2

The advantages of implementing equally sized kernels can be described as follows:

Single Convolution implementation: Image convolution involves executing several nested code loops. Application performance can be severely negatively impacted when executing large nested loops. The conventional method of implementing Difference of Gaussians generally involves performing convolution twice, once per blurred image copy. The method implemented in this article executes the convolution code loops only once. Considering the kernels are equal in size, both can be iterated within the same set of loops.

Eliminating Image subtraction: In conventional implementations, images expressing differing intensity levels of blur have to be subtracted. The implementation method described in this article eliminates the need to perform image subtraction as a separate pass. When applying convolution using both kernels simultaneously, the two results obtained, one from each kernel, can be subtracted and assigned directly to the result buffer. In addition, calculating both results at the same time removes the need to create two temporary blurred copies of the source image.
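Because convolution is a linear operation, accumulating both kernels inside one set of loops and subtracting the two accumulators yields exactly the same value as performing two separate convolutions and then subtracting the results. The following standalone sketch (with arbitrary stand-in kernel and pixel values) demonstrates the equivalence at a single pixel position:

```csharp
using System;

class SingleConvolutionSketch
{
    // Convolves a 3x3 patch with a 3x3 kernel, producing the value at
    // the patch's centre pixel.
    public static double Apply(double[,] patch, double[,] kernel)
    {
        double sum = 0;
        for (int y = 0; y < 3; y++)
            for (int x = 0; x < 3; x++)
                sum += patch[y, x] * kernel[y, x];
        return sum;
    }

    static void Main()
    {
        double[,] patch = { { 10, 20, 30 }, { 40, 50, 60 }, { 70, 80, 90 } };

        // Two equally sized kernels differing only in their values,
        // as in the article's approach (values here are stand-ins).
        double[,] k1 = { { 0, 0.1, 0 }, { 0.1, 0.6, 0.1 }, { 0, 0.1, 0 } };
        double[,] k2 = { { 0.1, 0.1, 0.1 }, { 0.1, 0.2, 0.1 }, { 0.1, 0.1, 0.1 } };

        // Conventional route: two convolutions, then a subtraction.
        double twoPasses = Apply(patch, k1) - Apply(patch, k2);

        // Single-pass route: one set of loops, two accumulators.
        double c1 = 0, c2 = 0;
        for (int y = 0; y < 3; y++)
            for (int x = 0; x < 3; x++)
            {
                c1 += patch[y, x] * k1[y, x];
                c2 += patch[y, x] * k2[y, x];
            }
        double onePass = c1 - c2;

        Console.WriteLine(Math.Abs(twoPasses - onePass) < 1e-12);  // True
    }
}
```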

Frog: Kernel 5×5, Weight1 2.4, Weight2 0.3

### Difference of Gaussians Edge Detection Required Steps

When implementing a Difference of Gaussians filter several steps are required; those steps are detailed as follows:

1. Calculate Kernels – Before implementing convolution, two kernels have to be calculated. The calculated kernels are required to be of equal size and to differ in blur intensity. The sample application allows the user to configure intensity through updating the weight values, expressed as Weight 1 and Weight 2.
2. Convert Source Image to Grayscale – Applying convolution on grayscale images outperforms convolution on RGB images. When converting an RGB pixel to a grayscale pixel, colour components are combined to form a single gray level intensity. In other words, a grayscale buffer consists of a third of the number of values when compared to the RGB image from which it had been rendered. In the case of an ARGB image, the derived grayscale buffer will consist of 25% of the number of values forming part of the source ARGB image. When applying convolution, the number of processor cycles required increases as the value count increases.
3. Perform Convolution Implementing Thresholds – Using the newly created grayscale buffer, perform convolution with both kernels calculated in the first step. The result value equates to subtracting the two results obtained from convolution. If the result value exceeds the difference between the first and second weight values, the resulting pixel should be set to white; if not, set the result pixel to black.
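Step 2 can be sketched in isolation as follows, using the same channel weights the sample code applies (Blue 0.11, Green 0.59, Red 0.3) on a 4-bytes-per-pixel BGRA buffer; the standalone helper name and buffer layout are assumptions for illustration:

```csharp
using System;

class GrayscaleSketch
{
    // Collapses a 32-bit BGRA pixel buffer (4 bytes per pixel) into a
    // single-byte-per-pixel grayscale buffer, 25% of the original size.
    public static byte[] ToGrayscale(byte[] bgra)
    {
        byte[] gray = new byte[bgra.Length / 4];
        for (int src = 0, dst = 0; src < bgra.Length; src += 4, dst++)
        {
            double intensity = bgra[src] * 0.11 +      // Blue
                               bgra[src + 1] * 0.59 +  // Green
                               bgra[src + 2] * 0.3;    // Red
            gray[dst] = (byte)intensity;               // alpha is discarded
        }
        return gray;
    }

    static void Main()
    {
        // One pixel: B=100, G=150, R=200, A=255
        byte[] pixel = { 100, 150, 200, 255 };
        Console.WriteLine(ToGrayscale(pixel)[0]);  // 159
    }
}
```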

Frog: Kernel 5×5, Weight1 2.1, Weight2 0.5

### Calculating Gaussian Convolution Kernels

The sample application implements Gaussian kernel calculations. The kernels implemented in convolution are calculated at runtime, as opposed to being hard coded. Being able to dynamically construct convolution kernels has the advantage of providing a greater degree of runtime control over how the filter is applied.

Several steps are involved in calculating Gaussian kernels. The first required step is to determine the kernel size and weight. The size and weight factor of a kernel comprise the two configurable values implemented when calculating kernel elements. In the case of this article and the sample application those values are configured by the user through the sample application's user interface.

The formula implemented in calculating Gaussian kernels can be expressed as follows:

G(x, y) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²))

The formula contains a number of symbols, which define how the filter will be implemented. The symbols forming part of the formula are described in the following list:

• G(x, y) – A value calculated using the Gaussian kernel formula. This value forms part of a kernel, representing a single element.
• π – Pi, one of the better known members of the Greek alphabet. The mathematical constant defined as the ratio of a circle's circumference to its diameter, approximately 3.14159.
• σ – The lower case version of the Greek letter Sigma. This symbol represents the weight (standard deviation) value, as specified by the user.
• e – The formula references a lower case e symbol. The symbol represents Euler's number, a mathematical constant approximately equal to 2.71828182846.
• x, y – The variables referenced as x and y relate to element coordinates within a kernel. y represents the vertical offset or row and x represents the horizontal offset or column.

Note: The formula's implementation expects x and y to equal zero when representing the coordinates of the element located in the middle of the kernel.
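Plugging values into the formula: at the kernel centre, where x = y = 0, the exponential term equals 1, so G(0, 0) reduces to 1 / (2πσ²). A quick standalone numeric check, using σ = 1:

```csharp
using System;

class GaussianFormulaCheck
{
    // G(x, y) = (1 / (2 * PI * sigma^2)) * e^(-(x^2 + y^2) / (2 * sigma^2))
    public static double G(int x, int y, double sigma)
    {
        return Math.Exp(-(x * x + y * y) / (2 * sigma * sigma))
               / (2 * Math.PI * sigma * sigma);
    }

    static void Main()
    {
        // At the centre the exponent is zero, so G(0,0) = 1 / (2 * PI)
        // when sigma = 1.
        Console.WriteLine(Math.Round(G(0, 0, 1.0), 5));   // 0.15915

        // Values fall off symmetrically with distance from the centre.
        Console.WriteLine(G(1, 0, 1.0) == G(0, 1, 1.0));  // True
    }
}
```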

Frog: Kernel 7×7, Weight1 0.1, Weight2 2.0

### Implementing Gaussian Kernel Calculations

The sample application defines the GaussianCalculator.Calculate method. This method accepts two parameters, kernel size and kernel weight. The following code snippet details the implementation:

```public static double[,] Calculate(int length, double weight)
{
    double[,] kernel = new double[length, length];
    double sumTotal = 0;

    int kernelRadius = length / 2;
    double distance = 0;

    double calculatedEuler = 1.0 /
        (2.0 * Math.PI * Math.Pow(weight, 2));

    for (int filterY = -kernelRadius;
         filterY <= kernelRadius; filterY++)
    {
        for (int filterX = -kernelRadius;
             filterX <= kernelRadius; filterX++)
        {
            distance = ((filterX * filterX) +
                        (filterY * filterY)) /
                        (2 * (weight * weight));

            kernel[filterY + kernelRadius,
                   filterX + kernelRadius] =
                calculatedEuler * Math.Exp(-distance);

            sumTotal += kernel[filterY + kernelRadius,
                               filterX + kernelRadius];
        }
    }

    for (int y = 0; y < length; y++)
    {
        for (int x = 0; x < length; x++)
        {
            kernel[y, x] = kernel[y, x] *
                (1.0 / sumTotal);
        }
    }

    return kernel;
}```
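If the kernel calculation is wired up correctly, every kernel it produces should sum to 1 after normalisation and peak at its centre element. The following standalone sketch, a compact restatement of the same calculation so that it can run on its own, checks both properties:

```csharp
using System;

class GaussianCalculatorCheck
{
    // Compact restatement of the Gaussian kernel calculation described
    // in the article, for a self-contained sanity check.
    public static double[,] Calculate(int length, double weight)
    {
        double[,] kernel = new double[length, length];
        int radius = length / 2;
        double sum = 0;

        for (int y = -radius; y <= radius; y++)
            for (int x = -radius; x <= radius; x++)
            {
                double value = Math.Exp(-((x * x) + (y * y)) /
                                        (2 * weight * weight))
                               / (2.0 * Math.PI * weight * weight);
                kernel[y + radius, x + radius] = value;
                sum += value;
            }

        // Normalise so the kernel elements total 1.
        for (int y = 0; y < length; y++)
            for (int x = 0; x < length; x++)
                kernel[y, x] /= sum;

        return kernel;
    }

    static void Main()
    {
        double[,] kernel = Calculate(5, 1.4);
        double total = 0;
        foreach (double v in kernel) total += v;

        Console.WriteLine(Math.Abs(total - 1.0) < 1e-9);  // True: normalised
        Console.WriteLine(kernel[2, 2] > kernel[0, 0]);   // True: centre peaks
    }
}
```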

Frog: Kernel 3×3, Weight1 0.1, Weight2 1.8

### Implementing Difference of Gaussians Edge Detection

The sample source code defines the DifferenceOfGaussianFilter method. This method has been defined as an extension method targeting the Bitmap class. The following code snippet provides the implementation:

```public static Bitmap DifferenceOfGaussianFilter(this Bitmap sourceBitmap,
int matrixSize, double weight1,
double weight2)
{
double[,] kernel1 =
GaussianCalculator.Calculate(matrixSize,
(weight1 > weight2 ? weight1 : weight2));

double[,] kernel2 =
GaussianCalculator.Calculate(matrixSize,
(weight1 > weight2 ? weight2 : weight1));

BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] grayscaleBuffer = new byte[sourceData.Width * sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);

double rgb = 0;

for (int source = 0, dst = 0;
source < pixelBuffer.Length && dst < grayscaleBuffer.Length;
source += 4, dst++)
{
rgb = pixelBuffer[source] * 0.11f;
rgb += pixelBuffer[source + 1] * 0.59f;
rgb += pixelBuffer[source + 2] * 0.3f;

grayscaleBuffer[dst] = (byte)rgb;
}

double color1 = 0.0;
double color2 = 0.0;

int filterOffset = (matrixSize - 1) / 2;
int calcOffset = 0;

for (int source = 0, dst = 0;
source < grayscaleBuffer.Length && dst + 4 < resultBuffer.Length;
source++, dst += 4)
{
color1 = 0;
color2 = 0;

for (int filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{
calcOffset = source + (filterX) +
(filterY * sourceBitmap.Width);

calcOffset = (calcOffset < 0 ? 0 :
(calcOffset >= grayscaleBuffer.Length ?
grayscaleBuffer.Length - 1 : calcOffset));

color1 += (grayscaleBuffer[calcOffset]) *
kernel1[filterY + filterOffset,
filterX + filterOffset];

color2 += (grayscaleBuffer[calcOffset]) *
kernel2[filterY + filterOffset,
filterX + filterOffset];
}
}

color1 = color1 - color2;
color1 = (color1 >= weight1 - weight2 ? 255 : 0);

resultBuffer[dst] = (byte)color1;
resultBuffer[dst + 1] = (byte)color1;
resultBuffer[dst + 2] = (byte)color1;
resultBuffer[dst + 3] = 255;
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;
}```
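The same approach can be exercised without GDI+ by operating on a plain grayscale array. The sketch below is a standalone illustration rather than the sample application's code: both kernels are accumulated inside one set of loops, the accumulators are subtracted, and the difference is thresholded against the weight difference, mirroring the method above. The image and kernel values are hand-picked stand-ins (a normalised 3×3 Gaussian versus a near-identity kernel):

```csharp
using System;

class DoGArraySketch
{
    // Applies the article's single-pass Difference of Gaussians to a
    // grayscale array: one convolution loop, two accumulators, then a
    // threshold of (weight1 - weight2). Out-of-range samples are clamped.
    public static byte[,] Detect(byte[,] gray, double[,] k1, double[,] k2,
                                 double weight1, double weight2)
    {
        int h = gray.GetLength(0), w = gray.GetLength(1);
        int radius = k1.GetLength(0) / 2;
        byte[,] result = new byte[h, w];

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                double c1 = 0, c2 = 0;
                for (int ky = -radius; ky <= radius; ky++)
                    for (int kx = -radius; kx <= radius; kx++)
                    {
                        int sy = Math.Min(h - 1, Math.Max(0, y + ky));
                        int sx = Math.Min(w - 1, Math.Max(0, x + kx));
                        c1 += gray[sy, sx] * k1[ky + radius, kx + radius];
                        c2 += gray[sy, sx] * k2[ky + radius, kx + radius];
                    }
                result[y, x] = (byte)(c1 - c2 >= weight1 - weight2 ? 255 : 0);
            }
        return result;
    }

    static void Main()
    {
        // A hard vertical edge: left half dark, right half bright.
        byte[,] image = new byte[6, 6];
        for (int y = 0; y < 6; y++)
            for (int x = 3; x < 6; x++)
                image[y, x] = 255;

        // k1: normalised 3x3 Gaussian (heavier blur);
        // k2: near-identity kernel (almost no blur).
        double[,] k1 = { { 1 / 16.0, 2 / 16.0, 1 / 16.0 },
                         { 2 / 16.0, 4 / 16.0, 2 / 16.0 },
                         { 1 / 16.0, 2 / 16.0, 1 / 16.0 } };
        double[,] k2 = { { 0, 0, 0 }, { 0, 1, 0 }, { 0, 0, 0 } };

        byte[,] edges = Detect(image, k1, k2, 1.8, 0.1);

        // Dark-side pixels next to the edge pick up brightness from k1
        // but not from k2, so the difference exceeds the threshold there.
        Console.WriteLine(edges[2, 2]);  // 255
        Console.WriteLine(edges[2, 0]);  // 0
    }
}
```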

Frog: Kernel 3×3, Weight1 2.1, Weight2 0.7

### Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction. The following image files feature as sample images:

Panamanian Golden Frog, Dendropsophus Microcephalus, Tyler’s Tree Frog, Mimic Poison Frog, Phyllobates Terribilis

### Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

### Article purpose

In this article we explore the concept of Difference of Gaussians edge detection, which implements Gaussian blurring as a means of achieving edge detection. All of the concepts explored are implemented by accessing and manipulating the raw pixel data exposed by a Bitmap; no GDI+ or conventional drawing code is required.

### Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download.

### Using the Sample Application

The concepts explored in this article can be easily replicated by making use of the Sample Application, which forms part of the associated sample source code accompanying this article.

When using the Difference Of Gaussians sample application you can specify an input/source image by clicking the Load Image button. The dropdown towards the bottom middle part of the screen relates to the various methods discussed.

If desired a user can save the resulting image to the local file system by clicking the Save Image button.

The following image is a screenshot of the Difference Of Gaussians sample application in action:

### What is Difference of Gaussians?

Difference of Gaussians, commonly abbreviated as DoG, is a method of implementing edge detection. Central to the Difference of Gaussians method is the application of Gaussian blurring.

From Wikipedia we gain the following description:

In imaging science, Difference of Gaussians is a feature enhancement algorithm that involves the subtraction of one blurred version of an original image from another, less blurred version of the original. In the simple case of grayscale images, the blurred images are obtained by convolving the original grayscale images with Gaussian kernels having differing standard deviations. Blurring an image using a Gaussian kernel suppresses only high-frequency spatial information. Subtracting one image from the other preserves spatial information that lies between the range of frequencies that are preserved in the two blurred images. Thus, the difference of Gaussians is a band-pass filter that discards all but a handful of spatial frequencies that are present in the original grayscale image.

In simple terms, Difference of Gaussians can be implemented by applying two Gaussian blurs of different intensity levels to the same source image. The resulting edge image is then created by subtracting the two blurred images of different intensities.

### Applying a Convolution Matrix filter

In the sample source code accompanying this article Gaussian blur is applied by invoking the ConvolutionFilter method. This method accepts a two dimensional array of type double representing the convolution kernel/matrix. This method is also capable of first converting the source image to grayscale, which can be specified as a method parameter. Resulting images sometimes tend to be very dark, which can be corrected by specifying a suitable bias value.

The following code snippet provides the implementation of the ConvolutionFilter method:

```private static Bitmap ConvolutionFilter(Bitmap sourceBitmap,
double[,] filterMatrix,
double factor = 1,
int bias = 0,
bool grayscale = false)
{
BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0,
sourceBitmap.Width, sourceBitmap.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);

if (grayscale == true)
{
float rgb = 0;

for (int k = 0; k < pixelBuffer.Length; k += 4)
{
rgb = pixelBuffer[k] * 0.11f;
rgb += pixelBuffer[k + 1] * 0.59f;
rgb += pixelBuffer[k + 2] * 0.3f;

pixelBuffer[k] = (byte)rgb;
pixelBuffer[k + 1] = pixelBuffer[k];
pixelBuffer[k + 2] = pixelBuffer[k];
pixelBuffer[k + 3] = 255;
}
}

double blue = 0.0;
double green = 0.0;
double red = 0.0;

int filterWidth = filterMatrix.GetLength(1);
int filterHeight = filterMatrix.GetLength(0);

int filterOffset = (filterWidth-1) / 2;
int calcOffset = 0;

int byteOffset = 0;

for (int offsetY = filterOffset; offsetY <
sourceBitmap.Height - filterOffset; offsetY++)
{
for (int  offsetX = filterOffset; offsetX <
sourceBitmap.Width - filterOffset; offsetX++)
{
blue = 0;
green = 0;
red = 0;

byteOffset = offsetY *
sourceData.Stride +
offsetX * 4;

for (int  filterY = -filterOffset;
filterY <= filterOffset; filterY++)
{
for (int  filterX = -filterOffset;
filterX <= filterOffset; filterX++)
{

calcOffset = byteOffset +
(filterX * 4) +
(filterY * sourceData.Stride);

blue += (double)(pixelBuffer[calcOffset]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];

green += (double)(pixelBuffer[calcOffset + 1]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];

red += (double)(pixelBuffer[calcOffset + 2]) *
filterMatrix[filterY + filterOffset,
filterX + filterOffset];
}
}

blue = factor * blue + bias;
green = factor * green + bias;
red = factor * red + bias;

if (blue > 255)
{ blue = 255; }
else if (blue < 0)
{ blue = 0; }

if (green > 255)
{ green = 255; }
else if (green < 0)
{ green = 0; }

if (red > 255)
{ red = 255; }
else if (red < 0)
{ red = 0; }

resultBuffer[byteOffset] = (byte)blue;
resultBuffer[byteOffset + 1] = (byte)green;
resultBuffer[byteOffset + 2] = (byte)red;
resultBuffer[byteOffset + 3] = 255;
}
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
sourceBitmap.Height);

BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0,
resultBitmap.Width, resultBitmap.Height),
ImageLockMode.WriteOnly,
PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap;
}```

### The Gaussian Matrix

The sample source code defines three Gaussian kernel/matrix definitions: a 3×3 and two slightly different 5×5 matrices. The Gaussian3x3 requires a factor of 1 / 16, the Gaussian5x5Type1 a factor of 1 / 159, and the factor required by the Gaussian5x5Type2 equates to 1 / 256.

```public static class Matrix
{
public static double[,] Gaussian3x3
{
get
{
return new double[,]
{ { 1, 2, 1, },
{ 2, 4, 2, },
{ 1, 2, 1, }, };
}
}

public static double[,] Gaussian5x5Type1
{
get
{
return new double[,]
{ { 2, 04, 05, 04, 2  },
{ 4, 09, 12, 09, 4  },
{ 5, 12, 15, 12, 5  },
{ 4, 09, 12, 09, 4  },
{ 2, 04, 05, 04, 2  }, };
}
}

public static double[,] Gaussian5x5Type2
{
get
{
return new double[,]
{ {  1,   4,  6,  4,  1  },
{  4,  16, 24, 16,  4  },
{  6,  24, 36, 24,  6  },
{  4,  16, 24, 16,  4  },
{  1,   4,  6,  4,  1  }, };
}
}
}```
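As a quick consistency check, each stated factor is the reciprocal of the corresponding matrix's element sum, which is what makes the blurred result keep the source image's overall brightness. A standalone sketch verifying those sums:

```csharp
using System;

class MatrixFactorCheck
{
    // Sums every element of a two-dimensional matrix.
    public static double Sum(double[,] m)
    {
        double total = 0;
        foreach (double v in m) total += v;
        return total;
    }

    static void Main()
    {
        double[,] gaussian3x3 = { { 1, 2, 1 }, { 2, 4, 2 }, { 1, 2, 1 } };
        double[,] gaussian5x5Type1 = { { 2,  4,  5,  4, 2 },
                                       { 4,  9, 12,  9, 4 },
                                       { 5, 12, 15, 12, 5 },
                                       { 4,  9, 12,  9, 4 },
                                       { 2,  4,  5,  4, 2 } };
        double[,] gaussian5x5Type2 = { { 1,  4,  6,  4, 1 },
                                       { 4, 16, 24, 16, 4 },
                                       { 6, 24, 36, 24, 6 },
                                       { 4, 16, 24, 16, 4 },
                                       { 1,  4,  6,  4, 1 } };

        // Each sum matches the denominator of the stated factor.
        Console.WriteLine(Sum(gaussian3x3));      // 16
        Console.WriteLine(Sum(gaussian5x5Type1)); // 159
        Console.WriteLine(Sum(gaussian5x5Type2)); // 256
    }
}
```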

### Subtracting Images

When implementing the Difference of Gaussians method, after having applied two varying levels of Gaussian blur the resulting images need to be subtracted. The sample source code associated with this article implements the SubtractImage extension method when subtracting images.

The following code snippet details the implementation of the SubtractImage method:

```private static void SubtractImage(this Bitmap subtractFrom,
Bitmap subtractValue,
bool invert = false,
int bias = 0)
{
BitmapData sourceData =
subtractFrom.LockBits(new Rectangle(0, 0,
subtractFrom.Width, subtractFrom.Height),
ImageLockMode.ReadWrite,
PixelFormat.Format32bppArgb);

byte[] resultBuffer = new byte[sourceData.Stride *
sourceData.Height];

Marshal.Copy(sourceData.Scan0, resultBuffer, 0,
resultBuffer.Length);

BitmapData subtractData =
subtractValue.LockBits(new Rectangle(0, 0,
subtractValue.Width, subtractValue.Height),
ImageLockMode.ReadOnly,
PixelFormat.Format32bppArgb);

byte[] subtractBuffer = new byte[subtractData.Stride *
subtractData.Height];

Marshal.Copy(subtractData.Scan0, subtractBuffer, 0,
subtractBuffer.Length);

subtractValue.UnlockBits(subtractData);

int blue = 0;
int green = 0;
int red = 0;

for (int k = 0; k < resultBuffer.Length &&
k < subtractBuffer.Length; k += 4)
{
if (invert)
{
blue = 255 - (resultBuffer[k] -
subtractBuffer[k]) + bias;

green = 255 - (resultBuffer[k + 1] -
subtractBuffer[k + 1]) + bias;

red = 255 - (resultBuffer[k + 2] -
subtractBuffer[k + 2]) + bias;
}
else
{
blue = resultBuffer[k] -
subtractBuffer[k] + bias;

green = resultBuffer[k + 1] -
subtractBuffer[k + 1] + bias;

red = resultBuffer[k + 2] -
subtractBuffer[k + 2] + bias;
}

blue = (blue < 0 ? 0 : (blue > 255 ? 255 : blue));
green = (green < 0 ? 0 : (green > 255 ? 255 : green));
red = (red < 0 ? 0 : (red > 255 ? 255 : red));

resultBuffer[k] = (byte)blue;
resultBuffer[k + 1] = (byte)green;
resultBuffer[k + 2] = (byte)red;
resultBuffer[k + 3] = 255;
}

Marshal.Copy(resultBuffer, 0, sourceData.Scan0,
resultBuffer.Length);

subtractFrom.UnlockBits(sourceData);
}```
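The per-channel arithmetic above can be isolated into a small helper for illustration (a standalone sketch; the Subtract name and defaults are assumptions, not part of the sample code): each channel is subtracted, the bias added, and the result clamped to the 0–255 byte range:

```csharp
using System;

class SubtractChannelSketch
{
    // Mirrors the per-channel step of SubtractImage: subtract, add the
    // bias, then clamp to the valid byte range before casting.
    public static byte Subtract(byte from, byte value, int bias = 0)
    {
        int result = from - value + bias;
        return (byte)(result < 0 ? 0 : (result > 255 ? 255 : result));
    }

    static void Main()
    {
        Console.WriteLine(Subtract(200, 50));       // 150
        Console.WriteLine(Subtract(100, 150));      // 0 (clamped)
        Console.WriteLine(Subtract(100, 150, 128)); // 78 (bias lifts it)
    }
}
```

The bias parameter is what brightens the otherwise very dark subtraction results, as discussed above.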

### Difference of Gaussians Extension methods

The sample source code implements Difference of Gaussians by means of two extension methods: DifferenceOfGaussians3x5Type1 and DifferenceOfGaussians3x5Type2. Both methods are virtually identical, the only difference being the 5×5 matrix being implemented.

Both methods create two new images, each having a Gaussian blur of a different level of intensity applied. The two new images are subtracted in order to create a single resulting image.

The following source code snippet provides the implementation of the DifferenceOfGaussians3x5Type1 and DifferenceOfGaussians3x5Type2 extension methods:

```public static Bitmap DifferenceOfGaussians3x5Type1(
this Bitmap sourceBitmap,
bool grayscale = false,
bool invert = false,
int bias = 0)
{
Bitmap bitmap3x3 = ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian3x3, 1.0 / 16.0,
0, grayscale);

Bitmap bitmap5x5 = ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian5x5Type1, 1.0 / 159.0,
0, grayscale);

bitmap3x3.SubtractImage(bitmap5x5, invert, bias);

return bitmap3x3;
}

public static Bitmap DifferenceOfGaussians3x5Type2(
this Bitmap sourceBitmap,
bool grayscale = false,
bool invert = false,
int bias = 0)
{
Bitmap bitmap3x3 = ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian3x3, 1.0 / 16.0,
0, true);

Bitmap bitmap5x5 = ExtBitmap.ConvolutionFilter(sourceBitmap,
Matrix.Gaussian5x5Type2, 1.0 / 256.0,
0, true);

bitmap3x3.SubtractImage(bitmap5x5, invert, bias);

return bitmap3x3;
}```

### Sample Images

The Original Image, Difference Of Gaussians 3×5 Type1, Difference Of Gaussians 3×5 Type2, Difference Of Gaussians 3×5 Type1 Bias 128, Difference Of Gaussians 3×5 Type 2 Bias 96
