Archive for the 'Learn Everyday' Category



C# How to: Image Transform Shear

Article Purpose

This article is focussed on illustrating the steps required in performing an image shear transformation. All of the concepts explored have been implemented by means of raw pixel data processing; no conventional drawing methods, such as GDI, are required.

Rabbit: Shear X 0.4, Y 0.4

Rabbit Shear X 0.4, Y 0.4

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

This article features a Windows Forms based sample application which is included as part of the accompanying sample source code. The concepts explored in this article can be illustrated in a practical implementation using the sample application.

The sample application enables a user to load source/input images from the local system by clicking the Load Image button. In addition users are also able to save the output result image to the local file system by clicking the Save Image button.

Image shearing can be applied to either X or Y, or both X and Y pixel coordinates. When using the sample application the user has the option of adjusting the Shear factors, as indicated on the user interface by the numeric up/down controls labelled Shear X and Shear Y.

The following image is a screenshot of the Image Transform Shear Sample Application in action:

Image Transform Shear Sample Application

Rabbit: Shear X -0.5, Y -0.25

Rabbit Shear X -0.5, Y -0.25

Image Shear Transformation

A good definition of the term shear transformation can be found in the Wikipedia article on shear mapping:

In plane geometry, a shear mapping is a linear map that displaces each point in a fixed direction, by an amount proportional to its signed distance from a line that is parallel to that direction.[1] This type of mapping is also called shear transformation, transvection, or just shearing.

A shear transformation can be applied as a horizontal shear, a vertical shear or as both. The algorithms implemented when performing a shear transformation can be expressed as follows:

Horizontal Shear Algorithm

Shear(x) = x + σ · y − (σ · W) / 2

Vertical Shear Algorithm

Shear(y) = y + σ · x − (σ · H) / 2

Symbols/variables contained in the algorithms:

  • Shear(x) : The result of a horizontal shear – the calculated X-coordinate representing a sheared pixel.
  • Shear(y) : The result of a vertical shear – the calculated Y-coordinate representing a sheared pixel.
  • σ : The lower case version of the Greek alphabet letter Sigma – represents the Shear Factor.
  • x : The X-coordinate originating from the source/input image – the horizontal coordinate value intended to be sheared.
  • y : The Y-coordinate originating from the source/input image – the vertical coordinate value intended to be sheared.
  • H : Source image height in pixels.
  • W : Source image width in pixels.

Note: When performing a shear transformation implementing both the horizontal and vertical planes, each coordinate plane can be calculated using a different shearing factor.

The algorithms have been adapted in order to implement a middle pixel offset: the product of the related plane boundary (width or height) and the specified Shearing Factor, divided by a factor of two, is subtracted from the calculated coordinate.
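
As a worked example (values chosen purely for illustration): given a source image 100 pixels wide and a horizontal Shear Factor of 0.4, the offset equates to 100 × 0.4 / 2 = 20. A pixel located at (10, 80) therefore maps to Shear(x) = 10 + 0.4 × 80 − 20 = 22, whilst the Y-coordinate remains unchanged when no vertical shear is specified.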

Rabbit: Shear X 1.0, Y 0.1

Rabbit Shear X 1.0, Y 0.1

Implementing a Shear Transformation

The sample source code performs shear transformations through the implementation of two extension methods: ShearXY and ShearImage.

The ShearXY extension method targets the Point structure. The algorithms discussed in the previous sections have been implemented in this function from a C# perspective. The definition as illustrated by the following code snippet:

public static Point ShearXY(this Point source, double shearX, 
                                               double shearY, 
                                               int offsetX,  
                                               int offsetY) 
{
    Point result = new Point(); 

    result.X = (int)Math.Round(source.X + shearX * source.Y);
    result.X -= offsetX;

    result.Y = (int)Math.Round(source.Y + shearY * source.X);
    result.Y -= offsetY;

    return result;
}

Rabbit: Shear X 0.0, Y 0.5

Rabbit Shear X 0.0, Y 0.5

The ShearImage extension method targets the Bitmap class. This method expects as parameter values a horizontal and a vertical shearing factor. Providing a shearing factor of zero results in no shearing being implemented in the corresponding direction. The definition as follows:

public static Bitmap ShearImage(this Bitmap sourceBitmap, 
                               double shearX, 
                               double shearY) 
{ 
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    // Middle pixel offsets: plane boundary x shear factor, divided by two.
    int xOffset = (int)Math.Round(sourceBitmap.Width * shearX / 2.0);
    int yOffset = (int)Math.Round(sourceBitmap.Height * shearY / 2.0);

    int sourceXY = 0;
    int resultXY = 0;

    Point sourcePoint = new Point();
    Point resultPoint = new Point();

    Rectangle imageBounds = new Rectangle(0, 0, sourceBitmap.Width,
                                                sourceBitmap.Height);

    for (int row = 0; row < sourceBitmap.Height; row++)
    {
        for (int col = 0; col < sourceBitmap.Width; col++)
        {
            sourceXY = row * sourceData.Stride + col * 4;

            sourcePoint.X = col;
            sourcePoint.Y = row;

            if (sourceXY >= 0 && sourceXY + 3 < pixelBuffer.Length)
            {
                resultPoint = sourcePoint.ShearXY(shearX, shearY,
                                                  xOffset, yOffset);

                resultXY = resultPoint.Y * sourceData.Stride +
                           resultPoint.X * 4;

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    // Also copy the pixel to its right-hand neighbour,
                    // reducing gaps left by the forward mapping.
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 4] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 5] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 6] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    // Left-hand neighbour.
                    if (resultXY - 4 >= 0)
                    {
                        resultBuffer[resultXY - 4] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY - 3] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY - 2] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY - 1] = 255;
                    }

                    // The sheared pixel itself.
                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 1] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 2] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }
            }
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
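
The following snippet illustrates how the ShearImage extension method might be invoked from calling code; the file names and shear factors are purely hypothetical values:

Bitmap sourceBitmap = new Bitmap("rabbit.png");

// Apply a shear factor of 0.4 on both the X and Y planes.
Bitmap resultBitmap = sourceBitmap.ShearImage(0.4, 0.4);

resultBitmap.Save("rabbit_sheared.png",
                  System.Drawing.Imaging.ImageFormat.Png);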

Rabbit: Shear X 0.5, Y 0.0

Rabbit Shear X 0.5, Y 0.0

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction.

The sample images featuring the image of a Desert Cottontail Rabbit are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia. The original author is attributed as Larry D. Moore.

The sample images featuring the image of a Rabbit in Snow are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia. The original author is attributed as George Tuli.

The sample images featuring the image of an Eastern Cottontail Rabbit are based on an image released into the public domain by its author. The original image can be downloaded from .

The sample images featuring the image of a Mountain Cottontail Rabbit are in the public domain in the United States because the original is a work prepared by an officer or employee of the United States Government as part of that person’s official duties under the terms of Title 17, Chapter 1, Section 105 of the US Code. The original image can be downloaded from .

Rabbit: Shear X 1.0, Y 0.0

Rabbit Shear X 1.0, Y 0.0

Rabbit: Shear X 0.5, Y 0.1

Rabbit Shear X 0.5, Y 0.1

Rabbit: Shear X -0.5, Y -0.25

Rabbit Shear X -0.5, Y -0.25

Rabbit: Shear X -0.5, Y 0.0

Rabbit Shear X -0.5, Y 0.0

Rabbit: Shear X 0.25, Y 0.0

Rabbit Shear X 0.25, Y 0.0

Rabbit: Shear X 0.50, Y 0.0

Rabbit Shear X 0.50, Y 0.0

Rabbit: Shear X 0.0, Y 0.5

Rabbit Shear X 0.0, Y 0.5

Rabbit: Shear X 0.0, Y 0.25

Rabbit Shear X 0.0, Y 0.25

Rabbit: Shear X 0.0, Y 1.0

Rabbit Shear X 0.0, Y 1.0

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; you can find URL links to them here:


C# How to: Image Transform Rotate

Article Purpose

This article provides a discussion exploring the concept of rotation as an image transformation. In addition to conventional rotation this article illustrates the concept of individual colour channel rotation.

Daisy: Rotate Red 0°, Green 10°, Blue 20°

Daisy Rotate Red 0 Green 10 Blue 20 

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

A Sample Application has been included in the sample source code that accompanies this article. The sample application serves as an implementation of the concepts discussed throughout this article. Concepts can be easily tested and replicated using the sample application.

Daisy: Rotate Red 15°, Green 5°, Blue 10°

Daisy Rotate Red 15 Green 5 Blue 10

When using the sample application users are able to load a source/input image from the local system by clicking the Load Image button. Required user input via the user interface can be found in the form of three numeric up/down controls labelled Blue, Green and Red respectively. Each control represents the degree to which the related colour component should be rotated. Possible input values range from –360 to 360. Positive values result in clockwise rotation, whereas negative values result in counterclockwise rotation. The sample application enables users to save the result image to the local file system by clicking the Save Image button.

The following image is a screenshot of the Image Transform Rotate sample application in action:

Image Transform Rotate Sample Application

Image Rotation Transformation

A rotation transformation applied to an image is, from a theoretical point of view, based in transformation geometry. From Wikipedia we learn the following:

In mathematics, transformation geometry (or transformational geometry) is the name of a mathematical and pedagogic approach to the study of geometry by focusing on groups of geometric transformations, and the properties of figures that are invariant under them. It is opposed to the classical synthetic geometry approach of Euclidean geometry, that focuses on geometric constructions.

Rose: Rotate Red –20°, Green 0°, Blue 20°

Rose Rotate Red -20 Green 0 Blue 20

In this article rotation is implemented through applying a set algorithm to the coordinates of each pixel forming part of a source/input image. In the corresponding result image the pixel located at the calculated rotated coordinates will be assigned the colour channel values of the original pixel.

The algorithms implemented when calculating a pixel’s rotated coordinates can be expressed as follows:

R(x) = (x − W / 2) · cos α − (y − H / 2) · sin α + W / 2

R(y) = (x − W / 2) · sin α + (y − H / 2) · cos α + H / 2

Symbols/variables contained in the algorithms:

  • R(x) : The result of rotating a pixel’s x-coordinate.
  • R(y) : The result of rotating a pixel’s y-coordinate.
  • x : The source pixel’s x-coordinate.
  • y : The source pixel’s y-coordinate.
  • W : The width in pixels of the source image.
  • H : The height in pixels of the source image.
  • α : The lower case Greek alphabet letter alpha. The value represented by alpha reflects the degree of rotation.

Butterfly: Rotate Red 10°, Green 0°, Blue 0°

Butterfly Rotate Red 10 Green 0 Blue 0

In order to apply a rotation transformation each pixel forming part of the source/input image should be iterated. The algorithms expressed above should be applied to each pixel.

The pixel coordinates located at exactly the middle of an image can be calculated by dividing the width by a factor of two in regards to the X-coordinate; the Y-coordinate can be calculated by dividing the height, also by a factor of two. The algorithms calculate the coordinates of the middle pixel and implement these coordinates as offsets. Implementing the pixel offsets results in the image being rotated around the image’s middle, as opposed to the top left pixel (0, 0).
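
As a brief worked example (illustrative values only): consider a 100×100 pixel source image, thus offsets of (50, 50), and a rotation of 90°. The pixel at (0, 50) maps to R(x) = (0 − 50) · cos 90° − (50 − 50) · sin 90° + 50 = 50 and R(y) = (0 − 50) · sin 90° + (50 − 50) · cos 90° + 50 = 0; in other words the left-middle pixel rotates to the top-middle, consistent with clockwise rotation when the Y-axis points downward.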

This article and the associated sample source code extend the concept of traditional rotation through implementing rotation on a per colour channel basis. Through user input the individual degree of rotation can be specified for each colour channel, namely Red, Green and Blue. Functionality has been implemented allowing each colour channel to be rotated to a different degree. In essence the algorithms described above have to be implemented three times per pixel iterated.

Daisy: Rotate Red 30°, Green 0°, Blue 180°

Daisy Rotate Red 30 Green 0 Blue 180 

Implementing a Rotation Transformation

The sample source code implements a rotation transformation through the definition of two extension methods: RotateXY and RotateImage.

The RotateXY extension method targets the Point structure. This method serves as an encapsulation of the logic behind calculating rotated coordinates at a specified angle. The practical C# code implementation of the algorithms discussed in the previous section can be found within this method. The definition as follows:

public static Point RotateXY(this Point source, double degrees,
                                       int offsetX, int offsetY)
{ 
   Point result = new Point();
 
    // Note: although named degrees, this parameter is expected in radians.
    // RotateImage performs the degrees to radians conversion before
    // invoking RotateXY.
    result.X = (int)(Math.Round((source.X - offsetX) *
               Math.Cos(degrees) - (source.Y - offsetY) *
               Math.Sin(degrees))) + offsetX;

    result.Y = (int)(Math.Round((source.X - offsetX) *
               Math.Sin(degrees) + (source.Y - offsetY) *
               Math.Cos(degrees))) + offsetY;

    return result;
}

Rose: Rotate Red –60°, Green 0°, Blue 60°

Rose Rotate Red -60 Green 0 Blue 60

The RotateImage extension method targets the Bitmap class. This method expects three rotation degree/angle values, each corresponding to a colour channel. Positive degrees result in clockwise rotation and negative values result in counterclockwise rotation. The definition as follows:

public static Bitmap RotateImage(this Bitmap sourceBitmap,  
                                       double degreesBlue, 
                                      double degreesGreen, 
                                        double degreesRed) 
{ 
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle(0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    // Convert to radians.
    degreesBlue = degreesBlue * Math.PI / 180.0;
    degreesGreen = degreesGreen * Math.PI / 180.0;
    degreesRed = degreesRed * Math.PI / 180.0;

    // Calculate offsets in order to rotate on the image middle.
    int xOffset = (int)(sourceBitmap.Width / 2.0);
    int yOffset = (int)(sourceBitmap.Height / 2.0);

    int sourceXY = 0;
    int resultXY = 0;

    Point sourcePoint = new Point();
    Point resultPoint = new Point();

    Rectangle imageBounds = new Rectangle(0, 0, sourceBitmap.Width,
                                                sourceBitmap.Height);

    for (int row = 0; row < sourceBitmap.Height; row++)
    {
        for (int col = 0; col < sourceBitmap.Width; col++)
        {
            sourceXY = row * sourceData.Stride + col * 4;

            sourcePoint.X = col;
            sourcePoint.Y = row;

            if (sourceXY >= 0 && sourceXY + 3 < pixelBuffer.Length)
            {
                // Calculate Blue channel rotation.
                resultPoint = sourcePoint.RotateXY(degreesBlue,
                                                   xOffset, yOffset);

                resultXY = resultPoint.Y * sourceData.Stride +
                           resultPoint.X * 4;

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 4] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY] = pixelBuffer[sourceXY];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }

                // Calculate Green channel rotation.
                resultPoint = sourcePoint.RotateXY(degreesGreen,
                                                   xOffset, yOffset);

                resultXY = resultPoint.Y * sourceData.Stride +
                           resultPoint.X * 4;

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 5] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 1] = pixelBuffer[sourceXY + 1];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }

                // Calculate Red channel rotation.
                resultPoint = sourcePoint.RotateXY(degreesRed,
                                                   xOffset, yOffset);

                resultXY = resultPoint.Y * sourceData.Stride +
                           resultPoint.X * 4;

                if (imageBounds.Contains(resultPoint) && resultXY >= 0)
                {
                    if (resultXY + 7 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 6] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 7] = 255;
                    }

                    if (resultXY + 3 < resultBuffer.Length)
                    {
                        resultBuffer[resultXY + 2] = pixelBuffer[sourceXY + 2];
                        resultBuffer[resultXY + 3] = 255;
                    }
                }
            }
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
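
As a usage illustration, the snippet below rotates only the blue channel; the file names and angles are hypothetical values, and the parameter order follows the definition above (blue, green, red):

Bitmap sourceBitmap = new Bitmap("daisy.png");

// Rotate the blue channel by 20 degrees, leaving green and red unchanged.
Bitmap resultBitmap = sourceBitmap.RotateImage(20.0, 0.0, 0.0);

resultBitmap.Save("daisy_rotated.png",
                  System.Drawing.Imaging.ImageFormat.Png);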

Daisy: Rotate Red 15°, Green 5°, Blue 5°

Daisy Rotate Red 15 Green 5 Blue 5

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction.

The sample images featuring an image of a yellow daisy are licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license and can be downloaded from Wikimedia.org.

The sample images featuring an image of a white daisy are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia.

The sample images featuring an image of a CPU are licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license. The original author is credited as Andrew Dunn. The original image can be downloaded from .

The sample images featuring an image of a rose are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic licenses. The original image can be downloaded from .

The sample images featuring an image of a butterfly are licensed under the Creative Commons Attribution 3.0 Unported license and can be downloaded from Wikimedia.org.

The Original Image

Intel_80486DX2_bottom

CPU: Rotate Red 90°, Green 0°, Blue –30°

CPU Rotate Red 90 Green 0 Blue -30

CPU: Rotate Red 0°, Green 10°, Blue 0°

CPU Rotate Red 0 Green 10 Blue 0

CPU: Rotate Red –4°, Green 4°, Blue 6°

CPU Rotate Red -4 Green 4 Blue 6

CPU: Rotate Red 10°, Green 0°, Blue 0°

CPU Rotate Red 10 Green 0 Blue 0

CPU: Rotate Red 10°, Green –5°, Blue 0°

CPU Rotate Red 10 Green -5 Blue 0

CPU: Rotate Red 10°, Green 0°, Blue 10°

CPU Rotate Red 10 Green 0 Blue 10

CPU: Rotate Red –10°, Green 10°, Blue 0°

CPU Rotate Red -10 Green 10 Blue 0

CPU: Rotate Red 30°, Green –30°, Blue 0°

CPU Rotate Red 30 Green -30 Blue 0

CPU: Rotate Red 40°, Green 20°, Blue 0°

CPU Rotate Red 40 Green 20 Blue 0

CPU: Rotate Red 60°, Green 30°, Blue 0°

CPU Rotate Red 60 Green 30 Blue 0

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; you can find URL links to them here:

C# How to: Image Blur

Article Purpose

This article serves to provide an introduction and discussion relating to image blur methods and techniques. The image blur methods covered in this article include: Mean filter blur (also known as Box blur), Gaussian blur, Median filter blur and Motion blur.

Daisy: Mean 9×9

Daisy Mean 9x9

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

This article is accompanied by a sample application, intended to provide a means of testing and replicating topics discussed in this article. The sample application is a Windows Forms based application of which the user interface enables the user to select an image blur type to implement.

When clicking the Load Image button users are able to browse the local file system in order to select a source/input image. In addition users are also able to save the blurred result image when clicking the Save Image button and browsing the local file system.

Daisy: Mean 7×7

Daisy Mean 7x7

The sample application provides the user with the ability to select the method of image blur to implement. The dropdown located on the right-hand side of the user interface lists all of the supported methods of image blur. When a user selects an item from the dropdown, the associated blur method will be implemented on the preview image.

The image below is a screenshot of the Image Blur Filter sample application in action:

Image Blur Filter Sample Application

Image Blur Overview

The process of image blurring can be regarded as reducing the sharpness or crispness defined by an image. Image blurring results in image details/edges being perceived as less distinct. Images are often blurred as a method of smoothing an image.

Images perceived as too crisp/sharp can be softened by applying a variety of image blur techniques and intensity levels. Often images are smoothed/blurred in order to remove/reduce image noise. In edge detection implementations better results are often achieved when first implementing image noise reduction through smoothing/blurring. Image blurring can even be implemented in a fashion where results reflect motion, a method known as motion blur.

In this article and the accompanying sample source code all supported methods of image blur have been implemented through image convolution, with the exception of the Median filter. Each of the supported methods in essence only represents a different convolution kernel/matrix. The technique capable of achieving optimal results will to varying degrees be dependent on the features present in the specified source/input image. Each method provides a different set of desired properties and compromises. In the following sections an overview of each method will be discussed.

Daisy: Mean 9×9

Daisy Mean 9x9

Mean Filter/Box Blur

The Mean filter, also sometimes referred to as a Box blur, represents a fairly simplistic implementation and definition. A definition can be found on Wikipedia as follows:

A box blur is an image filter in which each pixel in the resulting image has a value equal to the average value of its neighbouring pixels in the input image. It is a form of low-pass ("blurring") filter and is a convolution.

Due to its property of using equal weights it can be implemented using a much simpler accumulation algorithm which is significantly faster than using a sliding window algorithm.

Mean filter as a title relates to all weight values in a convolution kernel being equal, hence the alternate title of Box blur. In most cases a Mean filter convolution kernel will only contain the value one. When performing image convolution implementing a Mean filter, the factor value equates to one divided by the sum of all kernel values.

The following is an example of a 5×5 convolution kernel:

Mean Filter Blur 5x5 Kernel

The kernel consists of 25 elements, therefore the factor value equates to one divided by twenty five.

The Mean filter blur does not result in the same level of smoothing achieved by other image blur methods. The Mean filter method can also be susceptible to directional artefacts.

Daisy: Mean 5×5

Daisy Mean 5x5

Gaussian Blur

The Gaussian blur method is a popular and often implemented filter. In contrast to the Mean filter method, a Gaussian blur produces resulting images appearing to contain a more uniform level of smoothing. When implementing image noise reduction a Gaussian blur is often applied to source/input images, resulting in noise being reduced. The Gaussian blur has a good level of edge preservation, hence being used in edge detection operations.

From Wikipedia we gain the following definition:

A Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function. It is a widely used effect in graphics software, typically to reduce image noise and reduce detail. The visual effect of this blurring technique is a smooth blur resembling that of viewing the image through a translucent screen, distinctly different from the bokeh effect produced by an out-of-focus lens or the shadow of an object under usual illumination. Gaussian smoothing is also used as a pre-processing stage in computer vision algorithms in order to enhance image structures at different scales.

A potential drawback to implementing a Gaussian blur results from the filter being computationally intensive. The following represents a 5×5 Gaussian blur convolution kernel. The sum total of all elements in the kernel equates to 159 (the rows sum to 17 + 38 + 49 + 38 + 17), therefore a factor value of 1.0 / 159.0 will be implemented.

Gaussian Blur 5x5 Kernel

Daisy: Gaussian 5×5

Daisy Gaussian 5x5

Median Filter Blur

The Median filter is classified as a non-linear filter. In contrast to the other methods of image blur discussed in this article the Median filter implementation does not involve convolution or a predefined matrix kernel. The following definition can be found on Wikipedia:

In signal processing, it is often desirable to be able to perform some kind of noise reduction on an image or signal. The median filter is a nonlinear digital filtering technique, often used to remove noise. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise.

Daisy: Median 7×7

Daisy Median 7x7

As the name implies, the Median filter operates by calculating the median value of a pixel group, also referred to as a window. Calculating a median value involves a number of steps. The required steps are listed as follows:

  1. Iterate each pixel that forms part of the source/input image.
  2. In relation to the pixel currently being iterated, determine neighbouring pixels located within the bounds defined by the window size. The window location should be offset in order to align the window’s middle pixel and the pixel currently being iterated.
  3. Neighbouring pixels located within the bounds defined by the window should be added to a one dimensional neighbourhood array. Once all values have been added, the array should be sorted by value.
  4. The pixel value located at the middle of the sorted neighbourhood array qualifies as the median value. The newly determined median value should be assigned to the pixel currently being iterated, as shown in the sketch following this list.
  5. Repeat the steps listed above until all pixels within the source/input image have been iterated.
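
A minimal sketch of steps 3 and 4, using hypothetical channel values for a single 3×3 window:

// Nine neighbourhood values (hypothetical) gathered from a 3x3 window.
int[] window = { 12, 200, 14, 13, 255, 11, 12, 13, 14 };
Array.Sort(window);

// The middle element of the sorted array is the median: index 4 of 9.
int median = window[window.Length / 2];  // 13 - outliers 200 and 255 discarded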

Similar to the Gaussian blur filter, the Median filter has the ability to smooth images whilst providing edge preservation. Depending on the window size implemented and the physical dimensions of the input/source image the Median filter can be computationally expensive.

Daisy: Median 9×9

Daisy Median 9x9

Motion Blur

The sample source code implements Motion blur filters. Motion blur in the traditional sense is associated with photography and video capturing. Motion blur can often be observed in scenarios where rapid movements are being captured to photographs or video recordings. When recording a single frame, rapid movements could result in the scene changing before the frame capture has completed.

Motion blur can be synthetically imitated through the implementation of digital Motion blur filters. The size of the convolution kernel provided when implementing a Motion blur affects the filter intensity perceived in the result image. Relating to Motion blur filters, the size of the kernel specified influences the perception and appearance of how rapidly movement had occurred to have blurred the resulting image. Larger kernels produce the appearance of more rapid motion, whereas smaller kernels result in less rapid motion being perceived.

Daisy: Motion Blur 7×7 135 Degrees

Daisy Motion Blur 7x7 135 Degrees

Depending on the kernel specified the ability exists to create the appearance of movement having occurred in a certain direction. The sample source code implements Motion blur filters at 45 degrees, at 135 degrees and in both directions simultaneously.

The kernels listed below represent 5×5 Motion blur filters occurring at 45 degrees and 135 degrees:

MotionBlur5x5

Image Blur Implementation

The sample source code implements all of the concepts explored throughout this article. The source code definition can be grouped into four sections: the ImageBlurFilter method, the ConvolutionFilter method, the MedianFilter method and the Matrix class. The following article sections relate to these four main source code sections.

The ImageBlurFilter extension method has the purpose of invoking the correct blur filter method with the relevant method parameters. This method acts as a method wrapper providing the technical implementation details required when performing a specified blur filter.

The definition of the ImageBlurFilter as follows:

 public static Bitmap ImageBlurFilter(this Bitmap sourceBitmap,  
                                             BlurType blurType) 
{  
     Bitmap resultBitmap = null; 

    switch (blurType)
    {
        case BlurType.Mean3x3:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Mean3x3, 1.0 / 9.0, 0); break;
        case BlurType.Mean5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Mean5x5, 1.0 / 25.0, 0); break;
        case BlurType.Mean7x7:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Mean7x7, 1.0 / 49.0, 0); break;
        case BlurType.Mean9x9:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.Mean9x9, 1.0 / 81.0, 0); break;
        case BlurType.GaussianBlur3x3:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.GaussianBlur3x3, 1.0 / 16.0, 0); break;
        case BlurType.GaussianBlur5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.GaussianBlur5x5, 1.0 / 159.0, 0); break;
        case BlurType.MotionBlur5x5:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur5x5, 1.0 / 10.0, 0); break;
        case BlurType.MotionBlur5x5At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur5x5At45Degrees, 1.0 / 5.0, 0); break;
        case BlurType.MotionBlur5x5At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur5x5At135Degrees, 1.0 / 5.0, 0); break;
        case BlurType.MotionBlur7x7:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur7x7, 1.0 / 14.0, 0); break;
        case BlurType.MotionBlur7x7At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur7x7At45Degrees, 1.0 / 7.0, 0); break;
        case BlurType.MotionBlur7x7At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur7x7At135Degrees, 1.0 / 7.0, 0); break;
        case BlurType.MotionBlur9x9:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur9x9, 1.0 / 18.0, 0); break;
        case BlurType.MotionBlur9x9At45Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur9x9At45Degrees, 1.0 / 9.0, 0); break;
        case BlurType.MotionBlur9x9At135Degrees:
            resultBitmap = sourceBitmap.ConvolutionFilter(
                           Matrix.MotionBlur9x9At135Degrees, 1.0 / 9.0, 0); break;
        case BlurType.Median3x3:
            resultBitmap = sourceBitmap.MedianFilter(3); break;
        case BlurType.Median5x5:
            resultBitmap = sourceBitmap.MedianFilter(5); break;
        case BlurType.Median7x7:
            resultBitmap = sourceBitmap.MedianFilter(7); break;
        case BlurType.Median9x9:
            resultBitmap = sourceBitmap.MedianFilter(9); break;
        case BlurType.Median11x11:
            resultBitmap = sourceBitmap.MedianFilter(11); break;
    }

    return resultBitmap;
}
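
Invoking the wrapper method might look as follows; BlurType is the enumeration defined in the accompanying sample source code and the file names are placeholder values:

Bitmap sourceBitmap = new Bitmap("daisy.png");

// Select the 5x5 Gaussian kernel; any other BlurType member works equally.
Bitmap resultBitmap = sourceBitmap.ImageBlurFilter(BlurType.GaussianBlur5x5);

resultBitmap.Save("daisy_blurred.png",
                  System.Drawing.Imaging.ImageFormat.Png);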

Daisy: Motion Blur 9×9

Daisy Motion Blur 9x9

The Matrix class serves as a collection of various convolution kernel definitions. The Matrix class and all public properties are defined as static. The definition of the Matrix class as follows:

     public static class Matrix 
    {  
         public static double[,] Mean3x3 
         {  
             get 
             {  
                 return new double[,]   
                { {  1, 1, 1, },  
                  {  1, 1, 1, },  
                  {  1, 1, 1, }, }; 
             }  
         }  

         public static double[,] Mean5x5
         { get { return new double[,]
           { { 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1 }, }; } }

         public static double[,] Mean7x7
         { get { return new double[,]
           { { 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1 }, }; } }

         public static double[,] Mean9x9
         { get { return new double[,]
           { { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1, 1, 1 },
             { 1, 1, 1, 1, 1, 1, 1, 1, 1 }, }; } }

         public static double[,] GaussianBlur3x3
         { get { return new double[,]
           { { 1, 2, 1, },
             { 2, 4, 2, },
             { 1, 2, 1, }, }; } }

         public static double[,] GaussianBlur5x5
         { get { return new double[,]
           { { 2, 04, 05, 04, 2 },
             { 4, 09, 12, 09, 4 },
             { 5, 12, 15, 12, 5 },
             { 4, 09, 12, 09, 4 },
             { 2, 04, 05, 04, 2 }, }; } }

         public static double[,] MotionBlur5x5
         { get { return new double[,]
           { { 1, 0, 0, 0, 1 },
             { 0, 1, 0, 1, 0 },
             { 0, 0, 1, 0, 0 },
             { 0, 1, 0, 1, 0 },
             { 1, 0, 0, 0, 1 }, }; } }

         public static double[,] MotionBlur5x5At45Degrees
         { get { return new double[,]
           { { 0, 0, 0, 0, 1 },
             { 0, 0, 0, 1, 0 },
             { 0, 0, 1, 0, 0 },
             { 0, 1, 0, 0, 0 },
             { 1, 0, 0, 0, 0 }, }; } }

         public static double[,] MotionBlur5x5At135Degrees
         { get { return new double[,]
           { { 1, 0, 0, 0, 0 },
             { 0, 1, 0, 0, 0 },
             { 0, 0, 1, 0, 0 },
             { 0, 0, 0, 1, 0 },
             { 0, 0, 0, 0, 1 }, }; } }

         public static double[,] MotionBlur7x7
         { get { return new double[,]
           { { 1, 0, 0, 0, 0, 0, 1 },
             { 0, 1, 0, 0, 0, 1, 0 },
             { 0, 0, 1, 0, 1, 0, 0 },
             { 0, 0, 0, 1, 0, 0, 0 },
             { 0, 0, 1, 0, 1, 0, 0 },
             { 0, 1, 0, 0, 0, 1, 0 },
             { 1, 0, 0, 0, 0, 0, 1 }, }; } }

         public static double[,] MotionBlur7x7At45Degrees
         { get { return new double[,]
           { { 0, 0, 0, 0, 0, 0, 1 },
             { 0, 0, 0, 0, 0, 1, 0 },
             { 0, 0, 0, 0, 1, 0, 0 },
             { 0, 0, 0, 1, 0, 0, 0 },
             { 0, 0, 1, 0, 0, 0, 0 },
             { 0, 1, 0, 0, 0, 0, 0 },
             { 1, 0, 0, 0, 0, 0, 0 }, }; } }

         public static double[,] MotionBlur7x7At135Degrees
         { get { return new double[,]
           { { 1, 0, 0, 0, 0, 0, 0 },
             { 0, 1, 0, 0, 0, 0, 0 },
             { 0, 0, 1, 0, 0, 0, 0 },
             { 0, 0, 0, 1, 0, 0, 0 },
             { 0, 0, 0, 0, 1, 0, 0 },
             { 0, 0, 0, 0, 0, 1, 0 },
             { 0, 0, 0, 0, 0, 0, 1 }, }; } }

         public static double[,] MotionBlur9x9
         { get { return new double[,]
           { { 1, 0, 0, 0, 0, 0, 0, 0, 1, },
             { 0, 1, 0, 0, 0, 0, 0, 1, 0, },
             { 0, 0, 1, 0, 0, 0, 1, 0, 0, },
             { 0, 0, 0, 1, 0, 1, 0, 0, 0, },
             { 0, 0, 0, 0, 1, 0, 0, 0, 0, },
             { 0, 0, 0, 1, 0, 1, 0, 0, 0, },
             { 0, 0, 1, 0, 0, 0, 1, 0, 0, },
             { 0, 1, 0, 0, 0, 0, 0, 1, 0, },
             { 1, 0, 0, 0, 0, 0, 0, 0, 1, }, }; } }

         public static double[,] MotionBlur9x9At45Degrees
         { get { return new double[,]
           { { 0, 0, 0, 0, 0, 0, 0, 0, 1, },
             { 0, 0, 0, 0, 0, 0, 0, 1, 0, },
             { 0, 0, 0, 0, 0, 0, 1, 0, 0, },
             { 0, 0, 0, 0, 0, 1, 0, 0, 0, },
             { 0, 0, 0, 0, 1, 0, 0, 0, 0, },
             { 0, 0, 0, 1, 0, 0, 0, 0, 0, },
             { 0, 0, 1, 0, 0, 0, 0, 0, 0, },
             { 0, 1, 0, 0, 0, 0, 0, 0, 0, },
             { 1, 0, 0, 0, 0, 0, 0, 0, 0, }, }; } }

         public static double[,] MotionBlur9x9At135Degrees
         { get { return new double[,]
           { { 1, 0, 0, 0, 0, 0, 0, 0, 0, },
             { 0, 1, 0, 0, 0, 0, 0, 0, 0, },
             { 0, 0, 1, 0, 0, 0, 0, 0, 0, },
             { 0, 0, 0, 1, 0, 0, 0, 0, 0, },
             { 0, 0, 0, 0, 1, 0, 0, 0, 0, },
             { 0, 0, 0, 0, 0, 1, 0, 0, 0, },
             { 0, 0, 0, 0, 0, 0, 1, 0, 0, },
             { 0, 0, 0, 0, 0, 0, 0, 1, 0, },
             { 0, 0, 0, 0, 0, 0, 0, 0, 1, }, }; } }
    }

Daisy: Median 7×7

Daisy Median 7x7

The MedianFilter extension method targets the Bitmap class. The MedianFilter method applies a Median filter using the specified matrix size (window size), returning a new Bitmap representing the filtered image.

The definition of the MedianFilter as follows:

 public static Bitmap MedianFilter(this Bitmap sourceBitmap, 
                                   int matrixSize) 
{ 
     BitmapData sourceData = 
                sourceBitmap.LockBits(new Rectangle(0, 0, 
                sourceBitmap.Width, sourceBitmap.Height), 
                ImageLockMode.ReadOnly, 
                PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    List<int> neighbourPixels = new List<int>();
    byte[] middlePixel;

    for (int offsetY = filterOffset; offsetY <
        sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
            sourceBitmap.Width - filterOffset; offsetX++)
        {
            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            neighbourPixels.Clear();

            for (int filterY = -filterOffset;
                filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset;
                    filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) +
                                 (filterY * sourceData.Stride);

                    neighbourPixels.Add(BitConverter.ToInt32(
                                        pixelBuffer, calcOffset));
                }
            }

            neighbourPixels.Sort();

            // The median is the middle element of the sorted
            // neighbourhood, in other words index Count / 2.
            middlePixel = BitConverter.GetBytes(
                          neighbourPixels[neighbourPixels.Count / 2]);

            resultBuffer[byteOffset] = middlePixel[0];
            resultBuffer[byteOffset + 1] = middlePixel[1];
            resultBuffer[byteOffset + 2] = middlePixel[2];
            resultBuffer[byteOffset + 3] = middlePixel[3];
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
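
The MedianFilter method can also be invoked directly, specifying only the window size; the 5×5 window below is an illustrative value:

// A larger matrix size smooths more aggressively, at a higher cost.
Bitmap medianBitmap = sourceBitmap.MedianFilter(5);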

Daisy: Motion Blur 9×9

Daisy Motion Blur 9x9

The sample source code performs image convolution by invoking the ConvolutionFilter extension method.

The definition of the ConvolutionFilter as follows:

private static Bitmap ConvolutionFilter(this Bitmap sourceBitmap, 
                                          double[,] filterMatrix, 
                                               double factor = 1, 
                                                    int bias = 0) 
{ 
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, 
                             sourceBitmap.Width, sourceBitmap.Height), 
                                               ImageLockMode.ReadOnly, 
                                         PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    double blue = 0.0;
    double green = 0.0;
    double red = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY <
        sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
            sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0;
            green = 0;
            red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int filterY = -filterOffset;
                filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset;
                    filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) +
                                 (filterY * sourceData.Stride);

                    blue += (double)(pixelBuffer[calcOffset]) *
                            filterMatrix[filterY + filterOffset,
                                         filterX + filterOffset];

                    green += (double)(pixelBuffer[calcOffset + 1]) *
                             filterMatrix[filterY + filterOffset,
                                          filterX + filterOffset];

                    red += (double)(pixelBuffer[calcOffset + 2]) *
                           filterMatrix[filterY + filterOffset,
                                        filterX + filterOffset];
                }
            }

            blue = factor * blue + bias;
            green = factor * green + bias;
            red = factor * red + bias;

            // Clamp each channel to the valid byte range 0..255.
            blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
            green = (green > 255 ? 255 : (green < 0 ? 0 : green));
            red = (red > 255 ? 255 : (red < 0 ? 0 : red));

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
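
Since ConvolutionFilter is declared private it is normally reached through ImageBlurFilter; called directly it would take a kernel and its matching factor, for example the 5×5 Mean kernel with a factor of one twenty-fifth (an illustrative call, assuming the method were accessible from the call site):

// 25 kernel elements of value one, hence a factor of 1.0 / 25.0.
Bitmap meanBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean5x5, 1.0 / 25.0);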

Sample Images

This article features a number of sample images. All featured images have been licensed allowing for reproduction.

The sample images featuring an image of a yellow daisy are licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license and can be downloaded from Wikimedia.org.

The sample images featuring an image of a white daisy are licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license and can be downloaded from Wikipedia.

The sample images featuring an image of a pink daisy are licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license and can be downloaded from Wikipedia.

The sample images featuring an image of a purple daisy are licensed under the Creative Commons Attribution-ShareAlike 3.0 License and can be downloaded from Wikipedia.

The Original Image

Purple_osteospermum

Daisy: Gaussian 3×3

Daisy Gaussian 3x3

Daisy: Gaussian 5×5

Daisy Gaussian 5x5

Daisy: Mean 3×3

Daisy Mean 3x3

Daisy: Mean 5×5

Daisy Mean 5x5

Daisy: Mean 7×7

Daisy Mean 7x7

Daisy: Mean 9×9

Daisy Mean 9x9

Daisy: Median 3×3

Daisy Median 3x3

Daisy: Median 5×5

Daisy Median 5x5

Daisy: Median 7×7

Daisy Median 7x7

Daisy: Median 9×9

Daisy Median 9x9

Daisy: Median 11×11

Daisy Median 11x11

Daisy: Motion Blur 5×5

Daisy Motion Blur 5x5

Daisy: Motion Blur 5×5 45 Degrees

Daisy Motion Blur 5x5 45 Degrees

Daisy: Motion Blur 5×5 135 Degrees

Daisy Motion Blur 5x5 135 Degrees

Daisy: Motion Blur 7×7

Daisy Motion Blur 7x7

Daisy: Motion Blur 7×7 45 Degrees

Daisy Motion Blur 7x7 45 Degrees

Daisy: Motion Blur 7×7 135 Degrees

Daisy Motion Blur 7x7 135 Degrees

Daisy: Motion Blur 9×9

Daisy Motion Blur 9x9

Daisy: Motion Blur 9×9 45 Degrees

Daisy Motion Blur 9x9 45 Degrees

Daisy: Motion Blur 9×9 135 Degrees

Daisy Motion Blur 9x9 135 Degrees

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; you can find URL links to them here:

C# How to: Sharpen Edge Detection

Article Purpose

It is the objective of this article to explore and provide a discussion based on the concept of edge detection through means of image sharpening. Illustrated are various methods of sharpening, and in addition a Median filter implemented in image noise reduction.

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

The sample source code accompanying this article includes a Windows Forms based Sample Application. The concepts illustrated throughout this article can easily be tested and replicated by making use of the Sample Application.

The Sample Application exposes seven main areas of functionality:

  • Loading input/source images.
  • Saving result images.
  • Sharpen Filters
  • Median Filter Size
  • Threshold value
  • Grayscale Source
  • Mono Output

When using the Sample Application users are able to select input/source images from the local file system by clicking the Load Image button. If desired, users may save result images to the local file system by clicking the Save Image button.

The sample source code and sample application implement various methods of image sharpening. Each method of sharpening results in varying degrees of edge detection. Some methods are more effective than others. The sharpening method being implemented serves as a primary factor influencing results. The effectiveness of the selected method is reliant on the input/source image provided. The sample application implements the following sharpening methods:

  • Sharpen5To4
  • Sharpen7To1
  • Sharpen9To1
  • Sharpen12To1
  • Sharpen24To1
  • Sharpen48To1
  • Sharpen10To8
  • Sharpen11To8
  • Sharpen821

Image noise is regarded as a common problem relating to edge detection. Often image noise will be incorrectly detected as forming part of an edge within an image. The sample source code implements a Median filter in order to counteract image noise. The size/intensity of the Median filter applied can be specified via the control labelled Median Filter Size.

The Threshold value configured through the sample application’s user interface has a two-fold implementation. In a scenario where output images are created in a black and white format the Threshold value will be implemented to determine whether a pixel should be either black or white. When output images are created in full colour the Threshold value will be added to each pixel, acting as a bias value.

In some scenarios edge detection can be achieved more effectively when specifying grayscale format source/input images. The purpose of the option labelled Grayscale Source is to convert source/input images to a grayscale format before implementing edge detection.

The option labelled Mono Output, when selected, has the effect of producing result images in a black and white format.

The image below is a screenshot of the Sharpen Edge Detection sample application in action:

Sharpen Edge Detection Sample Application

Edge Detection through Image Sharpening

The sample source code performs edge detection on source/input images by means of image sharpening. The steps performed can be broken down to the following items:

  1. If specified, apply a Median filter to the input/source image. A Median filter results in smoothing an image. Image noise can be reduced when implementing a Median filter. Smoothing/blurring an image often results in reducing details/edges. The Median filter is well suited to smoothing away image noise whilst implementing edge preservation. When performing edge detection the Median filter functions as an ideal method of reducing image noise whilst not negatively impacting edge detection tasks.
  2. If specified, convert the source/input image to grayscale by iterating each pixel that forms part of the image. Each pixel’s colour components are calculated by multiplying by the factor values: Red x 0.3, Green x 0.59, Blue x 0.11.
  3. Using the specified sharpening kernel, iterate each pixel forming part of the source/input image, performing convolution on each pixel colour channel.
  4. If the output has been specified as Mono, the middle pixel value calculated through convolution should be multiplied by the specified factor value. Each colour component should be compared to the specified threshold value and be assigned as either black or white.
  5. If the output has not been specified as Mono, the middle pixel value calculated through convolution should be multiplied by the factor value, to which the threshold/bias value should be added. The value of each colour component will be set to the result of subtracting the calculated convolution/filter/bias value from the pixel’s original colour component value. In other words, perform image convolution applying a factor and bias, the result of which should then be subtracted from the original source/input image (see the sketch following this list).
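
A minimal sketch of the subtraction described in step 5, for a single colour channel with hypothetical values:

// Hypothetical single-channel example of step 5: original value 120,
// convolution sum 30, factor 1.0 and a threshold/bias of 10.
double convolutionSum = 30.0;
double factor = 1.0;
int bias = 10;
byte original = 120;

double result = original - factor * convolutionSum + bias;  // 100

// Clamp to the valid byte range before assignment.
result = result > 255 ? 255 : (result < 0 ? 0 : result);
byte output = (byte)result;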

Implementing Sharpen Edge Detection

The sample source code achieves edge detection through image sharpening by implementing three methods: MedianFilter and two overloaded methods titled SharpenEdgeDetect.

The MedianFilter method is defined as an extension method targeting the Bitmap class. The definition as follows:

 public static Bitmap MedianFilter(this Bitmap sourceBitmap, 
                                   int matrixSize) 
{ 
     BitmapData sourceData = 
                sourceBitmap.LockBits(new Rectangle(0, 0, 
                sourceBitmap.Width, sourceBitmap.Height), 
                ImageLockMode.ReadOnly, 
                PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    int filterOffset = (matrixSize - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    List<int> neighbourPixels = new List<int>();
    byte[] middlePixel;

    for (int offsetY = filterOffset; offsetY <
        sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
            sourceBitmap.Width - filterOffset; offsetX++)
        {
            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            neighbourPixels.Clear();

            for (int filterY = -filterOffset;
                filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset;
                    filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) +
                                 (filterY * sourceData.Stride);

                    neighbourPixels.Add(BitConverter.ToInt32(
                                        pixelBuffer, calcOffset));
                }
            }

            neighbourPixels.Sort();

            // The median is the middle element of the sorted
            // neighbourhood, in other words index Count / 2.
            middlePixel = BitConverter.GetBytes(
                          neighbourPixels[neighbourPixels.Count / 2]);

            resultBuffer[byteOffset] = middlePixel[0];
            resultBuffer[byteOffset + 1] = middlePixel[1];
            resultBuffer[byteOffset + 2] = middlePixel[2];
            resultBuffer[byteOffset + 3] = middlePixel[3];
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

The public implementation of the SharpenEdgeDetect extension method has the purpose of translating user specified options into the relevant method calls to the private implementation of the SharpenEdgeDetect extension method. The public implementation of the SharpenEdgeDetect method as follows:

public static Bitmap SharpenEdgeDetect(this Bitmap sourceBitmap, 
                                            SharpenType sharpen, 
                                                   int bias = 0, 
                                         bool grayscale = false, 
                                              bool mono = false, 
                                       int medianFilterSize = 0) 
{ 
    Bitmap resultBitmap = null; 

    if (medianFilterSize == 0)
    {
        resultBitmap = sourceBitmap;
    }
    else
    {
        resultBitmap = sourceBitmap.MedianFilter(medianFilterSize);
    }

    switch (sharpen)
    {
        case SharpenType.Sharpen7To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                           Matrix.Sharpen7To1, 1.0, bias, grayscale, mono); break;
        case SharpenType.Sharpen9To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                           Matrix.Sharpen9To1, 1.0, bias, grayscale, mono); break;
        case SharpenType.Sharpen12To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                           Matrix.Sharpen12To1, 1.0, bias, grayscale, mono); break;
        case SharpenType.Sharpen24To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                           Matrix.Sharpen24To1, 1.0, bias, grayscale, mono); break;
        case SharpenType.Sharpen48To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                           Matrix.Sharpen48To1, 1.0, bias, grayscale, mono); break;
        case SharpenType.Sharpen5To4:
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                           Matrix.Sharpen5To4, 1.0, bias, grayscale, mono); break;
        case SharpenType.Sharpen10To8:
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                           Matrix.Sharpen10To8, 1.0, bias, grayscale, mono); break;
        case SharpenType.Sharpen11To8:
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                           Matrix.Sharpen11To8, 3.0 / 1.0, bias, grayscale, mono); break;
        case SharpenType.Sharpen821:
            resultBitmap = resultBitmap.SharpenEdgeDetect(
                           Matrix.Sharpen821, 8.0 / 1.0, bias, grayscale, mono); break;
    }

    return resultBitmap;
}
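
A hypothetical invocation of the public method, combining a 3×3 median noise reduction pass with the Sharpen9To1 kernel, grayscale conversion and black and white output:

Bitmap resultBitmap = sourceBitmap.SharpenEdgeDetect(
                      SharpenType.Sharpen9To1, bias: 0,
                      grayscale: true, mono: true,
                      medianFilterSize: 3);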

The Matrix class provides the definition of static pre-defined convolution kernel values. The definition as follows:

public static class Matrix   
{
    public static double[,] Sharpen7To1 
    {
        get   
        { 
            return new double[,]   
            {  { 1,  1,  1, },  
               { 1, -7,  1, },   
               { 1,  1,  1, }, }; 
        }  
    }  

    public static double[,] Sharpen9To1
    { get { return new double[,]
      { { -1, -1, -1, },
        { -1,  9, -1, },
        { -1, -1, -1, }, }; } }

    public static double[,] Sharpen12To1
    { get { return new double[,]
      { { -1, -1, -1, },
        { -1, 12, -1, },
        { -1, -1, -1, }, }; } }

    public static double[,] Sharpen24To1
    { get { return new double[,]
      { { -1, -1, -1, -1, -1, },
        { -1, -1, -1, -1, -1, },
        { -1, -1, 24, -1, -1, },
        { -1, -1, -1, -1, -1, },
        { -1, -1, -1, -1, -1, }, }; } }

    public static double[,] Sharpen48To1
    { get { return new double[,]
      { { -1, -1, -1, -1, -1, -1, -1, },
        { -1, -1, -1, -1, -1, -1, -1, },
        { -1, -1, -1, -1, -1, -1, -1, },
        { -1, -1, -1, 48, -1, -1, -1, },
        { -1, -1, -1, -1, -1, -1, -1, },
        { -1, -1, -1, -1, -1, -1, -1, },
        { -1, -1, -1, -1, -1, -1, -1, }, }; } }

    public static double[,] Sharpen5To4
    { get { return new double[,]
      { {  0, -1,  0, },
        { -1,  5, -1, },
        {  0, -1,  0, }, }; } }

    public static double[,] Sharpen10To8
    { get { return new double[,]
      { {  0, -2,  0, },
        { -2, 10, -2, },
        {  0, -2,  0, }, }; } }

    public static double[,] Sharpen11To8
    { get { return new double[,]
      { {  0, -2,  0, },
        { -2, 11, -2, },
        {  0, -2,  0, }, }; } }

    public static double[,] Sharpen821
    { get { return new double[,]
      { // A symmetric 5x5 kernel built from the values 8, 2 and -1.
        { -1, -1, -1, -1, -1, },
        { -1,  2,  2,  2, -1, },
        { -1,  2,  8,  2, -1, },
        { -1,  2,  2,  2, -1, },
        { -1, -1, -1, -1, -1, }, }; } }
}

The private implementation of the SharpenEdgeDetect extension method performs edge detection through image convolution, followed by subtraction. The definition as follows:

private static Bitmap SharpenEdgeDetect(this Bitmap sourceBitmap, 
                                          double[,] filterMatrix, 
                                               double factor = 1, 
                                                    int bias = 0, 
                                          bool grayscale = false, 
                                               bool mono = false) 
{ 
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, 
                             sourceBitmap.Width, sourceBitmap.Height), 
                                               ImageLockMode.ReadOnly, 
                                         PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    if (grayscale == true)
    {
        // Scale each colour component by its grayscale weighting factor.
        for (int pixel = 0; pixel < pixelBuffer.Length; pixel += 4)
        {
            pixelBuffer[pixel] = (byte)(pixelBuffer[pixel] * 0.11f);
            pixelBuffer[pixel + 1] = (byte)(pixelBuffer[pixel + 1] * 0.59f);
            pixelBuffer[pixel + 2] = (byte)(pixelBuffer[pixel + 2] * 0.3f);
        }
    }

    double blue = 0.0;
    double green = 0.0;
    double red = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY <
        sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX <
            sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0;
            green = 0;
            red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int filterY = -filterOffset;
                filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset;
                    filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) +
                                 (filterY * sourceData.Stride);

                    blue += (double)(pixelBuffer[calcOffset]) *
                            filterMatrix[filterY + filterOffset,
                                         filterX + filterOffset];

                    green += (double)(pixelBuffer[calcOffset + 1]) *
                             filterMatrix[filterY + filterOffset,
                                          filterX + filterOffset];

                    red += (double)(pixelBuffer[calcOffset + 2]) *
                           filterMatrix[filterY + filterOffset,
                                        filterX + filterOffset];
                }
            }

            if (mono == true)
            {
                // Subtract the scaled convolution value from the original
                // pixel value, then threshold each channel to black or white.
                blue = pixelBuffer[byteOffset] - factor * blue;
                green = pixelBuffer[byteOffset + 1] - factor * green;
                red = pixelBuffer[byteOffset + 2] - factor * red;

                blue = (blue > bias ? 255 : 0);
                green = (green > bias ? 255 : 0);
                red = (red > bias ? 255 : 0);
            }
            else
            {
                // Subtract the scaled convolution value from the original
                // pixel value, adding the threshold as a bias, then clamp.
                blue = pixelBuffer[byteOffset] - factor * blue + bias;
                green = pixelBuffer[byteOffset + 1] - factor * green + bias;
                red = pixelBuffer[byteOffset + 2] - factor * red + bias;

                blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
                green = (green > 255 ? 255 : (green < 0 ? 0 : green));
                red = (red > 255 ? 255 : (red < 0 ? 0 : red));
            }

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }

    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width,
                                     sourceBitmap.Height);

    BitmapData resultData =
               resultBitmap.LockBits(new Rectangle(0, 0,
               resultBitmap.Width, resultBitmap.Height),
               ImageLockMode.WriteOnly,
               PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

Sample Images

The sample image used in this article is in the public domain because its copyright has expired. This applies to Australia, the European Union and those countries with a copyright term of life of the author plus 70 years. The original image can be downloaded from Wikipedia.

The Original Image

NovaraExpZoologischeTheilLepidopteraAtlasTaf53

Sharpen5To4, Median 0, Threshold 0

Sharpen5To4 Median 0 Threshold 0

Sharpen5To4, Median 0, Threshold 0, Mono

Sharpen5To4 Median 0 Threshold 0 Mono

Sharpen7To1, Median 0, Threshold 0

Sharpen7To1 Median 0 Threshold 0

Sharpen7To1, Median 0, Threshold 0, Mono

Sharpen7To1 Median 0 Threshold 0 Mono

Sharpen9To1, Median 0, Threshold 0

Sharpen9To1 Median 0 Threshold 0

Sharpen9To1, Median 0, Threshold 0, Mono

Sharpen9To1 Median 0 Threshold 0 Mono

Sharpen10To8, Median 0, Threshold 0

Sharpen10To8 Median 0 Threshold 0

Sharpen10To8, Median 0, Threshold 0, Mono

Sharpen10To8 Median 0 Threshold 0 Mono

Sharpen11To8, Median 0, Threshold 0

Sharpen11To8 Median 0 Threshold 0

Sharpen11To8, Median 0, Threshold 0, Grayscale, Mono

Sharpen11To8 Median 0 Threshold 0 Grayscale Mono

Sharpen12To1, Median 0, Threshold 0

Sharpen12To1 Median 0 Threshold 0

Sharpen12To1, Median 0, Threshold 0, Mono

Sharpen12To1 Median 0 Threshold 0 Mono

Sharpen24To1, Median 0, Threshold 0

Sharpen24To1 Median 0 Threshold 0

Sharpen24To1, Median 0, Threshold 0, Grayscale, Mono

Sharpen24To1 Median 0 Threshold 0 Grayscale Mono

Sharpen24To1, Median 0, Threshold 0, Mono

Sharpen24To1 Median 0 Threshold 0 Mono

Sharpen24To1, Median 0, Threshold 21, Grayscale, Mono

Sharpen24To1 Median 0 Threshold 21 Grayscale Mono

Sharpen48To1, Median 0, Threshold 0

Sharpen48To1 Median 0 Threshold 0

Sharpen48To1, Median 0, Threshold 0, Grayscale, Mono

Sharpen48To1 Median 0 Threshold 0 Grayscale Mono

Sharpen48To1, Median 0, Threshold 0, Mono

Sharpen48To1 Median 0 Threshold 0 Mono

Sharpen48To1, Median 0, Threshold 226, Mono

Sharpen48To1 Median 0 Threshold 226 Mono

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; URL links to these articles can be found here:

C# How to: Gradient Based Edge Detection

Article purpose

This article provides a technical discussion exploring the topic of Gradient Based Edge Detection and related aspects. Several filtering options are illustrated and explained, ranging from pure black and white edge detection to filters expressing colour gradients.

Gradient Based Edge Detection

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

All of the concepts implemented in this article can be replicated and tested by making use of the sample application included in the associated sample source code. The sample application user interface provides several configurable options to be implemented when performing Gradient Based Edge Detection. The available configuration categories are: Filter Type, Derivative Level, Threshold and Colour Factor Filters.

Gradient Based Edge Detection

Configurable Filter Types exposed to the end user consist of:

  • None – When selecting this option no filtering will be applied. Source/input images are displayed reflecting no change. 
  • Edge Detect Mono – This option represents basic Gradient Based Edge Detection. Resulting images are expressed only in terms of black and white pixels.
  • Edge Detect Gradient – Gradient Based Edge Detection revolves around calculating pixel colour gradients. This option signifies a scenario where the pixels forming the resulting image express the relevant pixel’s colour gradient, when a pixel has been determined to reflect part of an edge. If a pixel is not considered to be part of an edge, the relevant pixel’s colour value will be set to black.
  • Sharpen – In terms of image processing, sharpening can be achieved by emphasising detected edges in source/input images. Emphasising edges involves combining a source/input image and an image which expresses only detected edges.
  • Sharpen Gradient – This option combines calculated colour gradients and the original colour value of a pixel on a per pixel basis when a pixel has been determined to be part of an edge. If a pixel does not form part of an edge, the pixel’s colour value is set to that of the original pixel colour.

Gradient Based Edge Detection

The user interface defines two derivative level options: First Derivative and Second Derivative. These options relate to the edge detection method being implemented, either First Order or Second Order derivative operators. Both the filter types listed above and the derivative levels map onto simple enumerations, sketched below.
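
A minimal sketch of those enumerations follows. The EdgeFilterType member names match those referenced in the implementation later in this article; the DerivativeLevel member names and numeric values are assumptions, based on the implementation dividing calculated gradients by the integer value of the derivative level.

public enum EdgeFilterType
{
    None,
    EdgeDetectMono,
    EdgeDetectGradient,
    Sharpen,
    SharpenGradient
}

// Assumed values: gradients are divided by (int)derivativeLevel,
// halving gradient intensity for second order derivatives.
public enum DerivativeLevel
{
    FirstDerivative = 1,
    SecondDerivative = 2
}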

Comparing colour gradients against a global threshold on a per pixel basis forms the basis of Gradient Based Edge Detection. The TrackBar labelled Threshold enables the user to adjust the global threshold value implemented in pixel colour gradient comparisons.

Gradient Based Edge Detection

The Colour Factor Filters determine the extent to which colours are expressed in resulting images. The three colour factors, Red, Green and Blue, are intended to be used in combination with the filtering options Filter Type and Threshold. Colour Factor Filters have the following effects when implemented in combination with each option (a worked example follows the list):

  • Filter Type – Edge Detect Mono: Not applicable. Edge Detect Mono filtering discards all pixel colour data.
  • Filter Type – Edge Detect Gradient: If a pixel is detected as part of an edge, the pixel’s colour values will be set to the gradient calculated when evaluating edge criteria. Gradient values are multiplied by Colour Factor values before being assigned to a resulting pixel.
  • Filter Type – Sharpen: Pixels which form part of an edge will have the corresponding pixels in the resulting image set to the source pixel’s colour values multiplied by the relevant Colour Factor. Pixels not detected as part of an edge will not be multiplied by any Colour Factor values.
  • Filter Type – Sharpen Gradient: Edge detected pixels in a source/input image will have the corresponding pixels in the resulting image assigned the sum of the original colour value and the calculated colour gradient multiplied by the relevant Colour Factor value. In the scenario of a pixel not being detected as forming part of an edge, Colour Factors will not be implemented.
  • Threshold: The global threshold value specified by the user determines the level of sensitivity to which edges will be detected. The degree to which edges are detected through the threshold value impacts upon whether a pixel will be multiplied with the relevant Colour Factor value.
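
To illustrate: with Filter Type set to Edge Detect Gradient and a Blue Colour Factor of 0.5, a calculated blue colour gradient of 120 results in the output pixel’s blue channel being set to 60, whilst a Blue factor of 0 suppresses the blue channel entirely.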

Gradient Based Edge Detection

The user has the option of saving filtered images to the local file system by clicking the Save Image button. The image below is a screenshot of the Gradient Based Edge Detection sample application in action:

Gradient Based Edge Detection Sample Application

Gradient Based Edge Detection Theory

Gradient Based Edge Detection qualifies to be classified as a neighbouring pixel algorithm. When calculating a pixel’s value in order to determine if a pixel should be expressed as part of an edge or not, the result will be determined by:

  • The values expressed by neighbouring pixels. The more intense or sudden the differences that occur between neighbouring pixels, the higher the accuracy of the resulting edge detection.
  • A user specified global threshold value used in comparison operations acts as a cut-off value, ultimately being the final factor to determine if a pixel should be expressed as part of an edge.

Gradient Based Edge Detection

In the sample source code we implement the following steps when calculating whether a pixel should be considered as part of an edge (a condensed sketch of the first gradient test follows the list):

  1. Iterate each pixel that forms part of the source/input image.
  2. Calculate and combine horizontal and vertical gradients for each of the colour components Red, Green and Blue. If the sum total of the calculated colour gradients exceeds the global threshold value, consider the pixel being iterated as part of an edge. If the sum total of colour gradients equates to less than the global threshold, implement step 3.
  3. Calculate a pixel’s horizontal gradient per colour component. If the gradient sum total exceeds the global threshold, consider the pixel being iterated part of an edge. If the sum total of colour gradient values does not exceed the threshold value, continue to step 4.
  4. Calculate a pixel’s vertical gradient per colour component. If the gradient sum total exceeds the global threshold, consider the pixel being iterated part of an edge. If the sum total of colour gradients does not exceed the threshold value, continue to step 5.
  5. Calculate and combine diagonal gradients for each of the colour components Red, Green and Blue. If the sum total of the calculated colour gradients exceeds the global threshold value, consider the pixel being iterated as part of an edge; otherwise the pixel should not be considered as part of an edge.
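
As a condensed illustration of step 2 above, the following sketch expresses the combined horizontal and vertical gradient test for a single pixel. It is a minimal extract, assuming the 32bpp BGRA buffer layout and variable naming used by the full implementation shown later:

// Minimal sketch of step 2: combined horizontal and vertical gradients,
// summed across the Blue, Green and Red components of the pixel at byteOffset.
private static bool ExceedsCombinedThreshold(byte[] pixelBuffer, int byteOffset,
                                             int stride, int derivative, byte threshold)
{
    int gradientTotal = 0;

    // Blue, Green and Red occupy consecutive bytes in a 32bpp BGRA buffer.
    for (int component = 0; component < 3; component++)
    {
        int offset = byteOffset + component;

        // Horizontal gradient: the pixels immediately left and right.
        gradientTotal += Math.Abs(pixelBuffer[offset - 4] - pixelBuffer[offset + 4]) / derivative;

        // Vertical gradient: the pixels immediately above and below.
        gradientTotal += Math.Abs(pixelBuffer[offset - stride] - pixelBuffer[offset + stride]) / derivative;
    }

    return gradientTotal > threshold;
}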

Gradient Based Edge Detection

If we determined that a pixel forms part of an edge, the value expressed by the corresponding pixel in the resulting image will be determined by the Image Filter configuration value (a worked example follows the list):

  • Edge Detect Mono – All pixels will be set to white.
  • Edge Detect Gradient – Each colour component will be assigned the related colour gradient calculated when performing edge detection. Each colour gradient will be multiplied with the related colour factor.
  • Sharpen – The value of a resulting pixel will be calculated as the product of the corresponding source pixel and the related colour factor value.
  • Sharpen Gradient – Results are calculated in terms of the sum total of the corresponding input pixel and the product of the related colour gradient and colour factor value.
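
For example, implementing Sharpen Gradient with a source blue channel value of 100, a calculated blue gradient of 40 and a Blue Colour Factor of 0.5, the resulting blue channel equates to 100 + 40 × 0.5 = 120.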

Gradient Based Edge Detection

Implementing Gradient Based Edge Detection

The sample source code associated with this article provides the definition of the GradientBasedEdgeDetectionFilter extension method, targeting the Bitmap class. This method iterates every pixel contained in the source/input image. Whilst iterating pixels, the method creates a 3×3 window/neighbourhood covering the pixels surrounding the pixel currently being iterated. Colour gradients are calculated from every pixel’s neighbouring pixels.

The following code snippet provides the definition of the GradientBasedEdgeDetectionFilter extension method:

public static Bitmap GradientBasedEdgeDetectionFilter( 
                                this Bitmap sourceBitmap, 
                                EdgeFilterType filterType, 
                                DerivativeLevel derivativeLevel,  
                                float redFactor = 1.0f, 
                                float greenFactor = 1.0f, 
                                float blueFactor = 1.0f, 
                                byte threshold = 0) 
{ 
    BitmapData sourceData = 
               sourceBitmap.LockBits(new Rectangle (0, 0, 
               sourceBitmap.Width, sourceBitmap.Height), 
               ImageLockMode.ReadOnly, 
               PixelFormat.Format32bppArgb); 

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);

int derivative = (int)derivativeLevel;
int byteOffset = 0;
int blueGradient = 0, greenGradient = 0, redGradient = 0;
double blue = 0, green = 0, red = 0;

bool exceedsThreshold = false;

for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++)
{
    for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++)
    {
        byteOffset = offsetY * sourceData.Stride + offsetX * 4;

        // Step 2: combined horizontal and vertical gradients per colour component.
        blueGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]) / derivative;
        blueGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]) / derivative;
        byteOffset++;

        greenGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]) / derivative;
        greenGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]) / derivative;
        byteOffset++;

        redGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]) / derivative;
        redGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]) / derivative;

        if (blueGradient + greenGradient + redGradient > threshold)
        {
            exceedsThreshold = true;
        }
        else
        {
            // Step 3: horizontal gradients only.
            byteOffset -= 2;

            blueGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]);
            byteOffset++;

            greenGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]);
            byteOffset++;

            redGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]);

            if (blueGradient + greenGradient + redGradient > threshold)
            {
                exceedsThreshold = true;
            }
            else
            {
                // Step 4: vertical gradients only.
                byteOffset -= 2;

                blueGradient = Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]);
                byteOffset++;

                greenGradient = Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]);
                byteOffset++;

                redGradient = Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]);

                if (blueGradient + greenGradient + redGradient > threshold)
                {
                    exceedsThreshold = true;
                }
                else
                {
                    // Step 5: combined diagonal gradients.
                    byteOffset -= 2;

                    blueGradient = Math.Abs(pixelBuffer[byteOffset - 4 - sourceData.Stride] - pixelBuffer[byteOffset + 4 + sourceData.Stride]) / derivative;
                    blueGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride + 4] - pixelBuffer[byteOffset + sourceData.Stride - 4]) / derivative;
                    byteOffset++;

                    greenGradient = Math.Abs(pixelBuffer[byteOffset - 4 - sourceData.Stride] - pixelBuffer[byteOffset + 4 + sourceData.Stride]) / derivative;
                    greenGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride + 4] - pixelBuffer[byteOffset + sourceData.Stride - 4]) / derivative;
                    byteOffset++;

                    redGradient = Math.Abs(pixelBuffer[byteOffset - 4 - sourceData.Stride] - pixelBuffer[byteOffset + 4 + sourceData.Stride]) / derivative;
                    redGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride + 4] - pixelBuffer[byteOffset + sourceData.Stride - 4]) / derivative;

                    exceedsThreshold = blueGradient + greenGradient + redGradient > threshold;
                }
            }
        }

        // Reposition byteOffset at the blue component of the current pixel.
        byteOffset -= 2;

        if (exceedsThreshold)
        {
            if (filterType == EdgeFilterType.EdgeDetectMono)
            {
                blue = green = red = 255;
            }
            else if (filterType == EdgeFilterType.EdgeDetectGradient)
            {
                blue = blueGradient * blueFactor;
                green = greenGradient * greenFactor;
                red = redGradient * redFactor;
            }
            else if (filterType == EdgeFilterType.Sharpen)
            {
                blue = pixelBuffer[byteOffset] * blueFactor;
                green = pixelBuffer[byteOffset + 1] * greenFactor;
                red = pixelBuffer[byteOffset + 2] * redFactor;
            }
            else if (filterType == EdgeFilterType.SharpenGradient)
            {
                blue = pixelBuffer[byteOffset] + blueGradient * blueFactor;
                green = pixelBuffer[byteOffset + 1] + greenGradient * greenFactor;
                red = pixelBuffer[byteOffset + 2] + redGradient * redFactor;
            }
        }
        else
        {
            if (filterType == EdgeFilterType.EdgeDetectMono ||
                filterType == EdgeFilterType.EdgeDetectGradient)
            {
                blue = green = red = 0;
            }
            else if (filterType == EdgeFilterType.Sharpen ||
                     filterType == EdgeFilterType.SharpenGradient)
            {
                blue = pixelBuffer[byteOffset];
                green = pixelBuffer[byteOffset + 1];
                red = pixelBuffer[byteOffset + 2];
            }
        }

        blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
        green = (green > 255 ? 255 : (green < 0 ? 0 : green));
        red = (red > 255 ? 255 : (red < 0 ? 0 : red));

        resultBuffer[byteOffset] = (byte)blue;
        resultBuffer[byteOffset + 1] = (byte)green;
        resultBuffer[byteOffset + 2] = (byte)red;
        resultBuffer[byteOffset + 3] = 255;
    }
}

Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);

BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0,
                        resultBitmap.Width, resultBitmap.Height),
                        ImageLockMode.WriteOnly,
                        PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);

return resultBitmap; }
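
Invoking the filter follows the standard extension method pattern. In the sketch below the file names serve only as placeholders, and the DerivativeLevel member names follow the enumeration sketch provided earlier in this article:

Bitmap sourceBitmap = new Bitmap("input.png");

// Black and white edge detection using first order derivatives and a threshold of 50.
Bitmap edgeBitmap = sourceBitmap.GradientBasedEdgeDetectionFilter(
                        EdgeFilterType.EdgeDetectMono,
                        DerivativeLevel.FirstDerivative,
                        threshold: 50);

// Sharpen whilst suppressing the blue channel's edge emphasis.
Bitmap sharpenBitmap = sourceBitmap.GradientBasedEdgeDetectionFilter(
                           EdgeFilterType.Sharpen,
                           DerivativeLevel.SecondDerivative,
                           redFactor: 1.0f,
                           greenFactor: 1.0f,
                           blueFactor: 0.0f);

sharpenBitmap.Save("output.png", System.Drawing.Imaging.ImageFormat.Png);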

Gradient Based Edge Detection

Sample Images

The banner images depicting a butterfly featured throughout this article were generated using the sample application. The original image has been licenced under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic license. The original image is attributed to Kenneth Dwain Harrelson and can be downloaded from Wikipedia.

The sample image featuring a Scarlet Macaw has been licensed under the Creative Commons Attribution-Share Alike 3.0 Germany license. The original image can be downloaded from Wikipedia.

The Original Image

Ara_macao_-flying_away-8a

Edge Detect, Second Derivative, Threshold 50 

Edge Detect, Second Derivative, Threshold 50

Edge Detect Gradient, First Derivative, Blue

Edge Detect Gradient, First Derivative, Blue

Edge Detect Gradient, First Derivative, Green

Edge Detect Gradient, First Derivative, Green

Edge Detect Gradient, First Derivative, Green and Blue

Edge Detect Gradient, First Derivative, Green and Blue

Edge Detect Gradient, First Derivative, Red

Edge Detect Gradient, First Derivative, Red

Edge Detect Gradient, First Derivative, Red and Blue

Edge Detect Gradient, First Derivative, Red and Blue

Edge Detect Gradient, First Derivative, Red and Green

Edge Detect Gradient, First Derivative, Red and Green

Edge Detect Gradient, First Derivative, Red, Green and Blue

Edge Detect Gradient, First Derivative, Red, Green and Blue

Edge Detect Sharpen, Second Derivative, Threshold 40, Black

Edge Detect Sharpen, Second Derivative, Threshold 40, Black

Edge Detect Sharpen, Second Derivative, Threshold 40, Blue

Edge Detect Sharpen, Second Derivative, Threshold 40, Blue

Edge Detect Sharpen, Second Derivative, Threshold 40, Green

Edge Detect Sharpen, Second Derivative, Threshold 40, Green

Edge Detect Sharpen, Second Derivative, Threshold 40, Green and Blue

Edge Detect Sharpen, Second Derivative, Threshold 40, Green and Blue

Edge Detect Sharpen, Second Derivative, Threshold 40, Red

Edge Detect Sharpen, Second Derivative, Threshold 40, Red

Edge Detect Sharpen, Second Derivative, Threshold 40, Red and Blue

Edge Detect Sharpen, Second Derivative, Threshold 40, Red and Blue

Edge Detect Sharpen, Second Derivative, Threshold 40, Red and Green

Edge Detect Sharpen, Second Derivative, Threshold 40, Red and Green

Edge Detect Sharpen, Second Derivative, Threshold 40, White

Edge Detect Sharpen, Second Derivative, Threshold 40, White

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Blue

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Blue

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Green

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Green

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Green and Blue

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Green and Blue

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Red

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Red

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Red and Blue

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Red and Blue

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Red and Green

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, Red and Green

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, White

Edge Detect Sharpen Gradient, First Derivative, Threshold 0, White

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; URL links to these articles can be found here:

