Archive for the 'Image Arithmetic' Category



C# How to: Sharpen Edge Detection

Article Purpose

It is the objective of this article to explore and provide a discussion based on the concept of edge detection implemented through means of image sharpening. Illustrated are various methods of image sharpening and, in addition, a median filter implemented in noise reduction.

Sample Source Code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the Sample Application

The sample source code accompanying this article includes a Windows Forms based sample application. The concepts illustrated throughout this article can easily be tested and replicated by making use of the sample application.

The Sample Application exposes seven main areas of functionality:

  • Loading input/source images.
  • Saving image result.
  • Sharpen Filters
  • Median Filter Size
  • Threshold value
  • Grayscale Source
  • Mono Output

When using the sample application users are able to select input/source images from the local file system by clicking the Load Image button. If desired, users may save result images to the local file system by clicking the Save Image button.

The sample source code and sample application implement various methods of image sharpening. Each method of sharpening results in varying degrees of edge detection. Some methods are more effective than others. The sharpening method being implemented serves as a primary factor influencing results. The effectiveness of the selected method is also reliant on the input/source image provided. The sample application implements the following sharpening methods:

  • Sharpen5To4
  • Sharpen7To1
  • Sharpen9To1
  • Sharpen12To1
  • Sharpen24To1
  • Sharpen48To1
  • Sharpen10To8
  • Sharpen11To8
  • Sharpen821

Image noise is regarded as a common problem relating to edge detection. Often noise will be incorrectly detected as forming part of an edge within an image. The sample source code implements a median filter in order to counteract image noise. The size/intensity of the applied median filter can be specified via the control labelled Median Filter Size.

The Threshold value configured through the sample application’s user interface has a two-fold implementation. In a scenario where output images are created in a black and white format the Threshold value will be implemented to determine whether a pixel should be either black or white. When output images are created in full colour the Threshold value will be added to each pixel, acting as a bias value.

In some scenarios edge detection can be achieved more effectively when specifying grayscale source/input images. The purpose of the option labelled Grayscale Source is to convert source/input images to a grayscale format before implementing image sharpening.

The option labelled Mono Output, when selected, has the effect of producing result images in a black and white format.

The image below is a screenshot of the Sharpen Edge Detection sample application in action:

Sharpen Edge Detection Sample Application

Edge Detection through Image Sharpening

The sample source code performs edge detection on source/input images by means of image sharpening. The steps performed can be broken down to the following items:

  1. If specified, apply a median filter to the input/source image. A median filter results in smoothing an image. Image noise can be reduced when implementing a median filter. Image smoothing/blurring often results in reducing image details/definition. The median filter is well suited to smoothing away image noise whilst implementing edge preservation. When performing edge detection the median filter functions as an ideal method of reducing image noise whilst not negatively impacting edge detection tasks.
  2. If specified, convert the source/input image to grayscale by iterating each pixel that forms part of the image. Each pixel’s colour components are calculated by multiplying by the factor values: Red x 0.3, Green x 0.59, Blue x 0.11. A brief sketch of this step follows the list below.
  3. Using the specified sharpening matrix/kernel, iterate each pixel forming part of the source/input image, performing convolution on each pixel colour channel.
  4. If the output has been specified as Mono, the middle pixel value calculated in convolution should be multiplied by the specified factor value. Each colour component should then be compared to the specified threshold value and be assigned as either black or white.
  5. If the output has not been specified as Mono, the middle pixel value calculated in convolution should be multiplied by the factor value, to which the threshold/bias value should be added. The value of each colour component will be set to the result of subtracting the calculated convolution/filter/bias value from the pixel’s original colour component value. In other words, perform convolution applying a factor and bias, the result of which should then be subtracted from the original source/input pixel value.
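
The grayscale weighting described in step 2 can be sketched as follows. Note that ApplyGrayscaleWeights is a hypothetical helper shown purely for illustration; the sample source code performs the same weighting inline within the private SharpenEdgeDetect method, assuming the 32bpp BGRA byte buffer layout used throughout this article.

private static void ApplyGrayscaleWeights(byte[] pixelBuffer)
{
    // Each group of four bytes represents one pixel: Blue, Green, Red, Alpha.
    for (int k = 0; k + 4 <= pixelBuffer.Length; k += 4)
    {
        pixelBuffer[k] = (byte)(pixelBuffer[k] * 0.11f);         // Blue
        pixelBuffer[k + 1] = (byte)(pixelBuffer[k + 1] * 0.59f); // Green
        pixelBuffer[k + 2] = (byte)(pixelBuffer[k + 2] * 0.3f);  // Red
    }
}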

Implementing Sharpen Edge Detection

The sample source code achieves edge detection through image sharpening by implementing three methods: MedianFilter and two overloaded methods titled SharpenEdgeDetect.

The MedianFilter method is defined as an extension method targeting the Bitmap class. The definition as follows:

 public static Bitmap MedianFilter(this Bitmap sourceBitmap, 
                                   int matrixSize) 
{ 
     BitmapData sourceData = 
                sourceBitmap.LockBits(new Rectangle(0, 0, 
                sourceBitmap.Width, sourceBitmap.Height), 
                ImageLockMode.ReadOnly, 
                PixelFormat.Format32bppArgb); 

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);
int filterOffset = (matrixSize - 1) / 2; int calcOffset = 0;
int byteOffset = 0;
List<int> neighbourPixels = new List<int>(); byte[] middlePixel;
for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
{
    for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
    {
        byteOffset = offsetY * sourceData.Stride + offsetX * 4;
        neighbourPixels.Clear();

        for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
        {
            for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
            {
                calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);
                neighbourPixels.Add(BitConverter.ToInt32(pixelBuffer, calcOffset));
            }
        }

        neighbourPixels.Sort();

        // Select the middle (median) value from the sorted list of neighbouring pixels.
        middlePixel = BitConverter.GetBytes(neighbourPixels[neighbourPixels.Count / 2]);

        resultBuffer[byteOffset] = middlePixel[0];
        resultBuffer[byteOffset + 1] = middlePixel[1];
        resultBuffer[byteOffset + 2] = middlePixel[2];
        resultBuffer[byteOffset + 3] = middlePixel[3];
    }
}
Bitmap resultBitmap = new Bitmap (sourceBitmap.Width, sourceBitmap.Height);
BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);
return resultBitmap; }

The public implementation of the SharpenEdgeDetect extension method has the purpose of translating user specified options into the relevant method calls to the private implementation of the SharpenEdgeDetect method. The public implementation of the SharpenEdgeDetect method as follows:

public static Bitmap SharpenEdgeDetect(this Bitmap sourceBitmap, 
                                            SharpenType sharpen, 
                                                   int bias = 0, 
                                         bool grayscale = false, 
                                              bool mono = false, 
                                       int medianFilterSize = 0) 
{ 
    Bitmap resultBitmap = null; 

    if (medianFilterSize == 0)
    {
        resultBitmap = sourceBitmap;
    }
    else
    {
        resultBitmap = sourceBitmap.MedianFilter(medianFilterSize);
    }

    switch (sharpen)
    {
        case SharpenType.Sharpen7To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen7To1,
                                        1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen9To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen9To1,
                                        1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen12To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen12To1,
                                        1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen24To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen24To1,
                                        1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen48To1:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen48To1,
                                        1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen5To4:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen5To4,
                                        1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen10To8:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen10To8,
                                        1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen11To8:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen11To8,
                                        3.0 / 1.0, bias, grayscale, mono);
            break;
        case SharpenType.Sharpen821:
            resultBitmap = resultBitmap.SharpenEdgeDetect(Matrix.Sharpen821,
                                        8.0 / 1.0, bias, grayscale, mono);
            break;
    }

    return resultBitmap;
}
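
By way of illustration, the following is a hypothetical usage sketch of the public SharpenEdgeDetect extension method. It assumes the sample project's namespace has been imported, that System.Drawing and System.Drawing.Imaging are referenced, and the file names are placeholders only.

Bitmap sourceBitmap = new Bitmap("input.png");

// Apply a 3x3 median filter, convert to grayscale and produce a mono edge detection result.
Bitmap resultBitmap = sourceBitmap.SharpenEdgeDetect(SharpenType.Sharpen9To1,
                                   bias: 0, grayscale: true, mono: true,
                                   medianFilterSize: 3);

resultBitmap.Save("edges.png", ImageFormat.Png);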

The Matrix class provides the definition of the static pre-defined matrix/kernel values. The definition as follows:

public static class Matrix   
{
    public static double[,] Sharpen7To1 
    {
        get   
        { 
            return new double[,]   
            {  { 1,  1,  1, },  
               { 1, -7,  1, },   
               { 1,  1,  1, }, }; 
        }  
    }  

public static double[,] Sharpen9To1 { get { return new double[,] { { -1, -1, -1, }, { -1, 9, -1, }, { -1, -1, -1, }, }; } }
public static double[,] Sharpen12To1 { get { return new double[,] { { -1, -1, -1, }, { -1, 12, -1, }, { -1, -1, -1, }, }; } }
    public static double[,] Sharpen24To1
    {
        get
        {
            return new double[,]
            { { -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, },
              { -1, -1, 24, -1, -1, },
              { -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, }, };
        }
    }
    public static double[,] Sharpen48To1
    {
        get
        {
            return new double[,]
            { { -1, -1, -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, -1, -1, },
              { -1, -1, -1, 48, -1, -1, -1, },
              { -1, -1, -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, -1, -1, },
              { -1, -1, -1, -1, -1, -1, -1, }, };
        }
    }
public static double[,] Sharpen5To4 { get { return new double[,] { { 0, -1, 0, }, { -1, 5, -1, }, { 0, -1, 0, }, }; } }
public static double[,] Sharpen10To8 { get { return new double[,] { { 0, -2, 0, }, { -2, 10, -2, }, { 0, -2, 0, }, }; } }
public static double[,] Sharpen11To8 { get { return new double[,] { { 0, -2, 0, }, { -2, 11, -2, }, { 0, -2, 0, }, }; } }
    public static double[,] Sharpen821
    {
        get
        {
            return new double[,]
            { { -1, -1, -1, -1, -1, },
              { -1,  2,  2,  2, -1, },
              { -1,  2,  8,  2, -1, },
              { -1,  2,  2,  2, -1, },
              { -1, -1, -1, -1, -1, }, };
        }
    }
}

The private implementation of the SharpenEdgeDetect method performs edge detection through convolution and then performs image subtraction. The definition as follows:

private static Bitmap SharpenEdgeDetect(this Bitmap sourceBitmap, 
                                          double[,] filterMatrix, 
                                               double factor = 1, 
                                                    int bias = 0, 
                                          bool grayscale = false, 
                                               bool mono = false) 
{ 
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, 
                             sourceBitmap.Width, sourceBitmap.Height), 
                                               ImageLockMode.ReadOnly, 
                                         PixelFormat.Format32bppArgb); 

    byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];

    Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
    sourceBitmap.UnlockBits(sourceData);

    if (grayscale == true)
    {
        for (int pixel = 0; pixel < pixelBuffer.Length; pixel += 4)
        {
            pixelBuffer[pixel] = (byte)(pixelBuffer[pixel] * 0.11f);
            pixelBuffer[pixel + 1] = (byte)(pixelBuffer[pixel + 1] * 0.59f);
            pixelBuffer[pixel + 2] = (byte)(pixelBuffer[pixel + 2] * 0.3f);
        }
    }

    double blue = 0.0;
    double green = 0.0;
    double red = 0.0;

    int filterWidth = filterMatrix.GetLength(1);
    int filterHeight = filterMatrix.GetLength(0);

    int filterOffset = (filterWidth - 1) / 2;
    int calcOffset = 0;
    int byteOffset = 0;

    for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
    {
        for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
        {
            blue = 0;
            green = 0;
            red = 0;

            byteOffset = offsetY * sourceData.Stride + offsetX * 4;

            for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
            {
                for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
                {
                    calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);

                    blue += (double)(pixelBuffer[calcOffset]) *
                            filterMatrix[filterY + filterOffset, filterX + filterOffset];

                    green += (double)(pixelBuffer[calcOffset + 1]) *
                             filterMatrix[filterY + filterOffset, filterX + filterOffset];

                    red += (double)(pixelBuffer[calcOffset + 2]) *
                           filterMatrix[filterY + filterOffset, filterX + filterOffset];
                }
            }

            if (mono == true)
            {
                // Subtract the scaled convolution result from the original pixel value,
                // then threshold each colour component to either black or white.
                blue = pixelBuffer[byteOffset] - factor * blue;
                green = pixelBuffer[byteOffset + 1] - factor * green;
                red = pixelBuffer[byteOffset + 2] - factor * red;

                blue = (blue > bias ? 255 : 0);
                green = (green > bias ? 255 : 0);
                red = (red > bias ? 255 : 0);
            }
            else
            {
                // Subtract the scaled convolution result from the original pixel value,
                // apply the threshold as a bias and clamp to the valid byte range.
                blue = pixelBuffer[byteOffset] - factor * blue + bias;
                green = pixelBuffer[byteOffset + 1] - factor * green + bias;
                red = pixelBuffer[byteOffset + 2] - factor * red + bias;

                blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
                green = (green > 255 ? 255 : (green < 0 ? 0 : green));
                red = (red > 255 ? 255 : (red < 0 ? 0 : red));
            }

            resultBuffer[byteOffset] = (byte)(blue);
            resultBuffer[byteOffset + 1] = (byte)(green);
            resultBuffer[byteOffset + 2] = (byte)(red);
            resultBuffer[byteOffset + 3] = 255;
        }
    }
Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height); BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length); resultBitmap.UnlockBits(resultData);
return resultBitmap; }

Sample Images

The sample image used in this article is in the public domain because its copyright has expired. This applies to Australia, the European Union and those countries with a copyright term of life of the author plus 70 years. The original image can be downloaded from Wikipedia.

The Original Image

NovaraExpZoologischeTheilLepidopteraAtlasTaf53

Sharpen5To4, Median 0, Threshold 0

Sharpen5To4 Median 0 Threshold 0

Sharpen5To4, Median 0, Threshold 0, Mono

Sharpen5To4 Median 0 Threshold 0 Mono

Sharpen7To1, Median 0, Threshold 0

Sharpen7To1 Median 0 Threshold 0

Sharpen7To1, Median 0, Threshold 0, Mono

Sharpen7To1 Median 0 Threshold 0 Mono

Sharpen9To1, Median 0, Threshold 0

Sharpen9To1 Median 0 Threshold 0

Sharpen9To1, Median 0, Threshold 0, Mono

Sharpen9To1 Median 0 Threshold 0 Mono

Sharpen10To8, Median 0, Threshold 0

Sharpen10To8 Median 0 Threshold 0

Sharpen10To8, Median 0, Threshold 0, Mono

Sharpen10To8 Median 0 Threshold 0 Mono

Sharpen11To8, Median 0, Threshold 0

Sharpen11To8 Median 0 Threshold 0

Sharpen11To8, Median 0, Threshold 0, Grayscale, Mono

Sharpen11To8 Median 0 Threshold 0 Grayscale Mono

Sharpen12To1, Median 0, Threshold 0

Sharpen12To1 Median 0 Threshold 0

Sharpen12To1, Median 0, Threshold 0, Mono

Sharpen12To1 Median 0 Threshold 0 Mono

Sharpen24To1, Median 0, Threshold 0

Sharpen24To1 Median 0 Threshold 0

Sharpen24To1, Median 0, Threshold 0, Grayscale, Mono

Sharpen24To1 Median 0 Threshold 0 Grayscale Mono

Sharpen24To1, Median 0, Threshold 0, Mono

Sharpen24To1 Median 0 Threshold 0 Mono

Sharpen24To1, Median 0, Threshold 21, Grayscale, Mono

Sharpen24To1 Median 0 Threshold 21 Grayscale Mono

Sharpen48To1, Median 0, Threshold 0

Sharpen48To1 Median 0 Threshold 0

Sharpen48To1, Median 0, Threshold 0, Grayscale, Mono

Sharpen48To1 Median 0 Threshold 0 Grayscale Mono

Sharpen48To1, Median 0, Threshold 0, Mono

Sharpen48To1 Median 0 Threshold 0 Mono

Sharpen48To1, Median 0, Threshold 226, Mono

Sharpen48To1 Median 0 Threshold 226 Mono

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Image Cartoon Effect

Article purpose

In this article we explore the tasks related to creating a Cartoon Effect from images which reflect real world non-animated scenarios. When applying a Cartoon Effect it becomes possible with relative ease to create images appearing to have originated from a drawing/animation.

Cartoon version of Steve Ballmer: Low Pass 3×3, Threshold 65.

Low Pass 3x3 Threshold 65

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

CPU: Gaussian 7×7, Threshold 84

Gaussian 7x7 Threshold 84 CPU

Using the Sample Application

A sample application has been included as part of the sample source code accompanying this article. The sample application is a Windows Forms based application which enables a user to specify a source/input image and apply various methods of implementing the Cartoon Effect. In addition users are able to save generated images to the local system.

When using the Sample Application click the Load Image button to load files from the local file system. On the right-hand side of the Sample application’s user interface, users are provided with two configuration options: Smoothing Filter and Threshold.

Rose: Gaussian 3×3 Threshold 28.

Gaussian 3x3 Threshold 28

In this article and sample source code image detail and definition can be reduced through means of image smoothing filters. Several smoothing options are available to the user; the following section serves as a discussion of each option.

None – When specifying the Smoothing Filter option None, no smoothing operations will be performed on the source/input image.

Gaussian 3×3 – Gaussian filters can be very effective at removing image noise and smoothing an image background, whilst still preserving the edges expressed in the sample/input image. A Gaussian matrix/kernel of 3×3 dimensions results in slight image smoothing.

Gaussian 5×5 – A Gaussian blur operation being implemented by making use of a matrix/kernel defined with dimensions of 5×5. A slightly larger matrix/kernel results in an increased level of blurring being expressed by output images. A greater level of blurring equates to a larger degree of image noise reduction/removal.

Rose: Gaussian 7×7 Threshold 48.

Gaussian 7x7 Threshold 48 

Gaussian 7×7 – As can be expected, when specifying a matrix/kernel conforming to a 7×7 size dimension an even more intense level of blurring can be detected when looking at result images. Notice how increased levels of blurring negatively affect the process of edge detection. Consider the following: in a scenario where too many image elements are being detected as part of an edge as a result of image noise, specifying a higher level of blurring should reduce the number of edges being detected. The reasoning can be explained in terms of blurring reducing image noise/detail; higher levels of blurring will thus result in a greater level of detail/definition reduction. Lower definition images are less likely to express the same level of detected edges when compared to higher definition images.

CPU: Median 3×3, Threshold 96.

Median 3x3 Threshold 96 CPU

Median 3×3 – When applying a median filter to an image the resulting image should express a lesser degree of image noise. In other words, the median filter can be considered as well suited to performing noise reduction. Also note that a median filter under certain conditions has the ability to preserve the edges contained in an image. In the following section we explore the importance of edge detection in achieving a Cartoon Effect. Important concepts to take note of: the median filter when implemented on an image performs noise reduction whilst preserving edges. In relation, edge detection represents a core concept/task when creating a Cartoon Effect. The median filter’s edge preservation property complements the process of edge detection. When an image contains a low level of image noise the Median 3×3 Filter could be considered.

Median 5×5 – The 5×5 dimension implementation of the median filter results in producing images which exhibit a higher degree of smoothing and a lesser expression of image noise. If the Median 3×3 Filter fails to provide adequate levels of smoothing and noise reduction the Median 5×5 Filter could be implemented.

Cartoon version of Steve Ballmer: Sharpen 3×3, Threshold 80.

Sharpen 3x3 Threshold 80

Median 7×7 – The last median filter implemented by the sample source code conforms to a 7×7 size dimension. This filter variation results in a high level of image smoothing. The trade-off to more effective noise reduction will be expressed in result images appearing extremely smooth, in some scenarios perhaps overly so.

Mean 3×3 – The Mean Filter provides a different implementation towards achieving image smoothing and noise reduction.

Mean 5×5 – The 5×5 dimension Mean Filter variation serves as a more intense version of the Mean 3×3 Filter. Depending on the level and type of image noise a Mean Filter could prove a more efficient implementation in comparison to a Median Filter.

Low Pass 3×3 – In much the same fashion as Gaussian and Mean Filters, a Low Pass Filter achieves image smoothing and noise reduction. Notice when comparing Gaussian, Mean and Low Pass Filtering, the differences observed in output results are only expressed as slight differences. The most effective filter to apply should be seen as being dependent on the input/source image characteristics.

CPU: Gaussian 3×3, Threshold 92.

Gaussian 3x3 Threshold 92 CPU 

Low Pass 5×5 – This filter variation, being of a larger dimension, serves as a more intense implementation of the Low Pass 3×3 Filter.

Sharpen 3×3 – In certain scenarios the input/source image may already be smoothed/blurred to such an extent that the edge detection process performs below expectation. Edge detection can be improved when applying a sharpening filter.

Threshold values specified by the user through the user interface serve the purpose of enabling the user to finely control the extent/intensity of edges being detected. Implementing a higher Threshold value will have the result of fewer edges being detected. In order to reduce the level of image noise being detected as false edges the Threshold value should be increased. When too few edges are being detected the Threshold value should be decreased.

The following image is a screenshot of the Image Cartoon Effect Sample Application in action:

Image Cartoon Effect Sample Application

Explanation of the Cartoon Effect

The Cartoon Effect can be characterised as an image filter producing result images which appear similar to input/source images with the exception of having an animated appearance.

The Cartoon Effect consists of reducing image detail/definition whilst at the same instance performing edge detection. The resulting smoothed image and the edges detected in the source/input image should be combined, with detected edges being expressed in the colour black. The final image reflects an appearance similar to that of an animated/artist drawn image.

Various methods of reducing image detail/definition are supported in the sample source code. Most methods consist of implementing image smoothing. The configurable methods implemented correspond to the smoothing filter options discussed in the previous section.

Rose: Mean 5×5 Threshold 37.

Mean 5x5 Threshold 37 

All of the filter methods listed above, with the exception of the Median Filters, are implemented by means of convolution. The size dimensions listed for each filter option relate to the dimension of the matrix/kernel being implemented by a filter.

When applying a filter, the intensity/extent will be determined by the size dimensions of the matrix/kernel implemented. Smaller matrix/kernel dimensions result in a filter being applied to a lesser extent. Larger matrix/kernel dimensions will result in the filter effect being more evident, being applied to a greater extent. Image noise reduction will be achieved when implementing a smoothing filter.
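
As a side note on how the smoothing kernels relate to the factor values applied later on, a normalization factor is commonly derived as one divided by the sum of a kernel’s elements; the Gaussian 3×3 kernel defined later in this article sums to 16 and the Gaussian 5×5 kernel sums to 159, which matches the 1.0 / 16.0 and 1.0 / 159.0 factors passed by the SmoothingFilter method. The KernelFactor helper below is a hypothetical sketch included only to illustrate that relationship; it does not form part of the sample source code.

private static double KernelFactor(double[,] kernel)
{
    double sum = 0;

    // Sum every element of the two dimensional kernel.
    for (int row = 0; row < kernel.GetLength(0); row++)
    {
        for (int col = 0; col < kernel.GetLength(1); col++)
        {
            sum += kernel[row, col];
        }
    }

    // Guard against division by zero for kernels that sum to zero.
    return (sum != 0 ? 1.0 / sum : 1.0);
}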

The sample source code implements Gradient Based Edge Detection using the original source/input image, therefore not being influenced by any smoothing operations. I have published an in-depth article on the topic of Gradient Based Edge Detection which can be located here.

Rose: Median 3×3 Threshold 37.

Median 3x3 Threshold 37

The sample source code implements Gradient Based Edge Detection by means of iterating each pixel that forms part of the sample/input image. Whilst iterating pixels the sample code calculates various gradients from the current pixel’s neighbouring pixels, on a per colour component basis (Red, Green and Blue). Referring to neighbouring pixels, calculations include the values of the pixels surrounding the pixel currently being iterated. Neighbouring pixel calculations are better known as matrix/window/kernel operations.

Note: Do not confuse convolution and the method in which we iterate and calculate gradients. Although both methods have various aspects in common, convolution is regarded as linear filter processing, whereas our method qualifies as a non-linear filter.

We calculate various gradients, which are to be compared against the user specified global threshold value. If a calculated gradient value exceeds the value of the user specified threshold the pixel currently being iterated will be considered as part of an edge.

The first gradients to be calculated involve the pixels directly above, below, left and right of the current pixel. A gradient will be calculated for each colour component. The gradient values being calculated can be considered as an indicator reflecting the rate of change. If the sum total of the calculated gradients exceeds that of the global threshold the pixel will be considered as forming part of an edge.

When the comparison of the threshold value and the total gradient value reflects in favour of the threshold, the following set of gradients will be calculated. This process of calculating gradients will continue either until a gradient value exceeds the threshold or all gradients have been calculated.

If a pixel was detected as forming part of an edge, the pixel’s colour will be set to black. In the case of non-edge pixels, the original colour components from the source/input image will be used in setting the current pixel’s value.
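
The first gradient check described above can be sketched as follows for a single colour channel. ExceedsThreshold is a hypothetical helper written only to illustrate the idea; it assumes the 32bpp BGRA buffer layout and the stride value obtained from BitmapData, as used throughout the sample source code, and it omits the additional gradient sets calculated by the actual CartoonEffectFilter method.

private static bool ExceedsThreshold(byte[] pixelBuffer, int byteOffset,
                                     int stride, int threshold)
{
    // Horizontal gradient: the pixel to the left versus the pixel to the right.
    int gradient = Math.Abs(pixelBuffer[byteOffset - 4] -
                            pixelBuffer[byteOffset + 4]);

    // Vertical gradient: the pixel above versus the pixel below.
    gradient += Math.Abs(pixelBuffer[byteOffset - stride] -
                         pixelBuffer[byteOffset + stride]);

    // The pixel is regarded as forming part of an edge when the rate of
    // change exceeds the user specified threshold.
    return gradient > threshold;
}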

Rose: Gaussian 3×3 Threshold 28

Gaussian 3x3 Threshold 28

Implementing Cartoon Effects

The sample source code implementation can be divided into five distinct components: the Cartoon Effect filter, a smoothing helper method, a median filter implementation, a convolution filter implementation and the collection of pre-defined matrix/kernel values.

The sample source code defines the MedianFilter extension method targeting the Bitmap class. The following code snippet provides the definition:

 public static Bitmap MedianFilter(this Bitmap sourceBitmap, 
                                   int matrixSize) 
{ 
     BitmapData sourceData = 
                sourceBitmap.LockBits(new Rectangle(0, 0, 
                sourceBitmap.Width, sourceBitmap.Height), 
                ImageLockMode.ReadOnly, 
                PixelFormat.Format32bppArgb); 

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);
int filterOffset = (matrixSize - 1) / 2; int calcOffset = 0;
int byteOffset = 0;
List<int> neighbourPixels = new List<int>(); byte[] middlePixel;
for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++)
{
    for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++)
    {
        byteOffset = offsetY * sourceData.Stride + offsetX * 4;
        neighbourPixels.Clear();

        for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
        {
            for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
            {
                calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);
                neighbourPixels.Add(BitConverter.ToInt32(pixelBuffer, calcOffset));
            }
        }

        neighbourPixels.Sort();

        // Select the middle (median) value from the sorted list of neighbouring pixels.
        middlePixel = BitConverter.GetBytes(neighbourPixels[neighbourPixels.Count / 2]);

        resultBuffer[byteOffset] = middlePixel[0];
        resultBuffer[byteOffset + 1] = middlePixel[1];
        resultBuffer[byteOffset + 2] = middlePixel[2];
        resultBuffer[byteOffset + 3] = middlePixel[3];
    }
}
Bitmap resultBitmap = new Bitmap (sourceBitmap.Width, sourceBitmap.Height);
BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);
return resultBitmap; }

The SmoothingFilterType enum, defined by the sample source code, serves as a strongly typed definition of the collection of implemented smoothing filters. The definition as follows:

 public enum SmoothingFilterType  
 {
     None, 
     Gaussian3x3, 
     Gaussian5x5, 
     Gaussian7x7, 
     Median3x3, 
     Median5x5, 
     Median7x7, 
     Median9x9, 
     Mean3x3, 
     Mean5x5, 
     LowPass3x3, 
     LowPass5x5, 
     Sharpen3x3, 
 } 

The Matrix class contains the definition of all the two dimensional matrix/kernel values implemented when performing convolution. The definition as follows:

public static class Matrix 
{ 
    public static double[,] Gaussian3x3 
    { 
        get 
        {
            return new double[,]   
             { { 1, 2, 1, },  
               { 2, 4, 2, },  
               { 1, 2, 1, }, }; 
        } 
    }
 
    public static double[,] Gaussian5x5 
    {
        get 
        { 
            return new double[,]   
             { { 2, 04, 05, 04, 2  },  
               { 4, 09, 12, 09, 4  },  
               { 5, 12, 15, 12, 5  }, 
               { 4, 09, 12, 09, 4  }, 
               { 2, 04, 05, 04, 2  }, }; 
        } 
    } 
 
    public static double[,] Gaussian7x7 
    {
        get 
        { 
            return new double[,]   
             { { 1,  1,  2,  2,  2,  1,  1, },  
               { 1,  2,  2,  4,  2,  2,  1, },  
               { 2,  2,  4,  8,  4,  2,  2, },  
               { 2,  4,  8, 16,  8,  4,  2, },  
               { 2,  2,  4,  8,  4,  2,  2, },  
               { 1,  2,  2,  4,  2,  2,  1, },  
               { 1,  1,  2,  2,  2,  1,  1, }, }; 
        } 
    } 
 
    public static double[,] Mean3x3 
    { 
        get 
        { 
            return new double[,]   
             { { 1, 1, 1, },  
               { 1, 1, 1, },  
               { 1, 1, 1, }, }; 
        } 
    } 
 
    public static double[,] Mean5x5 
    { 
        get 
        { 
            return new double[,]   
             { { 1, 1, 1, 1, 1, },  
               { 1, 1, 1, 1, 1, },  
               { 1, 1, 1, 1, 1, },  
               { 1, 1, 1, 1, 1, },  
               { 1, 1, 1, 1, 1, }, }; 
        } 
    } 
 
    public static double [,] LowPass3x3 
    { 
        get 
        { 
            return new double [,]   
             { { 1, 2, 1, },  
               { 2, 4, 2, },   
               { 1, 2, 1, }, }; 
        }
    } 
 
    public static double[,] LowPass5x5 
    { 
        get 
        { 
            return new double[,]   
             { { 1, 1,  1, 1, 1, },  
               { 1, 4,  4, 4, 1, },  
               { 1, 4, 12, 4, 1, },  
               { 1, 4,  4, 4, 1, },  
               { 1, 1,  1, 1, 1, }, }; 
        }
    }
 
    public static double[,] Sharpen3x3 
    { 
        get 
         {
            return new double[,]   
             { { -1, -2, -1, },  
               {  2,  4,  2, },   
               {  1,  2,  1, }, }; 
         }
    } 
} 

Rose: Low Pass 3×3 Threshold 61

Low Pass 3x3 Threshold 61

The SmoothingFilter extension method targets the Bitmap class. This method implements image smoothing. The primary task performed by the SmoothingFilter involves translating filter options into the correct method calls. The definition as follows:

public static Bitmap SmoothingFilter(this Bitmap sourceBitmap, 
                            SmoothingFilterType smoothFilter = 
                            SmoothingFilterType.None) 
 {
    Bitmap inputBitmap = null; 

switch (smoothFilter)
{
    case SmoothingFilterType.None:
        inputBitmap = sourceBitmap;
        break;
    case SmoothingFilterType.Gaussian3x3:
        inputBitmap = sourceBitmap.ConvolutionFilter(Matrix.Gaussian3x3, 1.0 / 16.0, 0);
        break;
    case SmoothingFilterType.Gaussian5x5:
        inputBitmap = sourceBitmap.ConvolutionFilter(Matrix.Gaussian5x5, 1.0 / 159.0, 0);
        break;
    case SmoothingFilterType.Gaussian7x7:
        inputBitmap = sourceBitmap.ConvolutionFilter(Matrix.Gaussian7x7, 1.0 / 136.0, 0);
        break;
    case SmoothingFilterType.Median3x3:
        inputBitmap = sourceBitmap.MedianFilter(3);
        break;
    case SmoothingFilterType.Median5x5:
        inputBitmap = sourceBitmap.MedianFilter(5);
        break;
    case SmoothingFilterType.Median7x7:
        inputBitmap = sourceBitmap.MedianFilter(7);
        break;
    case SmoothingFilterType.Median9x9:
        inputBitmap = sourceBitmap.MedianFilter(9);
        break;
    case SmoothingFilterType.Mean3x3:
        inputBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean3x3, 1.0 / 9.0, 0);
        break;
    case SmoothingFilterType.Mean5x5:
        inputBitmap = sourceBitmap.ConvolutionFilter(Matrix.Mean5x5, 1.0 / 25.0, 0);
        break;
    case SmoothingFilterType.LowPass3x3:
        inputBitmap = sourceBitmap.ConvolutionFilter(Matrix.LowPass3x3, 1.0 / 16.0, 0);
        break;
    case SmoothingFilterType.LowPass5x5:
        inputBitmap = sourceBitmap.ConvolutionFilter(Matrix.LowPass5x5, 1.0 / 60.0, 0);
        break;
    case SmoothingFilterType.Sharpen3x3:
        inputBitmap = sourceBitmap.ConvolutionFilter(Matrix.Sharpen3x3, 1.0 / 8.0, 0);
        break;
}
return inputBitmap; }
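
The following is a hypothetical usage sketch of the SmoothingFilter extension method, assuming the sample project's namespace has been imported and sourceBitmap references a previously loaded Bitmap:

Bitmap smoothedBitmap = sourceBitmap.SmoothingFilter(SmoothingFilterType.Gaussian5x5);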

The ConvolutionFilter extension method, which targets the Bitmap class, implements convolution. The definition as follows:

private static Bitmap ConvolutionFilter(this Bitmap sourceBitmap, 
                                          double[,] filterMatrix, 
                                               double factor = 1, 
                                                    int bias = 0) 
{ 
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, 
                             sourceBitmap.Width, sourceBitmap.Height), 
                                               ImageLockMode.ReadOnly, 
                                         PixelFormat.Format32bppArgb); 

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height]; byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length); sourceBitmap.UnlockBits(sourceData);
double blue = 0.0; double green = 0.0; double red = 0.0;
int filterWidth = filterMatrix.GetLength(1); int filterHeight = filterMatrix.GetLength(0);
int filterOffset = (filterWidth - 1) / 2; int calcOffset = 0;
int byteOffset = 0;
for (int offsetY = filterOffset; offsetY < sourceBitmap.Height - filterOffset; offsetY++) { for (int offsetX = filterOffset; offsetX < sourceBitmap.Width - filterOffset; offsetX++) { blue = 0; green = 0; red = 0;
byteOffset = offsetY * sourceData.Stride + offsetX * 4;
for (int filterY = -filterOffset; filterY <= filterOffset; filterY++) { for (int filterX = -filterOffset; filterX <= filterOffset; filterX++) {
calcOffset = byteOffset + (filterX * 4) + (filterY * sourceData.Stride);
blue += (double)(pixelBuffer[calcOffset]) * filterMatrix[filterY + filterOffset, filterX + filterOffset];
green += (double)(pixelBuffer[calcOffset + 1]) * filterMatrix[filterY + filterOffset, filterX + filterOffset];
red += (double)(pixelBuffer[calcOffset + 2]) * filterMatrix[filterY + filterOffset, filterX + filterOffset]; } }
blue = factor * blue + bias; green = factor * green + bias; red = factor * red + bias;
blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
green = (green > 255 ? 255 : (green < 0 ? 0 : green));
red = (red > 255 ? 255 : (red < 0 ? 0 : red));
resultBuffer[byteOffset] = (byte)(blue); resultBuffer[byteOffset + 1] = (byte)(green); resultBuffer[byteOffset + 2] = (byte)(red); resultBuffer[byteOffset + 3] = 255; } }
Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode .WriteOnly, PixelFormat .Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length); resultBitmap.UnlockBits(resultData);
return resultBitmap; }

Cartoon version of Steve Ballmer: Sharpen 3×3 Threshold 80

Sharpen 3x3 Threshold 80

The CartoonEffectFilter extension method targets the Bitmap class. This method defines all the tasks required in order to implement a Cartoon Filter. From an implementation point of view, consuming code is only required to invoke this method; no other additional method calls are required. The definition as follows:

public static Bitmap CartoonEffectFilter( 
                                this Bitmap sourceBitmap, 
                                byte threshold = 0, 
                                SmoothingFilterType smoothFilter  
                                = SmoothingFilterType.None) 
{ 
    sourceBitmap = sourceBitmap.SmoothingFilter(smoothFilter); 

BitmapData sourceData = sourceBitmap.LockBits(new Rectangle (0, 0, sourceBitmap.Width, sourceBitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);
int byteOffset = 0; int blueGradient, greenGradient, redGradient = 0; double blue = 0, green = 0, red = 0;
bool exceedsThreshold = false;
for (int offsetY = 1; offsetY < sourceBitmap.Height - 1; offsetY++) { for (int offsetX = 1; offsetX < sourceBitmap.Width - 1; offsetX++) { byteOffset = offsetY * sourceData.Stride + offsetX * 4;
blueGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]);
blueGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]);
byteOffset++;
greenGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]);
greenGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]);
byteOffset++;
redGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]);
redGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]);
if (blueGradient + greenGradient + redGradient > threshold) { exceedsThreshold = true ; } else { byteOffset -= 2;
blueGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]); byteOffset++;
greenGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]); byteOffset++;
redGradient = Math.Abs(pixelBuffer[byteOffset - 4] - pixelBuffer[byteOffset + 4]);
if (blueGradient + greenGradient + redGradient > threshold) { exceedsThreshold = true ; } else { byteOffset -= 2;
blueGradient = Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]);
byteOffset++;
greenGradient = Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]);
byteOffset++;
redGradient = Math.Abs(pixelBuffer[byteOffset - sourceData.Stride] - pixelBuffer[byteOffset + sourceData.Stride]);
if (blueGradient + greenGradient + redGradient > threshold) { exceedsThreshold = true ; } else { byteOffset -= 2;
blueGradient = Math.Abs(pixelBuffer[byteOffset - 4 - sourceData.Stride] - pixelBuffer[byteOffset + 4 + sourceData.Stride]);
blueGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride + 4] - pixelBuffer[byteOffset + sourceData.Stride - 4]);
byteOffset++;
greenGradient = Math.Abs(pixelBuffer[byteOffset - 4 - sourceData.Stride] - pixelBuffer[byteOffset + 4 + sourceData.Stride]);
greenGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride + 4] - pixelBuffer[byteOffset + sourceData.Stride - 4]);
byteOffset++;
redGradient = Math.Abs(pixelBuffer[byteOffset - 4 - sourceData.Stride] - pixelBuffer[byteOffset + 4 + sourceData.Stride]);
redGradient += Math.Abs(pixelBuffer[byteOffset - sourceData.Stride + 4] - pixelBuffer[byteOffset + sourceData.Stride - 4]);
if (blueGradient + greenGradient + redGradient > threshold) { exceedsThreshold = true ; } else { exceedsThreshold = false ; } } } }
byteOffset -= 2;
if (exceedsThreshold) { blue = 0; green = 0; red = 0; } else { blue = pixelBuffer[byteOffset]; green = pixelBuffer[byteOffset + 1]; red = pixelBuffer[byteOffset + 2]; }
blue = (blue > 255 ? 255 : (blue < 0 ? 0 : blue));
green = (green > 255 ? 255 : (green < 0 ? 0 : green));
red = (red > 255 ? 255 : (red < 0 ? 0 : red));
resultBuffer[byteOffset] = (byte)blue; resultBuffer[byteOffset + 1] = (byte)green; resultBuffer[byteOffset + 2] = (byte)red; resultBuffer[byteOffset + 3] = 255; } }
Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);
return resultBitmap; }
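
As a hypothetical usage sketch, the following invocation corresponds to the "CPU: Gaussian 7×7, Threshold 84" sample image shown earlier, assuming the sample project's namespace has been imported and sourceBitmap references a previously loaded Bitmap:

Bitmap cartoonBitmap = sourceBitmap.CartoonEffectFilter(84, SmoothingFilterType.Gaussian7x7);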

Sample Images

The sample image used in this article which features Bill Gates has been licensed under the Creative Commons Attribution 2.0 Generic license and can be downloaded from Wikipedia.

The sample image featuring Steve Ballmer has been licensed under the Creative Commons Attribution 2.0 Generic license and can be downloaded from Wikipedia.

The sample image featuring an Amber flush Rose has been licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic license and can be downloaded from Wikipedia.

The sample image featuring a Computer Processor has been licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license and can be downloaded from Wikipedia. The original author is attributed as Andrew Dunn, http://www.andrewdunnphoto.com/

The Original Image

BillGates2012

No Smoothing, Threshold 100

No Smoothing Threshold 100 Gates

Gaussian 3×3, Threshold 73

Gaussian 3x3 Threshold 73 Gates

Gaussian 5×5, Threshold 78

Gaussian 5x5 Threshold 78 Gates

Gaussian 7×7, Threshold 84

Gaussian 7x7 Threshold 84 Gates

Low Pass 3×3, Threshold 72

LowPass 3x3 Threshold 72 Gates

Low Pass 5×5, Threshold 81

LowPass 5x5 Threshold 81 Gates

Mean 3×3, Threshold 79

Mean 3x3 Threshold 79 Gates

Mean 5×5, Threshold 80

Mean 5x5 Threshold 80 Gates

Median 3×3, Threshold 85

Median 3x3 Threshold 85 Gates

Median 5×5, Threshold 105

Median 5x5 Threshold 105 Gates

Median 7×7, Threshold 127

Median 7x7 Threshold 127 Gates

Median 9×9, Threshold 154

Median 9x9 Threshold 154 Gates

Sharpen 3×3, Threshold 114

Sharpen 3x3 Threshold 114 Gates

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images of which you can find URL links here:

C# How to: Image Arithmetic

Article Purpose

The objective of this article is to illustrate Image Arithmetic being implemented when blending/combining two separate images into a single result image. The types of Image Arithmetic discussed are: Average, Add, SubtractLeft, SubtractRight, Difference, Multiply, Min, Max and Amplitude.

I created the following image by implementing Image Arithmetic using as input a photo of a friend’s ear and a photograph taken at a live concert performance by The Red Hot Chili Peppers.

The-RHCP-Sound_Scaled

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download here.


Using the Sample Application

The sample source code accompanying this article includes a sample application developed on the Windows Forms platform. The sample application is intended to provide an implementation of the various types of Image Arithmetic explored in this article.

The Image Arithmetic sample application allows the user to select two source/input images from the local file system. The user interface defines a ComboBox dropdown populated with entries relating to the types of Image Arithmetic.

The following is a screenshot taken whilst creating the “Red Hot Chili Peppers Concert – Side profile Ear” blended image illustrated in the first image shown in this article. Notice the stark contrast when comparing the source/input preview images. Implementing Image Arithmetic allows us to create a smoothly blended result image:

ImageArithmetic_SampleApplication

Newly created images can be saved to the local file system by clicking the ‘Save Image’ button.

Image Arithmetic

In simple terms Image Arithmetic involves the process of performing calculations on two images’ corresponding pixel colour components. The values resulting from performing calculations represent a single image which is a combination of the two original source/input images. The extent to which a source/input image will be represented in the resulting image is dependent on the type of Image Arithmetic employed.

The ArithmeticBlend Extension method

In this article Image Arithmetic has been implemented as a single extension method targeting the Bitmap class. The ArithmeticBlend extension method expects as parameters two source/input Bitmap objects and a value indicating the type of Image Arithmetic to perform.

The ColorCalculationType enum defines a value for each type of Image Arithmetic supported. The definition as follows:

public enum ColorCalculationType 
{ 
   Average, 
   Add, 
   SubtractLeft, 
   SubtractRight, 
   Difference, 
   Multiply, 
   Min, 
   Max, 
   Amplitude 
}

It is only within the ArithmeticBlend method that we perform Image Arithmetic. This method accesses the underlying pixel data of each sample image and creates copies stored in byte arrays. Each element within an array data buffer represents a single colour component, either Alpha, Red, Green or Blue.

The following code snippet details the implementation of the ArithmeticBlend extension method:

 public static Bitmap ArithmeticBlend(this Bitmap sourceBitmap, Bitmap blendBitmap,  
                                 ColorCalculator.ColorCalculationType calculationType) 
{ 
    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle (0, 0, 
                            sourceBitmap.Width, sourceBitmap.Height), 
                            ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb); 

byte[] pixelBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, pixelBuffer, 0, pixelBuffer.Length);
sourceBitmap.UnlockBits(sourceData);

BitmapData blendData = blendBitmap.LockBits(new Rectangle(0, 0,
                       blendBitmap.Width, blendBitmap.Height),
                       ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

byte[] blendBuffer = new byte[blendData.Stride * blendData.Height];
Marshal.Copy(blendData.Scan0, blendBuffer, 0, blendBuffer.Length);
blendBitmap.UnlockBits(blendData);

for (int k = 0; (k + 4 < pixelBuffer.Length) && (k + 4 < blendBuffer.Length); k += 4)
{
    pixelBuffer[k] = ColorCalculator.Calculate(pixelBuffer[k],
                                               blendBuffer[k], calculationType);

    pixelBuffer[k + 1] = ColorCalculator.Calculate(pixelBuffer[k + 1],
                                                   blendBuffer[k + 1], calculationType);

    pixelBuffer[k + 2] = ColorCalculator.Calculate(pixelBuffer[k + 2],
                                                   blendBuffer[k + 2], calculationType);
}
Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height);
BitmapData resultData = resultBitmap.LockBits(new Rectangle (0, 0, resultBitmap.Width, resultBitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
Marshal.Copy(pixelBuffer, 0, resultData.Scan0, pixelBuffer.Length); resultBitmap.UnlockBits(resultData);
return resultBitmap; }

We access and copy the underlying pixel data of each input image by making use of the Bitmap.LockBits method and also the Marshal.Copy method.

The method iterates both array data buffers simultaneously, having set the for loop condition to regard the array size of both arrays. Scenarios where the array data buffers differ in size occur when the source images specified are not equal in terms of size dimensions.

Notice how each iteration increments the loop counter by a factor of four, allowing us to treat each iteration as a complete pixel value. Remember that each data buffer element represents an individual colour component. Every four elements represent a single pixel consisting of the components: Alpha, Red, Green and Blue.

Take Note: The ordering of colour components is the exact opposite of the expected order. Each pixel’s colour components are ordered: Blue, Green, Red, Alpha. Since we are iterating an entire pixel with each iteration the for loop counter value will always equate to an element index representing the Blue colour component. In order to access the Green and Red colour components we simply add the values one and two respectively to the for loop counter value.

The task of performing the actual arithmetic has been encapsulated within the static Calculate method, a public member of the static class ColorCalculator. The Calculate method is discussed in more detail in the following section of this article.

The final task performed by the ArithmeticBlend method involves creating a new instance of the Bitmap class which is then updated/populated using the resulting array data buffer previously modified.
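
The following is a hypothetical usage sketch of the ArithmeticBlend extension method, assuming bitmapOne and bitmapTwo reference two previously loaded Bitmap objects and that the sample project's namespace has been imported:

Bitmap blendedBitmap = bitmapOne.ArithmeticBlend(bitmapTwo,
                       ColorCalculator.ColorCalculationType.Average);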

The ColorCalculator.Calculate method

The algorithms implemented in Image Arithmetic are encapsulated within the ColorCalculator.Calculate method. When implementing this method no knowledge of the technical implementation details is required. The parameters required are two byte values, each representing a single colour component, one from each source image. The only other required parameter is a value of type ColorCalculationType which indicates the type of Image Arithmetic to be implemented using the two parameters as operands.

The following code snippet details the full implementation of the ColorCalculator.Calculate method:

 public static byte Calculate(byte color1, byte color2, 
                   ColorCalculationType calculationType) 
{ 
    byte resultValue = 0; 
    int intResult = 0; 

    if (calculationType == ColorCalculationType.Add)
    {
        intResult = color1 + color2;
    }
    else if (calculationType == ColorCalculationType.Average)
    {
        intResult = (color1 + color2) / 2;
    }
    else if (calculationType == ColorCalculationType.SubtractLeft)
    {
        intResult = color1 - color2;
    }
    else if (calculationType == ColorCalculationType.SubtractRight)
    {
        intResult = color2 - color1;
    }
    else if (calculationType == ColorCalculationType.Difference)
    {
        intResult = Math.Abs(color1 - color2);
    }
    else if (calculationType == ColorCalculationType.Multiply)
    {
        intResult = (int)((color1 / 255.0 * color2 / 255.0) * 255.0);
    }
    else if (calculationType == ColorCalculationType.Min)
    {
        intResult = (color1 < color2 ? color1 : color2);
    }
    else if (calculationType == ColorCalculationType.Max)
    {
        intResult = (color1 > color2 ? color1 : color2);
    }
    else if (calculationType == ColorCalculationType.Amplitude)
    {
        intResult = (int)(Math.Sqrt(color1 * color1 + color2 * color2) / Math.Sqrt(2.0));
    }

    if (intResult < 0)
    {
        resultValue = 0;
    }
    else if (intResult > 255)
    {
        resultValue = 255;
    }
    else
    {
        resultValue = (byte)intResult;
    }
return resultValue; }

The bulk of the ColorCalculator.Calculate method’s implementation is set around a series of if/else if statements evaluating the calculationType parameter passed when the method is invoked.

Colour component values can only range from 0 to 255 inclusive. Calculations performed might result in values which do not fall within this valid range. Calculated values less than zero are set to zero and values exceeding 255 are set to 255; sometimes this is referred to as clamping.
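
The same clamping could equivalently be expressed by means of Math.Min and Math.Max. The ClampToByte helper below is a hypothetical alternative shown only for comparison with the conditional statements used in the sample source code.

private static byte ClampToByte(int value)
{
    // Restrict the value to the inclusive range 0 to 255 before casting.
    return (byte)Math.Min(255, Math.Max(0, value));
}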

The following sections of this article provide an explanation of each type of Image Arithmetic implemented.

Image Arithmetic: Add

if (calculationType == ColorCalculationType.Add)
{
    intResult = color1 + color2;
}

The Add algorithm is straightforward, simply adding together the two colour component values. In other words, the resulting colour component will be set to equal the sum of both source colour components, provided the total does not exceed 255.

Sample Image

ImageArithmetic_Add

Image Arithmetic: Average

if (calculationType == ColorCalculationType.Average)
{
    intResult = (color1 + color2) / 2;
}

The Average algorithm calculates a simple average by adding together the two colour components and then dividing the result by two.

Sample Image

ImageArithmetic_Average

Image Arithmetic: SubtractLeft

if (calculationType == ColorCalculationType.SubtractLeft)
{
    intResult = color1 - color2;
}

The SubtractLeft algorithm subtracts the value of the second colour component parameter from the first colour component parameter.

Sample Image

ImageArithmetic_SubtractLeft

Image Arithmetic: SubtractRight

if (calculationType == ColorCalculationType.SubtractRight)
{
    intResult = color2 - color1;
}

The SubtractRight algorithm, in contrast to SubtractLeft, subtracts the value of the first colour component parameter from the second colour component parameter.

Sample Image

ImageArithmetic_SubtractRight

Image Arithmetic: Difference

if (calculationType == ColorCalculationType.Difference)
{
    intResult = Math.Abs(color1 - color2);
}

The Difference algorithm subtracts the value of the second colour component parameter from the first colour component parameter. By passing the result of the subtraction as a parameter to the Math.Abs method the algorithm ensures only calculating absolute/positive values. In other words calculating the difference in value between colour component parameters.

Sample Image

ImageArithmetic_Difference

Image Arithmetic: Multiply

if (calculationType == ColorCalculationType.Multiply)
{
    intResult = (int)((color1 / 255.0 * color2 / 255.0) * 255.0);
}

The Multiply algorithm divides each colour component parameter by a value of 255 and then proceeds to multiply the results of the division; the result is then further multiplied by a value of 255.

Sample Image

ImageArithmetic_Multiply

Image Arithmetic: Min

if (calculationType == ColorCalculationType.Min)
{
    intResult = (color1 < color2 ? color1 : color2);
}

The Min algorithm simply compares the two colour component parameters and returns the smallest value of the two.

Sample Image

ImageArithmetic_Min

Image Arithmetic: Max

if (calculationType == ColorCalculationType.Max)
{
    intResult = (color1 > color2 ? color1 : color2);
}

The Max algorithm, as can be expected, will produce the exact opposite result when compared to the Min algorithm. This algorithm compares the two colour component parameters and returns the larger value of the two.

Sample Image

ImageArithmetic_Max

Image Arithmetic: Amplitude

 else if (calculationType == ColorCalculationType.Amplitude) 
{ 
         intResult = (int)(Math.Sqrt(color1 * color1 +
                                     color2 * color2) /
                                     Math.Sqrt(2.0)); 
} 
  

The Amplitude algorithm calculates the amplitude of the two colour component parameters by multiplying each colour component by itself, summing the results and then calculating the square root of the sum. The last step divides the result by the square root of two.
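
As an illustration using hypothetical values, colour components of 100 and 200 produce sqrt(100 x 100 + 200 x 200) = sqrt(50000) ≈ 223.6, which divided by sqrt(2) ≈ 1.414 results in a value of approximately 158.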

Sample Image

ImageArithmetic_Amplitude

Related Articles

C# How to: Swapping Bitmap ARGB Colour Channels

Article Purpose

The intention of this article is to explain and illustrate the various possible combinations that can be implemented when swapping the underlying colour channels related to a Bitmap image. The concepts explained can easily be replicated by making use of the included sample application.

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the sample Application

The sample application associated with this article allows the user to select a source image and apply a colour shifting option. The user is provided with the option to save the resulting new image to disk. The image below is a screenshot of the Bitmap ARGB Swapping application in action:

SampleAppScreenshot

The scenario illustrated above shows an image of flowers being transformed by swapping the underlying colour channels. In this case the ShiftLeft algorithm had been applied. The original image is licensed under a free licence and can be downloaded from Wikipedia.

Types of Colour Swapping

The sample source code defines the ColorSwapType enum, which represents the possible combinations of colour channel swapping that can be applied to a Bitmap. The source code extract below provides the definition of the ColorSwapType enum:

public enum ColorSwapType
{
    ShiftRight,
    ShiftLeft,
    SwapBlueAndRed,
    SwapBlueAndGreen,
    SwapRedAndGreen,
}

When directly manipulating a Bitmap object’s pixel values an important detail should be noted: Bitmap colour channels in memory are represented in the order Blue, Green, Red and Alpha, despite being commonly referred to by the abbreviation ARGB!

The following list describes each colour swapping type’s outcome:

  • ShiftRight: Starting at Blue, each colour’s value is set to the colour channel to the right. The value of Blue is applied to Red, Red’s original value applied to Green, Green’s original value applied to Blue.
  • ShiftLeft: Starting at Blue, each colour’s value is set to the colour channel to the left. The value of Blue is applied to Green, Green’s original value applied to Red, Red’s original value applied to Blue.
  • SwapBlueAndRed: The value of the Blue channel is applied to the Red channel and the original value of the Red channel is then applied to the Blue channel. The value of the Green channel remains unchanged.
  • SwapBlueAndGreen: The value of the Blue channel is applied to the Green channel and the original value of the Green channel is then applied to the Blue channel. The value of the Red  channel remains unchanged.
  • SwapRedAndGreen: The value of the Red channel is applied to the Green channel and the original value of the Green channel is then applied to the Red channel. The value of the Blue channel remains unchanged.
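
As a quick sketch of the channel mapping, consider a hypothetical pixel (the values below are made up and not part of the sample source code):

// Hypothetical pixel: Blue = 10, Green = 20, Red = 30.
// After applying ShiftLeft:
//   Green takes Blue's value  ->  Green = 10
//   Red takes Green's value   ->  Red   = 20
//   Blue takes Red's value    ->  Blue  = 30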

The Colour Swap Filter

The sample source code defines the ColorSwapFilter class. This class provides several member properties, which in combination represent the options involved in applying a colour swap filter. The source code snippet below provides the definition of the ColorSwapFilter type:

public class ColorSwapFilter
{
   private ColorSwapType swapType = ColorSwapType.ShiftRight;
   public ColorSwapType SwapType
   {
        get{ return swapType;}
        set{ swapType = value;}
   }

   private bool swapHalfColorValues = false;
   public bool SwapHalfColorValues
   {
        get { return swapHalfColorValues; }
        set { swapHalfColorValues = value; }
   }

   private bool invertColorsWhenSwapping = false;
   public bool InvertColorsWhenSwapping
   {
        get { return invertColorsWhenSwapping; }
        set { invertColorsWhenSwapping = value; }
   }

   public enum ColorSwapType
   {
       ShiftRight,
       ShiftLeft,
       SwapBlueAndRed,
       SwapBlueAndGreen,
       SwapRedAndGreen,
   }
}

The member properties defined by the ColorSwapFilter class are as follows (a short usage sketch follows the list):

  • Implementing the ColorSwapType discussed earlier, the SwapType member property defines the type of colour channel swapping to apply.
  • Before swapping colour channel values, colour values can be inverted depending on whether InvertColorsWhenSwapping equates to true.
  • In order to reduce the intensity of the resulting image, the SwapHalfColorValues property should be set to true. The end result being destination colour channels are set to 50% of relevant source colour channel values.
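
The snippet below is a minimal configuration sketch; the variable name is illustrative and not part of the sample source code:

ColorSwapFilter swapFilter = new ColorSwapFilter();

// Shift the channels left, halve the colour intensity and invert the source colours first.
swapFilter.SwapType = ColorSwapFilter.ColorSwapType.ShiftLeft;
swapFilter.SwapHalfColorValues = true;
swapFilter.InvertColorsWhenSwapping = true;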

Applying the Colour Swap Filter

The sample source code accompanying this article defines the SwapColorsCopy method, an extension method targeting the Bitmap class. When invoking the SwapColorsCopy extension method, the calling code is required to specify an instance of the ColorSwapFilter class as a parameter. By virtue of being an extension method, the input/source Bitmap is specified by the object instance invoking the SwapColorsCopy method.

The source code listing below provides the definition of the SwapColorsCopy extension method.

public static Bitmap SwapColorsCopy(this Bitmap originalImage, ColorSwapFilter swapFilterData)
{
    BitmapData sourceData = originalImage.LockBits
                            (new Rectangle(0, 0, originalImage.Width, originalImage.Height),
                            ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
    Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);
    originalImage.UnlockBits(sourceData);

    byte sourceBlue = 0, resultBlue = 0, sourceGreen = 0,
         resultGreen = 0, sourceRed = 0, resultRed = 0;
    byte byte2 = 2, maxValue = 255;

    for (int k = 0; k < resultBuffer.Length; k += 4)
    {
        // Channels are stored in the order Blue, Green, Red, Alpha.
        sourceBlue = resultBuffer[k];
        sourceGreen = resultBuffer[k + 1];
        sourceRed = resultBuffer[k + 2];

        if (swapFilterData.InvertColorsWhenSwapping == true)
        {
            sourceBlue = (byte)(maxValue - sourceBlue);
            sourceGreen = (byte)(maxValue - sourceGreen);
            sourceRed = (byte)(maxValue - sourceRed);
        }

        if (swapFilterData.SwapHalfColorValues == true)
        {
            sourceBlue = (byte)(sourceBlue / byte2);
            sourceGreen = (byte)(sourceGreen / byte2);
            sourceRed = (byte)(sourceRed / byte2);
        }

        switch (swapFilterData.SwapType)
        {
            case ColorSwapFilter.ColorSwapType.ShiftRight:
            {
                resultBlue = sourceGreen;
                resultRed = sourceBlue;
                resultGreen = sourceRed;
                break;
            }
            case ColorSwapFilter.ColorSwapType.ShiftLeft:
            {
                resultBlue = sourceRed;
                resultRed = sourceGreen;
                resultGreen = sourceBlue;
                break;
            }
            case ColorSwapFilter.ColorSwapType.SwapBlueAndRed:
            {
                resultBlue = sourceRed;
                resultRed = sourceBlue;
                resultGreen = sourceGreen;
                break;
            }
            case ColorSwapFilter.ColorSwapType.SwapBlueAndGreen:
            {
                resultBlue = sourceGreen;
                resultGreen = sourceBlue;
                resultRed = sourceRed;
                break;
            }
            case ColorSwapFilter.ColorSwapType.SwapRedAndGreen:
            {
                resultRed = sourceGreen;
                resultGreen = sourceRed;
                resultBlue = sourceBlue;
                break;
            }
        }

        resultBuffer[k] = resultBlue;
        resultBuffer[k + 1] = resultGreen;
        resultBuffer[k + 2] = resultRed;
    }

    Bitmap resultBitmap = new Bitmap(originalImage.Width, originalImage.Height,
                                     PixelFormat.Format32bppArgb);

    BitmapData resultData = resultBitmap.LockBits
                            (new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height),
                            ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}
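
As a rough usage sketch (the file paths and variable names below are placeholders and not part of the sample source code), the extension method could be invoked as follows:

// Load a source image, apply the SwapBlueAndRed algorithm and save the result.
Bitmap sourceBitmap = new Bitmap("flowers.png");

ColorSwapFilter swapFilter = new ColorSwapFilter();
swapFilter.SwapType = ColorSwapFilter.ColorSwapType.SwapBlueAndRed;

Bitmap swappedBitmap = sourceBitmap.SwapColorsCopy(swapFilter);
swappedBitmap.Save("flowers_swapped.png", ImageFormat.Png);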

Due to the architecture and implementation of the .NET Framework, when manipulating a Bitmap object’s underlying colour values we need to ensure the relevant data buffer is locked in memory. When invoking the Bitmap class’ LockBits method the calling code prevents the Garbage Collector from shifting the locked data and updating memory references. Once a Bitmap’s underlying pixel buffer has been locked in memory the source code creates a data buffer of type byte array and then copies the Bitmap’s underlying pixel buffer data.

BitmapData sourceData = originalImage.LockBits
                        (new Rectangle(0, 0, originalImage.Width, originalImage.Height),
                        ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

byte[] resultBuffer = new byte[sourceData.Stride * sourceData.Height];
Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);
originalImage.UnlockBits(sourceData);

The sample source code next iterates the pixel buffer array. Notice how the for loop increments by 4 with each iteration: every four elements of the data buffer in combination represent one pixel, with each colour channel expressed as a value ranging from 0 to 255 inclusive.

for (int k = 0; k < resultBuffer.Length; k += 4)
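
To make the buffer layout explicit: as noted earlier the in-memory channel order is Blue, Green, Red, Alpha, so within each four-byte pixel the offsets are as follows.

// resultBuffer[k]     -> Blue
// resultBuffer[k + 1] -> Green
// resultBuffer[k + 2] -> Red
// resultBuffer[k + 3] -> Alpha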

If required, each colour channel will first be assigned its inverse value by subtracting the channel value from 255.

if (swapFilterData.InvertColorsWhenSwapping == true)
{
     sourceBlue = (byte)(maxValue - sourceBlue);
     sourceGreen = (byte)(maxValue - sourceGreen);
     sourceRed = (byte)(maxValue - sourceRed);
}

When the supplied ColorSwapFilter method parameter defines SwapHalfColorValues as true, each source colour value will be divided by 2.

if (swapFilterData.SwapHalfColorValues == true)
{
     sourceBlue = (byte)(sourceBlue / byte2);
     sourceGreen = (byte)(sourceGreen / byte2);
     sourceRed = (byte)(sourceRed / byte2);
}
 

The next section implements a switch statement, each case implementing the required colour channel swap algorithm. The last step of the for loop assigns the newly calculated values back to the data buffer.

The SwapColorsCopy extension method can be described as immutable in the sense that the input Bitmap remains unchanged; instead a copy of the input data is manipulated and returned. Following the data buffer iteration the sample source creates a new instance of the Bitmap class and locks it into memory by invoking the LockBits method. By invoking the Marshal.Copy method the source code copies the data buffer to the underlying buffer associated with the newly created Bitmap object.

 Bitmap resultBitmap = new Bitmap(originalImage.Width, originalImage.Height, 
                                     PixelFormat.Format32bppArgb);
 
BitmapData resultData = resultBitmap.LockBits
                        (new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height),
                        ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
resultBitmap.UnlockBits(resultData);
return resultBitmap;

The Implementation: a Windows Forms Application

The sample source code accompanying this article defines a Windows Forms application, the intention of which is to illustrate a test implementation. The following series of images were created using the sample application:

The original source/input image can be downloaded from Wikipedia.

The Original Image

800px-HK_Sheung_Wan_Hollywood_Road_Park_Flowers_in_Purple

The ShiftLeft Colour Swapping algorithm:

ShiftLeft

Inverted:

ShiftLeft_inverted

The ShiftRight Colour Swapping algorithm:

ShiftRight

Inverted:

ShiftRight_inverted

The SwapBlueAndGreen Colour Swapping algorithm:

SwapBlueAndGreen

Inverted:

SwapBlueAndGreen_inverted

The SwapBlueAndRed Colour Swapping algorithm:

SwapBlueAndRed

Inverted:

SwapBlueAndRed_inverted

The SwapRedAndGreen Colour Swapping algorithm:

SwapRedAndGreen

Inverted:

SwapRedAndGreen_inverted

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; you can find URL links to them here:

C# How to: Bitmap Colour Substitution implementing thresholds

Article Purpose

This article is aimed at detailing how to implement the process of substituting the colour values that form part of a Bitmap image. Colour substitution is implemented by means of a threshold value. By implementing a threshold a range of similar colours can be substituted.

Sample source code

This article is accompanied by a sample source code Visual Studio project which is available for download here.

Using the sample Application

The provided sample source code builds a Windows Forms application which can be used to test/implement the concepts described in this article. The sample application enables the user to load an image file from the file system; the user can then specify the colour to replace, the replacement colour and the threshold to apply. The following image is a screenshot of the sample application in action.

BitmapColourSubstitution_Scaled

The scenario detailed in the above screenshot shows the sample application being used to create an image where the sky has more of a bluish hue when compared to the original image.

Notice how the replacement colour does not simply appear as a solid colour applied throughout. The replacement colour is applied in a manner that matches the intensity of the colour being substituted.

The colour filter options:

FilterOptions

The colour to replace was taken from the original image; the replacement colour is specified through a colour picker dialog. When a user clicks on either image displayed, the colour of the pixel clicked on sets the value of the replacement colour. By adjusting the threshold value the user can specify how wide or narrow the range of colours to replace should be. The higher the threshold value, the wider the range of colours that will be replaced.

The resulting image can be saved by clicking the “Save Result” button. In order to apply another colour substitution on the resulting image click the button labelled “Set Result as Source”.

Colour Substitution Filter Data

The sample source code provides the definition for the ColorSubstitutionFilter class. The purpose of this class is to contain data required when applying colour substitution. The ColorSubstitutionFilter class is defined as follows:

public class ColorSubstitutionFilter
{
    private int thresholdValue = 10;
    public int ThresholdValue
    {
        get { return thresholdValue; }
        set { thresholdValue = value; }
    }

    private Color sourceColor = Color.White;
    public Color SourceColor
    {
        get { return sourceColor; }
        set { sourceColor = value; }
    }

    private Color newColor = Color.White;
    public Color NewColor
    {
        get { return newColor; }
        set { newColor = value; }
    }
}

To implement a colour substitution filter we first have to create an object instance of type ColorSubstitutionFilter. A colour substitution requires specifying a SourceColor, which is the colour to replace/substitute, and a NewColor, which defines the colour that will replace the SourceColor. Also required is a ThresholdValue, which determines a range of colours based on the SourceColor.
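
The snippet below is a minimal configuration sketch; the variable name is illustrative and the colour values are borrowed from the Light Blue example table further down:

ColorSubstitutionFilter substitutionFilter = new ColorSubstitutionFilter();

// Replace near-white pixels with a light blue, allowing a tolerance of 10 per colour channel.
substitutionFilter.SourceColor = Color.FromArgb(255, 223, 224);
substitutionFilter.NewColor = Color.FromArgb(121, 188, 255);
substitutionFilter.ThresholdValue = 10;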

Colour Substitution implemented as an Extension method

The sample source code defines the ColorSubstitution extension method, which targets the Bitmap class. Invoking the ColorSubstitution method requires passing a parameter of type ColorSubstitutionFilter, which defines how colour substitution is to be implemented. The following code snippet contains the definition of the ColorSubstitution method.

public static Bitmap ColorSubstitution(this Bitmap sourceBitmap, ColorSubstitutionFilter filterData)
{
    Bitmap resultBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height, PixelFormat.Format32bppArgb);

    BitmapData sourceData = sourceBitmap.LockBits(new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height),
                                                  ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    BitmapData resultData = resultBitmap.LockBits(new Rectangle(0, 0, resultBitmap.Width, resultBitmap.Height),
                                                  ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

    byte[] resultBuffer = new byte[resultData.Stride * resultData.Height];
    Marshal.Copy(sourceData.Scan0, resultBuffer, 0, resultBuffer.Length);

    sourceBitmap.UnlockBits(sourceData);

    byte sourceRed = 0, sourceGreen = 0, sourceBlue = 0, sourceAlpha = 0;
    int resultRed = 0, resultGreen = 0, resultBlue = 0;

    byte newRedValue = filterData.NewColor.R;
    byte newGreenValue = filterData.NewColor.G;
    byte newBlueValue = filterData.NewColor.B;

    byte redFilter = filterData.SourceColor.R;
    byte greenFilter = filterData.SourceColor.G;
    byte blueFilter = filterData.SourceColor.B;

    byte minValue = 0;
    byte maxValue = 255;

    for (int k = 0; k < resultBuffer.Length; k += 4)
    {
        sourceAlpha = resultBuffer[k + 3];

        if (sourceAlpha != 0)
        {
            sourceBlue = resultBuffer[k];
            sourceGreen = resultBuffer[k + 1];
            sourceRed = resultBuffer[k + 2];

            // Only substitute pixels that fall within the threshold range on all three channels.
            if ((sourceBlue < blueFilter + filterData.ThresholdValue &&
                 sourceBlue > blueFilter - filterData.ThresholdValue) &&
                (sourceGreen < greenFilter + filterData.ThresholdValue &&
                 sourceGreen > greenFilter - filterData.ThresholdValue) &&
                (sourceRed < redFilter + filterData.ThresholdValue &&
                 sourceRed > redFilter - filterData.ThresholdValue))
            {
                resultBlue = blueFilter - sourceBlue + newBlueValue;

                if (resultBlue > maxValue)
                { resultBlue = maxValue; }
                else if (resultBlue < minValue)
                { resultBlue = minValue; }

                resultGreen = greenFilter - sourceGreen + newGreenValue;

                if (resultGreen > maxValue)
                { resultGreen = maxValue; }
                else if (resultGreen < minValue)
                { resultGreen = minValue; }

                resultRed = redFilter - sourceRed + newRedValue;

                if (resultRed > maxValue)
                { resultRed = maxValue; }
                else if (resultRed < minValue)
                { resultRed = minValue; }

                resultBuffer[k] = (byte)resultBlue;
                resultBuffer[k + 1] = (byte)resultGreen;
                resultBuffer[k + 2] = (byte)resultRed;
                resultBuffer[k + 3] = sourceAlpha;
            }
        }
    }

    Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
    resultBitmap.UnlockBits(resultData);

    return resultBitmap;
}

The ColorSubstitution method can be labelled as immutable due to its implementation. Being immutable implies that the source/input data will not be modified; instead a new Bitmap instance will be created, reflecting the source data as modified by the operations performed in the particular method.

The first statement defined in the ColorSubstitution method body instantiates a new Bitmap, matching the size dimensions of the source Bitmap object. Next the method invokes the LockBits method on the source and result Bitmap instances. When invoking LockBits the underlying data representing a Bitmap will be locked in memory. Being locked in memory can also be described as signalling the Garbage Collector not to move the locked data around in memory. Invoking UnlockBits results in the Garbage Collector functioning as per normal, moving data in memory and updating the relevant memory references when required.

The source code continues by copying all the bytes representing the source Bitmap to an array of bytes that represents the resulting Bitmap. At this stage the source and result Bitmaps are exactly identical and as yet unmodified. In order to determine which pixels should be modified based on colour, the source code iterates through the byte array associated with the result Bitmap.

Notice how the for loop increments by 4 with each iteration. The underlying data represents a 32 bits per pixel ARGB Bitmap, which equates to 8 bits/1 byte per individual colour component, either Alpha, Red, Green or Blue. Defining the for loop to increment by 4 results in each iteration advancing 4 bytes, or 32 bits: in essence 1 pixel.

Within the for loop we determine whether the colour expressed by the current pixel, adjusted by the threshold value, forms part of the colour range that should be updated. It is important to remember that an individual colour component is a byte value and can only be set to a value between 0 and 255 inclusive. A worked example follows.
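
As a hypothetical worked example (the pixel value is made up; the filter values match the Light Blue substitution table further down), assume SourceColor = (255, 223, 224), NewColor = (121, 188, 255) and ThresholdValue = 10, and consider a pixel with Red = 250, Green = 220, Blue = 228:

// The pixel falls within the threshold range of the source colour on every channel:
//   |250 - 255| < 10,  |220 - 223| < 10,  |228 - 224| < 10
//
// Each substituted channel preserves the pixel's deviation from the source colour:
//   resultRed   = 255 - 250 + 121 = 126
//   resultGreen = 223 - 220 + 188 = 191
//   resultBlue  = 224 - 228 + 255 = 251   (clamped to the range 0..255 if required)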

The Implementation

The ColorSubstitution method is implemented by the sample source code through a Windows Forms application. The ColorSubstitution method requires that the source Bitmap specified must be formatted as a 32 Bpp ARGB Bitmap. When the user loads a source image from the file system the sample application attempts to convert the selected file by invoking the Format32bppArgbCopy extension method, which targets the Bitmap class. The definition is as follows:

public static Bitmap Format32bppArgbCopy(this Bitmap sourceBitmap)
{
    Bitmap copyBitmap = new Bitmap(sourceBitmap.Width, sourceBitmap.Height, PixelFormat.Format32bppArgb);

    using (Graphics graphicsObject = Graphics.FromImage(copyBitmap))
    {
        graphicsObject.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
        graphicsObject.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        graphicsObject.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
        graphicsObject.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;

        graphicsObject.DrawImage(sourceBitmap,
                                 new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height),
                                 new Rectangle(0, 0, sourceBitmap.Width, sourceBitmap.Height),
                                 GraphicsUnit.Pixel);
    }

    return copyBitmap;
}
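
Putting the pieces together, a rough usage sketch might look as follows (the file paths and variable names are placeholders and not part of the sample source code):

// Load an image, ensure it is formatted as 32bpp ARGB,
// substitute near-white pixels with light blue and save the result.
Bitmap sourceBitmap = new Bitmap("daisy.jpg").Format32bppArgbCopy();

ColorSubstitutionFilter filter = new ColorSubstitutionFilter();
filter.SourceColor = Color.FromArgb(255, 223, 224);
filter.NewColor = Color.FromArgb(121, 188, 255);
filter.ThresholdValue = 10;

Bitmap resultBitmap = sourceBitmap.ColorSubstitution(filter);
resultBitmap.Save("daisy_light_blue.png", ImageFormat.Png);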

Colour Substitution Examples

The following section illustrates a few examples of colour substitution result images. The source image features Bellis perennis, also known as the common European Daisy (see Wikipedia). The image file is licensed under the Creative Commons Attribution-Share Alike 2.5 Generic license. The original image can be downloaded here. The following image is a scaled down version of the original:

Bellis_perennis_white_(aka)_scaled

Light Blue Colour Substitution

Colour Component    Source Colour    Substitute Colour
Red                 255              121
Green               223              188
Blue                224              255

Daisy_light_blue

Medium Blue Colour Substitution

Colour Component    Source Colour    Substitute Colour
Red                 255              34
Green               223              34
Blue                224              255

Daisy_medium_blue

Medium Green Colour Substitution

Colour Component    Source Colour    Substitute Colour
Red                 255              0
Green               223              128
Blue                224              0

Daisy_medium_green

Purple Colour Substitution

Colour Component    Source Colour    Substitute Colour
Red                 255              128
Green               223              0
Blue                224              255

Daisy_purple

Related Articles and Feedback

Feedback and questions are always encouraged. If you know of an alternative implementation or have ideas on a more efficient implementation please share in the comments section.

I’ve published a number of articles related to imaging and images; you can find URL links to them here:

