SUNY Institute of Technology

Department of Computer and Information Science

 

 

Four digital photography algorithms

 

Vladimir Omelko, April 2009

 

 

 

 

 

 

Abstract

This project presents image processing algorithm research done in the area of photography. The algorithms include color balancing by skin tones, skin softening, finding and using dominating colors with texture application for image border and background generation, and high dynamic range (HDR) processing. The first two algorithms are to be used in portrait photography, while the other two have a more general applicability.

Color balancing is done by converting skin tones to their typical values instead of using grey areas of the scene which may not always be available. Skin softening is achieved by using various filtering techniques. Image border and background generation is accomplished by analyzing dominating colors. Texture can be added using the Perlin noise technique (see appendix A-1). HDR photography processing consists of multi-exposure image generation and tone mapping in the second stage.

All mentioned algorithms are first defined and then implemented, with the exception of HDR processing that is based on two existing papers. However, an implementation of HDR that is different from those found in the literature is presented.

The ideas behind the presented algorithms may be extended in several directions.

Table of Contents

1.1 Analog to digital imaging

1.2 Feature enhancement algorithms

2.1 Color balance

2.2 Perfect skin

2.3 Image presentation

2.4 HDR and tone mapping

2.5 CImg

3.1 Color balancing by skin tones

3.2 Algorithm for skin smoothing

3.3 Automatic background or frame creation

3.4 HDR processing

4.1 Color balancing by skin tones

4.2 Algorithm for skin smoothing

4.3 Automatic background or frame creation

4.4 HDR processing

5.1 Work summary

5.2 Future work

A-1 Perlin noise

 

 

Chapter 1

Introduction

Since the days of the silver-halide process, photographers around the world have used various techniques to work with captured data and thus improve image presentation. The age of digital light capture and of powerful yet fully accessible personal computers provides opportunities for new data processing techniques bound to further enhance photographic presentation. The goal of this project is to define and implement image enhancement algorithms that can be used in portrait and other types of photography.

1.1 Analog to digital imaging

During the last decade, image acquisition equipment has moved almost completely into the digital realm, leaving traditional film technology behind. Many areas, from science and medical imaging to commercial and even amateur photography, have been affected.

Now that the digital format has become native, image files are first copied to and reviewed on a computer before they go to press. The processing stage that once took place in a film darkroom is these days referred to as the digital darkroom (see glossary). Images are processed using various software applications.

Powerful image processing applications are becoming handy not only for simple operations like cropping, straightening and exposure adjustment, but also for more mathematically complex ones that include noise removal, sharpening, various filtering, stitching, stacking and more.

Digital photography [5] has emerged as one of several forms of digital imaging that uses photographic equipment to make an image. Once acquired, a digital image can be further processed [6] using computer algorithms.

1.2 Feature enhancement algorithms

Digital photography has become the application area for many algorithms. After their application, useful information can be extracted from an image and used to further enhance it or to make other decisions. The focus of this project is on algorithms that take digital images (digital photographs) as input and produce digital images as output.

White balancing has been part of color image processing since the invention of color photography. Pleasing skin tones are very desirable in portrait photography for obvious reasons. The challenge of this project is to achieve pleasing color balance by performing global color adjustments based on skin tones.

Another challenge of portrait photography is achieving soft skin texture in post-processing. Modern media and advertisements depict perfect-looking people even though in reality their appearance is far from perfect. Achieving skin softening through the application of various filters is another goal of this project.

Organizers of art galleries, photographic frame designers and graphic designers spend significant resources to achieve optimum image presentation. An area of interest is to analyze an image and extract color information that is relevant to its presentation. What color mat should one cut to present a particular photograph? The same can be asked about the album page, the digital photo-frame and other forms of presentation. Can texture be added to the frame to further enhance presentation?

Many articles related to high dynamic range (HDR) processing have been published in recent years. The author provides an HDR processing implementation that is based on [7] and tone mapping described in [8]. As the final project effort, the author tried implementing his own HDR processing algorithm that does not require a tone mapping step.

Chapter 2

Background

There are many challenges in modern photography. In order to save time, various algorithms are used in the post-processing workflow. Photography as a form of art is fascinating, but various improvements can still be made during the post-processing stage.

2.1 Color balance

Normally, color balance is set from a white or off-white area of the image. Color balancing is critical in portrait photography, as a person looks unnatural with the wrong skin tone; it has less effect on images that do not depict people. The idea here is to use the person’s skin for setting the color balance. This should achieve a near-perfect skin tone, while the rest of the image, which does not include the person’s skin, is less important.

Two points can be added to justify this approach. The first is that the proposed skin tone correction is not likely to lead to color balance as correct as one set using a natural grey [1], but it should lead to more pleasing-looking skin. The second is that, in the author’s view, even modern professional cameras distort the color relations of the captured image compared to the actual scene, due to their limited dynamic range and the lack of consistency in camera-to-camera response curves. More on this topic is presented in the discussion of tone mapping, which is usually the second stage of HDR processing. Photographers commonly assert that every type of camera has its own look; the claim is plausible but subjective, as it would require precise measurements and supporting numbers.

2.2 Perfect skin

Looking at various magazine covers, one may ask whether the photographed people really have the same skin smoothness in real life. The conclusion is that they do not.

The use of powerful applications like Adobe Photoshop is, in many cases, what makes this notable difference in the look of the skin. The skin softening algorithm in this project is the author’s attempt to soften the skin and make it smooth, baby-like, free of blemishes, pores and pimples. In doing this, it is crucial for the final image to retain important details, so that the original portrait is enhanced rather than degraded.

The idea is to separate details from the lower frequency information of the given image, apply some sharpening to the high frequency data and a Gaussian blur to the low frequency areas. After putting the two back together, the skin should look more appealing.

2.3 Image presentation

When an image is placed into an album, it is frequently the case that the album page needs artistic arrangement in order to present the image in the most effective way. The same may be said about the frame that surrounds the image. One can think of gathering information from the image to be used in background creation. For example, color information found in the image can be selectively used in the background. Other parameters to consider are brightness, contrast and even texture.

The mat board artistically matches the artwork if its color matches one of the colors the artwork contains. Finding a few dominating colors in the image and using one of them as the color of the frame or background may enhance presentation. But what if two images are placed side by side for presentation? Choosing two dominating colors (one from each image) and then blending them based on some texture may generate a good presentation frame or background. The Perlin noise algorithm [9] can be used for texture creation (see appendix A-1).

2.4 HDR and tone mapping

The desirability of HDR has been recognized for some time but its wider usage was, until recently, limited by the lack of computer processing power.

HDR images enable photographers to record a greater range of tonal detail than a given camera can capture in a single photo. This opens up a whole new set of lighting possibilities which one might previously have avoided for technical reasons.

The limited dynamic range of digital cameras can be addressed by using multiple exposures [10]. This technique allows capturing a vast dynamic range. By varying the shutter speed, a digital camera can change how much light it lets in; for example, bracketing the same scene at 1/250 s, 1/60 s and 1/15 s captures highlight, midtone and shadow detail respectively. HDR imaging uses this characteristic by creating images composed of multiple exposures, which can surpass the dynamic range of a single exposure.

2.5 CImg

There are a few handy tools available that can assist in image processing by handling the most common image operations and functions, among them ImageJ [11], CImg [12] and OpenCV [13]. CImg was selected for this project because it is based on C++ templates, already implements many image processing functions, and comes as one large header file. CImg also keeps platform dependence minimal and conveniently isolated inside its display package. A small set of I/O functions is supported and can be easily expanded through a seamless interface to ImageMagick [14].
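As an illustration of the library’s compactness, here is a minimal sketch of a typical CImg session; the file name and blur radius are illustrative placeholders, not values from this project:

#include "CImg.h"
using namespace cimg_library;

int main()
{
    // Load an image, compute a mildly blurred copy, and show both.
    CImg<float> img("portrait.jpg");        // hypothetical input file
    CImg<float> soft = img.get_blur(2.0f);  // get_blur returns a new image
    (img, soft).display("Original vs. blurred");
    return 0;
}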

Chapter 3

Design

This chapter briefly describes techniques that are used in the project. It also outlines the steps for each algorithm. Those steps are further discussed in the implementation.

3.1 Color balancing by skin tones

Color balancing (see glossary) is a well known and documented color compensation step [1] that corrects the color shift occurring in challenging lighting situations due to photographic equipment imperfections. The correction values for color balance adjustments are calculated from naturally white or grey objects that are part of the scene, which is why the process is frequently referred to as white balance. A problem arises when there is no natural grey present in the image, or when the scene is lit with light sources of multiple colors. The most common way of color balancing in such cases is to adjust colors manually until they look pleasing, which can be difficult to do by eye. In some types of photography a slight color imbalance can be tolerated, but in others it is crucial. Portrait photography is color critical, since deviations in skin tonality are very noticeable.

The idea presented in this project is to automatically color balance the image by balancing the human skin tone, setting it to its typical values [15], [16]. This project uses a simple ready-made chart [17], shown in figure 3.1.1, that groups typical skin tones into three broad categories. The top row represents the typical highlight, midrange and shadow skin tones of African skin; the other rows represent Asian and Caucasian skin respectively.

The user has to select the color patch from the chart that best describes the expected skin tone of the face in the image under calibration, then select a point on that face to complete the calibration. The global colors are adjusted to bring the RGB color ratios of the image under test in line with those of the chart.

Figure 3.1.1 – Skin tone chart

3.2 Algorithm for skin smoothing

This algorithm is used in portrait post-processing to create smooth, good looking skin. In nature, good looking skin has no blemishes, noticeable pores or spots. A few post-processing techniques are available to achieve this desired look; one is described in [18] and [19]. These techniques are mostly used in magazines, ads and other publications. They are well known; however, this project attempts to provide simplifications and an implementation using the CImg library. The algorithm can be reduced to the following three steps:

a) Compute a new image by applying a high pass filter to the original.

b) Compute a second new image that is a blurred version of the original.

c) Output the final image by blending the original with the image generated in b) based on the image generated in a).

Skin contains much less detail than other parts of the face; it is an area of very little contrast. Even a small amount of blurring makes the skin smooth and helps hide small blemishes like scars, pimples and pores. On the other hand, it is vital to retain the areas of high frequencies (hair, eyes, eyebrows etc.) that contain details, as those areas make the final image look sharp. A condensed sketch of the three steps follows.
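Here is a minimal CImg sketch of the three steps above, assuming src holds the input portrait; the kernel and blur radius are illustrative, and the full implementation in section 4.2 additionally normalizes, equalizes and blurs the mask:

// a) High pass mask via a Laplacian-style convolution kernel.
CImg<float> kernel(3,3,1,1, 0,-1,0, -1,4,-1, 0,-1,0);
CImg<float> mask = src.get_convolve(kernel).normalize(0,255);

// b) Blurred (low frequency) version of the original.
CImg<float> blurred = src.get_blur(3.0f);

// c) Blend: dark mask pixels keep the original (detail),
//    light mask pixels take the blurred version (smooth skin).
CImg<float> out(src);
cimg_forXYZV(out,x,y,z,v)
    out(x,y,z,v) = (mask(x,y,z,v)*blurred(x,y,z,v))/255
                 + src(x,y,z,v)*(1 - mask(x,y,z,v)/255);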

3.3 Automatic background or frame creation

The idea is the result of seeing numerous portraits printed on post cards, where each card design takes into account the image or set of images the card depicts. This assumes that the image does not completely cover the card. The same challenge arises when making album pages: it is difficult and time consuming to design an album page that works with the images pasted into it. Another example is finding the right mat board color and texture for a given image that is about to be framed in a frame shop.

This project uses a color finding algorithm to automatically calculate the set of dominating colors, which can be used for the border, frame, background or mat board of a given image. Some efforts at finding matching colors are described in [20] and [21], but those find matching colors for a given color, which is the opposite of the goal of this project.

Analyzing colors that are present in a given image should provide valuable input for deciding what color or set of colors the frame surrounding the image should retain. These are the steps:

a) Reduce the color depth to minimize the number of colors.

b) Calculate the few colors that are dominating.

c) Pick one of the dominating colors as the color of the frame.

d) Present the original image inside that frame.

If two or more images are used inside the presentation, then the dominating colors have to be calculated for every image, yielding one color set per image. The next step is to calculate a new set of colors as the intersection of all the color sets. A color from the new set can then be used as the color of the frame.

To make it more interactive and to observe the visual effect, the program should show these frames of different colors in a slide-show fashion, changing them every second, as sketched below.
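A hedged sketch of such a slide show using CImg’s display class; the names framed, img, border, colors and numColors are assumptions for illustration, not part of the project code:

// Cycle through the calculated palette, repainting the frame each second.
CImgDisplay disp(framed, "Frame preview");
for (int idx = 0; !disp.is_closed; idx = (idx + 1) % numColors)
{
    cimg_forXYV(framed, x, y, v)
    {
        bool inside = (x >= border && x < border + img.width &&
                       y >= border && y < border + img.height);
        framed(x, y, 0, v) = inside ? img(x - border, y - border, 0, v)
                                    : colors[idx][v];
    }
    framed.display(disp);
    disp.wait(1000); // one-second slide-show interval
}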

Another interesting idea is to blend two matching colors and use this blend to paint the presentation frame. In doing this, a texture generation algorithm is needed to achieve the blend. The Perlin noise algorithm [22] [23] is chosen for random texture generation in this project.

3.4 HDR processing

HDR processing normally includes two steps:

a)    Calculate one output image from multiple input images that represent different exposures.

b)    Convert the HDR image so it can be viewed.

The high dynamic range output image consists of a weighted average of the multi-exposed input images, and thus contains information captured by each of the input images. This is achieved by implementation of Mark Robertson’s algorithm described in [7].

Tone mapping techniques are used to display high contrast images on devices with limited dynamic range of luminance values. Print-outs, CRT or LCD monitors, and projectors all have a limited dynamic range, incapable of full range of light intensity reproduction. Tone mapping addresses the problem of contrast reduction from the scene values to the displayable range while preserving the image details and color appearance.

In this project, the author implemented Drago’s adaptive logarithmic mapping algorithm described in [8].

As the final part of this project, and to conclude the HDR processing research, the author attempted to implement his own simple HDR processing. The idea arose when it was noted that the implementation of Robertson’s HDR stacking algorithm causes some clipping in highlight areas.

Chapter 4

Implementation

This chapter is the most important part of this report, as it describes all the experimentation work in the project. This chapter provides formulas and arguments that justify the implementation. C++ source code listings are included where applicable to illustrate the implementation. Included are numerous images that give a clearer illustration of before and after effects.

4.1 Color balancing by skin tones

Consider the portrait with an off white balance shown in Figure 4.1.1. The task is to color balance it using a skin sample and the template chart shown in Figure 3.1.1.

Figure 4.1.1 – Skin that needs color balance

Grey level color balance can be achieved by the simple matrix multiplication presented in 4.1.1:

$$\begin{pmatrix} R_{new} \\ G_{new} \\ B_{new} \end{pmatrix} = \begin{pmatrix} Grey_{smpl}/R_{smpl} & 0 & 0 \\ 0 & Grey_{smpl}/G_{smpl} & 0 \\ 0 & 0 & Grey_{smpl}/B_{smpl} \end{pmatrix} \begin{pmatrix} R_{pix} \\ G_{pix} \\ B_{pix} \end{pmatrix} \qquad (4.1.1)$$

New pixel values R_new, G_new, B_new are calculated from old pixel values R_pix, G_pix, B_pix and the sample pixel values R_smpl, G_smpl, B_smpl taken where the color is known to be grey, with Grey_smpl = (R_smpl + G_smpl + B_smpl)/3.

To color balance using skin tones, we may use the same formula, except that the coefficients based on the known grey pixel of the image are replaced with coefficients RGB_coef based on a sample pixel selected from the skin area (4.1.2):

$$\begin{pmatrix} R_{new} \\ G_{new} \\ B_{new} \end{pmatrix} = \begin{pmatrix} R_{coef} & 0 & 0 \\ 0 & G_{coef} & 0 \\ 0 & 0 & B_{coef} \end{pmatrix} \begin{pmatrix} R_{pix} \\ G_{pix} \\ B_{pix} \end{pmatrix} \qquad (4.1.2)$$

Now, the challenge is in finding values for RGB_coef. An attempt may be based on the pixel RGB_templSkin selected from the skin template image shown in Figure 3.1.1, and the pixel RGB_smplSkin selected from the skin area of the original image we are trying to balance.

The ratios between the RGB channels of the selected template pixel from Figure 3.1.1, with Grey_templSkin the average of its three channels, are given by equation 4.1.4:

$$r_{templ} = \frac{R_{templSkin}}{Grey_{templSkin}}, \quad g_{templ} = \frac{G_{templSkin}}{Grey_{templSkin}}, \quad b_{templ} = \frac{B_{templSkin}}{Grey_{templSkin}} \qquad (4.1.4)$$

The same may be done for the sample pixel selected from the original image we are trying to color balance (equation 4.1.6):

$$r_{smpl} = \frac{R_{smplSkin}}{Grey_{smplSkin}}, \quad g_{smpl} = \frac{G_{smplSkin}}{Grey_{smplSkin}}, \quad b_{smpl} = \frac{B_{smplSkin}}{Grey_{smplSkin}} \qquad (4.1.6)$$

From these, RGB_coef may be found using equation 4.1.7:

$$R_{coef} = \frac{r_{templ}}{r_{smpl}}, \quad G_{coef} = \frac{g_{templ}}{g_{smpl}}, \quad B_{coef} = \frac{b_{templ}}{b_{smpl}} \qquad (4.1.7)$$

Below is a snippet of the source code that achieves this; the sample color is averaged over a small window around the mouse click:

   float rSmpl=0,gSmpl=0,bSmpl=0,greySmpl=0;

   for (int x=disp.mouse_x-3; x<disp.mouse_x+3; x++)

   {

    for (int y=disp.mouse_y-3; y<disp.mouse_y+3; y++)

    {

     if (rSmpl==0)

     {

      rSmpl = images[0](x,y,0,0);

      gSmpl = images[0](x,y,0,1);

      bSmpl = images[0](x,y,0,2);

     }

     else

     {

      rSmpl = (rSmpl + images[0](x,y,0,0))/2;

      gSmpl = (gSmpl + images[0](x,y,0,1))/2;

      bSmpl = (bSmpl + images[0](x,y,0,2))/2;

     }

    }

   }

   //rSmpl = images[0](disp.mouse_x,disp.mouse_y,0,0);

   //gSmpl = images[0](disp.mouse_x,disp.mouse_y,0,1);

   //bSmpl = images[0](disp.mouse_x,disp.mouse_y,0,2);

   greySmpl = (rSmpl+gSmpl+bSmpl)/3;

   cout << "SAMPLE PIXEL:" << rSmpl << "," << gSmpl << "," << bSmpl << "," << (int)greySmpl << endl;

 

   float rCoeff=0,gCoeff=0,bCoeff=0;

   rCoeff = (skinR/skinGrey)/(rSmpl/greySmpl);

   gCoeff = (skinG/skinGrey)/(gSmpl/greySmpl);

   bCoeff = (skinB/skinGrey)/(bSmpl/greySmpl);

   cout << "COEFFICIENTS:" << rCoeff << "," << gCoeff << "," << bCoeff << endl;

   

   for (unsigned int x=0; x<images[0].width; x++)

   {

    for (unsigned int y=0; y<images[0].height; y++)

    {

     float rPix = images[0](x,y,0,0);

     float gPix = images[0](x,y,0,1);

     float bPix = images[0](x,y,0,2);

     images[1](x,y,0,0) = rPix*rCoeff;

     images[1](x,y,0,1) = gPix*gCoeff;

     images[1](x,y,0,2) = bPix*bCoeff;

    }

   }

   images[1].normalize(0,255);

 

The product of this simple algorithm is presented in figures 4.1.2 and 4.1.3.

Figure 4.1.2 – Sample color balance

Figure 4.1.3 – Another sample color balance

In each series, the original image is followed by the color balanced image, followed by the template chart. The selected template cell is shown in the red square.

4.2 Algorithm for skin smoothing

To show this concept, the sample portraits presented in figure 4.2.1 are used.

This concept started with the idea of using the Fast Fourier Transform (FFT) to convert the image into the frequency domain. An attempt was then made to modify the image data in the frequency domain so that the final image would retain high frequencies when converted back into the spatial domain using the inverse FFT. In practice, it was difficult to decide which area of the image should be excluded in the frequency domain so that the final image retains enough detail. In addition, the frequency mask has to have a “feathering” property to achieve a decent gradient roll-off from high frequencies into low frequencies. Yet another issue is that the FFT function in the CImg library accepts only images with one-to-one aspect ratios, making its usage not very practical.

Figure 4.2.1 – Sample portraits

To overcome these issues, it was decided to filter frequencies in the spatial domain by applying a convolution filter [24] [25] [26] that takes an original image and a 3x3 matrix as parameters. CImg already implements a convolve function, so it was used.

For a digital image, convolution can be presented by equation 4.2.1:

$$g(r,c) = \sum_{\rho}\sum_{\kappa} h(\rho,\kappa)\, f(r+\rho,\, c+\kappa) \qquad (4.2.1)$$

The object h(ρ,κ) in the equation is a weighting function or, in the discrete case, a rectangular matrix of numbers. The matrix is a moving window: pixel (r,c) in the output image g is the weighted sum of the pixels from the original image f in the neighborhood of (r,c) traced by the matrix. Each pixel in the neighborhood of (r,c) is multiplied by the corresponding matrix value, and the sum of those products is the value of pixel (r,c) in the output image. Depending on the values in the matrix, convolution delivers completely different effects: if the matrix is all 1’s, the final image will be blurred. The matrix used in this project is chosen to have the opposite effect, such that fine details are emphasized in the final image.
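As a sketch of how equation 4.2.1 evaluates, here is a naive single-channel implementation for one output pixel, ignoring border handling for brevity; the function name and signature are illustrative, not part of the project code:

// Weighted sum of the 3x3 neighborhood of (r,c) under kernel h.
float convolvePixel(const CImg<float>& f, const CImg<float>& h, int r, int c)
{
    float sum = 0.0f;
    int half = h.width / 2; // 3x3 kernel -> half = 1
    for (int dr = -half; dr <= half; ++dr)
        for (int dc = -half; dc <= half; ++dc)
            sum += h(dc + half, dr + half) * f(c + dc, r + dr);
    return sum;
}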

Several different 3x3 matrices were used to achieve acceptable separation of high frequency data:

CImg<float> mask(3,3,1,1,0,-1,0,-1,4,-1,0,-1,0);

CImg<float> mask2(3,3,1,1,-1,0,0,0,2,0,0,0,-1);

CImg<float> mask3(3,3,1,1,0,0,-1,0,2,0,-1,0,0);

CImg<float> mask4(3,3,1,1,-1,-1,-1,-1,8,-1,-1,-1,-1);

 

Written out as 3x3 matrices (CImg lists the values row by row):

mask:          mask2:         mask3:         mask4:
 0 -1  0       -1  0  0        0  0 -1       -1 -1 -1
-1  4 -1        0  2  0        0  2  0       -1  8 -1
 0 -1  0        0  0 -1       -1  0  0       -1 -1 -1

The most appealing results were achieved by the application of mask and mask4 (shown above). Application of convolution allows separation of high frequency details. Symmetrical matrices respond to edges in both dimensions and therefore lead to two-dimensional separation of high frequencies. The non-symmetrical matrices mask2 and mask3 emphasized high frequencies diagonally and cannot be used. There are, of course, other symmetrical matrices that yield appropriate results.

Here is the snippet of the code that implements these steps:

images[1]=images[0].get_convolve(mask);

images[1].normalize(0,255);

images[1].equalize_histogram();

 

cimg_forXYZV(images[1],x,y,z,v) images[1](x,y,z,v)=

(images[1](x,y,z,0)+

images[1](x,y,z,1)+

images[1](x,y,z,2))/3;

 

images[1].normalize(0,255);

images[1].blur(blurMask);

 

After applying the convolution mask, normalizing, and adjusting contrast with CImg’s equalize_histogram function, it was decided to convert the image to grayscale. This was done by going through every pixel and averaging the RGB channels. After another normalization, some blurring was applied to smooth the transition into lower frequencies. These steps generate a grayscale image in which high frequencies are “hidden” in the thick black lines and low frequencies lie in the lighter areas. The final blurring worked well, as it generated a grey halo effect surrounding the high frequency lines, shown in figure 4.2.2.

Figure 4.2.2 – High frequency mask

The idea behind the next step was to isolate the low frequencies of the original image and apply some blurring, which would soften the person’s skin and hide the blemishes. In practice, there is no need to isolate the low frequency information, as one simple blurring instantly destroys all high frequencies. Here is the code:

images[2]=images[0].get_blur(blur);

 

This operation returns the blurred image shown in figure 4.2.3.

Figure 4.2.3 – Blurred image (images[2])

The last step in the algorithm is to calculate one last image based on the generated information. After a few different considerations, a very simple approach was devised:

 

cimg_forXYZV(images[3],x,y,z,v) images[3](x,y,z,v) = images[2](x,y,z,v);

 

cimg_forXYZV(images[3],x,y,z,v) images[3](x,y,z,v) =

(images[1](x,y,z,v)*images[2](x,y,z,v))/255 +

(images[0](x,y,z,v)*(1-(images[1](x,y,z,v)/255)));

 

images[3].sharpen(10.0,1.0,0.0,0.0);

 

The blurred image (images[2]) was copied into images[3] of the image array. Then high frequency data from the original image (images[0]) were overlaid on top of the blurred image, in proportion to the pixel values of the high frequency mask (images[1]); in effect images[3] = images[1]/255 · images[2] + (1 − images[1]/255) · images[0]. Pixels “hidden” under the black high frequency lines are taken from the original, overwriting the corresponding pixels of the blurred image; grey mask pixels correspond to blended pixel values, and white mask pixels cause no change to the blurred image. A small amount of sharpening was added as a final touch on the generated image.

Figure 4.2.4 shows the final image after overlaying was performed. As one can tell, skin areas are very soft giving the portrait an almost “dreamy” look while retaining crucial details.

Figure 4.2.4 – Soft skin

This simple algorithm allows a portrait photographer to automatically generate portraits with more appealing, smooth looking skin, without fear of losing important details. Figure 4.2.5 shows similar results for other images.

Figure 4.2.5 – Other examples

4.3 Automatic background or frame creation

Finding dominating colors is dependent on the color depth [27] of the image. A 24-bit true color image uses 8 bits to represent red, 8 bits to represent green and 8 bits to represent blue. The 2^8 = 256 levels of each of these three colors can therefore be combined to give a total of 16,777,216 colors (256 × 256 × 256).

It is necessary to reduce the color depth before finding dominating colors. Color depth reduction can be achieved by so-called posterization (see glossary). Posterization occurs when an image’s apparent bit depth has been decreased so much that it has a visual impact; normally this effect is undesirable in digital imaging, as it degrades image quality. In this project, CImg’s quantize function is used to achieve the posterization effect. There is also a danger in reducing the color depth too much: it leads to too few colors and, more importantly, to a shift in the colors.
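For intuition, a manual posterization of one 8-bit channel might be sketched as follows; this is an illustration of the effect, and CImg’s quantize internals may differ:

// Snap a 0-255 value to the nearest of 'level' evenly spaced
// output levels (level >= 2 assumed).
unsigned char posterize(unsigned char v, int level)
{
    float step = 255.0f / (level - 1);   // spacing between output levels
    int bucket = (int)(v / step + 0.5f); // nearest level index
    return (unsigned char)(bucket * step);
}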

This snippet of code shows quantization:

   //

   // Apply posterization.

   //

   images2[1]=src2.get_resize(300,300,1,3).get_blur(blur).quantize(level,true);

 

After quantization, colors are counted and the statistics are stored in an array:

   //

   // Find dominating colors.

   //

   cimg_forXYZ(images2[1],x,y,z)

   {

    for (int k = 0; k < colLen; k++)

    {

     if (colors[k][0] == -1 || colors[k][1] == -1 || colors[k][2] == -1)

     {

      colors[k][0] = images2[1](x,y,z,0);

      colors[k][1] = images2[1](x,y,z,1);

      colors[k][2] = images2[1](x,y,z,2);

 

      dominance[k]++;

      break;

     }

     else if (colors[k][0] == images2[1](x,y,z,0) &&

colors[k][1] == images2[1](x,y,z,1) &&

colors[k][2] == images2[1](x,y,z,2))

     {

      dominance[k]++;

      break;

     }

    }

   }

 

Then the colors are sorted in descending order, so that the most dominating colors are at the top:

   //

   // Sort colors

   //

   int i, j, tmp;

   float tmpR, tmpG, tmpB;

   for (i = 1; i < colLen; ++i)

   {

    tmp = dominance[i];

    tmpR = colors[i][0];

    tmpG = colors[i][1];

    tmpB = colors[i][2];

    //for (j = i; j > 0 && dominance[j-1] > tmp; j--) //ascending order

    for (j = i; j > 0 && dominance[j-1] < tmp; j--) //descending order

    {

     dominance[j] = dominance[j-1];

     colors[j][0] = colors[j-1][0];

     colors[j][1] = colors[j-1][1];

     colors[j][2] = colors[j-1][2];

    }

    dominance[j] = tmp;

    colors[j][0] = tmpR;

    colors[j][1] = tmpG;

    colors[j][2] = tmpB;

   }

 

Figures 4.3.1 and 4.3.2 illustrate the results of this algorithm. The original image is to the left, followed by the image of reduced color depth, followed by the color palette that visually presents dominating colors. To the right is the original image that has been placed in the frame, which is filled with one of the calculated colors.

Figure 4.3.1 – Colored frame

Figure 4.3.2 – Another colored frame

The important point is that the frame can be filled with any color of the palette, providing good artistic value for portrait presentation, because all calculated colors are present in the image. To make the software more interactive, a slide-show effect was added that changes the color of the frame automatically. Figure 4.3.3 shows a larger sample of the final image in the frame.

Figure 4.3.3 – Larger colored frame

The next step is to calculate the dominating colors for more than one image. To achieve this, the dominating colors were first found for each image, and then the new color set was calculated as an intersection of all individual color sets. After some code reorganization, the following was done:

    //

    // Reduce Color Depth

    //

    images[iii][1]=src[iii].get_resize(300,300,1,3).get_blur(blur).quantize(level,true);

 

    //

    // Find all colors and how many times each used.

    //

    int lowerThreshold = 50;

    int upperThreshold = 205;

    findColors(images[iii][1], &colors[iii][0][0], &dominance[iii][0], colLen, lowerThreshold, upperThreshold);

 

    //

    // Sort Colors

    //

    sortColors(&colors[iii][0][0], &dominance[iii][0], false, colLen);

 

Below is the code that calculates the combined color set. Calculations are based on the fuzziness variable, which defines the maximum absolute difference allowed between two color values when they are compared; the fuzziness value thus controls the number of colors in the final set. Variable colorLen2 defines the number of dominating colors used in the comparison.

   int colorLen2 = 10;

   //

   // Generate Combined Colors.

   //

   for (int i = 0; i < colorLen2; i++)

   {

    bool found = false;

    for (int j = 1; j < imageCount; j++)

    {

     int k = 0;

     for (k = 0; k < colorLen2; k++)

     {

      if ((abs(colors[0][i][0] - colors[j][k][0]) <= fuzziness) &&

       (abs(colors[0][i][1] - colors[j][k][1]) <= fuzziness) &&

       (abs(colors[0][i][2] - colors[j][k][2]) <= fuzziness))

      {

       found = true;

       break;

      }

     }

     if (found)

     {

      if (j == (imageCount - 1))

      {

       for (int l = 0; l < colorLen2; l++)

       {

        if (colorsFinal[l][0] == -1 &&

colorsFinal[l][1] == -1 &&

colorsFinal[l][2] == -1)

        {

         cout << "FinalColorsCount = " << l+1 << endl;

         // Approximate between colors.

         colorsFinal[l][0] = (colors[0][i][0] + colors[j][k][0])/2;

         colorsFinal[l][1] = (colors[0][i][1] + colors[j][k][1])/2;

         colorsFinal[l][2] = (colors[0][i][2] + colors[j][k][2])/2;

         break;

        }

       }

     }

      continue;

     }

     else

      break;

    }

   }

 

Figure 4.3.4 illustrates the result of this operation with the fuzziness equal to 50; seven colors were found at this level. By experiment, a fuzziness value of 48 reduced the final color set to three members.

Figure 4.3.4 – Frame colored for two images.

There are multiple matching colors available, and more than one of them may be used for presentation. Good artistic presentation is frequently supported by a background that helps convey the art. People photographed in a studio are posed in front of a backdrop, and the color properties of the backdrop are carefully selected to match the color properties of the subject (skin color, clothes color). It is also important that the color blending is done according to a good texture.

The Perlin noise algorithm is known as a tool that can generate realistic, smooth textures, as if they were painted by an artist. Impressed by its capabilities, and to demonstrate the concept, the author provides an implementation of the two-dimensional Perlin noise function tweaked to generate good looking textures. Appendix A-1 describes the implementation of the Perlin noise function.

Figure 4.3.5 – Perlin noise application

 

Figure 4.3.5 illustrates a single color blended with black. The fuzziness was set to 40, which yielded three colors. Below is a snippet of the code showing that every background pixel was generated using the two-dimensional Perlin noise function:

   for (unsigned int x=0; x<imagesLast[1].width; x++)

   {

    for (unsigned int y=0; y<imagesLast[1].height; y++)

    {

     //

     // Generate noise c = 0...1

     //

     float c = (float)perlinNoise2d(x, y);

 

     //

     // Reverse

     //

     //c = 1 - c;

 

     //

     // Apply noise

     //

     imagesLast[1](x,y,0,0) *= c;

     imagesLast[1](x,y,0,1) *= c;

     imagesLast[1](x,y,0,2) *= c;

    }

   }

 

The function returns a value in the range 0 to 1, so a simple multiplication suffices to apply it. The texture may be reversed by setting variable c to 1 − c.

To demonstrate two-color blending (equation 4.3.1, a linear blend C_out = c · C_1 + (1 − c) · C_2), the following modification was made:

   for (unsigned int x=0; x<imagesLast[1].width; x++)

   {

    for (unsigned int y=0; y<imagesLast[1].height; y++)

    {

     //

     // Generate noise c = 0...1

     //

     float c = (float)perlinNoise2d(x, y);

 

     //

     // Reverse

     //

     //c = 1 - c;

 

     //

     // Apply noise by blending

     //

     imagesLast[1](x,y,0,0) = c * colorsFinal[colorIndex][0] +

(1 - c) * colorsFinal[colorIndex+1][0];

     imagesLast[1](x,y,0,1) = c * colorsFinal[colorIndex][1] +

(1 - c) * colorsFinal[colorIndex+1][1];

     imagesLast[1](x,y,0,2) = c * colorsFinal[colorIndex][2] +

(1 - c) * colorsFinal[colorIndex+1][2];

    }

   }

 

Color channels are calculated based on equations 4.3.2-4.3.4, one linear blend per channel:

$$R = c\,R_1 + (1-c)\,R_2 \quad (4.3.2), \qquad G = c\,G_1 + (1-c)\,G_2 \quad (4.3.3), \qquad B = c\,B_1 + (1-c)\,B_2 \quad (4.3.4)$$

If c equals 1, the first color is used; if c equals 0, the second is used. Anything in between is blended.

Figure 4.3.6 demonstrates the outcome of this technique. Colors are blended with very smooth and natural looking gradations.

Figure 4.3.6 - Perlin noise application with blending

 

4.4 HDR processing

As noted earlier, an HDR image consists of a weighted average of the multi-exposed input images, and thus contains information captured by each of the input images. This is achieved by an implementation of Mark Robertson’s algorithm described in [7].

His approach is to calculate the weight for every pixel of every exposure. The weights are chosen based on the confidence that the observed data is accurate. The response function of a camera (Figure 4.4.1) will typically be steepest, or most sensitive, towards the middle of its output range, or 128 for 8-bit data.

Figure 4.4.1 – Camera response function

 

As the output levels approach the extremes, 0 and 255, the sensitivity of the camera typically decreases. For this reason, a weighting function is chosen such that values near 128 are weighted more heavily than those near 0 and 255. The function chosen here is a Gaussian-like function (4.4.1),

$$w(y_{ij}) = \exp\left(-4 \cdot \frac{(y_{ij} - 127.5)^2}{127.5^2}\right) \qquad (4.4.1)$$

scaled and shifted so that w(0) = w(255) = 0 and w(127.5) = 1.0. Variable y_ij represents the j-th pixel value of the i-th image. This choice of weighting function implies very low confidence in the accuracy of pixel values near 0 and 255, and high confidence in the accuracy of pixel values near 128. The weighting function is shown in figure 4.4.2.

Figure 4.4.2 – Weighting function

The final equation for the desired high dynamic range image estimate can be presented as 4.4.2:

$$x_j = \frac{\sum_i w(y_{ij})\, t_i\, I(y_{ij})}{\sum_i w(y_{ij})\, t_i^2} \qquad (4.4.2)$$

Variable x_j represents the j-th pixel value in the HDR image, t_i represents the i-th image’s exposure time, and I(y) represents the camera response function for a given pixel value y. In this project, the camera response function is assumed linear, thus I = 0 if y = 0, I = 1 if y = 127.5, and I = 2 if y = 255.

In reality, the response function has to be calculated for every camera, but this is outside of the scope of this project.

Here is the source code that implements the above algorithm:

int w = input[0].dimx();

int h = input[0].dimy();

 

CImg<float> hdr(w,h,1,3);

 

//

// Assume linear response values I(yij) for 0 - 255

//

vector<ColorPixel> response;

response.resize(256, ColorPixel(0.0f, 0.0f, 0.0f));

for(unsigned int i = 0; i < 256; ++i)

{

  float val = (float)i / 128.0f;

  response[i] = ColorPixel(val, val, val);

}

 

//

// Weighting function wij "bell shape" 127.5 - max, 0 and 255 - min

//

vector<float> weights;

weights.resize(256, 0.0f);

 

weights[0] = 0.0f;

weights[255] = 0.0f;

 

for(unsigned int i = 1; i < 255; ++i)

{

  float tmp = (float)i - 127.5f;

  weights[i] = (float)exp(-4.0f * ( (tmp*tmp)/(127.5 * 127.5)));

}

 

//

// Calculate pixel values xj

//

float r_c, g_c, b_c, t, t2;

float r_d, g_d, b_d;

for(int j = 0; j < h; ++j)

{

  for(int i = 0; i < w; ++i)

  {

   r_c = g_c = b_c = 0.0f;

   r_d = g_d = b_d = 0.0f;

 

   for(unsigned int k = 0; k < input.size(); k++)

   {

    float cr = input[k](i,j,0,0);

    float cg = input[k](i,j,0,1);

    float cb = input[k](i,j,0,2);

 

    t = exp_times[k];

 

    t2 = t*t;

 

    r_c += weights[(int)cr] * t * response[(int)cr][0];

    g_c += weights[(int)cg] * t * response[(int)cg][1];

    b_c += weights[(int)cb] * t * response[(int)cb][2];

 

    r_d += weights[(int)cr] * t2;

    g_d += weights[(int)cg] * t2;

    b_d += weights[(int)cb] * t2;

   }

 

   if(r_d != 0.0f)

    r_c /= r_d;

   if(g_d != 0.0f)

    g_c /= g_d;

   if(b_d != 0.0f)

    b_c /= b_d;

 

   hdr(i,j,0,0) = r_c;

   hdr(i,j,0,1) = g_c;

   hdr(i,j,0,2) = b_c;

  }

}

 

As a result of this operation, a CImg object called hdr contains the HDR image. Recall that the calculated HDR image holds a float value for every pixel channel and thus cannot be shown directly on any existing output device; pixel values in such an image may go well beyond 0 to 255.

In order to show the HDR image, tone mapping has to take place, as contrast and color have to be recalculated to match the actual scene. Intuition suggests that simple normalization should be enough; the author indeed made that mistake at first.

Tone mapping addresses the problem of contrast reduction from the scene values to the displayable range while preserving the image details and color appearance. In this project, the author used Drago’s adaptive logarithmic mapping algorithm described in [8].

Drago first converts the image from the traditional RGB color space into the CIE XYZ color space [28], in which the X, Y, Z values are parameters derived from the red, green and blue values. This color space is rooted in the functionality of the human eye; it was deliberately designed so that the Y parameter is a measure of the brightness, or luminance, of a color. The chromaticity of a color is then specified by the two derived parameters x and y, two of the three normalized values that are functions of all three tristimulus values X, Y, and Z.
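Concretely, the normalized chromaticity coordinates mentioned above are:

$$x = \frac{X}{X+Y+Z}, \qquad y = \frac{Y}{X+Y+Z}, \qquad z = \frac{Z}{X+Y+Z} = 1 - x - y$$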

Equation 4.4.3 gives Drago’s adaptive logarithmic mapping, where L stands for luminance and is represented by the channel Y:

$$L_d = \frac{L_{dmax} \cdot 0.01}{\log_{10}(L_{wmax} + 1)} \cdot \frac{\ln(L_w + 1)}{\ln\!\left(2 + 8\left(\frac{L_w}{L_{wmax}}\right)^{\ln b / \ln 0.5}\right)} \qquad (4.4.3)$$

Displayed luminance values L_d are calculated for each pixel of the HDR image. The luminance values L_w and L_wmax (world luminance and maximum world luminance, both normalized by the average luminance) characterize the scene, and L_dmax is the maximum luminance capability of the displaying medium. L_dmax is used as a scale factor to adapt the output to its intended display; for example, L_dmax = 100 cd/m2 for CRT displays. The parameter b is essential to adjust the compression of high values and the visibility of details in dark areas; its value is suggested to stay between 0.0 and 1.0.

Once new pixel values are calculated, the final image needs to be converted back into RGB color space. Below is the source code showing these calculations:

//

// Convert RGB to XYZ.

//

CImg<float> hdrXYZ = hdr.get_RGBtoXYZ();

 

float b = 0.5f; //0.0 ... 1.0

int width = hdrXYZ.width;

int height = hdrXYZ.height;

 

//

// Calculate Luminance

//

 

float avgL = 0.0f;

float maxLw = 0.0f;

float maxLd = 100.0f;

 

int size = width * height;

for(int x=0; x<width; x++)

{

  for(int y=0; y<height; y++)

  {

   avgL += hdrXYZ(x,y,0,1);

   maxLw = ( hdrXYZ(x,y,0,1) > maxLw ) ? hdrXYZ(x,y,0,1) : maxLw ;

  }

}

avgL = avgL/size; // get average luminance

 

//

// Tone mapping of every pixel

//

CImg<float> tmp(hdrXYZ);

 

maxLw /= avgL; // normalize maximum luminance by average luminance

 

for(int y=0; y<height; y++)

{

  for(int x=0; x<width; x++)

  {

   float Lw = hdrXYZ(x,y,0,1) / avgL; // normalize luminance by average luminance

   float Ld = ( (maxLd * 0.01f) * log(Lw+1.0f) / log (2.0f + pow(Lw/maxLw, log(b)/log(0.5f)) * 8.0f) ) / log10(maxLw + 1.0f);

  

   tmp(x,y,0,1) = Ld;

  }

}

 

//

// Scale XYZ image based on luminance.

//

for(int x=0; x<width ; x++)

{

  for(int y=0 ; y<height ; y++)

  {

   float scale = tmp(x,y,0,1) / hdrXYZ(x,y,0,1);

   hdrXYZ(x,y,0,0) *= scale;

   hdrXYZ(x,y,0,1) *= scale;

   hdrXYZ(x,y,0,2) *= scale;

  }

}

 

//

// Convert back from XYZ to RGB.

//

hdrXYZ.XYZtoRGB().display();

 

The CImg library came in handy again, as the RGB to XYZ conversion functions are already implemented. Note that before the final conversion from XYZ to RGB took place, pixel values were scaled, since only luminance (Ld) values were calculated and stored in channel one; channels zero and two are scaled based on channel one.

Figure 4.4.3 shows the set of input images that were captured at different shutter speeds. The darkest images expose for the highlights, and the brightest for the shadows.

 

Figure 4.4.3 – HDR input images

 

 

Figure 4.4.4 – HDR output image

HDR output is shown in figure 4.4.4. The image’s appearance confirms that HDR processing delivered satisfactory results. Shadow areas inherited all the details from the images that were exposed for the shadows. The highlights are still a bit strong, but clearly taken from the images that were exposed for the highlights. The appearance of a few light green pixels in the nose area of the subject suggests that some clipping still takes place in the highlight area. This deficiency needs to be investigated and accounted for during the calculations.

Figures 4.4.5 and 4.4.6 show another HDR processing example with three input images.

Figure 4.4.5 – Three input images

Figure 4.4.6 – Another HDR output image

Since the image generated in figure 4.4.6 confirms clipping in the highlights, efforts were made to identify whether the source of the problem was HDR stacking or tone mapping. To test this, the author modified the HDR stacking algorithm. The idea of weights is retained from Robertson’s algorithm, but the way values are averaged is different: the averaging does not take the camera response curve into account, assuming it to be linear. This new algorithm is defined by the following equations:

$$x_j = \sum_i y_{ij}\, w(y_{ij}) \qquad (4.4.4)$$

In 4.4.4, variable y_ij represents the j-th pixel of the i-th image, and x_j represents the j-th pixel value of the stacked HDR image.

To work with individual color channels, the c (channel) index is added in 4.4.5, and the weight is calculated for each color channel separately as in 4.4.6:

$$x_{jc} = \sum_i y_{ijc}\, w_{ijc} \qquad (4.4.5)$$

$$w_{ijc} = w(y_{ijc}) \qquad (4.4.6)$$

Here is the source code:

int w = input[0].dimx();

int h = input[0].dimy();

 

CImg<float> hdr(w,h,1,3);

 

//

// Weighting function wij "bell shape" 127.5 - max, 0 and 255 - min

//

vector<float> weights;

weights.resize(256, 0.0f);

 

weights[0] = 0.0f;

weights[255] = 0.0f;

 

for(unsigned int i = 1; i < 255; ++i)

{

  float tmp = (float)i - 127.5f;

  weights[i] = (float)exp(-4.0f * ( (tmp*tmp)/(127.5 * 127.5)));

}

 

for(int j = 0; j < h; ++j)

{

  for(int i = 0; i < w; ++i)

  {

   float r = 0.0f;

   float g = 0.0f;

   float b = 0.0f;

   for(unsigned int k = 0; k < input.size(); k++)

   {

    r += input[k](i,j,0,0) * weights[(int)input[k](i,j,0,0)];

    g += input[k](i,j,0,1) * weights[(int)input[k](i,j,0,1)];

    b += input[k](i,j,0,2) * weights[(int)input[k](i,j,0,2)];

   }

 

   hdr(i,j,0,0) = r;

   hdr(i,j,0,1) = g;

   hdr(i,j,0,2) = b;

  }

}

 

The HDR image shown in figure 4.4.7 was computed from the images shown in figure 4.4.5.

Figure 4.4.7 – HDR image based on equation 4.4.6

Figure 4.4.7 makes apparent the highlight clipping that occurs in the implementation of Robertson’s algorithm. It is also noticeable that this HDR image has very saturated colors. The saturation is caused by the separate, per-channel weight calculation; a better name for the effect would be color deviation.

To fix this, the author proposes that each channel be weighted by the same value. The weight is calculated from the pixel’s grey (average) value as in equation 4.4.8, and the pixel values are calculated based on equation 4.4.7:

$$x_{jc} = \sum_i y_{ijc}\, w_{ij} \qquad (4.4.7)$$

$$w_{ij} = w\!\left(\frac{y_{ijR} + y_{ijG} + y_{ijB}}{3}\right) \qquad (4.4.8)$$

This change is illustrated in the source code below:

for(int j = 0; j < h; ++j)

{

  for(int i = 0; i < w; ++i)

  {

   float r = 0.0f;

   float g = 0.0f;

   float b = 0.0f;

   for(unsigned int k = 0; k < input.size(); k++)

   {

    float cgrey = (input[k](i,j,0,0) + input[k](i,j,0,1) + input[k](i,j,0,2)) / 3;

    r += input[k](i,j,0,0) * weights[(int)cgrey];

    g += input[k](i,j,0,1) * weights[(int)cgrey];

    b += input[k](i,j,0,2) * weights[(int)cgrey];

   }

  

   hdr(i,j,0,0) = r;

   hdr(i,j,0,1) = g;

   hdr(i,j,0,2) = b;

  }

}

 

Figure 4.4.8 shows the result of this modification, which achieves the goal of reducing the color shift.

Figure 4.4.8 – HDR image with reduced color shift.

Chapter 5

Conclusions

In the author’s opinion, this project was productive, as most goals specified in the introduction were reached. The research also opened opportunities for future work that could improve the implementations presented in this project and develop them to the point where they can be used in a photographer’s professional workflow.

5.1 Work summary

The algorithm for color balancing by skin tones enhances the image by removing a global color cast. It can be used to achieve realistic skin tones, but it cannot be used for, nor was it designed for, fine white balance adjustments. The small set of skin tone patches in figure 3.1.1 represents only broad groups; in reality, every skin tone is unique. The primary goal of every portrait photographer is to produce good looking portraits in a short amount of time, so automation like this is definitely an advantage.

The skin softening algorithm is another time saving tool for a portrait photographer. This algorithm delivered positive results with a relatively small number of mathematical calculations. It was not the author’s intent to challenge commercial software that provides similar solutions, but to confirm that this simple idea works. Figure 4.2.4 is a good example of the algorithm’s performance.

Automatic background creation is one of the most interesting ideas here, one the author had been considering for at least two years. There is a potentially large number of applications, ranging from photo-albums, frames, backgrounds and photographic backdrops to digital photo-frames, that would benefit from such an algorithm. The Perlin noise generator can craft very realistic textures, and these textures were used to blend colors, adding valuable artistic enhancement.

HDR stacking and tone mapping are popular topics, especially in nature photography. Advancements in digital cameras are significant, but they are still inferior to low ISO film in terms of dynamic range, which probably explains the current popularity of HDR processing. The author’s intent was to implement Robertson’s algorithm for HDR stacking and Drago’s algorithm for tone mapping. After implementation, some clipping was discovered in the highlight areas. This imperfection was not resolved within that implementation, but the author devised a modified HDR stacking algorithm that removes the clipping.

5.2 Future work

Based on the effects of convolution, another algorithm, multi-resolution composition, could be studied to overcome the camera’s depth of field (DOF) limitation. In such an algorithm, multiple images are acquired at different focal points and then overlaid based on the amount of high frequencies in each image. After such an operation, the DOF of the final image should be expanded; a sketch follows below. The author is aware of commercial software called TuFuse that allows such blending, but it is unclear what logic TuFuse uses to achieve these effects. This merits further investigation and a more open implementation.
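A minimal sketch of how such a composition might work, assuming two already registered images img0 and img1 focused at different depths, and reusing the high frequency mask idea from section 4.2 (all names and parameter values are illustrative assumptions):

// Estimate local sharpness of each source from its high frequency energy,
// then pick, per pixel, the source that is locally sharpest.
CImg<float> lap(3,3,1,1, 0,-1,0, -1,4,-1, 0,-1,0);
CImg<float> e0 = img0.get_convolve(lap).abs().blur(2.0f);
CImg<float> e1 = img1.get_convolve(lap).abs().blur(2.0f);

CImg<float> fused(img0);
cimg_forXYZV(fused,x,y,z,v)
    fused(x,y,z,v) = (e0(x,y,z,v) >= e1(x,y,z,v)) ? img0(x,y,z,v)
                                                  : img1(x,y,z,v);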

A Perlin noise texture is a valuable addition to an automatic background creation algorithm. It would be beneficial if the texture of the Perlin noise was dependent on the image content. More research is needed to validate this point.

References

[1] White balance, http://en.wikipedia.org/wiki/White_balance

[2] HDR imaging, http://en.wikipedia.org/wiki/High_dynamic_range_imaging

[3] Luminance, http://en.wikipedia.org/wiki/Luminance

[4] Radiance, http://en.wikipedia.org/wiki/Radiance

[5] Digital photography, http://en.wikipedia.org/wiki/Digital_photography

[6] Digital image processing, http://en.wikipedia.org/wiki/Digital_image_processing

[7] Mark A. Robertson, Sean Borman, and Robert L. Stevenson, Dynamic range improvement through multiple exposures, Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556, http://www.seanborman.com/publications/icip99a.pdf

[8] F. Drago, K. Myszkowski, T. Annen, N. Chiba, Adaptive Logarithmic Mapping For Displaying High Contrast Scenes, Iwate University, Morioka, Japan / MPI Informatik, Saarbrücken, Germany, http://www.mpi-inf.mpg.de/resources/tmo/logmap/logmap.pdf

[9] Perlin noise, http://freespace.virgin.net/hugo.elias/models/m_perlin.htm

[10] High dynamic range, http://www.cambridgeincolour.com/tutorials/high-dynamic-range.htm

[11] Image J, http://rsbweb.nih.gov/ij/

[12] CImg Library, http://cimg.sourceforge.net/

[13] Open Computer Vision Library, http://sourceforge.net/projects/opencvlibrary/

[14] Introduction to ImageMagick, http://www.imagemagick.org/script/index.php

[15] Pleasing Skin Tones, http://www.smugmug.com/help/skin-tone

[16] Skin Tones Tutorial in Photoshop, http://www.idigitalemotion.com/tutorials/guest/skin_tone/skintone.html

[17] Skin Tones, http://www.polykarbon.com/tutorials/skintones/tone1.htm

[18] Professional Skin Smoothing, http://freeonlineclasses.net/photoshop-tutorials/photo-retouching/professional-skin-smoothing.html

[19] Smooth skin, http://www.lunacore.com/photoshop/tutorials/tut020.htm

[20] Website Color Match, http://www.hypergurl.com/colormatch.php

[21] The Color Wizard, http://www.colorsontheweb.com/colorwizard.asp

[22] Andreas Jönsson, Generating Perlin Noise, http://www.angelcode.com/dev/perlin/perlin.asp

[23] Perlin noise, http://freespace.virgin.net/hugo.elias/models/m_perlin.htm

[24] Marshall Tappen, Computer Vision, http://www.cs.ucf.edu/~mtappen/cap5415/lecs/lec1.pdf

[25] Convolution-based Operations, http://www.ph.tn.tudelft.nl/Courses/FIP/noframes/fip-Convolut-2.html

[26] Lode Vandevenne, Image filtering, http://student.kuleuven.be/~m0216922/CG/filtering.html

[27] Color depth, http://en.wikipedia.org/wiki/Color_depth

[28] CIE 1931 color space, http://en.wikipedia.org/wiki/CIE_1931_color_space

[29] Ken Perlin, Making noise, http://www.noisemachine.com/talk1/

 

Glossary

Color balance - In photography and image processing, color balance [1] is the global adjustment of color intensities. The ultimate goal is to render colors as they naturally appear at the scene. Color balance is also sometimes referred to as white balance.

Digital darkroom - The hardware, software and techniques used in digital photography that replace the darkroom equivalents, such as enlarging, cropping, dodging and burning, as well as processes that do not have a film equivalent.

RAW file - A vendor specific image file that retains the most data a particular digital camera can deliver based on the hardware. Canon RAW files have a CR2 extension, Nikon RAW files have a NEF extension. Vendors normally supply RAW converters to convert into common image file formats such as TIFF, BMP, JPEG and so on.

High dynamic range (HDR) imaging – A set of techniques that allow a greater dynamic range of luminances between light and dark areas of a scene than normal digital imaging techniques [2].

Luminance – A photometric measure of the luminous intensity per unit area of light traveling in a given direction [3]. It describes the amount of light that passes through or is emitted from a particular area, and falls within a given solid angle. The SI unit for luminance is candela per square meter (cd/m2). The CGS unit for luminance is the stilb, which is equal to one candela per square centimeter or 10 kcd/m2.

Radiance and spectral radiance – Radiometric measures that describe the amount of light that passes through or is emitted from a particular area, and falls within a given solid angle in a specified direction [4]. They are used to characterize both emission from diffuse sources and reflection from diffuse surfaces. The SI unit of radiance is the watt per steradian per square meter (W·sr−1·m−2).

Appendix

A-1 Perlin noise

Random number generators are used to create unpredictability, so that generated objects appear more natural. Even though random number generators are widely used, their raw output does not look very natural. The Perlin noise function addresses this by applying a smoothing function over the data set generated by the random function, and it may be modified to suit different needs. Perlin noise is described in [29] and [23].

In this project, the author used a two-dimensional Perlin noise function for the automatic background creation algorithm. Below is the source code listing:

//

// Perlin Noise

//

 

#define PI 3.14159265

 

bool init_ = false;

int r1, r2, r3;

 

double noise(int x, int y)

{

    if (!init_)

{

  // Initialize random seed.

  srand((unsigned int)time(NULL));

 

  // generate number 1000 - 10000

  r1 = rand() % 10000 + 1000;

  // generate number 100000 - 1000000

  r2 = rand() % 1000000 + 100000;

  // generate number 1000000000 - 2000000000

  r3 = rand() % 2000000000 + 1000000000;

 

  init_ = true;

}

 

int n = x + y * 57;

    n = (n<<13) ^ n;

 

    return ( 1.0 - ( (n * (n * n * r1 + r2) + r3) & 0x7fffffff) / 1073741824.0);

}

 

double interpolate(double x, double y, double a)

{

    double val = (1 - cos(a * PI)) * .5;

    return  x * (1 - val) + y * val;

}

 

double smooth(double x, double y)

{

    double n1 = noise((int)x, (int)y);

    double n2 = noise((int)x + 1, (int)y);

    double n3 = noise((int)x, (int)y + 1);

    double n4 = noise((int)x + 1, (int)y + 1);

 

    double i1 = interpolate(n1, n2, x - (int)x);

    double i2 = interpolate(n3, n4, x - (int)x);

 

    return interpolate(i1, i2, y - (int)y);

}

 

double perlinNoise2d(int x, int y)

{

double frequency = 0.015; // 0.015 Frequency gives you number of noise values defined between each 2-dimensional point.

    double persistence = 0.55; // 0.65 Persistence is a constant multiplier adjusting our amplitude in each iteration.

    double octaves = 8; // 8 Number of iterations over the noise function for a particular point. With each iteration, the frequency is doubled and the amplitude is multiplied by the persistence.

    double amplitude = 0.7; // 0.0 to 1.0 Amplitude is the maximum value added to the total noise value.

 

    double total = 0.0;

   

    for(int lcv = 0; lcv < octaves; lcv++)

    {

        total = total + smooth(x * frequency, y * frequency) * amplitude;

        frequency = frequency * 2;

        amplitude = amplitude * persistence;

    }

 

    if(total < 0) total = 0.0;

    if(total > 1) total = 1.0;

 

    return total;

}

//

// Perlin Noise

//