What is color science and why does it matter?

Color science is the latest term to hit the internet, thrown around by anyone with an opinion on cameras. These days, you can’t get through any camera review, whether of an actual camera or a smartphone, without someone mentioning how good or bad the color science is.

So what exactly is the color science of a camera, and is it really as important as thecoolcameraguy69 says in some YouTube comment? Let’s find out.

What is color?

Before we find out what color science is, we must learn what color is and how it comes about in a digital image. Color (or chroma or chrominance) is one part of every digital image, the other being luminance.

To capture color information, digital cameras usually use a color filter array (CFA) laid on top of the sensor. The most popular CFA is the Bayer filter, which is a 2×2 mosaic consisting of one red, one blue, and two green filters. The Bayer filter uses two green filters for luminance information, as our eyes are more sensitive to luminance and to the color green.

Bayer color filter
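To make the mosaic idea concrete, here is a minimal sketch of how an RGGB Bayer pattern samples one color per pixel. The function names and the nested-list image format are my own illustration, not anything a real camera pipeline uses:

```python
# Sketch: sampling a full-color image through an RGGB Bayer pattern.
# Each photosite keeps only one of its three channels, chosen by its
# position in the repeating 2x2 mosaic (R G / G B).

def bayer_channel(row, col):
    """Return which channel ('R', 'G' or 'B') a photosite records."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

def mosaic(rgb_image):
    """Reduce an H x W image of (r, g, b) tuples to one value per pixel."""
    index = {'R': 0, 'G': 1, 'B': 2}
    return [
        [pixel[index[bayer_channel(r, c)]] for c, pixel in enumerate(row)]
        for r, row in enumerate(rgb_image)
    ]
```

Note that in every 2×2 block, two of the four pixels come out green, matching the Bayer layout described above.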

The Bayer CFA filters the light that passes through to the sensor, so each photosite or pixel on the sensor receives a specific color value: red, blue, or green. This results in an incomplete picture, which is then completed using a process called demosaicing or debayering. In this process, an algorithm interpolates the missing color values of each pixel using information from its neighboring pixels, thus creating a complete image.

This may seem like a bit of a hack, but that’s because it is. Basically, pretty much every digital image you’ve captured is only one-third actual color information; the other two-thirds are interpolation, or educated guesswork, to obtain the rest of the colors.
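The interpolation step can be sketched in a few lines. This is bilinear demosaicing, the simplest possible approach (real cameras use far more sophisticated algorithms), and it assumes the nested-list mosaic format from the earlier example:

```python
# Sketch of the interpolation step in demosaicing: estimate the
# missing green value at a red or blue photosite by averaging the
# green values of its horizontal and vertical neighbors. In an RGGB
# mosaic, those neighbors of a red or blue site are always green.

def interpolate_green(mosaic, row, col):
    """Average the up/down/left/right neighbors inside the image bounds."""
    h, w = len(mosaic), len(mosaic[0])
    neighbors = [
        mosaic[r][c]
        for r, c in ((row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1))
        if 0 <= r < h and 0 <= c < w
    ]
    return sum(neighbors) / len(neighbors)
```

Plain averaging like this smears edges, which is exactly the kind of artifact that better demosaicing algorithms are designed to avoid.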

Between capturing the raw data and saving the image in a JPEG file, the camera software puts in a lot of work to render the final image from the various bits of data it receives. This is where color science comes in.

What is color science?

The color science of a camera is a catch-all term for how the camera software chooses to render the colors in the final image from the information it originally captured. Now, colors are only one aspect of the image but it’s usually what decides the look of the image and what gives a particular camera its personality.

Part of the color science process is in the demosaicing that I mentioned earlier. A lot of time and effort goes into designing a demosaicing algorithm that correctly guesses the color value of each pixel in the image. One of the reasons images from modern digital cameras look better and have fewer artifacts than those from older cameras is the advancement of demosaicing algorithms.

Color wheel

Figuring out the color value of each pixel is one part of the equation. The camera also has to do white balance adjustments to remove any color cast that the lighting may inadvertently introduce. This is usually done by adjusting the color balance between blue-orange and green-magenta. Blue and orange sit on opposite sides of the color wheel, as do green and magenta, and together they form the four spokes of the color wheel. Adjusting the blue-orange level also makes the image cooler or warmer to the eye, which is why it’s called color temperature adjustment.
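One simple way to remove a color cast, sketched below, is the classic "gray world" assumption: the scene averages out to neutral gray, so the red and blue channels are scaled until their averages match green. Real cameras use far more sophisticated scene analysis, and the function names here are my own:

```python
# Sketch: "gray world" white balance. Scale the red and blue channels
# so their means match the green mean, removing an overall color cast.

def gray_world_gains(pixels):
    """pixels: list of (r, g, b) tuples. Returns per-channel gains."""
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_g = sum(p[1] for p in pixels) / n
    avg_b = sum(p[2] for p in pixels) / n
    return (avg_g / avg_r, 1.0, avg_g / avg_b)

def apply_gains(pixels, gains):
    """Apply the gains, clamping to the 8-bit range."""
    return [tuple(min(255, round(v * g)) for v, g in zip(p, gains))
            for p in pixels]
```

A camera tuned for "accurate" white balance would correct fully like this; one tuned to look warmer would deliberately leave some of the orange cast in.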

How a particular camera adjusts this depends upon how the software is configured. Some cameras are designed to set the white balance accurately for every scene, while others lean towards a warmer or cooler color tone. Often companies will set their cameras to prefer a warmer color tone, as it generally makes the image look more pleasing. In smartphones, this can easily be seen in iPhone photos, which have a very distinctive bias towards the orange-green spectrum of the color wheel that makes skin tones pop and the colors generally look warmer.

Typical warm iPhone image

Lastly, there are the usual picture settings, which include color saturation, brightness/gamma, sharpness, contrast, and tone-mapping. Each of these contributes towards the particular look and personality of the final image.
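Two of these settings are simple enough to sketch on a single 8-bit channel value. These are illustrative formulas, assuming the common power-curve definition of gamma and a basic contrast stretch around mid-gray, not any specific camera's implementation:

```python
# Sketch: two picture settings applied to a single 8-bit channel value.

def adjust_gamma(value, gamma):
    """Power-curve brightness: gamma < 1 brightens midtones, > 1 darkens."""
    return round(255 * (value / 255) ** gamma)

def adjust_contrast(value, amount):
    """amount > 1 pushes values away from mid-gray (128), < 1 pulls them in."""
    return max(0, min(255, round(128 + (value - 128) * amount)))
```

Stacking choices like these, plus saturation, sharpening, and tone-mapping, is what gives each camera brand its recognizable look.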

For example, Samsung phones in the past tended to bump up the saturation and contrast of images, although in recent times Samsung veers towards increased brightness. Huawei cameras have exaggerated contrast and sharpness settings, which make images pop a bit on the phone screen but rather unpleasant on a big screen. Pixel phones also have a very distinctive gamma and contrast curve thanks to the aggressive HDR+ tone-mapping: images are generally underexposed to preserve highlight detail, with increased saturation and contrast and a cooler color temperature. This creates that very appealing Pixel look people are so fond of but can’t always explain why they like.

Source: GSMArena