Opinion: Dynamic contrast ratio is the new evil in displays

Consumer digital displays (oh alright, LCD TVs and some LCD computer monitors) are getting steadily better across the board. But in an attempt to lure customers, the companies that make them have resorted to adding crap to their displays that reduces the overall picture quality.

I'm talking about the new evil in displays -- dynamic contrast ratio.

Contrast ratio is the ratio of the brightest white a screen can produce to its blackest black, and it's also the most useless measurement in video. Results can vary wildly: the ratio measured in an all-black room can be double the measurement in the same room with a white ceiling, because light from the display bounces off the ceiling and reflects back onto the screen, raising the apparent brightness of the black areas of the image.
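To see how reflected ambient light collapses the measurement, here's a rough sketch in Python -- the panel figures (500 nits white, 0.5 nits black) are made up for illustration, not taken from any real display:

```python
def contrast_ratio(white_nits, black_nits, ambient_nits=0.0):
    """On/off contrast ratio: peak white over darkest black.

    Reflected ambient light adds to both levels, but it barely
    affects the bright white while swamping the dim black.
    """
    return (white_nits + ambient_nits) / (black_nits + ambient_nits)

# Hypothetical panel: 500 nits white, 0.5 nits black.
dark_room = contrast_ratio(500, 0.5)                    # 1000:1
lit_room = contrast_ratio(500, 0.5, ambient_nits=0.5)   # ~500:1
```

Adding just half a nit of reflected light to both levels roughly halves the measured ratio -- which is why the same panel can 'score' twice as well in a blacked-out lab.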

Dynamic contrast ratio, however, is like Cold Power: it claims to make your whites whiter and your blacks blacker. It's a way of faking depth and richness, but the screens it's built into are already quite deep and rich without the feature, so it's not only unnecessary -- it mangles your video in the process.

And more importantly, it's easy to mislead people with the inaccurate measurements taken when dynamic contrast ratio systems are running.

Dynamic contrast ratio systems work by detecting the median brightness of the image. If a scene rises above or drops below set brightness thresholds, the backlight that illuminates the image is boosted or dimmed (with the contrast adjusted to match) to make the scene appear brighter or darker respectively.

It generally takes about half a second for this system to detect the change and adjust the display's settings. So what happens if a dark scene strobes white for a moment, then returns to dark? The display turns both the brightness and contrast up, then back down again, all over the course of a second -- yet for the vast majority of that time, the screen has been showing the dark scene.
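The lag described above can be sketched in a few lines of Python. This is a toy model, not any TV's actual firmware: the thresholds, backlight levels and twelve-frame lag (roughly half a second at 24 fps) are all illustrative numbers:

```python
# Toy dynamic-contrast controller with a detection lag.
# Each frame's brightness is its median luminance on a 0-1 scale.

LAG_FRAMES = 12       # ~0.5 s at 24 fps (illustrative)
DARK, BRIGHT = 0.2, 0.8

def backlight_levels(frame_brightness, lag=LAG_FRAMES):
    """Return the backlight level applied to each frame.

    The controller reacts to the frame that played `lag` frames
    ago, so a one-frame white flash drags the backlight up (and
    back down) long after the flash is over.
    """
    levels = []
    for i in range(len(frame_brightness)):
        seen = frame_brightness[max(0, i - lag)]  # stale reading
        if seen > BRIGHT:
            levels.append(1.0)   # boost backlight
        elif seen < DARK:
            levels.append(0.4)   # dim backlight
        else:
            levels.append(0.7)   # neutral
    return levels

# A dark scene with a single one-frame flash of white:
scene = [0.1] * 30
scene[10] = 1.0
levels = backlight_levels(scene)
# The boost lands at frame 22, twelve frames after the flash,
# while the image on screen is already dark again.
```

The backlight spikes on a frame that's showing a dark image -- exactly the mistimed brightening and dimming the paragraph above describes.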

Directors, broadcast engineers and image technicians have many complex systems to ensure that images are consistently and accurately displayed on monitors, cameras, colour correction facilities and cinemas. But they can't account for the millions of differently calibrated TVs in homes all over the world, so they get the signal as close to what can be considered 'normal' for as many sets as they can, then send it out to you.

But now your TV is messing with the brightness and contrast dials like a rhesus monkey conditioned to getting a nicotine hit after twisting the dials.

Unpredictably changing a display's setup on the fly destroys the emotional response that directors and image professionals work so hard to evoke with their calibrated gear. That bright light in the distance of a dark scene isn't as bright any more, and those slow, fluid cross-fades from bright to dark scenes are suddenly jerky and uneven as the dynamic contrast setting butchers them.

And all for what? A bigger, useless, arbitrary number to tackily scrawl on a piece of cardboard that gets stuck on the side of the display in a store.

This is just the latest instance of technical marketing taking advantage of people's cluelessness. It's in the same league as the megapixel race in the world of handheld cameras -- where a higher number doesn't mean a better image. You can prove this by printing two images: one from a 6 megapixel camera-phone, and one from a 6 megapixel Canon 300D.

Your image resolution (measured in megapixels, or millions of pixels) is proportional to the number of photosites on the sensor of your camera. Photosites detect and react to light, then record the information as a value for the pixels that make up an image. They usually get less than a 25th of a second to bathe in light and record a value. To get more megapixels on a sensor, you need to add more photosites. Because the area of the sensor is fixed, the photosites have to be shrunk before more can be added. The smaller the photosites are, the less light they get to bathe in, the lower the values they return and the more you have to turn the gain up on the output to get a usable image. Which leads to a noisy and rubbish photo.
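The back-of-the-envelope arithmetic is simple enough to sketch. The sensor dimensions below are illustrative (a typical 1/2.5" compact sensor is roughly 5.8 x 4.3 mm), not those of any specific camera, and the calculation ignores the gaps and wiring between photosites:

```python
# Light-gathering area per photosite on a fixed-size sensor.

SENSOR_AREA_MM2 = 5.8 * 4.3  # rough 1/2.5" compact sensor

def photosite_area_um2(megapixels):
    """Area of each photosite in square microns, assuming the
    sensor area is divided evenly among the photosites."""
    sites = megapixels * 1_000_000
    return SENSOR_AREA_MM2 * 1_000_000 / sites

print(photosite_area_um2(6))   # ~4.16 um^2 per site
print(photosite_area_um2(12))  # ~2.08 um^2 per site
```

Doubling the pixel count halves the light each photosite collects, so the gain -- and with it the noise -- has to go up to compensate.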

If your engineers don't have the resources to design more sensitive photosites to compensate for this, but you still need to increase the resolution of your sensor, you can resort to interpolation. This can add as much resolution as you like, but hits diminishing returns fast as you try to squeeze more detail out of the same data. The theory is simple: guess what the values of your hypothetical pixels would be and add those approximations to your image to increase t3h m3gapix3lz!!1!LOL!.
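Interpolation in miniature looks something like this sketch -- a simple linear average over one row of grayscale values. Real cameras use fancier resampling kernels, but the principle is the same: the new pixels are guesses, not captured detail:

```python
def upscale_row(row):
    """Double a row's resolution by inserting the average of each
    adjacent pair of real pixels between them."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) / 2)  # invented pixel, not real detail
    out.append(row[-1])
    return out

print(upscale_row([10, 20, 30]))  # [10, 15.0, 20, 25.0, 30]
```

The output has nearly twice the pixels, but every second one is an approximation derived from its neighbours -- the sensor never saw it.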

Why increase the pixel count if it's not going to give you higher quality images? Because if you stand at the camera counter of any electronics store and listen to the people with credit cards at the ready, the question you'll hear most is "How many megapixels has it got?"

Sadly, even though numbers aren't everything, the ‘more is better’ yardstick is the only easily comprehensible way for consumers to try and quantify quality. People are scared of knowing anything more detailed than that.

It’s why you see VGA webcams (which have 640 x 480, or 0.3 megapixel sensors) ‘delivering’ 8 megapixel images. It’s why there are phones with 10 megapixel cameras onboard.

It’s why dynamic contrast ratio exists.

And it cheapens the effort that goes into making technology that makes images legitimately better.

And it makes me angry.