On Measuring and Calibration: Plasma Contrast Ratios

I just finished a plasma TV review for an upcoming issue of S+V. As I was writing up its brightness and contrast ratios, I realized there could be some confusion about the numbers.

If you measure the contrast ratio of plasmas (all plasmas, not just this one) the same way you do with other types of televisions - namely LCDs and projectors - they post poorer numbers than those technologies do.

This isn't a performance issue as much as it's a measurement issue. And why that is . . . that's kinda interesting.

There isn't an established way to measure contrast ratio, and the most common methods produce wildly different, and sometimes misleading, results.

Here at S+V we measure a full-screen white image (100-IRE, or "maximum" white) and a full-screen black image (0-IRE, or the darkest signal a TV can get). This might get you 50 footlamberts on the light side, and something like 0.050 footlamberts on the dark side. 50 / 0.05 = 1,000, so that's a contrast ratio of 1,000:1.
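If you want that math spelled out, here it is as a quick snippet; the readings are just the hypothetical 50 and 0.050 ft-L figures from above, not measurements of any real TV.

```python
# Hypothetical meter readings, in footlamberts (ft-L)
white_100ire = 50.0    # full-screen 100-IRE white
black_0ire = 0.050     # full-screen 0-IRE black

contrast_ratio = white_100ire / black_0ire
print(f"Contrast ratio: {contrast_ratio:,.0f}:1")   # -> Contrast ratio: 1,000:1
```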

This technique is fine for LCDs and projectors, but plasmas work differently.

Each pixel in a plasma TV uses a certain amount of energy to "ignite" its little pocket of gas, creating ultraviolet light. The UV energy excites the red, green, and blue phosphors, which then create the light that you see as an image.

Wikipedia's article on plasma displays may be one of its more poorly written, but that's the gist of how it says plasmas work.

The power supply in the TV can only output so much energy at a time, so if you ask it to do something out of the ordinary (like a full-screen 100-IRE image), it limits how bright the TV can be overall. So - unlike in the case of other technologies - a 100-IRE full-screen image is not the brightest a plasma can be.
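To make that concrete, here's a toy model of the behavior (the feature is commonly called automatic brightness limiting, or ABL). Every number in it is invented purely for illustration; real panels have their own limiting curves.

```python
# Toy model of a power-limited plasma panel. All numbers are made up for
# illustration; real panels have their own brightness-limiting behavior.

PEAK_FT_L = 70.0      # assumed peak luminance for a small white window, in ft-L
POWER_BUDGET = 0.15   # assumed: panel can sustain ~15% of "every pixel at peak"

def white_luminance(white_area_fraction):
    """Luminance of the white portion of the screen, given how much of it is white."""
    if white_area_fraction <= POWER_BUDGET:
        return PEAK_FT_L  # within the power budget: full brightness
    # Over budget: brightness gets scaled back so total power stays at the limit
    return PEAK_FT_L * (POWER_BUDGET / white_area_fraction)

for area in (0.05, 0.125, 0.25, 0.50, 1.00):
    print(f"{area:6.1%} white window -> {white_luminance(area):5.1f} ft-L")
```

The exact shape of that curve varies from panel to panel, but the takeaway is the same: the bigger the bright area, the dimmer the panel is allowed to get.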

The thing is, you almost never see a full-screen white image. The Progressive Insurance ads, where the actors are on some sort of white soundstage, are one example. Skiing during the Olympics is another. In movies and TV shows, this almost never happens.

Studies have shown that the average picture level of the majority of programming is around 25-IRE across the whole screen. Parts of the screen are brighter (of course), but other parts are darker, so the average works out to about 25-IRE. But you can't just measure a full-screen 25-IRE pattern, as that's dark gray and looks like nothing. A 100-IRE window pattern (a white rectangle on an otherwise black screen) puts maximum white over 25% of the screen. For our purposes, this is the same as 25-IRE across the whole screen.
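If you want to check that arithmetic, here's a trivial sketch: a frame that's black except for a 100-IRE window over 25% of its area really does average out to 25 IRE.

```python
import numpy as np

# A 1080p frame where each pixel's value is its IRE level (0 = black, 100 = peak white)
frame = np.zeros((1080, 1920))
frame[270:810, 480:1440] = 100.0   # centered window: half the width x half the height = 25% of the area

apl = frame.mean()                 # average picture level, in IRE
print(f"APL: {apl:.1f} IRE")       # -> APL: 25.0 IRE
```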

So a more accurate way to test how a plasma TV performs is to use this 25% white window: most of the screen is black, save a 100-IRE rectangular window in the center. This approximates the average picture level of most programming. If you were to run a 12.5% window, or a 5% window, the plasma might be even brighter in that section (up to a point), but these aren't really relevant numbers. Why would you care that one pixel is 70 footlamberts?
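If you want to roll your own version of that pattern, something like this will do it (assuming an 8-bit, full-range file where 255 stands in for 100 IRE; video-range gear would want 235 instead):

```python
from PIL import Image, ImageDraw

# 1080p test pattern: 0-IRE (black) field with a centered 100-IRE window
# covering 25% of the screen area.
W, H = 1920, 1080
win_w, win_h = W // 2, H // 2   # half the width x half the height = 25% of the area

pattern = Image.new("RGB", (W, H), (0, 0, 0))
draw = ImageDraw.Draw(pattern)
x0, y0 = (W - win_w) // 2, (H - win_h) // 2
draw.rectangle([x0, y0, x0 + win_w - 1, y0 + win_h - 1], fill=(255, 255, 255))
pattern.save("window_25_percent.png")
```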

I like using car analogies, as I'm a car nut and feel that more people are familiar with cars than with A/V gear. So imagine two cars under test: one goes 0-60 in 5 seconds, the other takes 5.9. The 5-second car is faster, correct? What if the 5-second car has really tall gearing and does the 0-60 run entirely in first gear, while the 5.9-second car has much shorter gearing, requiring three gear shifts of 0.3 seconds each over the same run? Other factors aside, these cars will feel similarly fast, much more so than the numbers suggest. This is more a problem with the fairly meaningless 0-60 measurement than it is a problem with the cars. It's arguable the 5.9-second car would be faster in most situations due to its more usable gearing.

Just as the incessant focus on 0-60 times has resulted in slower cars (due to ridiculous gearing), so too have TV manufacturers designed some ridiculous methods to produce meaningless contrast ratio figures.

Here's why: LCDs and projectors have a set light source (CCFLs or LEDs in LCD TVs, UHP lamps in most projectors), and their image-creating technology just manipulates the light that source creates. With LCD and LCOS panels, the light created by the backlight or lamp is blocked to create an image. The DMD chip or chips in a DLP projector reflect the light from the lamp. As such, there is no significant difference in power draw between 100-IRE and 0-IRE. So a full-screen 100-IRE/0-IRE measurement is about as accurate as such a measurement can be.

Or to put it another way, measuring one of these displays with a 25% window isn't likely to produce numbers any different from a 100-IRE full screen. OK, to be fair, it's more likely to reveal differences in your measurement procedure than any real differences in the TV. (There are exceptions depending on lenses and some other variables, but that's best saved for a different article.)

Beyond the blatant lying all manufacturers now do about contrast ratios, the "dynamic" contrast ratios of most LCDs and front projectors may produce significant "numbers" but do little for actual on-screen performance. These dynamic numbers come merely from raising and lowering the backlight - or opening and closing an iris - depending on the average video level on screen. If it's a dark scene (or a 0-IRE test pattern), the backlight turns down, creating a darker image; if it's a bright scene (or a 100-IRE test pattern), the backlight goes back up, creating a brighter image.

The problem is that, at any given moment, the contrast ratio isn't nearly as good as the numbers suggest. If you have a brightly lit face on a dark background, the difference between the two is only a fraction of any claimed dynamic contrast ratio. I explain more about this aspect of the problem in the article linked in the last paragraph.
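As a rough illustration of what that does to the numbers (again, every value here is invented), compare the "dynamic" figure against what the panel can actually show within a single frame:

```python
# Toy model of "dynamic" contrast on an LCD; all numbers are invented for
# illustration. The panel delivers roughly the same ratio at any one fixed
# backlight level; the dynamic spec comes from comparing white with the
# backlight cranked up against black with it turned way down, i.e. two
# different scenes measured at two different times.

NATIVE_CR = 1000.0     # assumed panel contrast at any one backlight setting
PEAK_WHITE = 50.0      # ft-L with the backlight at 100%
MIN_BACKLIGHT = 0.10   # assumed: backlight dims to 10% on dark scenes

white_bright_scene = PEAK_WHITE                              # 100-IRE pattern, backlight at 100%
black_dark_scene = (PEAK_WHITE / NATIVE_CR) * MIN_BACKLIGHT  # 0-IRE pattern, backlight at 10%
print(f"Claimed dynamic contrast : {white_bright_scene / black_dark_scene:,.0f}:1")  # 10,000:1

# A bright face on a dark background lives in one frame, at one backlight
# setting, so the contrast you actually see at that moment is the native ratio:
backlight = 0.4   # whatever level the scene settles at
face = PEAK_WHITE * backlight
background = (PEAK_WHITE / NATIVE_CR) * backlight
print(f"Contrast within one frame: {face / background:,.0f}:1")                      # 1,000:1
```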

So in the end, it's likely that a given plasma TV has a better contrast ratio than the full-screen numbers suggest - at least when it's measured with a fairer procedure.
