Video Upconversion: Facts and Fallacies

Component-video has two key benefits. The first, which applies to all forms of video transmission and storage, is efficiency: a component-video signal requires much less bandwidth than the RGB signal from which it was created to deliver a picture of equivalent perceived quality. The other goes back to the development of color television in the 1950s. RCA used component-video to piggyback color onto a regular black-and-white TV signal in such a way that B&W sets would not recognize the extra information while new sets could extract it to display full-color pictures.
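The split between luminance and color-difference signals described above can be sketched in a few lines. This is a minimal illustration, assuming Rec. 601 luma coefficients (the ones used for standard-definition video) and RGB values normalized to the 0.0-1.0 range; the function name is our own.

```python
# Sketch: deriving a component-video triple (Y, Pb, Pr) from RGB,
# using the Rec. 601 luma coefficients for standard-definition video.
# Inputs are assumed normalized to the 0.0-1.0 range.

def rgb_to_ypbpr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b    # luminance (brightness)
    pb = 0.5 * (b - y) / (1 - 0.114)         # scaled blue color-difference
    pr = 0.5 * (r - y) / (1 - 0.299)         # scaled red color-difference
    return y, pb, pr

# Pure white carries no color information: Y is full-scale and
# both color-difference components are (essentially) zero.
print(rgb_to_ypbpr(1.0, 1.0, 1.0))
```

Note that most of the picture detail ends up in Y; the Pb and Pr components can tolerate far less bandwidth, which is the efficiency win the article describes.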

And that's where we get to the processes that led to the other types of analog video connections we have today, and eventually to video transcoding in A/V receivers and preamplifiers. (You were starting to wonder, weren't you?) RCA's engineers had to figure out a way to attach the color-difference signals without disrupting the standard B&W signal. They did it by modulating the two color-difference components in quadrature onto a single subcarrier, creating a combined chrominance (C) signal. The subcarrier resides at 3.58 megahertz within the standard 4.2-MHz NTSC B&W signal; together, the two are known as a composite-video signal. An NTSC broadcast signal is created by frequency-modulating the audio onto another subcarrier at 4.5 MHz and then amplitude-modulating the entire thing onto an RF (radio-frequency) carrier for transmission. An ordinary color television set pulls all of that apart in order to deliver the picture and sound.
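The chroma-subcarrier trick can be sketched as follows. This is a deliberately simplified model, assuming the two color-difference signals (called u and v here) are quadrature-modulated onto the 3.58-MHz subcarrier and summed with luminance; it ignores sync, blanking, and filtering, and the sample values are purely illustrative.

```python
import math

# Simplified sketch of NTSC-style composite encoding: the two
# color-difference signals are quadrature-modulated onto one subcarrier
# (hue ends up in the phase, saturation in the amplitude), and the
# resulting chrominance is added to luminance.

F_SC = 3.579545e6  # NTSC color subcarrier frequency, Hz (the "3.58 MHz")

def composite_sample(y, u, v, t):
    """One instantaneous composite-video sample at time t (seconds)."""
    chroma = (u * math.sin(2 * math.pi * F_SC * t)
              + v * math.cos(2 * math.pi * F_SC * t))
    return y + chroma

# At t = 0 the sine term vanishes, so the sample is just y + v.
print(composite_sample(0.5, 0.2, 0.1, 0.0))
```

A B&W set simply treats the whole thing as luminance; the chroma rides high enough in the band that it mostly dithers away invisibly, which is what made the scheme backward-compatible.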

Unfortunately, every step along the path from component-video to NTSC broadcast signal entails some sacrifice in quality, so when non-broadcast video sources, such as VCRs and laserdisc players, came on the scene, it was natural to look for ways to avoid some of those steps. First up was sending audio and composite-video via separate connections instead of combining them into an RF signal. Then came the S-video, or Y/C, connection, which keeps the luminance and chrominance separate instead of combining them into a composite signal. And with the advent of DVD, which carries video in digital component format, came the analog component-video connection, created by running the digital original through video digital-to-analog (D/A) converters. The most recent addition to the connection zoo is HDMI, which can carry digital component-video in its native form - perfect for digital component-video sources such as DVD, HD DVD, and Blu-ray Disc players, HDTV tuners, and satellite and digital-cable boxes.

Transcoding: The Bottom Line

The upside to all these different types of video connections is that they enable you to eke the last ounce of performance out of your various video sources. The downside is that it's a lot of connections. Just look at the back of a contemporary A/V receiver. It's nice to be able to run one cable from your receiver to your TV instead of half a dozen or more. Transcoding, more commonly called component-video or HDMI upconversion, makes that possible. A receiver or preamp with component-video upconversion will transcode composite-video to S-video and S-video to component-video, so regardless of what type of analog video signal comes in, it will be available at the component-video output. HDMI upconversion takes the process one step further, running the analog component-video signals through analog-to-digital (A/D) converters to yield digital component-video at the HDMI output.
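The transcoding ladder described above can be modeled as a simple ordered list of connection types. This is a hypothetical sketch of the routing logic, not any particular receiver's firmware; the names and function are our own, but the rung order mirrors the article.

```python
# Sketch of the upconversion ladder: whatever comes in is stepped up,
# rung by rung, until it reaches the requested output type.

LADDER = ["composite", "s-video", "component", "hdmi"]

def upconvert(signal_type, target="component"):
    """Return the chain of formats a signal passes through on its way up."""
    start = LADDER.index(signal_type)
    end = LADDER.index(target)
    if end < start:
        raise ValueError("upconversion only moves up the ladder")
    return LADDER[start:end + 1]

# A composite source routed to the HDMI output passes through every rung.
print(upconvert("composite", "hdmi"))
```

The key point survives the abstraction: each rung only changes how the signal is packaged, so nothing lost at a lower rung is ever recovered on the way up.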

What's most important to remember about video transcoding is that it's a convenience feature, not a performance upgrade. Converting a composite-video signal to component-video will not make it as good as it would have been if it had never been reduced to composite form. And it is not likely that a receiver will do a better job of the conversion than your TV would if you made all the connections there.

Deinterlacing: The Background

Another relic of our analog TV heritage is the technique known as interlaced scanning. In NTSC TV, a video stream carries 30 complete frames per second (fps), a frame being a complete still picture created by 480 active scan lines running horizontally across the screen. (The total number of lines in an NTSC frame is 525, but that includes lines in what is known as the vertical blanking interval, or VBI, which carry no picture information.) Each frame is split into two fields, each of which contains every other scan line. The fields are transmitted and displayed sequentially, one every sixtieth of a second, so that the first field of a frame is completely scanned, and then the lines of the second field are scanned between those of the first. This is known as interlaced scanning.
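The field-splitting step above is easy to make concrete. A minimal sketch, assuming a frame is just a list of 480 scan lines:

```python
# Sketch of interlaced scanning: a frame is split into two fields,
# one holding the odd-numbered scan lines, the other the even-numbered
# lines, which are then transmitted one after the other, 1/60 second apart.

def split_fields(frame):
    """frame: a list of scan lines. Returns (field1, field2)."""
    field1 = frame[0::2]  # 1st, 3rd, 5th, ... scan lines
    field2 = frame[1::2]  # 2nd, 4th, 6th, ... scan lines
    return field1, field2

frame = list(range(480))        # stand-in for 480 active scan lines
f1, f2 = split_fields(frame)
print(len(f1), len(f2))         # each field carries half the lines: 240 240
```

Since each field carries only 240 lines, the per-field bandwidth is halved, which is exactly the transmission saving interlacing was invented for.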

Interlacing is surprisingly effective and serves its original purpose of getting more resolution out of the available transmission bandwidth. But if you compare a scene shot and displayed in standard analog format, now known as 480i, with the same scene displayed with the same number of lines using progressive scanning - in which all the lines in each frame are displayed sequentially instead of being divided into two interlaced fields - the progressive-scan version will look cleaner and smoother. That format is called 480p, the number indicating the active scan lines and the "i" or "p" the scanning method.
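Turning 480i back into something a progressive display can use starts with recombining the two fields. The simplest approach, commonly called "weave" deinterlacing, is sketched below; it is our own illustrative example, and it works cleanly only when nothing in the scene moved between the two fields.

```python
# Sketch of "weave" deinterlacing: re-interleave the two fields of a
# frame back into one progressive frame. Motion between fields produces
# the combing artifacts that fancier deinterlacers exist to avoid.

def weave(field1, field2):
    frame = []
    for a, b in zip(field1, field2):
        frame.extend([a, b])
    return frame

f1 = list(range(0, 480, 2))   # first field:  lines 0, 2, 4, ...
f2 = list(range(1, 480, 2))   # second field: lines 1, 3, 5, ...
print(weave(f1, f2) == list(range(480)))  # prints True: full frame restored
```

Real deinterlacers have to detect motion and interpolate where weaving would comb, which is why deinterlacing quality varies so much from one device to the next.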
