Why are color difference signals transmitted rather than R,G,B signals?

Electrical Engineering Asked by Moonzarin Esha on November 14, 2021

In color television systems back in the early days, people used to transmit the color difference signals (R-Y) and (B-Y) along with Y, the luminance signal. My question is: why are color difference signals transmitted rather than just two of the three R, G, B signals along with Y? What is the need for color difference signals rather than pure RG, GB, or RB?

4 Answers

As others have explained in detail, YCbCr is basically a compatibility hack for black-and-white TV combined with a way to send chroma information at a lower resolution than luma information to make efficient use of available bandwidth.

What others haven't mentioned, which I think you'd benefit from, is Xiph.org's A Digital Media Primer for Geeks, which gets into video at 16:00 (though the first mention of what you specifically care about begins around 17:23, if I remember correctly).

It covers things like interlacing, gamma, YCbCr, subsampling, and various other things you want to know in order to be effective in doing self-directed study.

Answered by ssokolow on November 14, 2021

You need Y (a weighted sum of R, G, and B), aka luminance, to represent the monochrome signal, both for compatibility with B+W television and because it is the most important part of the colour TV signal, requiring the highest bandwidth.

You need two other signals to convey three colour channels ... but why U and V rather than R and B? (not G, because Y is mostly G anyway)

Because, except for the most brightly hued colours in the image, there is less information in U and V than there would be in R and B: indeed, for a grayscale image there is no information at all in the colour difference signals, and for pastel colours, very little. The R and B signals would be mainly duplicates of the G (or Y) signal, while U and V are low in amplitude (and do relatively little damage to the monochrome Y signal they are layered on top of).
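A quick sketch of that point, assuming the standard luma weights 0.299/0.587/0.114 (the Rec. 601 values, essentially those used by early colour systems): for a gray pixel the difference signals vanish entirely, and for a pastel they stay small.

```python
# Compute luma and the two colour difference signals for a pixel,
# using the standard Rec. 601 luma weights (an assumption here).
def color_difference(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, b - y

# Mid-gray (R = G = B): both difference signals are zero.
print(color_difference(0.5, 0.5, 0.5))

# A pastel pink: the differences are small compared with Y.
print(color_difference(0.9, 0.7, 0.7))

# A fully saturated red: the differences are large.
print(color_difference(1.0, 0.0, 0.0))
```

The gray and pastel cases are the common ones in real images, which is exactly why the difference signals can be transmitted in a narrow band.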

Reducing the auxiliary information to be transmitted allows transmission in less signal space.

Experiments showed the bandwidth of the colour difference signals can be reduced to about 0.5 MHz (vs 5.5 MHz for the luminance channel in a 625 line system) without doing too much damage to the image quality.

Answered by user_1818839 on November 14, 2021

I'll address a couple of points briefly here. A full discussion of the analog color TV signal and the decisions that went into its design would fill a book.

When color TV was introduced, the broadcast format was already well established, with 6 MHz channels, using AM VSB for the video (Y) signal and an FM subcarrier at 4.5 MHz for the sound. They needed to find a way to add color information to this signal without using any additional bandwidth and without creating any incompatibility with BW receivers.

In order to accomplish this, they made two key choices:

  • Use the HSB (hue, saturation, brightness) model to represent color images
  • Encode the color information as a phase- and amplitude-modulated subcarrier

Since the existing Y signal is exactly equivalent to brightness in the HSB model, that means that only H and S need to be encoded on the subcarrier. They chose to encode hue as the phase angle of the subcarrier, and saturation as the amplitude of the subcarrier. The 0° phase reference is given by a "color burst" signal that was inserted into the horizontal blanking interval.

By carefully picking the frequencies used,1 and by suppressing the actual color carrier itself and sending only its sidebands, the new color information was effectively "interleaved" into the same spectrum used by the existing Y signal without creating visual artifacts for BW receivers.

Finally, getting back to your actual question, the B-Y and R-Y signals are just one way (the most common way) of expressing the phase and amplitude of the color subcarrier — you can think of them as the "rectangular" coordinates of what is really a "polar" signal. They are named that way because "blue" hue is defined to be close to 0° phase (maximum positive B-Y signal), and "red" hue is defined to be close to 90° phase (maximum positive R-Y signal). The "green" hue is close to 225° phase, which corresponds to the maximum negative values of both B-Y and R-Y.
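The rectangular-to-polar relationship described above can be sketched directly; this is only an illustration of the coordinate change, not a full demodulator:

```python
# The (B-Y, R-Y) pair as rectangular coordinates of the chroma subcarrier:
# amplitude encodes saturation, phase angle encodes hue.
import math

def chroma_polar(b_minus_y, r_minus_y):
    amplitude = math.hypot(b_minus_y, r_minus_y)                   # saturation
    phase_deg = math.degrees(math.atan2(r_minus_y, b_minus_y)) % 360  # hue
    return amplitude, phase_deg

# Pure positive B-Y sits near 0 deg ("blue"),
# pure positive R-Y near 90 deg ("red"),
# and equal negative values of both land near 225 deg ("green").
print(chroma_polar(1.0, 0.0))
print(chroma_polar(0.0, 1.0))
print(chroma_polar(-1.0, -1.0))
```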


1 In case you're curious, here are the details for NTSC. The original BW signal used frequencies of 15750 Hz and 60 Hz for horizontal and vertical scanning. In order to minimize the visual effects of any leakage of the audio subcarrier into the video signal, scan frequencies were reduced by a factor of 1.001 (to 15734.3 Hz and 59.94 Hz, respectively). This made the audio subcarrier exactly 286× the horizontal scan rate, which made any artifacts stand more or less still on the screen. This also puts the audio subcarrier in one of the "nulls" of the color subcarrier comb filter. In order to achieve the frequency-domain interleaving that I spoke of, the color subcarrier frequency needed to be an odd multiple of half the horizontal scan rate. They chose 455/2, which makes the color subcarrier frequency 15750/1.001*455/2 = 3.579545 MHz.
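The arithmetic in that footnote checks out; here it is worked through as a sketch:

```python
# NTSC frequency relationships from the footnote above.
h_bw = 15750.0              # original B/W horizontal scan rate, Hz
h_color = h_bw / 1.001      # ~15734.27 Hz after the 1.001 reduction
v_color = 60.0 / 1.001      # ~59.94 Hz vertical rate
audio = 286 * h_color       # audio subcarrier: exactly 4.5 MHz
chroma = h_color * 455 / 2  # odd multiple (455) of half the line rate

print(h_color, v_color)
print(audio)                # 4.5 MHz
print(chroma)               # ~3.579545 MHz
```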

Answered by Dave Tweed on November 14, 2021

Since colour transformations are linear 3x3 matrices, so is converting RGB to YUV and back. Y is needed anyway for compatibility with B/W TVs, and the Y signal consists mostly of the G component, then the R component, and least of the B component, in certain proportions.

As the RGB signals have full bandwidth from the camera, so does the calculated Y. As human eyes are more sensitive to luminance than to colour, it makes no sense to waste bandwidth on direct transmission of the R, G, or B signals. The colour signal must also be hidden inside the B/W transmission.

Since RGB to YUV is a linear matrix, you get three components out if you put three components in. As the coefficients for calculating Y from RGB are known, the other two outputs are the components that carry the remaining information: U and V.

The U and V components need to contain the rest of the information required to reconstruct RGB, and since Y already contains most of the G component, U and V should mostly carry the R and B components, minus the parts already sent with Y. That's why the G component is not used.

As the Y calculation was done with resistor dividers and analogue amplifiers from RGB, the circuitry had to be minimized. Inside an RGB-to-Y converter, the RGB components and the weighted sum Y already exist, so the easiest thing to do was to also calculate R-Y and B-Y, as they result in larger signals than G-Y would. So R-Y and B-Y are the signals that contain the least luminance information and the most colour information. And since Y needs high bandwidth and the colour information doesn't, the bandwidth of the colour difference signals R-Y and B-Y can be reduced even further, which is why none of the R, G, or B signals is sent directly.
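The linear-matrix claim above can be sketched numerically. This assumes the standard Rec. 601 luma weights (0.299/0.587/0.114); the point is just that the 3x3 transform to (Y, B-Y, R-Y) is invertible, so no information is lost:

```python
# RGB -> (Y, B-Y, R-Y) as a 3x3 linear transform, and the round trip back.
import numpy as np

A = np.array([
    [ 0.299,  0.587,  0.114],   # Y   = 0.299 R + 0.587 G + 0.114 B
    [-0.299, -0.587,  0.886],   # B-Y = B minus the luma sum
    [ 0.701, -0.587, -0.114],   # R-Y = R minus the luma sum
])
A_inv = np.linalg.inv(A)        # the receiver's matrix, recovering RGB

rgb = np.array([0.2, 0.6, 0.4])
yuv = A @ rgb
rgb_back = A_inv @ yuv

print(yuv)
print(rgb_back)                 # round trip recovers the original triple
```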

Answered by Justme on November 14, 2021
