Most of the detail in our vision doesn't actually come from colour, so the design of colour television takes a similar route. The monochrome picture carries all the detail, and lower-resolution colour information is added on top of it.
Well, actually, the term “black and white” is rather misleading. There are all the different shades of grey in between, as well. Monochrome (one colour) is the term to use, but even that's not totally correct (neither black, white, nor grey are colours, so it's not even “one colour”). But enough of being pedantic…
An image passes through the camera lens and lands on the video target, where it is scanned and turned into an electronic signal that's representative of the picture. If it were done this simply, though, the picture would look rather weird, as some things would look brighter than they should, and other things too dark, because our eyes are sensitive to different colours in a certain ratio. So the spectral response of the camera is set up to mimic human vision, which is mostly sensitive to green light, with lesser sensitivities to red and blue light (the ratios being approximately 59% green, 30% red, and 11% blue), so that the monochrome image looks natural (“pan-chromatic”). Using photographic filters and/or the photosensitive characteristics of the target, the amount of red, green, and blue light reaching the target is adjusted to produce a pan-chromatic response. Lesser-quality monochrome cameras may not bother trying to be pan-chromatic, and the really old ones (back in the days of developing television) couldn't be. Most colour cameras still need to produce a pan-chromatic image for part of the signal transmission, and they adjust the final balance between the three primary colours electronically.
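The weighted mix above can be sketched as a simple function. This is only an illustration using the rounded figures from the text; broadcast standards define slightly more precise coefficients.

```python
def luminance(r, g, b):
    """Mix red, green, and blue into a pan-chromatic luminance value.

    The weights approximate the eye's relative colour sensitivity
    (rounded figures: 59% green, 30% red, 11% blue).
    """
    return 0.30 * r + 0.59 * g + 0.11 * b
```

A pure white input (1, 1, 1) comes out as full luminance, while a pure red input contributes only 30% of full signal, which is why red objects look fairly dark in a pan-chromatic monochrome picture.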
Most humans' eyesight can't see infra-red or ultra-violet light, and we don't have video displays that could reproduce it correctly, so they're filtered out completely (0%). You may have seen some very odd-looking monochrome security pictures on the news, where people have bright white glowing hair and eyes, out-of-focus pictures, and other strange effects. That's generally because they've been shot using a non-pan-chromatic camera that was sensitive to infra-red, and hair reflects infra-red very strongly. Infra-red also has a different focus point, which puts an in-focus normal-light picture on top of an out-of-focus infra-red picture, or vice versa. Similarly, very old television pictures may look strange because they had different spectral responses; though nowhere near as bad, as correction filters would be used to prevent it. Typically, security cameras don't bother with filters, because the installer doesn't care, doesn't know about filtering, or wants the ability to see in the dark.
Incidentally, the pan-chromatic response is why you don't get very good pictures when filming under predominantly red and/or blue coloured lighting—you can only get about 41% of the video signal, at maximum (the red and blue contributions combined), that you could otherwise get under untinted lighting. Opening the lens wider (to let more light in) only partially helps, as one of the more usual problems with filming under coloured lighting is that the lighting designer doesn't use enough lighting power; but even with masses of lighting you're still left with unnatural shading, thanks to the unusual colour conditions. And some monochrome cameras are not pan-chromatic, either, so you get unnatural-looking pictures all of the time, making some things appear brighter or darker than the brain would expect.
That's almost the whole story, for monochrome video, but not quite. There are non-linearities in the signal path from the video target to the display device (your television screen), meaning that the signal representing the amount of light in the picture is not directly proportional to it; it follows a power-law curve (there's an uneven difference between the amount of light going in and the amount coming out, which varies over the range of illumination). The error is mostly in the television picture tube, but the correction is made in the camera, due to a 1950s/1960s design decision that this would be the cheapest/easiest way to do it well. The camera's video signal is tweaked in the opposite direction to a typical television screen's usual response, so that the overall differences are evened out (the gain is increased in the dark portions of the picture, to stretch out the blacks over a wider range, brightening them, while leaving the brighter parts of the picture as they already were). This is called “gamma correction”.
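The cancelling-out can be sketched in a few lines. The exponent of 2.2 here is an assumed, typical CRT figure chosen for illustration; the various television standards define their own exact curves.

```python
GAMMA = 2.2  # an assumed, typical CRT exponent, for illustration only

def camera_gamma_correct(light):
    """Camera-side pre-correction: boost the darker values."""
    return light ** (1 / GAMMA)

def crt_response(signal):
    """The picture tube's (approximate) power-law response."""
    return signal ** GAMMA
```

Because the two curves are inverses of each other, the light coming off the screen tracks the light that entered the camera, even though the signal in between is deliberately bent.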
It's unfortunate that the decision was made to stuff up the camera to suit the television, as different display devices developed later on (such as LCD screens) have different responses, and they have to counteract the wrong gamma (for them) to get a proper picture. It would have been better if the camera was corrected (if needed) for a linear response, and the televisions corrected (if needed) for errors in their response, each according to their own actual characteristics.
And there you have it, a pan-chromatic monochrome signal.
When we see colour, it's due to receptors in our eyes that are mostly sensitive to red, green, and blue light, and between the three of them, we see the full spectrum of colours. So the colour television system, again, was designed to work in a similar way—using red, green, and blue light-sensitive components to produce a colour signal. At the simplest level, you have three video targets, one for each colour, with optical filters and prisms (or mirrors) between the lens and targets separating the colours. Those targets are scanned and gamma corrected, giving three separate red, green, and blue (RGB) video signals. And you could connect them to a video monitor with a bunch of cables.
While that three-cable signal provides the best possible video signal that you could have between equipment, it's not really practical for a broadcast system. Generally, you want just one video signal (composite video) that carries all that information, and it was desirable for it to be compatible with monochrome televisions, too. So a way was devised to combine all the signals together in a way that they could be separated, again, in the colour television receiver, and in a way that the colour signals wouldn't be noticed by the older monochrome television sets.
Firstly, the red, green, and blue signals were combined together in the right proportions (as described previously) to produce a pan-chromatic monochrome video signal—the “luminance” signal (“Y” being the abbreviation used for it). This gives us a picture signal for the old monochrome television sets to use, and provides the majority of the picture detail on the colour sets.
Next, a way was needed to send the colour information as well. The technique used is called a “colour-difference” system, where what's transmitted is the difference between the black-and-white image and the colour one. Technically, it's done by electrically combining the red and blue signals with the luminance signal, producing a red-minus-luminance signal and a blue-minus-luminance signal. At this stage we have what's known as “component video” (separate signals for luminance, and the two colour-difference signals of R-Y & B-Y), as used in professional analogue video systems (e.g. “Betacam”) and found on the back of many DVD players. This is almost as good as the RGB signals, but not quite (especially when different voltage levels are used for the colour-difference signals between different equipment—you can get dull, or strange, colours).
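In signal terms, the encoding step can be sketched like this, using the rounded luminance weights from earlier. It's a simplification: real systems also scale and band-limit the difference signals.

```python
def rgb_to_component(r, g, b):
    """Derive component video (Y, R-Y, B-Y) from gamma-corrected RGB."""
    y = 0.30 * r + 0.59 * g + 0.11 * b  # pan-chromatic luminance
    return y, r - y, b - y              # luminance plus two colour differences
```

For a pure grey input (r = g = b) both difference signals come out as zero, which is why no colour information needs to be sent for the monochrome parts of a picture.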
Those colour-difference signals are encoded into a single “chrominance” signal, which will be added to the luminance video signal in a manner that isn't noticed by most monochrome sets (other than a bit of fuzz on the picture), to produce a “composite video” signal. Different countries took different approaches as to how they encoded the colour-difference signals into a chrominance signal (PAL, NTSC, SECAM, etc.). And in an effort to reduce the amount of broadcasting bandwidth that would be required to transmit a colour signal, they deliberately reduced the resolution of the encoded colour signals, and reduced the gamut of colours that could possibly be produced. Each of the different colour systems uses different encoding techniques, and they each have different limitations. One thing common to most of them is that the colour signal is only present on coloured parts of the picture—it's completely suppressed on parts of the picture that are purely monochromatic, and during the portions of the video signal that aren't used to carry picture information. The chrominance signal is quite a lot worse than the separate component signals, because of the reductions made in the colour encoding stages (you never manage to get the full signal back again when you reverse the process).
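The general idea of squeezing both difference signals onto one chrominance signal is quadrature modulation of a single colour sub-carrier. The sketch below is heavily simplified (real PAL/NTSC encoders scale the difference signals, alternate phase line by line, filter the bandwidth, and so on), and the sub-carrier frequency is just an illustrative, roughly-PAL value.

```python
import math

SUBCARRIER_HZ = 4.43e6  # roughly the PAL colour sub-carrier, for illustration

def chrominance(u, v, t):
    """Modulate two (already scaled) colour-difference values, u and v,
    onto one sub-carrier at time t, in quadrature (90 degrees apart)."""
    phase = 2 * math.pi * SUBCARRIER_HZ * t
    return u * math.sin(phase) + v * math.cos(phase)
```

On a purely monochrome part of the picture both inputs are zero, so the chrominance signal vanishes entirely, just as described above.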
Now we're nearly there. We have separate luminance (“Y”) and chrominance (“C”) signals which can be used directly between equipment (as “Y/C” signals, as found in “S-Video” connectors), but will be combined together to form a “composite video signal” (known as “CVBS,” which is short for composite video, blanking, and sync) to be used for analogue television broadcasting, or simply for cabling video equipment together that's designed for composite video. The combining of the Y & C signals also requires some reduction in the resolution of the luminance signal, so that they don't interfere with each other. This means that the composite video signal is almost the worst choice, of the lot, for connecting a signal from one thing to another.
Given the option, RGB connections are the best choice for equipment interconnection, as there's the least amount of processing between source and destination. The next best choice is component video (NB: there are various incompatible implementations of component video, with differing signal levels or correction factors, so it's only suitable for use between component video equipment that's meant to be connected together). The next best choice is the S-Video connector, then composite video, with RF being the worst choice.
Digital video, whether that's MPEG encoded files, watching DVDs, digital television broadcasting, etc., is generally done as a digitisation of the component video signal. That puts it at the half-way point of the chain, close to the best signal possible, but only if you don't heavily compress the information, or convert between one thing and another (such as de-coding and re-encoding, or re-compressing), or simply do a bad job of the digitisation in the first place.
And, as mentioned in the monochrome section, filming under coloured lighting doesn't work particularly well for any of the video signals that use encoded colour. Firstly, the pan-chromatic monochrome signal that provides most of the video signal is only going to be a fraction of what it could be under untinted lighting. And, secondly, all of the colour encoding schemes reduce the resolution of the colour signals, and reduce the gamut of colours that can be produced. You can get more of a picture by increasing the strength of your lighting, up to a point (there's no point over-exposing the red or blue channels trying to get a brighter image overall—you'll just get a solid coloured blob), but you're still stuck with a low-resolution colour problem. Only RGB and component video signals can pass the full resolution of highly coloured images, and even then you can still be hamstrung by cameras whose designers never bothered building wide bandwidth into the red and blue channels, because they expected the camera to be used with bandwidth-limited composite video signals. And even if you connect all your high-resolution gear together using RGB or component video, as soon as someone further down the chain views the picture using one of the encoded video signal formats, they'll be looking at smudgy video.
For analogue television, this composite video signal is then broadcast as a radio-frequency (“RF”) signal. (Digital television doesn't go through the encoding stages, it broadcasts a digitised component video signal.) This RF signal is the worst way possible to connect equipment together. It's gone through the most processing, from start to finish.
In a colour television receiver set, the above processes are reversed: the RF signal is received, and the composite video signal is detected from it. The composite video is filtered to separate the chrominance and luminance signals, which involves some further loss in resolution. The chrominance signal is decoded to get the two colour-difference signals, which also involves some loss in signal integrity. The colour-difference and luminance signals are matrixed together to get separate red, green, and blue video signals to drive the display (Y is added to R-Y to get just the red signal, separately Y is added to B-Y to get just the blue signal, and the green signal is derived from the remainder).
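The matrixing step can be sketched as the reverse of the encoding, again using the rounded luminance weights; this is an illustration of the arithmetic, not any particular receiver's circuitry.

```python
def component_to_rgb(y, r_minus_y, b_minus_y):
    """Recover RGB from luminance and the two colour-difference signals."""
    r = y + r_minus_y                     # Y + (R - Y) = R
    b = y + b_minus_y                     # Y + (B - Y) = B
    # Green is the remainder, since Y = 0.30R + 0.59G + 0.11B:
    g = (y - 0.30 * r - 0.11 * b) / 0.59
    return r, g, b
```

Feeding in a grey signal (zero colour difference) gives back equal red, green, and blue, and feeding in the differences for pure red reconstructs pure red, so no separate green-difference signal ever needs to be transmitted.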
For a monochrome set, it's much simpler: the RF signal is received, and the composite video signal is detected from it. That composite video signal is used to display a picture. The colour signal is ignored; the tuner bandwidth may not be wide enough to pass it through, and the video circuitry bandwidth mightn't be wide enough either, even if the tuner did pass it. If the colour signal does get through, it'll be seen as a fuzzy dot pattern on top of the picture; there's nothing in the monochrome receiver to make use of it, anyway.
Whether colour or monochrome, there's a common requirement for both: the television screen must be synchronised with the transmitted video signal, so that the picture is drawn in the right place. A video signal is a serialised signal: the picture is scanned from left to right across the screen, from the top down to the bottom. Sync signals indicate where the edges of the frames are, both horizontally and vertically.
For RGB systems the sync signals may be carried on separate cables (one for horizontal sync, the other for vertical sync), or the two may be combined into composite sync and supplied with just one cable, or that composite sync may be added to one or more of the video signals (adding it to the green signal is quite common). For all the other systems, composite sync is added to the luminance signal.
The encoded colour systems also have a colour synchronisation signal, the “colour burst.” This is added to the chrominance signal. It appears, briefly, before the start of each horizontal line of video. It's not something that's seen by the viewer (you); it's off the edge of the visible frame. In the receiver, the colour decoding circuitry uses it to align its colour sub-carrier oscillator with the encoder's.
The following two pictures (me having a play with one of the ABC's broadcast television cameras during their 2007 open day) show a natural colour image, and a natural pan-chromatic monochrome image (as used in the luminance video signal). Beyond the lack of colour, there's not much difference between the two of them. The relative brightness of objects looks similar in both, because the luminance signal has been made from the red, green, and blue video signals, mixed together in the right proportions to produce a natural-looking pan-chromatic response.
The next set of images are the colour-separated images, starting off with a colour separated all by itself (tinted pictures), with the adjacent image showing the tint removed, to give a monochrome representation of that colour alone. You'll see that these monochrome images are different from the pan-chromatic image (look at the relative brightness changes in these images of my pale blue jacket, the safety cone behind me on the left, and the coloured hats people are wearing).
Firstly, we filter to allow only the red light into the camera (the red image). Secondly, we show what the camera sends as the red video signal (the adjacent un-tinted monochrome picture, proportional to the amount of red light in the picture). This will look red, again, when the signal is fed to the red-video inputs of a video monitor screen, as it'll only be illuminating the red part of the display.
You can see how my jacket is looking a bit darker than it naturally should, because it doesn't reflect much red light. But anything that does reflect a lot of red light (my hands, the safety cone, anything that is white) will look very bright white. My flesh looks unnaturally bright, and the safety cone looks almost like a negative image (compared to all the others). The sky looks unnaturally dark, as there's very little red light in it, comparatively speaking.
Now we do the same thing again, this time with just the green light (the green light of the picture going into the camera, and the green video output). As before, this image will look green when fed to the green part of a video display.
You can see that my jacket is looking brighter, a bit too bright, but not radically so. There's any number of things in the picture that look almost right, but just a bit too bright. This is because green is something that we're predominantly sensitive to, so it's very close to the natural pan-chromatic look, but it's lacking the information from other colours, to darken some things down to their natural look, relatively speaking.
Lastly we do the same thing, this time with only the blue light (the blue light of the picture going into the camera, and the blue video output). Again, while the signal going through the camera doesn't actually have any blue tint to it (it's just a signal voltage indicating a brightness level), the image will be blue when displayed through the blue part of a video monitor.
Now you can see my jacket, and the sky, both look abnormally bright, because they do have a lot of blue in them. And the orange portions of the safety cone look very dark, because there's very little blue in the colour orange.
You may have noticed that the red and blue tinted images look rather dark, this does illustrate one problem with trying to film under coloured lighting. Video is not as good as our eyes, and you're left with a less than brilliant thing to look at.
Also, you may have noticed that the monochrome images are all fairly detailed. That's down to how good that particular camera is. But some cameras are rather fuzzy in the red and blue departments, partially down to how the camera is designed (some simply don't even try to have much resolution in the red and blue video channels), and partially down to the optics of the lens (the refractive indices are different for each colour, so each colour focuses at a different distance from the lens).
I'll redisplay the monochrome images, without the text between them, so you can have a better look at the differences between the images:
And another set of images, that display some more radical differences between the different monochrome sources, albeit with fewer things in them to give you a useful point of reference. Notice the differences in the brightness of the sky and the buildings.
It's a colour separation of a single image (North Terrace, in Adelaide, South Australia, as seen from the War Memorial), that shows how you can get some rather extreme differences when not using a pan-chromatic system. This may work well for artistic purposes, but you strike problems when you need to get realistic images.
Now we'll have a look at what the different component video signals look like. As before, there's the high-resolution luminance portion of the image, that gives us most of the picture content and detail. And there's the two colour-difference signals, which display only the differences between the luminance and the colour in question (red or blue).
The luminance and colour-difference signals will be combined together in the video monitor, to produce a full-colour image. The colour-difference signals are used, in effect, to paint the colours on top of the monochrome image.
You can see that there's not much of an image in the colour difference signals, anything uncoloured is a mid-grey in these examples. (In reality, the neutral point is at the black blanking level, and the colour-difference signal goes above and below the black level. But we can't show a below-black signal on here, so we've raised the reference level of the signals, biasing the neutral point around mid-grey.) Anything that is coloured (in the real picture) will be brighter or darker, with the difference from neutral being a measure of how colourful that part of the picture is, and in which direction (e.g. whether there's more blue there, or an absence of blue). In this case, the lack of detail's because the colours in the picture aren't that much different from the luminance (i.e. there's not much vibrant colour in the picture, other than the bright orange safety cones and fencing, and triax camera cable). And, that is why this system works fairly effectively—most pictures are not highly colourful, so you tend to get away with having a less detailed colour signal than the luminance signal.
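The way those illustration images were prepared can be sketched as a simple bias: shift the bipolar difference value so that neutral lands on mid-grey. This assumes difference values normalised to the range -1 to +1 and grey levels from 0 to 1, purely for illustration.

```python
def displayable(diff):
    """Map a bipolar colour-difference value (-1..+1) into a viewable
    grey level (0..1), with zero difference landing on mid-grey."""
    return min(1.0, max(0.0, 0.5 + diff / 2))
```

Zero difference (an uncoloured part of the picture) displays as mid-grey, while strong positive or negative differences push towards white or black respectively.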
In systems with low resolution colour signals, such as the encoded chrominance signal in PAL or NTSC video, the colour-difference signals will be fuzzy and blobby. Often, with a fairly significant amount of noise on top of it, too. A noiseless signal looks somewhat like the following two example pictures, having very little fine detail, at all.
Generally, television gets away with a very low resolution colour portion of the image, because the detail comes from something else (the luminance signal). However, certain video recording technologies make it an even lower resolution, still (VHS, heavy MPEG compression, etc.).
Anybody from a theatre background who tries to video-record something illuminated with deeply coloured lighting will often be sorely disappointed at the filmed results. You've got very little to see on screen; it's dark and murky. You need to illuminate a scene with much more power than you'd otherwise probably have used for stage work, and you're still left with a low-resolution colour issue that can only be partially addressed, even when using the very best of cameras. Part of the problem is on the camera side of things, the recording and playback system plays a part, and so does the display device. Any weak link in that chain makes it look bad.
If you're connecting a device to a video monitor, and they both have choices of different connection types, you should pick the best one possible. Some connection options are quite noticeably better than others. The fewer stages between source and display, the better the results (picture clarity, colour response, etc.). If you refer to the colour video encoding stages diagram (also displayed above) and the opposite colour decoding stages diagram, and take into account that one diagram leads to the other (follow the arrows through both diagrams) with composite video being in the middle, pick a connection that bypasses as many stages between source and display as possible (i.e. an RGB source connected directly to an RGB monitor is the best choice, it's the most direct signal route). Whatever you pick has to be the same on both sides; you can't connect different types of signals together. For analogue video, the following list is in order of best to worst choices:
If you have digital video connections, and you're dealing with signals which are digital, then you're best to use them, but passing analogue signals through digital connections isn't a good idea (it'll involve conversion to digital, and back to analogue, which will introduce losses). When you have analogue and digital signals to monitor, use different inputs on your monitor appropriate for each of them. For digital video, you have a list of choices like:
Audio-wise, you want to pick the same route as the video connections. If you're connecting the video digitally, you should do the audio the same way (particularly as sometimes there can be a delay between the sound and picture, and you want them to be in time with each other). With digital audio, you have connection choices of optical or electrical digital audio. Theoretically, the optical connections should present fewer problems than the electrical ones. And with analogue audio you have choices ranging from monophonic through to multi-channel, which you have to pick to suit the audio equipment available to you.
Whilst there are plenty of truly awful cables (bad soldering, corroding connectors and wiring, poor shielding, etc.), most of the special super-duper hideously expensive cables are a complete con job. Connections over short cables (such as under two metres) won't need any special cabling unless you have some really extreme operating conditions (such as being jammed around power transformers, or being near a radio-frequency transmitter, including some mobile phones). The average $5 cable is entirely adequate, and indistinguishable from one many times its price. Not to mention that having to replace one that gets lost or broken isn't damaging to your wealth.
It's only when the conditions become unfavourable that the expense of more special cabling becomes worthwhile. e.g. Tougher leads on equipment that's mobile, where cables will be flexed or dragged about, or connectors that may get squashed. Good shielding when using longer cables, or cables in electrically noisy environments. Proper 75Ω video cables on longer cables, or equipment with bad input or output video circuitry. Handy multi-core cables when you'd like one lead between equipment to carry audio and video signals, rather than a tangle of separate leads. And specialist multi-core cables that provide features only available through a specific type of connector, such as HDMI.
The type of metal in the cable doesn't make any difference to signal quality. Special cables like “oxygen-free copper” don't affect signal quality; that just delays the onset of corrosion (which looks bad, but won't make the slightest difference when it's the outside of a wire that's corroding). Likewise, the type of metal in the plugs doesn't affect the signal; it's another case of avoiding corrosion that interferes with conductivity. Gold-plated, or solid gold, connections simply don't corrode; that's their only benefit. Silver-plated plugs are slightly cheaper, and the tarnish (corrosion) on silver conducts just as well as the untarnished silver. But using expensive gold or silver plugs with a corroding tin-plated socket is a waste of time, and vice versa. Both plug and socket should really be the same type of metal; matching metals is the best way to avoid corrosion, which is caused by air and moisture reacting with the metal, and by the contact between, and passing of electricity through, dissimilar metals.
There's no such thing as “one-way” or “directional” cables.
When I select cables, there are only a few things I generally concern myself about: Nice shiny metal without any obvious corrosion. Decent strain relief between cable and connector, so wiring doesn't break. Adequate shielding. Having the right connectors at each end, so I can avoid having to use adaptors. And, a nice snug fitting connection between plug and socket.
On that last note, RCA (or Cinch) plugs are the worst connectors in the world for being a bad fit between plug and socket. And all BNC connectors for video use should be 75Ω connectors. The 50Ω connectors (as used for RF and computer networking) have different physical dimensions, which makes for unreliable connections when a smaller plug fits sloppily into a larger socket (of the other impedance), and can lead to socket damage when the larger plug is forced into a smaller socket. Also, the standard cable sizes of 50Ω and 75Ω cable are different from each other, so only the right connector fits properly on the right cable. The impedance difference between them is next to negligible on short video leads.
In the well over twenty-five years that I've been patching equipment together, I've rarely found it necessary to bother with proper 75Ω video cable unless the leads are longer than five metres. It's next to impossible to see, or measure, a difference between video cable and ordinary shielded audio cable, between a player and video monitor. We'll use proper 75Ω BNC leads between equipment, for consistency and reliability's sake, but I have no qualms about using ordinary audio cable for any patching, if that's all that's available. Especially when connecting the video output from a recorder to a video monitor, and the equipment on both sides only has ordinary RCA connectors.
Now that the average person has access to video with high-resolution colour signals (e.g. digital television receivers, and DVDs played through component video connections, etc.), it's become much more apparent how bad some of the others are (e.g. VHS). The same was noticed when VCRs and rental movies came out—how bad some people's television reception was, and how well their television could display a picture that wasn't received off-air.
Digital video has helped, quite a lot, in this regard, but it's still no panacea. Going digital doesn't automatically make things better, and in some cases it makes things worse. Good quality analogue is still “good quality,” and all video and sound sources start out as analogue, end up as analogue, and will require converting back and forth at least twice (if being used in a digital system). But cheap digital gear in the home has tended to be better than the cheap analogue gear at home.
Whatever system's used, the camera has to be good, in the first place. And editing digital video often involves decoding and re-encoding, and recompressing video in a lossy manner. Not to mention that consumer level gear, and even industrial level equipment, is far removed from the quality of professional broadcast gear, as well as the price. If you want your personal, or business, video to look like broadcast television or Hollywood movies, you can expect to have to pay an awful lot more for it. e.g. Hiring a domestic camera may cost you $70 a day, a professional industrial camera may be around $250 a day, and a broadcast camera $900 per day. And that's just the rates for the camera equipment, never mind the operator and the rest of the production equipment and post-production work involved. Expect to pay for what you really need, and to get what you've really paid for.