Interlaced video exploits various technical possibilities of video broadcasting to reduce the frame rate (and therefore the transmission bandwidth), while avoiding a rate so slow that the eyes can't cope with it (bad strobing). (See persistence of vision.)
To avoid noticeable strobing, the scan rate must be fast, so a rate of around 50 Hz is used (as in the Australian and U.K. PAL TV system). But to reduce transmission bandwidth, only part of the image is transmitted with each scan; each subsequent scan transmits the parts of the image that were omitted previously.
Sequential scanning scans an image rather like the way you read this page: starting at the top of the image, a line is scanned across it from left to right, then another line is scanned (from left to right again) below it, until the bottom of the image is reached, and then the scan recommences at the top of the image.
Interlaced scanning is almost the same, but every second line is skipped during the first vertical scan; then, during the next vertical scan, the other lines (the ones that were skipped the first time) are scanned. It's a two-stage process: two scans are required to cover the entire image. Each individual scan produces one field of video; together, the two fields make up the whole frame.
Thus each transmitted scan uses only half the bandwidth (of a sequentially scanned image at the same scan rate), yet maintains a scan rate high enough to eliminate noticeable strobing (50 Hz), and an image repetition rate fast enough (25 Hz) to simulate live moving pictures.
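The two-field idea above can be sketched in a few lines of code. This is a minimal illustration (not any real broadcast implementation), assuming a frame is simply a list of scan lines: field one takes the even-indexed lines, field two the odd-indexed ones, and weaving the two fields back together reconstructs the full frame.

```python
def split_fields(frame):
    """Split a frame (a list of scan lines) into its two fields."""
    return frame[0::2], frame[1::2]   # even-indexed lines, odd-indexed lines

def weave(field1, field2):
    """Interleave two fields back into one full frame."""
    frame = []
    for a, b in zip(field1, field2):
        frame.append(a)   # line from the first field
        frame.append(b)   # line from the second field (the one "in between")
    return frame

frame = ["line %d" % n for n in range(6)]
f1, f2 = split_fields(frame)
print(f1)                       # ['line 0', 'line 2', 'line 4']
print(f2)                       # ['line 1', 'line 3', 'line 5']
print(weave(f1, f2) == frame)   # True
```

Each call to `split_fields` shows why the bandwidth halves: either field, sent on its own, carries only half the lines of the full frame.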
Two immediately obvious problems are caused by interlaced scanning:
Very highly detailed objects can strobe if they carry an image feature small enough to be scanned by only one of the fields (usually things with fine patterns on them, like men's suit jackets, and some computer graphics).
Trying to get a still image from a VCR (or other frame-store device) of a fast-moving shot can result in a strobing image. Because the second field was scanned at a later moment in time, the object will have moved from the position it was in during the first field, and displaying both fields will show an image that strobes rapidly between the two moments in time.
The simple solution is to ensure that the still-replay device shows only one field's video information, showing the same field information for both scans. This eliminates the strobing, but it does reduce the image to only half the number of lines down the screen (you're missing out on the scan lines between this field's scan lines).
For digital systems, this is easy enough to do: all they have to do is replicate the data from the first field into the next one, while still keeping all the scan-system timings running correctly. They could even sum the two fields together to get the full frame, with (a bit) more resolution, but anything that was moving would be blurred.
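The "show one field twice" trick can be sketched as below. This is only an illustration, again assuming a frame is a list of scan lines: the chosen field's lines are replicated into the skipped positions, which removes inter-field strobing at the cost of half the vertical resolution.

```python
def still_from_one_field(frame, field=0):
    """Rebuild a full-height still from a single field by line doubling.

    frame: a list of scan lines; field 0 holds the even-indexed lines,
    field 1 the odd-indexed ones.
    """
    lines = frame[field::2]      # keep only the chosen field's lines
    doubled = []
    for line in lines:
        doubled.append(line)
        doubled.append(line)     # replicate into the skipped line position
    return doubled

# Uppercase lines belong to one field, lowercase to the other.
frame = ["A", "b", "C", "d"]
print(still_from_one_field(frame))            # ['A', 'A', 'C', 'C']
print(still_from_one_field(frame, field=1))   # ['b', 'b', 'd', 'd']
```

Notice the moving-object lines ("b", "d") are discarded entirely, which is exactly why the strobing disappears.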
For mechanical systems (VCRs, for example), getting a good still image is harder:
The heads must be positioned so that only one field of the recorded signal is played back.
You may see the picture flickering at the top of the screen, or jittering up and down, because the interlaced scan signal timing has been destroyed.
Normally a frame starts at the beginning of a line: the first field ends half way through a line, the second field starts half way through a line and ends at the proper end of a line, and the next frame then starts at the beginning of a new line, and so on. If you only scan one field, you've destroyed that relationship between the end and start positions of the field scans.
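The half-line relationship falls straight out of the arithmetic, using the 625-line, 50-fields-per-second PAL numbers (the Australian/U.K. system mentioned earlier):

```python
# Standard PAL scanning parameters.
lines_per_frame = 625
fields_per_frame = 2
field_rate = 50.0            # fields per second (25 frames per second)

lines_per_field = lines_per_frame / fields_per_frame
print(lines_per_field)       # 312.5 -> each field ends (or starts) mid-line

line_rate = lines_per_field * field_rate
print(line_rate)             # 15625.0 lines per second (the PAL line frequency)
```

Because 625 is odd, each field contains 312.5 lines, and it's that leftover half line that offsets the second field's scan so its lines land between the first field's.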
The heads are now scanning the tape at a different speed (because the tape is no longer moving as well as the heads), so the scan frequencies will change slightly. This isn't usually all that noticeable (the screen width and height will change ever so slightly), but unless the video drum speed is modified a bit, the colours will shift position (you'd see the colour information slip sideways, relative to the monochrome image).
And one less obvious interlacing problem occurs all too often:
Some televisions don't have a highly accurate scanning system, so the second field isn't always scanned exactly between the scan lines of the previous field. This results in (at least) reduced vertical resolution. Poorly aligned VCR heads can also cause this problem on televisions that are otherwise (for live broadcasts) okay. And VCRs operating in paused or special playback modes (slow, reverse, etc.) usually can't provide a true interlaced video signal, even when their heads are aligned properly.
By the way, MPEG video compression uses a similar theory of spreading the transmission of data across more than one occurrence (to reduce transmission bandwidth), but in a different application. A series of frames is analysed, and any part of the image that is common to all frames is sent only once. Subsequent frames use data that has already been sent, adding to it to build the current frame.
In other words (and over-simplified), only the parts of the image that are different (any changes) are transmitted.
In reality, key frames containing more than just the recent changes to the image are also transmitted periodically. Without these, you wouldn't be able to view an MPEG transmission at all if you hadn't received the first initial frame. And it's not really possible for one single (initial) frame to provide the base data for all subsequent frames (unless the whole movie had the same picture, with never a different camera shot at all).
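The over-simplified version of this idea can be sketched as difference coding. This is only an illustration of the principle, nothing like the real MPEG codec: a key frame is sent in full, then each following frame is sent as just the list of pixels that changed.

```python
def encode(frames):
    """Yield a full key frame, then per-frame lists of (index, value) changes."""
    key = frames[0]
    yield ("key", key)                      # sent in full, periodically in reality
    prev = key
    for frame in frames[1:]:
        changes = [(i, v) for i, (p, v) in enumerate(zip(prev, frame)) if p != v]
        yield ("delta", changes)            # only the changed pixels
        prev = frame

def decode(stream):
    """Rebuild the frame sequence by applying each delta to the previous frame."""
    frames = []
    current = None
    for kind, data in stream:
        if kind == "key":
            current = list(data)
        else:
            current = list(current)         # copy, then patch in the changes
            for i, v in data:
                current[i] = v
        frames.append(current)
    return frames

frames = [[1, 1, 1, 1], [1, 2, 1, 1], [1, 2, 3, 1]]
print(decode(list(encode(frames))) == frames)   # True
```

The need for periodic key frames is visible here too: lose one "delta" message mid-stream and every later frame decodes wrongly until the next "key" arrives.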