
Martin Lewis

Video Engineering

The principles of the 625 line monochrome television

The principles of the monochrome television system have significantly defined the structure of the modern colour television [Benoit 1997: 3]. The illusion of continuous motion is achieved by projecting a number of still pictures, called frames, in rapid succession. The brightness of each frame lingers on the retina for about 1/16th of a second, a phenomenon known as the persistence of vision. Therefore, a system with a frame rate greater than 16 frames per second (fps) is capable of producing a smooth image [Gupta 2006: 5]. For practical reasons the frame rate has been found to work best when derived from the mains frequency; consequently 25fps is used in the UK and 30fps in the USA. This reduces the undesired effects of low frequency mains hum [Gulati 2005: 22].

Just as the eye senses variations in light, a monochrome camera captures a frame by focusing it onto a photosensitive target. Each image falls onto a number of picture elements (pixels) arranged in lines; these elements are randomly distributed but can be thought of as forming a rectangular structure [Gupta 2006: 3]. Each element produces an electrical signal proportional to the intensity of incident light; this is 0.7V for peak white, falling to 0V for black. To simultaneously reproduce each element at its corresponding spot on a monitor would be impractical, so the signal is extracted sequentially in a process known as progressive scanning [Fisher 2001: 510]. A 625 line system scans each element from left to right at high speed, producing a continuous signal for each line. The scanning continues vertically down the picture until all the lines have been sent, at which point the process starts again from the top. For a 625 line system running at 25fps, 15,625 lines are scanned per second; this is known as the line frequency [Fisher 2001: 511]. Horizontal and vertical scanning is achieved by deflecting a scanning spot with a linear current.
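The line frequency quoted above follows directly from the frame structure; a quick check (Python is used for illustration throughout):

```python
# 625-line, 25 fps system: every line of every frame is scanned once per frame.
lines_per_frame = 625
frames_per_second = 25

line_frequency = lines_per_frame * frames_per_second
print(line_frequency)  # 15625 lines per second, i.e. 15.625 kHz
```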
At the end of every line the beam must rapidly return to its initial position in a process called retrace. During retrace the signal is blanked (0V) so that no picture is produced [Gulati 2006: 17]. The vertical current deflects the beam towards the bottom of the raster but at a much slower speed. At the end of the frame the scanning spot retraces to the start, ready for the next frame [Fisher 2001: 511]. At the receiver the reverse process is used to focus a beam of electrons onto the phosphors
of the picture tube for picture reproduction [Gupta 2006: 41].

Fig. 1. Horizontal and vertical scanning and retrace [Gulati 2006: 17]


It takes 64µs to scan a line, of which 52µs is the active line period and 12µs is line blanking. The blanking period consists of three sections: the front porch, line sync and back porch; the timings are summarised in Table I. The front porch provides a transitional period for the picture signal to fall to blanking level [Gulati 2006: 40]. This is chosen so that a line ending in peak white can reach blanking level before the start of the synchronisation pulse, thus isolating the picture detail. The line sync is used to trigger horizontal retrace; it is vital that this occurs at the same instant for the camera and the reproduction system to avoid distortion [Gupta 2006: 61]. The back porch provides time for retrace to finish before the polarity of the current is reversed to scan the next line. The front and back porch are at blanking level and thus help preserve the DC content of the picture information [Gulati 2006: 41]. To distinguish them from picture information, sync pulses are given a negative voltage (-0.3V), which ensures that they remain invisible. During vertical retrace (1.28ms) horizontal scanning continues; consequently 50 lines are blanked per frame, leaving 575 active lines instead of 625 [Fisher 2001: 511].
Table I. Horizontal line period [Fisher 2010: Monochrome lecture]

Period                 Time (µs)
Front porch            1.5
Sync pulse             4.7
Back porch             5.8
Horizontal blanking    12
Visible line time      52
Total line time        64
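The figures in Table I can be cross-checked: the three blanking components sum to the 12µs blanking period, and blanking plus the visible line time gives the 64µs total:

```python
# Horizontal line timings from Table I, in microseconds.
front_porch = 1.5
sync_pulse = 4.7
back_porch = 5.8
visible_line = 52.0

blanking = front_porch + sync_pulse + back_porch
total_line = blanking + visible_line
print(blanking, total_line)  # 12.0 64.0 (to within floating-point rounding)
```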

Although 25fps is enough to give the illusion of continuity, it is not enough to prevent the screen from flickering once blanking is introduced [Gulati 2005: 21]. Flicker is extremely objectionable but can be overcome by increasing the refresh rate. Interlacing splits each frame into two fields: each frame is scanned twice by increasing the downward rate of travel of the scanning spot, which doubles the refresh rate. Only the odd lines are scanned on the first field and the remaining even lines are filled in on the second field [Fisher 2001: 511]. Of course, scanning 50 complete frames per second would also eliminate flicker, but this would double the signal bandwidth [Gulati 2006: 28]. Consequently the introduction of interlacing allows the system to preserve line resolution whilst doubling the refresh rate to 50Hz without increasing the bandwidth [Benoit 1997: 4].

The width/height ratio of a picture is called the aspect ratio and for a monochrome television is 4:3 [Gupta 2006: 9]. A common aim for all reproduction systems is to produce equal vertical and horizontal resolution. Since the pixels are not arranged in an orderly manner, a scanning beam may scan two pixels simultaneously, which degrades the resolution; this loss is quantified by the Kell factor and results in 25-35% of the pixels being lost [Gupta 2006: 47]. By considering the Kell factor (0.73 for the UK) and assuming that the number of elements that can be resolved is equal to the number of scanned lines, a monochrome television requires approximately 570 elements on each line for equal vertical/horizontal resolution [Fisher 2001: 510].
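The element count can be sketched numerically (a rough check only; sources round the Kell factor and the active line count differently, so the result lands near, rather than exactly on, the 570 quoted):

```python
active_lines = 575      # visible lines after vertical blanking
kell_factor = 0.73      # fraction of the lines effectively resolved
aspect_ratio = 4 / 3    # picture width / height

# Effective vertical resolution, then the number of elements per line
# needed for equal horizontal resolution.
effective_vertical = active_lines * kell_factor
elements_per_line = effective_vertical * aspect_ratio
print(round(elements_per_line))  # ~560, of the same order as the ~570 quoted
```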


Fig. 2(a) Fine detail, horizontal black and white lines and (b) signal output [Gulati 2006: 25]

In the case of fine detail (figure 2(a)) the system must be capable of resolving each of the 570 elements. Such a scene would produce a sequence of square waves switching between 0.7V (peak white) and 0V (black), as shown in figure 2(b). Hence the channel bandwidth can be calculated from the frequency of the resultant square wave [Gulati 2006: 26]. The square wave completes 285 complete cycles per line (one cycle per black-white pair), thus its time period (th) can be calculated:

th = 52µs / 285 ≈ 0.18µs

The frequency, and hence the channel bandwidth, is:

fh = 1/th = 285 / 52µs ≈ 5.5MHz
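The same arithmetic, worked through numerically:

```python
active_line = 52e-6        # active line period in seconds
cycles_per_line = 570 / 2  # 285: one square-wave cycle covers a black-white pair

t_h = active_line / cycles_per_line  # period of the square wave
f_h = 1 / t_h                        # its fundamental frequency
print(f_h)  # ~5.48e6 Hz, i.e. approximately 5.5 MHz
```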

Ultimately the viewing distance determines the resolution required. Ideally this should be small enough for the eye to resolve picture detail, but not so small that separate picture elements become visible. The best viewing distance varies from person to person but lies somewhere between 3 and 8 times the picture height, with the majority preferring a distance of 5 times the picture height [Gupta 2006: 9]. Sound is invariably associated with television and so is reserved bandwidth at the high end of the video spectrum [Gulati 2006: 123]. At the transmitter the desired audio signal is frequency modulated onto a sound carrier, which positions it at the extremity of the upper sideband, 5.5MHz above the picture carrier [Benoit 1997: 6]. The FM sound signal occupies a frequency spectrum of about 150kHz and must be arranged so that its sidebands are interlaced with the harmonics of the monochrome signal [Patchett 1974: 122].

Colour Television

In the UK colour broadcasts officially started in 1967. With the introduction of colour television it was essential that the system remained compatible with monochrome [Fisher 2001: 514]. This meant that it had to operate in the same bandwidth and use the same standards, i.e. lines and fields, as monochrome. In addition, it had to produce a satisfactory picture on a monochrome receiver without modification. Conversely, as not all programmes are in colour, it was necessary for the colour system to reproduce a satisfactory black and white picture [Patchett 1974: 18]. Unlike a monochrome camera, a colour camera produces three signals or components: red, green and blue. Each is essentially a monochrome video signal representing a primary colour and therefore has the same bandwidth as the monochrome luminance (Y) [Watkinson 2010: 242]. Although the RGB and Y signals are incompatible, RGB can be converted to luminance by correctly weighting each component according to the eye's sensitivity to that colour. Thus the overall luminance of a scene is the sum of the individual contributions of each colour [Gupta 2006: 20]:

Y = 0.3R + 0.59G + 0.11B


If Y is derived in this way, a monochrome monitor will display the same results as if a monochrome camera had been used [Watkinson 2010: 243].
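The weighting equation can be expressed directly; the weights sum to one, so fully driven primaries (R = G = B = 1) yield full luminance:

```python
def luminance(r, g, b):
    """Monochrome-compatible luminance from RGB components (0-1 range)."""
    return 0.3 * r + 0.59 * g + 0.11 * b

print(luminance(1.0, 1.0, 1.0))  # ~1.0: peak white gives full luminance
print(luminance(0.0, 1.0, 0.0))  # 0.59: green dominates perceived brightness
```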

Fig. 3. Formation of luminance (Y) from a colour camera output [Gulati 2006: 500]

Instead of transmitting all three colour signals, only two need be sent, as the third can be derived from Y. To ensure no unnecessary brightness information is sent, luminance is subtracted from the red and blue signals to produce the colour difference signals (R-Y) and (B-Y) [Gulati 2006: 501]. The green signal isn't used as it makes the greatest contribution to Y; consequently the amplitude of (G-Y) is the smallest and would be the most susceptible to noise [Watkinson 2010: 243]. Therefore, instead of transmitting red, green and blue signals, the colour system codes the information into luminance and chrominance signals, the latter containing hue and saturation information. The advantage is that this form can easily be made compatible [Patchett 1974: 25].
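A minimal sketch of this coding step (function names are illustrative, not from any standard): Y, (R-Y) and (B-Y) are formed at the coder, and green is recovered at the receiver from the other three signals.

```python
def encode(r, g, b):
    """Form Y and the two colour difference signals."""
    y = 0.3 * r + 0.59 * g + 0.11 * b
    return y, r - y, b - y

def decode(y, r_y, b_y):
    """Recover RGB; green is derived from Y and the other two components."""
    r = y + r_y
    b = y + b_y
    g = (y - 0.3 * r - 0.11 * b) / 0.59
    return r, g, b

y, r_y, b_y = encode(0.8, 0.4, 0.2)
print(decode(y, r_y, b_y))  # ~(0.8, 0.4, 0.2): the round trip recovers all three
```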


Studies into visual perception show that the eye is less acute for chrominance than for luminance transients. In fact, for very small objects the eye can only perceive the brightness rather than the colour. As colour difference signals only convey chrominance, their upper frequency limit can be considerably restricted to reduce bandwidth [Watkinson 2010: 244]. The reduction is typically to one half or one quarter depending on the application and can result in a reduction from 5.5MHz to 1.5MHz [Gupta 2006: 69]. In many ways colour difference signals represent an early application of perceptual coding: a saving of bandwidth by expressing the signal according to the way the eye operates. Colour difference signals are such a big advantage that many cameras convert their output directly to Y, (R-Y), (B-Y) [Watkinson 2010: 244].

Channel Coding

The chrominance signals need to be transmitted within the same bandwidth that is used for monochrome, i.e. within the band already occupied by the luminance signal [Patchett 1974: 119]. At first this may seem impossible, but on closer inspection it can be achieved by frequency multiplexing. Both luminance and chrominance signals have non-continuous spectra made up of discrete strips centred on multiples of line frequency [Gulati 2006: 67]. Thus there are spaces available for the chrominance to occupy; this is known as frequency interleaving. The colour signals must first be modulated onto a carrier frequency to reduce crosstalk [Benoit 1997: 8]. Figure 4 shows that by choosing a carrier that is an odd multiple of half the line frequency, the harmonics of the colour subcarrier can be engineered to fit between the harmonics of the luminance signal. This allows almost perfect separation of colour and luminance through the use of comb filters in the receiver [Gulati 2006: 529].

Fig. 4. Frequency interleaving of the chrominance and luminance signals [Gulati 2006: 530]
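Frequency interleaving can be illustrated numerically. With the sub-carrier placed on an odd multiple of half the line frequency (the basic NTSC arrangement; PAL refines this with a quarter-line offset), its energy sits exactly midway between the luminance harmonics (a sketch with arbitrarily chosen harmonic indices):

```python
f_h = 15625.0  # line frequency in Hz

# Luminance energy clusters at integer multiples of line frequency;
# chroma on an odd multiple of f_h/2 lands midway between those clusters.
for n in range(280, 284):
    luma = n * f_h
    chroma = (2 * n + 1) * f_h / 2
    print(luma, chroma - luma)  # each chroma line sits f_h/2 above a luma harmonic
```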


Colour difference signals contain both hue and saturation information, which makes it difficult to modulate both onto one and the same carrier in a way that allows both to be recovered [Gulati 2006: 528]. Two separate modulators are actually used, one for (B-Y) and one for (R-Y); however, the carrier frequency fed to the (R-Y) modulator is given a relative phase shift of 90° with respect to the other, and thus the two sub-carriers are said to be in quadrature. After modulation the two are combined to yield a single sub-carrier phasor [Patchett 1974: 134]. The amplitude of the modulated signal represents the saturation and its phase represents the hue of the scene. Maximum amplitude therefore corresponds to 100% saturation and zero amplitude to an uncoloured (white or grey) scene [Fisher 2001: 515]. In practice the colour difference signals are first weighted to reduce over-modulation, which would result in colour distortion. A weighting factor of 0.877 is used for (R-Y) and 0.493 for (B-Y); these values are restored to their original levels at the receiver for proper reproduction [Gulati 2006: 534]. To demodulate the chrominance information the system must regenerate the colour subcarrier at the receiver [Gupta 2006: 69]. To ensure a correct frequency and phase relationship, a short sample of the sub-carrier (8 to 11 cycles), called the colour burst, is sent to the receiver. This is located in the back porch of the horizontal blanking period and is of an amplitude that does not interfere with the sync pulse [Gulati 2006: 532]. Phase errors of either the luminance signal or the sub-carrier itself are termed differential phase distortion. The American NTSC system is very sensitive to phase distortion and therefore produces unwanted tint errors, especially in the region of flesh tones [Gulati 2006: 547].
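The quadrature scheme can be sketched as follows: the weighted colour difference signals form the two axes of a phasor whose amplitude carries the saturation and whose angle carries the hue (a simplification of the full modulator):

```python
import math

def chroma_phasor(r_y, b_y):
    """Combine the weighted colour-difference components into one phasor."""
    v = 0.877 * r_y  # modulates the carrier shifted by 90 degrees
    u = 0.493 * b_y  # modulates the reference-phase carrier
    amplitude = math.hypot(u, v)          # represents saturation
    hue = math.degrees(math.atan2(v, u))  # represents hue
    return amplitude, hue

print(chroma_phasor(0.0, 0.0))  # (0.0, 0.0): an uncoloured scene has no chrominance
```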

Fig. 5. A composite colour signal. As seen from the relative amplitude the signal would cause gross over-modulation. Therefore in practice the colour difference signals are weighted [Gulati 2006: 534]

PAL

Three colour coding systems exist for colour transmission: NTSC, PAL and SECAM. All three share similar arrangements, and it is their method of sub-carrier modulation that characterises each system [Gulati 2006: 528]. In the UK, the BBC carried out a series of colour transmission tests prior to choosing a colour coding system. At first the NTSC system was used, but after comparative tests it was established that PAL was the most acceptable. The principal object of PAL was to avoid the shortcomings of the NTSC system, such as the phase errors that occur in transmission over long circuits and the need for a hue control in the receiver [Pawley 1972: 518].


In a PAL coder (B-Y) and (R-Y) are again modulated by the same carrier frequency; however, the phase of the (R-Y) modulator is reversed from +90° to -90° at the end of every line. This ensures that any phase distortions are in opposite directions on successive lines, which, when processed by the eye, results in the cancellation of the error [Gulati 2006: 544]. For example, if there is a phase error, lines 1, 3, 5, etc. will appear too red and lines 2, 4, 6, etc. too blue [Patchett 1974: 184]. As the lines are close together, the eye averages the colours, producing the correct magenta hue but at reduced saturation. This is less objectionable than hue errors, and it is from this feature that the system derives its name: phase alternating line [Fisher 2001: 516]. The use of the eye as an averaging mechanism is the basis of simple PAL. However, as the viewing distance becomes shorter the eye begins to distinguish between the lines and the colour errors become more pronounced [Patchett 1974: 184]. Using a 64µs delay line makes a remarkable improvement: it allows the signals on two consecutive lines to be averaged before they are presented to the eye; this is known as PAL-D [Benoit 1997: 11]. One disadvantage is that it results in chrominance errors at horizontal edges; consequently the first and last lines of rectangular objects contain unwanted chrominance information [Gulati 2006: 547]. If the sub-carrier frequency is chosen to be a multiple of half the line frequency, as is done in NTSC, an annoying vertical line-up of dots occurs for certain hues. This is due to the phase reversal of the sub-carrier [Gulati 2006: 544]. To overcome this difficulty, an odd multiple of one-quarter line frequency is used instead. This produces an effective three-quarter line offset for one of the signals and one-quarter for the other. Thus the dot pattern wavers diagonally downwards from left to right, resulting in better cancellation [Sims 1970: 59].
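The averaging argument can be made concrete. With the phase alternation, a fixed transmission phase error lands on opposite sides of the true hue on successive lines, so the mean of the pair restores the hue while the saturation falls by the cosine of the error (a simplified model of the effect, not of the demodulator):

```python
import math

def averaged_hue(true_hue_deg, phase_error_deg):
    """Average the hues seen on two successive PAL lines."""
    line_odd = true_hue_deg + phase_error_deg   # error adds on one line
    line_even = true_hue_deg - phase_error_deg  # and subtracts on the next
    return (line_odd + line_even) / 2

hue = averaged_hue(60.0, 10.0)
saturation_factor = math.cos(math.radians(10.0))
print(hue, saturation_factor)  # 60.0 ~0.985: hue restored, saturation slightly reduced
```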
Further improvement can be made by adding another half cycle per field (25Hz) to achieve dot pattern interlacing, which renders the pattern less visible. The sub-carrier frequency can be expressed as:

fsc = (283¾ × fh) + 25Hz = (283.75 × 15,625Hz) + 25Hz = 4,433,618.75Hz
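The stated sub-carrier frequency follows directly from the quarter-line offset plus the 25Hz addition:

```python
line_freq = 15625.0  # line frequency in Hz

# 283 3/4 times line frequency (an odd multiple of quarter-line frequency),
# plus 25 Hz for dot-pattern interlacing.
f_sc = (1135 / 4) * line_freq + 25
print(f_sc)  # 4433618.75 Hz = 4.43361875 MHz
```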

Thus PAL uses a line-locked sub-carrier frequency of 4.43361875MHz, which is generated using a crystal-controlled oscillator [Benoit 1997: 11]. Compatibility relies on the fact that the luminance and chrominance signals can be transmitted without interference. In reality cross-luminance arises as the chrominance crosses into the upper sideband of the luminance channel [Fisher 2001: 516]. In transmission it is usual to include a notch filter at the sub-carrier frequency of the chrominance channel to reduce crosstalk. Any remaining low frequency chrominance is detected by the luminance demodulators and translated into low frequency noise [Sims 1970: 68]. This produces crawling patterns at coloured edges, a problem that is most pronounced at the boundaries of complementary colours.


The analogue compression achieved by colour coding is not without penalty and inevitably introduces artefacts. Moreover, PAL transmission arrangements tend to be more complicated than those for NTSC, and the receivers are more complex and therefore more expensive [Sims 1970: 136]. In the UK, PAL has proved to be thoroughly practical, being both resistant to differential phase distortion and capable of producing excellent pictures. It is robust, giving acceptable reception in difficult conditions and satisfactory black-and-white pictures on monochrome receivers [Pawley 1972: 518]. In conclusion, the advantages of PAL far outweigh the disadvantages.

References:
Benoit, H. 1997: Digital Television: MPEG-1, MPEG-2 and Principles of the DVB System (London: Arnold)
Fisher, D. 1994: Chapter 26: Television, in John Borwick, ed., Sound Recording Practice (Oxford: OUP), pp. 510-527
Fisher, D. 2010: Television Principles: Institute of Sound Recording Lecture (14/10/10)
Gulati, R. R. 2006: Monochrome and Colour Television, 2nd edition (New Delhi: New Age International Ltd)
Gupta, R. G. 2006: Television Engineering and Video Systems (New Delhi: McGraw-Hill)
Patchett, G. N. 1974: Colour Television: With Particular Reference to the PAL System (London: Norman Price)
Pawley, E. 1972: BBC Engineering 1922-1972 (London: BBC Publications)
Sims, H. V. 1970: Principles of PAL Colour Television and Related Systems (London: Butterworth and Co.)
Watkinson, J. 2010: The MPEG Handbook (London: Focal Press)
