COMMUNICATION
Prepared according to Anna university syllabus R-2017
(Common to III semester-CSE/IT )
G. Elumalai, M.E.,(Ph.D)
Assistant Professor (Grade I)
Department of Electronics and Communication Engineering
Panimalar Engineering College
Chennai.
Dear Students,
We are extremely happy to present the book “Analog and Digital
Communication” for you. This book has been written strictly as per the
revised syllabus (R2013) of Anna University. We have divided the subject
into five units so that the topics can be arranged and understood properly.
The topics within the units have been arranged in a proper sequence
to ensure smooth flow of the subject.
Unit I introduces the basic concepts of communication, the need for
modulation and the different types of analog modulation (amplitude
modulation, frequency modulation and phase modulation).
Amplitude Shift Keying (ASK) – Frequency Shift Keying (FSK) Minimum Shift
Keying (MSK) –Phase Shift Keying (PSK) – BPSK – QPSK – 8 PSK – 16 PSK - Quadrature
Amplitude Modulation (QAM) – 8 QAM – 16 QAM – Bandwidth Efficiency– Comparison of
various Digital Communication System (ASK– FSK – PSK – QAM).
Advanced Mobile Phone System (AMPS) - Global System for Mobile Communica-
tions (GSM) - Code division multiple access (CDMA) – Cellular Concept and Frequency
Reuse - Channel Assignment and Handoff - Overview of Multiple Access Schemes - Satellite
Communication - Bluetooth
TABLE OF CONTENTS
Noise and Distortion
2. Input transducer
3. Transmitter
4. Communication channel
5. Noise
6. Receiver
7. Output transducer
1.2 NOISE
1. Internal noise
2. External noise
1. Shot noise
2. Thermal Noise
3. Partition Noise
4. Flicker Noise
ANALOG AND DIGITAL COMMUNICATION
1. Natural Noise
2. Manmade Noise
Where,
I0 = DC current in amperes
K = Boltzmann constant
ANALOG COMMUNICATION

Noise figure is defined as the ratio of the signal-to-noise power
ratio supplied to the input terminals of a receiver (SNRi) to the
signal-to-noise power ratio supplied to the output terminal (or) load
resistor (SNRo).

Therefore,

Noise figure (F) = (SNR)i / (SNR)o
(Figure 1.1(a): Receiver equivalent circuit with source resistance Ra,
amplifier of gain A, load resistance RL and output voltage V0.)

1. Input impedance Rt
2. Output impedance RL
From the figure 1.1(a), we can obtain the signal input voltage Vsi and
power Psi as

Vsi = Vs·Rt / (Ra + Rt)    ...(1)

and Psi = Vsi² / Rt    ...(2)
Similarly, the noise input voltage Vni and power Pni can be
calculated.

We know that

Vni = √(4KTB·R)    ...(3)

Taking R as the parallel combination of Ra and Rt,

Vni² = 4KTB·Ra·Rt / (Ra + Rt)    ...(4)

and Pni = Vni² / Rt

        = [4KTB·Ra·Rt / (Ra + Rt)] / Rt

        = 4KTB·Ra / (Ra + Rt)    ...(5)
SNRi = [Vsi²·Rt / (Ra + Rt)²] / [4KTB·Ra / (Ra + Rt)]

     = Vsi²·Rt / [4KTB·Ra·(Ra + Rt)]    ...(6)
Step 4: Determination of signal output power Pso

Pso = A²·Vsi² / RL    ...(7)

Substituting equation (1) in (7), we get

Pso = A²·[Vsi·Rt / (Ra + Rt)]² / RL

Pso = A²·Vsi²·Rt² / [RL·(Ra + Rt)²]    ...(8)
Step 5: Determination of noise output power Pno

Step 6: Determination of the output signal-to-noise ratio

SNRo = Pso / Pno    ...(9)

Using equations (8) and (9), we get

SNRo = A²·Vsi²·Rt² / [(Ra + Rt)²·RL·Pno]    ...(10)
Step 7: Calculation of noise figure (F)

F = SNRi / SNRo

  = [Vsi²·Rt / (4KTB·Ra·(Ra + Rt))] ÷ [A²·Vsi²·Rt² / ((Ra + Rt)²·RL·Pno)]

F = (Ra + Rt)·RL·Pno / (4KTB·Ra·Rt·A²)    ...(11)
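The defining ratio F = (SNR)i/(SNR)o can be sketched in a few lines of Python. The numeric SNR values below are illustrative, not taken from the text:

```python
import math

def noise_figure(snr_in: float, snr_out: float) -> float:
    """Noise figure F = (SNR)i / (SNR)o as a linear ratio."""
    return snr_in / snr_out

def noise_figure_db(snr_in: float, snr_out: float) -> float:
    """Noise figure expressed in decibels: 10*log10(F)."""
    return 10 * math.log10(noise_figure(snr_in, snr_out))

# Hypothetical receiver that degrades the SNR from 100 to 25:
F = noise_figure(100, 25)      # F = 4
NF = noise_figure_db(100, 25)  # about 6.02 dB
```

Since F only depends on how much the stage degrades the SNR, an ideal noiseless receiver gives F = 1 (0 dB).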
Communication systems can be classified according to:
1. Nature of the information signal
2. Unidirectional/bidirectional communication
3. Technique of transmission

1. Simplex system

A communication system is either unidirectional (simplex) or
bidirectional (duplex). In a simplex system, information flows in one
direction only.

(Figure 1.3 Basic block diagram of a full duplex system: Transmitter +
Receiver 1 and Transmitter + Receiver 2 connected through a
communication link.)
Analog Communication
Digital communication
Base-band transmission
free space.
This is because the Voice signal (in the electrical form) cannot
travel long distance in air.
Why modulation
1.4 MODULATION
Carrier signal

For example, for a carrier frequency of 20 MHz,

λ = (3 × 10⁸) / (20 × 10⁶) = 15 metres
Modulation is broadly classified into:
• Amplitude modulation (AM)
• Angle modulation: phase modulation (PM) and frequency modulation (FM)
• Pulse modulation: PAM, PWM, PPM and DM

Where,
DM – Delta modulation.
Linear modulation
Non-linear modulation
Figure 1.5 One cycle (one complete cycle of the waveform plotted
against time)
Wavelength
1.7.3 Bandwidth

Bandwidth (BW) = f2 − f1

Where,
f2 – upper frequency
f1 – lower frequency

(Figure: the bandwidth BW extends from f1 to f2 on the frequency axis.)
Frequency designation            Frequency range    Wavelength range
Extremely High Frequency (EHF)   30 – 300 GHz       1 mm – 1 cm
Super High Frequency (SHF)       3 – 30 GHz         1 – 10 cm
Ultra High Frequency (UHF)       300 MHz – 3 GHz    10 cm – 1 m
Very High Frequency (VHF)        30 – 300 MHz       1 – 10 m
High Frequency (HF)              3 – 30 MHz         10 – 100 m
Medium Frequency (MF)            300 kHz – 3 MHz    100 m – 1 km
Solved Problem

Wavelength λ = Velocity of light / Frequency = c / f

(i)   λ = (3 × 10⁸) / (850 × 10⁶) = 0.35 m

(ii)  λ = (3 × 10⁸) / (1.9 × 10⁹) = 0.158 m

(iii) λ = (3 × 10⁸) / (28 × 10⁹) = 0.0107 m
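The three cases above are all the same computation, λ = c/f; a minimal Python sketch using the book's value c = 3 × 10⁸ m/s:

```python
def wavelength(frequency_hz: float, c: float = 3e8) -> float:
    """Wavelength lambda = c / f, with c = 3e8 m/s as in the text."""
    return c / frequency_hz

# The three worked cases:
# 850 MHz -> ~0.35 m, 1.9 GHz -> ~0.158 m, 28 GHz -> ~0.0107 m
w1 = wavelength(850e6)
w2 = wavelength(1.9e9)
w3 = wavelength(28e9)
```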
1.7.5 Frequency spectrum
Definition
Let us consider,
Where,
ANALOG COMMUNICATION
VAM(t) = (Vc + Vm sin ωmt) sin ωct

       = Vc [1 + (Vm/Vc) sin ωmt] sin ωct

Where, Vm/Vc = ma = modulation index.

Modulation index is defined as the ratio of the amplitude of the
message signal to the amplitude of the carrier signal.

Modulation index ma = Vm / Vc

VAM(t) = Vc [1 + ma sin ωmt] sin ωct    ...(2)
We know that,
sin A · sin B = [cos(A − B) − cos(A + B)] / 2

Equation (2) becomes,

VAM(t) = Vc sin ωct + (maVc/2) cos(ωc − ωm)t − (maVc/2) cos(ωc + ωm)t    ...(3)

In equation (3), the carrier amplitude is Vc and each sideband has
amplitude maVc/2.
(Frequency spectrum: carrier of amplitude Vc at fc, with the LSB at
fLSB and the USB at fUSB, each of amplitude maVc/2.)
The (−) sign associated with the USB represents a phase shift of
180°. Figure 1.8 shows the frequency-domain representation.

Figure 1.8 Frequency spectrum of AM: carrier Vc at fc, sidebands of
amplitude maVc/2 at fc − fm and fc + fm, each spaced fm from the
carrier; BW = 2fm.
• First term represents the unmodulated carrier signal with the fre-
quency of fc
• Second term represents the lower sideband signal with the frequency
of ( fc- fm ).
• Third term represents the upper sideband signal with the frequency
of ( fc + fm ).
1.8.4 Bandwidth of AM

BW = fUSB − fLSB

   = (fc + fm) − (fc − fm)

   = fc + fm − fc + fm

BW = 2fm
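The time-domain AM expression above can be sampled numerically and its envelope inspected. A short Python sketch; the values Vc = 20 V, Vm = 15 V, fc = 100 kHz, fm = 5 kHz are illustrative (borrowed from solved problems later in the chapter):

```python
import math

def v_am(t, Vc=20.0, Vm=15.0, fc=100e3, fm=5e3):
    """VAM(t) = Vc[1 + ma*sin(wm*t)]*sin(wc*t), with ma = Vm/Vc."""
    ma = Vm / Vc
    return Vc * (1 + ma * math.sin(2 * math.pi * fm * t)) * math.sin(2 * math.pi * fc * t)

# Sample one modulating period (0.2 ms) at 10 MHz:
samples = [v_am(n / 10e6) for n in range(2000)]
# The largest excursion approaches Vmax = Vc + Vm = 35 V,
# and the envelope minimum is Vmin = Vc - Vm = 5 V.
```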
Figure 1.9 AM envelope: the modulating signal (swinging between +Vm and
−Vm), the carrier (between +Vc and −Vc), and the modulated signal whose
envelope follows the modulating signal.
• The shape of the envelope is identical to the shape of the modulating
signal.
Figure 1.10 Phasor representation of AM-wave: the carrier phasor Vc
with the USB and LSB phasors of amplitude maVc/2 rotating at ±ωm about
its tip.
• The phasors for carrier and the upper and lower side frequencies
combine, sometimes in phase (adding) and sometimes out of phase
(subtracting).
Modulation index

Percentage modulation = (Vm / Vc) × 100 = ma × 100

Figure 1.11 AM-envelope for calculation of modulation index, showing
Vmax = Vc + Vm and Vmin = Vc − Vm.
The graphical representation of AM-Wave is also called as time
domain representation of AM-signal.
Vmax = Vc + Vm    ...(1)

Vmin = Vc − Vm    ...(2)

Subtracting equation (2) from (1),

Vmax − Vmin = 2Vm    ...(3)

(or)

Vm = (Vmax − Vmin) / 2    ...(4)

From equations (1) and (2), Vc can be calculated as,

Vc = Vmax − Vm    ...(5)

Vc = Vmin + Vm    ...(6)

∴ Vc = (Vmax + Vmin) / 2    ...(7)

ma = Vm / Vc    ...(8)

Substituting the Vm, Vc values in equation (8),

ma = [(Vmax − Vmin)/2] / [(Vmax + Vmin)/2]

∴ ma = (Vmax − Vmin) / (Vmax + Vmin)    ...(9)

% ma = [(Vmax − Vmin) / (Vmax + Vmin)] × 100    ...(10)

The peak voltage of each sideband,

VUSB = VLSB = (Vmax − Vmin) / 4    ...(12)
Where,
• The message signal can be detected (or recovered) from the modulated
signal without any distortion by an envelope detector for under
modulation.
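Equations (9) and (12) above translate directly into code. A small sketch, using the envelope values Vmax = 35 V, Vmin = 5 V from a solved problem later in this chapter:

```python
def modulation_index(v_max: float, v_min: float) -> float:
    """ma = (Vmax - Vmin) / (Vmax + Vmin), equation (9)."""
    return (v_max - v_min) / (v_max + v_min)

def sideband_peak_voltage(v_max: float, v_min: float) -> float:
    """VUSB = VLSB = (Vmax - Vmin) / 4, equation (12)."""
    return (v_max - v_min) / 4.0

# Envelope with Vmax = 35 V, Vmin = 5 V (i.e. Vc = 20 V, Vm = 15 V):
print(modulation_index(35, 5))       # 0.75
print(sideband_peak_voltage(35, 5))  # 7.5
```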
(Figures: AM envelope and detected envelope for proper modulation, and
the distorted envelope for over modulation, with message amplitude Vm
and carrier amplitude Vc.)

Distortion due to over modulation

• When ma > 1, the envelope of the modulated wave and the message
signal are not the same. Due to this, the envelope detector provides a
distorted message signal.
The peak voltage of the upper and lower sideband frequencies,

VUSB = VLSB = maVc / 2    ...(5)

Substituting equation (5) in (4),

PUSB = PLSB = [(maVc/2)/√2]² / R

PUSB = PLSB = ma²Vc² / 8R    ...(6)
Where,

Pc = Vc²/2R = carrier power

Pt = Pc + PLSB + PUSB

   = Vc²/2R + Pc·(ma²/4) + Pc·(ma²/4)

Pt = Pc + Pc·(ma²/4) + Pc·(ma²/4)

   = Pc [1 + ma²/4 + ma²/4]

Pt = Pc [1 + ma²/2]    ...(9)

Pt/Pc = 1 + ma²/2    ...(9)a
The equation (9) shows the total power required for transmission
of AM-signal or DSBFC. The figure 1.15 shows the power spectrum for
amplitude modulation.
Figure 1.15 Power spectrum of AM: carrier power Pc = Vc²/2R at fc, with
PLSB = ma²Pc/4 and PUSB = ma²Pc/4 at fLSB and fUSB.

For 100% modulation (ma = 1),

Pt = 1.5 Pc    ...(10)
Pt/Pc − 1 = ma²/2

2(Pt/Pc − 1) = ma²

√[2(Pt/Pc − 1)] = ma    ...(11)

Modulation index ma = √[2(Pt/Pc − 1)]
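Equation (11) gives the modulation index from measured powers; a one-line Python sketch, checked against the solved problem later in this chapter (Pc = 10 kW, Pt = 11.25 kW):

```python
import math

def ma_from_powers(p_total: float, p_carrier: float) -> float:
    """ma = sqrt(2*(Pt/Pc - 1)), equation (11)."""
    return math.sqrt(2 * (p_total / p_carrier - 1))

# Pc = 10 kW, Pt = 11.25 kW:
print(ma_from_powers(11.25e3, 10e3))  # 0.5
```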
We know that,

Pt = It²·R and Pc = Ic²·R

Where,
It = total current of the AM wave
Ic = carrier current

It²·R = Ic²·R [1 + ma²/2]

It² = Ic² [1 + ma²/2]

It = Ic √(1 + ma²/2)    ...(3)

Equation (3) gives the total transmission current for AM-DSBFC.
We know that,

It = Ic √(1 + ma²/2)    ...(1)

It/Ic = √(1 + ma²/2)

(It/Ic)² = 1 + ma²/2

(It/Ic)² − 1 = ma²/2

2[(It²/Ic²) − 1] = ma²

ma = √{2[(It²/Ic²) − 1]}    ...(2)
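Equation (2) can likewise be sketched in code. Using the antenna currents from solved problem 21 later in this chapter (It = 14 A, Ic = 12.5 A), it reproduces ma ≈ 0.71:

```python
import math

def ma_from_currents(i_total: float, i_carrier: float) -> float:
    """ma = sqrt(2*((It/Ic)**2 - 1)), equation (2)."""
    return math.sqrt(2 * ((i_total / i_carrier) ** 2 - 1))

# It = 14 A, Ic = 12.5 A  ->  ma is about 0.71
m = ma_from_currents(14, 12.5)
```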
Transmission efficiency is the ratio of the useful (sideband) power to
the total transmitted power:

%η = [(PUSB + PLSB) / Pt] × 100

   = [ma²Pc/4 + ma²Pc/4] / [Pc (1 + ma²/2)] × 100

   = (ma²/2) / (1 + ma²/2) × 100

   = [(ma²/2) / ((2 + ma²)/2)] × 100 = [ma² / (2 + ma²)] × 100

%η = [ma² / (2 + ma²)] × 100
For critical modulation, ma = 1, and the transmission efficiency
becomes,

%η = [1² / (2 + 1²)] × 100 = (1/3) × 100

%η = 33.3 %
The maximum transmission efficiency of AM-DSBFC is 33.3%. This means
that only one-third of the total power is carried by the sidebands; the
remaining two-thirds is wasted in the carrier.
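The efficiency formula is easy to evaluate for other modulation depths; a minimal sketch:

```python
def transmission_efficiency(ma: float) -> float:
    """%eta = ma**2 / (2 + ma**2) * 100 for AM-DSBFC."""
    return ma * ma / (2 + ma * ma) * 100

# ma = 1   -> 33.3 % (the maximum)
# ma = 0.5 -> 11.1 % (efficiency drops quickly below full modulation)
eta_full = transmission_efficiency(1.0)
eta_half = transmission_efficiency(0.5)
```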
(∴Carrier is suppressed)
Sideband powers
The upper and lower sidebands have equal power, given by

PUSB = PLSB = (VUSB/√2)² / R    ...(1)
The peak voltage of the upper and lower sideband frequencies are,
VUSB = VLSB = maVc / 2
∴ Equation (1) becomes

PUSB = PLSB = [(maVc/2)/√2]² / R

            = ma²Vc² / 8R    ...(2)

Pt = PLSB + PUSB

   = ma²Vc²/8R + ma²Vc²/8R = ma²Vc²/4R

Pt = (ma²/2)·(Vc²/2R),   where Pc = Vc²/2R

Pt = Pc·(ma²/2)    ...(3)
For critical modulation, ma = 1

Pt = Pc (1²/2)

Pt = 0.5 Pc    ...(4)
%η = [Pc·(ma²/2)] / [Pc·(ma²/2)] × 100

%η = 1 × 100 = 100%
Sl. No  Parameter                DSBFC           DSBSC           SSB
2       Sideband suppression     Not applicable  Not suppressed  One sideband suppressed
5       Number of modulating     1               1               1
        inputs
6       Complexity               Simple          Simple          Complex
7       Power requirement to     High            Medium          Very small
        cover a given area
8       Application              Radio           Radio           Point-to-point
                                 broadcasting    communication   communication
Solution
BW = 2fm = 2(4 kHz) = 8 kHz
Solution
Given data

Pc = 10 kW
ma = 0.5

The total RF power delivered Pt = Pc (1 + ma²/2)

   = 10 kW (1 + 0.5²/2)

Pt = 11.25 kW
Calculate
Solution
We get,
Vc =500 volts
ma =0.4
ωm = 3140

ωc = 6.28 × 10⁷

(i) ωm = 2πfm = 3140

fm = 3140 / 2π = 499.7

fm ≈ 500 Hz

(ii) ωc = 2πfc = 6.28 × 10⁷

fc = 6.28 × 10⁷ / 2π

fc = 0.999 × 10⁷ Hz

fc ≈ 10 MHz
(v) Total power Pt = Pc (1 + ma²/2)

   = 208.3 (1 + 0.4²/2)

   = 224.9

Pt ≈ 225 watts
Solution
Given data:

fc = 100 kHz
fm = 5 kHz

(1) fUSB = fc + fm = 105 kHz

(2) fLSB = fc − fm = 95 kHz

(3) Bandwidth BW = 2fm = 2(5 kHz) = 10 kHz
Solution
Given data

fUSB = fc + fm = 500 k + 10 k = 510 kHz

fLSB = fc − fm = 500 k − 10 k = 490 kHz
(ii) Modulation index ma = Vm / Vc = 15/20 = 0.75

(iii) Amplitude of each sideband = maVc/2 = (0.75 × 20)/2 = ± 7.5 V
Vmax = Vc + Vm
= 20 + 15
= 35 V
Vmin = Vc - Vm
= 20 - 15
= 5 V
Solution
Given data
Modulation index m1 = Vm1/Vc = 2/10 = 0.2

Modulation index m2 = Vm2/Vc = 3/10 = 0.3

Total modulation index ma = √(m1² + m2²)

   = √(0.04 + 0.09) = √0.13

ma = 0.36
7. An AM-signal has the equation V(t)=[15 + 4 sin 44 x 103 t] (sin 46.5 x
106 t)V. Find
(i)carrier frequency (ii)modulating frequency (iii)modulation index
(iv)sketch the signal in the time domain, showing voltage and time
scales.(v) Peak voltage of unmodulated carrier.
Solution

Vc = 15 V

ma = Vm/Vc = 4/15 = 0.267

ωm = 44 × 10³ rad/s

ωc = 46.5 × 10⁶ rad/s

2πfc = 46.5 × 10⁶

fc = 46.5 × 10⁶ / 2π

fc = 7.4 × 10⁶ Hz

2πfm = 44 × 10³

fm = 44 × 10³ / 2π

fm = 7 × 10³ Hz

Peak voltage of the unmodulated carrier, Vc = 15 V

(Frequency spectrum: LSB and USB at fLSB and fUSB on either side of the
carrier at fc.)
Solution
Given Data
PTSB = PUSB + PLSB

     = ma²Pc/4 + ma²Pc/4

     = ma²Pc/2

     = (0.2²/2) × 1000

PTSB = 20 watts

(iii) The total power required for transmission of the AM-wave,

Pt = Pc (1 + ma²/2)

Pt = 1000 (1 + 0.2²/2)

Pt = 1,020 watts

(iv) Carrier power from the modulated power,

Pt = Pc (1 + ma²/2)

Pc = Pt / (1 + ma²/2)

   = 1020 / (1 + 0.2²/2)

Pc = 1000 watts
Solution
Given data

Pc = 200 W
ma = 0.75

Pt = Pc (1 + ma²/2) = 200 (1 + 0.75²/2)

Pt = 256.25 watts
Solution
Given
Vc =18V
RL =72Ω
Solution
Given
Pt = 20 kW
ma = 0.6

Carrier power Pc = Pt / (1 + ma²/2)

   = 20 × 10³ / (1 + 0.6²/2)

   = 16.94 × 10³

Pc ≈ 17 kW
(ii) Bandwidth

BW = 2fm = 2 × 5 kHz = 10 kHz

(Voltage spectrum: LSB and USB around fc, BW = 10 kHz.)

Pc = 6 W

(ii) PUSB = PLSB = ma²Pc/4 = (1² × 6)/4 = 1.5 watts

(iii) Total modulated power,

Pt = Pc (1 + ma²/2) = 6 (1 + 1/2)

Pt = 9 watts

Figure 1.18 Power spectrum: carrier 6 W at fc with 1.5 W in each
sideband at fLSB and fUSB.
Solution

Carrier amplitude Vc = 40 V

Amplitude of each sideband = maVc/2 = (0.5 × 40)/2 = 10 volts

(vi) Bandwidth BW = 2fm = 2 × 1 kHz = 2 kHz

(vii) Spectrum: carrier 40 V at fc with 10 V sidebands on either side.
(Figure: Phase-shift method of SSB generation — the modulating (audio)
signal is amplified and split; one path feeds a balanced modulator
directly with the carrier from the carrier source, the other passes
through a 90° audio phase shifter to balanced modulator M2 together
with the 90°-shifted carrier. The two modulator outputs are added to
produce the SSB signal.)
Advantages
Disadvantages
2. The phase-shifting network must provide an exact 90° phase shift.

This method can be used to generate SSB at any frequency and thus can
use low audio frequencies.

However, the manner in which the voltages are fed to balanced
modulators 3 and 4 differs from the phase-shift method.
(Figure: Third method of SSB generation — balanced modulator 1
produces DSBSC-AM, which is filtered by low-pass filter 1 and applied
to balanced modulator 3.)
Power saving = (Pt − Pt'') / Pt

= [(1 + ma²/2)·Pc − (ma²/4)·Pc] / [(1 + ma²/2)·Pc]

= [1 + ma²/2 − ma²/4] / [1 + ma²/2]

= [1 + ma²/4] / [1 + ma²/2]

= [(4 + ma²)/4] / [(2 + ma²)/2]

= (4 + ma²) / [2(2 + ma²)] = (4 + ma²) / (4 + 2ma²)

If ma = 1, the power saving in SSB-SC-AM is 5/6 = 83.33%

If ma = 0.5, the power saving in SSB-SC-AM is 94.44%
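The final ratio above is easy to check numerically for both quoted cases; a minimal sketch:

```python
def ssb_power_saving(ma: float) -> float:
    """Fractional power saving of SSB-SC over DSBFC:
    (4 + ma**2) / (4 + 2*ma**2)."""
    return (4 + ma ** 2) / (4 + 2 * ma ** 2)

print(round(ssb_power_saving(1.0) * 100, 2))  # 83.33
print(round(ssb_power_saving(0.5) * 100, 2))  # 94.44
```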
1.9.7 Advantages of SSB over DSBFC
(i) Less Bandwidth is required
(ii) More power saving, at 100% modulation, power saving is 83.33%
(iii) Reduced noise interference due to reduced bandwidth.
1.9.8 Disadvantages of SSB

(i) The generation and reception of SSB signals is complicated.
Applications of SSB
(i) Where power saving and low bandwidth requirement are important.
(ii) Used in : land and air mobile communication, navigation and
amateur radio.
1.10 AM – TRANSMITTERS
There are two types of AM transmitters,
Low level transmitter
High level transmitter
(Figure: Low-level AM transmitter — the modulating signal source feeds
a pre-amplifier, modulating-signal driver and bandpass filter into the
modulator; the modulated signal then passes through linear intermediate
and final power amplifiers, a bandpass filter and a coupling network.)

(Figure: High-level AM transmitter — the modulating-signal chain
(pre-amplifier, modulating-signal driver, modulating-signal power
amplifier) and the carrier chain (RF carrier oscillator, buffer
amplifier, carrier driver, carrier power amplifier) feed the AM
modulator and output power amplifier, which drives the antenna.)
Sl. No  Parameter         High-level transmitter        Low-level transmitter
1       Power level       Modulation takes place at a   Modulation takes place at a
                          high power level              low power level
2       Types of          Highly efficient class C      Linear amplifiers (A, AB, B)
        amplifiers        amplifiers
5       Design of AF      Complex, due to the very      Easy, as it is to be done at
        amplifier         high power (high-level        low power (low-level
                          modulation)                   modulation)
1.11.1Introduction
The problems in the TRF receiver are:

It is suitable only for single-tuned circuits and low-frequency
applications.
Figure 1.25 Block diagram of the superheterodyne receiver: the
receiving antenna picks up the E.M. radiation and feeds the RF stage
(tuned to fs); the mixer combines fs with the local oscillator output
f0 to produce the IF (f0 − fs), which is amplified by the IF amplifier,
detected, and passed to the audio (AF) and power amplifiers. AGC is fed
back from the detector, and the RF stage, mixer and local oscillator
are gang-tuned.
1.11.4 Operations
(i). Antenna Receiver
The DSBFC or AM signal transmitted by the transmitter travels
through the air and reaches the receiving antenna.
This signal is in the form of electromagnetic waves. It induces a
very small voltage (few μV) into the receiving antenna.
(ii) RF stage

The RF stage is an amplifier which is used to select and amplify the
wanted signal frequency.
1.12.1 Sensitivity

(Figure: Receiver sensitivity in μV plotted against frequency from
600 kHz to 1600 kHz; across the band the sensitivity varies between
about 10 μV (highest sensitivity) and 16 μV (lowest sensitivity), e.g.
around 1000 kHz.)
• The higher the selectivity, the better is the adjacent channel
rejection and the less is the adjacent channel interference.
(Figure: Selectivity curve — attenuation in dB versus deviation from
the resonant frequency for a receiver tuned at 950 kHz; attenuation is
minimum (0 dB) at the tuned frequency and increases as we go away from
it, reaching about 100 dB at ±40 kHz.)
(Figure: Fidelity — this is basically the frequency response of the AF
amplifier, plotted from 50 Hz to 10 kHz.)
1. Radio broadcasting.
2. TV sound transmission.
4. Cellular radio.
5. Microwave communication.
6. Satellite communication.
Frequency modulation
Phase Modulation
Where,
θ(t) = instantaneous phase deviation (radians)

θ(t) ∝ Vm(t)

(Figure: frequency deviation ±Δf about the carrier frequency fc.)

Instantaneous phase deviation = θ(t)

Instantaneous frequency deviation = θ′(t)

For a sinusoidal modulating signal, the phase deviation of the FM wave
is

θ(t) = kf·(Vm/ωm) sin ωmt

     = [kf·Vm / (2πfm)] sin ωmt

     = (Δf/fm) sin ωmt,   where kf·Vm/2π = Δf
(Figure: FM waveform — the carrier (swinging between +Vc and −Vc) is at
its maximum frequency at the positive peak of the modulating signal and
at its minimum frequency at the negative peak; the maximum deviation
occurs at the modulating-signal peaks.)
1. Radio broadcasting.
3. Satellite Communication.
4. Police wireless.
6. Ambulances.
7. Taxicabs.
Modulation index in PM

mp = Kp·Vm rad

Modulation index in FM

mf = Kf·(Vm/ωm)  (unit-less)

Thus the modulation index

mf = Kf·Vm/ωm = Kf·Vm/(2πfm)

Here, Kf·Vm/2π = Δf is called the frequency deviation. It is denoted by
Δf (or) 'd', and its unit is Hz.

Modulation index in FM

mf = Δf/fm (or) d/fm

   = Maximum frequency deviation / Modulating frequency
Percentage Modulation
• Both FM and PM waveforms are identical except for the phase shift.

(In the series expansion of the FM wave, the first term is the carrier
and the second term is the pair of first side bands.)
1. The FM-wave consists of carrier. The first term in the above equation
represents the carrier.
4. For example J1(mf) denotes the value of J1 for the particular value of
mf written inside the bracket.
5. To solve for the amplitude of the side frequencies Jn(mf) is given by,
Jn(mf) = (mf/2)ⁿ [ 1/n! − (mf/2)²/(1!(n+1)!) + (mf/2)⁴/(2!(n+2)!) − … ]    ...(4)
Where,
! = Factorial (1 x 2 x 3 x 4....)
mf = modulation index.
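The series in equation (4) can be summed directly in Python and checked against Table 1.2 (for mf = 2: J0 ≈ 0.22, J1 ≈ 0.58, J2 ≈ 0.35):

```python
import math

def bessel_j(n: int, x: float, terms: int = 25) -> float:
    """Jn(x) via the series of equation (4):
    (x/2)^n * sum_k (-1)^k (x/2)^(2k) / (k! * (n+k)!)."""
    s = 0.0
    for k in range(terms):
        s += (-1) ** k * (x / 2) ** (2 * k) / (math.factorial(k) * math.factorial(n + k))
    return (x / 2) ** n * s

print(round(bessel_j(0, 2), 2), round(bessel_j(1, 2), 2), round(bessel_j(2, 2), 2))
# prints: 0.22 0.58 0.35
```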
FM frequency spectrum: the carrier J0(mf)·Vc at fc is flanked by
side-frequency pairs of amplitude J1(mf)·Vc, J2(mf)·Vc, …, so the
theoretical bandwidth is infinite.
(Figure: Bessel functions of the first kind, Jn(mf), plotted against
the modulation index mf from 0 to 12 for n = 0 to 8; the values range
from about −0.4 to 0.9.)
mf    J0    J1    J2    J3    J4    J5    J6    J7    J8    J9   J10   J11   J12   J13   J14
1.5 0.51 0.56 0.23 0.06 0.01 ... ... ... ... ... ... ... ... ... ...
2.0 0.22 0.58 0.35 0.13 0.03 ... ... ... ... ... ... ... ... ...
2.4 0 0.52 0.43 0.20 0.06 0.02 ... ... ... ... ... ... ... ... ...
2.5 -0.05 0.50 0.45 0.22 0.07 0.02 0.01 ... ... ... ... ... ... ... ...
3.0 -0.26 0.34 0.49 0.31 0.13 0.04 0.01 ... ... ... ... ... ... ... ...
4.0 -0.40 -0.07 0.36 0.43 0.28 0.13 0.05 0.02 ... ... ... ... ... ... ...
5.0 -0.18 -0.33 0.05 0.36 0.39 0.26 0.13 0.05 0.02 ... ... ... ... ... ...
5.45 0 -0.34 -0.12 0.26 0.40 0.32 0.19 0.09 0.03 0.01 ... ... ... ... ...
6.0 0.15 -0.28 -0.24 0.11 0.36 0.36 0.25 0.13 0.06 0.02 ... ... ... ... ...
7.0 0.30 0.00 -0.30 -0.17 0.16 0.35 0.34 0.23 0.13 0.06 0.02 ... ... ... ...
8.0 0.17 0.23 -0.11 -0.29 -0.10 0.19 0.34 0.32 0.22 0.13 0.06 0.03 ... ... ...
8.65 0 0.27 0.06 -0.24 -0.23 0.03 0.26 0.34 0.28 0.18 0.10 0.05 0.02 ... ...
9.0 -0.09 0.25 0.14 -0.18 -0.27 -0.06 0.20 0.33 0.31 0.21 0.12 0.06 0.03 0.01 ...
10.0 -0.25 0.05 0.25 0.06 -0.22 -0.23 -0.01 0.22 0.32 0.29 0.21 0.12 0.06 0.03 0.01
Table 1.2 Bessel functions of the first kind Jn(m)
Note that the sidebands are spaced from the carrier fc and from
each other by a frequency equal to modulating signal frequency fm.
Problem 1
Determine
Given
Solution
Vm = 2V and
Fm =2000 Hz = 2kHz
Δf = Kf·Vm

   = (5 kHz/V) × 2 V

Δf = 10 kHz

Pt = (Vc/√2)² / R

Pt = Vc² / 2R
1.13.10 Bandwidth Requirement of Angle Modulation
• Theoretically, the BW of angle modulated wave is infinite. But practi-
cally it is calculated based on how many sidebands have significant
amplitude.
• The BW of angle modulation depends on the modulation index.
• Angle modulated waveforms are generally classified as either low,
medium or high modulation Index.
m<1 - called low modulation index
m = 1 to 10 - called medium modulation index
m>10 - called high modulation index.
Note: If m < 10, the system is called narrow-band FM; otherwise it is
wide-band FM.
For low-index modulation, the frequency spectrum resembles
double-sideband AM and the minimum bandwidth is approximated by,
BW = 2 fm, (Hz) ... (1)
For higher-index modulation, the bandwidth is given by Carson's rule:

BW = 2(Δf + fm) Hz

We know that Δf = K1·Vm

mf = K1·Vm / fm = Δf / fm

∴ fm = Δf / mf

BW = 2(Δf + Δf/mf)

BW = 2Δf (1 + 1/mf) Hz
Note: This carson’s rule gives correct results if the modulations index is
greater than 6.
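Carson's rule is a one-liner in code; checked against commercial FM broadcast values (Δf = 75 kHz, fm = 15 kHz):

```python
def carson_bandwidth(delta_f: float, fm: float) -> float:
    """Carson's rule: BW is approximately 2*(delta_f + fm) Hz."""
    return 2 * (delta_f + fm)

# Commercial FM broadcast: delta_f = 75 kHz, fm = 15 kHz
print(carson_bandwidth(75e3, 15e3))  # 180000.0
```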
Advantages of FM
1. Improved noise immunity.
2. Low power is required to be transmitted to obtain the same quality
of received signal at the receiver.
3. Covers a large area with the same amount of transmitted power
4. Transmitted power remains constant.
5. All the transmitted power is useful.
Disadvantages of FM
Very large bandwidth is required.
Since the space wave propagation is used, the radius of transmis-
sion is limited by the line of sight.
Given
Solution
(a) Modulation index mf = Δf/fm = 20/10

mf = 2
B = 2 (n x fm )
= 2 (4 x 10 kHz)
= 2 (40 kHz)
B = 80 kHz
B = 2(Df + fm)Hz
B = 60 kHz
J0(2) = 0.22
J1(2) = 0.58
J2(2) = 0.35
J3(2) =0.13
mf =2
(Frequency spectrum: carrier 2.2 V at fc with side-frequency pairs of
5.8 V, 3.5 V and 1.3 V on either side, spaced fm apart.)
Problem 2
Determine,
Given
Solution
a. Deviation ratio (DR) = Δf(max) / fm(max)

   = 75 kHz / 15 kHz

DR = 5
B =2(n x fm) Hz
=2(8 x 15,000)
=2(120000)
=240000Hz (or)
B =240 kHz
37.5 kHz
=
7.5 kHz
mf =5
Bandwidth B = 2(n x fm)Hz
=2(8 x 7500)
B =120 kHz
Problem 3
Given
Df(max) = 25 kHz
Solution
Actual frequency deviation
% Modulation =
Maximum allowed deviation
Df(actual)
=
Df(max)
10kHz
=
25kHz
% modulation = 40 %
Problem 4
Given
Df(max) =75kHz
Fm(max) =10kHz
Mp =5 rad
Solution
a. Deviation ratio (DR) = Δf(max) / fm(max)

   = 75 kHz / 10 kHz

DR = 7.5
=2 [75 +10]
B =170 kHz
Problem 5
For an FM modulator with a modulation index mf = 1, a
modulating Vm(t) =Vm sin (2p x 1000t), and an unmodulated carrier
Vc(t)=15 sin (2p x 500t) determine
a. Number of sets of significant side frequencies,
b. Their amplitudes,
c. Draw the frequency spectrum showing their relative amplitude.
Given
Modulation index, mf =1
Modulating signal Vm(t) =Vm sin (2p x 1000t)
Carrier signal Vc(t) =15 sin (2px 500t)
Solution
a. A modulation index mf = 1 means there are three sets of significant
side frequencies.
b. The relative amplitudes of the carrier and side frequencies are
J0 = J0(mf) x Vc =J0(1) x 15 = 0.77 x 15=11.55 V
J1 =J1(1) x 15 = 0.44 x 15 = 6.6 V
J2 =J2(1) x 15 = 0.11 x 15 =1.65 V
J3 =J3(1) x 15 =0.02 x 15 = 0.3 V
(Frequency spectrum: carrier 11.55 V at fc with side-frequency pairs of
6.6 V, 1.65 V and 0.3 V on either side.)
Problem 6
a. Modulating frequency
b. Carrier frequency,
d. Frequency deviation.
Given
Solution
a. Modulating frequency fm = ωm/2π

   = 3 × 10⁴ / 2π

   = 4.77 kHz
wc
b. (b)Carrier frequency, fc =
2p
8 x 106
=
2p
=1.27 MHz
c. Modulation index mf = 6

d. Frequency deviation Δf = mf × fm = 6 × 4.77

Δf = 28.62 kHz
1.14 COMPARISONS OF VARIOUS ANALOG COMMUNICATION SYSTEMS
Sl. No  Parameter       AM                         FM                         PM

        Output voltage  VAM(t) = Vc(1 + ma sin     VFM(t) = Vc sin(ωct +      VPM(t) = Vc sin(ωct +
                        ωmt) sin ωct               mf sin ωmt)                mp sin ωmt)

7       Noise inter-    More                       Minimum                    Less than AM, but more
        ference                                                               than FM

8       Depth of        Has a limitation; cannot   Has no limitation; it is   Remains the same if the
        modulation      be increased above 1       inversely proportional to  modulating frequency is
                                                   the modulating frequency   changed

9       Fidelity        Poor, due to narrow        Better, due to wide        Better, due to wide
                        bandwidth                  bandwidth                  bandwidth

10      Adjacent        Present                    Avoided due to the wide    Avoided
        channel                                    frequency spectrum
        interference

11      Signal-to-      Less                       Better than that of PM     Inferior to that in FM
        noise ratio

12      Equipment       Transmission and           Transmission and           More complex
        used            reception equipment are    reception equipment are
                        simple                     more complex
(Output waveforms of AM, FM and PM for the same modulating signal, and
the AM spectrum: carrier Ec at fc with sidebands of maEc/2 at fc − fm
and fc + fm.)
Solution:
Here Em= 15V
Ec = 20V
Modulation index, m = Em / Ec = 15/20 = 0.75
Percentage modulation = m * 100
= 75%
5. Define detection (or) demodulation.
20.
A 107.6 MHz carrier is frequency modulated by a 7 kHz sine wave.
The resultant FM signal has a frequency deviation of 50 kHz.
Determine the modulation index of the FM wave.
Here δ = 50 kHz and ƒm = 7 kHz.
Modulation index = δ/ƒm = 50/7 = 7.142
21.
If the rms value of the aerial current before modulation is
12.5 A and during modulation is 14 A, calculate the percentage of
modulation employed, assuming no distortion.
Here Itotal = 14 A and Ic = 12.5 A.

m = √[2(I²total/I²c − 1)] = √[2(14²/12.5² − 1)] = 0.71
22.
An AM broadcast transmitter radiates 9.5 KW of power with
the carrier unmodulated and 10.925 KW when it is sinusoi-
dally modulated. Calculate the modulation index.
Ptotal = 10.925 KW, Pc = 9.5 KW
m = √[2(Ptotal/Pc − 1)]

m = √[2(10.925/9.5 − 1)] = 0.54

Ptotal = Pc (1 + m²/2)

5 kW = Pc (1 + 0.6²/2)

Pc = 4.237 kW.
25.
The antenna current of an AM transmitter is 8 A when only
carrier is sent, but it increases to 8.96 A when the carrier
is modulated by a single tone sinusoid. Find the percentage
modulation.
Here Itotal = 8.96 A and Ic = 8 A.

Itotal = Ic √(1 + m²/2)

8.96 = 8 √(1 + m²/2)

m = 0.713
Emax = 20 + 5 = 25 V
Emin = 20 – 5 = 15 V
Emax - Emin
Modulation index =
Emax + Emin
25-15
= = 0.25
25 +15
Here δ = Δƒ = 75 kHz
And ƒm (max)) = W = 15 kHz
BW = 2(δ + ƒm (max))
= 2[75+15] kHz = 180 kHz
BW = 2 (Df + fm) Hz
Where,
Df = Peak frequency deviation (Hz)
fm = Modulating-signal frequency (Hz)
Sl. No  FM                                  PM
1       VFM(t) = Vc cos(ωct + mf sin ωmt)   VPM(t) = Vc cos(ωct + mp cos ωmt)
2       Associated with the change in fc,   Associated with the change in
        there is some change in phase       phase, there is some change in fc
(Spectrum of the FM signal: carrier J0(mf)·Vc with side-frequency pairs
J1(mf)·Vc, J2(mf)·Vc, …; the bandwidth is theoretically infinite.)
Pt = Pc (1 + ma²/2)

   = 10 × 10³ (1 + 0.5²/2)

Pt = 11.25 kW
REVIEW QUESTIONS
PART - A
1. What is meant by noise?
2. What are the types of noise?
3. Define shot noise.
4. What is flicker noise?
5. Define Amplitude modulation.
6. Differentiate between narrow band and wide band FM signal
7. What is demodulation?
8. Draw the spectrum of FM signal.
9. State Shannon’s Limit for channel capacity theorem. Give an
example.
10. Define Bandwidth efficiency.
11. Distinguish between FM and PM.
12. What is the bandwidth needed to transmit a 4 kHz voice signal using AM?
13. Define modulation and modulation index.
14. What is the purpose of limiter in FM receiver?
15. What is modulation index and percentage modulation in AM?
16. Draw the frequency spectrum and mention the band-width of AM
signal.
17. In an AM transmitter, the carrier power is 10kw and the modulation
index is 0.5. Calculate the total RF power delivered.
18. For an AM DSBFC modulator with a carrier frequency of 100KHz
and maximum modulating signal frequency of 5 KHz, determine up-
per and lower side band frequency and the bandwidth.
19. State Carson's rule.
20. In an amplitude modulation system, the carrier frequency is fc = 100
kHz. The maximum frequency of the signal is 5 kHz. Determine the
lower and upper side bands and the bandwidth of the AM signal.
Unit 2

DIGITAL COMMUNICATION
2.1 INTRODUCTION
communication system
Ease of processing.
Ease of multiplexing.
Definition
(Figure: Digital communication system — the digital input passes
through a terminal interface and ADC, over the physical transmission
medium, then through a DAC and terminal interface to the digital
output; an analog input/output can be accommodated at either end.)

(Figure: Digital radio system — at the transmitter the input data is
precoded and modulated with the analog carrier, band-pass filtered and
power amplified before the channel (where noise is added); at the
receiver a BPF and amplifier feed the demodulator and decoder, assisted
by carrier and clock recovery, to deliver the output data.)
Receiver
Bit rate: Bit rate is the number of bits transmitted in one second. It is
expressed in bits per second (bps).
Hartley’s law

It is expressed as,

I ∝ B × t    ...(1)

Where,
I = Information capacity (bits/sec)
B = Bandwidth (Hertz)
t = Transmission time (seconds)
(2) Bandwidth
I = 3.32 B log10 (1 + S/N) bits/sec

Where,
B = Bandwidth (Hertz)
S/N = Signal-to-noise power ratio
Now consider that some white Gaussian noise is present hence (S/N)
is not infinite.
This will reduce the value of (S/N) with increase in B, assuming the
signal power ‘S’ to be constant.
Thus we conclude that an ideal system with infinite bandwidth has a
finite channel capacity. It is denoted by 'I∞' and is given by,

I∞ = 1.44 (S/N0)    ...(2)
Shannon’s information Rate
Solved Problems
B = 5 MHz
I = 30 Mbits/sec

Solution

I = 3.32 B log10(1 + S/N)

30 × 10⁶ / (3.32 × 5 × 10⁶) = log10(1 + S/N)

1.807 = log10(1 + S/N)

antilog(1.807) = 1 + S/N

64 = 1 + S/N

S/N = 63

SNR = 63
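The Shannon formula and its inverse are straightforward to code; the sketch below reproduces the solved problem (B = 5 MHz, I = 30 Mbps gives S/N = 2⁶ − 1 = 63):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr: float) -> float:
    """I = B * log2(1 + S/N) bits/sec
    (equivalently 3.32 * B * log10(1 + S/N))."""
    return bandwidth_hz * math.log2(1 + snr)

def required_snr(bandwidth_hz: float, rate_bps: float) -> float:
    """Invert the formula: S/N = 2**(I/B) - 1."""
    return 2 ** (rate_bps / bandwidth_hz) - 1

# Solved problem: B = 5 MHz, I = 30 Mbps
print(required_snr(5e6, 30e6))  # 63.0
```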
Solution
= 23,290 bits/sec.
Rmax = 2N.N
23,290= N.2N
= N2 log(2)
4.36 =0.3 x N2
4.36
= N2
0.3
14.53 = N2
∴ N = 3.824
N = log2 M    ...(1)

Where,
N = number of bits
M = number of output conditions

2^N = M    ...(2)

For example, with N = 2 bits, M = 2² = 4 output conditions are
possible.
2.7.1 Introduction
Need of modulation
The modem will modulate the digital data signal from the
DTE(computer) into an analog signal. This analog signal is then
transmitted on the telephone lines.
Advantages
Disadvantages
VASK(t) = [1 + Vm(t)]·(Ac/2)·cos ωct    ...(2)
Where,
Case 1
VASK(t) = Ac cos ωct    ...(3)

Case 2

VASK(t) = 0    ...(4)
Thus, the VASK(t) is either Ac cos ωct (or) 0. Hence the carrier is ei-
ther ‘ON or ‘OFF’. Therefore ASK is also called ON-OFF keying.
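The ON-OFF behaviour is easy to reproduce numerically. A minimal sketch; the choice of 4 carrier cycles per bit and 40 samples per bit is an assumption for illustration, not from the text:

```python
import math

def ask_waveform(bits, Ac=1.0, fc=4.0, samples_per_bit=40):
    """On-off keying: carrier Ac*cos(wc*t) for '1', zero for '0'.
    fc is in cycles per bit interval here, to keep it dimensionless."""
    out = []
    for i, b in enumerate(bits):
        for n in range(samples_per_bit):
            t = i + n / samples_per_bit  # time measured in bit intervals
            out.append(b * Ac * math.cos(2 * math.pi * fc * t))
    return out

sig = ask_waveform([1, 0, 1])
# the middle bit interval is silent; the other two carry the carrier
```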
2.8.3 ASK-generator
(The ASK generator multiplies the binary data with the carrier
Ac cos ωct, so the output is either Ac cos ωct or 0.)

Figure 2.7 ASK demodulator circuit: the received signal is multiplied
by the carrier Ac cos ωct, integrated over a bit interval, and applied
to a threshold decision-making device to recover the data.
Case 1

(Ac cos ωct)·(Ac cos ωct) = Ac²/2 + (Ac²/2) cos 2ωct    ...(2)
Case 2
The bit interval is the time required to send one single bit. It is the
reciprocal of the bit rate.
Bit rate
Bit rate is the number of bits transmitted (or) sent in one second.
It is expressed in bits per second (bps).
Baud rate

Baud rate = fb / N    ...(1)

Where, N = number of bits per symbol = 1

BW = (fc + fb/2) − (fc − fb/2)

   = fb

BW = fb
Advantages
Disadvantages
Definition
Where,
From equation (1) it can be seen that the peak shift in carrier
frequency (fc) is proportional to the amplitude of binary input signal Vm(t).
Case 1
Case 2
(Figure: FSK spectrum — the space frequency fs lies at −Δf and the mark
frequency fm at +Δf about the carrier fc.)

(Figure: For the binary input 1 0 1 0 1, the carrier alternates between
the mark frequency fm for logic 1 and the space frequency fs for
logic 0.)
(Figure: The NRZ binary input is applied to a VCO, which acts as the
FSK modulator and produces the FSK output.)
The VCO act as a FSK – generator , the input binary data is given
as control input of VCO.
If binary input is not applied (i.e) there is no input signal the VCO
generates the centre frequency equal to carrier frequency.
We conclude that the VCO output frequency changes back and forth
between space and mark frequencies.
2.9.5 FSK-detection
Figure 2.11 Block diagram of the noncoherent FSK demodulator: the FSK
input is power-split into two band-pass filters tuned to the space
frequency fs and the mark frequency fm; each filter output drives an
envelope detector, and a comparator decides which envelope is larger to
recover the output (original) data.
The respective filter passes only the mark (or) only the space
frequency on to its respective envelope detector.
The envelope detectors, in turn indicate the total power in each pass
band, and the comparator responds to the largest of the two powers.
(Figure 2.12 Coherent FSK demodulator: the FSK input is power-split and
multiplied by two carrier references; the multiplier outputs are passed
through low-pass filters, whose outputs are applied to a comparator
that produces the output (original) data.)

The most common circuit used for demodulation of BFSK is the
phase-locked loop (PLL), which is shown in Figure 2.13. The phase
comparator compares both inputs and produces the error voltage.
Bit rate
Bit rate is the number of bits are transmitted (or) sent in one
second .It is expressed in bits per second (bps).
Bit rate = 1 / Bit interval = 1 / Tb = fb
Baud rate
Baud rate = fb / N
For, BFSK we use one bit ( 0 (or) 1 ) to represent one symbol.
Therefore, the rate of change of FSK waveform( baud) is same as the rate
of change of binary input (bps),thus bit rate equals the baud rate.
Baud rate = fb / 1 = fb

Where, N → number of bits = 1
Where,

fa = fb/2 = fundamental frequency of the binary input

BW = (fc + fb/2) − (fc − fb/2)

   = fc + fb/2 − fc + fb/2

BW = fb    ...(2)
With CP-FSK, the mark and space frequencies are selected such that they are separated from the centre frequency by an exact multiple of one-half the bit rate (fm and fs = n(fb/2), where n is any integer). This ensures a smooth phase transition in the analog output signal when it changes from the mark to the space frequency, or vice versa.
Binary PSK

M = 2^N   ...(1)
• As the input digital signal changes state (i.e., from '1' to '0' or from '0' to '1'), the phase of the output carrier shifts between two angles that are separated by 180°.
• Hence, other names for BPSK are "phase reversal keying" (PRK) and "biphase modulation".
The binary data signal (0s and 1s) is converted from a unipolar signal into an NRZ (non-return-to-zero) bipolar signal by a level converter.
[Figure: BPSK modulator — the reference carrier oscillator (sin ωct) feeds the balanced modulator through a buffer.]
The operation is explained with the assumption that the diodes act as perfect switches and that they are switched ON and OFF by the digital data signal.
With the polarity shown, the carrier voltage developed across transformer T2 is in phase with the carrier voltage across T1. Hence, the output signal is in phase with the carrier input signal.
With the opposite polarity, the carrier voltage developed across transformer T2 is out of phase with the carrier voltage across T1. Hence, the output signal is out of phase with the carrier input signal.
Output Waveforms
Truth table

Binary input    Output phase
Logic 0         180°
Logic 1         0°
Phasor diagram

[Figure: two phasors — sin ωct at 0° (logic 1) and −sin ωct at 180° (logic 0).]
Constellation diagram

[Figure 2.24 Constellation diagram for BPSK: two points on a line, logic 0 at 180° and logic 1 at 0°.]
• The Figure 2.25 below shows the simplified block diagram of BPSK
receiver.
• The input BPSK signal can be +sin ωct or −sin ωct.
• The low-pass filter (LPF) separates the recovered binary data from
the complex demodulated signal.
Case 1

For a BPSK input signal of +sin ωct (logic 1), the output of the balanced modulator is

(sin ωct)(sin ωct) = sin²ωct = ½(1 − cos 2ωct) = ½ − ½ cos 2ωct

where ½ is the constant (DC) term and ½ cos 2ωct is the second-harmonic term.

The second-harmonic term is filtered out by the LPF, which passes only the positive voltage (+½ V). A positive voltage represents a demodulated output of logic 1.
Case 2

For a BPSK input of −sin ωct (logic 0), the output of the balanced modulator is

(−sin ωct)(sin ωct) = −sin²ωct = −½(1 − cos 2ωct) = −½ + ½ cos 2ωct

where −½ is the constant (DC) term and ½ cos 2ωct is the second-harmonic term.

The second-harmonic term is filtered out, leaving only the negative voltage (−½ V). A negative voltage represents a demodulated output of logic 0.
In both cases, the LPF output is applied to the level detector and the clock recovery circuit. At the output of the level detector we get:

+½ V → logic 1
−½ V → logic 0
Thus the binary signal is obtained at the output.
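The two cases can be checked numerically: multiplying ±sin ωct by the recovered carrier and averaging over a whole carrier cycle (the averaging plays the role of the LPF) yields ±½, as derived above. A small sketch:

```python
import math

def bpsk_demod_level(bit_is_one, samples=64):
    """Multiply the received +/-sin(wct) by the recovered carrier sin(wct)
    and average over one full cycle (the low-pass filter): sin^2 averages
    to +1/2 for logic 1, -sin^2 to -1/2 for logic 0."""
    sign = 1.0 if bit_is_one else -1.0
    acc = 0.0
    for k in range(samples):
        wct = 2 * math.pi * k / samples   # one carrier cycle
        acc += (sign * math.sin(wct)) * math.sin(wct)
    return acc / samples                   # ~ +0.5 or -0.5

print(round(bpsk_demod_level(True), 3))   # 0.5
print(round(bpsk_demod_level(False), 3))  # -0.5
```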
Advantages
(i) BPSK has a bandwidth lower than that of a BFSK signal.
(ii) BPSK has the best noise performance of all these systems; it gives the minimum probability of error.
Disadvantage
Bit rate

Bit rate is the number of bits transmitted (or sent) in one second.

BW = (fc + fb/2) − (fc − fb/2)

BW = fb   ...(2)
For the first data bit, there is no preceding bit with which to compare
it . Therefore, an initial reference bit is assumed.
If the initial reference bit is assumed a logic 1, the output from the
XNOR circuit is simply the complement of that shown in timing
diagram.
Figure 2.27 shows the relationship between the input data , the XNOR
output data, and the phase at the output of the balanced modulator.
The first data bit is XNORed with the reference bit. If they are the same, the XNOR output is a logic 1; if they are different, the XNOR output is a logic 0.
Figure 2.28 and Figure 2.29 show the block diagram and timing sequence for a DBPSK receiver.
(−sin ωct)(−sin ωct) = sin²ωct = ½ − ½ cos 2ωct
(−sin ωct)(+sin ωct) = −sin²ωct = −½ + ½ cos 2ωct
(+sin ωct)(−sin ωct) = −sin²ωct = −½ + ½ cos 2ωct
If reference phase is incorrectly assumed, then only the first
demodulated bit is in error.
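The XNOR differential encoding and decoding just described can be sketched directly (the data pattern below is an arbitrary example):

```python
def dbpsk_encode(data, ref=1):
    """XNOR each data bit with the previously transmitted bit,
    starting from an assumed reference bit."""
    out = []
    prev = ref
    for d in data:
        prev = 1 if d == prev else 0   # XNOR
        out.append(prev)
    return out

def dbpsk_decode(rx, ref=1):
    """XNOR adjacent received bits; a wrongly assumed reference bit
    corrupts only the first recovered bit."""
    out = []
    prev = ref
    for r in rx:
        out.append(1 if r == prev else 0)
        prev = r
    return out

data = [1, 0, 1, 1, 0]
tx = dbpsk_encode(data)
print(dbpsk_decode(tx) == data)                  # True
print(dbpsk_decode(tx, ref=0)[1:] == data[1:])   # True: only the first bit differs
```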
Advantages

Disadvantage

Bandwidth

The bandwidth for both BPSK and DBPSK is the same. Mathematically, the output of a BPSK modulator is proportional to

Output = (sin ωat)(sin ωct)

where ωa = 2πfa = 2π(fb/2) and ωc = 2πfc.

Output = sin 2π(fb/2)t · sin 2πfct
       = ½ cos 2π(fc − fb/2)t − ½ cos 2π(fc + fb/2)t

The output frequency spectrum extends from fc − fb/2 to fc + fb/2, so the minimum bandwidth is

BW = (fc + fb/2) − (fc − fb/2) = 2(fb/2)

BW = fb
M = 2^N = 2² = 4
2.13.2 QPSK-Transmitter
One bit is directed to the I channel and the other to the Q channel, each forming one input to a balanced modulator. It can be seen that, once a dibit has been split into the I and Q channels, the operation is the same as in a BPSK modulator.
I-balanced modulator
Case 1
Case 2
Q - Balanced modulator
Case 1
Case 2
Linear summer

When the linear summer combines the two quadrature (90° out of phase) signals, there are four possible resultant phasors, given by these expressions:
Truth table
Figure 2.32 (a) Phasor Diagram, (b) Constellation Diagram for QPSK
• Consider the received input (cos ωct − sin ωct); how the QPSK receiver operates on it and detects the output (01) is explained in this diagram itself.
Power splitter
The incoming QPSK signal may be any one of the four possible output phases shown in equations 1, 2, 3 and 4. For example, when the QPSK signal (cos ωct − sin ωct) is received, the power splitter directs the input QPSK signal to the I and Q product detectors and also to the carrier recovery circuit.
Product detector
I-product detector
The received QPSK- signal (cos wct-sin wct) is one of the inputs
to the I-product detector and the other input is the recovered carrier
(sin wct). The output of the I –product detector is,
Q-Product detector

The other input to the Q product detector is the quadrature carrier (cos ωct). Its output is

Q = (cos ωct − sin ωct)(cos ωct) = ½ + ½ cos 2ωct − ½ sin 2ωct

The cos 2ωct and sin 2ωct terms are filtered out, leaving

Q = +½ V → Q = logic 1
The demodulated Q and I bits (1 & 0 respectively) corresponds
to the constellation diagram and truth table for the QPSK –modulator
shows in Figure 2.32.
The outputs of the product detectors are fed to the bit combining
circuit , where they are converted from parallel I and Q data channels to
a single binary output data stream.
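The dibit-to-phasor mapping and its recovery by the product detectors can be illustrated with a small sketch; the particular sign convention below (I bit on sin ωct, Q bit on cos ωct) is an assumption in the style of the truth tables used here:

```python
def qpsk_phasor(i_bit, q_bit):
    """Map a dibit to the coefficients of sin(wct) and cos(wct):
    logic 1 -> +1, logic 0 -> -1 on each quadrature carrier
    (an assumed mapping for illustration)."""
    i_amp = 1.0 if i_bit else -1.0
    q_amp = 1.0 if q_bit else -1.0
    return i_amp, q_amp

def qpsk_detect(i_amp, q_amp):
    """The product detectors recover the signs of the two coefficients;
    each sign maps back to one bit."""
    return (1 if i_amp > 0 else 0, 1 if q_amp > 0 else 0)

for dibit in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert qpsk_detect(*qpsk_phasor(*dibit)) == dibit
print("all four dibits recovered")
```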
Clock recovery

Clock recovery is the process of recovering the clock signal from the received signal; the recovered clock is synchronized to the transmitter clock. With this clock signal, a single binary output data stream is obtained.
With QPSK, because the input data are divided into two channels, the bit rate in either the I or the Q channel is equal to one-half of the input data rate (fb/2).
The output of the balanced modulators can be expressed mathematically as

Output = sin 2π(fb/4)t · sin 2πfct,   where ωct = 2πfct

∴ Output = ½ cos 2π(fc − fb/4)t − ½ cos 2π(fc + fb/4)t

The output frequency spectrum extends from fc − fb/4 to fc + fb/4, and the minimum bandwidth fN is

BW = (fc + fb/4) − (fc − fb/4) = 2(fb/4) = fb/2

∴ Bandwidth = fb/2
Advantages of QPSK
I C Output
0 0 −0.541 V
0 1 −1.307 V
1 0 +0.541 V
1 1 +1.307 V

Q C Output (the C bit is inverted before the Q-channel converter)
0 0 −1.307 V
0 1 −0.541 V
1 0 +1.307 V
1 1 +0.541 V

Table 2.5 Q-Channel Truth Table
[Figure: the four PAM levels at the converter output — +1.307 V, +0.541 V, 0 V (reference), −0.541 V, −1.307 V.]
The algorithm for the DAC is quite simple. The I (or Q) bit determines the polarity of the output analog signal (logic 1 = +V, logic 0 = −V), whereas the C (or C̄) bit determines the magnitude (logic 1 = 1.307 V, logic 0 = 0.541 V). Consequently, with two magnitudes and two polarities, four different output conditions are possible.
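The 2-to-4 level converter rule just described is a two-line function; the levels 0.541 V and 1.307 V are the ones used in this section:

```python
def two_to_four_level(polarity_bit, magnitude_bit):
    """I (or Q) bit gives the polarity, C (or C-bar) bit the magnitude,
    per the converter truth table: four PAM levels from two bits."""
    mag = 1.307 if magnitude_bit else 0.541
    return mag if polarity_bit else -mag

print(two_to_four_level(0, 0))  # -0.541
print(two_to_four_level(0, 1))  # -1.307
print(two_to_four_level(1, 0))  # 0.541
print(two_to_four_level(1, 1))  # 1.307
```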
For example

I channel

For the tribit 000, the two inputs to the I-channel product modulator are −0.541 and sin ωct. The output is

I = (−0.541)(sin ωct) = −0.541 sin ωct

Q channel

The Q-channel 2-to-4 level converter sees Q = 0 and C̄ = 1, so its output is −1.307, and the Q-channel product modulator output is −1.307 cos ωct.

For the remaining tribit codes (001, 010, 011, 100, 101, 110 and 111), the procedure is the same.
Phasor diagram

[Figure 2.35 (a) Phasor Diagram for 8-PSK: eight phasors, one per tribit (000–111), each combining a ±0.541 V or ±1.307 V sin ωct component with a ±1.307 V or ±0.541 V cos ωct component — e.g. −0.541 sin ωct + 1.307 cos ωct, 0.541 sin ωct + 1.307 cos ωct, −1.307 sin ωct + 0.541 cos ωct, 0.541 sin ωct − 1.307 cos ωct, −0.541 sin ωct − 1.307 cos ωct.]
Truth Table
Constellation Diagram
• The power splitter directs the input 8-PSK signal to the I and Q
product detectors and the carrier recovery circuit.
• The incoming 8-PSK signal is mixed with the recovered carrier in the I product detector and with a quadrature carrier in the Q product detector.
• The output of the product detectors are 4-level PAM-signals that are
fed to the 4-to-2 level analog to digital converters (ADCs).
• The outputs from the I-channel 4-to-2 level converter are the I and C
bits , whereas the outputs from the Q-channel 4-to-2 level converter
are the Q and C bits.
• The parallel –to-serial logic circuit converts the I/C and Q/C bit pairs
to serial Q, I and C output data streams.
For example

Consider the received 8-PSK signal −0.541 sin ωct − 1.307 cos ωct.

I channel

I = (−0.541 sin ωct − 1.307 cos ωct)(sin ωct)
  = −0.541 (1 − cos 2ωct)/2 − 1.307 sin ωct cos ωct

After the double-frequency terms are filtered out, I = −0.2705 V.

Q channel

Q = (−0.541 sin ωct − 1.307 cos ωct)(cos ωct)
  = −0.541 (sin 2ωct)/2 − 1.307 (1 + cos 2ωct)/2

After filtering, Q = −0.6535 V.

Q = logic 0, C = logic 1
= ½ cos 2π(fc − fb/6)t − ½ cos 2π(fc + fb/6)t

The output frequency spectrum extends from fc − fb/6 to fc + fb/6, and the minimum bandwidth fN is

BW = (fc + fb/6) − (fc − fb/6) = 2(fb/6)

BW = fb/3
2.15 16-PSK

• With 16-PSK, four bits (called a quadbit) are combined, producing 16 different phases (n = 4, M = 2^n, ∴ M = 16).
• The minimum bandwidth and baud equal one-fourth the bit rate (fb/4).
• For 16-PSK, the angular separation between adjacent output phases is only 22.5°. Therefore a 16-PSK signal can undergo only an 11.25° phase shift during transmission and still retain its integrity.
• Figure 2.38 and Table 2.7 show the constellation diagram and the truth table for 16-PSK respectively.
• In all the PSK methods discussed so far, one symbol is distinguished from another by phase, but all the symbols transmitted using BPSK, QPSK or M-ary PSK have the same amplitude.
• The ability of a receiver to distinguish one signal vector from another in the presence of noise depends on the distance between the vector end points.
• This suggests that the noise immunity will improve if the signal vectors differ not only in phase but also in amplitude.
• Such a system is called an amplitude and phase shift keying system.
• In this system the direct modulation of carriers in quadrature (i.e., cos ωct and sin ωct) is involved; therefore it is called quadrature amplitude phase shift keying (QAPSK), or simply QASK.
It is also known as Quadrature amplitude modulation (QAM).
Types of QAM
Depending on the number of bits per message the QAM –signals
are classified as follows,
Name     Bits per symbol   Number of symbols
4-QAM    2                 2^2 = 4
8-QAM    3                 2^3 = 8
16-QAM   4                 2^4 = 16
32-QAM   5                 2^5 = 32
64-QAM   6                 2^6 = 64
2.16.1 8-QAM –Transmitter
• 8-QAM is an M-ary encoding technique where M = 8. Unlike 8-PSK, the output signal from an 8-QAM modulator is not a constant-amplitude signal.
¾¾ Figure 2.39 shows the block diagram of an 8-QAM transmitter. The only
difference between the 8-QAM transmitter and the 8-PSK transmitter
is the omission of the inverter between the C-channel and the
Q-product modulator.
• As with 8-PSK, the incoming data are divided into groups of three bits (tribits). The I, Q and C bit streams each have a bit rate equal to one-third of the incoming data rate (fb/3).
¾¾ The I and Q bits determine the polarity of the PAM –signal at the
output of the 2-to-4 level converters, and the c-bit determines the
magnitude. Because the C-bit is fed uninverted to both the I and the
Q-channel 2-to-4 level converters the magnitudes of the I and Q PAM
signals are always equal.
• Their polarities depend on the logic condition of the I and Q bits and, therefore, may be different.
¾¾ The truth table for the I and Q-channel 2 -to -4 level converters are
identical.
Truth table
I/Q C Output
0 0 -0.541 V
0 1 -1.307 V
1 0 +0.541 V
1 1 +1.307 V
Q I C   Amplitude   Phase
0 0 0   0.765 V     −135°
0 0 1   1.848 V     −135°
0 1 0   0.765 V     −45°
0 1 1   1.848 V     −45°
1 0 0   0.765 V     +135°
1 0 1   1.848 V     +135°
1 1 0   0.765 V     +45°
1 1 1   1.848 V     +45°
I-channel
Q-channel
The inputs to the Q-channel 2-to-4 level converter are Q = 0 and C = 0; the output is −0.541 V.
The two inputs to the Q-channel product modulator are -0.541
and cos wct .The output is
Q=(-0.541) (cos ωct)= -0.541 cos ωct
The outputs from the I and Q –channel product modulators are
combined in the linear summer and produce a modulated output of
Summer output = −0.541 sin ωct − 0.541 cos ωct
(or)
= 0.765 sin(ωct − 135°)
For remaining tribit codes, the procedure is same.
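The summer's amplitude and phase can be computed from the two PAM levels (A = √(I² + Q²), φ = atan2(Q, I)); for the tribit 000 this reproduces the 0.765 V / −135° entry of the truth table:

```python
import math

def eight_qam_output(i_pam, q_pam):
    """Combine the I- and Q-channel PAM levels (coefficients of sin wct
    and cos wct) into the summer output's amplitude and phase, i.e.
    A*sin(wct + phi)."""
    amplitude = math.hypot(i_pam, q_pam)
    phase_deg = math.degrees(math.atan2(q_pam, i_pam))
    return round(amplitude, 3), round(phase_deg, 1)

# Tribit 000: the I and Q PAM levels are both -0.541 V
print(eight_qam_output(-0.541, -0.541))  # (0.765, -135.0)
```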
2.16.2 8-QAM Receiver
An 8-QAM receiver is almost identical to the 8-PSK receiver.
The differences are the PAM levels at the output of the product
detectors and the binary signals at the output of the analog –to digital
converters.
Because there are two transmit amplitudes possible with 8-QAM
that are different from those achievable with 8-PSK , the four demodulated
PAM –levels in 8-QAM are different from those in 8-PSK. Therefore,
the conversion factor for the analog-to- digital converters must also be
different.
Also, with 8-QAM the binary output signals from the I-channel
analog-to-digital converter are the I and C bits, and the binary output
signals from the Q-channel analog to digital converter are Q and C-bits.
Bandwidth consideration for 8-QAM
Bandwidth required for 8-QAM is same as in 8-PSK.
fb
The minimum BW =
3
2.17 16 QAM
Bit Splitter
Four bits are serially clocked into the bit splitter, then they are
outputted simultaneously and in parallel with I, I’, Q, Q’ -channels
2- to 4- level converter
They are + 0.821 sin wct, -0.821 sin wct,+ 0.22 sin wct and - 0.22
sin wct
Summer
The Figure 2.44 shows the truth table, Phasor diagram and
constellation diagram for 16-QAM.
For example
For a, quadbit input of 0000, determine the output amplitude and
phase for 16 QAM modulator.
For the Q product modulator they are +0.821 cos ωct, −0.821 cos ωct, +0.22 cos ωct and −0.22 cos ωct.
I - Channel
The two inputs to the I-channel product modulator are −0.22 V and sin ωct.
Q - Channel
Summer output is
Truth table

Binary input (Q Q′ I I′)   16-QAM output
0 0 0 0    0.311 V   −135°
0 0 0 1    0.850 V   −165°
0 0 1 0    0.311 V   −45°
0 0 1 1    0.850 V   −15°
0 1 0 0    0.850 V   −105°
0 1 0 1    1.161 V   −135°
0 1 1 0    0.850 V   −75°
0 1 1 1    1.161 V   −45°
1 0 0 0    0.311 V   135°
1 0 0 1    0.850 V   165°
1 0 1 0    0.311 V   45°
1 0 1 1    0.850 V   15°
1 1 0 0    0.850 V   105°
1 1 0 1    1.161 V   135°
1 1 1 0    0.850 V   75°
1 1 1 1    1.161 V   45°
With 16- QAM, because the input data are divided into four
channels, the bit rate in the I, I’, Q, Q’ channel is equal to one-fourth of
the binary input data rate (fb/4).
Output = x sin 2π(fb/8)t · sin 2πfct
       = (x/2) cos 2π(fc − fb/8)t − (x/2) cos 2π(fc + fb/8)t

∴ The output frequency spectrum extends from fc − fb/8 to fc + fb/8.

∴ Minimum bandwidth = (fc + fb/8) − (fc − fb/8) = 2(fb/8)

BW = fb/4
Definition

In a phase shift keying (PSK) system, the phase of the carrier is changed according to the instantaneous value of the modulating signal.
Case 2: If the received signal is −sin ωct, then the output of the squaring circuit is

(−sin ωct)² = sin²ωct = ½(1 − cos 2ωct),

which contains a component at twice the carrier frequency (cos 2ωct).
Phase-Locked loop(PLL)
Frequency divider
Power splitter
The power splitter circuit directs the input received PSK signal to
the I and Q Balanced modulators (or) product detectors.
I - Balanced modulator
It has two inputs , one is the output of power splitter (i.e) received
signal, another is VCO output .
It produces In phase signal to Balanced product detector.
Q – Balanced modulator
It also has two inputs: one is the received signal, the other is the 90° phase-shifted VCO output.
The recovered data are delayed by one-half a bit time and then
compared with the original data in an XOR-circuit.
Figure 2.49 Shows the relationship between the data and the
recovered clock timing.
S.No  Parameter                    BASK       BFSK                MSK       BPSK             DPSK          QPSK             M-ary PSK        QAM
1     Variable characteristic      Amplitude  Frequency           Frequency Phase            Phase         Phase            Phase            Amplitude and phase
2     Bits per symbol              One        One                 Two       One              One           Two              N bits           N bits
3     Number of possible symbols   Two        Two                 Four      Two              Two           Four             M = 2^N          M = 2^N
4     Symbol duration              Tb         Tb                  2Tb       Tb               2Tb           2Tb              NTb              NTb
5     Minimum BW                   2fb        4fb                 1.5fb     2fb              fb            fb               2fb/N            2fb/N
6     Performance in presence      Poor       Better than ASK     Better    Better than ASK  Better        Better than FSK  Better           Better than ASK
      of noise
7     Error possibility            High       Lower than ASK      Low       Lower than FSK   Low           Low              Low              Low
8     Complexity                   Simple     Moderately complex  Complex   Complex          More complex  More complex     More complex     More complex
9     Detection method             Coherent   Non-coherent        Coherent  Coherent         Non-coherent  Coherent         Coherent         Coherent
10    Minimum Euclidean distance   √Eb        √(2Eb)              √(2Eb)    2√Eb             2√Eb          √(2Eb)           2√Eb·sin(π/M)    0.4√Eb for M = 16
ANALOG AND DIGITAL COMMUNICATION
5. What is correlator?
r(t) = ∫₀ᵗ f(t) x(t) dt
Solution
According to the Shannon–Hartley theorem,

I = B log2(1 + S/N)

30 × 10^6 = 5 × 10^6 log2(1 + S/N)

(30 × 10^6) / (5 × 10^6) = log2(1 + S/N)

6 = log2(1 + S/N)

64 = 1 + S/N

S/N = 63, or 17.99 dB
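The worked Shannon–Hartley example above can be checked in a few lines:

```python
import math

def required_snr(capacity_bps, bandwidth_hz):
    """Invert Shannon-Hartley, I = B*log2(1 + S/N), for the S/N needed
    to carry capacity_bps over bandwidth_hz."""
    snr = 2 ** (capacity_bps / bandwidth_hz) - 1
    return snr, 10 * math.log10(snr)

snr, snr_db = required_snr(30e6, 5e6)
print(snr)               # 63.0
print(round(snr_db, 2))  # 17.99
```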
15. For an 8-PSK system operating with an information bit rate of 48 kbps, determine (a) the baud, (b) the minimum bandwidth, and (c) the bandwidth efficiency.

Given:

fb = 48 kbps, N = 3

Solution

(a) Baud = fb / N = 48000 / 3 = 16000

(b) Bandwidth B = fb / N = 48000 / 3 = 16000 Hz

(c) Bandwidth efficiency = Transmission bit rate (bps) / Minimum bandwidth (Hz)
    = 48000 bps / 16000 Hz = 3 bits/cycle
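The same three quantities can be computed for any M-ary PSK system; the formulas follow the worked example above (baud = fb/N, minimum bandwidth = fb/N):

```python
def mpsk_figures(bit_rate, bits_per_symbol):
    """Baud, minimum Nyquist bandwidth and bandwidth efficiency for an
    M-ary PSK system, following baud = fb/N and B = fb/N."""
    baud = bit_rate / bits_per_symbol
    bandwidth = bit_rate / bits_per_symbol
    efficiency = bit_rate / bandwidth
    return baud, bandwidth, efficiency

print(mpsk_figures(48000, 3))  # (16000.0, 16000.0, 3.0)
```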
Δf = |fm − fs| / 2, i.e. 2Δf = |fs − fm|   ...(3)

Substituting (3) in (1):

B = 2Δf + 2fb
B = 2(Δf + fb)
Where,
B - Minimum Nyquist bandwidth (Hz)
fb - Input bit rate (bps)
Df = frequency deviation (Hz)
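The two formulas can be combined into one helper; the numbers below are the mark/space/bit-rate values used in review question 24:

```python
def fsk_bandwidth(f_mark, f_space, bit_rate):
    """Peak frequency deviation df = |fs - fm| / 2 and minimum FSK
    bandwidth B = 2*(df + fb)."""
    df = abs(f_space - f_mark) / 2
    return df, 2 * (df + bit_rate)

df, bw = fsk_bandwidth(49e3, 51e3, 2e3)
print(df)  # 1000.0
print(bw)  # 6000.0
```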
17. Draw the phasor diagram of QPSK.

[Figure: four phasors separated by 90°, labelled with the dibits 00, 01, 10 and 11.]
19. What is the relation between bit rate and baud for a FSK sys-
tem?
The bit time equals the time of an FSK signaling element, and
the bit rate equals the baud.
Baud = fb / N
Number of bits encoded into each signaling element N = 1
Baud = fb
Where, fb - Input bit rate (bps)
20. Draw ASK and PSK waveforms for a data stream 01101001.
Given: S/N = 1023, B = 50 kHz

Solution

Rmax = B log2(1 + S/N)
     = 50000 log2(1 + 1023)
     = 50000 × 10
     = 500000 bits/sec
23. What is the purpose of limiter in FM receiver?
In an FM system, the message information is transmitted by
variations of the instantaneous frequency of a sinusoidal carrier
wave, and its amplitude is maintained constant.
Any variation of the carrier amplitude at the receiver input must
result from noise or interference.
An amplitude limiter, following the IF section is used to remove
amplitude variations by clipping the modulated wave at the IF
section.
Given: fm = 49 kHz, fs = 51 kHz, fb = 3 kbps

Solution

(a) Peak frequency deviation

Δf = |fm − fs| / 2 = |49 × 10³ − 51 × 10³| / 2 = 1 kHz
REVIEW QUESTIONS:
PART - A
1. What do you mean by FSK?
2. What is M-ary encoding?
3. State Shannon’s Limit for channel capacity theorem. Give an
example.
4. Draw the block diagram of BFSK transmitter.
5. Define Bandwidth efficiency.
6. Draw the constellation diagram of QPSK signal.
7. Draw 8-QAM phasor diagram.
8. Determine the peak frequency deviation and minimum bandwidth
for a binary FSK signal with a mark frequency of 49 KHz, a space
frequency of 51 KHz.
9. What is Shannon limit for information capacity?
10. What is binary phase shift keying?
11. Draw ASK and PSK waveforms forms for a data stream 110101.
12. What are the advantages of QPSK?
13. What is the relation between bit rate and baud for a FSK system?
14. Draw the ASK and FSK signals for the binary signal s(t) = 1011001.
15. A typical dial-up telephone connection has a bandwidth of 3 kHz and a signal-to-noise ratio of 30 dB. Calculate the Shannon limit.
16. What are the two types of carrier recovery circuit?
17. Write down the expression for peak frequency deviation of FSK.
18. What is the need for synchronization?
19. What is the bandwidth requirement of FSK?
20. Write down the bit error rate expression of a QPSK system.
21. Draw the block diagram of QPSK transmitter.
22. Differentiate between PSK from DPSK
23. What are the advantages of PSK over FSK?
24. Determine the bandwidth and baud for the FSK signal with a mark
frequency of 49 kHz and a space frequency of 51 kHz and a bit rate
of 2 kbps.
25. Write the differences between PSK and FSK.
PART – B
1. Draw the block diagram of a QPSK transmitter and explain. Derive the
bandwidth requirement of a QPSK system.
2. Draw the block diagram of a non-coherent receiver for detection of binary FSK signals and derive the probability of symbol error for a non-coherent FSK receiver.

The basic symbols for the Morse code were dots and dashes. Various combinations of dots and dashes were used for representing various letters, numbers, punctuation marks, etc.
DATA COMMUNICATION

Description

1. Message

2. Sender

3. Medium

It is the physical path over which the message travels from the sender to the receiver.

4. Receiver

5. Protocol
Data transmission can be carried out in two ways:

1. Parallel transmission
2. Serial transmission

Serial transmission is further divided into synchronous and asynchronous transmission.
(i) The advantage of parallel transmission is that all the data bits are transmitted simultaneously; therefore the time required for the transmission is reduced.
3.7 CONFIGURATIONS
Data communication circuits can be generally categorized as two-point and multipoint circuits. There are two possible ways to connect the devices:

1. Point-to-point connection
2. Multipoint connection
3.8 TOPOLOGIES
[Figure 3.9 Various Topologies: (a) point-to-point — two stations linked directly; (b) star — remote stations connected to a central host; (c) bus or multidrop — stations sharing a common communication medium; (d) ring or loop; (e) mesh.]
3.9 TRANSMISSION MODES

In half-duplex mode, each station can transmit and receive, but not at the same time: when one device is sending, the other is receiving, and vice versa.

In full-duplex mode, signals going in either direction share the full capacity of the link.
The link may contain two physically separate transmission paths, one for sending and another for receiving.

A binary digit or bit can represent only two symbols, as it has only two states, '0' and '1'.
Bit numbers 7 6 5 select the column (0–7); bit numbers 4 3 2 1 select the row:

4321    0    1    2    3    4    5    6    7
0000   NUL  DLE  SP   0    @    P    `    p
0001   SOH  DC1  !    1    A    Q    a    q
0010   STX  DC2  "    2    B    R    b    r
0011   ETX  DC3  #    3    C    S    c    s
0100   EOT  DC4  $    4    D    T    d    t
0101   ENQ  NAK  %    5    E    U    e    u
0110   ACK  SYN  &    6    F    V    f    v
0111   BEL  ETB  '    7    G    W    g    w
1000   BS   CAN  (    8    H    X    h    x
1001   HT   EM   )    9    I    Y    i    y
1010   LF   SUB  *    :    J    Z    j    z
1011   VT   ESC  +    ;    K    [    k    {
1100   FF   FS   ,    <    L    \    l    |
1101   CR   GS   -    =    M    ]    m    }
1110   SO   RS   .    >    N    ^    n    ~
1111   SI   US   /    ?    O    _    o    DEL
The control symbols are codes reserved for special functions; they are listed in Table 3.2. CR (carriage return) and LF (line feed) are the symbols used for basic operations of printers or displays.

The symbols STX (start of text) and ETX (end of text) are used for grouping data characters.

The other symbols and their meanings are as listed in Table 3.2.
ASCII is a 7-bit code, but an eighth bit is often added. This is called the parity bit; it is used to detect errors introduced during transmission.
The parity bit is generally added in the most significant bit (MSB)
position.
The 8-bit ASCII code word format has been shown in figure 3.13
8 bits
Figure 3.13 An ASCII word with parity bit included in the MSB
position
This is an 8-bit code. However all the possible 256 combinations are
not used.
There is no parity bit used to check error in the basic code set. The
EBCDIC code set is as shown in table 3.3.
3 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
4567 0 1 2 3 4 5 6 7 8 9 A B C D E F
0000 0 NUL DLE SP & 0
0001 1 SOH SBA / a j A J 1
0010 2 STX EUA SYN b k s B K S 2
0011 3 ETX IC c l t C L T 3
0100 4 d m u D M U 4
0101 5 HT NL e n v E N V 5
0110 6 ETB f o w F O W 6
0111 7 ESC EOT g p x G P X 7
1000 8 h q y H Q Y 8
1001 9 EM i r z I R Z 9
1010 A ! !
∩
1011 B $ #
1100 C DUP RA < * % @
1101 D SF ENQ NAK ( ) --
1110 E FM + ; > =
1111 F ITB SUB | ⌐ ? -
ANALOG AND DIGITAL COMMUNICATION
MSB                          LSB
b0  b1  b2  b3  b4  b5  b6  b7

SP = 1 0 1 0 0 0 0 0
¾¾ Therefore the same code is used for two symbols using letter shift/
figure shift keys which change the meaning of a code.
10  J  Bell (audible signal)  0 1 0 1 1      26  Z  +                1 0 0 0 1
11  K  (                      0 1 1 1 1      27  Carriage return     0 1 0 0 0
12  L  )                      1 0 0 1 0      28  Line feed           0 0 0 1 0
13  M  .                      1 1 1 0 0      29  Letters             1 1 1 1 1
14  N  ,                      0 1 1 0 0      30  Figures             1 1 0 1 1
15  O  9                      1 1 0 0 0      31  Space               0 0 1 0 0
16  P  0                      1 0 1 1 0      32  Not used            0 0 0 0 0
In order to make the size of each code word 1 byte (8 bits), the ASCII patterns are augmented by an extra 0 at the left (MSB position). This extra bit can instead be used as a parity bit. The first byte in the extended ASCII is 0000 0000 and the last one is 0111 1111.
3.10.5 Unicode
It is a 16-bit code which can represent up to 2^16 (65,536) symbols.
3.10.6 ISO
Bar codes are seen on almost every consumer item sold in modern stores across the world. The bar code is a series of black bars separated by white spaces. The widths of the bars represent binary 1s and 0s, and the bar pattern represents data identifying the item.
4. Security access
5. Document
6. Production counting
7. Automatic billing
Figure 3.14 (a) shows a typical bar code and Figure 3.14 (b) shows
the bar code structure.
Refer Figure 3.14 (b) various fields in the bar code structure are
as follows
1. Start Margin

It is a unique sequence of bars and spaces which identifies the beginning of the data field.

2. Data Characters

3. Check Characters
The noise can introduce an error in the binary bits travelling from one system to the other; that is, a 0 may change to 1 or a 1 may change to 0.

These extra bits are called parity bits. They allow the detection, or sometimes the correction, of the errors.

The data bits along with the parity bits form a code word.
1. Content errors

The term single-bit error means that only one bit in a given data unit, such as a byte, is in error.
Sent: 0 1 1 1 0 0 1 1  →  (medium)  →  Received: 0 1 1 1 0 0 1 0
Burst errors
If two or more bits from a data unit such as a byte change from
1 to 0 or from 0 to 1 then burst errors are said to have occurred.
The length of the burst is measured from the first corrupted bit
to the last corrupted bit. Some of the bits in between may not have been
corrupted.
Sent: 0 1 1 1 0 0 1 1  →  (medium)  →  Received: 0 1 0 1 1 0 0 1 (burst error)
It is possible for the receiver to detect these errors if the received code
word (corrupted) is not one of the valid code words.
Number of errors:  1  2  3
Distance:          1  2  3

Figure 3.18
Hence to detect the errors at the receiver , the valid code words should
be separated by a distance of more than 1.
Otherwise the incorrect received code words will also become some
other valid code words and the error detection will be impossible.
message bits.

MSB                         LSB
 P | d6 d5 d4 d3 d2 d1 d0
 parity bit   7 data bits
¾¾ Even parity means the number of 1’s in the given word including
the parity bit should be even (2,4,6…).
¾¾ Odd parity means the number of 1’s in the given word including
the parity bit should be odd(1,3,5..).
For odd parity this bit is set to 1 or 0 at the transmitter such that
the number of “1 bits” in the entire word is odd. This is illustrated
in Figure 3.21(b).
For even parity this bit is set 1 or 0 such that the number of “1 bits”
in the entire word is even. This is illustrated in Figure 3.21(a).
(a) Even parity: P = 0, data bits = 1 0 0 1 0 1 1
(b) Odd parity:  P = 1, data bits = 1 0 0 1 0 1 1

Figure 3.21
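The even/odd parity rule can be written as a one-line computation; the data word below is the one from Figure 3.21:

```python
def parity_bit(data_bits, even=True):
    """Return the parity bit that makes the total count of 1s
    (data plus parity) even or odd."""
    ones = sum(data_bits)
    if even:
        return ones % 2        # already even -> 0, else 1
    return 1 - ones % 2

word = [1, 0, 0, 1, 0, 1, 1]   # the data bits of Figure 3.21
print(parity_bit(word, even=True))   # 0
print(parity_bit(word, even=False))  # 1
```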
Transmitted code: 0 1 0 0 1 0 1 1 0

Received code (with two errors): 0 1 1 1 1 0 1 1 0 — the parity is still even, so the receiver decides 'no error'.

Figure 3.23 The receiver cannot detect the presence of errors if the number of errors is even, i.e. 2, 4, 6, ...
P 7 6 5 4 3 2 1
H 0 1 0 0 1 0 0 0
O 1 1 0 0 1 1 1 1
L 1 1 0 0 1 1 0 0
E 1 1 0 0 0 1 0 1
Note that the parity bits are selected in order to obtain an even
parity for each row (i.e. for each letter)
Word A 1 0 1 1 0 1 1 1
+
Word B 0 0 1 0 0 0 1 0
Sum 1 1 0 1 1 0 0 1
Solution

            1 0 1 1 0 0 0 1
Data      + 1 0 1 0 1 0 1 1
bytes     + 0 0 1 1 0 1 0 1
          + 1 0 1 0 0 0 0 1
          ----------------
Checksum    0 0 1 1 0 0 1 0
byte

Note that the carries out of the MSB have been ignored while writing the checksum byte.
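The checksum of the example is reproduced by modulo-256 addition:

```python
def checksum_byte(data_bytes):
    """Add the data bytes, ignoring carries out of the MSB
    (i.e. modulo-256 arithmetic), to form the checksum byte."""
    return sum(data_bytes) & 0xFF

data = [0b10110001, 0b10101011, 0b00110101, 0b10100001]
print(format(checksum_byte(data), '08b'))  # 00110010
```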
[Figure 3.25: a block of characters with VRC and LRC parity bits (even parity). The VRC bits make the parity of each column even, and the LRC bits make the parity of each row even.]

The LRC bits indicate the parity of the rows and the VRC bits indicate the parity of the columns, as shown in Figure 3.25.
Character C
b1 1
b2 1 → Column - 1 of the data block
b3 0
b4 0
b5 0
b6 0
b7 1
VRC bit → 1 (the VRC bit is made 1 to make the parity of the first column even)

Figure 3.26

Therefore the eighth bit b8, which is the VRC bit, is made '1' to make the parity even.
The LRC bits are parity bits associated with the data block of Figure 3.25. Each LRC bit makes the parity of the corresponding row even. For example, consider row 1 of Figure 3.27.
Figure 3.27
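The VRC/LRC construction can be sketched for a block of characters. Treating each character as one row of 7 bits is an implementation choice for this sketch, not the book's column layout:

```python
def vrc_lrc(block):
    """Even-parity VRC bit per character (here, one row of 7 bits each)
    and an LRC character whose bits make each column's parity even."""
    vrc = [sum(ch) % 2 for ch in block]
    lrc = [sum(col) % 2 for col in zip(*block)]
    return vrc, lrc

# Character 'C' = 1000011, written b1..b7 as in Figure 3.26:
c_bits = [1, 1, 0, 0, 0, 0, 1]
vrc, _ = vrc_lrc([c_bits])
print(vrc)  # [1]
```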
Example 3.2 The following bit stream is encoded using VRC,LRC and
even parity. Locate and correct the error if it is present.
11100001
Solution
1. Figure 3.28 shows the received data block along with the LRC and VRC bits.

[Figure 3.28 Received data block along with VRC and LRC bits: the rows b1–b7 of the data block, each followed by its LRC bit, and a final row of even-parity VRC bits. One row (b3) and one VRC column show wrong parity, and their intersection locates the erroneous bit.]
This technique is more powerful than the parity check and checksum error detection.
There is no error if this division does not yield any remainder. But a
non-zero remainder indicates presence of errors in the received data
unit.
Step 1:

A string of n 0s is appended to the data unit, where the divisor has (n + 1) bits:

Data | 00................0

Step 2:

Divide the newly generated data unit from step 1 by the divisor. This is a binary (modulo-2) division.

Step 3:

The remainder of this division is the n-bit CRC.

Step 4:

This CRC replaces the n 0s appended to the data unit in step 1, to give the code word to be transmitted, as shown in Figure 3.29.
[Figure 3.29: the transmitted code word consists of the data followed by the CRC. At the receiver the same division is repeated; if the remainder is 0, then there are no errors.]
¾¾ The code word received at the receiver consists of data and CRC.
¾¾ The receiver treats it as one unit and divides it by the same (n+1) bit
divisor which was used at the transmitter.
¾¾ If the remainder is zero, then the received code word is error free and
hence should be accepted.
Successive partial remainders in the modulo-2 division by the divisor 10101:

011000, 011010, 011111, 010100, 000011011

Remainder = 01110

The generation of the CRC code will become clear after solving Example 3.4.
Example 3.4 Generate the CRC code for the data word 110010101. The divisor is 10101.

Solution

Divisor: 10101

Dividend (data word followed by five 0s): 1 1 0 0 1 0 1 0 1 0 0 0 0 0

Successive modulo-2 subtractions (XORs) of the divisor, aligned at each leading 1:

11001 ⊕ 10101 = 01100
11000 ⊕ 10101 = 01101
11011 ⊕ 10101 = 01110
11100 ⊕ 10101 = 01001
10011 ⊕ 10101 = 00110
11000 ⊕ 10101 = 01101
11010 ⊕ 10101 = 01111
11110 ⊕ 10101 = 01011
10110 ⊕ 10101 = 00011

Remainder = 0 0 0 1 1

∴ Code word = 1 1 0 0 1 0 1 0 1 0 0 0 1 1
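Example 3.4 can be verified with a small modulo-2 division routine. Note that, following the book's worked example, five 0s are appended here, even though the conventional choice is one less than the divisor length:

```python
def crc(data, divisor, n_zeros):
    """Modulo-2 long division as in Example 3.4: append n_zeros 0s,
    XOR the divisor in wherever the leading bit is 1, and the last
    n_zeros bits left over are the check bits."""
    d = list(data + '0' * n_zeros)
    for i in range(len(d) - len(divisor) + 1):
        if d[i] == '1':
            for j, b in enumerate(divisor):
                d[i + j] = '0' if d[i + j] == b else '1'
    remainder = ''.join(d[-n_zeros:])
    return remainder, data + remainder

rem, codeword = crc('110010101', '10101', 5)
print(rem)       # 00011
print(codeword)  # 11001010100011
```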
Example 3.5 Calculate the CRC for the frame 110101011 and the generator polynomial x⁴ + x + 1, and write the transmitted frame.

Solution

Divisor: x⁴ + 0x³ + 0x² + x + 1 = 1 0 0 1 1

Dividend (frame followed by five 0s): 1 1 0 1 0 1 0 1 1 0 0 0 0 0

Successive modulo-2 subtractions (XORs) of the divisor, aligned at each leading 1:

11010 ⊕ 10011 = 01001
10011 ⊕ 10011 = 00000
11000 ⊕ 10011 = 01011
10110 ⊕ 10011 = 00101

Remainder = 0 1 0 1 0

∴ Transmitted frame = 1 1 0 1 0 1 0 1 1 0 1 0 1 0
The encoder accepts the data bits and adds check bits to them to produce the code words. These code words are transmitted towards the receiver, where the check bits are used by the decoder to detect and correct the errors.

[Figure: Data source → Encoder → code words → Decoder → Destination. The encoder adds check bits to the data bits; the decoder uses them to detect and correct errors.]
The encoder of figure 3.31 adds the check bits to the data bits,
according to the prescribed rule. This rule will be dependent on the
type of code being used.
The decoder separates out the data and check bits. It uses the
parity bits to detect and correct errors if they are present in the
received code words.
In FEC the receiver searches for the most likely correct word.
The nearest valid code word (the one having minimum distance)
is the most likely correct version of the received code word, as shown in
Figure 3.32.
Figure 3.32 Received code word 11001100 and its distances to the valid
code words: valid code word 1 = 11011100 (distance 1), valid code word
2 = 11101101 (distance 2), valid code word 3 = 11110100 (distance 3).
In Figure 3.32 the valid code word 1 has the minimum distance
(1); hence it is the most likely correct code word.
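The distance comparison of Figure 3.32 amounts to a minimum-Hamming-distance search, which can be sketched as follows (the code words are those of the figure; the helper names are ours):

```python
def hamming(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

received = "11001100"
valid = ["11011100", "11101101", "11110100"]

# FEC decision: accept the valid code word at minimum Hamming distance
best = min(valid, key=lambda w: hamming(received, w))
print([hamming(received, w) for w in valid])  # [1, 2, 3]
print(best)                                   # 11011100
```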
There are two basic systems of error detection and correction. The
first one is the forward error correction (FEC) system, which has
been discussed in the previous section. The second one is the automatic
repeat request (ARQ) system.
1. In an ARQ system a smaller number of check bits (parity bits) needs to
be sent. This increases the (k/n) ratio for an (n, k) block code if
transmitted using the ARQ system.
3. The bit rate of the forward transmission must make allowance for the
backward repeat transmission.
Figure 3.33 ARQ system: message input → input buffer and controller →
encoder → forward transmission channel → error detector → output buffer
and controller; a return transmission channel carries the ACK/NAK signal
back to the transmitter as a feedback path.
•	 The encoder produces code words for each message signal at its
input. Each code word at the encoder output is stored temporarily
and transmitted over the forward transmission channel.
•	 At the destination a decoder decodes the code words and looks
for errors.
•	 The output controller and buffer on the receiver side assemble the
output bit stream from the code words accepted by the decoder.
The bit rate of the return transmission, which carries the ACK/NAK
signal, is low compared to the bit rate of the forward transmission.
Therefore the error probability of the return transmission is negligibly
small.
The block diagram for the stop-and-wait ARQ system is as shown in
Figure 3.34. This method is the simplest ARQ system.
Figure 3.34 Stop-and-wait ARQ: the transmitter sends X1 and waits (Tw);
on receiving ACK it sends X2; when the receiver detects an error in X2
it returns NAK, and code word X2 is retransmitted.
At the receiver the detector searches for errors. If no error is found,
it sends the positive acknowledgement signal ACK back to the
transmitter. After receiving this ACK signal, the transmitter sends the
next code word X2. If the receiver detects an error, it sends a negative
acknowledgement signal NAK to the transmitter.
Figure 3.35 Go-back-N ARQ: the transmitter sends X1 X2 X3 X4 X5 X6 X7;
an error is detected in X3 and NAK is returned; the receiver discards
the code words from X3 onwards, and X3 X4 X5 X6 X7 X8 are retransmitted.
•	 The major difference between this and the previous system is that
the transmitter does not wait for the ACK signal before transmitting
the next code word.
•	 When the receiver detects an error in the third code word X3, as shown
in Figure 3.35, it sends the NAK signal.
•	 But this signal takes some time to reach the transmitter; by that time
the transmitter has transmitted code words up to X7.
•	 On reception of the NAK signal the transmitter retransmits all the
code words from X3 onwards. The receiver discards all the code words
from X3 onwards, i.e. X3 to X7. It then receives all the code
words that are retransmitted by the transmitter.
In the selective repeat ARQ of Figure 3.36 the transmitter sends
X1 X2 X3 X4 X5 X6 X7; an error is detected in X3 and NAK is returned;
only X3 is retransmitted, followed by X8 X9 X10.
Figure 3.36 Selective repeat A R Q system
In this system as well, the transmitter does not wait for the ACK
signal before transmitting the next code word. It transmits the code
words continuously till it receives the NAK signal from the receiver.
The receiver sends the NAK signal back to the transmitter as soon
as it detects an error in a received code word; for example, the
receiver detects an error in the third code word X3.
The code words X4, X5, X6 and X7 received by the receiver are not
discarded. The receiver receives the retransmitted code word in between
the regular code words; therefore the receiver has to maintain the code
words in sequence.
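The retransmission behaviour in Figures 3.35 and 3.36 can be contrasted with a small sketch (the timing assumption, taken from the figures, is that the NAK for X3 arrives only after X7 has been sent):

```python
# Code words already sent when the NAK for X3 arrives (per the figures)
sent_before_nak = ["X1", "X2", "X3", "X4", "X5", "X6", "X7"]
bad = "X3"

# Go-back-N: everything from the erroneous word onwards is retransmitted
retransmit_gbn = sent_before_nak[sent_before_nak.index(bad):]

# Selective repeat: only the erroneous word is retransmitted
retransmit_sr = [bad]

print(retransmit_gbn)  # ['X3', 'X4', 'X5', 'X6', 'X7']
print(retransmit_sr)   # ['X3']
```

Selective repeat saves retransmissions at the cost of the receiver's resequencing logic.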
Figure 3.37 Data communication hardware: at the primary station a
mainframe computer with a front-end processor (FEP) acts as the DTE and
connects through a parallel interface to a data modem (DCE); the data
modems at the far end connect over RS-232 to the DTEs (CT and ROP) of
secondary station 1 and secondary station 2.
The LCU at the primary station is more complicated than the LCU at
the secondary stations.
The primary LCU controls the data traffic to and from various
circuits having different characteristics.
The secondary LCU is used for directing the data traffic between one
data link and a few terminal devices. All these devices operate at the
same speed using the same character code.
Functions of LCU
2. LCU directs flow of input and output data between different links and
their application programs.
5. The data link control (DLC) characters are inserted and deleted in
the LCU.
•	 Inside the LCU there is one circuit which performs most of the tasks
mentioned above. It is called a UART when the transmission is of the
asynchronous type, and a USRT when synchronous transmission is being
used.
Functions of UART
Figure 3.38 shows how to program the control word for various
functions. The control word sets up the data, parity and stop-bit
steering logic circuit.
NDB2  NDB1  Bits/word
 0     0      5
 0     1      6
 1     0      7
 1     1      8
The UART sends a transmit buffer empty (TBMT) signal to the DTE to
indicate that it is ready to receive data.
The data passes through the steering logic circuit, where the start,
stop and parity bits are added.
The data is loaded into the transmit shift register and is then
serially outputted on the transmit serial output (TSO) pin. The bit
rate of the outputted data is equal to the transmit clock frequency (TCP).
This process continues till the DTE transfers all its data. This is
shown in Figure 3.39 (b).
Figure 3.39 (a) shows a simplified UART transmitter: the control-register
(C.R.) bits NSB, NPB, POE, NDB1 and NDB2 configure the steering logic;
parallel input data from the LCU is loaded into the transmit buffer
register (TBR); a parity generator (P.G.) supplies the parity bit; and
the status word register reports TBMT through the status word enable
(SWE) line.
Figure 3.40 (a) shows the simplified block diagram of a UART receiver:
the receive serial input (RSI) feeds a start-bit verify circuit and the
receive timing circuit, the bits are clocked into the receive shift
register (RSR), and status is reported through the status word register
(SWR).
When one complete character is loaded into the shift register, the
complete character is transferred to the buffer register using parallel
data transfer.
The receive data available (RDA) flag is set in the status word
register.
In order to read the status register, the DTE observes the status word
enable (SWE) line. If this line is found active, the character is read
from the buffer register by placing an active condition on the receive
data enable (RDE) pin.
Once the data reading is over, the DTE places an active signal on the
receive data available reset (RDAR) pin, which resets the RDA flag.
Functions of USRT
2. Error detection.
The USRT does not use start and stop bits. Instead, unique SYN
characters are loaded into the transmit and receive SYN registers before
the data is transmitted.
NPB = 0: no parity bit
POE = 0: odd parity
NDB2  NDB1  Bits/word
 0     0      5
 0     1      6
 1     0      7
 1     1      8
Data is loaded into the transmit data register from DB0-DB7 with the
help of transmit data strobe (TDS) signal.
But if TDS pulse does not come, the next transmitted character is
extracted from the transmit SYN register and the SYN character
transmitted signal (SCT) is set.
The transmit buffer empty (TBMT) signal is used for requesting the
next character from the DTE.
The serial output data appears on the transmit serial out (TSO) pin.
The bit rate of the receiver clock signal (RCP) is adjusted as per
requirement and the desired SYN character is loaded into the receive
SYN register from DB0-DB7 with the help of receive SYN strobe (RSS).
When the receiver reset input goes from high to low, the receiver is
placed in the search mode, in which serially received data is checked
bit by bit to find the SYN character.
After clocking each bit into the receive shift register, its contents are
compared with those of the receive SYN register. If they are identical,
a SYN character has been found and the SYN character received
(SCR) output is set.
This character is transferred into the receiver buffer register and the
receiver is placed into the character mode.
In this section we are going to discuss the most widely used
serial interface, RS-232.
To specify the exact nature of interface between the DTE and DCE
following characteristics are used:
1. Mechanical
2. Electrical
3. Functional
4. Procedural
For example, when the DTE (terminal) asserts a request to send, the
DCE (modem) replies with a clear to send. Similar action-reaction pairs
exist for the other circuits as well.
1. Data group
2. Control group
3. Timing group
4. Ground
Table 3.6 shows the RS-232 signals divided into the four
categories. For example:
CC (circuit 107), DCE Ready, from the DCE: indicates the DCE is ready
to operate.
CD (circuit 108.2), DTE Ready, from the DTE: indicates the DTE is ready
to operate.
Other rows indicate that the DCE is receiving a carrier on the channel
line and that the DCE is receiving data.
These are the lines which actually carry the message bits between
the DTE and DCE ends. The transmitted data line carries a signal from
the DTE to the DCE, and the received data line carries a signal from
the DCE to the DTE. The data is transmitted serially on these lines.
The secondary lines are provided to transmit and receive data for
those applications in which there are two channels in each direction.
Out of the two, the first channel is a primary high-speed,
high-performance channel and the second channel is used for carrying
less critical messages, such as how many errors have been detected or
the condition of the data link. Most applications do not use these
lines, but some DTE and DCE equipment can make use of them.
Control lines
Out of the nine primary control lines, six are for the DCE to DTE
direction and the remaining three are for the DTE to DCE direction.
Timing lines
Ground
As seen from Table 3.6 there are only two ground lines, namely
signal ground and protective ground. The protective ground is
connected to the chassis of the equipment. It provides protection
to the user against shocks.
The control lines for handshaking are request to send (RTS), clear to
send (CTS),data set ready (DSR) and data terminal ready (DTR).
Here “data set” is a communication box which acts as DCE and “data
terminal” is the computer or terminal which is a DTE.
The other three control lines are “ring indicator”, “received line signal
detector” and the “signal quality detector”.
The received line signal detector shows the presence of data coming
into the DCE from the telephone lines. The DCE also indicates, via the
"signal quality detector", whether the quality of the received signal is
adequate for low-error performance.
If a telephone line is being used, the DCE can tell the DTE that
someone is calling: when it detects the bell-ringing signal on the line,
it uses the "ring indicator" to show this.
3. It is suitable for low-baud-rate, slow systems, typically up to
20,000 baud.
5. The standard voltage levels for mark and space make it possible to
reduce the interference due to noise.
2. The noise interference for single-ended signals is very high. To
improve noise immunity it is necessary to have the mark and space
voltages close to ±25 volts. This is difficult in modern digital
systems, which use a 5-volt supply.
3. The highest baud rate is 20,000 baud for distances less than 50 ft.
This is too slow.
Data circuits
The Centronics interface connects the computer to the printer through
eight parallel data lines and the control/status lines STB, ACK, BUSY,
PO, SLCT, AF, PRIME and ERROR.
There are eight parallel data lines, d0 to d7. All of them are
unidirectional lines taking data from computer to printer.
1. Strobe (STB)
2. Autofed (AF)
4. Select (SLCT) .
If (AF) is low (active), then the printer responds to the carriage return
character by performing a carriage return and a line feed.
3. Prime (PRIME)
This line is also called an initialize line. It is an active-low line
used by the computer to clear the printer's memory, which includes the
printer programming and the print buffer.
4. Select (SLCTIN)
When used, a low signal on this line should be seen by the printer
before it accepts data from the computer.
These lines run from the printer to the computer. Via these lines, the
printer tells the computer what the printer is doing.
1. Acknowledge (ACK)
The printer asserts this line after it receives an active STB signal
from the computer. It tells the computer that the printer is ready to
receive another character from the computer.
2. Busy
4. Select (SLCT)
This is an active high line which indicates whether the printer is selected
or not.
5. Error (ERROR)
Introduction
Definition
The time Ts is called the sampling time (or sampling period), and its
reciprocal fs = 1/Ts is called the sampling frequency (or sampling
rate). This ideal form of sampling is called instantaneous sampling.
Sampling Theorem
(i.e.) fs ≥ 2 fm
Where,
fs = sampling frequency
fm = highest frequency component of the message signal
Nyquist rate
fs = 2 fm samples/sec
Nyquist interval
Definition
Types of PAM
Depending upon the shape of the pulse, there are two types of PAM:
1. Natural PAM
2. Flat-top PAM
i. Flat-top PAM is the most popular and widely used type. The reason
for using flat-top PAM is that during transmission noise interferes
with the top of the transmitted pulses, and this noise can easily be
removed if the PAM pulse has a flat top.
The modulating signal x(t) is passed through a LPF which will band
limit this signal to fm. Band limiting is necessary to avoid the “aliasing
effect” in the sampling process.
The pulse train generator generates a train of pulses with frequency fs
such that fs ≥ 2fm; thus the Nyquist criterion is satisfied.
A sample and hold circuit consists of two field effect transistor (FET)
switches and a capacitor
During this period, the capacitor C quickly charges up to a voltage
equal to the instantaneous sample value of the incoming signal x(t).
Now the sampling switch is open and capacitor C holds the charge.
The discharge switch is then closed by a pulse applied to the gate of Q2.
Due to this, capacitor C is discharged to zero volts.
The discharge switch is then opened, and thus the capacitor has no
voltage. This process is repeated, and the flat-top PAM signal generated
is as shown in figure 3.53.
Definition
3.19.1 PWM-Generator
The figure 3.54 shows the circuit that generates the PWM signal.
The amplitude and width of the pulses are kept constant, but the
position of each pulse is varied in accordance with the amplitude of the
sampled values of the modulating signal. The circuit shown in figure
3.54 is also used to generate the PPM signal.
The PPM-Signal can be generated from PWM signal.
To generate PPM signal, the PWM pulse obtained at the output
of the comparator are used as the trigger input to monostable
multivibrator.
The monostable is triggered on the negative edge of the PWM signal. The
output of the monostable goes high, remains high for a fixed period, and
then returns low.
The highest amplitude of the modulating signal produces the largest
PWM pulse width; therefore the PPM pulse moves farthest to the right.
The lowest amplitude of the modulating signal produces the narrowest
PWM pulse; therefore the PPM pulse moves farthest to the left.
It is also more immune to noise than PAM.
PAM is used as an intermediate form of modulation with PSK, QAM
and PCM, although it is seldom used by itself. PWM and PPM are used
in special-purpose communication systems, mainly for the military, but
are seldom used in commercial digital transmission systems. PCM is by
far the most prevalent form of pulse modulation.
Introduction
The sample and hold circuit periodically samples the analog input
signal and converts those samples to multilevel PAM-signal.
Quantizer
Encoder
For transmission, the Gray code is preferred, because it has only a
1-bit change for each step between adjacent quantized levels; a
single-bit error therefore produces only a small noise component.
Transmission path
The PCM-signals are transmitted for long distance with the help of
regenerative repeaters.
i. Equalizations
•	 The DAC performs the opposite of the ADC operation and produces the
output signal.
The hold circuit produces the PAM signal output for the analog input
signal.
The PAM signal passes through the LPF to recover the analog signal
x(t). The low pass filter is called the reconstruction filter, and its
cut-off frequency is equal to the message signal bandwidth fm.
3.21.4 Sampling
1. Natural sampling
2. Flat-top sampling.
Natural sampling
The natural-sampling figure shows (a) the input waveform, (b) the sample
waveform and (c) the output waveform.
Aperture error is compensated by using equalizers.
Figure 3.59 shows flat-top sampling: (a) input waveform, (b) sample
waveform, (c) output waveform.
Figure 3.60 Sample-and-hold circuit: the sampling pulse drives FET
switch Q1 between voltage followers Z1 and Z2, the hold capacitor stores
the analog sample, and the output is the PAM signal.
The FET acts as a simple analog switch. When Q1 is turned ON, it
offers a low-impedance path to hold the analog sample voltage across
capacitor C1. The time during which Q1 is ON is called the aperture
time or acquisition time.
While the sample pulse is present, Q1 is on and the capacitor charges
during the aperture time; when Q1 turns off for the conversion time, the
capacitor slowly discharges, producing droop in the output waveform.
Figure 3.61 Input and Output waveforms
It is important that the output impedance of voltage follower Z1 and the
resistance of Q1 be as small as possible, so that the RC charging time
constant of the capacitor is kept very short, allowing the capacitor to
charge or discharge rapidly during the short acquisition time.
The inter-electrode capacitance between the gate and drain of the
FET is placed in series with C1 when the FET is OFF; hence it acts
as a capacitive voltage-divider network.
The gradual discharge across the capacitor during the conversion time is
called droop; it is caused by the capacitor discharging through its own
leakage resistance and the input impedance of voltage follower Z2.
Hence the input impedance of Z2 and the leakage resistance of C1 should
be as high as possible.
The voltage followers Z1 and Z2 isolate the sample and-hold circuit
from the input and output circuitry.
Sampling Rate:
The output spectrum of a sampled audio signal contains the audio
component at fa and sideband components at fs − fa, fs + fa,
2fs − fa, … around the harmonics of the sampling frequency fs.
The binary codes used for PCM are n-bit code words, where n may be
any positive integer greater than 1. The codes currently used for PCM
are sign-magnitude codes, where the most significant bit (MSB) is the
sign bit and the remaining bits are used for magnitude.
3.21.5 Resolution
PCM code   Voltage
  111       +3 V
  110       +2 V
  101       +1 V
  100        0 V
  000        0 V
  001       −1 V
  010       −2 V
  011       −3 V
Samples are taken at the times t1, t2 and t3; the three sample times
produce the PCM codes 110, 001 and 111 (the sample of about +2.6 V is
quantized to +3 V).
Figure 3.64 (a) Analog input signal, (b) Sample pulse, (c) PAM
signal, (d) PCM code
As shown in figure 3.64, each sample voltage is rounded off
(quantized) to the closest available level and then converted to its
corresponding PCM code.
The PAM signal quantized in the transmitter is the same PAM signal
reproduced in the receiver. Therefore, any round-off errors in the
transmitted signal are reproduced when the code is converted back to
analog in the receiver. This error is called the quantization error (Qe).
The quantization error is equivalent to additive white noise as it al-
ters the signal amplitude. This quantization error is also called quan-
tization noise (Qn).
In Figure 3.64 the first sample occurs at time t1, when the
input voltage is exactly +2 V. The PCM code that corresponds to +2 V
is 110, so there is no quantization error. Sample 2 occurs at time t2,
when the input voltage is −1 V. The PCM code that corresponds
to −1 V is 001, and again there is no quantization error.
To determine the PCM code for a particular sample voltage,
simply divide the sample voltage by the resolution, convert the
quotient to an n-bit binary code, and then add the sign bit.
In figure 3.64, for sample 3, the voltage at t3 is approximately
+2.6 V. There is no PCM code for +2.6 V; therefore, the magnitude of
the sample is rounded off to the nearest valid level, +3 V, giving the
folded PCM code 111. The rounding-off process results in a quantization
error of 0.4 V.
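The divide-round-and-sign procedure can be sketched for the 3-bit folded code of Figure 3.64 (the function name and the sign convention, 1 = positive, follow the figure; this is an illustration, not a standard library routine):

```python
def pcm_code(voltage, resolution=1.0, magnitude_bits=2):
    """3-bit folded sign-magnitude PCM code: the MSB is the sign bit
    (1 = positive) and the remaining bits are the rounded magnitude.
    Returns the code string and the quantization error."""
    sign = "1" if voltage >= 0 else "0"
    level = round(abs(voltage) / resolution)   # round to the nearest level
    qe = abs(voltage) - level * resolution     # quantization error (volts)
    return sign + format(level, f"0{magnitude_bits}b"), qe

print(pcm_code(+2.0))   # ('110', 0.0)
print(pcm_code(-1.0))   # ('001', 0.0)
print(pcm_code(+2.6))   # ('111', ...)  the error magnitude is 0.4 V
```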
3.21.6 Quantization
Figure 3.65 shows that, for a linear analog input signal (i.e. a
ramp signal), the quantized signal is a staircase function. Thus the
maximum quantization error is the same for any magnitude of input
signal, as shown in the figure below.
The step size of the staircase signal is constant for linear Quantizer.
Figure 3.65 Quantization: the analog input Vin is mapped to a staircase
output Vout.
Uniform Quantization
In uniform Quantization the ‘step size’ is same (or) constant
throughout the input range.
There are two types of uniform Quantizer
1. Symmetric Quantizer of midtread type
2. Symmetric Quantizer of midrise type.
Midtread Quantizer
In midtread Quantizer, the origin lies in the middle of a tread of stair
case like graph.
The figure 3.66 below shows the corresponding input-output
characteristics of a uniform Quantizer midtread type.
Midrise Quantizer
In midrise Quantizer, the origin lies in the middle of a rising part of
the stair case like graph.
The figure 3.66 below shows the corresponding input-output
characteristics of a uniform Quantizer midrise type.
Non-Uniform Quantization:
In non-uniform Quantization, a Quantizer characteristic is
non-linear.
If the step size is not constant but variable, dependent on the
amplitude of the input signal, then the quantization is known as
non-uniform quantization.
The step size is varied according to signal level to keep signal to noise
ratio high.
The input-output characteristic of a uniform quantizer maps the input
X(nTs) to the output Xq(nTs): the output levels δ/2, 3δ/2, 5δ/2, 7δ/2
are spaced by the step size δ, the decision levels lie at 0, δ, 2δ, 3δ,
the maximum quantization error is δ/2, and inputs beyond the last
decision level fall into the overload region. In the midriser
characteristic the output levels are ±a, ±2a, ±3a, ±4a for inputs in
corresponding ranges of width a, with the origin at the middle of a
rising edge.
Quantization Noise
Dynamic range = Vmax / Vmin
Where
Vmin = the quantum value (resolution)
Vmax = maximum voltage magnitude
SQR = Resolution / Qe = Vlsb / (Vlsb / 2) = 2
For the maximum-amplitude input signal of 3 V (either 111 or 011),
the maximum quantization noise is again equal to the resolution
divided by 2, i.e. 0.5 V. Hence the SQR for a maximum input signal is
SQR = 3 / 0.5 = 6
in dB: SQR = 20 log 6 = 15.6 dB
SQR(dB) = 10 log [ (v²/R) / ((q²/12)/R) ]
where,
R = resistance (ohms)
v = rms signal voltage (volts)
q = quantization interval (volts)
v²/R = average signal power (watts)
(q²/12)/R = average quantization noise power (watts)
If the resistances are equal,
SQR(dB) = 10 log (v² / (q²/12)) = 10 log 12 + 20 log (v/q)
        ≈ 10.8 + 20 log (v/q)       (since log 12 = 1.079)
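The last two results can be checked numerically (symbols as defined above; the 10.8 dB constant is 10 log 12):

```python
import math

def sqr_db(v_rms: float, q: float) -> float:
    """SQR(dB) = 10 log10(v^2 / (q^2/12)) = 10 log 12 + 20 log(v/q)."""
    return 10 * math.log10(v_rms**2 / (q**2 / 12))

# Worst-case figure from the text: 3 V signal, maximum error q/2 = 0.5 V
print(round(20 * math.log10(3 / 0.5), 1))  # 15.6 (dB, i.e. 20 log 6)

# General formula, e.g. v = 2 V rms, q = 1 V: 10.8 + 20 log 2
print(round(sqr_db(2.0, 1.0), 1))          # 16.8
```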
Solved problems
1. For a PCM system with a maximum audio input frequency of
4kHz,determine the minimum sample rate and the alias frequency
produced if a 5kHz audio signal were allowed to enter the sample and
hold circuit.
Solution
Given that fa = 4 kHz
Audio signal frequency = 5 kHz
Using the Nyquist sampling theorem,
sampling rate fs ≥ 2 fa; since fa = 4 kHz,
fs = 8 kHz
Alias frequency = fs − (audio signal frequency entering the sample-and-hold circuit)
= 8 kHz − 5 kHz
= 3 kHz
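The same arithmetic in a two-line sketch:

```python
fa = 4000     # highest audio input frequency, Hz
f_in = 5000   # out-of-band tone entering the sample-and-hold, Hz

fs = 2 * fa          # minimum (Nyquist) sampling rate
alias = fs - f_in    # folded (alias) frequency
print(fs, alias)     # 8000 3000
```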
= (7.63 + 1) / (8 + 1) × 100 = (8.63 / 9) × 100 = 95.89%
The analog input signal is first band-limited to avoid the aliasing
effect and then applied as one input of the differentiator (or
subtractor). The previous sampled signal value, obtained from the
integrator, is applied as the other input of the differentiator.
Initially the integrator output is assumed to be zero.
The sampler produces the PAM signal, and then the analog-to-digital
converter produces the parallel binary bits.
These parallel binary bits are converted into serial PCM and then
transmitted through the channel.
nature
Once again the same process is repeated until all the samples are
transmitted.
Initially the hold circuit is at zero; the adder circuit adds the
current sample and the previous sample value to produce the analog
output.
•	 Initially, the up-down counter is at zero and the DAC is outputting
0 V. The first sample is taken, converted to a PAM signal, and compared
with zero volts.
• The steps change value at a rate equal to the clock frequency (sample
rate)
Slope overload noise occurs when the step size (d) is too small
compared to large variations in the input signal. Figure 3.72 below
shows this distortion in delta modulation.
•	 Slope overload occurs when the slope of the analog signal x(t)
exceeds the largest slope that the staircase approximation x'(t) can
follow.
Granular noise
• Granular noise occurs when the step size (d) is too large compared to
small variations in the input signal. That is for very small variations
in the input signal, the staircase signal is changed by large amount
‘d’ because of large step size.
• When the input signal is almost flat, the staircase signal x’(t) keeps
on oscillating by ± d around the signal.
•	 In the linear delta modulator the step size d is not variable; if it
is made variable, then both the slope overload distortion and the
granular noise can be controlled.
•	 In one type a discrete set of values is provided for the step size,
whereas in another type a continuous range of step-size variation is
provided.
If you compare this block diagram with that of the linear delta
modulator, you will find that, except for the counter being replaced
by the digital processor, the remaining blocks are identical.
Operation
•	 In response to the kth clock pulse, the processor generates a step
which is equal in magnitude to the step generated in response to the
previous (k−1)th clock pulse.
•	 If the direction of both steps is the same, then the processor
increases the magnitude of the present step by d. If the directions
are opposite, then the processor decreases the magnitude of the
present step by d.
For example, if two successive steps are in the same direction, the step
size grows as d(1) + d(1) = 2d.
•	 At the output of the low pass filter we get the original signal back.
3.25 Comparison of various pulse communication systems

S.No | Parameter | Pulse Amplitude Modulation | Pulse Width Modulation | Pulse Position Modulation
1 | Variable characteristic of the carrier pulse | Amplitude of the carrier pulse is varied by the modulating voltage | Width of the carrier pulse is varied by the modulating voltage | Position of the carrier pulse is varied by the modulating voltage
2 | Bandwidth of the transmission channel | Depends on the width of the pulse: BW ≥ 1/(2t), t = maximum width of the pulse | Depends on the rise time of the pulse: BW ≥ 1/(2tr), tr = rise time of the pulse | Depends on the rise time of the pulse: BW ≥ 1/(2tr), tr = rise time of the pulse
3 | Noise interference | Maximum | Minimum | Minimum
4 | Information is contained in | Amplitude variations | Width variations | Position variations
5 | Necessity of synchronization pulse | Not necessary | Not necessary | Necessary
6 | Complexity of generation and detection | Complex | Simple | Simple
7 | Transmitted power | Varies with the amplitude of the pulse | Varies with the width of the pulse | Constant
8 | Similarity with other modulation systems | Amplitude modulation | Frequency modulation | Phase modulation
9 | Output waveform | (see figure) | (see figure) | (see figure)
3.26 Comparison of Source Coding methods

S.No | Parameter | PCM | DM | ADM | DPCM
1 | Number of bits | Up to 16 bits per sample | One bit is used to encode one sample | One bit is used to encode one sample | Bits can be more than one but are less than PCM
2 | Levels, step size | The number of levels depends on the number of bits | Step size is fixed and cannot be varied | According to the signal variation, the step size varies (adapted) | A fixed number of levels is used
3 | Quantization error and distortion | Quantization error depends on the number of levels used | Slope overload distortion and granular noise are present | Quantization error is present but other errors are absent | Slope overload distortion and quantization noise are present
4 | Bandwidth of transmission channel | Highest bandwidth is required since the number of bits is high | Lowest bandwidth is required | Lowest bandwidth is required | Bandwidth required is lower than PCM
5 | Feedback | There is no feedback in transmitter or receiver | Feedback exists | Feedback exists | Feedback exists in the transmitter
6 | Complexity | System is complex | Simple | Simple | Simple
7 | Signal to noise ratio | Good | Poor | Better than DM | Fair
8 | Area of applications | Audio and video telephony | Speech and images | Speech and images | Speech and video
encoding.
Advantages:
(i) High noise immunity.
(ii) Private and secure communication is possible through the
use of encryption.
Disadvantages:
(i) Increased transmission bandwidth.
(ii) Increased system complexity.
Given
S/N = 1023
B = 50 kHz
Solution
Rmax = B log2 (1 + S/N) = 50000 log2 (1 + 1023)
= 50000 × 10 = 500000 bits/sec
(since 1024 = 2^10, log2 1024 = 10)
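The Shannon-Hartley computation, with the base-2 logarithm made explicit:

```python
import math

B = 50_000   # bandwidth, Hz
snr = 1023   # S/N as a power ratio

r_max = B * math.log2(1 + snr)  # Shannon-Hartley channel capacity
print(r_max)                    # 500000.0, since log2(1024) = 10
# Mistakenly using a base-10 logarithm here would give about 150515,
# which is not the capacity: the base of the logarithm matters.
```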
12. A PCM system uses sampling frequency of 16 k samples/s.
Then, find out the maximum frequency of the signal upto
which the signal can be perfectly reconstructed.
A continuous-time signal can be completely represented by its
samples and recovered back if the sampling frequency is at least twice
the highest frequency content of the signal, i.e.
fs ≥ 2 fm
Here,
fs = sampling frequency = 16 k samples/sec
Hence fm = fs/2 = 8 kHz: the signal can be perfectly reconstructed for
frequencies up to 8 kHz.
19. How many Hamming bits are required for the ASCII character
'D'?
Given
For the character 'D' the ASCII code (from the table) is 1000100,
so m = 7.
∴ Formula: 2^r ≥ m + r + 1 (r = number of redundant bits)
Let r = 1: 2^1 = 2 ≥ 9 (false)
Let r = 2: 2^2 = 4 ≥ 10 (false)
Let r = 3: 2^3 = 8 ≥ 11 (false)
Let r = 4: 2^4 = 16 ≥ 12 (true)
Hence the number of Hamming bits is r = 4.
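The trial-and-error search above is simply a loop over r (a sketch; the function name is ours):

```python
def hamming_bits(m: int) -> int:
    """Smallest r satisfying 2**r >= m + r + 1."""
    r = 1
    while 2**r < m + r + 1:
        r += 1
    return r

print(hamming_bits(7))  # 4  (7-bit ASCII character)
```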
20. Calculate odd and even parity bits for the EBCDIC character
'G'.
The hex value for the character 'G' is C7 (from the EBCDIC
table), whose binary equivalent is
C7 = 1100 0111 (8 bits)
The parity bit at the MSB position is given as p 1100 0111.
For odd parity: 011000111
For even parity: 111000111
21. Calculate the odd and even parity bits for the ASCII character
W.
Solution:
The hex value for the character W is 57 (from the ASCII table),
whose binary equivalent is
57 = 101 0111 (7 bits)
During transmission, p101 0111 will be transmitted. If odd
parity is used, the value of p will be 0, since the number of 1's in
101 0111 is 5 (odd). For even parity, the value of p will be 1,
i.e. to make an even number of 1's in 1101 0111.
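Both parity computations can be sketched with a single helper (the function name is ours):

```python
def parity_bit(bits: str, even: bool = True) -> str:
    """Parity bit to prepend at the MSB position."""
    ones = bits.count("1")
    if even:
        return "1" if ones % 2 else "0"   # make the total count of 1s even
    return "0" if ones % 2 else "1"       # make the total count of 1s odd

ebcdic_g = "11000111"   # 'G' = C7 hex, five 1s
print(parity_bit(ebcdic_g, even=False) + ebcdic_g)  # 011000111 (odd parity)
print(parity_bit(ebcdic_g, even=True) + ebcdic_g)   # 111000111 (even parity)

ascii_w = "1010111"     # 'W' = 57 hex, five 1s
print(parity_bit(ascii_w, even=False), parity_bit(ascii_w, even=True))  # 0 1
```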
REVIEW QUESTIONS
PART – A
1. What do you mean by non-linear encoding in PCM system?
2. What is the advantage of differential PCM?
3. What are the types of data transmission?
4. Mention the usage of Scrambler and Descrambler
5. Differentiate between error detection and correction
6. Find the minimum sampling frequency for a signal having
frequency components from 10 MHz to 10.2 MHz, in order to avoid
aliasing.
7. What are the types of pulse modulation systems?
8. List the methods for error correction.
9. What is pulse stuffing?
10. What is meant by fading?
11. Mention any two error control codes.
12. Define sampling theorem.
13. Determine the Nyquist rate for analog input frequency of
(a) 4KHz (b) 10 KHz.
14. List any two data communication standard organization.
15. What is the need for error control coding?
16. What is meant by differential pulse code modulation?
17. What are the advantages of digital transmission?
18. What is data terminal equipment? Give examples.
19. What is forward error correction?
20. What is meant by ASCII code?
21. Which error detection technique is simple and which one is more
reliable?
22. Give some of the alternative names for data communication codes.
23. What are the two types of noises present in Delta modulation
system?
24. Explain why the quantization noise cannot be removed
completely in PCM. How do you reduce this noise?
PART – B
1. With the neat block diagram, explain the concept of UART
transceiver operation.
2. What are the parallel interfaces? Describe in detail about
centronics parallel interfaces.
3. With block diagram explain the PCM transmitter and receiver.
4. Describe delta modulation system. What are its limitations? How
can they be overcome?
5. (i). Explain any two data communication codes presently used
for character encoding.
(ii). Give brief notes on error detection.
6. With neat block diagram explain the data communication
hardware.
7. Define PWM and explain one method of generating PWM.
8. Describe the processing steps to convert a k-bit message word
to an n-bit code word (n > k). Introduce an error and demonstrate how
the error can be corrected, with an example.
9. Draw the block diagram and explain the principle of operation
of a PCM system. A binary channel with bit rate = 36000 bits/
sec is available for PCM voice transmission. Find the number of
bits per sample, the number of quantization levels and the sampling
frequency, assuming the highest frequency component of the voice signal is
3.2 KHz.
10. (i) Write a note on data communication codes.
(ii) Explain serial and parallel interfaces in detail.
11. Explain in detail about error detection and correction.
12. Explain the standard organization for data communication.
13. Describe the mechanical, electrical and functional
characteristics of the RS-232 interface.
14. Draw the block diagram and describe the operation of a del-
ta modulator. What are its advantages and disadvantages
compared to a PCM system?
15. Draw the transmitter and receiver block diagram of differential
PCM and describe its operation.
16. The PCM system has the following parameters, maximum analog
input frequency is 4KHz, maximum decoded voltage at the re-
ANALOG AND DIGITAL COMMUNICATION
always 1
2. The sun does not rise in the east:
Here uncertainty is high, because there is maximum information,
as the event is not possible.
(When the base of the logarithm is e, the unit is the nat, and when the base is 10, the
unit is the Hartley or decit. The use of such units in the present case is analogous to the unit
radian used in angle measure and the decibel used in connection with
power ratios.)
The use of base 2 is especially convenient when binary PCM is
employed. If the two possible binary digits (bits) may occur with equal
likelihood, each with probability 1/2, then the correct identification
of a binary digit conveys an amount of information I = log2 2 = 1 bit.
In the past, the term bit was used as an abbreviation for the phrase
binary digit. When there is uncertainty whether the word bit is
intended as an abbreviation for binary digit, the term binit is used.
Assume there are M equally likely and independent messages with
M = 2^N, N an integer. In this case the information in each message
is
I = log2 M = log2 2^N = N log2 2 = N bits
To identify each message by a binary PCM code word, the number of
binary digits required for each of the 2^N messages is also N. Hence,
in this case, the information in each message, as measured in bits, is
numerically the same as the number of binits needed to encode the
messages.
When pk = 1, only one possible message is allowed. In this instance,
since the receiver knows the message, there is really no need for
transmission. We find that I = log2 1 = 0. As pk decreases from 1
to 0, Ik increases monotonically, going from 0 to infinity. Therefore, a
greater amount of information has been conveyed when the receiver
correctly identifies a less likely message.
When two independent messages mk and mj are correctly identified, we
can readily prove that the amount of information conveyed is the sum
of the information associated with each message individually.
Therefore, we conclude that the individual information amounts are
Ik = log2 (1/pk)
Ij = log2 (1/pj)
Problem 1
A source produces one of four possible symbols
during each interval, having probabilities p1 = 1/2, p2 = 1/4, p3 = p4 = 1/8.
Obtain the information content of each of these symbols.
Solution
We know that the information content of each symbol is
given as,
Ik = log2 (1/pk)
Thus we can write
I1 = log2 (1/p1) = log2 [1/(1/2)] = log2 2 = 1 bit
I2 = log2 (1/p2) = log2 [1/(1/4)] = log2 4 = 2 bits
I3 = log2 (1/p3) = log2 [1/(1/8)] = log2 8 = 3 bits
I4 = log2 (1/p4) = log2 [1/(1/8)] = log2 8 = 3 bits
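As a quick check of Ik = log2(1/pk), the four values above can be computed directly (an illustrative sketch, not part of the text):

```python
import math

# I_k = log2(1/p_k) for the four symbols of Problem 1
probs = [1/2, 1/4, 1/8, 1/8]
info = [math.log2(1 / p) for p in probs]
print(info)  # [1.0, 2.0, 3.0, 3.0] (bits)
```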
Problem 2
Calculate the amount of information, if it is given that pk = 1/2.
Solution
The amount of information is
Ik = log2 (1/pk)
= [log10 (1/pk)] / log10 2 = log10 2 / log10 2 = 1 bit
or
Ik = log2 [1/(1/2)] = log2 2 = 1 bit
Problem 3
Calculate the amount of information, if binary digits occur with
equal likelihood in a binary PCM system.
Solution
We know that in binary PCM there are 2 binary levels, i.e., 1 or 0.
Therefore the probabilities are
P1 (0 level) = P2 (1 level) = 1/2
Here the amount of information content is given as,
I1 = log2 [1/(1/2)] = log2 2 = log10 2 / log10 2 = 1 bit
I2 = log2 [1/(1/2)] = log2 2 = log10 2 / log10 2 = 1 bit
I1 = I2 = 1 bit
Thus, the correct identification of the binary digits in binary PCM
carries 1 bit of information.
I1 = log2 (1/p1)
• Since there are p1 L occurrences of message m1 (in a long sequence of L messages), the total information due to all
occurrences of m1 will be,
I1(total) = p1 L log2 (1/p1)
I2(total) = p2 L log2 (1/p2), and so on.
Entropy, H = I(total)/L
H = Σ (k=1 to M) pk log2 (1/pk)
• Since pk = 1 (a certain message), the above equation becomes,
H = Σ (k=1 to M) log2 (1) = Σ (k=1 to M) [log10 (1) / log10 2] = 0
• Likewise, each term pk log2 (1/pk) on the right-hand side of
H = Σ (k=1 to M) pk log2 (1/pk)
approaches zero as pk → 0. Hence the entropy will be zero, i.e.,
H = 0
Therefore, entropy is zero for both a certain and a most rare (impossible) message.
Property 2
When pk = 1/M, all M symbols are equally likely. For such a
source the entropy is given by H = log2 M.
Proof
We know that the probability of each of M equally likely
messages is
P = 1/M
• This probability is the same for all M messages, i.e.,
P1 = P2 = P3 = ... = PM = 1/M ... (1)
H = (1/M) log2 M + (1/M) log2 M + ... + (1/M) log2 M   (M terms)
= M × (1/M) log2 M = log2 M
Property 3
The upper bound on entropy is given as Hmax ≤ log2 M. Here 'M' is the
number of messages emitted by the source.
Proof
• To prove the above property, a property of the natural logarithm is
used. Let {pk} be the source probabilities and {qk} be any other probability
distribution on the same alphabet. Since log2 α = (ln α)(log2 e), we can write
Σ (k=1 to M) pk log2 (qk/pk) = log2 e × Σ (k=1 to M) pk ln (qk/pk)
SOURCE CODING AND ERROR CONTROL CODING
Using the inequality ln (qk/pk) ≤ (qk/pk) − 1, the sum becomes
Σ (k=1 to M) pk log2 (qk/pk) ≤ log2 e [ Σ (k=1 to M) qk − Σ (k=1 to M) pk ]
• Note that Σ (k=1 to M) qk = 1 as well as Σ (k=1 to M) pk = 1.
• Hence the above equation becomes,
Σ (k=1 to M) pk log2 (qk/pk) ≤ 0
Now consider qk = 1/M for all k. That is, all symbols in the alphabet
are equally likely.
Then the above equation becomes,
Σ (k=1 to M) pk [log2 qk + log2 (1/pk)] ≤ 0
∴ Σ (k=1 to M) pk log2 qk + Σ (k=1 to M) pk log2 (1/pk) ≤ 0
∴ Σ (k=1 to M) pk log2 (1/pk) ≤ − Σ (k=1 to M) pk log2 qk = Σ (k=1 to M) pk log2 (1/qk)
Replacing qk = 1/M in the above equation,
Σ (k=1 to M) pk log2 (1/pk) ≤ Σ (k=1 to M) pk log2 M = log2 M × Σ (k=1 to M) pk
We know that Σ (k=1 to M) pk = 1; hence the above equation becomes,
Σ (k=1 to M) pk log2 (1/pk) ≤ log2 M
H(X) ≤ log2 M
Hmax(X) = log2 M
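The bound Hmax = log2 M can be checked numerically; this sketch (names and the alphabet size are illustrative) verifies that a uniform distribution attains the bound while an arbitrary one stays below it:

```python
import math
import random

# Numeric check of the bound H <= log2 M (Property 3)
def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

M = 8
uniform = [1 / M] * M                  # equally likely symbols attain the bound
random.seed(0)
raw = [random.random() for _ in range(M)]
other = [r / sum(raw) for r in raw]    # any other distribution stays below it
print(entropy(uniform), entropy(other) <= math.log2(M))  # 3.0 True
```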
Problem 1
In binary PCM, if '0' occurs with probability 1/4 and '1' occurs with
probability equal to 3/4, then calculate the amount of information
carried by each binit.
Solution
I(xi) = log2 [1/P(xi)]
With P(x1) = 1/4,
I(x1) = log2 4 = log10 4 / log10 2 = 2 bits
And with P(x2) = 3/4,
I(x2) = log2 (4/3) = 0.415 bits
Here it may be observed that binary '0' has probability 1/4 and
carries 2 bits of information,
whereas binary '1' has probability 3/4 and carries 0.415 bits
of information.
Thus, this reveals the fact that if the probability of occurrence is less, then
the information carried is more, and vice versa.
Problem 2
If there are M equally likely and independent symbols, then prove
that the amount of information carried by each symbol will be
I(xi) = N bits, where M = 2^N and N is an integer.
Solution
Since it is given that all the M symbols are equally likely and
independent, the probability of occurrence of each symbol must be
1/M.
We know that the amount of information is given as,
I(xi) = log2 [1/P(xi)] ... (1)
P(xi) = 1/M
Hence, equation (1) will be,
I(xi) = log2 M ... (2)
I(xi) = log2 2^N = N log2 2   [since log2 2 = 1]
= N bits
Hence, the amount of information carried by each symbol will be N
bits. We know that M = 2^N.
This means that there are N binary digits (binits) in each symbol.
This indicates that when the symbols are equally likely and coded with
an equal number of binary digits (binits), then the information carried by
each symbol (measured in bits) is numerically the same as the number of
binits used for each symbol.
Problem 3
Prove the statement: "if a receiver knows the
message being transmitted, the amount of information carried will be
zero".
Solution
Here it is stated that the receiver "knows" the message. This means
that only one message is transmitted. Thus, the probability of occurrence of
this message will be P(xi) = 1. This is because there is only one message and its
occurrence is certain (the probability of a certain event is 1). The amount of
information carried by this type of message will be,
I(xi) = log2 [1/P(xi)] = log10 [1/P(xi)] / log10 2
Substituting P(xi) = 1,
I(xi) = log2 1 = 0
or
I(xi) = 0 bits
This proves the statement that if the receiver knows the message, the
amount of information carried will be zero.
Also, as P(xi) is decreased from 1 to 0, I(xi) increases monotonically
from 0 to infinity. This shows that the amount of information conveyed
is greater when the receiver correctly identifies less likely messages.
Problem 4
Verify the expression I(xi xj) = I(xi) + I(xj) for independent xi and xj.
Solution
If xi and xj are independent, then we know that
P(xi xj) = P(xi) P(xj)
Also, I(xi xj) = log2 [1/P(xi xj)]
I(xi xj) = log2 [1/(P(xi) P(xj))]
I(xi xj) = log2 [1/P(xi)] + log2 [1/P(xj)]
I(xi xj) = I(xi) + I(xj)
Problem 5
A discrete source emits one of five possible symbols once every millisecond
with probabilities 1/2, 1/4, 1/8, 1/16 and 1/16 respectively. Determine
the source entropy and information rate.
Solution
We know that the source entropy is given as
H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)] bits/symbol
(or) H(X) = (1/2) log2 2 + (1/4) log2 4 + (1/8) log2 8 + (1/16) log2 16 + (1/16) log2 16
= 1/2 + 1/2 + 3/8 + 1/4 + 1/4 = 15/8
(or) H(X) = 1.875 bits/symbol
The symbol rate is r = 1/Tb = 1/10^-3 = 1000 symbols/sec.
Therefore, the information rate is
R = r H(X) = 1000 × 1.875 = 1875 bits/sec
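The entropy and information rate of Problem 5 can be reproduced in a few lines (an illustrative sketch):

```python
import math

# Problem 5 numbers: one of five symbols every millisecond (Tb = 1 ms)
probs = [1/2, 1/4, 1/8, 1/16, 1/16]
H = sum(p * math.log2(1 / p) for p in probs)   # source entropy, bits/symbol
r = 1000                                        # symbol rate, symbols/sec
R = r * H                                       # information rate, bits/sec
print(H, R)  # 1.875 1875.0
```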
Problem 6
The probabilities of the five possible outcomes of an experiment
are given as
P(x1) = 1/2, P(x2) = 1/4, P(x3) = 1/8, P(x4) = P(x5) = 1/16
Determine the entropy and information rate if there are 16 outcomes
per second.
Solution
The entropy of the system is given as
H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)] bits/symbol
(or) H(X) = (1/2) log2 2 + (1/4) log2 4 + (1/8) log2 8 + (1/16) log2 16 + (1/16) log2 16 = 15/8
H(X) = 1.875 bits/outcome
The information rate is
R = r H(X) = 16 × 1.875 = 30 bits/sec
Problem 7
An analog signal is band limited to fm Hz and sampled at the Nyquist
rate. The samples are quantized into four levels. Each level represents
one symbol; thus there are four symbols. The probabilities of these
four levels (symbols) are P(x1) = P(x4) = 1/8 and P(x2) = P(x3) = 3/8. Obtain the
information rate of the source.
Solution
We are given four symbols with probabilities P(x1) = P(x4) = 1/8 and
P(x2) = P(x3) = 3/8. The average information H(X) (or entropy) is expressed as,
H(X) = P(x1) log2 [1/P(x1)] + P(x2) log2 [1/P(x2)] + P(x3) log2 [1/P(x3)] + P(x4) log2 [1/P(x4)]
= (1/8) log2 8 + (3/8) log2 (8/3) + (3/8) log2 (8/3) + (1/8) log2 8 ≈ 1.8 bits/symbol
The signal is sampled at the Nyquist rate of 2fm samples/sec, so the information rate is
R = 2fm × 1.8 = 3.6 fm bits/sec.
In this example there are four levels. These four levels may be
coded using binary PCM as shown in Table 6.1.
S.No   Symbol or level   Probability   Binary digits
1      Q1                1/8           00
2      Q2                3/8           01
3      Q3                3/8           10
4      Q4                1/8           11
Table 6.1
Hence, two binary digits (binits) are required to send each
symbol, and symbols are sent at the rate of 2fm symbols/sec. Therefore, the transmission
rate of binary digits will be: binit rate = 2 binits/symbol × 2fm
symbols/sec = 4 fm binits/sec. Because one binit is capable of conveying
one bit of information, the above coding scheme is capable of
conveying 4 fm bits of information per second. But in this example, we
have obtained that we are transmitting only 3.6 fm bits of information per
second. This means that the information carrying ability of binary PCM
is not completely utilized by this transmission scheme.
4.3 SOURCE CODING TO INCREASE AVERAGE INFORMATION PER BIT
Let Nmin be the minimum value of the average code word length N. Then the coding efficiency of the
source encoder is defined as,
η = Nmin / N ... (2)
Since, by the source coding theorem, the minimum possible average code word length equals the source entropy H,
η = H / N ... (4)
Need
(i) If the probabilities of occurrence of all the messages are not
equally likely, then the average information or entropy is reduced,
and as a result the information rate is reduced.
(ii) This problem can be solved by coding the messages with
different numbers of bits.
NOTE
(i) Shannon–Fano coding is used to encode the messages
depending upon their probabilities.
(ii) This algorithm assigns fewer bits to
highly probable messages and more bits to rarely
occurring messages.
Procedure
Step 1: List the source symbols in order of decreasing probability.
Step 2: Partition the set into two sets that are as close to equiprobable
as possible, and assign 0 to the upper set and 1 to the lower
set.
Step 3: Continue this process, each time partitioning the sets with as
nearly equal probabilities as possible, until further partitioning is not
possible.
Problem 1
Apply the Shannon–Fano coding procedure to a source whose symbols have
probabilities P1 = 0.4, P2 = 0.2, P3 = 0.1, P4 = 0.2, P5 = 0.1, and find the code efficiency.
Solution
Given probabilities:
P1 = 0.4, P2 = 0.2, P3 = 0.1, P4 = 0.2, P5 = 0.1
Step 1: Arranging the symbols in decreasing order of probability:
Symbols   Probabilities
x1        0.4
x2        0.2
x3        0.2
x4        0.1
x5        0.1
Step 2: Construct the code and compute the average code word length
N = Σ (k=1 to 5) Pk nk

Symbol   Probability   Stage 1   Stage 2   Stage 3   Code word   No. of bits per message (nk)
x1       0.4           0                             0           1
x2       0.2           1         0         0         100         3
x3       0.2           1         0         1         101         3
x4       0.1           1         1         0         110         3
x5       0.1           1         1         1         111         3
Table 6.3

N = Σ (k=1 to 5) Pk nk = (0.4 × 1) + (0.2 × 3) + (0.2 × 3) + (0.1 × 3) + (0.1 × 3) = 2.2 bits/symbol
(ii) Entropy
H = Σ (k=1 to 5) Pk log2 (1/Pk)
= P1 log2 (1/P1) + P2 log2 (1/P2) + P3 log2 (1/P3) + P4 log2 (1/P4) + P5 log2 (1/P5)
= 0.4 log2 (1/0.4) + 0.2 log2 (1/0.2) + 0.2 log2 (1/0.2) + 0.1 log2 (1/0.1) + 0.1 log2 (1/0.1)
= (0.4 × 1.3219) + (0.2 × 2.3219) + (0.2 × 2.3219) + (0.1 × 3.3219) + (0.1 × 3.3219)
= 0.52876 + 0.46439 + 0.46439 + 0.33219 + 0.33219
= 2.12192 bits/symbol
(iii) Efficiency
η = H/N = 2.12192/2.2 = 0.96450
η = 96.45%
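The partitioning procedure can be sketched as below (a minimal illustration; ties in the split may produce a different code table than Table 6.3, but the average length N and the efficiency come out the same):

```python
import math

# Minimal Shannon-Fano sketch for the probabilities of Problem 1.
# Symbols must be listed in decreasing order of probability.
def shannon_fano(symbols):
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    best_diff, split = None, 1
    for i in range(1, len(symbols)):          # find the most equiprobable split
        upper = sum(p for _, p in symbols[:i])
        diff = abs(2 * upper - total)
        if best_diff is None or diff < best_diff:
            best_diff, split = diff, i
    code = {s: "0" + c for s, c in shannon_fano(symbols[:split]).items()}
    code.update({s: "1" + c for s, c in shannon_fano(symbols[split:]).items()})
    return code

syms = [("x1", 0.4), ("x2", 0.2), ("x3", 0.2), ("x4", 0.1), ("x5", 0.1)]
code = shannon_fano(syms)
N = sum(p * len(code[s]) for s, p in syms)
H = sum(p * math.log2(1 / p) for _, p in syms)
print(round(N, 2), round(H / N, 4))  # 2.2 bits/symbol, efficiency 0.9645
```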
Step 6: Start encoding with the last stage, which consists of exactly two
ordered probabilities. Assign 0 as the first digit in the code words
for all the source symbols associated with the first probability, and assign 1
to the second probability.
Step 7: Now go back and assign 0 and 1 to the second digit for the two
probabilities that were combined in the previous step, retaining
all assignments made in that stage.
Problem 1
A discrete memoryless source has 6 symbols x1, x2, x3, x4, x5, x6
with probabilities 0.30, 0.25, 0.20, 0.12, 0.08, 0.05 respectively. Construct
a Huffman code and calculate its efficiency; also calculate the redundancy of
the code.
Solution
Combining the two lowest probabilities at each stage, we can write the code
words for the respective probabilities as follows:
Xi   P(xi)/Stage I   Stage II   Stage III   Stage IV   Stage V
x1   0.30            0.30       0.30        0.45       0.55
x2   0.25            0.25       0.25        0.30       0.45
x3   0.20            0.20       0.25        0.25
x4   0.12            0.13       0.20
x5   0.08            0.12
x6   0.05
Message   Probability   Code word   Number of bits nk
x1        0.3           00          2
x2        0.25          01          2
x3        0.2           11          2
x4        0.12          101         3
x5        0.08          1000        4
x6        0.05          1001        4
Table 6.5
(iii) To find the efficiency η, we have to calculate the average code word length (N)
and the entropy (H).
N = Σ (k=1 to 6) Pk nk, where nk is the code word length
= (0.3 × 2) + (0.25 × 2) + (0.2 × 2) + (0.12 × 3) + (0.08 × 4) + (0.05 × 4)
= 2.38 bits/symbol
Entropy
H = Σ (k=1 to 6) Pk log2 (1/Pk)
= 0.30 log2 (1/0.30) + 0.25 log2 (1/0.25) + 0.20 log2 (1/0.20) + 0.12 log2 (1/0.12)
+ 0.08 log2 (1/0.08) + 0.05 log2 (1/0.05)
= 0.521 + 0.5 + 0.4643 + 0.367 + 0.2915 + 0.216
= 2.3598 bits of information/message
Efficiency
η = H/N = 2.3598/2.38 = 0.99
Redundancy
γ = 1 − η = 1 − 0.99 = 0.01
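The same construction can be sketched with Python's heapq (an illustrative sketch; tie-breaking may place merged nodes differently from the hand construction, but the average code word length and efficiency match):

```python
import heapq
import itertools
import math

# Huffman sketch for the Problem 1 source (0.30, 0.25, 0.20, 0.12, 0.08, 0.05)
def huffman(probs):
    counter = itertools.count()     # tie-breaker so heapq never compares dicts
    heap = [(p, next(counter), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two smallest probabilities
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

P = {"x1": 0.30, "x2": 0.25, "x3": 0.20, "x4": 0.12, "x5": 0.08, "x6": 0.05}
code = huffman(P)
N = sum(P[s] * len(code[s]) for s in P)
H = sum(p * math.log2(1 / p) for p in P.values())
print(round(N, 2), round(H / N, 2))  # 2.38 bits/symbol, efficiency 0.99
```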
Problem 2
A discrete memoryless source X has four symbols x1, x2, x3 and x4
with probabilities P(x1) = 1/2, P(x2) = 1/4, P(x3) = P(x4) = 1/8. Construct a
Shannon–Fano code; it has the optimum property that ni = I(xi) and the code
efficiency is 100%.
Solution
Given
P(x1) = 1/2, P(x2) = 1/4, P(x3) = P(x4) = 1/8, and ni = I(xi)
We know that I(xi) = log2 [1/P(xi)]
I(x1) = log2 [1/(1/2)] = log2 2 = log10 2 / log10 2 = 1
I(x2) = log2 [1/(1/4)] = log2 4 = 2
I(x3) = log2 [1/(1/8)] = log2 8 = 3
I(x4) = log2 [1/(1/8)] = log2 8 = 3
The entropy is
H(X) = P(x1) I(x1) + P(x2) I(x2) + P(x3) I(x3) + P(x4) I(x4)
= (1/2 × 1) + (1/4 × 2) + (1/8 × 3) + (1/8 × 3)
= 1/2 + 1/2 + 3/8 + 3/8
= 1.75 bits/message
The average code word length is
N = Σ (k=1 to M) Pk nk, or Σ (i=1 to M) P(xi) ni
= P(x1) n1 + P(x2) n2 + P(x3) n3 + P(x4) n4
= (1/2 × 1) + (1/4 × 2) + (1/8 × 3) + (1/8 × 3)
= 1.75 bits/symbol
Code efficiency
η = H(X)/N = 1.75/1.75 = 1
η = 100%
Problem 3
A DMS has five equally likely symbols. Construct a Shannon–
Fano code for X and calculate the efficiency of the code. Construct another
Shannon–Fano code and compare the results. Repeat for the Huffman
code and compare the results.
Solution
(i) A Shannon–Fano code [by choosing two approximately equi-
probable (0.4 versus 0.6) sets] is constructed as follows.
Entropy
H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)]
Here all five probabilities are the same (i.e., 0.2), so we can write,
H(X) = 5 × P(xi) log2 [1/P(xi)]
= 5 × 0.2 × log2 (1/0.2)
= 5 × 0.2 × [log10 (1/0.2) / log10 2]
H(X) = 2.32 bits/message
(ii) Another method for the Shannon–Fano code [by choosing another two
approximately equiprobable (0.6 versus 0.4) sets] is constructed as
follows.

Symbol   Probability   Stage 1   Stage 2   Stage 3   Code word   No. of bits per message (nk)
x1       0.2           0         0                   00          2
x2       0.2           0         1         0         010         3
x3       0.2           0         1         1         011         3
x4       0.2           1         0                   10          2
x5       0.2           1         1                   11          2

H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)] = 2.32 bits/message
The average code word length is
N = P(x1) n1 + P(x2) n2 + P(x3) n3 + P(x4) n4 + P(x5) n5
= (0.2 × 2) + (0.2 × 3) + (0.2 × 3) + (0.2 × 2) + (0.2 × 2)
= 0.4 + 0.6 + 0.6 + 0.4 + 0.4
= 2.4 bits/symbol
Coding efficiency (η)
η = H(X)/N = 2.32/2.4 = 0.967
η = 96.7%
Since the average code word length is the same as that for the code of
part (i), the efficiency is the same.
(iii) The Huffman code is constructed as follows.

xi   P(xi)/Stage I   Stage II    Stage III   Stage IV
x1   0.2 (01)        0.4 (1)     0.4 (1)     0.6 (0)
x2   0.2 (000)       0.2 (01)    0.4 (00)    0.4 (1)
x3   0.2 (001)       0.2 (000)   0.2 (01)
x4   0.2 (10)        0.2 (001)
x5   0.2 (11)

N = P(x1) n1 + P(x2) n2 + P(x3) n3 + P(x4) n4 + P(x5) n5
Here all probabilities have the same value (0.2), so
N = 0.2 × [n1 + n2 + n3 + n4 + n5]
= 0.2 × [2 + 3 + 3 + 2 + 2]
= 0.2 × 12
= 2.4 bits/symbol
The entropy and efficiency are also the same as those of the Shannon–Fano code,
due to the same code word length.
Entropy
H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)]
Here all five probabilities have the same value 0.2, so we can write,
H(X) = 5 × P(x1) log2 [1/P(x1)]
= 5 × 0.2 × log2 (1/0.2)
= 2.32 bits/message
Coding efficiency (η)
η = H(X)/N = 2.32/2.4 = 0.967
η = 96.7%
Problem 4
A discrete memoryless source (DMS) has five symbols x1, x2, x3, x4
and x5 with probabilities 0.4, 0.19, 0.16, 0.15 and 0.1 respectively. Construct a
Shannon–Fano code and a Huffman code for the source and compare their efficiencies.
Entropy
H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)]
= 0.4 log2 (1/0.4) + 0.19 log2 (1/0.19) + 0.16 log2 (1/0.16)
+ 0.15 log2 (1/0.15) + 0.1 log2 (1/0.1)
H(X) = 2.15 bits/symbol
Code efficiency (η)
η = H(X)/N = 2.15/2.25 = 0.956
η = 95.6%
The Huffman code is constructed as follows.
Xi   P(xi)/Stage I   Stage II     Stage III   Stage IV
x1   0.4 (1)         0.4 (1)      0.4 (1)     0.6 (0)
x2   0.19 (000)      0.25 (01)    0.35 (00)   0.4 (1)
x3   0.16 (001)      0.19 (000)   0.25 (01)
x4   0.15 (010)      0.16 (001)
x5   0.1 (011)
Entropy H(X)
The entropy H(X) of the Huffman code is the same as that for the Shannon–
Fano code.
H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)]
= 0.4 log2 (1/0.4) + 0.19 log2 (1/0.19) + 0.16 log2 (1/0.16)
+ 0.15 log2 (1/0.15) + 0.1 log2 (1/0.1)
H(X) = 2.15 bits/message
The average code word length is
N = P(x1) n1 + P(x2) n2 + P(x3) n3 + P(x4) n4 + P(x5) n5
= (0.4 × 1) + (0.19 × 3) + (0.16 × 3) + (0.15 × 3) + (0.1 × 3)
N = 2.2 bits/symbol
Code efficiency (η)
η = H/N = 2.15/2.2 = 0.977
η = 97.7%
Solution
Arranging the symbols in decreasing order of probability, we obtain the
Huffman code as follows.

Xi   P(Xi)/Stage I   Stage II     Stage III    Stage IV    Stage V    Stage VI
x6   0.3 (00)        0.3 (00)     0.3 (00)     0.3 (00)    0.4 (1)    0.6 (0)
x3   0.2 (10)        0.2 (10)     0.2 (10)     0.3 (10)    0.3 (00)   0.4 (1)
x2   0.15 (010)      0.15 (010)   0.2 (11)     0.2 (10)    0.3 (01)
x5   0.15 (011)      0.15 (011)   0.15 (010)   0.2 (11)
x7   0.1 (110)       0.1 (110)    0.15 (011)
x1   0.05 (1110)     0.1 (111)
x4   0.05 (1111)

The average code word length is
N = P(x1) n1 + P(x2) n2 + P(x3) n3 + P(x4) n4 + P(x5) n5 + P(x6) n6 + P(x7) n7
= (0.05 × 4) + (0.15 × 3) + (0.2 × 2) + (0.05 × 4) + (0.15 × 3) + (0.3 × 2) + (0.1 × 3)
N = 2.6 bits/symbol
Entropy H(X)
H(X) = Σ (i=1 to 7) P(xi) log2 [1/P(xi)]
= 0.05 log2 (1/0.05) + 0.15 log2 (1/0.15) + 0.2 log2 (1/0.2) + 0.05 log2 (1/0.05)
+ 0.15 log2 (1/0.15) + 0.3 log2 (1/0.3) + 0.1 log2 (1/0.1)
H(X) = 2.57 bits/message
Coding efficiency (η)
η = H/N = 2.57/2.6 = 0.9885
η = 98.85%
Problem 7
A discrete memoryless source has the alphabet given below.
Compute two different Huffman codes for this source; hence, for each of
the two codes, find
(i) The average code-word length.
(ii) The variance of the average code-word length over the
ensemble of source symbols.

Symbol        S0     S1     S2     S3     S4
Probability   0.55   0.15   0.15   0.10   0.05

Solution
The two different Huffman codes are obtained by placing the com-
bined probability as high as possible or as low as possible.
1. Placing the combined probability as high as possible
σ² = Σ Pk (nk − N)² = ... + 0.05 (3 − 1.9)²
= 0.99
2. Placing the combined probability as low as possible

Symbol   P(Xi)/Stage I   Stage II     Stage III   Stage IV
s0       0.55 (0)        0.55 (0)     0.55 (0)    0.55 (0)
s1       0.15 (11)       0.15 (11)    0.3 (10)    0.45 (1)
s2       0.15 (100)      0.15 (100)   0.15 (11)
s3       0.1 (1010)      0.15 (101)
s4       0.05 (1011)

(i) Average code-word length
N = Σ (k=0 to 4) Pk nk = 1.9
(ii) Variance
σ² = Σ Pk (nk − N)² = ... + 0.05 (4 − 1.9)²
= 1.29

Method                  Average code-word length   Variance
As high as possible     1.9                        0.99
As low as possible      1.9                        1.29
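The two variances can be reproduced from the code-word lengths (a sketch: len_low comes from the table above; len_high, taken as 1, 3, 3, 3, 3, is the assumed length set of the "as high as possible" construction, chosen here because it reproduces the book's 0.99):

```python
# Average length and length variance for the two Huffman codes of Problem 7
probs = [0.55, 0.15, 0.15, 0.10, 0.05]
len_high = [1, 3, 3, 3, 3]   # assumed lengths, combined probability placed high
len_low = [1, 2, 3, 4, 4]    # lengths read off the table above

def stats(lengths):
    N = sum(p * n for p, n in zip(probs, lengths))
    var = sum(p * (n - N) ** 2 for p, n in zip(probs, lengths))
    return N, var

print(stats(len_high))  # N = 1.9, variance ≈ 0.99
print(stats(len_low))   # N = 1.9, variance ≈ 1.29
```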
The mutual information of the pair (xi, yj) is
I(xi, yj) = log2 [P(xi/yj) / P(xi)] bits ... (1)
Here I(xi, yj) is the mutual information and P(xi/yj) is the conditional
probability of xi given yj.
(i) The mutual information of a channel is symmetric:
I(X;Y) = I(Y;X)
(ii) The mutual information can be expressed in terms of the
entropies of the channel input or channel output and the conditional entropies:
I(X;Y) = H(X) − H(X/Y)
I(Y;X) = H(Y) − H(Y/X)
where H(X/Y) and H(Y/X) are conditional entropies.
(iii) The mutual information is non-negative:
I(X;Y) ≥ 0
(iv) I(X;Y) = H(X) + H(Y) − H(X,Y)
Property 1
The mutual information of a channel is symmetric,
i.e., I(X;Y) = I(Y;X)
Proof
Let us consider some standard relationships from probability
theory. These are as follows:
P(Xi, Yj) = P(Xi/Yj) P(Yj) ... (1)
and P(Xi, Yj) = P(Yj/Xi) P(Xi) ... (2)
From equations (1) and (2) we can write,
P(Xi/Yj) P(Yj) = P(Yj/Xi) P(Xi) ... (3)
The average mutual information is
I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [P(Xi/Yj) / P(Xi)]
By equation (3), P(Xi/Yj)/P(Xi) = P(Yj/Xi)/P(Yj); hence
I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [P(Yj/Xi) / P(Yj)]
= I(Y;X)
Property 2
I(X;Y) = H(X) − H(X/Y) and I(Y;X) = H(Y) − H(Y/X)
Proof
The conditional entropy is
H(X/Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [1 / P(Xi/Yj)] ... (1)
The average mutual information is
I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [P(Xi/Yj) / P(Xi)]
Splitting the logarithm,
I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [1 / P(Xi)] − H(X/Y) ... (2)
Since Σ (j=1 to n) P(Xi, Yj) = P(Xi), the first term is H(X). Hence
I(X;Y) = H(X) − H(X/Y)
Similarly,
I(Y;X) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [P(Yj/Xi) / P(Yj)]
= Σ (j=1 to n) Σ (i=1 to m) P(Xi, Yj) log2 [1 / P(Yj)]
− Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [1 / P(Yj/Xi)] ... (6)
The conditional entropy H(Y/X) is given as,
H(Y/X) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [1 / P(Yj/Xi)] ... (7)
Also, Σ (i=1 to m) P(Xi, Yj) = P(Yj) ... (9)
We know that H(Y) = Σ (j=1 to n) P(Yj) log2 [1 / P(Yj)]
Hence the first term of equation (6) represents H(Y), and the
equation becomes,
I(Y;X) = H(Y) − H(Y/X) ... (10)
Property 3
I(X;Y) ≥ 0
Proof
The average mutual information can be written as,
I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [P(Xi/Yj) / P(Xi)] ... (1)
Hence,
−I(X;Y) = (1/ln 2) Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) ln [P(Xi) P(Yj) / P(Xi, Yj)] ... (2)
Also, we know that
ln α ≤ α − 1
Therefore we have
−I(X;Y) ≤ (1/ln 2) Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) [P(Xi) P(Yj) / P(Xi, Yj) − 1]
−I(X;Y) ≤ (1/ln 2) [ Σ Σ P(Xi) P(Yj) − Σ Σ P(Xi, Yj) ] ... (3)
Since
Σ Σ P(Xi) P(Yj) = Σ P(Xi) × Σ P(Yj) = (1)(1) = 1
Σ Σ P(Xi, Yj) = Σ P(Xi) = 1
the right-hand side of (3) is zero, so −I(X;Y) ≤ 0, i.e., I(X;Y) ≥ 0.
Hence proved.
Property 4
I(X;Y) = H(X) + H(Y) − H(X,Y)
Proof
We know the relation
H(X/Y) = H(X,Y) − H(Y)
Therefore,
I(X;Y) = H(X) − H(X/Y)
= H(X) − [H(X,Y) − H(Y)]
= H(X) + H(Y) − H(X,Y)
Channel efficiency (η)
η = actual transinformation / maximum transinformation
η = I(X;Y) / max I(X;Y) = I(X;Y) / C ... (2)
Redundancy
R = 1 − η
= [C − I(X;Y)] / C ... (3)
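Property 4 can be verified numerically on a small joint distribution (a sketch; the 2x2 joint probability matrix is made up for illustration):

```python
import math

# Numeric check of I(X;Y) = H(X) + H(Y) - H(X,Y)
def H(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

joint = [[0.4, 0.1],      # hypothetical P(Xi, Yj)
         [0.1, 0.4]]
px = [sum(row) for row in joint]              # marginal P(Xi)
py = [sum(col) for col in zip(*joint)]        # marginal P(Yj)
I = H(px) + H(py) - H([p for row in joint for p in row])
print(round(I, 4))  # 0.2781 bits, and always >= 0 (Property 3)
```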
4.9 MAXIMUM ENTROPY FOR CONTINUOUS CHANNEL OR GAUSSIAN CHANNEL
• The probability density function of a Gaussian source is given as
P(x) = [1/(σ√(2π))] e^(−x²/2σ²)
where σ² = average power of the source.
The maximum entropy is computed as follows:
h(x) = ∫ (−∞ to ∞) P(x) log2 [1/P(x)] dx
= −∫ (−∞ to ∞) P(x) log2 P(x) dx
= −∫ (−∞ to ∞) P(x) log2 {[1/(σ√(2π))] e^(−x²/2σ²)} dx
Since log2 (AB) = log2 A + log2 B,
= ∫ (−∞ to ∞) P(x) [log2 (σ√(2π)) + (x²/2σ²) log2 e] dx
[using log2 e^(x²/2σ²) = (x²/2σ²) log2 e, since n log m = log mⁿ]
= (1/2) log2 (2πσ²) ∫ (−∞ to ∞) P(x) dx + (log2 e / 2σ²) ∫ (−∞ to ∞) x² P(x) dx
Now,
∫ (−∞ to ∞) P(x) dx = 1, from the properties of a pdf
∫ (−∞ to ∞) x² P(x) dx = σ², from the definition of variance
Hence,
h(x) = (1/2) log2 (2πσ²) + (1/2) log2 e
h(x) = (1/2) log2 (2πσ²e)
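The closed-form result h(x) = (1/2) log2(2πeσ²) can be sanity-checked by Monte-Carlo estimation of −E[log2 P(x)] (an illustrative sketch; the sample size and seed are arbitrary):

```python
import math
import random

# Monte-Carlo estimate of h(x) = -E[log2 P(x)] for a Gaussian source
random.seed(1)
sigma = 2.0

def log2_pdf(x):
    # log2 of the Gaussian density used in the derivation above
    return (math.log2(1 / (sigma * math.sqrt(2 * math.pi)))
            - (x * x) / (2 * sigma * sigma) * math.log2(math.e))

samples = [random.gauss(0, sigma) for _ in range(200_000)]
h_est = -sum(log2_pdf(x) for x in samples) / len(samples)
h_theory = 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)
print(abs(h_est - h_theory) < 0.05)  # the estimate agrees closely
```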
C = B log2 (1 + S/N) bits/sec
where
B → the channel bandwidth,
S → the signal power,
N → the total noise power within the channel bandwidth.
Here B is the bandwidth and the power spectral density of white noise is No/2;
hence the noise power N becomes,
N = ∫ (−B to B) (No/2) df
Noise power
N = No B
[Figure: channel model — source X and additive noise N combine to give Y = X + N at the destination.]
H[x, y] = H[y] + H[x/y] ... (2)
The noise added in the system is Gaussian in nature.
As the source is independent of the noise,
H[x, y] = H[x] + H[N] ... (3)
As y depends on x and N,
y = f(x, N), and y = x + N
Therefore, H[x, y] = H(x, N) ... (4)
Combining equations (2), (3) and (4),
H[y] + H[x/y] = H[x] + H[N]
H[x] − H[x/y] = H[y] − H[N] ... (5)
This leads to the capacity
C = 2B × (1/2) log2 (1 + S/N)
C = B log2 (1 + S/N) bits/sec
where B is the channel bandwidth. Since the power spectral density of the noise gives
N = No B, we also have
C = B log2 [1 + S/(No B)] bits/sec
With zero noise, C = B log2 (1 + ∞) = ∞.
For C = 10000 bits/sec with B = 10000 Hz,
10000 = 10000 log2 (1 + S/N)
∴ S/N = 1
Thus, for B = 3000 Hz, S/N = 9;
for B = 10000 Hz, S/N = 1.
Problem 2
The channel capacity is given by
C = B log2 (1 + S/N) bits/sec ... (6)
In the above equation, when the signal power is fixed and white
Gaussian noise is present, the channel capacity approaches an upper limit
with increasing bandwidth B. Prove that this upper limit is given as
C∞ = lim (B→∞) C = 1.44 S/No = (1/ln 2)(S/No)
C = B log2 [1 + S/(No B)]
Problem 3
A black and white TV picture consists of about 2 × 10^6 picture
elements with 16 different brightness levels, with equal probabilities. If
pictures are repeated at the rate of 32 per second, calculate the average rate
of information conveyed by this TV picture source. If the SNR is 30 dB, what
is the minimum bandwidth required to support the transmission of the
resultant video signal?
Solution
Given
Picture elements = 2 × 10^6
Source levels (symbols) = 16, i.e., M = 16
Picture repetition rate = 32/sec
(S/N) dB = 30
(i) The source symbol entropy (H)
The source emits any one of the 16 brightness levels. Here M = 16. These
levels are equiprobable. Hence the entropy of such a source is given by,
H = log2 M = log2 16 = 4 bits/symbol (level)
(ii) Symbol rate (r)
Each picture consists of 2 × 10^6 picture elements. 32 such
pictures are transmitted per second. Hence the number of picture elements
per second will be,
r = 2 × 10^6 × 32 = 64 × 10^6 symbols/sec
(iii) Information rate (R)
R = r H = 64 × 10^6 × 4 = 2.56 × 10^8 bits/sec
We know that (S/N) dB = 10 log10 (S/N)
∴ 30 = 10 log10 (S/N)
∴ S/N = 1000
The channel coding theorem states that information can be received
without error if
R ≤ C
R = 2.56 × 10^8 and C = B log2 (1 + S/N)
2.56 × 10^8 ≤ B log2 (1 + S/N)
i.e., 2.56 × 10^8 ≤ B log2 (1 + 1000)
(or) B ≥ 2.56 × 10^8 / log2 (1001), i.e., 25.68 MHz
Therefore, the transmission channel must have a bandwidth of
25.68 MHz to transmit the resultant video signal.
Problem 4
A voice grade telephone channel has a bandwidth of 3400 Hz.
If the signal to noise ratio (SNR) on the channel is 30 dB, determine the
capacity of the channel. If the above channel is to be used to transmit
4.8 kbps of data, determine the minimum SNR required on the channel.
Solution
Given data: channel bandwidth B = 3400 Hz, (S/N) dB = 30
We know that
(S/N) dB = 10 log10 (S/N)
∴ 30 = 10 log10 (S/N)
log10 (S/N) = 3
∴ S/N = 1000
The channel capacity is
C = B log2 (1 + S/N) = 3400 log2 (1 + 1000) ≈ 33.9 kbits/sec
For error-free reception, R ≤ C. Here R = 4.8 kbps and C = B log2 (1 + S/N).
Hence,
4.8 kbps ≤ B log2 (1 + S/N)
i.e., 4800 ≤ 3400 log2 (1 + S/N)
i.e., log2 (1 + S/N) ≥ 1.41176
log10 (1 + S/N) / log10 2 ≥ 1.41176
∴ S/N ≥ 1.66
This means (S/N)min = 1.66 to transmit data at the rate of 4.8 kbps.
Problem 5
For an AWGN channel with 4.0 kHz bandwidth, the noise
spectral density No/2 is 1.0 picowatt/Hz and the signal power at the
receiver is 0.1 mW. Determine the maximum capacity, as also the
actual capacity for the above AWGN channel.
Solution
Given: B = 4000 Hz, S = 0.1 × 10^-3 W
C∞ = lim (B→∞) C = 1.44 S/No
Here No/2 = 1 × 10^-12 W/Hz. Hence the above equation becomes,
C∞ = 1.44 × [0.1 × 10^-3 / (2 × 10^-12)]
= 72 × 10^6 bits/sec or 72 Mbits/sec
The actual capacity is
C = B log2 [1 + S/(No B)] = 4000 log2 [1 + 0.1 × 10^-3 / (2 × 10^-12 × 4000)]
= 4000 log2 (1 + 12500) ≈ 54.4 kbits/sec
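Both capacities in Problem 5 follow directly from C = B log2[1 + S/(No B)] and the wideband limit C∞ = (1/ln 2)(S/No) (a sketch; the book rounds 1/ln 2 to 1.44, giving 72 Mbps):

```python
import math

# Problem 5 data: B = 4 kHz, No/2 = 1 pW/Hz, S = 0.1 mW
B = 4000.0
No = 2e-12                                   # W/Hz (the two-sided density is No/2)
S = 0.1e-3                                   # W
C_actual = B * math.log2(1 + S / (No * B))   # bits/sec
C_inf = S / (No * math.log(2))               # limit of C as B -> infinity
print(round(C_actual / 1e3, 1), round(C_inf / 1e6, 1))  # ≈ 54.4 kbps, ≈ 72.1 Mbps
```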
• When data passes through the channel, errors are introduced
in the data because channel noise interferes with the signal. Hence the
signal power is reduced and errors are introduced.
(i.e.,) d(X,Y) = d = 2.
vi. Minimum distance (dmin): The minimum distance of linear block
code is defined as the smallest hamming distance between any pair
of code words. In the code (or) minimum distance is the same as the
smallest hamming weight of the difference between any pair of code
words.
The following table lists some of the requirements on the error capability
of the code.
1. Detect upto ‘s’ errors per word, dmin ≥ s + 1
2. Correct upto ‘t’ errors per word, dmin ≥ 2t + 1
3. Correct upto ‘t’ errors and detect s > t errors per word, dmin ≥ (t+ s
+1)
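The distance rules above can be illustrated with a small sketch (the two code words are made up for illustration):

```python
# Hamming distance between two words, plus the error-capability rules
def hamming_distance(x, y):
    return sum(a != b for a, b in zip(x, y))

d_min = 3                        # e.g. a Hamming code, where d_min = 3
detectable = d_min - 1           # from d_min >= s + 1
correctable = (d_min - 1) // 2   # from d_min >= 2t + 1
print(hamming_distance("1010011", "1110010"), detectable, correctable)  # 2 2 1
```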
[Figure: structure of a code word — code word length = n bits.]
Linear code: A code is said to be linear if the sum of the two code vec-
tors produces another code vector.
• A code word consists of k message bits, denoted by m1, m2, ..., mk,
and (n − k) parity bits (or check bits), denoted by c1, c2, ..., c(n−k).
• The sequence of message bits is applied to a linear block encoder to
produce an n-bit code word. The elements of this code word are x1, x2, ..., xn.
• We can express this code word mathematically as
• A block code generator generates the parity vectors (or parity bits)
required to be added to the message bits to generate the code words.
The code vector X can also be represented as under:
X = MG ... ( 3 )
Where X = Code vector for 1 × n size
M = Message vector of 1× k size
G = Generator matrix of k × n size
Representation of code vector
Therefore, Ik is the k × k identity matrix:
Ik = [1 0 ... 0]
     [0 1 ... 0]
     [. .     .]
     [0 0 ... 1] (k × k)
and the parity (coefficient) matrix P is
P = [P11 P12 ... P1q]
    [P21 P22 ... P2q]
    [ .   .       . ]
    [Pk1 Pk2 ... Pkq] (k × q)   ... (6)
Similarly we can obtain the expressions for the remaining parity bits.
Hamming codes are characterized in terms of the number of parity bits
q = (n − k) as follows:
1. Block length: n = 2^q − 1
2. Number of message bits: k = 2^q − q − 1 = n − q
3. Number of parity bits: (n − k) = q, where q ≥ 3
(i.e.,) the minimum number of parity bits is 3
4. The minimum distance dmin = 3
5. The code rate (efficiency) r = k/n = (2^q − q − 1)/(2^q − 1) = 1 − q/(2^q − 1)
If q >> 1, then the code rate r ≈ 1
H = [P^T | I(n−k)]
Problem 1
The generator matrix for a (6,3) block code is given below. Find all
the code vectors of this code.
1 0 0 | 0 1 1
G = 0 1 0 | 1 0 1
0 0 1 | 1 1 0
Solution
The code follows the general (n, k) block code pattern; hence, in this case,
n = 6 and k = 3. This means that the message block size k is 3 and the length
of the code vector, (i.e.,) n, is 6. To obtain the code vectors, we shall follow
the steps given below:
(i) First, we separate out the identity matrix Ik and the coefficient matrix P.
We know that the generator matrix is given by
G = [Ik][P]
Comparing this with the given generator matrix, we obtain
1 0 0
I k = I 3×3 = 0 1 0
0 0 1
0 1 1
and Pk ×q = P3×3 = 1 0 1
1 1 0
2. For the second message vector (i.e.,) (m1,m2 , m3 ) = ( 0, 0,1) we have
c1 = 0 ⊕ 1 = 1
c2 = 0 ⊕ 1 = 1
c3 = 0 ⊕ 0 = 0
∴ (c1, c2, c3) = (1, 1, 0)
The complete code word for this message word is given by
Code word = (m1 m2 m3 c1 c2 c3) = (0 0 1 1 1 0)
3. Similarly, we can obtain the code words for the remaining message
words. All these code words are given in the table below.
S.No Message vectors Parity bits Complete code vector
m1 m2 m3 c1 c2 c3 m1 m2 m3 c1 c2 c3
1 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 1 1 1 0 0 0 1 1 1 0
3 0 1 0 1 0 1 0 1 0 1 0 1
4 0 1 1 0 1 1 0 1 1 0 1 1
5 1 0 0 0 1 1 1 0 0 0 1 1
6 1 0 1 1 0 1 1 0 1 1 0 1
7 1 1 0 1 1 0 1 1 0 1 1 0
8 1 1 1 0 0 0 1 1 1 0 0 0
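The table above can be reproduced mechanically by multiplying each message vector by G over GF(2); a short sketch:

```python
from itertools import product

# Generator matrix of the (6,3) code from the problem: G = [I3 | P]
G = [
    [1, 0, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 1, 1, 0],
]

def encode(m, G):
    """Code vector X = MG with modulo-2 arithmetic."""
    n = len(G[0])
    return tuple(sum(m[i] * G[i][j] for i in range(len(m))) % 2
                 for j in range(n))

codewords = {m: encode(m, G) for m in product((0, 1), repeat=3)}
print(codewords[(0, 0, 1)])   # (0, 0, 1, 1, 1, 0) -- row 2 of the table
```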
Problem 2
The parity check matrix of a particular (7,4) linear block code is
given by,
1 1 1 0 1 0 0
[H ] = 1 1 0 1 0 1 0
1 0 1 1 0 0 1
Solution
First, let us obtain the P^T matrix.
P^T is the transpose of the coefficient matrix P. The given parity check
matrix H is an (n − k) × n matrix (or q × n matrix), where q = n − k.
It is given that the code is a (7,4) Hamming code. Therefore, we have
n = 7, k = 4 and q = 3
We have
H(3×7) = [1 1 1 0 | 1 0 0]
         [1 1 0 1 | 0 1 0]
         [1 0 1 1 | 0 0 1]
so that
P^T(3×4) = [1 1 1 0]
           [1 1 0 1]
           [1 0 1 1]
and P(4×3) = [1 1 1]
             [1 1 0]
             [1 0 1]
             [0 1 1]
Similarly, we can obtain the code words for the other message
vectors and the corresponding parity bits and code words are given in
table given below. The weight of the code word is also given.
Code words for the (7,4) Hamming code
S.No | m1 m2 m3 m4 | c1 c2 c3 | x1 x2 x3 x4 x5 x6 x7 | Weight of the code vector
1. 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2. 0 0 0 1 0 1 1 0 0 0 1 0 1 1 3
3. 0 0 1 0 1 0 1 0 0 1 0 1 0 1 3
4. 0 0 1 1 1 1 0 0 0 1 1 1 1 0 4
5. 0 1 0 0 1 1 0 0 1 0 0 1 1 0 3
6. 0 1 0 1 1 0 1 0 1 0 1 1 0 1 4
7. 0 1 1 0 0 1 1 0 1 1 0 0 1 1 4
8. 0 1 1 1 0 0 0 0 1 1 1 0 0 0 3
9. 1 0 0 0 1 1 1 1 0 0 0 1 1 1 4
10. 1 0 0 1 1 0 0 1 0 0 1 1 0 0 3
11. 1 0 1 0 0 1 0 1 0 1 0 0 1 0 3
12. 1 0 1 1 0 0 1 1 0 1 1 0 0 1 4
13. 1 1 0 0 0 0 1 1 1 0 0 0 0 1 3
14. 1 1 0 1 0 1 0 1 1 0 1 0 1 0 4
15. 1 1 1 0 1 0 0 1 1 1 0 1 0 0 4
16. 1 1 1 1 1 1 1 1 1 1 1 1 1 1 7
dmin = 3
Number of errors that can be detected is given by
dmin ≥ s + 1
3 ≥ s +1
or s≤2
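For a linear code, dmin equals the minimum weight of the non-zero code words, so the value dmin = 3 read off the table can be confirmed by enumerating the code from G = [I4 | P] (with the P of this problem):

```python
from itertools import product

# G = [I4 | P] for the (7,4) Hamming code of this problem
P = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1]]
G = [[int(i == j) for j in range(4)] + P[i] for i in range(4)]

def encode(m):
    """X = MG over GF(2)."""
    return tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7))

# For a linear code, dmin = minimum weight over all non-zero code words
weights = [sum(encode(m)) for m in product((0, 1), repeat=4) if any(m)]
d_min = min(weights)
print(d_min)   # 3
```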
Figure: encoder for the (7,4) Hamming code — the input sequence is shifted
into the message register (m4 m3 m2 m1); modulo-2 adders form the parity
(check) bits c3 c2 c1, which are held in the parity bit register; the
switch S reads out the message bits followed by the check bits to form the
code words.
1. Basic concept
The decoding of linear block codes is done by using a special
technique called syndrome decoding, which reduces the memory
requirement of the decoder. The two operations involved are: (i) error
detection in the received code word, and (ii) error correction. The
syndrome decoding technique is explained as follows.
2. Practical Assumptions
(i) Let X represent the transmitted code word and Y represent the re-
ceived code word.
(ii) Then, if X = Y, there are no errors in the received signal, and if
X ≠ Y, then some errors are present.
3. Detection of Error
a. For an (n,k) linear block code, there exists a parity check matrix of size
(n − k) × n. We know that,
Parity check matrix, H(q×n) = [P^T : Iq], where q = n − k
and its transpose, H^T(n×q) = [P ]
                              [Iq]
For every valid code vector X,
X H^T = (0, 0, ..., 0)
This means that the product of any code vector X and the
transpose of the parity check matrix will always be 0.
We shall use this property for the detection of errors in received
code words as under:
At the receiver, we have
If YH T = ( 0, 0,...0 ) , then Y = X ( i.e.,) there is no error
But, if YH T ≠ ( 0, 0,...0 ) , then Y ≠ X ( i.e.,) error exists in the received
code word
4. Syndrome and its use for error detection
The syndrome is defined as the non-zero output of the
product YHT. Thus, the non-zero syndrome represents some errors
present in the received code word Y. The syndrome is represented by S
and is mathematically given as,
S = Y H^T
or [S](1×(n−k)) = [Y](1×n) [H^T](n×(n−k))
Thus, when S = 0, the received code word is error-free.
Transmitted code vector X: 0 0 1 1 1 1 0
Received code vector Y:    1 0 0 1 0 1 0
Error vector E:            1 0 1 0 1 0 0
X = [1 ⊕ 1, 0 ⊕ 0, 0 ⊕ 1,1 ⊕ 0, 0 ⊕ 1,1 ⊕ 0, 0 ⊕ 0]
X = [0, 0,1,1,1,1, 0]
H^T(7×3) = [1 1 1]
           [1 1 0]
           [1 0 1]
           [0 1 1]
           [1 0 0]
           [0 1 0]
           [0 1 1]
Also, the product X H^T is given by
X H^T = (0 1 1 1 0 0 0)(1×7) [1 1 1]
                             [1 1 0]
                             [1 0 1]
                             [0 1 1]
                             [1 0 0]
                             [0 1 0]
                             [0 1 1] (7×3)
= [(0 × 1) ⊕ (1 × 1) ⊕ (1 × 1) ⊕ (1 × 0) ⊕ (0 × 1) ⊕ (0 × 0) ⊕ (0 × 0),
   (0 × 1) ⊕ (1 × 1) ⊕ (1 × 0) ⊕ (1 × 1) ⊕ (0 × 0) ⊕ (0 × 1) ⊕ (0 × 1),
   (0 × 1) ⊕ (1 × 0) ⊕ (1 × 1) ⊕ (1 × 1) ⊕ (0 × 0) ⊕ (0 × 0) ⊕ (0 × 1)]
or X H^T = [0 ⊕ 1 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0, 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0, 0 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0]
or X H^T = [0, 0, 0](1×3)
This proves that, for a valid code word, the product X H^T = (0, 0, 0).
Problem 2
The parity check matrix of a (7,4) Hamming code is as under:
1 1 0 1 1 0 0
H = 1 1 1 0 0 1 1
1 0 1 1 0 0 1
Calculate the syndrome vector for single bit errors.
Solution
We know that syndrome vector is given by
S = E H^T = [E](1×7) [H^T](7×3)
Therefore, the syndrome vector will be a 1 × 3 matrix:
[S](1×3) = [E](1×7) [H^T](7×3)
Now let us write various error vectors
Various error vectors with single bit errors are shown in the table
given below. The bolded bits represent the locations of the errors.
1. 1 0 0 0 0 0 0 First
2. 0 1 0 0 0 0 0 Second
3. 0 0 1 0 0 0 0 Third
4. 0 0 0 1 0 0 0 Fourth
5. 0 0 0 0 1 0 0 Fifth
6. 0 0 0 0 0 1 0 Sixth
7. 0 0 0 0 0 0 1 Seventh
(i) For the first bit in error, we have
[S] = [1 0 0 0 0 0 0](1×7) [1 1 1]
                           [1 1 0]
                           [1 0 1]
                           [0 1 1]
                           [1 0 0]
                           [0 1 0]
                           [0 1 1]
Therefore, [S] = [1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0, 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0, 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0]
Here, [S] = [1, 1, 1]
This is the syndrome for the first bit in error.
(ii) For the second bit in error, we have
[S] = [0 1 0 0 0 0 0] [1 1 1]
                      [1 1 0]
                      [1 0 1]
                      [0 1 1]
                      [1 0 0]
                      [0 1 0]
                      [0 1 1]
Therefore, [S] = [0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0, 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0, 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0]
Here, [S] = [1, 1, 0]
S.No | Error vectors with single bit errors | Syndrome vector "S"
1. 0 0 0 0 0 0 0 0 0 0
2. 1 0 0 0 0 0 0 1 1 1 ←1st Row of HT
3. 0 1 0 0 0 0 0 1 1 0 ←2nd Row of HT
4. 0 0 1 0 0 0 0 1 0 1 ←3rd Row of HT
5. 0 0 0 1 0 0 0 0 1 1 ←4th Row of HT
6. 0 0 0 0 1 0 0 1 0 0 ←5th Row of HT
7. 0 0 0 0 0 1 0 0 1 0 ←6th Row of HT
8. 0 0 0 0 0 0 1 0 1 1 ←7th Row of HT
S = YH T
Problem 2
To clear the above concept of error correction using syndrome
vector, let us consider one particular example. For this, let us use the
following parity check matrix:
H(3×7) = [1 1 1 0 1 0 0]
         [1 1 0 1 0 1 1]
         [1 0 1 1 0 1 1]
Solution
(i) First, we obtain the received code vector ‘Y’
Assuming X = ( 0 1 0 0 1 1 0 ) to be the transmitted code vector
Let the received code vector be obtained by assuming that the third
bit is in error:
Y = (0 1 1 0 1 1 0)
Here, the third bit represents the error.
(ii) Next let us determine the corresponding syndrome vector
We know that,
Syndrome S = Y H^T
S = [0 1 1 0 1 1 0] [1 1 1]
                    [0 1 1]
                    [1 0 1]
                    [1 1 0]
                    [0 0 1]
                    [0 1 0]
                    [1 1 0]
We have S = [ 0 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0, 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 0,
0 ⊕ 1 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0]
or S = [1, 0,1]
This is the syndrome vector for the given received signal. This
corresponds to the 3rd row of the transpose matrix HT.
(iii) But S = Y H^T = E H^T
Therefore, E H^T = [1, 0, 1]
(iv) Let us obtain the error vector for the syndrome vector S = [1, 0, 1]
From the table the error vector corresponding to the syndrome
(1, 0, 1) is given by
E = (0 0 1 0 0 0 0)
SOURCE CODING AND ERROR CONTROL CODING
This shows that the error is present in the third position of the
received code vector Y.
(v) To obtain the correct vector: the vector X can be obtained as under;
X = Y ⊕ E
Substituting the values of Y and E, we obtain
X = [0 1 1 0 1 1 0] ⊕ [0 0 1 0 0 0 0]
or X = [0 1 0 0 1 1 0]
Figure: syndrome decoder — the received vector Y feeds a syndrome
calculator (S = Y H^T); the syndrome S addresses a look-up table of error
patterns E, and the corrected code vector is obtained as X = Y ⊕ E.
2. Working Operation
The received n-bit code word Y is stored in an n-bit register.
This code vector is then applied to a syndrome calculator to calculate
syndrome S = YHT . In order to obtain the syndrome, the transposed
parity check matrix, HT is stored in the syndrome calculator. The (n-k)
bit syndrome vector S is applied to the look-up table containing the
error patterns. An error pattern is selected corresponding to the
particular syndrome S generated at the output of the syndrome
calculator. The selected error pattern E is then added (modulo-2-
addition) to the received signal Y to generate the corrected code vectors
X.
Therefore, X = Y ⊕ E
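The whole loop above can be sketched in a few lines. This is a minimal illustration, assuming the (7,4) parity check matrix H of Problem 2 earlier in this section; a single-bit error is injected, located by matching the syndrome against the columns of H (the rows of H^T), and corrected:

```python
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def syndrome(y):
    """S = Y H^T over GF(2): one bit per row of H."""
    return tuple(sum(h * b for h, b in zip(row, y)) % 2 for row in H)

def correct(y):
    """Locate a single-bit error by matching S against the columns of H."""
    s = syndrome(y)
    if s == (0, 0, 0):
        return list(y)                # no error detected
    for j in range(7):
        if tuple(H[i][j] for i in range(3)) == s:
            fixed = list(y)
            fixed[j] ^= 1             # flip the erroneous bit
            return fixed
    raise ValueError("uncorrectable error pattern")

x = [1, 0, 1, 1, 0, 0, 1]             # a valid code word of this code
y = list(x); y[1] ^= 1                # inject an error in the second bit
print(correct(y) == x)                # True
```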
Problem 1
An error control code has the following parity check matrix
1 0 1 1 0 0
H = 1 1 0 0 1 0
0 1 1 0 0 1
or [H](3×6) = [P^T : I3]
so that
H^T(6×3) = [1 1 0]
           [0 1 1]
           [1 0 1]
           [1 0 0]
           [0 1 0]
           [0 0 1]
and the syndrome of the received word is
S = [1 1 0 1 1 0] H^T
or S = [1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0, 1 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 0, 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0]
or S = [0, 1, 1]
This is the same as the second row of the transpose matrix H^T, which
indicates that there is an error in the second bit of the received signal,
(i.e.,)
Y = 1 1 0 1 1 0 (error in the second bit)
Problem 2
Given a (7,4) Hamming code whose generator matrix is given by
G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 1]
    [0 0 1 0 1 1 0]
    [0 0 0 1 0 1 1]
Solution
(i) First, we obtain the P matrix from the generator matrix.
(ii) Then, we obtain the parity bits for each message vector using the
expression,
C = MP
(iii) Next, we obtain all the possible code words as
X = [M : C]
(iv) Lastly we obtain the transpose of P matrix (i.e.,) PT and obtain the
parity check matrix as : [H] = [PT | In-k]
Given generator matrix
G = [1 0 0 0 1 0 1]
    [0 1 0 0 1 1 1]
    [0 0 1 0 1 1 0]
    [0 0 0 1 0 1 1]
Therefore, the P matrix is given by
P(4×3) = [1 0 1]
         [1 1 1]
         [1 1 0]
         [0 1 1]
(ii) Next, we obtain the parity check bits.
The parity bits can be obtained using the following expression:
C = MP
or [c1 c2 c3] = [m1 m2 m3 m4] [1 0 1]
                              [1 1 1]
                              [1 1 0]
                              [0 1 1]
Solving, we obtain
c1 = m1 ⊕ m2 ⊕ m3
c2 = m2 ⊕ m3 ⊕ m4
c3 = m1 ⊕ m2 ⊕ m4
Using these equations, we can obtain the parity bits for each
message vector. For example, let the message word be
m1 m2 m3 m4 = 0 1 0 1
Therefore, we write c1 = 0 ⊕ 1 ⊕ 0 = 1
c2 = 1 ⊕ 0 ⊕ 1 = 0
c3 = 0 ⊕ 1 ⊕ 1 = 0
Hence, the corresponding parity bits are c1 c2 c3 = 1 0 0
Therefore, the complete code word for the message word 0101 is
given by
Complete code word = 0 1 0 1 1 0 0 (message bits followed by parity bits)
Similarly, we can obtain the codewords for the remaining
message words. All the message vectors, the corresponding parity bits
and codewords are given in table
(iv) Lastly, let us obtain the parity check matrix.
The parity check matrix [H] is a 3 × 7 matrix, (i.e.,)
H = [P^T : I(n−k)]
The transpose matrix P^T is given by
P^T(3×4) = [1 1 1 0]
           [0 1 1 1]
           [1 1 0 1]
Therefore, we have
H = [P^T : I(n−k)] = [1 1 1 0 1 0 0]
                     [0 1 1 1 0 1 0]
                     [1 1 0 1 0 0 1] (3×7)
This is the required check matrix.
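A quick check of these parity equations (P as extracted above; the message 0101 should yield c1 c2 c3 = 100):

```python
P = [[1, 0, 1],
     [1, 1, 1],
     [1, 1, 0],
     [0, 1, 1]]

def parity_bits(m):
    """C = MP over GF(2)."""
    return tuple(sum(m[i] * P[i][j] for i in range(4)) % 2 for j in range(3))

m = (0, 1, 0, 1)
c = parity_bits(m)
print(m + c)    # (0, 1, 0, 1, 1, 0, 0)
```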
Problem 3
For a systematic linear block code, the three parity check digits
c4, c5 and c6 are given by
c 4 = m1 ⊕ m2 ⊕ m3
c 5 = m1 ⊕ m2
c 6 = m1 ⊕ m3
Solution
(i) First, we obtain the parity matrix P and the generator matrix G.
(ii) Then we obtain the values c4, c5, c6 for various combinations of m1,
m2, m3; we obtain dmin, and from the value of dmin we calculate the
error detecting and correcting capability.
(iii) Lastly, we decode the received words with the help of the syndromes
listed in the decoding table.
(i) First, let us obtain the parity matrix P and the generator matrix G.
We know that the relation between the check (parity) bits, the
message bits and the parity matrix P is given by:
c6 = m1 ⊕ m3 = 0 ⊕ 1 = 1
Therefore c4 c5 c6 = 1 0 1 and the code word is given by
Code word for m1 m2 m3 = 0 0 1: (m1 m2 m3 c4 c5 c6) = (0 0 1 1 0 1)
Similarly, the other code words are obtained. They are listed in the table
below.
S.No Message Check Code Vector (or) Code
The syndrome of the received word Y1 = 1 0 1 1 0 0 is
S = [1 0 1 1 0 0] [1 1 1]
                  [1 1 0]
                  [1 0 1]
                  [1 0 0]
                  [0 1 0]
                  [0 0 1]
S = [1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 ⊕ 0, 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0, 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0]
or S = [1 1 0]
Thus, the syndrome of the received word is [110] which is
the same as the second syndrome in the decoding table. Hence the
corresponding error pattern is given by
E = [0 1 0 0 0 0 ]
and the correct word can be obtained as under
X1 = Y1 ⊕ E = [1 0 1 1 0 0] ⊕ [0 1 0 0 0 0]
X1 = [1 1 1 1 0 0]
This is the corrected transmitted word.
Similarly, we can perform decoding of 0 0 0 1 1 0
Let X2 = 000110....... is the second received codeword. Even this
is not the valid codeword listed in codeword table. The syndrome for this
can be obtained as under
S = Y2 H T
1 1 1
1 1 0
1 0 1
or S=
1 0 0
0 1 0
0 0 1
or S= [1 1 0]
Problem 4
For a (6,3) code, the generator matrix G is given by
1 0 0 1 0 1
G = 0 1 0 0 1 1
0 0 1 1 1 0
(i) Realize an encoder for this code.
Solution
(i) First, we obtain the expression for the parity bits.
The parity bits can be obtained by using the expression:
C = MP
Figure: encoder for the (6,3) code — the input sequence is shifted into the
message register (m3 m2 m1); modulo-2 adders form the check bits c3 c2 c1,
held in the parity bit register; the switch S outputs the message bits
followed by the check bits to form the code words.
X(p) = x0 + x1 p + x2 p^2 + ... + x(n−1) p^(n−1)
Where p = an arbitrary real variable
X(p) = code word polynomial of degree (n − 1)
X(p) = M(p) · G(p)
Where M(p) = message polynomial of degree ≤ (k − 1):
M(p) = m0 + m1 p + m2 p^2 + ... + m(k−1) p^(k−1)
X1 ( p ) = M 1 ( p ) .G ( p )
X 2 ( p ) = M 2 ( p ) .G ( p )
X 3 ( p ) = M 3 ( p ) .G ( p ) .............and so on.
Problem 1
Solution
(i) This is a (7,4) cyclic Hamming code. Therefore, the message vectors
are going to be 4 bit long. There will be total 24 = 16 message vectors.
Let us consider any message code vector as under
M = (m3 m2 m1 m0) = (0 1 0 1)
M(p) = m3 p^3 + m2 p^2 + m1 p + m0   ... (1)
∴ M(p) = p^2 + 1
The generator polynomial is G(p) = p^3 + p + 1
Therefore, X(p) = p^(n−k) M(p) ⊕ C(p)
= (0p^6 + p^5 + 0p^4 + p^3 + 0p^2 + 0p + 0) ⊕ (p^2 + 0p + 0)
or X(p) = 0p^6 + p^5 + 0p^4 + p^3 + p^2 + 0p + 0
The code word vector is given by: (0 1 0 1 : 1 0 0)
or X = (m(k−1) ... m1 m0 : c(q−1) c(q−2) ... c1 c0) = (m3 m2 m1 m0 : c2 c1 c0)
= (0 1 0 1 : 1 0 0)
The above equation represents the polynomial for the rows of the
generating polynomials. It is possible to obtain the generator matrix from
this equation.
Problem 1
For a (7,4) cyclic code, determine the generator matrix if
G(p) = 1+ p+ p3
Solution
Here, n = 7 and k = 4, hence q = n − k = 3
G(p) = 1 + p + p^3
(i) We multiply both sides of G(p) by p^i, i = (k − 1), ..., 1, 0.
∴ p^i G(p) = p^(i+3) + p^(i+1) + p^i,  i = (k − 1), ..., 1, 0
But k = 4, ∴ i = 3, 2, 1, 0
(ii) By substituting these values of i into the above equation, we get
four different polynomials. These polynomials correspond to the four
rows of the generator matrix as under:
Row No.1: i = 3 → p^3 G(p) = p^6 + p^4 + p^3
Row No.2: i = 2 → p^2 G(p) = p^5 + p^3 + p^2
Row No.3: i = 1 → p G(p) = p^4 + p^2 + p
Row No.4: i = 0 → G(p) = p^3 + p + 1
The generator matrix for an (n, k) code is of size k × n. Therefore, for the
(7,4) cyclic code, the generator matrix will be a 4 × 7 matrix. The
polynomials corresponding to the four rows are therefore as under:
Row No.1: i = 3 → p^6 + 0p^5 + p^4 + p^3 + 0p^2 + 0p + 0
Row No.2: i = 2 → 0p^6 + p^5 + 0p^4 + p^3 + p^2 + 0p + 0
Row No.3: i = 1 → 0p^6 + 0p^5 + p^4 + 0p^3 + p^2 + p + 0
Row No.4: i = 0 → 0p^6 + 0p^5 + 0p^4 + p^3 + 0p^2 + p + 1
These polynomials can be converted into the generator matrix G as under
      p^6 p^5 p^4 p^3 p^2 p^1 p^0
G = [ 1   0   1   1   0   0   0 ]
    [ 0   1   0   1   1   0   0 ]
    [ 0   0   1   0   1   1   0 ]
    [ 0   0   0   1   0   1   1 ] (4×7)
This is the required generator matrix.
The cyclic codes are a subclass of the linear block codes. Therefore,
their code vectors can be obtained by using the generator matrix as under:
X = MG
Where M = 1 × k message vector
Problem 2
For the generator matrix of the previous example, determine all
the possible code vectors
Solution:
All the code vectors can be obtained by using the following
expression:
X = MG
Let us consider any 4-bit message vector, M = (m3 m2 m1 m0) = (1 0 1 0)
Therefore, X = [1 0 1 0] [1 0 1 1 0 0 0]
                         [0 1 0 1 1 0 0]
                         [0 0 1 0 1 1 0]
                         [0 0 0 1 0 1 1]
Therefore, X = [1 ⊕ 0 ⊕ 0 ⊕ 0, 0 ⊕ 0 ⊕ 0 ⊕ 0, 1 ⊕ 0 ⊕ 1 ⊕ 0,
1 ⊕ 0 ⊕ 0 ⊕ 0, 0 ⊕ 0 ⊕ 1 ⊕ 0, 0 ⊕ 0 ⊕ 1 ⊕ 0, 0 ⊕ 0 ⊕ 0 ⊕ 0]
Therefore, we have X = [1 0 0 1 : 1 1 0]
Similarly, the other code vectors can be obtained.
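The same multiplication, sketched in code with the 4 × 7 generator matrix obtained in the previous problem:

```python
G = [
    [1, 0, 1, 1, 0, 0, 0],
    [0, 1, 0, 1, 1, 0, 0],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
]

def encode(m):
    """X = MG over GF(2); m = (m3, m2, m1, m0)."""
    return tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7))

print(encode((1, 0, 1, 0)))   # (1, 0, 0, 1, 1, 1, 0)
```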
Problem 1
For systematic (7,4) cyclic code, determine the generator matrix
and parity check matrix. Given; G(p)=p3+p+1
Solution
(i) The ith row of the generator matrix is given by the equation
p^(n−i) ⊕ Ri(p) = Qi(p) G(p);  where i = 1, 2, ..., k   ... (1)
(ii) It is given that the cyclic code is a systematic (7,4) code.
Therefore, n = 7, k = 4 and (n − k) = 3
Substituting these values into the above expression, we obtain
p^(7−i) ⊕ Ri(p) = Qi(p) · (p^3 + p + 1),  i = 1, 2, ..., 4
(iii) With i = 1, the above equation is given by
p^6 ⊕ R1(p) = Q1(p) (p^3 + p + 1)   ... (2)
Let us obtain the value of Q1(p). The quotient Q1(p) can be
obtained by dividing p^(n−i) by G(p) as per equation (2). Therefore,
to obtain Q1(p), let us divide p^6 by (p^3 + p + 1).
The division takes place as under (all additions are modulo-2):
p^6 ÷ (p^3 + p + 1):
  p^6 ⊕ p^3 (p^3 + p + 1) = p^6 ⊕ (p^6 + p^4 + p^3) = p^4 + p^3
  (p^4 + p^3) ⊕ p (p^3 + p + 1) = (p^4 + p^3) ⊕ (p^4 + p^2 + p) = p^3 + p^2 + p
  (p^3 + p^2 + p) ⊕ 1 · (p^3 + p + 1) = p^2 + 1
Quotient Q1(p) = p^3 + p + 1, Remainder R1(p) = p^2 + 1
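The same division can be carried out on bit patterns (polynomials encoded as integers, bit i = coefficient of p^i); dividing p^6 by p^3 + p + 1 should again leave the remainder p^2 + 1:

```python
def gf2_divmod(dividend, divisor):
    """Long division of binary polynomials (ints, bit i = coeff of p^i)."""
    q = 0
    shift = dividend.bit_length() - divisor.bit_length()
    while shift >= 0 and dividend:
        # If the current leading coefficient is 1, subtract divisor * p^shift
        if dividend >> (divisor.bit_length() - 1 + shift) & 1:
            dividend ^= divisor << shift   # GF(2) subtraction is XOR
            q |= 1 << shift
        shift -= 1
    return q, dividend

q, r = gf2_divmod(0b1000000, 0b1011)       # p^6 / (p^3 + p + 1)
print(bin(q), bin(r))                      # 0b1011 0b101 (p^3+p+1, p^2+1)
```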
The transpose matrix P^T is given by
P^T(3×4) = [1 1 1 0]
           [0 1 1 1]
           [1 1 0 1]
Hence the parity check matrix is given by
H = [1 1 1 0 1 0 0]
    [0 1 1 1 0 1 0]
    [1 1 0 1 0 0 1] (3×7)
This is the required parity check matrix.
Figure: encoder for an (n, k) cyclic code — a feedback shift register with
tap gains g1, g2, ..., g(n−k−1).
Problem 1
Draw the encoder for a (7,4) cyclic Hamming code generated by
the generator polynomial G(p) = 1 + p + p^3
Solution
The generator polynomial is given by
G(p) = p^3 + 0p^2 + p + 1   ... (1)
The generator polynomial of an (n, k) cyclic code is expressed as
under:
G(p) = 1 + Σ (i = 1 to n−k−1) gi p^i + p^(n−k)   ... (2)
Hence, G(p) = p^3 + g2 p^2 + g1 p + 1   ... (3)
Comparing equations (1) and (3), we obtain
g1 = 1 and g2 = 0
Thus the encoder for a ( 7, 4 ) Hamming code is shown in figure below
Feedback
Switch
g2=0
g1=1
corrections. After shifting in all the incoming bits of the signal Y, the
output switch is transferred to position 2 and clock pulses are applied to
the shift register to shift out the syndrome. The following example will
make the concept of syndrome calculation obvious.
syndrome register are then shifted. The received code vector Y (which is
stored in the buffer register), is then added with the error vector E (which
is stored in the error register) bit by bit to obtain the corrected code word
X at decoder output.
Advantages of Cyclic codes
The advantage of cyclic codes over most of the other codes are as
under:
(i) They are easy to encode
(ii) They possess a well defined mathematical structure which has led to
development of very efficient decoding schemes for them
(iii) The methods that are to be used for error detection and correction
are simpler and easy to implement.
(iv) These methods do not require look-up table decoding
(v) It is possible to detect the error bursts using the cyclic codes.
Drawbacks of cyclic codes
Even though the error detection is simpler, the error correc-
tion is slightly more complicated. This is due to the complexity of the
combinational logic circuit used for error correction.
The main differences between the block codes and the convolu-
tional (or recurrent) codes may be listed below:
(i) Block codes
In block codes, the block of n bits generated by the encoder in a
particular time unit depends only on the block of k message bits within
that time unit. Generally, the values of k and n will be large.
(ii) Convolutional codes
In the convolutional codes, the block of n bits generated by the
encoder at a given time depends not only on the k message bits within that
time unit, but also on the preceding 'L' blocks of the message bits (L>1).
Generally, the values of k and n will be small.
Application of convolutional code
Like block codes, the convolutional codes can be designed to
either detect or correct errors. However, because data is usually
retransmitted in blocks, the block codes are more suitable for error
detection and the convolutional codes are more suitable for error
correction.
Figure: rate-1/2 convolutional encoder — the message input m passes
through a shift register (m, m1, m2); two modulo-2 adders produce x1 and
x2, which are interleaved by the commutator switch to form the encoded
bit stream.
• The output bit rate is twice that of the input bit rate.
4.16.2 Important Definitions
1. The code rate (r)
The code rate of the encoder of the figure is expressed as
r = k/n
Here, k = number of message bits = 1
n = number of encoded bits per message bit = 2
Therefore, r = 1/2
Then these bit sequences are multiplexed with the help of the
commutator switch to produce the following output:
X = {x0^(1) x0^(2) x1^(1) x1^(2) x2^(1) x2^(2) ...}
Where x1 = {xi^(1)} = {x0^(1) x1^(1) x2^(1) ...}
and x2 = {xi^(2)} = {x0^(2) x1^(2) x2^(2) ...}
Problem 1
From the convolutional figure below determine the following
(i) Dimension of the code
(ii) Code rate
(iii) Constraint length
(iv) Generating sequence
(v) The encoded sequence for the input message (10011)
Figure: the given convolutional encoder — the message input feeds
flip-flops FF1 and FF2 (m0, m1, m2); a top mod-2 adder and a bottom mod-2
adder feed positions 1 and 2 of the commutator switch.
Solution
The given encoder can be drawn in standard form as shown
Given message sequence (m0m1m2m3m 4 ) = (10011)
Figure: the encoder redrawn in standard form, showing the tap gains
g0^(1), g1^(1), g2^(1) of the top adder and g0^(2), g2^(2) of the bottom
adder.
The generating sequence of the top adder is
gi^(1) = {1, 1, 1}
where g0^(1) = 1 indicates the connection of bit m0
g1^(1) = 1 indicates the connection of bit m1
g2^(1) = 1 indicates the connection of bit m2
Similarly, x2 (or xi^(2)) is obtained by adding the first and last bits. Hence
its generating sequence is given by
gi^(2) = {1, 0, 1}
where g0^(2) = 1 indicates the connection of bit m0
x2^(1) = g0^(1) m2 + g1^(1) m1 + g2^(1) m0
= (1 × 0) + (1 × 0) + (1 × 1)
= 0 + 0 + 1 = 1 (mod-2 addition)
x3^(1) = g0^(1) m3 + g1^(1) m2 + g2^(1) m1
= (1 × 1) + (1 × 0) + (1 × 0) = 1
x4^(1) = g0^(1) m4 + g1^(1) m3 + g2^(1) m2
= (1 × 1) + (1 × 1) + (1 × 0)
= 1 + 1 + 0 = 0 (mod-2 addition)
x5^(1) = g0^(1) m5 + g1^(1) m4 + g2^(1) m3
= g1^(1) m4 + g2^(1) m3
= (1 × 1) + (1 × 1)
= 1 + 1 = 0 [m5 is not available]
x6^(1) = g0^(1) m6 + g1^(1) m5 + g2^(1) m4
= g2^(1) m4 = 1 × 1 = 1 [m5 and m6 are not available]
Hence, the code bits obtained at the output of the top adder are
(x0^(1) x1^(1) x2^(1) x3^(1) x4^(1) x5^(1) x6^(1)) = (1 1 1 1 0 0 1)
Hence, the code bits obtained at the output of the bottom adder are
given by,
(x0^(2) x1^(2) x2^(2) x3^(2) x4^(2) x5^(2) x6^(2)) = (1 0 1 1 1 1 1)
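The whole encoding run can be checked with a direct discrete convolution over GF(2) (generating sequences g^(1) = (1, 1, 1) and g^(2) = (1, 0, 1), message 1 0 0 1 1):

```python
def conv_encode(m, g):
    """x_i = sum_j g_j * m_(i-j) (mod 2), for i = 0 .. len(m)+len(g)-2."""
    n = len(m) + len(g) - 1
    out = []
    for i in range(n):
        s = 0
        for j, gj in enumerate(g):
            if 0 <= i - j < len(m):
                s ^= gj & m[i - j]
        out.append(s)
    return out

m  = [1, 0, 0, 1, 1]
x1 = conv_encode(m, [1, 1, 1])   # top adder output sequence
x2 = conv_encode(m, [1, 0, 1])   # bottom adder output sequence
# Interleave x1 and x2 as the commutator switch would:
X  = [b for pair in zip(x1, x2) for b in pair]
print(x1, x2)
```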
The polynomials G^(1)(p) and G^(2)(p) are called the generator
polynomials of the code.
From the generator polynomials, we can obtain the code word
polynomial as under.
The code word polynomial corresponding to the top adder is given by
x^(1)(p) = G^(1)(p) · m(p)
where m(p) = message polynomial
= m0 + m1 p + m2 p^2 + ... + m(L−1) p^(L−1)
Problem 1
Determine the codeword for the cyclic encoder of figure for the
message signal (1 0 0 1 1), using the transform domain approach. The
impulse response of the input top adder output path is (1, 1, 1) and that
of the input bottom adder path is (1, 0, 1)
Solution
First, let us write the generator polynomial G^(1)(p).
The impulse response of the input-to-top-adder output path of the
convolutional encoder is (1, 1, 1). Therefore, we have
g0^(1) = 1, g1^(1) = 1 and g2^(1) = 1
m2 m1 State of
encoder
0 0 a
0 1 b
1 0 c
1 1 d
4.16.4.1 THE CODE TREE
Let us draw the code tree for the (2,1) encoder. We assume that the
register has been cleared so that it contains all zeros, (i.e.,) initial state
m2 m1 = 0 0. Let us consider the input message sequence m = 0 1 0. For
each input bit m, the outputs x1 and x2 can be determined as follows:
(i) When the input message bit m = 0 (first bit)
x1 = m ⊕ m1 ⊕ m2 = 0 ⊕ 0 ⊕ 0 = 0
x2 = m ⊕ m2 = 0 ⊕ 0 = 0
Therefore x1 x2 = 0 0
new state
0 0 0 x1x2=0 0 0 0 0 0
m0 m1 m2 m0 m1 m2 This bit is
before shift after shift discarded
node ‘a’ which represents the initial state. Hence if m =0, we should take
the upper branch from node ‘a’ to obtain the output x1 x2 = 0 0. The new
state of the encoder is m2 m1 = 0 0 (or) a.
(ii) When the input message bit m =1 (second bit)
When m = 1, x1 and x 2 can be determined as,
x1 = m ⊕ m1 ⊕ m2
= 1⊕ 0 ⊕ 0 = 1
x 2 = m ⊕ m2
= 1⊕ 0 = 1
Figure: register contents before and after the shift for m = 1; the bit
shifted out is discarded. Output x1 x2 = 1 1; new state m2 m1 = 0 1,
(i.e.,) b.
Figure: code tree for the (2,1) convolutional encoder — starting from node
'a', take the upper branch from each node if m = 0 and the lower branch if
m = 1; each branch is labelled with the output bits x1 x2 (e.g., the code
word for m = 0 is x1 x2 = 0 0).
Figure 4.16 Code trellis for the (2, 1)
convolution encoder
A solid line represents the state transition of branch m = 0 and
dotted line represents the branch m =1. Each branch is labelled with the
resulting output bits x1 x2.
4.16.4.3 State Diagram
The figure shows a state diagram for the encoder. We can obtain
this state diagram from the code trellis by coalescing the left and right
sides of the trellis. The self loops at the nodes a and d represent the state
transitions a-a and d-d; each transition is labelled with the output code
word x1 x2.
For example, with the encoder in state 'b' (m1 = 1, m2 = 0) and input m = 0:
x1 = 0 ⊕ 1 ⊕ 0 = 1
x2 = 0 ⊕ 0 = 0
so the output is x1 x2 = 1 0 and the new state is m2 m1 = 1 0, (i.e.,) c.
Figure: Viterbi decoding of the first received pairs (y = 11, then y = 01)
— each branch is labelled with its output bits and branch metric in
brackets; running path metrics appear at the nodes (e.g., for y = 11 the
branch a0→a1 with output 00 has metric 2, while the branch a0→b1 with
output 11 has metric 0).
Similarly, the path metric for the path a0-a1-b2 is 3, that of the path
a0-b1-d2 is 0, and so on. The Viterbi algorithm for all the input bits is as
shown in figure 4.19.
Figure 4.20 Paths and their path metrics for the Viterbi algorithm
(received pairs Y = 11, 01, 11, ...; each branch carries its branch metric
in brackets and each node its running path metric).
message of N = 12 bits. All the discarded branches and all labels except
for the running path metrics have been omitted for the sake of simplicity.
In case two paths have the same metric, either of them is continued;
under such circumstances the choice of survivor is arbitrary. The
maximum likelihood path follows the thick line from a0 to a12, as shown
in figure 4.22. The final value of the path metric is 2, which shows that
there are at least two transmission errors present in Y.
Figure 4.22: Viterbi decoding of
Y = 11 01 11 00 01 10 00 11 11 10 11 00
The maximum likelihood path (metric 2) runs from a0 to a12; the corrected
sequence and the decoded message are
Y + E = 11 01 01 00 01 10 01 11 11 10 11 00
M = 1 1 0 1 1 1 0 0 1 0 0 0
From figure 4.22, we observe that at node a12 only one path has
arrived, with metric 2. This path is shown by a dark line. Note that since
this path has the lowest metric, it is the surviving path and the signal Y is
decoded from this path. Wherever this path is a solid line the message bit
is 0, and wherever it is a dotted line the message bit is 1.
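The decoding procedure above can be sketched directly for this (2,1) encoder (x1 = m ⊕ m1 ⊕ m2, x2 = m ⊕ m2). The message and the injected single error below are illustrative choices, not taken from the figure:

```python
def encode(bits):
    """Rate-1/2 convolutional encoder: x1 = m^m1^m2, x2 = m^m2."""
    m1 = m2 = 0
    out = []
    for m in bits:
        out += [m ^ m1 ^ m2, m ^ m2]
        m1, m2 = m, m1            # shift register update
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding with Hamming-distance metrics."""
    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]          # (m1, m2)
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    path = {s: [] for s in states}
    for i in range(0, len(received), 2):
        r1, r2 = received[i], received[i + 1]
        new_metric = {s: INF for s in states}
        new_path = {s: [] for s in states}
        for (m1, m2), pm in metric.items():
            if pm == INF:
                continue
            for m in (0, 1):
                x1, x2 = m ^ m1 ^ m2, m ^ m2
                cost = pm + (x1 != r1) + (x2 != r2)    # branch metric
                nxt = (m, m1)
                if cost < new_metric[nxt]:             # keep the survivor
                    new_metric[nxt] = cost
                    new_path[nxt] = path[(m1, m2)] + [m]
        metric, path = new_metric, new_path
    best = min(metric, key=metric.get)
    return path[best], metric[best]

msg = [1, 0, 1, 0, 0]                  # last two zeros flush the register
rx = encode(msg)
rx[2] ^= 1                             # inject a single channel error
decoded, final_metric = viterbi(rx)
print(decoded == msg, final_metric)    # True 1
```

The final metric equals the number of channel errors the decoder had to override, mirroring the reasoning about the path metric of 2 in figure 4.22.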
4.17.2 Metric Diversion Effect
For a large number of message bits to be decoded, the storage
requirement would be extremely large. This problem can be avoided by
exploiting the metric diversion effect: the metrics of incorrect paths
quickly diverge from that of the surviving path, so decisions on early
message bits can be released after a limited decoding depth. The metric
diversion effect is thus used for reducing the required memory storage.
4.17.3 Free distance and coding gain
The error detection and correction capability of the block and cy-
clic codes is dependent on the minimum distance, dmin between the code
vectors. But, in case of convolutional code the entire transmitted se-
quence is to be considered as a single code vector. Therefore, the free
distance (dfree) is defined as the minimum distance between the code
vectors. But, the minimum distance between the code vectors is same as
the minimum weight of the code vector. Hence, the free distance is equal
to minimum weight of the code vector.
Therefore,
Free distance dfree = Minimum distance
=Minimum weight of code vectors
If X represents the transmitted signal, then the free distance is
given by
dfree = [W (X)]min and X is non-zero
In this way, the minimum distance decides the capacity of the
block or cyclic codes to detect and correct errors, the free distance will
decide the error control capacity for the convolutional code.
Coding gain (A)
The coding gain (A) is defined as the ratio of (Eb/N0) of the
uncoded signal to (Eb/N0) of the coded signal required for the same error
probability. The coding gain is used for comparing different coding
techniques.
Coding gain A = (Eb/N0) uncoded / (Eb/N0) coded = r dfree / 2
where r = code rate and dfree = the free distance
Problem 1
The encoder of the figure below generates an all-zero sequence
which is sent over a binary symmetric channel. The received sequence is
01001000... There are two errors in this sequence (at the second and fifth
positions). Show that this double error can be detected and corrected by
application of the Viterbi algorithm.
Figure: convolutional encoder — the input feeds a shift register (s2, s3);
two mod-2 adders produce the output.
Solution
The trellis diagram for the encoder shown in figure is shown in
figure
Output
Current State 00 Next State
00 = a a = 00
11
11
01 = b b = 01
00
10
10 = c 01 c = 10
01
10 d = 11
11 = d
From the Viterbi diagram, we will be able to write the possible
paths for each state a4, b4, c4 and d4; the running path metric for each
path is as shown below.
The path (a0 - a1 - a2 - a3 - a4) has path metric equal to 2. Hence the
encoded signal corresponding to this path is given by
a4 → 00 00 00 00
This corresponds to the received signal 01 00 10 00.
Therefore, Received signal → 01 00 10 00
This shows that the Viterbi algorithm can correct the errors present
in the received signal.
Problem 2
For the convolutional encoder arrangement shown in figure draw
the state diagram and hence trellis diagram. Determine output digit
sequence for the data digits 1 1 0 1 0 1 0 0. What are the dimensions of
the code (n, k) and constraint length?
Solution
(i) To obtain the dimension of the code:
Observe that one message bit is taken at a time in the encoder of the figure, and three output bits are produced for it. Hence the dimension of the code is (n, k) = (3, 1). Since each message bit passes through the three stages m, m1 and m2, the constraint length is K = 3.
[Figure: rate-1/3 convolutional encoder with register stages m, m1, m2; the tap and two mod-2 adder outputs xi(1), xi(2), xi(3) are multiplexed into the output sequence.]
[Figure: state-transition diagram. States a = 00, b = 01, c = 10, d = 11; each branch carries its three output bits (000, 111, 010, 101, 001, 110, 011, 100).]
[Figure: state diagram for the four states a, b, c, d; a bold line represents a 0 input and a dotted line represents a 1 input.]
Problem 3
(ii) Draw the code tree, state diagram and trellis diagram.
Solution
Since g1 = (100), x1 = m
Since g2 = (111), x2 = m ⊕ m1 ⊕ m2
Since g3 = (101), x3 = m ⊕ m2
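The generator equations above can be applied directly to the data digits of Problem 2 with a short sketch (the shift register is assumed to start cleared):

```python
# Rate-1/3 convolutional encoder of Problems 2 and 3, using the generator
# equations x1 = m, x2 = m ^ m1 ^ m2, x3 = m ^ m2 (g1 = 100, g2 = 111,
# g3 = 101), where m1 and m2 are the previous two message bits.

def encode(bits):
    m1 = m2 = 0                     # register assumed cleared at start
    out = []
    for m in bits:
        out.append((m, m ^ m1 ^ m2, m ^ m2))   # (x1, x2, x3) per input bit
        m1, m2 = m, m1                          # shift the register
    return out

data = [1, 1, 0, 1, 0, 1, 0, 0]     # message digits from Problem 2
triplets = encode(data)
print(" ".join("".join(map(str, t)) for t in triplets))
```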
[Figure: the same rate-1/3 encoder with register stages m, m1, m2, two mod-2 adders and multiplexed outputs x1, x2, x3.]
[Figure: state-transition table. Current states m2m1: a = 00, b = 01, c = 10, d = 11; each branch to a next state is labelled with its three output bits (000, 111, 011, 100, 010, 101, 001, 110).]
ANALOG AND DIGITAL COMMUNICATION
[Figure: state diagram for the encoder, with branch labels 000, 110, 100, 010, 001 and 011 among the states a, b, c, d.]
[Figure: code tree for the encoder, starting from state a. From each node the upper branch corresponds to m = 0 and the lower branch to m = 1, and each branch is labelled with its three output bits.]
In the figure above we observe that the code tree repeats after the third stage. This is because each input bit influences the output for three successive bit intervals, equal to the constraint length of the encoder.
Amount of information Ik = log2 (1/Pk)
5. Define entropy
Entropy is defined as the average information produced by the source per individual message or symbol in a particular interval.
Entropy H = Σ (k = 1 to M) Pk log2 (1/Pk)
Properties of entropy:
1. H = 0 if Pk = 0 or 1.
2. When Pk = 1/M for all M symbols, the symbols are equally likely. For such a source the entropy is H = log2 M.
3. The upper bound on entropy is Hmax ≤ log2 M.
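A minimal numeric check of these properties (the probability vectors are illustrative):

```python
# Entropy of a discrete source, H = sum_k Pk * log2(1/Pk), used here to
# verify the properties listed above.
import math

def entropy(probs):
    # terms with p = 0 contribute nothing, by convention
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))   # equally likely: log2(4) = 2 bits
print(entropy([1.0]))                       # a certain event carries 0 bits
```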
The mutual information between Xi and Yj is
I(Xi ; Yj) = log2 [ P(Xi / Yj) / P(Xi) ] bits
Properties of mutual information:
(i) The mutual information is symmetric: I(X;Y) = I(Y;X)
(ii) I(X;Y) = H(X) - H(X/Y) = H(Y) - H(Y/X), where H(X/Y) and H(Y/X) are conditional entropies.
(iii) The mutual information is always non-negative: I(X;Y) ≥ 0
(iv) The mutual information is related to the joint entropy H(X,Y) by the relation
I(X;Y) = H(X) + H(Y) - H(X,Y)
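Property (iv) can be verified numerically for a small joint distribution (the probability table below is illustrative only):

```python
# Numerical check of I(X;Y) = H(X) + H(Y) - H(X,Y) for an illustrative
# 2x2 joint distribution P(X, Y).
import math

def H(probs):
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

pxy = [[0.4, 0.1],
       [0.1, 0.4]]                       # joint distribution P(X, Y)
px = [sum(row) for row in pxy]           # marginal of X
py = [sum(col) for col in zip(*pxy)]     # marginal of Y
I = H(px) + H(py) - H([p for row in pxy for p in row])
print(round(I, 4))                       # mutual information in bits, >= 0
```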
Review Questions
PART A
1. Define channel capacity
2. Differentiate: Uncertainty, information and entropy
3. State the properties of entropy
4. Find the entropy of a source emitting symbols x, y, z with probabilities of 1/5, 1/2, 1/3 respectively
5. A source emits four symbols with probabilities P0 = 0.4, P1 = 0.3, P2 = 0.2, P3 = 0.1. Find out the amount of information obtained due to these four symbols.
6. State the channel capacity theorem.
7. Find the entropy of an event of throwing a die.
8. State Shannon's first theorem
9. What is meant by self information?
10. State the syndrome properties
11. Define Hamming distance.
12. What are the reasons to use an interleaver in a turbo code?
13. Define constraint length
14. What is meant by cyclic code?
15. Define trellis diagram
16. Give the difference between linear block code and cyclic code.
17. What are Convolutional codes
18. What are the advantages of the Viterbi decoding technique?
19. Define turbo code.
PART B
1. State and prove the properties of mutual information
2. (a) Explain briefly the source coding theorem
(b) Given five symbols S0, S1,S2, S3 and S4 with their
respective probabilities 0.4,0.2,0.2,0.1,0.1. Use Huffman’s encoding
for symbols and find the average code word length. Also prove that it
satisfies source coding theorem
3. Explain in detail the Viterbi algorithm for decoding of convolutional codes with a suitable example
4. Consider the generation of the (7,4) cyclic code by the generator polynomial g(x) = 1 + x + x^3
(i) Calculate the code word for the message sequence (1001) and
5.2.1. History
• AT&T Bell Labs developed the first cellular telephone system in the late 1970s.
• The first AMPS system was deployed in Chicago in 1983, covering approximately 2,100 square miles.
• A total of 40 MHz spectrum in the 800 MHz band was allocated by the
FCC (Federal Communication Commission).
• In 1989, an additional 10 MHz (called 'Extended Spectrum') was allocated.
• Large cells and omni-directional BS antennas were used.
5.2.2. AMPS Frequency Allocation
• AMPS uses a 7-cell reuse pattern with cell splitting and sectoring (120
degrees).
• It requires S/I (Signal to Interference ratio) of 18 dB for satisfactory
system performance.
• It uses frequency modulation (FM) for radio transmission.
• Mobile - BS (reverse link) uses frequencies between 824 - 849 MHz.
• BS - Mobile (forward link) uses frequencies between 869 - 894 MHz.
• Separation for forward and reverse channel is 45 MHz.
• Figure 5.2 shows the complete Advanced Mobile Phone Service (AMPS) frequency spectrum.
[Figure 5.2: AMPS frequency spectrum (824-849 MHz), showing the A and B carrier blocks (33, 312, 21, 21, 312, 50 and 83 channels) and the channel numbering (991-1023, 313-799).]
Voice channel format (forward and reverse): fields of 101, 11, 40, 37, 11 and 40 bits, where
DOT 1 = 101-bit dotting sequence
DOT 2 = 37-bit dotting sequence
SYNC = synchronization word
WN = message word (N)
N = number of repeated message words
Figure 5.3 Control channel formats: (a) forward control channel, (b) reverse control channel
5.3.1. Introduction
• The development of GSM started in the early 1980s for Europe's mobile infrastructure.
• The first step was to establish a team with the title "Groupe Spécial Mobile" (hence the term "GSM", which today stands for Global System for Mobile Communications) to develop a set of common standards.
• GSM became popular very quickly because it provided improved speech quality and, through a uniform international standard, made it possible to use a single telephone number and mobile unit around the world.
Figure 5.5 GSM protocols are basically divided into three layers
[Figure: call routing with numbered signalling steps (1-12) among the MS, BSS, MSC, GMSC and the public network.]
(b) Handover
• In a cellular network, the radio and fixed voice connections are not permanently allocated for the duration of a call. Handover, or handoff as it is called in North America, means switching an ongoing call to a different channel or cell.
• There are four different types of handovers in GSM, which involve
transferring a connection between:
• Channels (timeslots) in the same cell (intra-BTS handover)
• Cells under the control of the same BSC (inter-BTS handover)
• Cells under the control of different BSCs, but belonging to the
same MSC (inter-BSC handover)
• Cells under the control of different MSCs (inter-MSC handover)
• The first two types of handover involve only one base station
controller (BSC). To save signalling bandwidth, they are managed by
the BSC without involving the MSC, except to notify it upon completion
of the handover.
• The last two types of handover are handled by the MSCs involved.
Note:
• Handovers can be initiated by either the BSC or the MSC (as a means
of traffic load balancing).
• During its idle timeslots, the mobile scans the broadcast control
channel of up to 16 neighbouring cells, and forms a list of the six
best candidates for possible handover, based on the received signal
strength.
• This information is passed to the BSC and MSC, at least once per sec-
ond, and is used by the handover algorithm.
(c) Short Message Service (SMS)
• SMS offers message delivery (similar to two-way paging) that is guaranteed to reach the MS. If the GSM telephone is not turned on, the message is held for later delivery. Each time a message is delivered to an MS, the network expects to receive an acknowledgement from this
MULTI-USER RADIO COMMUNICATION
(i) Authentication
• Authentication normally takes place when the MS is turned on, and with each incoming and outgoing call.
• A verification that the Ki (security code) stored in the AuC matches the Ki stored in the SIM card of the MS completes this process.
• The user must key in a PIN code on the handset in order to activate the hardware before this automatic procedure can start.
5.4 CDMA
5.4.1 Introduction
What is CDMA?
• CDMA (Code-Division Multiple Access) is a channel access method
used by various radio communication technologies.
• It is a form of multiplexing, which allows numerous signals to occupy a single transmission channel, optimizing the use of the available bandwidth.
• The technology is used in ultra-high-frequency (UHF) cellular telephone
systems in the 800-MHz and 1.9-GHz bands.
5.4.2 Principle Operation
CDMA employs analog-to-digital conversion (ADC) in combination
with spread spectrum technology. Audio input is first digitized into binary
elements. The frequency of the transmitted signal is then made to vary
according to a defined pattern (code), so it can be intercepted only by a
receiver whose frequency response is programmed with the same code, so
it follows exactly along with the transmitter frequency.
There are trillions of possible frequency-sequencing codes, which enhances privacy and makes cloning difficult.
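The spreading idea can be sketched as follows. The 5-chip PN code and the majority-vote despreader are illustrative assumptions, not the codes used by any particular system:

```python
# Minimal direct-sequence spreading sketch: each data bit is XOR-ed with a
# short pseudo-noise chip sequence, and the receiver recovers the data by
# re-applying the same code (illustrative 5-chip code, assumed).
pn = [1, 0, 0, 1, 0]                  # chip sequence, 5 chips per bit

def spread(bits, code):
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        block = [chips[i + j] ^ code[j] for j in range(n)]
        out.append(1 if sum(block) > n // 2 else 0)   # majority vote
    return out

data = [0, 1, 1, 0]
tx = spread(data, pn)
print(despread(tx, pn))   # recovers the original data bits
```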
5.4.3 General Specification of CDMA
• Rx: 869-894 MHz; Tx: 824-849 MHz
• 20 channels spaced 1250 kHz apart (798 users/channel)
• QPSK / Offset QPSK (OQPSK) modulation scheme
• 1.2288 Mbps bit rate
• IS-95 standard
• Operates at both 800 and 1900 MHz frequency bands.
[Figure: synchronous DS-CDMA on the forward link; the chips of A and B are time-aligned, so there is less interference at the A station.]
2. Asynchronous DS-CDMA
• In an asynchronous CDMA system, orthogonal codes have poor cross-correlation.
• Unlike the forward link from the base station, the link from the mobile station to the base station is asynchronous.
• In an asynchronous system, mutual interference increases somewhat, so other codes with better cross-correlation, such as PN codes or Gold codes, are used.
[Figure: asynchronous DS-CDMA on the reverse (up) link; the chip timing of A and B is not aligned, so the signals from A and B interfere with each other, giving large interference at the A station.]
[Figure 5.10: CDMA spreading example with spreading code 10010 and result 01101.]
[Figure: frequency synthesizer; a PN generator (driven by a PN clock) supplies the hop word to the synthesizer/oscillator, which produces the carrier fc in 1.25 MHz channels.]
N = i² + ij + j²
[Figure 5.15: locating the nearest co-channel cells for i = 3, j = 2, with the 60° turn between the two moves marked at cell A.]
The process of finding the tier with the nearest co-channel cells is as follows, and is shown in figure 5.15:
(i) Move i cells through the centers of successive cells.
(ii) Turn 60° in a counter-clockwise direction.
(iii) Move j cells forward through the centers of successive cells.
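The steps above fix the cluster size N = i² + ij + j², which can be evaluated for a few common shift pairs. The relation Q = √(3N) for hexagonal geometry is standard but not stated in the text above, so it is included here only as a cross-check:

```python
# Cluster size for hexagonal frequency reuse, N = i^2 + i*j + j^2, and the
# co-channel reuse ratio Q = D/R = sqrt(3N) implied by hexagonal geometry.
import math

def cluster_size(i, j):
    return i * i + i * j + j * j

for i, j in [(1, 1), (2, 1), (3, 2)]:
    N = cluster_size(i, j)
    Q = math.sqrt(3 * N)                 # co-channel reuse ratio
    print(i, j, N, round(Q, 2))          # N = 3, 7, 19 for these pairs
```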
5.5.3.1 Increasing Capacity
In time, as more customers use the system, traffic may build up so
that there are not enough frequency bands assigned to a cell to handle its
calls. A number of approaches have been used to cope with this situation,
including the following:
5.5.3.2 Adding new channels
Typically, when a system is set up in a region, not all of the
channels are used, and growth and expansion can be managed in an
orderly fashion by adding new channels.
5.5.3.3 Frequency borrowing
In the simplest case, frequencies are taken from adjacent cells by
congested cells. The frequencies can also be assigned to cells dynamically.
5.5.4. Interference
The two major types of interference that occur within the cellular telephone system are co-channel interference and adjacent channel interference.
[Figure: co-channel interference; the base stations of cell A in cluster 1 and cluster 2 both use frequency f1.]
the distance to the center of the nearest co-channel cell (D), as shown in figure 5.17. Increasing the D/R ratio (sometimes called the co-channel reuse ratio) increases the spatial separation between co-channel cells relative to the coverage distance. Therefore, increasing the co-channel reuse ratio (Q) can reduce co-channel interference.
For a hexagonal geometry, the co-channel reuse ratio is determined by
Q = D/R
where Q = co-channel reuse ratio (unitless)
D = distance to the center of the nearest co-channel cell (kilometers)
R = cell radius (kilometers)
The smaller the value of Q, the larger the channel capacity. However, a large value of Q reduces co-channel interference and thus improves the overall transmission quality.
[Figure 5.19: (a) cell splitting, (b) 120° cell sectoring (a sector receives a strong f1 signal plus a weak f2 signal) and (c) 60° cell sectoring with sectors S1-S6.]
5.5.6. Channel assignment or allocation
Channel assignment affects the performance of the system, espe-
cially when it comes to handoffs. There are several channel assignment
maintain the active call because the signal is too weak (noise level
becomes high relative to signal level).
• Handoff threshold PThreshold
• This power limit is usually selected to be a few dB (5 dB to 10 dB) above the minimum acceptable signal level needed to maintain the call.
• The margin Δ = PThreshold - PMinimum-to-maintain-call should not be too large or too small.
• If it is too large, unnecessary handoffs will occur, because the handoff threshold is high and will be reached very often even while the mobile phone is still deep inside the serving cell.
• Unnecessary handoffs put a lot of strain (a lot of work) on the MSC and the system, and reduce network capacity because of the need for free channels in other cells.
• If it is too small, calls may get dropped before a successful handoff takes place, because not enough time is available for the handoff when the signal power drops very quickly from the handoff threshold to the minimum power needed to maintain a call.
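The margin logic can be sketched as follows; the numeric threshold values are assumptions chosen only to illustrate the Δ = PThreshold - PMinimum rule:

```python
# Illustrative handoff decision based on the margin described above.
# The dBm values are assumptions for the sketch, not standard figures.
P_MIN = -100.0          # minimum usable signal to maintain the call, dBm
MARGIN = 8.0            # handoff margin, a few dB (5 dB to 10 dB)
P_THRESHOLD = P_MIN + MARGIN

def needs_handoff(rssi_dbm):
    """Trigger a handoff when the signal falls below the threshold
    while still being above the minimum needed to maintain the call."""
    return P_MIN <= rssi_dbm < P_THRESHOLD

print(needs_handoff(-95.0))    # below threshold, call still alive: handoff
print(needs_handoff(-80.0))    # signal strong enough: no handoff
```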
Figures 5.20 and 5.21 show two handoff situations. In the first case, a successful handoff takes place and the mobile phone is switched from one tower to another, while in the second case the signal power drops to the minimum value needed for maintaining a call and the call is dropped without a handoff.
FDMA, TDMA and CDMA are the three major multiple access
techniques that are used to share the available bandwidth in a wireless
communication system.
Depending on how the available bandwidth is allocated to the users
these techniques can be classified as narrowband and wideband systems.
Narrowband Systems
The term narrowband is used to relate the bandwidth of the
single channel to the expected coherence bandwidth of the channel. The
available spectrum is divided in to a large number of narrowband
channels. The channels are operated using FDD. In narrow band FDMA,
a user is assigned a particular channel which is not shared by other users
in the vicinity and if FDD is used then the system is called FDMA/FDD.
Narrowband TDMA allows users to share the same channel but allocates a unique time slot to each user on the channel, thus separating a small number of users in time on a single channel. For narrowband TDMA,
there generally are a large number of channels allocated using either FDD
or TDD, each channel is shared using TDMA. Such systems are called
TDMA/FDD and TDMA/TDD access systems.
Wideband Systems
In wide band systems, the transmission bandwidth of a single
channel is much larger than the coherence bandwidth of the channel.
Thus, multipath fading does not greatly affect the received signal within
a wideband channel, and frequency selective fades occur only in a small
fraction of the signal bandwidth.
5.6.1. Frequency Division Multiple Access
This was the initial multiple-access technique for cellular systems, in which each individual user is assigned a pair of frequencies while making or receiving a call, as shown in figure 5.22. One frequency is used for the downlink and the other for the uplink. This is called frequency division duplexing (FDD). The allocated frequency pair is not used in the same cell or adjacent cells during the call, so as to reduce co-channel interference. Even though the user may not be talking, the spectrum cannot be reassigned as long as a call is in place. Different users can use the same frequency in the same cell only if they transmit at different times.
If an FDMA channel is not in use, it sits idle and cannot be used by other users to increase or share capacity. After the assignment of the voice channel, the BS and the MS transmit simultaneously and continuously. The bandwidths of FDMA systems are generally narrow, i.e. FDMA is usually implemented in a narrowband system. The symbol time is large compared to the average delay spread. The complexity of FDMA mobile systems is lower than that of TDMA mobile systems. FDMA requires tight filtering to minimize adjacent channel interference.
FDMA/FDD in AMPS
The first U.S. analog cellular system, AMPS (Advanced Mobile Phone System), is based on FDMA/FDD. A single user occupies a single channel while the call is in progress, and the single channel is actually two simplex channels which are frequency duplexed with a 45 MHz split. When a call is completed, or when a handoff occurs, the channel is vacated so that another mobile subscriber may use it. Multiple or simultaneous users are accommodated in AMPS by giving each user a unique channel. Voice signals are sent on the forward channel from the base station to the mobile unit, and on the reverse channel from the mobile unit to the base station. In AMPS, analog narrowband frequency modulation (NBFM) is used to modulate the carrier.
FDMA/TDD in CT2
Using FDMA, CT2 system splits the available bandwidth into
radio channels in the assigned frequency domain. In the initial call setup,
the handset scans the available channels and locks on to an unoccupied
channel for the duration of the call. Using TDD (Time Division Duplexing),
the call is split into time blocks that alternate between transmitting and
receiving.
FDMA and the Near-Far Problem
The near-far problem is one of detecting or filtering out a weaker signal amongst stronger signals. The near-far problem is particularly difficult in CDMA systems, where transmitters share transmission frequencies and transmission time. In contrast, FDMA and TDMA systems are less vulnerable. FDMA systems offer different kinds of solutions to the near-far challenge. Here, the worst case to consider is recovery of a weak signal in a frequency slot next to a strong signal. Since both signals are present simultaneously as a composite at the input of a gain stage, the gain is set according to the level of the stronger signal; the weak signal could be lost in the noise floor. Even if subsequent stages have a low enough noise floor to provide
5.6.2 Time Division Multiple Access
In digital systems, continuous transmission is not required because users do not use the allotted bandwidth all the time. In such cases, TDMA is a complementary access technique to FDMA. Global System for Mobile communications (GSM) uses the TDMA technique. In
TDMA, the entire bandwidth is available to the user but only for a finite
period of time. In most cases the available bandwidth is divided into fewer
channels compared to FDMA and the users are allotted time slots during
which they have the entire channel bandwidth at their disposal, as shown
in figure 5.23.
beam antenna. These areas may be served by the same frequency or different frequencies. However, for limited co-channel interference, it is required that the cells be sufficiently separated. This limits the number of cells a region can be divided into, and hence limits the frequency reuse factor. A more advanced approach can further increase the capacity of the network. This technique would enable frequency reuse within the cell. In a practical cellular environment it is improbable to have just one transmitter fall within the receiver beamwidth. Therefore it becomes imperative to use other multiple access techniques in conjunction with SDMA. When different areas are covered by the antenna beam, frequency can be reused, in which case TDMA or CDMA is employed; for different frequencies, FDMA can be used.
5.6.4 Code Division Multiple Access
In CDMA, the same bandwidth is occupied by all the users; however, they are all assigned separate codes, which differentiate them from each other (shown in figure 5.24). CDMA utilizes a spread spectrum technique in which a spreading signal is used to spread the narrowband message signal. The spreading signal is a pseudo-random noise (PN) code. Each user is given his own codeword, which is orthogonal to the codes of other users, and in order to detect the user, the receiver must know the codeword used by the transmitter. There are, however, two problems in such systems, which are discussed in the sequel.
1. CDMA/FDD in IS-95
In this standard, the frequency range is 869-894 MHz (Rx) and 824-849 MHz (Tx). In such a system, there are a total of 20 channels and 798 users per channel. For each channel, the bit rate is 1.2288 Mbps. For orthogonality, it usually combines 64 Walsh-Hadamard codes and an m-sequence.
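The Walsh-Hadamard codes mentioned above can be generated recursively; this sketch only illustrates their mutual orthogonality, not the full IS-95 channelization:

```python
# Walsh-Hadamard codes (used for orthogonality in IS-95) built by the
# standard recursive doubling; distinct rows have zero dot product.

def hadamard(n):
    """n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

H8 = hadamard(8)
# any two distinct rows are orthogonal
print(sum(a * b for a, b in zip(H8[1], H8[5])))   # 0
```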
2. CDMA and the Self-interference Problem
In CDMA, self-interference arises from the presence of delayed replicas of the signal due to multipath. The delays cause the spreading sequences of the different users to lose their orthogonality, as by design they are orthogonal only at zero phase offset. Hence, in despreading a given user's waveform, nonzero contributions to that user's signal arise from the transmissions of the other users in the network. This is distinct from both TDMA and FDMA, wherein for reasonable time or frequency guard bands, respectively, orthogonality of the received signals can be preserved.
3. CDMA and Near-Far Problem
The near-far problem is a serious one in CDMA. This problem arises
from the fact that signals closer to the receiver of interest are received with
smaller attenuation than are signals located further away.
Therefore the strong signal from the nearby transmitter will mask the weak signal from the remote transmitter. In TDMA and FDMA this is not a problem, since mutual interference can be filtered out. In CDMA, however, the near-far effect combined with imperfect orthogonality between codes (e.g. due to different time shifts) leads to substantial interference. Accurate and fast power control appears essential to ensure reliable operation of multiuser DS-CDMA systems.
5.6.5 Hybrid Spread Spectrum Techniques
The hybrid combinations of FHMA, CDMA and SSMA result in hybrid spread spectrum techniques that provide certain advantages. These hybrid techniques are explained below.
Introduction
• A satellite can 'see' a very large area of the earth. Hence the satellite can form the star point of a communications net, linking many users together simultaneously, including users widely separated geographically.
• A satellite communication system is economical only where the system is used continuously by a large number of users.
Block diagram of a satellite communication system
• The block diagram of a satellite communication system is as shown in figure 5.25.
[Figure 5.25: a transmitting earth station sends the 6 GHz uplink through a highly directional parabolic dish antenna to the satellite (transponder), which returns the 4 GHz downlink to the receiving earth station.]
Transponder
• Thus a satellite has to receive, process and transmit the signal. All these functions are performed by a unit called the "satellite transponder".
• A communication satellite has two sets of transponders, each set having 12 transponders, making a total of 24 transponders.
• Each transponder has a bandwidth of 36 MHz which is sufficient to
handle at least one TV-channel
• The uplink signal received by a transponder is weak and downlink
signal transmitted by the transponder is strong. Therefore, to avoid
interference between them, the uplink and downlink frequencies are
selected to be of different values.
• The operation of the satellite takes place at very high signal frequencies, in the microwave frequency range.
The typical bands of frequencies used for communication satellites are as follows:
1. C-band: 4/6 GHz
2. Ku-band: 11/14 GHz
3. Ka-band: 20/30 GHz
• One of the advantages of operating at such a high frequency is
reduction in the size of antennas and other components of the system.
• Multiple access methods such as FDMA, TDMA and CDMA are used
to allow the access of a satellite to the maximum number of earth
stations.
• The power requirement of a satellite is satisfied by solar panels and a
set of nickel cadmium batteries, carried by the satellite itself.
5.8 SATELLITE SYSTEM LINK MODELS
[Figure 5.26: block diagram of the uplink model. Baseband inputs are multiplexed, modulated onto a carrier, band-pass filtered, up-converted (mixer with local oscillator and BPF) and amplified by a high-power amplifier before transmission.]
Multiplexer
The baseband signals are first multiplexed, i.e. combined into a single composite signal, which is applied as the input (modulating signal) to the modulator.
Modulator
The modulator mixes the modulating signal with a high-frequency carrier signal and produces the modulated signal. This modulation takes place at a lower frequency than the actual uplink frequency.
BPF
The band-pass filter passes the intermediate frequency (IF) signal to the up-converter.
Up-converter
• It consists of a mixer, a local oscillator and a BPF. Since modulation takes place at a lower frequency than the actual uplink frequency, a frequency up-converter is used to raise the frequency to the level of the uplink frequency.
• The mixer output contains several frequency components, out of which the "sum" component is selected by the BPF that follows the mixer.
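The "sum" selection can be illustrated numerically; the IF and local-oscillator values below are assumptions chosen so that the sum falls at the 6 GHz uplink frequency:

```python
# Up-conversion sketch: the mixer produces sum and difference products,
# and the BPF after it keeps the sum. The IF/LO split below is an
# assumption for illustration (frequencies in GHz).
f_if = 0.07                             # intermediate frequency (assumed)
f_lo = 5.93                             # local oscillator (assumed)
products = (f_lo + f_if, f_lo - f_if)   # the two main mixer products
f_uplink = max(products)                # BPF selects the sum component
print(round(f_uplink, 3))               # 6.0 GHz uplink
```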
Power Amplifier
The up-converted signal is then passed through a power amplifier to raise it to an adequate power level.
Antenna
• The transmitter output is coupled to the antenna. The antenna
transmits this signal at the uplink frequency to the satellite
transponder
• A highly directional parabolic dish antenna is used as a transmitting
antenna
5.8.2 Transponder
The combination of a transmitter and receiver in the satellite is known as a transponder. The block diagram of the transponder is as shown in figure 5.27.
[Figure 5.27: satellite transponder. The 6 GHz uplink received by the satellite antenna passes through a diplexer to a low-noise amplifier, is translated to 4 GHz by a mixer driven by a 2 GHz local oscillator, and is retransmitted through a high-power amplifier and the diplexer.]
[Figure: earth-station receiver (downlink) model. Antenna → BPF → LNA → down-converter (mixer, local oscillator and BPF) → demodulator → de-multiplexer → baseband outputs. The transmitter section (MUX → modulator → BPF → mixer → BPF → power amplifier) is also shown.]
• It states that during equal intervals of time, a satellite will sweep out equal areas in the orbital plane, focused at the barycentre:
Area A1 = Area A2
• Kepler's third law relates the semi-major axis to the orbital period, a = A p^(2/3), where
a = semi-major axis in kilometers
p = mean solar earth days
A = a unitless constant
[Figure 5.32: synchronous orbit lying in the equatorial plane of the earth.]
Disadvantages
• Powerful rockets are required to launch a satellite in the orbit
• The satellites placed in these orbits cannot establish communication in
the polar region of the earth
2. Polar Orbit
• It passes over the N and S poles.
• Its height is 900-1000 km above the earth.
• It is used for navigation and remote sensing satellites.
[Figure 5.33: polar orbit passing over the N and S poles, crossing the equator.]
3. Inclined Orbits
• An inclined orbit circles the earth at a particular angle, as shown in figure 5.34.
• It provides communication coverage of the polar regions.
• It is used for domestic communication.
• This orbit is not used very frequently. The height of the inclined orbit is generally set to cover the area of interest.
[Figure 5.34: inclined orbit crossing the equatorial plane at an angle.]
Band | Uplink (GHz) | Downlink (GHz) | Bandwidth (GHz) | Application
2. S-band | 4 | 2 | 0.5 | Used by Doordarshan to transmit 14 different language channels
3. C-band | 6 | 4 | 0.5 | TV broadcast
4. X-band | 8 | 6 | 0.5 | Ship and aircraft
5. Ku-band | 14 | 11 | 0.5 | TV broadcast, non-military applications
6. Ka-band (commercial) | 30 | 20 | 3 | Commercial broadcasting
7. Ka-band (military) | 31 | 21 | 1 | Military
8. V-band | 50 | 40 | 1 | Non-military applications
5.14 BLUETOOTH
5.14.1 Introduction
• Call home from a remote location to turn appliances on and off, set the alarm, and monitor activity.
• Cable replacement
• Ad hoc networking
• Bridging of networks
The Bluetooth protocol stack consists of core protocols and adopted protocols. The core protocols include:
1. Radio
2. Baseband
[Figure: Bluetooth protocol stack. Core protocols: Bluetooth Radio, Baseband, Link Manager Protocol (LMP) and SDP; a cable replacement protocol; telephony control protocols: TCS BIN and AT commands; adopted protocols: PPP, IP (Internet Protocol), UDP/TCP, OBEX, WAP, WAE and vCard/vCal; plus Audio, Control and the Host-Controller Interface.]
Usage Models
File transfer: The file transfer usage model supports the transfer of
directories, files, documents, images, and streaming media formats.
This usage model also includes the capability to browse folders on a
remote device.
Headset: The headset can act as a remote device’s audio input and
output interface.
Piconet
Scatternet
ADVANTAGES
• Very robust, as the radio hops faster and uses shorter packets
DISADVANTAGES
TECHNICAL SPECIFICATIONS
802.11 | Bluetooth
Represents Internet | Represents faux internet
Already proved itself | Still to prove itself
Widespread connectivity | Connects at close proximity
Review Questions
PART-A
1. What is AMPS, and in what way does it differ from D-AMPS?
2. What are 1G and 2G?
3. Define MS,BS and MSC.
4. What is meant by frequency reuse?
5. Define handoff and the modes of handoff.
6. What are the types of Hand-off?
7. Write the principles of cellular network.
8. Define cell, cluster.
9. What do you mean by foot print, dwell time?
10. Define Frequency reuse ratio.
11. Define FDMA, TDMA and CDMA.
12. State the principle of CDMA?
13. Write the goal of GSM-Standard.
14. What is mobility management ?
15. What is the maximum number of callers in each cell in a GSM?
PART-B
1. Explain briefly the principle of cellular networks.
2. Compare TDMA, FDMA and CDMA.
3. Discuss on 1G of mobile network (or) AMPS.
4. Discuss the effects of multipath propagation on CDMA-technique.
5. Enumerate on (i) GSM architecture
(ii) GSM-Channels
6. Explain code division multiple Access (CDMA) and compare its
performance with TDMA.
7. Explain in detail about the GSM-logical channels.
8. Write short notes on
(i) Frequency reuse
(ii) Channel assignment
(iii) Hand-off
9. Write short notes on Bluetooth technology.
10. Discuss various multiple access techniques.