
ANALOG AND DIGITAL

COMMUNICATION
Prepared according to Anna University syllabus R-2017
(Common to III semester - CSE/IT)

G. Elumalai, M.E.,(Ph.D)
Assistant Professor (Grade I)
Department of Electronics and Communication Engineering
Panimalar Engineering College
Chennai.

Er. M. Jaiganesh, M.E.,


Assistant Professor
Department of Electronics and Communication Engineering
Panimalar Engineering College
Chennai.

SREE KAMALAMANI PUBLICATIONS


CHENNAI
SREE KAMALAMANI PUBLICATIONS (P) Ltd.

Published by SREE KAMALAMANI PUBLICATIONS.


New No. AJ. 21, old No. AJ. 52, Plot No. 2614,
4th Cross, 9th Main Road, Anna Nagar -600 040,
Chennai, Tamilnadu, India
Landline: 91-044-42170813,
Mobile: 91-9840795803
Email id: skmpulbicationsmumdad@gmail.com
1st Edition 2014
2nd Revised Edition 2016
Copyright © 2014, by Sree Kamalamani Publications.
No part of this publication may be reproduced or distributed in any form or by any
means, electronic, mechanical, photocopying, recording or otherwise or stored in a
database or retrieval system without the prior written permission of the publishers.

This edition can be exported from India only by the Publishers,


Sree Kamalamani Publications.

ISBN (13 digits): 978-93-85449-12-3

Information contained in this work has been obtained by Sree Kamalamani Publications from sources believed to be reliable. However, neither Sree Kamalamani Publications nor its authors guarantee the accuracy or completeness of any information published herein, and neither Sree Kamalamani Publications nor its authors shall be responsible for any errors, omissions, or damages arising out of use of this information. This work is published with the understanding that Sree Kamalamani Publications and its authors are supplying information but are not attempting to render engineering or other professional services. If such services are required, the assistance of an appropriate professional should be sought.

Typeset & Coverpage :


Sree Kamalamani Publications
New No. AJ. 21, Old No. AJ. 52, Plot No 2614,
9th Main, 4th cross, Anna Nagar-600 040
Chennai, Tamilnadu, India.
Landline: 91-044-42170813,
Mobile: 91-9840795803
About the Author
G. Elumalai, M.E., is working as an Assistant Professor (Grade I) in the Department of Electronics and Communication Engineering, Panimalar Engineering College, Chennai. He obtained his B.E. in Electronics and Communication Engineering and M.E. in Applied Electronics, and is pursuing his Ph.D. in Wireless Sensor Networks. His areas of interest are communication systems, digital communication, digital signal processing and wireless sensor networks. He has more than 13 years of experience.

M. Jaiganesh, M.E., is working as an Assistant Professor in the Department of Electronics and Communication Engineering, Panimalar Engineering College, Chennai. He obtained his B.E. in Electronics and Communication Engineering and M.E. in Computer and Communication. His areas of interest are communication systems, digital communication, optical communication and embedded systems. He has more than 4 years of experience.
PREFACE

Dear Students,
We are extremely happy to present the book "Analog and Digital Communication" to you. This book has been written strictly as per the revised syllabus (R-2017) of Anna University. We have divided the subject into five units so that the topics can be arranged and understood properly. The topics within the units have been arranged in a proper sequence to ensure a smooth flow of the subject.

Unit I - Introduces the basic concepts of communication, the need for modulation and the different types of analog modulation (amplitude modulation, frequency modulation and phase modulation).

Unit II - Deals with the basic concepts of digital communication, which include ASK, FSK, PSK, QPSK and QAM.

Unit III - Discusses the concept of data communication and various pulse modulation techniques.

Unit IV - Concentrates on various techniques for error control coding.

Unit V - Describes multiuser radio communication.



A large number of solved university examples and university questions have been included in each unit, so we are sure that this book will cater to all your needs for this subject. We have made every possible effort to eliminate all the errors in this book. However, if you find any, please let us know, because that will help us to improve further.
G.Elumalai
M.Jaiganesh
EC8394 ANALOG AND DIGITAL COMMUNICATION L T P C 3 0 0 3

UNIT I ANALOG COMMUNICATION


Noise: Source of Noise - External Noise- Internal Noise- Noise Calculation.
Introduction to Communication Systems: Modulation – Types - Need for Modulation.
Theory of Amplitude Modulation - Evolution and Description of SSB Techniques - Theory
of Frequency and Phase Modulation – Comparison of various Analog Communication
System (AM – FM – PM).

UNIT II DIGITAL COMMUNICATION

Amplitude Shift Keying (ASK) – Frequency Shift Keying (FSK) Minimum Shift
Keying (MSK) –Phase Shift Keying (PSK) – BPSK – QPSK – 8 PSK – 16 PSK - Quadrature
Amplitude Modulation (QAM) – 8 QAM – 16 QAM – Bandwidth Efficiency– Comparison of
various Digital Communication System (ASK– FSK – PSK – QAM).

UNIT III DATA AND PULSE COMMUNICATION

Data Communication: History of Data Communication - Standards Organiza-


tions for Data Communication- Data Communication Circuits - Data Communication
Codes - Error Detection and Correction Techniques - Data communication Hardware -
serial and parallel interfaces. Pulse Communication: Pulse Amplitude Modulation (PAM)
– Pulse Time Modulation (PTM) – Pulse code Modulation (PCM) - Comparison of various
Pulse Communication System (PAM – PTM – PCM).

UNIT IV SOURCE AND ERROR CONTROL CODING

Entropy, Source encoding theorem, Shannon fano coding, Huffman coding,


mutual information, channel capacity, channel coding theorem, Error Control Coding,
linear block codes, cyclic codes,
convolution codes, viterbi decoding algorithm.

UNIT V MULTI-USER RADIO COMMUNICATION

Advanced Mobile Phone System (AMPS) - Global System for Mobile Communica-
tions (GSM) - Code division multiple access (CDMA) – Cellular Concept and Frequency
Reuse - Channel Assignment and Handoff - Overview of Multiple Access Schemes - Satellite
Communication - Bluetooth
TABLE OF CONTENTS

UNIT – I ANALOG COMMUNICATION 1.1-1.120

1.1 Introduction 1.2


1.2 Noise 1.5
1.3 Introduction to communication system 1.12
1.4 Modulation 1.16
1.5 Need for modulation 1.17
1.6 Classifications of modulation 1.20
1.7 Some important definitions related to
communication 1.21
1.8 Theory of Amplitude modulation 1.24
1.9 Generation of SSB 1.58
1.10 AM – Transmitters 1.65
1.11 AM Super heterodyne receiver with its characteristic
Performance 1.68
1.12 Performance characteristics of a receiver 1.72
1.13 Theory of Frequency and Phase modulation 1.75
1.14 Comparison of various analog communication
system 1.103
Solved two mark questions 1.106
Review Questions 1.117-1.120

UNIT – II DIGITAL COMMUNICATION 2.1-2.84

2.1 Introduction 2.2


2.2 Digital Transmission system 2.3
2.3 Digital Radio 2.4
2.4 Information capacity 2.5
2.5 Trade of between, Bandwidth and SNR 2.7
2.6 M-ary encoding 2.10
2.7 Digital Continuous wave modulation technique 2.10

2.8 Amplitude shift keying (or) Digital Amplitude


Modulation (or) OOK – System 2.12
2.9 Frequency shift keying 2.18
2.10 Minimum shift keying (or) continuous phase
frequency shift keying 2.26
2.11 Phase shift keying 2.27
2.12 Differential Phase shift keying 2.37
2.13 Quadrature Phase shift keying 2.41
2.14 8 PSK System 2.49
2.15 16 PSK System 2.56
2.16 Quadrature Amplitude modulation 2.57
2.17 16 - QAM 2.61
2.18 Carrier recovery (phase referencing) 2.66
2.19 Clock recovery circuit 2.70
2.20 Comparison of various digital
communication system 2.72
Solved two mark questions 2.74
Review Questions 2.83-2.84

UNIT – III DATA AND PULSE COMMUNICATION 3.1-3.128

3.1 Introduction 3.2


3.2 History of data communication 3.3
3.3 Components of Data communication systems 3.4
3.4 Standard organization for data communication 3.6
3.5 Data communication circuits 3.7
3.6 Data transmission 3.8
3.7 Configurations 3.13
3.8 Topologies 3.14
3.9 Transmission modes 3.15
3.10 Data communication codes 3.18
3.11 Introduction to error detection and
correction techniques 3.25
3.12 Error detection techniques 3.28
3.13 Error correction techniques 3.45

3.14 Data communication hardware 3.51


3.15 Serial interface 3.63
3.16 Centronics – Parallel interface 3.72
3.17 Introduction to Pulse modulation 3.76
3.18 Pulse Amplitude Modulation (PAM) 3.80
3.19 Pulse Width Modulation (PWM) 3.83
3.20 Pulse Position Modulation (PPM) 3.84
3.21 Pulse Code Modulation (PCM) 3.84
3.22 Differential Pulse Code Modulation (DPCM) 3.104
3.23 Delta Modulation (DM) 3.107
3.24 Adaptive Delta Modulation (ADM) 3.111
3.25 Comparison of various pulse communication system 3.115
3.26 Comparison of various source coding methods 3.117
Solved two mark questions 3.119
Review Questions 3.126-3.128

UNIT – IV SOURCE AND ERROR CONTROL CODING 4.1-4.138

4.1 Introduction 4.2


4.2 Entropy (or) average information (H) 4.6
4.3 Source coding to increase average information
per bit 4.18
4.4 Data compaction 4.20
4.5 Shannon fano coding algorithm 4.20
4.6 Huffman coding algorithm 4.24
4.7 Mutual information 4.39
4.8 Channel capacity 4.45
4.9 Maximum entropy for continuous channel 4.46
4.10 Channel coding theorem 4.47
4.11 Error control codings 4.57
4.12 Linear Block codes 4.59
4.13 Hamming codes 4.61
4.14 Syndrome decoding for Linear block codes 4.69
4.15 Cyclic codes 4.88
4.16 Convolutional codes 4.100
4.17 Decoding methods of Convolutional codes 4.113

Solved two mark questions 4.130


Review Questions 4.137-4.138

UNIT – V MULTI-USER RADIO COMMUNICATION 5.1-5.78

5.1 Introduction 5.2


5.2 Advanced Mobile Phone Systems (AMPS) 5.4
5.3 Global system for mobile - GSM (2G) 5.8
5.4 CDMA 5.19
5.5 Cellular network 5.25
5.6 Multiple access techniques for wireless
Communication 5.37
5.7 Satellite communication 5.47
5.8 Satellite Link system Models 5.48
5.9 Earth station (or) ground station 5.52
5.10 Kepler's laws 5.54
5.11 Satellite Orbits 5.56
5.12 Satellite Elevation categories 5.58
5.13 Satellite frequency plans and allocation 5.60
5.14 Bluetooth 5.60
Solved two mark questions 5.75-5.77
Review Questions 5.78

Noise: Source of Noise - External Noise- Internal Noise- Noise


Calculation. Introduction to Communication Systems: Modulation –
Types - Need for Modulation. Theory of Amplitude Modulation - Evo-
lution and Description of SSB Techniques - Theory of Frequency and
Phase Modulation – Comparison of various Analog Communication
System (AM – FM – PM).
UNIT 1 - ANALOG COMMUNICATION


1.1 INTRODUCTION

• Communication is the process of establishing a connection (or link) between two points for information exchange.

• The science of communication involving long distances is called telecommunication; the word "tele" stands for long distance.

• The information can be of different types such as sound, picture, music, computer data, etc.

• The basic communication components are
  - A transmitter
  - A communication channel or medium, and
  - A receiver

1.1.1 Elements of communication system:

The block diagram of the elements of a communication system is shown in figure 1.1.

[Information source → Transmitter → Channel → Receiver → Destination, with noise and distortion entering at the channel]

Figure 1.1 Block diagram of a simple communication system



The elements of basic communication system are as follows

1. Information or input signal

2. Input transducer

3. Transmitter

4. Communication channel

5. Noise

6. Receiver

7. Output transducer

Information or input signal

• The communication system has been developed for communicating useful information from one place to the other.

• This information can be in the form of a sound signal like speech or music, or it can be in the form of pictures, or it can be data information coming from a computer.

Input transducer

The information in the form of sound, picture and data signals cannot be transmitted as it is.

An input transducer is used to convert the information signal from the source into a suitable electrical signal.

The input transducers usually used in communication systems are microphones, TV cameras, etc.

Transmitter

• The function of the transmitter block is to convert the electrical equivalent of the information into a form suitable for transmission through the communication medium (or) channel.

• The transmitter consists of electronic circuits such as a modulator, amplifier, mixer, oscillator and power amplifier.

• In addition, it increases the power level of the signal. The power level should be increased in order to cover a large range.

Communication channel

The communication channel is the medium used for transmission of the electronic signal from one place to the other. The communication medium can be conducting wires, cables, optical fibre or free space. Depending on the type of communication medium, two types of communication systems exist. They are:

• Wire communication (or) line communication

• Wireless communication (or) radio communication

Noise

Noise is an unwanted electrical signal which gets added to the


transmitted signal when it is travelling towards the receiver

Due to noise, quality of the transmitted information degrades.


Once added, the noise cannot be separated out from the information.

Hence noise is a big problem in the communication systems.

Even though noise cannot be completely eliminated, its effect can


be reduced by using various techniques.

Receiver

Reception is exactly the opposite process of transmission; that is, the receiver extracts the original signal from the received signal.

The receiver consists of electronic circuits such as a demodulator, amplifier, mixer, oscillator and power amplifier.

Output transducer

The output transducer converts the electrical signal at the output of the receiver back to the original form, i.e. sound, picture and data signals.

Typical examples of output transducers are loudspeakers, picture tubes, computer monitors, etc.

1.2 NOISE

• Noise is an unwanted signal that interferes with the desired message signal.

• In audio and video systems, electrical disturbances appearing as interference are called noise.

• In general, noise may be predictable or unpredictable (random) in nature.

Predictable noise

• Predictable noise can be estimated and eliminated by proper engineering design.

• Predictable noise is generally man-made noise and it can be eliminated easily.
  Examples: power supply hum, ignition radiation pickup, spurious oscillations in feedback amplifiers, fluorescent lighting.

Unpredictable noise

• This type of noise varies randomly with time, and we have no control over it.

• The term noise is generally used to represent random noise.

• The presence of random noise complicates the communication system.
Sources of noise

1. Internal noise

2. External noise

Internal noise may be classified as

1. Shot noise

2. Thermal Noise

3. Partition Noise

4. Flicker Noise

External Noise may be classified as

1. Natural Noise

2. Manmade Noise

1.2.1 Natural noise

• This type of noise occurs randomly in the atmosphere due to lightning, electrical storms and other atmospheric disturbances. This noise is unpredictable in nature.

• This noise is also called atmospheric noise (or) static noise.

1.2.2 Manmade Noise

• Man-made noise results from undesired pickup from electrical appliances such as motors, automobiles, aircraft ignition, etc.

• This type of noise can be eliminated by removing the source of the noise. This noise is effective in the frequency range of 1 MHz - 500 MHz.

1.2.3 Internal noise

Internal noise is created by active and passive components


present within the communication system

1.2.3.1 Shot noise

• Shot noise is present in active devices due to the random fluctuation of charge carriers crossing a potential barrier. In electron tubes, shot noise is generated due to random emission from the cathode.

• In semiconductor devices, it is caused by the random diffusion of minority carriers (or) the random generation and recombination of electron-hole pairs.

• Shot noise is not normally observed during measurement of the direct noise current, because it is small compared to the DC value.

• Shot noise has a flat frequency spectrum. The mean-square noise component is proportional to the DC flowing, and for most devices the mean-square shot-noise current is given by

In² = 2 I0 qe Bn  (A²)        ...(1)

Where
I0 = DC current in amperes
qe = magnitude of the electron charge (1.6 x 10⁻¹⁹ C)
Bn = equivalent noise bandwidth in Hz
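As a quick numerical check of equation (1), the sketch below (Python, with assumed illustrative values of 1 mA DC current and 1 MHz noise bandwidth) evaluates the rms shot-noise current:

```python
import math

Q_E = 1.6e-19  # electron charge (C)

def shot_noise_rms(i_dc, bn):
    """RMS shot-noise current from equation (1): In^2 = 2 * I0 * qe * Bn."""
    return math.sqrt(2 * i_dc * Q_E * bn)

# Assumed values: 1 mA DC current, 1 MHz equivalent noise bandwidth
print(shot_noise_rms(i_dc=1e-3, bn=1e6))  # ~1.8e-8 A (about 18 nA), small compared with 1 mA
```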

1.2.3.2 Thermal noise

• This type of noise arises due to the random motion of electrons in a conducting medium such as a resistor; this motion is in turn randomized through collisions caused by imperfections in the structure of the conductor. The net effect of the motion of all the electrons constitutes an electric current flowing through the resistor, causing the noise.

This noise is also known as resistor noise (or) Johnson noise.

• The power density spectrum of the current contributing to the thermal noise is given by

Si(ω) = 2KTG / [1 + (ω/α)²]        ...(2)

Where,
T - ambient temperature in degrees kelvin
G - conductance of the resistor in mhos
K - Boltzmann constant
α - average number of collisions per second per electron

1.2.3.3 Partition noise

• This noise is generated whenever a current has to divide between two (or) more electrodes, and it results from random fluctuations in the division.

• A diode would therefore be expected to be less noisy than a transistor, because in a transistor a third electrode also draws current.

• For this reason, the input stage of a microwave receiver is often a diode circuit. The spectrum of partition noise is flat.

1.2.3.4 Flicker noise (or) low frequency noise

Flicker noise occurs due to imperfections in the cathode surface of electron tubes and in the surface around the junctions of semiconductor devices. In semiconductors, flicker noise arises from fluctuations in the carrier density, which in turn give rise to fluctuations in the conductivity of the material. The power density of flicker noise is inversely proportional to frequency, i.e. S(ω) ∝ 1/f.

Hence, this noise becomes significant at very low frequencies (below a few kHz).

1.2.4 Calculation of noise

i. Signal-to-noise ratio (SNR)

It is defined as the ratio of signal power to noise power, either at the input side (or) at the output side of the circuit (or) device.

SNRi = (signal power at the input) / (noise power at the input)

SNR0 = (output signal power) / (output noise power)
ii. Noise Figure

Noise figure is defined as the ratio of the signal-to-noise power ratio supplied to the input terminals of a receiver (SNRi) to the signal-to-noise power ratio supplied to the output terminal (or) load resistor (SNR0).

Therefore,

Noise figure (F) = (SNR)i / (SNR)0

Calculation of Noise Figure


[Figure: a signal generator (antenna) of internal resistance Ra feeding an amplifier (receiver) of input impedance Rt, overall voltage gain A and load resistance RL]

Figure 1.1 (a) Block diagram for calculation of noise figure

To calculate the noise figure, consider the network shown in figure 1.1(a). The network has:

1. Input impedance Rt
2. Output impedance RL
3. An overall voltage gain A

It is fed from a source, that is, an antenna of internal resistance Ra. The internal resistance Ra may or may not be equal to Rt. Figure 1.1(a) shows the block diagram of such a four-terminal network.

The calculation procedure is as follows.

Step 1: Determination of input signal power Psi

From figure 1.1(a), the signal input voltage Vsi and power Psi are obtained as

Vsi = Vs Rt / (Ra + Rt)        ...(1)

Psi = Vsi² / Rt                ...(2)

Substituting equation (1) in (2), we get

Psi = Vs² Rt² / [(Ra + Rt)² Rt]

Therefore,

Psi = Vs² Rt / (Ra + Rt)²      ...(3)

Step 2: Determination of input noise power Pni

Similarly, the noise input voltage Vni and power Pni can be calculated. We know that

Vni² = 4KTB R = 4KTB [Ra Rt / (Ra + Rt)]        ...(4)

(R here is the parallel combination of Ra and Rt)

and Pni = Vni² / Rt

Substituting the value of Vni², we get

Pni = 4KTB [Ra Rt / (Ra + Rt)] / Rt
    = 4KTB [Ra / (Ra + Rt)]                      ...(5)

Step 3: Calculation of input SNR

SNRi = Psi / Pni

Using equations (3) and (5), we get

SNRi = [Vs² Rt / (Ra + Rt)²] / [4KTB Ra / (Ra + Rt)]
     = Vs² Rt / [4KTB Ra (Ra + Rt)]              ...(6)

Step 4: Determination of signal output power Pso

The output signal power is given as

Pso = Vso² / RL = (A Vsi)² / RL = A² Vsi² / RL   ...(7)

Substituting equation (1) in (7), we get

Pso = A² [Vs Rt / (Ra + Rt)]² / RL
    = A² Vs² Rt² / [RL (Ra + Rt)²]               ...(8)

Step 5: Determination of noise output power Pno

The noise output power may be quite difficult to calculate; for the moment it is simply written as

Pno = output noise power                          ...(9)

Step 6: Calculation of the output SNR

The output signal-to-noise ratio SNR0 is found as

SNR0 = Pso / Pno

Using equations (8) and (9), we get

SNR0 = A² Vs² Rt² / [(Ra + Rt)² RL Pno]          ...(10)

Step 7: Calculation of the noise figure (F)

The general expression for the noise figure is

F = SNRi / SNR0

Using equations (6) and (10), we get

F = {Vs² Rt / [4KTB Ra (Ra + Rt)]} x {(Ra + Rt)² RL Pno / (A² Vs² Rt²)}

F = (Ra + Rt) RL Pno / (4KTB Ra Rt A²)           ...(11)

This is the required expression for the noise figure.
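A minimal numerical sketch of equation (11) is given below (Python; the component values, voltage gain and output noise power are assumed purely for illustration):

```python
import math

def noise_figure(Ra, Rt, RL, A, Pno, T=290.0, B=1.0e6):
    """Noise figure of the four-terminal network of figure 1.1(a), from
    equation (11): F = (Ra + Rt) * RL * Pno / (4kTB * Ra * Rt * A^2)."""
    k = 1.38e-23  # Boltzmann constant (J/K)
    return (Ra + Rt) * RL * Pno / (4 * k * T * B * Ra * Rt * A**2)

# Example with assumed values: 50-ohm source and input impedance, 1 kohm load,
# voltage gain of 10 and a measured output noise power of 2e-12 W over 1 MHz.
F = noise_figure(Ra=50, Rt=50, RL=1e3, A=10, Pno=2e-12)
print("Noise figure =", F, "=", 10 * math.log10(F), "dB")
```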

1.3 INTRODUCTION TO COMMUNICATION SYSTEMS

Electronics communication system can be classified into various


categories based on the following parameters

1. Whether the system is unidirectional (or) bidirectional

2. Whether it uses an analog (or) digital information signal

3. Whether the system uses baseband transmission (or) uses


some kind of modulation

[Figure: electronic communication systems are classified by (i) direction of communication - simplex, half duplex, full duplex; (ii) nature of the information signal - analog or digital; (iii) technique of transmission - baseband transmission or communication using modulation]

Figure 1.2 Classification of communication systems

1.3.1 Classifications based on directions of Communication

Based on whether the system communicates only in one direction


(or) otherwise, the communication systems are classified as,

1. Simplex system

2. Half duplex systems

3. Full duplex systems

Communication systems are either unidirectional (simplex) or bidirectional (duplex); duplex systems are further classified as half duplex and full duplex.



Simplex system

In these systems the information is communicated in only one direction; the receiving end cannot transmit back.

For example, radio and TV broadcasting, and the telemetry system from a satellite to earth.

Half duplex system

These systems are bidirectional; they can transmit as well as receive, but not simultaneously.

At a time these systems can either transmit (or) receive; for example, a transceiver (or) walkie-talkie set.

Full duplex System

These are truly bidirectional systems, as they allow communication to take place in both directions simultaneously.

These systems can transmit as well as receive simultaneously; for example, the telephone system.

[Figure: two stations, each with a transmitter and a receiver, connected by a communication link carrying a bidirectional flow of information]

Figure 1.3 Basic block diagram of a full duplex system

1.3.2 Classifications based on the nature of Information signal

Based on the nature of the information signal, communication systems are classified into two categories, namely,

1. Analog Communication system.

2. Digital communication system.



Analog Communication

In this communication technique, the transmitted signal is analog, i.e. continuous in nature, as it passes through the communication channel (or) medium.

Digital communication

In this communication technique, the transmitted


signal is in the form of digital pulses of constant amplitude, frequency
and phase.

1.3.3 Classification based on the technique of transmission

Based on the technique used for signal transmission, we can categorize communication systems into two types, namely,

1. Base band transmission.

2. Communication systems using modulation.

Base-band transmission

In this technique, the baseband signal (original information


signals) are directly transmitted.

Examples of these type of systems are telephone networks where


the sound signal converted into electrical signal is placed directly on the
telephone lines for transmission (local calls).

Another example of baseband transmission is computer data


transmission over a Co-axial Cables in the computer networks (eg. RS
232 cables).

Thus , the base band transmission is the transmission of the


original information signal as it is.

Limitations of Baseband transmission

Baseband transmission cannot be used with certain media; for example, it cannot be used for radio transmission, where the medium is free space.

This is because the voice signal (in electrical form) cannot travel a long distance in air.

It gets suppressed after a short distance. Therefore, for radio communication of baseband signals, a technique called "modulation" is used.

Drawbacks of baseband transmission (without modulation)

1. Excessively large antenna heights.

2. Signals get mixed up.

3. Short range of communication.

4. Multiplexing is not possible and

5. Poor quality of reception.

Why modulation

The baseband transmission has many limitations which can be


overcome using modulation.

In radio communication, signals from various sources are


transmitted through a common medium that is in open (free) space .This
causes interference among various signals, and no useful message is
received by the receiver.

The problem of interference is solved by translating the


message signals to different radio frequency spectra. This is done by the
transmitter by a process known as ”Modulation”.

1.4 MODULATION

Definition: In the modulation process, two signals are used, namely the modulating signal and the carrier signal.

Modulating signal (or) baseband signal (or) low-frequency signal
Carrier signal (or) high-frequency signal

[Block diagram: the modulating signal and the carrier signal are fed to a modulator, which produces the modulated signal]

Modulation is the process of changing the characteristics of


carrier signal (such as amplitude, frequency and phase) in accordance
with the instantaneous value of modulating signal.

In simple, modulation is the process of mixing of modulating


signal and carrier signal together.

In the process of modulation , the baseband signal is translated


(i.e) shifted from low frequency to high frequency.

1.5 NEED FOR MODULATION (OR) ADVANTAGES OF MODULATION


The advantages of modulation are:

(1) Ease of radiation.

(2) Adjustment of bandwidth.

(3) Reduction in the height of the antenna.

(4) Avoids mixing of signals.

(5) Increases the range of communication.

(6) Multiplexing, and

(7) Improves quality of reception.

1.5.1 Ease of radiation

As the signals are translated to higher frequencies, it becomes relatively easier to design amplifier circuits as well as antenna systems at these increased frequencies.

1.5.2 Adjustment of bandwidth

Bandwidth of a modulated signal may be made smaller (or) larger


than the original signal.

Signal to noise ratio (SNR) in the receiver which is a function


of the signal Bandwidth can thus be improved by proper control of
bandwidth at the modulating stage.

1.5.3 Reduction in antenna height

When free space is used as the communication medium, messages are transmitted with the help of antennas.

If the signals are transmitted without modulation, the size of the antenna needed for effective radiation would be of the order of half the wavelength, given as

Antenna height = λ/2 = c/(2f)        ...(1)

In broadcast systems, the maximum audio frequency transmitted from a radio station is 5 kHz. Therefore, the antenna height required is

λ/2 = c/(2f) = (3 x 10⁸) / (2 x 5 x 10³) = 30 km

An antenna of this height is practically impossible to install.

Now consider a modulated signal with f = 10 MHz. The minimum antenna height is given by

Antenna height = λ/2 = c/(2f) = (3 x 10⁸) / (2 x 10 x 10⁶) = 15 metres

This antenna height can be practically achieved.
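A small sketch of this calculation (Python; the 5 kHz and 10 MHz cases are the ones worked above):

```python
C = 3e8  # speed of light in free space (m/s)

def antenna_height(f_hz):
    """Minimum antenna height = lambda/2 = c/(2f), in metres."""
    return C / (2 * f_hz)

print(antenna_height(5e3))   # 30000 m = 30 km for a 5 kHz baseband signal
print(antenna_height(10e6))  # 15 m for a 10 MHz modulated carrier
```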



1.5.4 Avoid mixing of signals


If each modulating signal (message signal) is modulated with a different carrier, then the modulated signals occupy different slots in the frequency domain (different channels). Thus modulation avoids mixing of signals.

1.5.5 Increases the range of communication

The modulation process increases the frequency of the signal to be transmitted. Hence it increases the range of communication.

1.5.6 Multiplexing

If different message signals are transmitted without modulation through a single channel, they may cause interference with one another, i.e. overlap with one another.

To overcome this interference, we would need n separate channels for n message signals.

But different message signals can be transmitted over the same (single) channel without interference using the technique of "multiplexing".

Simultaneous transmission of multiple messages (more than one message) over a channel is known as "multiplexing".

Due to multiplexing, the number of channels needed is smaller. This reduces the cost of installation and maintenance of additional channels.

1.5.7 Improves quality of reception

Due to modulation, the effect of noise is reduced to great extent.


This improves quality of reception.

The two basic types of communication systems are analog and digital.

Analog communication: message - continuous signal; carrier - continuous signal.

Digital communication: message - digital (or) analog signal; carrier - continuous (analog) signal.

1.6 CLASSIFICATIONS OF MODULATION

[Figure: modulation is divided into analog modulation and digital modulation. Analog modulation comprises continuous-wave modulation - amplitude modulation (AM) and angle modulation (phase modulation (PM) and frequency modulation (FM)) - and analog pulse modulation (PAM, PWM, PPM). Digital modulation comprises PCM, DPCM, DM and ADM.]

Figure 1.4 Classifications of modulation

Where,

PAM - Pulse amplitude modulation.

PWM - Pulse width modulation.

PPM - Pulse Position modulation.

PCM – Pulse code modulation.

DM – Delta modulation.

ADM – Adaptive delta modulation.

DPCM – Differential Pulse code modulation.

Linear modulation

A modulation system that follows the superposition theorem of spectra is known as a linear modulation system.

Non-linear modulation

The modulation system which does not follow the superposition


theorem of spectra is known as non-linear modulation system.
1.7 SOME IMPORTANT DEFINITIONS RELATED TO
COMMUNICATION
1.7.1 Frequency(f)

The frequency is defined as the number of cycles of a waveform


per second. It is expressed in hertz (Hz).

Frequency is simply the number of times a periodic motion, such


as a sine wave of voltage (or) current, occurs in a given period of time.
[Figure: one cycle of a sine wave, amplitude versus time]

Figure 1.5 One cycle

1.7.2 Wave length (λ)

Wavelength (λ) is defined as the distance between two similar points of successive cycles of a periodic wave.

Figure 1.6 Wavelength

Wavelength is also defined as the distance travelled by an electromagnetic wave during the time of one cycle.

1.7.3 Bandwidth

Bandwidth is defined as the frequency range over which an information signal is transmitted.

Bandwidth is the difference between the upper and lower frequency limits of the signal:

Bandwidth (BW) = f2 - f1

Where, f2 - upper frequency
       f1 - lower frequency

[Figure: a band of width BW between f1 and f2 on the frequency axis]

1.7.4 Transmission frequencies

The total usable radio frequency (RF) spectrum is divided into narrower frequency bands, each given a descriptive name, and several of these bands are further broken down into various types of services.

The International Telecommunication Union (ITU) is an international agency in control of allocating frequencies and services within the overall frequency spectrum.

Frequency designation                  Frequency range      Wavelength range
Extremely High Frequency (EHF)         30 - 300 GHz         1 mm - 1 cm
Super High Frequency (SHF)             3 - 30 GHz           1 - 10 cm
Ultra High Frequency (UHF)             300 MHz - 3 GHz      10 cm - 1 m
Very High Frequency (VHF)              30 - 300 MHz         1 - 10 m
High Frequency (HF)                    3 - 30 MHz           10 - 100 m
Medium Frequency (MF)                  300 kHz - 3 MHz      100 m - 1 km
Low Frequency (LF)                     30 - 300 kHz         1 km - 10 km

Table 1.1 The radio frequency spectrum

Solved Problem

1. Find the wavelength of a signal at each of the following frequencies.


(1) 850 MHZ (2) 1.9 GHZ (3) 28 GHZ.
Solution

Given data: (1) f = 850 MHz, (2) f = 1.9 GHz, (3) f = 28 GHz

Wavelength λ = (velocity of light) / frequency = c / f

(i) λ = (3 x 10⁸) / (850 x 10⁶) = 0.35 m

(ii) λ = (3 x 10⁸) / (1.9 x 10⁹) = 0.158 m

(iii) λ = (3 x 10⁸) / (28 x 10⁹) = 0.0107 m
1.7.5 Frequency spectrum

Frequency spectrum is the representation of a signal in the


frequency domain . It can be obtained by using either fourier series (or)
fourier transform.

It consists of the amplitude and phase spectrums of the signal.


The frequency spectrum indicates the amplitude and phase of various
frequency components present in the given signal.

The frequency spectrum enables us to analyze and synthesize a


signal.

1.7.6 Demodulation (or) Detection

The process of extracting a modulating (or) baseband signal from


the modulated signal is called “demodulation”.

In other words , Demodulation (or) detection is the process by


which the message signal is recovered from the modulated signal at
receiver.
1.8 THEORY OF AMPLITUDE MODULATION

Definition

Amplitude modulation (AM) is the process by which amplitude of


the carrier signal is varied in accordance with the instantaneous value
(amplitude) of the modulating signal, but frequency and phase remains
constant.

1.8.1 Mathematical representation of an AM wave

Let us consider

The modulating signal  Vm(t) = Vm sin ωmt        ...(1)

The carrier signal     Vc(t) = Vc sin ωct        ...(2)

Where,
Vc - amplitude of the carrier signal (volts)
Vm - amplitude of the modulating signal (volts)
ωm - angular frequency of the modulating signal (rad/s)
ωc - angular frequency of the carrier signal (rad/s)

According to the definition of amplitude modulation, the amplitude of the carrier signal is changed after modulation in accordance with the message signal:

VAM(t) = [Vc + Vm(t)] sin ωct        ...(3)

Substituting the value of Vm(t) in equation (3), we get

VAM(t) = (Vc + Vm sin ωmt) sin ωct
       = Vc [1 + (Vm/Vc) sin ωmt] sin ωct

Where Vm/Vc = ma = modulation index.

Modulation index is defined as the ratio of the amplitude of the message signal to the amplitude of the carrier signal:

Modulation index ma = Vm / Vc

VAM(t) = Vc [1 + ma sin ωmt] sin ωct             ...(4)

VAM(t) = Vc [1 + ma sin(2πfm)t] sin(2πfc)t       ...(4a)

Equation (4a) represents the time domain representation of an AM signal.
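The sketch below (Python with NumPy; the parameter values are illustrative assumptions) generates an AM waveform directly from equation (4a):

```python
import numpy as np

def am_wave(t, Vc=1.0, ma=0.5, fm=1e3, fc=100e3):
    """AM-DSBFC signal from equation (4a):
    V_AM(t) = Vc * [1 + ma*sin(2*pi*fm*t)] * sin(2*pi*fc*t)."""
    return Vc * (1 + ma * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# One millisecond of signal, sampled well above the assumed 100 kHz carrier
t = np.arange(0, 1e-3, 1 / (20 * 100e3))
v = am_wave(t)
print(v.max(), v.min())  # close to +/-(Vc + Vm) = +/-1.5 for ma = 0.5
```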

1.8.2 AM – voltage distribution

The time domain representation of an AM signal is given by

VAM(t) = Vc [1 + ma sin ωmt] sin ωct             ...(1)

VAM(t) = Vc sin ωct + maVc sin ωmt sin ωct       ...(2)

We know that

sin A sin B = [cos(A - B) - cos(A + B)] / 2

Equation (2) becomes

VAM(t) = Vc sin ωct + (maVc/2) cos(ωc - ωm)t - (maVc/2) cos(ωc + ωm)t    ...(3)
In equation (3),

• First term represents carrier signal (volts).

• Second term represents lower side band signal (volts)

• Third term represents upper sideband signal (volts).

Figure 1.7 shows the voltage spectrum for an AM-DSBFC wave (or) AM signal: a carrier component of amplitude Vc at fc and two sideband components of amplitude maVc/2 at fLSB and fUSB.

Figure 1.7 Voltage spectrum for an AM-DSBFC wave

1.8.3 Frequency spectrum of AM

The AM wave is given by

VAM(t) = Vc (1 + ma sin ωmt) sin ωct                                          ...(1)
       = Vc (1 + ma sin 2πfmt) sin 2πfct
       = Vc sin 2πfct + maVc sin 2πfct sin 2πfmt
       = Vc sin 2πfct + (maVc/2) cos 2π(fc - fm)t - (maVc/2) cos 2π(fc + fm)t  ...(2)
         (carrier)      (LSB)                       (USB)

The (-) sign associated with the USB represents a phase shift of 180°. Figure 1.8 shows the frequency domain representation: the carrier Vc at fc with sidebands of amplitude maVc/2 at fc - fm and fc + fm, giving a bandwidth of 2fm.

Figure 1.8 Frequency domain representation of an AM wave

The equation (2) shows the frequency domain representation of


AM- signal.

• The first term represents the unmodulated carrier signal with the frequency fc.

• The second term represents the lower sideband signal with the frequency (fc - fm).

• The third term represents the upper sideband signal with the frequency (fc + fm).

1.8.4 Bandwidth of AM

The bandwidth of the AM signal is given by the difference between the highest frequency component and the lowest frequency component in the frequency spectrum:

BW = bandwidth = fUSB - fLSB
   = (fc + fm) - (fc - fm)
   = fc + fm - fc + fm

BW = 2fm

Where, BW = bandwidth in hertz
       fm = highest modulating frequency in hertz
       fc = carrier frequency in hertz

Thus, the bandwidth of the AM signal is twice the maximum frequency of the modulating signal.

1.8.5 AM- Envelope (or) Graphical representation of AM-wave

AM- DSBFC is sometimes called conventional AM (or) simply AM.

AM is simply called as Double sideband Full carrier (DSBFC) is


probably the most commonly used. The figure 1.9 shows the graphical
representation of AM – signal.

• The shape of the modulated wave (AM) is called AM –envelope which


contains all the frequencies and is used to transfer the information
through the systems.

• An increase in the modulating signal amplitude causes the


amplitude of the carrier to increase.

• Without signal, the AM output waveform is simply the carrier signal.

• The repetition rate of the envelope is equal to the frequency of the


modulating signal.
[Figure: the modulating signal Vm(t), the carrier signal Vc(t), and the resulting AM-DSBFC waveform VAM(t); with no modulation the output is simply the carrier, and with modulation the envelope of the carrier follows the modulating signal]

Figure 1.9 AM envelope

• The shape of the envelope is identical to the shape of the modulating signal.

1.8.6 Phasor Representation of an AM-wave

The amplitude variation in an AM-system can be explained with


the help of a phasor diagram as shown in figure 1.10
[Figure: the carrier phasor Vc taken as reference, with the USB phasor (maVc/2) rotating anticlockwise at ωm and the LSB phasor (maVc/2) rotating clockwise at ωm relative to it; their vector sum gives the resultant AM phasor VAM(t)]

Figure 1.10 Phasor representation of an AM wave

• The phasor for the upper sideband rotates anticlockwise at an angular frequency ωm relative to the carrier phasor, i.e. the upper side frequency (ωc + ωm) is higher than the carrier frequency ωc.

• The phasor for the lower sideband rotates clockwise at an angular frequency ωm relative to the carrier phasor, i.e. the lower side frequency (ωc - ωm) is lower than the carrier frequency ωc.

• The resulting amplitude of the modulated wave at any instant is the vector sum of the carrier and the two sideband phasors.

• Vc is the carrier phasor, taken as the reference phasor, and the resulting phasor is VAM(t).

• The phasors for the carrier and the upper and lower side frequencies combine, sometimes in phase (adding) and sometimes out of phase (subtracting).

1.8.7 Modulation index and percentage modulation

Modulation index

In an AM wave, the modulation index (ma) is defined as the ratio of the maximum amplitude of the modulating signal to the maximum amplitude of the carrier signal:

Modulation index ma = Vm / Vc

Modulation index is used to describe the amount of amplitude change (modulation) present in an AM waveform.

Percentage modulation

When the modulation index is expressed as a percentage, it is called percent modulation.

Percentage modulation gives the percentage change in the amplitude of the output wave when the carrier is acted on by a modulating signal:

Percentage modulation = (peak amplitude of modulating signal / peak amplitude of carrier signal) x 100
                      = (Vm / Vc) x 100
                      = ma x 100

1.8.8 Calculation of modulation index from AM-Waveform

[Figure: an AM envelope showing the maximum envelope amplitude Vmax = Vc + Vm and the minimum envelope amplitude Vmin = Vc - Vm]

Figure 1.11 AM envelope for calculation of modulation index

The graphical representation of the AM wave is also called the time domain representation of the AM signal.

From figure 1.11, the maximum and minimum amplitudes of the AM signal are

Vmax = Vc + Vm        ...(1)
Vmin = Vc - Vm        ...(2)

Subtracting equation (2) from (1), we obtain

Vmax - Vmin = Vc + Vm - Vc + Vm = 2Vm

Vmax = 2Vm + Vmin     ...(3)

(or)

Vm = (Vmax - Vmin) / 2        ...(4)

From equations (1) and (2), Vc can be calculated as

Equation (1) becomes  Vc = Vmax - Vm        ...(5)
Equation (2) becomes  Vc = Vmin + Vm        ...(6)

From equation (5), Vc can be calculated by substituting the value of Vm:

Vc = Vmax - (Vmax - Vmin)/2
   = (2Vmax - Vmax + Vmin) / 2

Vc = (Vmax + Vmin) / 2        ...(7)

According to the definition of the modulation index, the modulation index for AM is given by

ma = Vm / Vc                  ...(8)

Substituting the values of Vm and Vc in equation (8):

ma = [(Vmax - Vmin)/2] / [(Vmax + Vmin)/2]

ma = (Vmax - Vmin) / (Vmax + Vmin)        ...(9)

The percentage modulation index in terms of Vmax and Vmin can be expressed as

% ma = [(Vmax - Vmin) / (Vmax + Vmin)] x 100        ...(10)

The peak change in the amplitude of the output wave, Vm, is the sum of the voltages of the upper and lower side frequencies:

VUSB = VLSB = Vm / 2          ...(11)

Substituting the value of Vm in equation (11), we get

VUSB = VLSB = [(Vmax - Vmin)/2] / 2

VUSB = VLSB = (Vmax - Vmin) / 4        ...(12)

Where,
VUSB = peak amplitude of the upper side frequency (volts)
VLSB = peak amplitude of the lower side frequency (volts)
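A quick sketch of equations (4), (7) and (9) (Python, with illustrative envelope readings):

```python
def am_from_envelope(v_max, v_min):
    """Recover Vm, Vc and the modulation index ma from the measured maximum
    and minimum envelope amplitudes (equations (4), (7) and (9))."""
    vm = (v_max - v_min) / 2
    vc = (v_max + v_min) / 2
    ma = (v_max - v_min) / (v_max + v_min)
    return vm, vc, ma

# Example: Vmax = 35 V, Vmin = 5 V (the values of solved problem 5 below)
print(am_from_envelope(35, 5))  # (15.0, 20.0, 0.75)
```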



1.8.9 Modulation index for multiple modulating signal frequency

When more than one modulating signal modulates a single carrier, the total modulation index is given by

ma(t) (or) ma = √(m1² + m2² + m3² + ...)

Where,
ma (or) ma(t) → total modulation index
m1, m2, ... → modulation indices due to the individual modulating components
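A one-line sketch of this relation (Python; the individual indices are assumed values):

```python
import math

def total_modulation_index(*indices):
    """Total modulation index for several simultaneous modulating signals:
    ma = sqrt(m1^2 + m2^2 + ...)."""
    return math.sqrt(sum(m * m for m in indices))

print(total_modulation_index(0.2, 0.3))  # ~0.36 (see solved problem 6 below)
```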

1.8.10 Degree of modulation

In AM, there are three types of modulations are available. It


depends upon the amplitude of the modulating signal relative to carrier
amplitude.

(1) Under modulation.

(2) Critical modulation.

(3) Over modulation.

(1) Under modulation

If Vm<Vc then, the modulation index ma<1

• In this type of modulation , the envelope of amplitude modulated


signal does not reach the zero axis. Hence the message signal is
fully preserved in the envelope of the AM wave.

• The message signal can be detected (or) recover from the modulated
signal without any distortion by an envelope detector for under
modulation.
[Figure: an AM wave whose envelope does not reach the zero axis]

Figure 1.12 AM wave for ma < 1

(2) Critical modulation

If Vm = Vc , then the modulation index ma=1

In this type of modulation , the envelope of amplitude modulated


signal just reaches the zero amplitude axis. The message signal remains
preserved.
[Figure: an AM wave whose envelope just reaches the zero axis]

Figure 1.13 AM wave for ma = 1

• In this also, the message signal can be detected from the


modulated signal without any distortion by an envelope
detector.

(3) Over modulation

If Vm > Vc ,then the modulation index ma > 1

• Here , both positive and negative extensions of the modulating


signals are cancelled (or) clipped out.

[Figure: an over-modulated AM wave; the envelope is clipped, causing distortion]

Figure 1.14 AM wave for ma > 1


• The envelope is no longer the same as the message signal. Due to this, an envelope detector provides a distorted message signal.

• From the three types of modulation (ma < 1, ma = 1, ma > 1), the modulating signal is fully preserved and can be recovered from the modulated signal without any distortion only if Vm ≤ Vc, i.e. ma ≤ 1.

1.8.11 AM-Power distribution

According to the voltage distribution of an AM wave, it consist of


three components namely carrier, lower side band and upper side band.

Therefore, the total transmitted power for AM- signal is sum of


power of all the three components.

Total transmitted power for an AM signal is given by

Pt = Pc + PLSB + PUSB        ...(1)

Carrier power (Pc)

The average power dissipated in a load by an unmodulated carrier is equal to the square of the rms carrier voltage divided by the load resistance:

Pc = V²carrier(rms) / R = (Vc/√2)² / R        ...(2)

Pc = Vc² / (2R)                               ...(3)

Where,
Pc = carrier power (watts)
Vc = peak carrier voltage (volts)
R = load resistance (ohms)

Power in the sidebands

The upper and lower sideband powers are expressed mathematically as

PUSB = PLSB = (VSB/√2)² / R        ...(4)

The peak voltage of the upper and lower side frequencies is

VSB = maVc / 2                     ...(5)

Substituting equation (5) in (4):

PUSB = PLSB = (maVc/2)² / (2R)

PUSB = PLSB = ma²Vc² / (8R)        ...(6)

Where,
PLSB = lower sideband power (watts)
PUSB = upper sideband power (watts)

Equation (6) may be written as

PUSB = PLSB = (ma²/4) (Vc²/2R)     ...(7)

Since Vc²/2R = Pc,

PUSB = PLSB = (ma²/4) Pc           ...(8)

Total power in the AM wave

Substituting equations (3) and (8) into equation (1), we get

Pt = Pc + PLSB + PUSB
   = Vc²/(2R) + (ma²/4) Pc + (ma²/4) Pc

Pt = Pc + (ma²/4) Pc + (ma²/4) Pc
   = Pc [1 + ma²/4 + ma²/4]

Pt = Pc (1 + ma²/2)        ...(9)

Pt / Pc = 1 + ma²/2        ...(9a)

Equation (9) gives the total power required for transmission of an AM (DSBFC) signal. Figure 1.15 shows the power spectrum for amplitude modulation: the carrier power Pc = Vc²/2R at fc and the sideband powers PLSB = PUSB = ma²Pc/4 at fLSB and fUSB.

Figure 1.15 Power spectrum for an AM wave

For critical modulation, ma = 1. If ma = 1, equation (9) becomes

Pt = Pc (1 + 1/2)

Pt = 1.5 Pc                ...(10)

The total power is 1.5 times the carrier power for critical modulation.
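The short sketch below (Python, illustrative values) evaluates equations (3), (8) and (9):

```python
def am_powers(Vc, ma, R):
    """Carrier, per-sideband and total transmitted power of an AM-DSBFC wave
    (equations (3), (8) and (9))."""
    Pc = Vc**2 / (2 * R)          # carrier power
    Psb = ma**2 * Pc / 4          # power in each sideband
    Pt = Pc * (1 + ma**2 / 2)     # total transmitted power
    return Pc, Psb, Pt

# Example: Vc = 12 V, ma = 1, R = 12 ohm (the values of solved problem 13 below)
print(am_powers(12, 1.0, 12))  # (6.0, 1.5, 9.0) watts
```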

1.8.12 Modulation index in terms of Pt and Pc


Pt / Pc = 1 + ma²/2

Pt / Pc - 1 = ma²/2

ma² = 2 (Pt/Pc - 1)

ma = √[2 (Pt/Pc - 1)]        ...(11)

Modulation index ma = √[2 (Pt/Pc - 1)]

1.8.13 Current calculations

We know that

Pt = It² R  and  Pc = Ic² R        ...(1)

Where,
It = total transmit current (amperes)
Ic = carrier current (amperes)
R = antenna resistance (ohms)

We know that the total transmitted power is given by

Pt = Pc (1 + ma²/2)        ...(2)

Substituting the values of Pt and Pc in equation (2):

It² R = Ic² R (1 + ma²/2)

It² = Ic² (1 + ma²/2)

It = Ic √(1 + ma²/2)       ...(3)

Equation (3) gives the total transmit current for AM-DSBFC.

1.8.14 Modulation index in terms of current

We know that

It² = Ic² (1 + ma²/2)        ...(1)

It²/Ic² = 1 + ma²/2

It²/Ic² - 1 = ma²/2

ma² = 2 (It²/Ic² - 1)        ...(2)

ma = √[2 (It²/Ic² - 1)]

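A small sketch of equation (3) of section 1.8.13 and equation (2) above (Python; the 10 A carrier current and ma = 0.6 are assumed values):

```python
import math

def total_current(Ic, ma):
    """Total antenna current of an AM wave: It = Ic * sqrt(1 + ma^2/2)."""
    return Ic * math.sqrt(1 + ma**2 / 2)

def modulation_index_from_currents(It, Ic):
    """Modulation index recovered from currents: ma = sqrt(2*(It^2/Ic^2 - 1))."""
    return math.sqrt(2 * ((It / Ic)**2 - 1))

It = total_current(Ic=10.0, ma=0.6)
print(It, modulation_index_from_currents(It, 10.0))  # ~10.86 A, ma recovered as 0.6
```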
1.8.15 Transmission efficiency

The transmission efficiency of an AM wave is defined as the ratio of the transmitted power which contains the information (i.e. the sideband power) to the total transmitted power:

% η = (power in sidebands / total power) x 100

    = [(PLSB + PUSB) / Pt] x 100

    = [(ma²Pc/4 + ma²Pc/4) / Pc(1 + ma²/2)] x 100

    = [(ma²/2) / (1 + ma²/2)] x 100

% η = [ma² / (2 + ma²)] x 100

For critical modulation, ma = 1, and the transmission efficiency becomes

% η = [1² / (2 + 1²)] x 100 = (1/3) x 100

% η = 33.3 %

The maximum transmission efficiency of AM-DSBFC is 33.3%. This means that only one third of the total power is carried by the sidebands, and the remaining two thirds is wasted power.
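A one-line sketch of the efficiency formula (Python):

```python
def am_efficiency(ma):
    """Transmission efficiency of AM-DSBFC in percent: eta = ma^2/(2 + ma^2) * 100."""
    return ma**2 / (2 + ma**2) * 100

print(am_efficiency(1.0))   # 33.3 % at critical modulation
print(am_efficiency(0.5))   # ~11.1 % for ma = 0.5
```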

1.8.16 Total transmitted power for AM-DSBSC

The total power required for transmission of AM-DSBSC depends only on the sideband power, because the carrier is suppressed:

Pt = PLSB + PUSB        ...(1)    (carrier is suppressed)

Sideband powers

The upper and lower sidebands have equal power, which is

PUSB = PLSB = (VSB/√2)² / R        ...(1a)

The peak voltage of the upper and lower sideband frequencies is

VSB = maVc / 2

Equation (1a) becomes

PUSB = PLSB = (maVc/2)² / (2R) = ma²Vc² / (8R)        ...(2)

Substituting equation (2) in (1):

Pt = PLSB + PUSB = ma²Vc²/(8R) + ma²Vc²/(8R)
   = 2 ma²Vc²/(8R) = ma²Vc²/(4R)

Pt = (ma²/2) (Vc²/2R),  and since Vc²/2R = Pc,

Pt = Pc (ma²/2)        ...(3)

For critical modulation, ma = 1:

Pt = (1/2) Pc

Pt = 0.5 Pc            ...(4)

1.8.17 Transmission efficiency (η) (for AM-DSBSC)


% η = (power in sidebands / Pt) x 100

    = [(ma²Pc/4 + ma²Pc/4) / (Pc ma²/2)] x 100

    = [(Pc ma²/2) / (Pc ma²/2)] x 100

% η = 1 x 100 = 100%

The maximum transmission efficiency of AM-DSBSC is 100%. This means the total transmitted power is fully utilized by AM-DSBSC; no power is wasted.

1.8.18 Comparison of different AM techniques


S.No  Parameter                       DSBFC                DSBSC                SSB
1     Carrier suppression             Not applicable       Fully                Fully
2     Sideband suppression            Not applicable       Not applicable       One sideband completely
3     Bandwidth                       2fm                  2fm                  fm
4     Transmission efficiency         Minimum              Moderate             Maximum
5     Number of modulating inputs     1                    1                    1
6     Complexity                      Simple               Simple               Complex
7     Power required to cover area    High                 Medium               Very small
8     Application                     Radio broadcasting   Radio broadcasting   Point-to-point communication

1.8.19 Advantages,disadvantages and applications of AM


Advantages
• AM is used for long distance communications.
• AM is relatively inexpensive.
Disadvantages
• AM is more likely to be affected by noise than FM.

• Receiver cost and complexity.


• 66.67% of transmitted power is wasted.
• Large bandwidth is required.
Applications
• Sound and audio broadcasting.
• Point to point link communication
• Aircraft communications in the VHF frequency range.
Problems

1. What is the bandwidth needed to transmit 4 KHZ voice signal using


AM.

Solution

Bandwidth required for AM = 2fm

BW =2 fm

BW = 2(4 KHZ)
BW =8 KHZ

2. In an AM –transmitter , the carrier power is 10 KW and the


modulation index is 0.5 .calculate the total RF-power delivered.

Solution
Given data
Pc = 10 kW
ma = 0.5

The total RF power delivered is

Pt = Pc (1 + ma²/2) = 10 x (1 + 0.5²/2) kW

Pt = 11.25 kW


3. The output of AM-transmitter is given by

UAM (t)=500(1+0.4 sin 3140 t)sin 6.28 x 107 t

Calculate

(i) Carrier frequency (ii) Modulating frequency (iii) Modulation index (iv) Carrier power if the load is 600 Ω (v) Total power.

Solution

The output of AM-DSBFC is given by,

VAM(t)= Vc(1+ ma sinωmt)sinωct ...(1)

Comparing equation (1) with the given output equation, we get

Vc = 500 volts
ma = 0.4
ωm = 3140 rad/s
ωc = 6.28 x 10⁷ rad/s

(i) Carrier frequency:
    ωc = 2πfc = 6.28 x 10⁷
    fc = (6.28 x 10⁷) / (2π) = 0.999 x 10⁷ Hz ≈ 10 MHz

(ii) Modulating frequency:
    ωm = 2πfm = 3140
    fm = 3140 / (2π) = 499.7 ≈ 500 Hz

(iii) Modulation index ma = 0.4

(iv) Carrier power with a 600 Ω load:

    Pc = Vc² / (2R) = 500² / (2 x 600) = 208.3 watts

(v) Total power:

    Pt = Pc (1 + ma²/2) = 208.3 x (1 + 0.4²/2) = 224.9

    Pt ≈ 225 watts

The total power required for the AM signal is 225 watts.
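The sketch below (Python) checks these numbers directly from the values read off the given AM expression:

```python
import math

Vc, ma, wm, wc, R = 500, 0.4, 3140, 6.28e7, 600  # values read from the AM equation

fc = wc / (2 * math.pi)      # carrier frequency ~10 MHz
fm = wm / (2 * math.pi)      # modulating frequency ~500 Hz
Pc = Vc**2 / (2 * R)         # carrier power ~208.3 W
Pt = Pc * (1 + ma**2 / 2)    # total power ~225 W
print(fc, fm, Pc, Pt)
```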

4. For an AM-DSBFC modulator with a carrier frequency


of 100 KHZ and maximum modulating signal frequency of 5 KHZ.
Determine upper and lower side band frequency and the Bandwidth.

Solution

Given data:

fc
=100KHZ

fm =5 KHZ

(1)Upper sideband frequency

fUSB =fc +fm

=100 KHZ + 5 KHZ

=105 KHZ

(2)Lower side band frequency



fLSB =fc - fm

=100 KHZ - 5 KHZ

=95 KHZ

(3)Bandwidth ‘BW’ =2 fm

=2(5 KHZ)

=10 KHZ

5. In an AM-Modulator, 500 KHZ carrier of amplitude 20 V is modulated


by 10 KHZ modulating signal which causes a change in the output
wave of ± 7.5 V. Determine

(i) Upper and lower sideband frequencies

(ii) Modulation index

(iii) Peak amplitude of upper and lower side


frequency.

(iv) Maximum and minimum amplitudes of


envelope.

Solution

Given data

fc = 500 KHZ , fm=10 KHZ, Vc= 20 V,Vm=15 V

(i) fUSB = fc+fm

= 500 K +10 K

= 510 KHZ

fLSB = fc - fm

     = 500 K - 10 K

     = 490 KHZ

(ii) Modulation index ma = Vm / Vc = 15 / 20 = 0.75

(iii) Peak amplitude of the upper and lower side frequencies

     = maVc / 2 = (0.75 x 20) / 2 = ± 7.5 V

(iv). Maximum and minimum amplitude of envelope.

Vmax = Vc + Vm

= 20 + 15

= 35 V

Vmin = Vc - Vm

= 20 - 15

= 5 V

6. If a 10V carrier is amplitude modulated by two different frequencies


with amplitudes 2 V and 3 V respectively. Find modulation index.

Solution

Given data

Vc = 10 V, Vm1 = 2V, Vm2 = 3V

Modulation index m1 = Vm1 / Vc = 2 / 10 = 0.2

Modulation index m2 = Vm2 / Vc = 3 / 10 = 0.3

Total modulation index ma = √(m1² + m2²) = √(0.04 + 0.09)

ma = 0.36
7. An AM-signal has the equation V(t)=[15 + 4 sin 44 x 103 t] (sin 46.5 x
106 t)V. Find
(i)carrier frequency (ii)modulating frequency (iii)modulation index
(iv)sketch the signal in the time domain, showing voltage and time
scales.(v) Peak voltage of unmodulated carrier.
Solution

V(t) = [15 + 4 sin 44 x 10³ t] sin(46.5 x 10⁶ t)
     = 15 [1 + (4/15) sin 44 x 10³ t] sin(46.5 x 10⁶ t)
     = 15 [1 + 0.26 sin 44 x 10³ t] sin(46.5 x 10⁶ t)        ...(1)

The AM-DSBFC output is given by

V(t) = Vc [1 + ma sin ωmt] sin ωct        ...(2)

Comparing equations (1) and (2), we get

Vc = 15 V
ma = 0.26
ωm = 44 x 10³ rad/s
ωc = 46.5 x 10⁶ rad/s

(i) Carrier frequency fc:
    2πfc = 46.5 x 10⁶
    fc = (46.5 x 10⁶) / (2π) = 7.4 x 10⁶ Hz

(ii) Modulating frequency fm:
    2πfm = 44 x 10³
    fm = (44 x 10³) / (2π) = 7 x 10³ Hz

(iii) Modulation index ma = 0.26

(iv) Peak voltage of the unmodulated carrier = 15 V

(v) AM-Voltage spectrum.

[Figure: carrier of 15 V at fc with sidebands of maVc/2 = 1.95 V at fLSB and fUSB]

Fig 1.16 AM voltage spectrum

8. For a modulation co-efficient = 0.2 and an unmodulated, carrier


power Pc=1000 W. Determine

(i)Total sideband power (ii)Upper and lower sideband power(iii)


modulated carrier power (iv)Total transmitted power.

Solution

Given Data

Modulation index = 0.2


Unmodulated carrier power = 1000 W

(i) Upper and lower sideband power:

PUSB = PLSB = ma²Pc/4 = (0.2² x 1000) / 4 = 10 watts

(ii) Total sideband power:

PTSB = PUSB + PLSB = ma²Pc/2 = (0.2² x 1000) / 2

PTSB = 20 watts

(iii) Total transmitted power:

Pt = Pc (1 + ma²/2) = 1000 x (1 + 0.2²/2)

Pt = 1020 watts

(iv) Modulated carrier power:

Pc = Pt / (1 + ma²/2) = 1020 / (1 + 0.2²/2)

Pc = 1000 watts

9. A 200 W carrier is modulated to a depth of 75%. Calculate the total


power in the modulated wave.

Solution

Given data

Pc =200 W

% ma =75% ∴ma =0.75

Total power in the modulated wave:

Pt = Pc (1 + ma²/2) = 200 x (1 + 0.75²/2)

Pt = 256.25 watts

10. For an AM-DSBFC wave with an unmodulated carrier voltage of 18 V and a load resistance of 72 Ω, determine the following.

(i) Unmodulated carrier power (ii) Modulated carrier power


(iii) Total sideband power (iv) Upper & lower sideband power (v) Total
transmitted power.

Solution

Given

Vc =18V

RL =72Ω
(i) Unmodulated carrier power:

Pc = Vc² / (2R) = 18² / (2 x 72) = 2.25 W

(ii) Total sideband power (taking ma = 1, critical modulation):

PTSB = ma²Pc/2 = (1² x 2.25) / 2 = 1.125 W

(iii) Upper and lower sideband power:

PUSB = PLSB = ma²Pc/4 = (1² x 2.25) / 4 = 0.5625 watts

(iv) Total transmitted power:

Pt = Pc (1 + ma²/2) = 1.5 Pc = 1.5 x 2.25

Pt = 3.375 watts

11. Find the carrier power of broadcast radio transmitter radiates


20 KW, for the modulation index is 0.6.

Solution

Given
Pt = 20 Kw

ma =0.6
Carrier power Pc = Pt / (1 + ma²/2)

              = (20 x 10³) / (1 + 0.6²/2)

              = 16.94 x 10³

Pc ≈ 17 kW

12. For an AM-DSBSC modulator with a carrier frequency fc = 100 KHz


and a maximum modulating signal frequency of 5 KHz. Determine.

(i) frequency limits for the upper and lower sidebands.

(ii) Bandwidth

(iii) sketch the output frequency spectrum.

Solution

(i) Frequency limits for upper & lower sidebands,

fUSB = fc+ fm=100 KHz+5 KHz = 105 KHz

fLSB = fc- fm=100KHz-5KHz =95KHz

(ii) Bandwidth ‘BW = 2fm

= 2 x 5 KHz

=10 KHz

[Figure: output frequency spectrum with the LSB at fLSB = 95 kHz and the USB at fUSB = 105 kHz, BW = 10 kHz; the carrier is suppressed]

Figure 1.17 Output frequency spectrum

13. For an AM-DSBFC wave with a peak unmodulated carrier voltage Vc = 12 V and modulation coefficient ma = 1, with a load resistance RL = 12 Ω, determine:

(i) Pc, PLSB, PUSB (ii) Pt (iii) Draw the power spectrum
Solution

(i) Carrier power Pc = Vc² / (2R) = 12² / (2 x 12)

    Pc = 6 W

(ii) PUSB = PLSB = ma²Pc/4 = (1² x 6) / 4 = 1.5 watts

(iii) Total modulated power:

    Pt = Pc (1 + ma²/2) = 6 x (1 + 1/2)

    Pt = 9 watts

[Figure: power spectrum with carrier power 6 W at fc and 1.5 W in each sideband at fLSB and fUSB]

Figure 1.18 Power spectrum

14. A modulating signal 20 sin(2π x 10³ t) is used to modulate a carrier signal 40 sin(2π x 10⁴ t). Find out,

(i) Modulation index

(ii) Percentage modulation

(iii) Frequencies of the sideband components and their amplitudes.

(iv) Bandwidth of the modulating signal

(v) Also draw the spectrum of the AM-wave.

Solution

Given, modulating signal,

Vm(t) = 20 sin(2π x 10³ t)     ...(1)

Carrier signal,

Vc(t) = 40 sin(2π x 10⁴ t)     ...(2)

(i) The modulating signal is represented by

Vm(t) = Vm sin(2πfmt)     ...(3)

Comparing equations (1) and (3),

Vm = 20 V, fm = 10³ Hz = 1 kHz

(ii) The carrier signal is represented by

Vc(t) = Vc sin(2πfct)     ...(4)

Comparing equations (2) and (4),

Vc = 40 V, fc = 10⁴ Hz = 10 kHz

(iii) Modulation index, ma = Vm/Vc = 20/40 = 0.5

% ma = 0.5 x 100 = 50 %

(iv) Frequencies of the sideband components,

Upper sideband frequency, fUSB = fc + fm = 10 + 1 = 11 kHz

Lower sideband frequency, fLSB = fc − fm = 10 − 1 = 9 kHz

(v) Amplitudes of sidebands

The amplitude of the upper as well as the lower sideband is given by

maVc/2 = (0.5 x 40)/2 = 10 volts

(vi) Bandwidth, BW = 2fm = 2 x 1 kHz = 2 kHz
(vii) Spectrum: carrier of 40 V at 10 kHz with 10 V sidebands at 9 kHz and 11 kHz.

Figure 1.19 Voltage Spectrum
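As a quick numerical check of this problem, the short Python sketch below (an illustration only; am_parameters is our own helper, not from the text) computes the modulation index, sideband frequencies and amplitudes, and bandwidth from the values identified in the solution (Vm = 20 V, Vc = 40 V, fm = 1 kHz, fc = 10 kHz).

# Sketch: AM parameters for a single-tone modulated DSBFC wave.
def am_parameters(Vm, Vc, fm, fc):
    ma = Vm / Vc                       # modulation index
    f_usb, f_lsb = fc + fm, fc - fm    # sideband frequencies (Hz)
    side_amp = ma * Vc / 2             # amplitude of each sideband (V)
    bw = 2 * fm                        # bandwidth (Hz)
    return ma, f_usb, f_lsb, side_amp, bw

print(am_parameters(Vm=20, Vc=40, fm=1e3, fc=10e3))
# (0.5, 11000.0, 9000.0, 10.0, 2000.0) -> ma = 0.5, 10 V sidebands at 11 kHz and 9 kHz, BW = 2 kHz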

1.9 GENERATION OF SSB

To generate the single side band suppressed carrier (SSBSC), we


have to suppress the carrier as well as one of the side bands. The various
techniques to suppress one of the side bands are

1. Filter method (or) Frequency discrimination method

2. Phase shift method (or) Hartley Method

3. Third Method (or) Weaver’s Method

1.9.1 Frequency Discrimination method


‰‰ In Frequency discrimination method, first a DSBSC signal is
generated by using an ordinary product modulator or
balanced modulator. Then, this DSBSC signal is passed through
suitable band pass filter to obtain SSBSC signal, where, one of the
sidebands is filtered out.
The design of the band pass filter is very critical and there are some limitations on the modulating and carrier frequencies. They are
1. It is very difficult to design the band pass filter if the carrier frequency is much greater than the bandwidth of the baseband (modulating) signal.
2. The frequency discrimination method is useful only when the baseband is restricted at its lower edge, so that the upper and lower sidebands are non-overlapping.
The filter method is widely used in speech communication, where the lowest spectral component is 70 Hz and it may be raised to about 300 Hz without affecting the intelligibility of the speech signal. However, the system is not useful for video communication, where the modulating signal starts from dc.
Block diagram: the modulating signal x(t) and the carrier cos ωct feed a product (balanced) modulator, producing the DSBSC signal x(t) cos ωct; a band pass filter then selects one sideband to give the SSB-SC signal.

Figure 1.20 Frequency discrimination method


Applications

Since the modulation and demodulation are complex and costlier, this system is not used for commercial broadcasting. It is mainly used in wireless systems for ultra-high-frequency and very-high-frequency communication.
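The idea of the frequency-discrimination method can be illustrated numerically: generate a DSBSC signal with a product modulator, then band-pass filter it to keep only one sideband. The Python sketch below is a minimal illustration assuming NumPy and SciPy are available; the sampling rate, tone frequencies and filter order are arbitrary choices, not values from the text.

# Sketch of the filter (frequency-discrimination) method for a single-tone message.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1_000_000                      # sampling rate, Hz
t = np.arange(0, 0.02, 1 / fs)      # 20 ms of signal
fm, fc = 2_000, 100_000             # modulating and carrier frequencies, Hz

dsbsc = np.cos(2 * np.pi * fm * t) * np.cos(2 * np.pi * fc * t)   # product modulator output

# Band-pass filter that keeps only the upper sideband (just above fc)
sos = butter(4, [fc + 200, fc + 5_000], btype="bandpass", fs=fs, output="sos")
ssb_usb = sosfiltfilt(sos, dsbsc)

freqs = np.fft.rfftfreq(len(ssb_usb), 1 / fs)
spectrum = np.abs(np.fft.rfft(ssb_usb))
print("strongest component near", freqs[np.argmax(spectrum)], "Hz")  # ~102000 Hz = fc + fm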

1.9.2 Phase Discrimination (or) Phase shift method


The block diagram of the phase discriminator is shown in the
figure 1.21
‰‰ In this method, there are two balanced modulators and two phase
shifters used.
‰‰ One modulator accepts carrier with 90° phase shift from carrier
oscillator and modulating signal directly.
‰‰ The other modulator accepts the modulating signal with a phase shift of 90° and the carrier signal directly.
‰‰ Balanced modulator 1 accepts direct modulating signal
Vm(t) = Vm sinωmt and 90° phase shifted carrier signal
Vc(t) = Vc sin (ωct+ 90).
‰‰ Balanced modulator 2 accepts 90° phase shifted modulating signal
Vm(t) = Vm sin(ωmt+90) and direct carrier signal Vc(t) = Vc sinωct.
Block diagram: an audio amplifier feeds balanced modulator M1 directly and balanced modulator M2 through a 90° audio phase shifter; the carrier source feeds M2 directly and M1 through a 90° carrier phase shifter; an adder combines the two modulator outputs to give the SSB signal.

Figure 1.21 Phase discriminator


The output of balanced modulator 1 is

V1 = Vm sin ωmt · Vc sin(ωct + 90°)

   = (VmVc/2) [cos((ωc − ωm)t + 90°) − cos((ωc + ωm)t + 90°)]     ...(1)
                    (LSB)                     (USB)

Similarly, the output of balanced modulator 2 is

V2 = Vm sin(ωmt + 90°) · Vc sin ωct

   = (VmVc/2) [cos((ωc − ωm)t − 90°) − cos((ωc + ωm)t + 90°)]     ...(2)
                    (LSB)                     (USB)

Therefore, the output of the adder will be

V0 = V1 + V2
   = (VmVc/2) [cos((ωc − ωm)t + 90°) + cos((ωc − ωm)t − 90°)] − VmVc cos((ωc + ωm)t + 90°)

In the above equation, the LSB components carry phase shifts of +90° and −90°, i.e. they are 180° out of phase, hence they cancel each other. Only the upper sideband components remain, since they are in the same phase (+90°). Thus the SSB-SC signal is obtained.
The practical requirements of this method are:
1. Each balanced modulator needs to be carefully balanced in order to suppress the carrier.
2. Each modulator should have equal sensitivity to the baseband signal.
3. It is difficult to design a wideband phase-shifting network for the modulating signal.
4. The carrier phase-shifting network must provide an exact 90° phase shift at the carrier frequency.
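For a single-tone message the phase-shift method can be checked numerically, since a 90° shift of a sine tone is trivial to generate. The sketch below is our own illustration (assuming NumPy; the frequencies are arbitrary): it forms the two balanced-modulator products of the derivation above and adds them, and only the upper sideband at fc + fm survives.

# Sketch of the phase-shift (Hartley) SSB idea for a single-tone message.
import numpy as np

fs, fm, fc = 1_000_000, 2_000, 100_000
t = np.arange(0, 0.02, 1 / fs)
wm, wc = 2 * np.pi * fm, 2 * np.pi * fc

v1 = np.sin(wm * t) * np.sin(wc * t + np.pi / 2)   # M1: message direct, carrier shifted 90 degrees
v2 = np.sin(wm * t + np.pi / 2) * np.sin(wc * t)   # M2: message shifted 90 degrees, carrier direct
ssb = v1 + v2                                      # adder output

freqs = np.fft.rfftfreq(len(ssb), 1 / fs)
spec = np.abs(np.fft.rfft(ssb))
print(freqs[np.argmax(spec)])   # ~102000 Hz: only the USB survives, the LSB terms cancel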

Advantages

1. Any desired sideband can be generated in a single frequency-translation step, regardless of the carrier frequency.

2. No need of sharp cut off filter.

Disadvantages

1. The modulator should have equal sensitivity to the base band


signal.

2. The Phase shifting network must provide exact 900 phase shift.

1.9.3 Modified phase shift method (or) weaver’s method

‰‰ This method is known by inventor’s name.

‰‰ This method can generate SSB at any frequency and can use low audio frequencies; accordingly, it is in competition with the filter method.

‰‰ Because of its complexity, it is not commercially used.

‰‰ The block diagram of SSB-SC-AM generation using weavers method


is shown in figure.

‰‰ The signals feeding balanced modulators 1 and 2, and their outputs at points A and B, are similar to those of the phase shift method.

‰‰ However, the manner in which voltages are fed to balanced modulators 3 and 4 is different from the phase shift method.

‰‰ Instead of phase shifting the whole range of modulating (audio) frequencies, this method combines them with a fixed sub-carrier frequency f0 (say 1700 Hz). The 90° phase shift is applied to this fixed frequency only.

‰‰ The resulting voltages at the outputs of balanced modulators 1 and 2 are fed to low pass filters (LPF) whose cut-off frequency is designed to be equal to f0, to ensure that the input to the last stage (balanced modulators 3 and 4) results in proper sideband suppression.

Block diagram: the modulating (AF) signal drives balanced modulators 1 and 2, which are fed by the audio sub-carrier oscillator directly and through a 90° phase shifter; their DSBSC-AM outputs (points A and B) pass through low pass filters 1 and 2 to balanced modulators 3 and 4, which are fed by the RF carrier oscillator directly and through a 90° phase shifter; the adder output is the SSB-SC-AM signal.

Figure 1.22 Weaver's method of generation of SSBSC wave

Let the modulating signal Vm(t) = Vm sin ωmt,
the sub-carrier signal C0(t) = 2V0 sin ω0t,
and the RF carrier signal Cc(t) = 2Vc sin ωct.

The input to balanced modulator 1 is Vm sin ωmt x 2V0 sin(ω0t + 90°), so its output is

= VmV0 [cos((ω0 − ωm)t + 90°) − cos((ω0 + ωm)t + 90°)]     ...(1)

Similarly, the output of balanced modulator 2 (input Vm sin ωmt x 2V0 sin ω0t) is

= VmV0 [cos(ω0 − ωm)t − cos(ω0 + ωm)t]     ...(2)

LPF 1 and LPF 2 eliminate the upper sidebands (ω0 + ωm) of balanced modulators 1 and 2. Hence the output of LPF 1 is

= VmV0 cos((ω0 − ωm)t + 90°)

and the output of LPF 2 is

= VmV0 cos(ω0 − ωm)t

Assume V0 = Vm = Vc = 1.

The output of balanced modulator 3 is

= cos((ω0 − ωm)t + 90°) x 2 sin ωct
= sin((ωc + ω0 − ωm)t + 90°) + sin((ωc − ω0 + ωm)t − 90°)     ...(3)

The output of balanced modulator 4 is

= cos(ω0 − ωm)t x 2 sin(ωct + 90°)
= sin((ωc + ω0 − ωm)t + 90°) + sin((ωc − ω0 + ωm)t + 90°)     ...(4)

The output of the adder circuit is (3) + (4):

= 2 sin((ωc + ω0 − ωm)t + 90°) = 2 cos(ωc + ω0 − ωm)t

The other two terms, at (ωc − ω0 + ωm), cancel each other because they are 180° out of phase (+90° and −90°).

The final RF output frequency is fc + f0 − fm, which is essentially the lower sideband of the RF carrier fc + f0.
1.9.5 Power Calculation of SSBSC-AM
The total power in transmitted AM is,

Pt = (Ec²/2R)(1 + ma²/2) = Pc(1 + ma²/2)

If the carrier and one sideband are suppressed, then the total power in SSB-SC-AM is that of the remaining sideband,

Pt'' = PUSB (or PLSB) = ma²Ec²/8R = (ma²/4)(Ec²/2R)

Pt'' = (1/4) ma² Pc
Power saving = (Pt − Pt'') / Pt

= [Pc(1 + ma²/2) − (ma²/4)Pc] / [Pc(1 + ma²/2)]

= (1 + ma²/2 − ma²/4) / (1 + ma²/2)

= [(4 + ma²)/4] / [(2 + ma²)/2]

= (4 + ma²) / (4 + 2ma²)

If ma = 1, the power saving in SSB-SC-AM is 5/6 = 83.33%.
If ma = 0.5, the power saving in SSB-SC-AM is 94.44%.
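The power-saving expression derived above is easy to evaluate for any modulation index. A minimal Python sketch (the function name is our own):

# Sketch: power saving of SSB-SC over DSB-FC, using Pt = Pc(1 + ma^2/2) and Pt'' = ma^2*Pc/4.
def ssb_power_saving(ma):
    """Fraction of the DSB-FC transmitted power saved when only one sideband is sent."""
    return (4 + ma ** 2) / (4 + 2 * ma ** 2)

print(round(ssb_power_saving(1.0) * 100, 2))   # 83.33 %
print(round(ssb_power_saving(0.5) * 100, 2))   # 94.44 %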
1.9.7 Advantages of SSB over DSBFC
(i) Less Bandwidth is required
(ii) More power saving, at 100% modulation, power saving is 83.33%
(iii) Reduced noise interference due to reduced bandwidth.
1.9.8 Disadvantages of SSB
(i) The generation and reception of SSB signals is complicated.
(ii) The transmitter and receiver need excellent frequency stability, since even a small frequency drift causes distortion of the recovered signal.
Applications of SSB
(i) Where power saving and low bandwidth requirement are important.
(ii) Used in : land and air mobile communication, navigation and
amateur radio.
1.10 AM – TRANSMITTERS
There are two types of AM transmitters,
‰‰ Low level transmitter
‰‰ High level transmitter

1.10.1 Low level transmitter


The block diagram for a low-level AM DSBFC transmitter is shown
in figure
Preamplifier
‰‰ It is a linear voltage amplifier with high input impedance.
‰‰ It is used to raise the source signal amplitude to a usable level with minimum nonlinear distortion and as little thermal noise as possible.
Block diagram: the modulating signal source feeds a pre-amplifier, band pass filter and modulating signal driver; the RF carrier oscillator feeds a buffer amplifier and carrier driver; both drive the modulator, whose output passes through a band pass filter, linear intermediate and final power amplifiers, another band pass filter and a coupling network to the antenna.

Figure 1.23 Block diagram of Low-level AM-DSBFC transmitter


Modulating signal driver (linear amplifier)
‰‰ Amplifies the information signal to an adequate level to suffi-
ciently drive the modulator.
RF carrier oscillator
‰‰ It is used to generate the carrier signal.
‰‰ Usually crystal-controlled oscillators are used
Buffer amplifier
‰‰ It is a low-gain, high-input impedance linear amplifier.
‰‰ It is used to isolate the oscillator from the high-power amplifiers.
Modulator
‰‰ Either emitter or collector modulation can be used.
Intermediate and final power amplifiers
‰‰ Linear (push-pull) amplifiers are required with low-level transmitters to maintain symmetry in the AM envelope.
Coupling network
It matches output impedance of the final amplifier to the trans-
mission line/antenna.
Applications
It is used in low-power, low-capacity systems: wireless intercoms,
remote control units, pagers and short-range walkie-talkie.

1.10.2 High-Level Transmitters


The block diagram for a high-level AM DSBFC transmitter is
shown in figure

Block diagram: the modulating signal is processed by a pre-amplifier, band pass filter, driver and power amplifier; the carrier path consists of the RF carrier oscillator, buffer amplifier, carrier driver and carrier power amplifier; both feed the AM modulator and output power amplifier, which drives the antenna.

Figure 1.24 Block diagram of High-level AM-DSBFC transmitter

‰‰ The modulating signal is processed similarly to the low-level transmitter, except for the addition of a power amplifier.

• The power amplifier provides the higher modulating-signal power necessary to achieve 100% modulation (carrier power is maximum at the high-level modulation point).

‰‰ The same circuits are used as in the low-level AM transmitter for the carrier oscillator, buffer and driver, but with the addition of a power amplifier.

‰‰ The modulator circuit has three primary functions:

• Provide the circuitry necessary for modulation to occur.


• It is the final power amplifier
• It acts as a frequency up-converter, translating the low-frequency information signal to a radio-frequency signal that can be efficiently radiated from an antenna and propagated through free space.
1.10.3 Comparison of High level and Low level modulation

S.No Parameters High Level Modula- Low Level Modulation

tion
1 Power level Modulation takes Modulation takes place at a

place at a high power low power level

level
2 Types of am- Highly efficient class Linear amplifiers (A, AB, B)

plifiers C amplifiers are used are used after modulation


3 Efficiency Very High Low
4 Devices used Vacuum tubes or Transistors, JFET, OP-

transistors for me- AMPs.

dium power applica-

tions
5 Design of AF Complex due to very Easy as it is to be done at

power amplifier high power involved low power


6 Application High power Sometimes used in TV

broadcast transmitters (IF

modulation)

1.11 AM SUPER HETERODYNE RECEIVER WITH ITS


CHARACTERISTICS PERFORMANCE

1.11.1Introduction
The problems in the TRF receiver are
‰‰ It is useful only to single tuned circuit and low frequency
applications.

‰‰ Variation in the bandwidth


‰‰ Instability
‰‰ Variation in gain
‰‰ Insufficient selectivity
‰‰ It was not possible to use double tuned RF amplifiers in this receiver.
‰‰ Poor adjacent channel rejection.
The above-mentioned problems of the TRF receiver are solved in this receiver by converting every selected RF signal to a fixed lower frequency called the 'Intermediate Frequency (IF)'.
1.11.2 Principle (or) Definition of Heterodyning
The radio & TV receiver operates on the principle of super hetero-
dyning. The process of mixing two signals having different frequencies
to produce a new frequency is called as heterodyning.
1.11.3 Block diagram

Block diagram: the receiving antenna picks up the EM radiation (fs) and feeds the RF stage; the mixer combines fs with the local oscillator output f0 (ganged tuning keeps the RF stage and local oscillator tracking together) to produce the IF (f0 − fs); the IF amplifier output goes to the detector (which also supplies the AGC voltage), then to the audio and power amplifiers (AF) and the loudspeaker.

Figure 1.25 Block diagram of Super hetero-dyne receiver
1.11.4 Operations
(i). Antenna Receiver
The DSBFC or AM signal transmitted by the transmitter travels
through the air and reaches the receiving antenna.
This signal is in the form of electromagnetic waves. It induces a
very small voltage (few μV) into the receiving antenna.
(ii) RF stage
The RF stage is an amplifier which is used to select the wanted signal and reject the others present at the antenna.
It also reduces the effect of noise. At the output of the RF amplifier we get the desired signal at frequency 'fs'.
(iii) Mixer
The mixer receives signals from the RF amplifier at frequency fs and from the local oscillator at frequency f0, such that f0 > fs.
(iv) IF amplifier
‰‰ The mixer will mix these signals to produce signals having
frequencies fs, fo , (fo+ fs,) and (fo–fs).
‰‰ Out of these the difference of frequency component i.e.(fo - fs,) is
selected and all others are rejected.
‰‰ This frequency is called as the intermediate frequency (IF).

I.F = (fo - fs)

‰‰ This intermediate frequency signal is then amplified by one or more


IF amplifier stages.
‰‰ IF amplifiers provide most of the gain (and hence sensitivity) and the
bandwidth requirements of the receiver.
‰‰ Therefore the sensitivity and selectivity of this receiver do not change much with changes in the incoming frequency.
Note: This frequency contains the same modulation as the original
signal fs.
(v) Detector
The amplified IF signal is detected by the detector to recover the
original modulating signal.
This is then amplified and applied to the loudspeaker.
(vi) Audio and power amplifier
The demodulated output is not sufficient to drive the output device (loudspeaker); therefore the power level of the demodulated output is increased using audio and power amplifiers.
(vii) Loud speaker
The loudspeaker is an output transducer used to convert the demodulated signal into its original form, i.e. an audio signal.


(viii) Ganged Tuning
In order to maintain a constant difference between the local
oscillator frequency and the incoming frequency, ganged tuning is used.
This is simultaneous tuning of RF amplifier, mixer and local
oscillator and it is achieved by using ganged tuning capacitors (Tuning
control in radio set).
(ix) Automatic Gain Control (AGC)
• This circuit controls the gains of the RF and IF amplifiers to
maintain a constant output voltage level even when the signal
level at the receiver input is fluctuating.
• This is done by feeding a controlling DC voltage to the RF and IF amplifiers.
• The amplitude of this dc voltage is proportional to the detector
output.
1.11.5 Summary of Super heterodyne Receiver
Select the desired station at frequency fs by tuning the
RF amplifier and local oscillator

Local oscillator is tuned to frequency f0 with f0>fs

Mixer produces IF. Note that IF = (f0 - fs)

Output of mixer is an AM signal with two sidebands and


carrier equal to IF . The IF amplifier amplifies this signal.

The detector demodulates this signal to recover the modulating (AF) signal.

The audio amplifier and power amplifier amplify the AF signal and apply it to the loudspeaker.

1.11.6 Advantages of Super heterodyning


No variation in bandwidth and remains constant over the entire
operation.
High sensitivity & selectivity.
High adjacent channel rejection.
Improved stability.
Higher gain per stage, because the IF amplifiers are operated at a lower frequency.
1.11.7 Frequency parameter of AM receiver
The AM receiver has the following frequency measurements
1. Two frequency bands: Medium wave (MW) band and Short wave (SW) band.
2. RF carrier range: MW band – 535 kHz to 1650 kHz; SW band – 5 to 15 MHz.
3. IF: 455 kHz.
4. IF bandwidth: 10 kHz.

1.12 PERFORMANCE CHARACTERISTICS OF A RECEIVER

1.12.1 Sensitivity

‰‰ Sensitivity of a radio receiver is defined as its ability to amplify weak


signals.

‰‰ It is often defined in terms of the input voltage that must be applied


at the input of the receiver to obtain a standard output power.

‰‰ Sensitivity is measured in µv or decibels.

Note: How to improve the sensitivity?

The sensitivity of a radio receiver depends on the RF and IF amplifier stages. By increasing the gains of these stages it is possible to increase the sensitivity of a receiver.
Sensitivity (in µV of required input signal) varies with the tuned frequency across the band (600–1600 kHz), ranging here between about 10 µV (highest sensitivity) and 16 µV (lowest sensitivity).

Figure 1.26 Graph representing sensitivity of a radio receiver


1.12.2 Selectivity
• The selectivity of a receiver is its ability to reject unwanted signals.

• It is determined by the frequency response characteristics of the IF


amplifier. The responses of the mixer and RF amplifier stages also
play a small but significant role.

• The selectivity decides the adjacent channel rejection of a receiver.

• The higher the selectivity, the better the adjacent channel rejection and the less the adjacent channel interference.

Note: How to increase the selectivity ?

The selectivity of a receiver depends on the IF amplifier. The higher the 'Q' of the tuned circuit used in the IF amplifier, the better the selectivity.
Selectivity is shown as attenuation (in dB) versus deviation from the resonant frequency for a receiver tuned to 950 kHz; the attenuation increases as we move away from the tuned frequency.

Figure 1.27 Graph representing selectivity of a radio receiver


1.12.3 Fidelity
‰‰ The fidelity is the ability of a receiver to reproduce all the
modulating frequencies equally.
‰‰ The fidelity basically depends on the frequency response of the AF
amplifier.
‰‰ High fidelity is essential in order to reproduce a good quality.
The receiver output (in dB) is plotted against modulating frequency (50 Hz to 10 kHz); this is basically the frequency response of the AF amplifier, with minimum attenuation over the mid-band.

Figure 1.28 Figure representing fidelity of a radio receiver



1.12.4 Image Frequency Rejection Ratio (IFRR)


Image Frequency: The image frequency is defined as the received signal frequency plus twice the intermediate frequency.

fIM = fs + 2fIF

Image Frequency Rejection Ratio: The image frequency rejection ratio (IFRR) is a numerical measure of the ability of a preselector to reject the image frequency.

Mathematically, the IFRR is expressed as IFRR = √(1 + Q²ρ²)

where Q is the quality factor and ρ = (fIM/fs) − (fs/fIM).
Note: For improving the capability of a receiver to reject image frequen-
cy, the value of IFRR should be high as possible.
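The image frequency and IFRR expressions above can be evaluated directly. The Python sketch below is an illustration only; the station frequency, IF and Q value are assumed example numbers, not figures from the text.

# Sketch: image frequency and IFRR, using f_image = fs + 2*f_IF and IFRR = sqrt(1 + Q^2 * rho^2).
import math

def image_frequency(fs_hz, f_if_hz):
    return fs_hz + 2 * f_if_hz

def ifrr(fs_hz, f_if_hz, Q):
    f_im = image_frequency(fs_hz, f_if_hz)
    rho = f_im / fs_hz - fs_hz / f_im
    return math.sqrt(1 + (Q * rho) ** 2)

# Example values assumed for illustration: a 1000 kHz station, 455 kHz IF, preselector Q = 100
print(image_frequency(1000e3, 455e3))      # 1910000.0 Hz
print(round(ifrr(1000e3, 455e3, 100), 1))  # about 138.6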
1.12.5 Double Spotting
Double spotting means the same radio station being heard at two different points on the AM receiver dial.
Due to: Poor front-end selectivity.
Reduced: By increasing the front-end selectivity by introducing
another RF- amplifier stage.
1.13 THEORY OF FREQUENCY AND PHASE MODULATION
1.13.1 Introduction

We know that the amplitude, frequency (or) phase of the carrier can be varied by the instantaneous value of the modulating signal. Varying the carrier amplitude in accordance with the instantaneous value of the message signal is known as amplitude modulation.

There is another method of modulating a sinusoidal carrier, namely angle modulation. In angle modulation either the frequency (or) the phase of the carrier is varied in accordance with the instantaneous value of the modulating signal, but the carrier amplitude is kept constant.

Angle modulation has several advantages over amplitude modulation, such as noise reduction, improved system fidelity and more efficient use of power, because the amplitude of the carrier is constant in angle modulation.

Thus angle modulation can be broadly classified into two types: Frequency Modulation (FM) and Phase Modulation (PM).

But angle modulation does have some disadvantages too, such as increased bandwidth and the use of more complex circuits.

Angle modulation is being used for the following applications,

1. Radio broadcasting.

2. TV sound transmission.

3. Two way mobile radio

4. Cellular radio.

5. Microwave communication.

6. Satellite communication.

Frequency modulation

When the frequency of carrier is varied in accordance with


the instantaneous amplitude of modulating signal, then it is called
frequency modulation (FM). Amplitude of the modulated carrier remains
constant.

Phase Modulation

When the phase of carrier is varied in accordance with the


instantaneous amplitude of modulating signal, then it is called Phase
modulation (PM). Amplitude of the modulated carrier remains constant.

1.13.2 Principle of Angle Modulation

The principle of angle modulation can be stated as follows: the phase angle (φ) of a sinusoidal carrier wave is varied with respect to time. An angle modulated wave can be expressed mathematically as,

Vmod(t) = Vc sin φ     ...(1)

        = Vc sin (ωct + θ(t))     ...(2)

Where,
Vc – peak carrier amplitude (volts)
ωc – carrier radian frequency (rad/sec)
θ(t) – instantaneous phase deviation (radians)
Vmod(t) – angle modulated wave

The major difference between FM and PM is that in


FM, the frequency of carrier is varied by the modulating
signal whereas in PM, the phase of the carrier is varied by modulating
signal.

However, when the frequency of the carrier is varied it’s phase


also gets varied and vice-versa. Therefore, FM and PM both occur
whenever either form of angle modulation is performed.

In other words, we can say that a direct FM is an indirect PM, whereas a direct PM is an indirect FM. Therefore, in angle modulation θ(t) is a function of the modulating signal. That means,

θ(t) = F(Vm(t))

Where, Vm(t) = Vm cos ωmt = modulating signal

(or)

θ(t) ∝ Vm(t)

Where,

Vm → peak amplitude of the modulating signal
ωm → frequency of the modulating signal

1.13.3 Some important definition in Angle Modulation

Phase deviation (Δθ)

The relative angular displacement (shift) of the carrier phase in radians with respect to the reference phase is called the phase deviation (Δθ). A change in the carrier phase produces a corresponding change in frequency.

Frequency deviation (Δf)

The frequency deviation (Δf) is defined as the amount by which the carrier frequency is varied from its unmodulated value.

The magnitude of the frequency and phase deviation is proportional to the amplitude of the modulating signal (Vm). The instantaneous frequency of an FM signal varies with time; the maximum change in instantaneous frequency from the value fc is the frequency deviation.

(The carrier frequency fc swings between fc − Δf, the negative deviation, and fc + Δf, the positive deviation.)
Instantaneous Phase Deviation

The instantaneous phase deviation is the instantaneous change in the phase of the carrier at a given instant of time; it indicates how much the phase of the carrier is changing with respect to its reference phase.

Instantaneous phase deviation = θ(t), radians.



Instantaneous phase

The instantaneous phase is the precise phase of the carrier at a given instant of time.

Instantaneous phase = ωct + θ(t), radians
Carrier reference phase, ωct = 2πfct (radians)

Where,
fc = carrier frequency (Hz)
θ(t) = instantaneous phase deviation (radians)

Instantaneous frequency deviation

The instantaneous frequency deviation is the instantaneous change in the frequency of the carrier. It is defined as the first time derivative of the instantaneous phase deviation.

Instantaneous frequency deviation = θ'(t) rad/sec, where θ'(t) = dθ(t)/dt.

Instantaneous frequency

The Instantaneous frequency is the precise frequency of the carrier


at a given instant of time.

1.13.4 Mathematical representation of FM

In frequency modulation, the instantaneous frequency deviation θ'(t) is directly proportional to the modulating signal voltage,

(i.e.) θ'(t) ∝ Vm(t) rad/sec     ...(1)

θ'(t) = kf · Vm(t) rad/sec     ...(2)

Where kf is the frequency deviation sensitivity.


From the definition of instantaneous frequency deviation, we have

θ'(t) = dθ(t)/dt

Integrating on both sides we get,

θ(t) = ∫ θ'(t) dt     ...(3)

Substituting the value of θ'(t) in equation (3), we get

θ(t) = ∫ kf Vm(t) dt,  where Vm(t) = Vm cos ωmt

     = kf ∫ Vm cos ωmt dt

     = kf Vm sin ωmt / ωm

     = (kf Vm / 2πfm) sin ωmt

     = (Δf / fm) sin ωmt,   where Δf = kf Vm / 2π

θ(t) = Mf sin ωmt     ...(4)

where Mf = Δf / fm.

The angle modulated wave is mathematically expressed as,

Vmod(t) = Vc sin (ωct + θ(t))     ...(5)

Substituting the value of θ(t) in equation (5), we get

VFM(t) = Vc sin [ωct + Mf sin ωmt]     ...(6)

Equation (6) represents the frequency modulated wave.

1.13.5 Mathematical representation of PM

In phase modulation, the instantaneous phase deviation θ(t) is proportional to the modulating signal voltage, (i.e.)

θ(t) ∝ Vm(t) rad     ...(1)

θ(t) = Kp Vm(t) rad     ...(2)

Where Kp is the phase deviation sensitivity.

The angle modulated wave is mathematically expressed as,

Vmod(t) = Vc sin (ωct + θ(t))     ...(3)

Substituting θ(t) in equation (3), we get

VPM(t) = Vc sin [ωct + Kp Vm(t)]

VPM(t) = Vc sin [ωct + Kp Vm cos ωmt]     ...(4)

In equation (4), Kp Vm = Mp, so

VPM(t) = Vc sin [ωct + Mp cos ωmt]     ...(5)

Equation (5) represents the phase modulated wave.
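Equations (6) of Section 1.13.4 and (5) above can be generated numerically for a single-tone message. The sketch below assumes NumPy; all amplitudes, frequencies and sensitivities are arbitrary illustrative values, not numbers from the text.

# Sketch: building single-tone FM and PM waves and their modulation indices Mf and Mp.
import numpy as np

fs = 100_000
t = np.arange(0, 0.01, 1 / fs)
Vc, fc = 1.0, 10_000           # carrier amplitude and frequency
Vm, fm = 1.0, 500              # message amplitude and frequency
kf = 2 * np.pi * 2_000         # FM deviation sensitivity, rad/s per volt
kp = 2.0                       # PM deviation sensitivity, rad per volt

Mf = kf * Vm / (2 * np.pi * fm)     # FM modulation index = delta_f / fm
Mp = kp * Vm                        # PM modulation index (radians)

v_fm = Vc * np.sin(2 * np.pi * fc * t + Mf * np.sin(2 * np.pi * fm * t))   # equation (6)
v_pm = Vc * np.sin(2 * np.pi * fc * t + Mp * np.cos(2 * np.pi * fm * t))   # equation (5)
print(Mf, Mp)   # approximately 4.0 and 2.0 for the values chosen above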



1.13.6 Graphical representation of Frequency Modulation and


Phase Modulation

(a) Unmodulated carrier signal, (b) modulating signal of amplitude ±Vm, and (c) the frequency modulated wave of constant amplitude ±Vc, whose instantaneous frequency is maximum at the positive peak and minimum at the negative peak of the modulating signal (maximum deviation at the peaks); the phase modulated wave is similar in appearance, but its maximum frequency deviation occurs near the zero crossings of the modulating signal.

Figure 1.29 Phase and Frequency modulation of a sine-wave carrier by a sine-wave modulating signal
Applications of FM

1. Radio broadcasting.

2. Sound broadcasting in TV.

3. Satellite Communication.

4. Police wireless.

5. Point to point communication.

6. Ambulances.

7. Taxicabs.

1.13.7 Deviation ,Modulation index and Frequency Deviation

The FM signal , in general is expressed as,

VFM(t) =Vc sin [wct + Mf sin wmt] ...(1)

and the PM signal, in general is expressed as,

VPM(t) =Vc sin [wct + MP cos wmt] ...(2)

In both the above equations, the terms Mf and Mp are called the modulation index. Note that Mf sin ωmt and Mp cos ωmt indicate the instantaneous phase deviation θ(t). Hence Mf and Mp also indicate the maximum phase deviation. In other words, the modulation index can also be defined as the "maximum phase deviation".

Modulation index for PM

Modulation index in PM

Mp=Kp Vm rad.

Thus the modulation index of a PM signal is directly proportional to the peak modulating voltage, and its unit is radians.

Modulation index for FM

Modulation index in FM,

Mf = kf Vm / ωm  (unit-less)

   = kf Vm / 2πfm

Here kf Vm / 2π = Δf is called the frequency deviation. It is denoted by Δf (or) 'd' and its unit is Hz.

Modulation index in FM,

Mf = Δf / fm  (or)  d / fm

   = Maximum frequency deviation / Modulating frequency

Percentage Modulation

For angle modulation, the percentage modulation is given as the


ratio of actual frequency deviation to maximum allowable frequency
deviation .(i.e)
% Modulation = (Actual frequency deviation / Maximum allowable frequency deviation) x 100
Deviation Ratio (DR)

The deviation ratio is the ratio of the maximum frequency deviation to the maximum modulating signal frequency, (i.e.)

Deviation Ratio (DR) = Maximum frequency deviation / fm(max)
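The three ratios defined above — modulation index, percentage modulation and deviation ratio — reduce to one-line calculations. A minimal Python sketch (the function names are our own), with values taken from the worked problems later in this chapter:

# Sketch: FM modulation index, percentage modulation and deviation ratio (all inputs in Hz).
def modulation_index(delta_f, fm):
    return delta_f / fm

def percent_modulation(delta_f_actual, delta_f_max):
    return 100 * delta_f_actual / delta_f_max

def deviation_ratio(delta_f_max, fm_max):
    return delta_f_max / fm_max

print(modulation_index(20e3, 10e3))     # 2.0  (the FM modulator problem below)
print(percent_modulation(10e3, 25e3))   # 40.0 (the percentage-modulation problem below)
print(deviation_ratio(75e3, 15e3))      # 5.0  (commercial FM broadcast values)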
1.13.8 Frequency spectrum of Angle Modulated Waves

• For FM signal, the maximum frequency deviation takes place when


modulating signal is at positive and negative peaks.

• For PM signal. The maximum frequency deviation takes place near


zero crossing of the modulating signal.

• Both FM and PM waveforms are identical except for the phase shift. From the modulated waveform it is difficult to know whether the modulation is FM (or) PM.

• We know that AM contains only two sidebands per modulating frequency. But an angle modulated signal contains a large number of sidebands, depending upon the modulation index. Since FM and PM have identical modulated waveforms, their frequency content is the same.

• In FM (or) PM , a single –frequency modulating signal produces an


infinite number of pairs of side frequencies and thus has an infinite
bandwidth.

• Consider either FM-equations (or) PM –equation for spectrum


analysis, Here we consider FM-equation for analysis.

• When FM by a single-frequency sinusoid is considered, the carrier has frequency ωc (or fc) when the modulating signal has zero amplitude.

• As the amplitude of the modulating signal increases, the frequency


of the carrier increases and vice versa. Frequency analysis of an
angle modulated wave by a single –frequency sinusoid produces a
peak phase deviation of Mf radians, where mf is the modulation in-
dex.

The equation for frequency modulated wave is given by,

VFM(t) = Vc sin [wct + Mf sinwmt] ...(1)

The expression for the FM wave is not simple; it is complex since it is a sine of a sine function. The only way to solve this equation is by using Bessel functions.

By using the Bessels functions the equation for angle-modulated


wave can be expanded as follows,

VFM = Vc sin [ωct + Mf sin ωmt]

    = Vc {sin ωct cos(Mf sin ωmt) + cos ωct sin(Mf sin ωmt)}

    = Vc {J0(mf) sin ωct
          + J1(mf) [sin(ωc + ωm)t − sin(ωc − ωm)t]
          + J2(mf) [sin(ωc + 2ωm)t + sin(ωc − 2ωm)t]
          + J3(mf) [sin(ωc + 3ωm)t − sin(ωc − 3ωm)t] + ......}     ...(2)

VFM(t) = carrier + infinite number of sidebands

VFM(t) = Vc J0(mf) sin ωct + Vc J1(mf) [sin(ωc + ωm)t − sin(ωc − ωm)t] + ......     ...(3)
              (carrier)            (pair of first sidebands)

From the above equation we conclude that,

1. The FM-wave consists of carrier. The first term in the above equation
represents the carrier.

2. The FM-wave ideally consists of infinite number of sidebands. All the


terms except the first one remaining are sidebands.

3. The amplitudes of the carrier and sidebands depend on the J coefficients. The values of these coefficients can be obtained from Table 1.2 (or) from the graph shown in Figure 1.31.

4. For example J1(mf) denotes the value of J1 for the particular value of
mf written inside the bracket.

5. To solve for the amplitude of the side frequencies, Jn(mf) is given by,

Jn(mf) = (mf/2)ⁿ [ 1/n! − (mf/2)²/(1!(n+1)!) + (mf/2)⁴/(2!(n+2)!) − ...... ]     ...(4)

Where,
! = factorial (1 x 2 x 3 x 4 ...)
n = number of the side frequency
mf = modulation index
FM frequency spectrum: carrier of amplitude J0(mf)·Vc at fc, with lower and upper sideband pairs J1(mf)·Vc at fc ± fm, J2(mf)·Vc at fc ± 2fm, and so on; the total bandwidth is ideally infinite.

Figure 1.30 Ideal frequency spectrum of FM-wave




The Bessel coefficients Jn(mf) are plotted against the modulation index mf (0 to 12) for n = 0 to 8; J0 starts at 1 and the higher-order coefficients start at zero, and each oscillates with decreasing amplitude as mf increases.

Figure 1.31 Jn(mf) versus mf


m     J0    J1    J2    J3    J4    J5    J6    J7    J8    J9    J10   J11   J12   J13   J14
(m = modulation index; J0 = carrier; J1 to J14 = side frequency pairs)
0.00 1.00 ... ... ... ... ... ... ... ... ... ... ... ... ... ...
0.25 0.98 0.12 ... ... ... ... ... ... ... ... ... ... ... ... ...
0.5 0.94 0.24 0.03 ... ... ... ... ... ... ... ... ... ... ... ...
1.0 0.77 0.44 0.11 0.02 ... ... ... ... ... ... ... ... ... ... ...

1.5 0.51 0.56 0.23 0.06 0.01 ... ... ... ... ... ... ... ... ... ...
2.0 0.22 0.58 0.35 0.13 0.03 ... ... ... ... ... ... ... ... ...
2.4 0 0.52 0.43 0.20 0.06 0.02 ... ... ... ... ... ... ... ... ...
2.5 -0.05 0.50 0.45 0.22 0.07 0.02 0.01 ... ... ... ... ... ... ... ...
3.0 -0.26 0.34 0.49 0.31 0.13 0.04 0.01 ... ... ... ... ... ... ... ...
4.0 -0.40 -0.07 0.36 0.43 0.28 0.13 0.05 0.02 ... ... ... ... ... ... ...
5.0 -0.18 -0.33 0.05 0.36 0.39 0.26 0.13 0.05 0.02 ... ... ... ... ... ...
5.45 0 -0.34 -0.12 0.26 0.40 0.32 0.19 0.09 0.03 0.01 ... ... ... ... ...
6.0 0.15 -0.28 -0.24 0.11 0.36 0.36 0.25 0.13 0.06 0.02 ... ... ... ... ...
7.0 0.30 0.00 -0.30 -0.17 0.16 0.35 0.34 0.23 0.13 0.06 0.02 ... ... ... ...
8.0 0.17 0.23 -0.11 -0.29 -0.10 0.19 0.34 0.32 0.22 0.13 0.06 0.03 ... ... ...
8.65 0 0.27 0.06 -0.24 -0.23 0.03 0.26 0.34 0.28 0.18 0.10 0.05 0.02 ... ...
9.0 -0.09 0.25 0.14 -0.18 -0.27 -0.06 0.20 0.33 0.31 0.21 0.12 0.06 0.03 0.01 ...
10.0 -0.25 0.05 0.25 0.06 -0.22 -0.23 -0.01 0.22 0.32 0.29 0.21 0.12 0.06 0.03 0.01
Table 1.2 Bessel functions of the first kind Jn(m)

In FM, we get the carrier and infinite number of sum and


difference sideband frequencies are produced.

In addition, theoretically infinite number of pairs of upper and


lower sidebands are also generated.

Hence the spectrum of an FM signal is generally wider than the spectrum of AM.

Note that the sidebands are spaced from the carrier fc and from
each other by a frequency equal to modulating signal frequency fm.
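The Jn(mf) values of Table 1.2, and the number of significant sideband pairs used in the worked problems, can be reproduced with SciPy's Bessel function of the first kind. The sketch below is an illustration only; it assumes SciPy is available and uses the 0.01 amplitude cut-off implied by Table 1.2.

# Sketch: side-frequency amplitudes from Bessel functions and the significant-sideband count.
import numpy as np
from scipy.special import jv

def significant_sidebands(mf, threshold=0.01):
    """Number n of sideband pairs with |Jn(mf)| >= threshold (Table 1.2 truncates below 0.01)."""
    n = 1
    while abs(jv(n, mf)) >= threshold:
        n += 1
    return n - 1

mf = 2.0
print(np.round([jv(n, mf) for n in range(5)], 2))  # [0.22 0.58 0.35 0.13 0.03] - the m = 2 row
print(significant_sidebands(mf))                   # 4 pairs -> BW = 2*4*fm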

Problem 1

Determine

1. The peak frequency deviations (Df) and modulation


index (mf) for an FM modulator with a deviations sensitivity
Kf= 5kHz/V and a modulating signal Vm (t)=2 cos (2p x 2000t).

2. The peak phase deviations (mp) for a PM modulator


with a deviation sensitivity Kp =2.5 rad/V and a modulating sig-
nal Vm(t) = 2 cos (2p x 2000t).

Given

Deviation sensitivity for FM, Kf = 5 KHz/V

Modulating signal Vm (t) = 2 cos (2p x 2000t)

Deviation sensitivity for PM, Kp = 2.5 rad/V

Modulating signal ,Vm(t) = 2 cos (2p x 2000t)

Solution

i. Vm(t) = 2 cos(2π x 2000t) and Vm(t) = Vm cos(2πfmt), so

Vm = 2 V and fm = 2000 Hz = 2 kHz

ii. Peak frequency deviation,

Δf = Kf Vm = (5 kHz/V) x 2 V

Δf = 10 kHz

iii. Modulation index,

mf = Δf / fm = 10 kHz / 2 kHz = 5

iv. Peak phase deviation for the PM modulator,

mp = Kp Vm = (2.5 rad/V) x 2 V = 5 rad

1.13.9 Average Power in FM and PM modulators


In angle modulation, the total transmitted power is always
remains constant. It is not dependent on the modulation index. The
reason for this is that the amplitude of the FM (or) PM signal Vc is always
constant. And the power is given by.

Pt = (Vc/√2)² / R = Vc² / 2R
1.13.10 Bandwidth Requirement of Angle Modulation
• Theoretically, the BW of angle modulated wave is infinite. But practi-
cally it is calculated based on how many sidebands have significant
amplitude.
• The BW of angle modulation depends on the modulation index.
• Angle modulated waveforms are generally classified as either low,
medium or high modulation Index.
m<1 - called low modulation index
m = 1 to 10 - called medium modulation index
m>10 - called high modulation index.
Note: If mf < 1, the system is called narrow band FM; otherwise it is wide band FM.
For low-index modulation, the frequency spectrum resembles
double-sideband AM and the minimum bandwidth is approximated by,
BW = 2 fm, (Hz) ... (1)

and for high modulation index, the minimum bandwidth is


approximated by,
BW = 2 Δf, (Hz) ... (2)
The actual bandwidth required to pass all the significant side
bands for an angle modulatedwave is:
BW = 2 (n* fm), (Hz) ... (3)
Where n - number of significant sidebands, fm- modulating signal
frequency (Hz).
Carson’s Rule
• It is used to estimate the bandwidth for all angle-modulated
system regardless of the modulation index.
• It is the second method to find practical bandwidth .
• Carson's rule states that the bandwidth of an FM wave is twice the sum of the peak frequency deviation and the highest modulating signal frequency.

BW = 2(Δf + fm) Hz     ...(4)

• Where Δf is the peak frequency deviation (Hz) and fm is the modulating signal frequency (Hz).

BW = 2(Δf + fm) = 2Δf + 2fm

We know that Δf = k1 Vm and mf = k1 Vm / fm = Δf / fm, so fm = Δf / mf. Therefore

BW = 2Δf + 2Δf/mf

BW = 2Δf (1 + 1/mf) Hz

Note: This carson’s rule gives correct results if the modulations index is
greater than 6.
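The two practical bandwidth estimates — counting significant sidebands from the Bessel table, and applying Carson's rule — can be compared numerically. The sketch below assumes SciPy and uses the 0.01 cut-off of Table 1.2 together with the Δf = 20 kHz, fm = 10 kHz values of the next worked problem, purely for illustration.

# Sketch: Bessel (significant-sideband) bandwidth versus Carson's rule.
from scipy.special import jv

def bw_bessel(delta_f, fm, threshold=0.01):
    mf, n = delta_f / fm, 1
    while abs(jv(n, mf)) >= threshold:
        n += 1
    return 2 * (n - 1) * fm

def bw_carson(delta_f, fm):
    return 2 * (delta_f + fm)

print(bw_bessel(20e3, 10e3))   # 80000.0 Hz
print(bw_carson(20e3, 10e3))   # 60000.0 Hz (Carson's rule gives the rougher, narrower estimate)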

1.13.11 Types of frequency modulation


i) Narrow band FM (NBFM)
‰‰ A narrowband FM is the FM wave with a small bandwidth. The
modulation index mf of narrowband FM is small as compared to one
radian.
‰‰ Hence the spectrum of narrow band FM consists of the carrier and
upper sideband and a lower sideband.
‰‰ For small values of mf the values of the J coefficients are,
J0(mf) = 1, J1 (mf) = mf /2
Jn(mf) = 0 for n>1
‰‰ Therefore a narrow band FM wave can be expressed mathematically
as follows,
eFM = S(t) = Ec sinωct + mf Ec/2 sin (ωc+ωm)t - mf Ec/2 sin (ωc-ωm)t
• The negative sign represents LSB is 180o phase shifted
• Practically the narrow band FM systems have mf less than 1. The
maximum permissible frequency deviation is restricted to about
5kHz.
The system is used in FM mobile communications such as police
wireless, ambulances, taxicabs etc.
ii) Wide band FM (WBFM)
‰‰ As discussed earlier, for large values of modulation index mf, the FM
wave ideally contains the carrier and an infinite number of sideband
located symmetrically around the carrier.
‰‰ Such a FM wave has infinite bandwidth and hence called as wide-
band FM.
‰‰ The modulation index of wideband FM is higher than 1.
‰‰ The maximum permissible deviation is 75kHz and it is used in the
entertainment broadcasting applications such as FM radio, TV etc.

1.13.12 Comparison of NBFM AND WBFM

S.No Parameters Wideband FM Narrowband


FM
1 Modulation index Greater than 1 Less than 1
2 Maximum devia- 75kHz 5kHz
tion
3 Range of 30 Hz to 15kHz 30 Hz to 3kHz
modulating
frequency
4 Maximum modu- 5 to 2500 Slightly greater
lation index than 1
5 Bandwidth Large Small
(approximately (equal to AM)
15times > NBFM)

6 Noise Noise is more Noise is less


suppressed suppressed

7 Applications Entertainment Mobile


broadcasting communication


Advantages of FM
1. Improved noise immunity.
2. Low power is required to be transmitted to obtain the same quality
of received signal at the receiver.
3. Covers a large area with the same amount of transmitted power
4. Transmitted power remains constant.
5. All the transmitted power is useful.
Disadvantages of FM
Very large bandwidth is required.
Since space wave propagation is used, the radius of transmission is limited by the line of sight.
FM transmission and reception equipment is complex.


Applications of FM:
Radio broadcasting.
Sound broadcasting in TV.
Satellite communication
Police wireless
Point to point communication.
Problem 1

For an FM modulator with a peak frequency deviation Df = 20


kHz, a modulating signal frequency fm =10 kHz, Vc =10 V and a 500 –kHz
carrier determine

a. Actual minimum bandwidth

b. Approximate minimum bandwidth using Carson’s rule.

c. Plot the output frequency spectrum for the Bessel approxima-


tion.

Given

Peak frequency deviation , Df =20KHz

Modulating signal frequency, fm =10KHz

Carrier frequency ,fc = 500 KHz

Carrier voltage, Vc =10V.

Solution
Df
(a) Modulation index mf =
fm

20
=
10
mf = 2

A modulation index of 2 yields four sets of significant sidebands.

B = 2(n x fm) = 2(4 x 10 kHz)

B = 80 kHz

b. Minimum bandwidth using Carson's rule,

B = 2(Δf + fm) = 2(20 kHz + 10 kHz)

B = 60 kHz

c. Using the Bessel function table, the output frequency spectrum is drawn.

J0(2) = 0.22

J1(2) = 0.58

J2(2) = 0.35

J3(2) =0.13

mf =2

Vc J0 (mf) =10 x 0.22 =2.2 V

Vc J1(mf) =10 x 0.58 = 5.8 V

Vc J2 (mf) =10 x 0.35 =3.5 V

VcJ3(mf) =10 x 0.13=1.3 V


Output frequency spectrum (Bessel approximation): carrier of 2.2 V at fc = 500 kHz, with sideband pairs of 5.8 V at fc ± fm, 3.5 V at fc ± 2fm and 1.3 V at fc ± 3fm; the significant sidebands span B = 80 kHz.

Frequency spectrum

Problem 2

Determine,

a. The deviation ratio and bandwidth for the worst-case


(widest bandwidth) modulation index for an FM broadcast-band
transmitter with a maximum frequency deviation of 75 kHz and a
maximum modulating signal frequency of 15 kHz.

b. The deviation ratio and maximum bandwidth for an equal


modulation index with only half the peak frequency deviation and
modulating signal frequency.

Given

Maximum frequency deviation, Df =75 KHz

Maximum modulating signal frequency, fm =15KHz

Solution
a. Deviation Ratio (DR) = Δf(max) / fm(max) = 75 kHz / 15 kHz

DR = 5

A modulation index of 5 produces eight significant sidebands.

B =2(n x fm) Hz

=2(8 x 15,000)

=2(120000)

=240000Hz (or)

B =240 kHz

b. For a 37.5kHz frequency deviation and a modulating signal


frequency fm = 7.5 kHz, the modulation index is

mf = Δf / fm = 37.5 kHz / 7.5 kHz = 5

Bandwidth B = 2(n x fm) = 2(8 x 7500)

B = 120 kHz


Problem 3

An FM wave with a frequency deviation of 10 kHz and maximum


deviation allowed is 25 kHz. Find out the percentage modulation?

Given

Df(act) =10 kHz

Df(max) = 25 kHz

Solution
% Modulation = (Actual frequency deviation / Maximum allowed deviation) x 100

= (10 kHz / 25 kHz) x 100

% modulation = 40 %

Problem 4

In an FM system , if the maximum value of deviation is 75 kHz


and the maximum modulating frequency is 10 kHz . Calculate the
deviation ratio and bandwidth of the system using Carson’s rule?

Given

Df(max) =75kHz

Fm(max) =10kHz

Mp =5 rad

Solution
a. Deviation ratio (DR) = Δf(max) / fm(max) = 75 kHz / 10 kHz

DR = 7.5

b. System bandwidth, B =2(Df(max) + fm(max))

=2 [75 +10]

B =170 kHz
Problem 5
For an FM modulator with a modulation index mf = 1, a
modulating Vm(t) =Vm sin (2p x 1000t), and an unmodulated carrier
Vc(t)=15 sin (2p x 500t) determine
a. Number of sets of significant side frequencies,
b. Their amplitudes,
c. Draw the frequency spectrum showing their relative amplitude.
Given
Modulation index, mf =1
Modulating signal Vm(t) =Vm sin (2p x 1000t)
Carrier signal Vc(t) =15 sin (2px 500t)
Solution
a. Modulation index mf = 1 means there are three sets of significant side frequencies.
b. The relative amplitudes of the carrier and side frequencies are
J0 = J0(mf) x Vc =J0(1) x 15 = 0.77 x 15=11.55 V
J1 =J1(1) x 15 = 0.44 x 15 = 6.6 V
J2 =J2(1) x 15 = 0.11 x 15 =1.65 V
J3 =J3(1) x 15 =0.02 x 15 = 0.3 V
c. Frequency spectrum: carrier of 11.55 V (J0) at 500 on the frequency axis, with sideband pairs of 6.6 V (J1) at 499/501, 1.65 V (J2) at 498/502 and 0.3 V (J3) at 497/503.

Frequency Spectrum

Problem 6

An FM wave is represented by the voltage equation VFM(t) =10 sin


(8 x106 t + 2sin 3 x104t)calculate

a. Modulating frequency

b. Carrier frequency,

c. Modulation index and

d. Frequency deviation.

Given

FM wave VFM(t) =10 sin(8 x 106 t+2 sin 3 x 104t)

Solution

The general FM wave is VFM(t) = Vc sin(ωct + mf sin ωmt). Comparing with the given wave

VFM(t) = 10 sin(8 x 10⁶ t + 2 sin 3 x 10⁴ t),

we have Vc = 10 V, ωc = 8 x 10⁶ rad/sec, mf = 2 and ωm = 3 x 10⁴ rad/sec.

a. Modulating frequency, fm = ωm/2π = 3 x 10⁴ / 2π = 4.77 kHz

b. Carrier frequency, fc = ωc/2π = 8 x 10⁶ / 2π = 1.27 MHz

c. Modulation index, mf = 2

d. Frequency deviation, Δf = mf x fm = 2 x 4.77 kHz ≈ 9.55 kHz
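Reading the parameters directly off the given voltage equation, the same results follow from a few lines of Python (an illustration only, using the values stated in the problem).

# Sketch: parameters of V_FM(t) = 10 sin(8e6*t + 2*sin(3e4*t)),
# i.e. Vc = 10, wc = 8e6 rad/s, mf = 2, wm = 3e4 rad/s.
import math

Vc, wc, mf, wm = 10, 8e6, 2, 3e4
fm = wm / (2 * math.pi)          # modulating frequency, Hz
fc = wc / (2 * math.pi)          # carrier frequency, Hz
delta_f = mf * fm                # frequency deviation, Hz

print(round(fm), round(fc), mf, round(delta_f))
# 4775  1273240  2  9549  ->  fm ~ 4.77 kHz, fc ~ 1.27 MHz, mf = 2, delta_f ~ 9.55 kHz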

1.14 COMPARISONS OF VARIOUS ANALOG COMMUNICATION SYSTEMS

1. Definition — AM: the amplitude of the carrier is varied according to the amplitude of the modulating signal. FM: the frequency of the carrier is varied according to the amplitude of the modulating signal. PM: the phase of the carrier is varied according to the amplitude of the modulating signal.

2. Output voltage — AM: VAM(t) = Vc sin ωct + (maVc/2) cos(ωc − ωm)t − (maVc/2) cos(ωc + ωm)t. FM: VFM(t) = Vc sin(ωct + mf sin ωmt). PM: VPM(t) = Vc sin(ωct + mp cos ωmt).

3. Number of sidebands — AM: only two sidebands. FM: infinite number of sidebands. PM: infinite number of sidebands.

4. Bandwidth — AM: BW = 2fm, not dependent on the modulation index. FM: BW = 2(Δf + fm), depends on the modulation index. PM: BW = 2(Δf + fm), depends on the modulation index.

5. Transmitted power — AM: depends on the modulation index. FM: constant, independent of the modulation index. PM: constant, independent of the modulation index.

6. Modulation index — AM: ma = Vm/Vc, proportional to the modulating voltage and inversely proportional to the carrier voltage. FM: mf = kf Vm/fm, proportional to the modulating voltage and inversely proportional to the modulating frequency. PM: mp = kp Vm (radians), proportional to the modulating voltage only.

7. Noise interference — AM: more. FM: minimum. PM: less than AM but more than FM.

8. Depth of modulation — AM: has a limitation; it cannot be increased above 1. FM: has no limitation; it is inversely proportional to the modulating frequency. PM: remains the same if the modulating frequency is changed.

9. Fidelity — AM: poor, due to the narrow bandwidth. FM: better, due to the wide bandwidth. PM: better, due to the wide bandwidth.

10. Adjacent channel interference — AM: present. FM: avoided due to the wide frequency spectrum. PM: avoided.

11. Signal to noise ratio — AM: less. FM: better than that of PM. PM: inferior to that of FM.

12. Equipment — AM: transmission and reception equipment is less complex. FM: more complex. PM: more complex.

13. Applications — AM: radio and TV broadcasting. FM: radio and TV broadcasting, police wireless, point-to-point communication. PM: used in some mobile systems.

14. Frequency spectrum — AM: carrier Vc with two sidebands of amplitude maVc/2 at fc ± fm. FM and PM: carrier J0(m)·Vc with sideband pairs J1(m)·Vc, J2(m)·Vc, ... at fc ± fm, fc ± 2fm, ...

15. Output waveform — AM: constant-frequency carrier whose envelope follows the modulating signal. FM and PM: constant-amplitude carrier whose instantaneous frequency/phase follows the modulating signal.

SOLVED TWO MARKS

1. Define AM and draw its spectrum.



Amplitude of the carrier signal varies according to amplitude
variations in modulating signal is known as amplitude
modulation. Spectrum: Figure shows the spectrum of AM signal. It
consists of carrier (ƒc) and two sidebands at ƒc ± ƒm .

Spectrum of the AM wave: carrier Ec at fc, with sidebands of amplitude maEc/2 at fc − fm and fc + fm.

Figure: Spectrum of AM wave

2. Why carrier frequencies are generally selected in HF range than


low frequency range?

The antenna size is very large at low frequencies. Such


antenna is practically not possible to fabricate. High carrier
frequencies require reasonable antenna size for transmission and
reception.
High frequencies can be transmitted using tropospher-
ic scatter propagation, which is used to travel long distances.


3. The equation of an AM wave is, eAM = 100[1 + 0.7 cos (3000t/2π)


+ 0.3cos(6000t/2π) sin(106t/2π)]
Find the amplitude and frequency of various sideband terms.

Solution: The given equation can also be written as

eAM = [100 + 70 cos(3000t/2π) + 30 cos(6000t/2π)] sin(10⁶t/2π)

Here
Em1 = 70 and ω1 = 3000/2π rad/sec
Em2 = 30 and ω2 = 6000/2π rad/sec
Ec = 100 and ωc = 10⁶/2π rad/sec

Hence
m1 = Em1/Ec = 70/100 = 0.7
m2 = Em2/Ec = 30/100 = 0.3

Sideband amplitudes: m1Ec/2 = 35 V at ωc ± ω1 and m2Ec/2 = 15 V at ωc ± ω2, with the carrier Ec = 100 V at ωc.

4. Calculate percentage modulation in AM if carrier amplitude


is 20 V and modulating signal is of 15V.

Solution:
Here Em= 15V
Ec = 20V
Modulation index, m = Em / Ec = 15/20 = 0.75
Percentage modulation = m * 100
= 75%
5. Define detection (or) demodulation.

Detection is the process of extracting the modulating signal (message signal, original information or baseband) from the modulated carrier. Different types of detectors are used for different types of modulation.
In other words, demodulation or detection is the process by which the message is recovered from the modulated signal at the receiver. The devices used for demodulation or detection are called demodulators or detectors.

6. Define the term modulation index for AM.

Modulation index is the ratio of amplitude of modulating signal


(Em) to amplitude of carrier (Ec).
i.e. m = Em / Ec

7. State Carson’s rule of FM bandwidth.

Carson's rule' approximates the bandwidth necessary to


transmit an angle modulated wave as twice the sum of the
peak frequency deviation and the highest modulating signal
frequency.
Carson’s rule of FM bandwidth is given as,
BW = 2(δ + ƒm (max))
Here δ is the maximum frequency deviation and ƒm (max) is the
maximum signal frequency.

8. Differentiate between narrow band FM and wideband FM.

In narrowband FM, the frequency deviation is very small. Hence


the frequency spectrum consists of two major sidebands like
AM. Other sidebands are negligible and hence they can be ne-
glected. Therefore the bandwidth of narrowband FM is limited
only to twice of highest modulating frequency.
If the deviation in carrier frequency is large enough so that other
sidebands cannot be neglected, then it is called wideband FM.
The bandwidth of wideband FM is calculated as per Carson’s
rule.

9. Define frequency modulation.

Frequency modulation is defined as the process by which the


frequency of the carrier wave is changed in accordance with the
instantaneous value of the message signals.

10. Define modulation index for FM.

Modulation index is defined as the ratio of maximum frequency


deviation to the modulating frequency.
Modulation index mf = δ/ƒm

11. Define frequency deviation.

Frequency deviation is the change in frequency that occurs in


the carrier when it is acted on by a modulating signal frequency.
The frequency deviation is typically given as the peak frequency
shift in Hertz (Δf).

12. What is the effect of increasing modulation index in FM?


In FM, the total transmitted power always remains constant.
But with increased depth of modulation, the required bandwidth
is increased.

13. Why is FM superior to AM in performance?


i). In AM system the bandwidth is finite. But FM system has in-
finite number of sidebands in addition to a single carrier.
ii). In FM system all the transmitted power is useful whereas in
AM most of the transmitted power is used by the carrier.
iii). Noise is very less in FM; hence there is an increase in the
signal to noise ratio.

14. Define instantaneous phase deviation

The instantaneous phase deviation is the instantaneous change in phase of the carrier at a given instant of time, and it indicates how much the phase of the carrier is changing with respect to the reference phase.

16. Define angle modulation

Angle modulation is defined as the process by which the fre-


quency or phase of the carrier wave is changed in accordance
with the instantaneous value of the message signals.

17. Define PM.

In phase modulation, the phase of the carrier varies according


to amplitude variations of the modulating signal. The PM signal
can be expressed mathematically as,
ePM = Ec sin(ωct+ mpsinωmt)
Here mp is the modulation index for phase modulation. It is giv-
en as, mp = Φm
Here Φm is the maximum value of phase change.

18. What is the need for pre-emphasis in FM transmission?

The noise has a greater effect on higher modulating frequencies than on lower ones. The effect of noise on higher frequencies can be reduced by artificially boosting them at the transmitter and correspondingly attenuating them at the receiver. This is pre-emphasis.

19. What are the advantages of FM over AM?

a. The amplitude of FM is constant. It is independent of depth of


modulation. Hence transmitter power remains constant in FM
whereas it varies in AM.
b. Since amplitude of FM is constant, the noise interference is
minimum in FM. Any noise superimposing amplitude can be
removed with the help of amplitude limits. Whereas it is difficult
to remove amplitude variations due to noise in AM.

c. The depth of modulation has limitation in AM. But in FM the


depth of modulation can be increased to any value by increasing
the deviation. This does not cause any distortion in FM signal.
d. Since guard bands are provided in FM, there is less possibility of
adjacent channel interference.
e. Since space waves are used for FM, the radius of propagation
is limited to line of sight. Hence it is possible to operate several
independent transmitters on same frequency with minimum in-
terference.
f. Since FM uses UHF and VHF ranges, the noise interference is
minimum compared to AM which uses MF and HF ranges.

20. A 107.6 MHz carrier is frequency modulated by a 7 kHz sine wave. The resultant FM signal has a frequency deviation of 50 kHz. Determine the modulation index of the FM wave.
Here δ = 50 kHz and ƒm = 7 kHz.
Modulation index = δ/ƒm = 50/7 = 7.142

21.
If the rms value of the aerial current before modulation is
12.5 A and during modulation is 14 A, calculate the percent-
age of modulation employed, assuming no distortion.
Here Itotal = 14 A and Ic = 12.5 A.

m = √[2((I²total / I²c) − 1)] = √[2((14² / 12.5²) − 1)] = 0.71

22.
An AM broadcast transmitter radiates 9.5 KW of power with
the carrier unmodulated and 10.925 KW when it is sinusoi-
dally modulated. Calculate the modulation index.

Ptotal = 10.925 kW, Pc = 9.5 kW

m = √[2((Ptotal / Pc) − 1)] = √[2((10.925 / 9.5) − 1)] ≈ 0.55

23. A broadcast radio transmitter radiates 5 KW power when the


modulation percentage is 60%. How much is the carrier power?
Ptotal = 5 kW, m = 0.6, Pc = ?

Ptotal = Pc (1 + m²/2)

5 kW = Pc (1 + 0.6²/2)

Pc = 4.237 kW
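Questions 21 to 23 all rest on the relations It = Ic√(1 + m²/2) and Pt = Pc(1 + m²/2). The Python sketch below (the function names are our own) reproduces the three numerical answers.

# Sketch: single-tone modulation index from antenna-current or power measurements.
import math

def m_from_currents(I_total, I_carrier):
    return math.sqrt(2 * ((I_total / I_carrier) ** 2 - 1))

def m_from_powers(P_total, P_carrier):
    return math.sqrt(2 * (P_total / P_carrier - 1))

def carrier_power(P_total, m):
    return P_total / (1 + m ** 2 / 2)

print(round(m_from_currents(14, 12.5), 2))   # 0.71  (question 21)
print(round(m_from_powers(10.925, 9.5), 2))  # 0.55  (question 22)
print(round(carrier_power(5, 0.6), 3))       # 4.237 kW (question 23)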

24. What are two major limitations of the standard form of


amplitude modulation?

1) Most of the power is transmitted in the carrier. Hence AM is less


efficient.
2) Because of amplitude variations in AM signal, the effect of noise
is more.

25.
The antenna current of an AM transmitter is 8 A when only
carrier is sent, but it increases to 8.96 A when the carrier
is modulated by a single tone sinusoid. Find the percentage
modulation.

Here Itotal = 8.96 A and Ic = 8 A.
Itotal = Ic √(1 + m²/2)

8.96 = 8 √(1 + m²/2)

m = 0.713

26. If a modulated wave with an average voltage of 20 Vp chang-


es in amplitude 5 V, determine the maximum and minimum
envelope amplitudes and the modulation coefficients.

Emax = 20 + 5 = 25 V
Emin = 20 − 5 = 15 V

Modulation index = (Emax − Emin) / (Emax + Emin) = (25 − 15) / (25 + 15) = 0.25

27. A carrier is frequency modulated with a sinusoidal signal of 2 kHz, resulting in a maximum frequency deviation of 5 kHz. Find 1) the modulation index and 2) the bandwidth of the modulated signal.

Given data: modulating frequency ƒm = 2 kHz, maximum frequency deviation δ = 5 kHz.

1) Modulation index mf = δ/ƒm = (5 × 10³)/(2 × 10³) = 2.5

2) Bandwidth of the modulated signal:
BW = 2(δ + ƒm(max))
Here ƒm(max) is the maximum modulating frequency, which is given as 2 kHz.
Hence, BW = 2(5 × 10³ + 2 × 10³) = 14 kHz


28. Calculate the bandwidth of commercial FM transmission assuming Δƒ = 75 kHz and W = 15 kHz.

Here δ = Δƒ = 75 kHz and ƒm(max) = W = 15 kHz.
BW = 2(δ + ƒm(max)) = 2[75 + 15] kHz = 180 kHz
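The arithmetic in solved problems 20 to 28 above can be cross-checked with a few lines of Python. This is only an illustrative numerical check of the formulas already quoted; nothing new is introduced.

import math

# FM modulation index (problem 20): m_f = deviation / modulating frequency
print(50e3 / 7e3)                                  # ~7.14

# AM modulation index from antenna currents (problems 21 and 25)
def m_from_currents(i_total, i_carrier):
    return math.sqrt(2 * ((i_total / i_carrier) ** 2 - 1))

print(m_from_currents(14, 12.5))                   # ~0.71
print(m_from_currents(8.96, 8))                    # ~0.713

# AM powers (problems 22 and 23): P_total = P_c (1 + m**2 / 2)
print(math.sqrt(2 * (10.925 / 9.5 - 1)))           # ~0.55
print(5 / (1 + 0.6 ** 2 / 2))                      # ~4.237 kW

# FM bandwidth, BW = 2(delta_f + f_m) (problems 27 and 28)
print(2 * (5e3 + 2e3), 2 * (75e3 + 15e3))          # 14 kHz, 180 kHz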

29. Define bandwidth efficiency.

It is the ratio of the transmission bit rate to the minimum bandwidth required for a particular modulation scheme. It is denoted as Bη and given by

Bη = Transmission bit rate (bps) / Minimum bandwidth (Hz)
   = (bits/s)/Hz = (bits/s)/(cycles/s) = bits/cycle

30. What is the required bandwidth for an FM signal in terms of frequency deviation?

BW = 2(Δf + fm) Hz

Where,
Δf = peak frequency deviation (Hz)
fm = modulating-signal frequency (Hz)

31. Distinguish between FM and PM.

Sl. No | FM | PM
1 | VFM(t) = Vc cos(ωct + mf sin ωmt) | VPM(t) = Vc cos(ωct + mp cos ωmt)
2 | Associated with the change in fc, there is some phase change | Associated with the change in phase, there is some change in fc
3 | mf is proportional to the modulating voltage and inversely proportional to the modulating frequency fm | mp is proportional to the modulating voltage only
4 | It is possible to receive FM on a PM receiver | It is possible to receive PM on an FM receiver

32. Draw the waveforms of AM signal and DSB-SC signal.

[Figure: AM signal waveform, a carrier of amplitude Vc whose envelope follows the modulating signal Vm versus time; and DSB-SC modulated signal VDSB(t) versus time.]

33. For an AM DSBFC modulator with a carrier frequency of 100 kHz and a maximum modulating signal frequency of 5 kHz, determine the upper and lower side band frequencies and the bandwidth.

Upper side band fUSB = fc + fm = 100 + 5 = 105 kHz
Lower side band fLSB = fc − fm = 100 − 5 = 95 kHz
Bandwidth (B) = 2fm = 10 kHz
ANALOG AND DIGITAL COMMUNICATION

34. Draw the spectrum of an FM signal.

[Figure: FM spectrum with carrier component J0(mf)·Vc at fc, first-order sidebands J1(mf)·Vc at fc ± fm, second-order sidebands J2(mf)·Vc at fc ± 2fm, and so on; the theoretical bandwidth is infinite.]

35. What is the purpose of limiter in FM receiver?


In an FM system, the message information is transmitted by
variations of the instantaneous frequency of a sinusoidal carrier
wave, and its amplitude is maintained constant.
Any variation of the carrier amplitude at the receiver input must
result from noise or interference.
An amplitude limiter, following the IF section is used to remove
amplitude variations by clipping the modulated wave at the IF
section.
36. In an AM transmitter, the carrier power is 10 kW and the modulation index is 0.5. Calculate the total RF power delivered.
Given:
Carrier power Pc = 10 kW
Modulation index (ma) = 0.5
Solution:

We know, total power Pt = Pc(1 + ma²/2)
= 10 × 10³ (1 + (0.5)²/2)
= 11.25 kW
ANALOG COMMUNICATION

REVIEW QUESTIONS
PART - A
1. What is meant by noise?
2. What are the types of noise?
3. Define shot noise.
4. What is flicker noise?
5. Define Amplitude modulation.
6. Differentiate between narrow band and wide band FM signal
7. What is demodulation?
8. Draw the spectrum of FM signal.
9. State Shannon’s Limit for channel capacity theorem. Give an
example.
10. Define Bandwidth efficiency.
11. Distinguish between FM and PM.
12. What is bandwidth need to transmit 4kHz voice signal using AM.
13. Define modulation and modulation index.
14. What is the purpose of limiter in FM receiver?
15. What is modulation index and percentage modulation in AM?
16. Draw the frequency spectrum and mention the band-width of AM
signal.
17. In an AM transmitter, the carrier power is 10kw and the modulation
index is 0.5. Calculate the total RF power delivered.
18. For an AM DSBFC modulator with a carrier frequency of 100KHz
and maximum modulating signal frequency of 5 KHz, determine up-
per and lower side band frequency and the bandwidth.
19. State Carson's rule.
20. In an amplitude modulation system, the carrier frequency is fc = 100 kHz. The maximum frequency of the signal is 5 kHz. Determine the lower and upper sidebands and the bandwidth of the AM signal.

21. The maximum frequency deviation in an FM is 10 KHz and signal


frequency is 10 KHz. Find out the bandwidth using Carson's rule and
the modulation index.
22. If a 10 V carrier is amplitude modulated by two different frequencies
with, amplitudes 2 V and 3V respectively. Find the modulation index:
23. Write down the mathematical expressions for angle modulated wave.
24. A 200W carrier is modulated to a depth of 75%. Calculate the total
power in the modulated wave.
25. Find the peak phase deviation for a PM modulator with a deviation sensitivity K = 5 rad/V and a modulating signal Vm(t) = 2 cos(2π·2000t).
26. What is the basic difference between AM and FM receivers?
27. Draw the frequency spectrum of AM signals.
28. Give the expression that relates the power carried with the modula-
tion index of AM.
29. What are the basic building blocks of phase locked loops?
30. Define the term 'angle modulation'.
31. Find the carrier power of a broadcast radio transmitter radiates 20
KW for the modulation index is 0.6.
32. Mention the advantages of the super heterodyne receiver over TRF
receiver.
33. Define modulation index for FM and PM.
34. List the applications of phase locked loop.
35. Define phase modulation.
36. What is the approximate bandwidth required to transmit a signal at
4 kHz using FM with frequency deviation of 75 kHz?
37. Draw the amplitude modulation waveforms with modulation index m = 1, m < 1 and m > 1.
PART – B
1. Draw the block diagram of AM super heterodyne receiver and explain
ANALOG COMMUNICATION

function of each block.


2. With the help of a block diagram and theory explain FM demodula-
tion employing PLL.
3. (i).What is the need for modulation?
(ii).Explain with necessary diagram any one method for generation
of AM waves.
4. (i) With neat block diagram describe AM transmitter.
(ii). Derive for carrier power and transmitter power in AM in terms of
modulation index.
5. Draw the block diagram and explain generation of DSB-SC signal
using balanced modulator. If the percentage modulation is 100%,
how much percentage of the total power is present in the signal when
DSB-SC is used.
6. Define FM and PM modulation and write their equations. Describe
the generation of FM wave using Armstrong method.
7. Write a note on frequency spectrum analysis of angle modulated
waves.
8. Explain the band width requirements of angle modulated waves.
9. Compare FM and PM.
10. The output of a AM transmitter is given by
Calculate
(1) Carrier frequency
(2) Modulating frequency.
(3) Modulation index
(4) Carrier power if the load is 600 Ω
(5) Total power.
11. In an AM modulator, 500 KHz carrier of amplitude 20 V is modulated
by 10 KHz modulating signal which causes a change in the output
wave of ± 7.5 V. Determine
ANALOG AND DIGITAL COMMUNICATION

(1) Upper and lower side band frequencies


(2) Modulation Index
(3) Peak amplitude of upper and lower side frequency
(4) Maximum and minimum amplitudes of envelope.
12. An AM signal has the equation
(1) Find the carrier frequency.
(2) Find the frequency of the modulating signal.
(3) Find the value of m.
(4) Find the peak voltage of the unmodulated carrier.
(5) Sketch the signal in the time domain, showing voltage and time
scales.
13. For an AM DSBFC wave with an unmodulated carrier voltage of 18 V
and a load resistance of 72Ω, determine the following
(i) Unmodulated carrier power (ii) Modulated carrier power (iii) .Total
sideband power
(iv) Upper and lower sideband powers (v) Total transmitted power.
14. For an AM DSBSC modulator with a carrier frequency fc = 100 KHz
and a maximum modulating signal fm = 5 KHz.
Determine
(1) the frequency limits for the upper and lower sidebands
(2) bandwidth
(3) sketch the output frequency spectrum.
15. What is noise? Explain briefly about sources of noise.

Amplitude Shift Keying (ASK) – Frequency Shift Keying (FSK) Minimum


Shift Keying (MSK) –Phase Shift Keying (PSK) – BPSK – QPSK – 8 PSK –
16 PSK - Quadrature Amplitude Modulation (QAM) – 8 QAM – 16 QAM
– Bandwidth Efficiency– Comparison of various Digital Communication
System (ASK– FSK – PSK – QAM).
Digital Communication

DIGITAL
COMMUNICATION Unit 2
2.1 INTRODUCTION

An electronic communication is transmission, reception and


processing of information. Information is defined as knowledge (or)
intelligence communicated (or) received.

Figure 2.1 shows the block diagram of electronic communication


system.

[Information Source → Transmission Medium → Destination]

Figure 2.1 Simplified block diagram of an electronic

communication system

The information may be analog (or) digital. Analog information


can be voice, picture (or) music.

Digital information may be binary coded number,


alpha numeric codes, graphic symbols, database information etc. But
the information cannot be transmitted as it is. It should be converted
into electrical signals and then modulated.

Modulation can be analog modulation technique such


as Amplitude modulation, Frequency modulation and phase
modulation. But these schemes are being replaced by the digital
modulation schemes. Digital communication technique include
transmission of digital pulses.

Advantages of digital communication

ˆˆ Better noise immunity.



ˆˆ Ease of processing.

ˆˆ Ease of multiplexing.

ˆˆ Error correction and detection becomes more effective in digital


communication.

Definition

We may define a digital communication system as a system that uses low frequency digital information signals to modulate a high frequency carrier, with the transmission taking place in the form of digital pulses.

There are two types of digital communication,

1. Digital transmission (or) Base band data transmission.

2. Digital radio. (or) Pass band data transmission.


2.2 DIGITAL TRANSMISSION SYSTEM

Figure 2.2 shows the simplified block diagram of digital


transmission system.

[Digital input, or analog input via an ADC → terminal interface → physical transmission medium (digital pulses) → terminal interface → digital output, or analog output via a DAC]

Figure 2.2 Digital transmission system

ˆˆ The original source information can be in digital (or) analog form.

ˆˆ If it is in the digital form then it should be directly


transmitted through the transmission medium and received by a
receiver otherwise (i.e. analog signal) then it is converted into Digital
signal and then transmitted.

ˆˆ If we require digital data, the received signal can be taken as it is.



ˆˆ If we require analog data, the received signal can be converted into


analog by DAC.

ˆˆ In digital transmission , there is no modulation. Digital data is


transmitted directly over a pair of wires, co-axial cable (or) fiber optic
cable.

ˆˆ Digital transmission system must need a physical transmission


medium (channel) to communicate data between two point, so it is
suitable for short distance communication only. e.g. data transmitted
from computer to printer, LAN connections etc.
2.3 DIGITAL RADIO

Figure 2.3 shows the block diagram of digital radio system.

[Transmitter: input data → precoder → modulator (analog carrier) → BPF and power amplifier → transmission medium or channel, with noise added. Receiver: BPF and amplifier → demodulator and decoder (with carrier and clock recovery) → output data.]

Figure 2.3 Block Diagram of digital radio system

In a digital radio, the digital pulses modulate an analog carrier


between two or more points.
ANALOG AND DIGITAL COMMUNICATION

In the digital transmission system needs a physical transmission


medium (channel) such as a pair of wire, co-axial cable, optical fiber etc.
But the digital radio generally uses the free space as it’s transmission
medium.

Bit rate, Symbol rate, Baud rate

Bit rate: Bit rate is the number of bits transmitted in one second. It is expressed in bits per second (bps).

Symbol rate: If 'N' successive bits are combined to form a symbol, then the symbol rate becomes,
Symbol rate = Bit rate / N

Baud rate: Baud rate is the rate of change of the signal on the transmission medium after encoding and modulation have occurred. Thus the baud rate is basically the symbol rate.
Baud = 1/ts
Where, ts → time of one signalling element

2.4 INFORMATION CAPACITY

Information capacity is a measure of how much information can be propagated through a communications system; it is a function of bandwidth and transmission time. It is expressed in bits/sec.

Hartley's law

Hartley's law states the relationship among bandwidth, transmission time and information capacity. It is expressed as,

I ∝ B × t ...(1)

Where,
I = Information capacity (bits/sec)
B = Bandwidth (Hertz)
t = Transmission time (sec)

Shannon's information capacity theorem

ˆˆ Electrical signals travel from the transmitter to the receiver through a transmission channel (or medium) which possesses two important characteristics:

(1) Signal to noise ratio (SNR) and
(2) Bandwidth

ˆˆ These two characteristics ultimately decide the maximum capacity of a channel to carry information.

ˆˆ Shannon's theorem for information capacity states the relationship among bandwidth, signal to noise ratio and information capacity.

Shannon's limit for information capacity is

I = B log2(1 + S/N) bits/sec
I = 3.32 B log10(1 + S/N) bits/sec

Where,
I = Information capacity (bps)
B = Bandwidth (Hertz)
S/N = Signal to noise power ratio (unitless)
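As a quick illustration of Shannon's limit, the capacity formula above can be evaluated with a short Python sketch. The 3.1 kHz bandwidth and 30 dB SNR used here are arbitrary illustrative values, not figures from the text.

import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon's limit: I = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 3.1 kHz telephone channel with a 30 dB signal-to-noise ratio
snr = 10 ** (30 / 10)                 # convert dB to a linear power ratio
print(shannon_capacity(3100, snr))    # ~30,898 bits/sec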


ANALOG AND DIGITAL COMMUNICATION

2.5 TRADE-OFF BETWEEN BANDWIDTH AND SNR

• By the Shannon-Hartley theorem we get the information capacity as,

I = B log2(1 + S/N) ...(1)

• Let us try to find the maximum possible value of 'I'. From the equation for 'I' it is evident that it depends on two factors: the bandwidth 'B' and the S/N ratio.

Let us find their effect on ‘I’ one by one.

Effect of S/N on ’I’

If the communication channel is noiseless then N=0. Therefore


(S/N) → ∞ and so ‘I’ also will tend to ∞. Thus the noiseless channel will
have an infinite capacity.

Effect of Bandwidth on 'I'

ˆˆ Now consider that some white Gaussian noise is present, hence (S/N) is not infinite.

ˆˆ As the bandwidth approaches infinity, the channel capacity does not become infinite, since the noise power N = N0B increases with the bandwidth B.

ˆˆ This reduces the value of (S/N) as B increases, assuming the signal power 'S' to be constant.

ˆˆ Thus we conclude that an ideal system with infinite bandwidth has a finite channel capacity. It is denoted by 'I∞' and given by,

I∞ = 1.44 S/N0 ...(2)
Shannon's information Rate

Shannon's information rate is equal to the information capacity according to Shannon's theorem,

∴ Rmax = Imax = 1.44 S/N0 ...(3)
Practically it is very difficult to achieve this rate, because it would require a channel bandwidth equal to ∞, and it is extremely difficult to realise a transmission channel with infinite bandwidth.
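The trade-off described above can be seen numerically with a minimal Python sketch. The signal power S and noise density N0 below are arbitrary illustrative values; the point is only that I = B log2(1 + S/(N0·B)) saturates near 1.44·S/N0 as B grows.

import math

S = 1e-6      # signal power in watts (illustrative value)
N0 = 1e-12    # noise power spectral density in watts/Hz (illustrative value)

# I = B * log2(1 + S / (N0 * B)); as B grows, I approaches 1.44 * S / N0
for B in (1e4, 1e5, 1e6, 1e7, 1e8):
    I = B * math.log2(1 + S / (N0 * B))
    print(f"B = {B:.0e} Hz  ->  I = {I / 1e6:.3f} Mbit/s")

print("limit 1.44*S/N0 =", 1.44 * S / N0 / 1e6, "Mbit/s")   # ~1.44 Mbit/s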

Solved Problems

1. Given a channel with an intended capacity of 30 Mbps. The


bandwidth of this channel is 5 MHZ .What is SNR required in order
to achieve this capacity ?

Given data: I = 30 Mbps

B = 5 MHZ

Solution

According to Shannon's limit theorem, the information capacity is given by,

I = B log2(1 + S/N) ...(1)
(or)
I = 3.32 B log10(1 + S/N)

30 × 10⁶ = 3.32 × 5 × 10⁶ × log10(1 + S/N)
(30 × 10⁶)/(3.32 × 5 × 10⁶) = log10(1 + S/N)
1.807 = log10(1 + S/N)
Antilog(1.807) = 1 + S/N
64 = 1 + S/N
63 = S/N

∴ SNR = 63, or in decibels, 10 log10(63) = 17.99 dB

2. Compute the maximum bit rate for a channel having a bandwidth of 3500 Hz and an SNR of 20 dB. Also calculate the number of levels required to transmit at the maximum bit rate.

Solution

Given data: B = 3500 Hz, S/N = 20 dB = 100.

According to Shannon's information rate,

(i) Rmax = Imax = B log2(1 + S/N)
= 3500 log2[1 + 100]
= 3500 × 3.32 log10(101)
= 23,290 bits/sec

Maximum bit rate ≈ 23,290 bits/sec

(ii) To transmit at this rate in a bandwidth of 3500 Hz, the symbol (baud) rate can be at most 2B = 7000 symbols/sec (the Nyquist rate). Hence the number of bits per symbol must be at least

N = Rmax/(2B) = 23,290/7000 = 3.33, rounded up to N = 4 bits per symbol.

Therefore the number of levels required is Q = 2^N = 2⁴ = 16 levels.
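A short Python check of this solved problem; it is illustrative only and assumes the Nyquist limit of 2B symbols per second used above.

import math

B = 3500                       # channel bandwidth in Hz
snr = 100                      # 20 dB expressed as a power ratio

# Maximum bit rate from Shannon's limit
r_max = B * math.log2(1 + snr)
print(round(r_max))            # ~23,304 bits/sec (3.32*log10 gives ~23,290)

# Nyquist signalling: at most 2B symbols/sec, so at least r_max/(2B) bits/symbol
bits_per_symbol = math.ceil(r_max / (2 * B))
print(bits_per_symbol, 2 ** bits_per_symbol)   # 4 bits/symbol -> 16 levels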


2.6 M-ARY ENCODING

¾¾ M-ary is a term derived from the word binary. M-simply represents


a digit that corresponds to the number of conditions, levels (or)
combinations possible for a given number of binary variables.

¾¾ It is often advantageous to encode at a level higher than the binary


when there are more than two conditions possible.

¾¾ The number of bits necessary to produce a given number of


conditions is expressed mathematically as,

N =log 2 M ...(1)

Where, N = Number of bits necessary

M= Number of conditions

Equation (1) can simply be rearranged to express the number of conditions possible with N bits as,

2^N = M ...(2)

For example,

With one bit, only 2¹ = 2 conditions are possible.
With two bits, only 2² = 4 conditions are possible.
With three bits, only 2³ = 8 conditions are possible.
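A tiny Python sketch of the N = log2 M relationship above (illustrative only); the same values reappear later for QPSK (N = 2), 8-PSK (N = 3) and 16-PSK (N = 4).

import math

# Number of bits N needed to represent M conditions: N = log2(M), M = 2**N
for M in (2, 4, 8, 16):
    print(M, "conditions need", int(math.log2(M)), "bit(s)")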

2.7 DIGITAL CONTINUOUS WAVE MODULATION TECHNIQUES

2.7.1 Introduction

There are basically two types of transmission of digital signals.

(i) Base band data transmission

The digital data is transmitted over the channel directly. There is no carrier or any modulator. This is suitable only for short distance transmission.

(ii) Pass band data transmission

The digital data modulates high frequency sinusoidal


carrier. Hence it is also called as “digital continuous
wave modulation”. It is suitable for transmission over long
distances.

2.7.2 Pass band Data Transmission

There are three basic types of modulation techniques for the transmission of digital signals. These methods are based on the three characteristics of a sinusoidal signal: amplitude, frequency and phase. The corresponding modulation methods are called amplitude shift keying (ASK), frequency shift keying (FSK) and phase shift keying (PSK).

[Figure 2.4 Classification of digital continuous wave modulation systems: ASK, FSK and PSK]

Need of modulation

ˆˆ The modem modulates the digital data signal from the DTE (computer) into an analog signal. This analog signal is then transmitted on the telephone lines.

ˆˆ The digital data signals are not transmitted as they are on the telephone lines, because the digital data consists of binary 0s and 1s; the waveform therefore changes its value abruptly from high to low or from low to high.

ˆˆ In order to carry such a signal without any distortion being


introduced, the communication medium needs to have a large
bandwidth.

ˆˆ Unfortunately the telephone lines do not have high Bandwidth.


Therefore we have to convert the digital signal first into an analog
signal which needs lower bandwidth by means of the modulation
process.

Advantages

The advantage of CW modulation techniques such as ASK,


FSK ,PSK etc. used for transmission of data is that we can use the
telephone lines for transmission of high speed data. Due to the use of
CW modulation the BW requirement is reduced.

Disadvantages

The disadvantages of continuous wave modulation is we need to


use a MODEM along with every computer. This makes the system costly
and complex.
2.8 AMPLITUDE SHIFT KEYING (OR) DIGITAL AMPLITUDE MODULATION (OR) OOK SYSTEM

Definition

In ASK, the digital (binary) information signal directly alters the amplitude of the carrier between two distinct levels, corresponding to 1 and 0. This digital modulation method is also referred to as ON-OFF keying (OOK).
ANALOG AND DIGITAL COMMUNICATION

2.8.1 Mathematical representation

ASK is the simplest type of digital CW modulation. Here the carrier is a sine wave of frequency 'fc'. We can represent the carrier signal mathematically as follows,

Vc(t) = Ac cos ωct ...(1)

ASK can be mathematically expressed as,

VASK(t) = [1 + Vm(t)] (Ac/2) cos ωct ...(2)
Where,
Vm(t) = Digital modulating signal (volts)
Ac = Unmodulated carrier amplitude (volts)
ωc = Analog carrier radian frequency

Case 1

For a bit 1 (logic 1) input, Vm(t) = +1 volt. Equation (2) becomes,

VASK(t) = [1 + 1] (Ac/2) cos ωct
VASK(t) = Ac cos ωct ...(3)

Case 2

For a bit 0 (logic 0) input, Vm(t) = −1 volt. Equation (2) becomes,

VASK(t) = [1 − 1] (Ac/2) cos ωct
VASK(t) = 0 ...(4)

Thus, VASK(t) is either Ac cos ωct or 0; the carrier is either 'ON' or 'OFF'. Therefore ASK is also called ON-OFF keying.
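A minimal numpy sketch of the ON-OFF keying expression above. The carrier frequency, bit rate, amplitude and sampling rate used here are arbitrary illustrative choices, not values from the text.

import numpy as np

def ask_modulate(bits, fc=8.0, fb=1.0, ac=1.0, fs=1000.0):
    """On-off keying: carrier of amplitude ac for bit 1, nothing for bit 0.
    fc = carrier frequency, fb = bit rate, fs = sampling rate (all in Hz)."""
    t = np.arange(0, len(bits) / fb, 1 / fs)
    # repeat each bit over one bit interval (fs/fb samples per bit)
    data = np.repeat(bits, int(fs / fb))
    return t, data * ac * np.cos(2 * np.pi * fc * t)

t, s = ask_modulate(np.array([1, 0, 1, 1, 0]))
print(s.shape)   # 5 bits * 1000 samples per bit = (5000,)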

2.8.2 Graphical representation

The Figure 2.5 shows the ASK- Modulation in graphical manner.

Figure 2.5 ASK Output Waveform

2.8.3 ASK-generator

The Figure 2.6 shows the Ask- generation circuit

Figure 2.6 Block Diagram of ASK-generator

• The digital signal from the computer is a unipolar NRZ (non-return to zero) signal which acts as the modulating signal; it is applied as one input of the product modulator.

• The ASK modulator is nothing but a multiplier followed by a band-pass filter. The carrier signal is applied as the other input of the product modulator.

• Due to the multiplication of the two signals, the ASK output is present only when a binary '1' is to be transmitted.

• The ASK output corresponding to a binary '0' is zero.

• We conclude that a carrier is transmitted when a binary '1' is to be sent and no carrier is transmitted when a binary '0' is to be sent, as shown in Figure 2.5.

2.8.4 ASK –Detector

The Figure 2.7 below shows the ASK demodulation circuit.

[ASK signal → multiplier (second input: coherent carrier Ac cos ωct) → integrator over one bit interval Tb → decision-making device (threshold) → original data]

Figure 2.7 ASK - Demodulator Circuit

• The ASK signal is applied as one input of the multiplier, which is followed by an integrator.

• The locally generated coherent carrier is applied as the other input of the multiplier.

Case 1

If the received signal is Ac cos ωct, then the output of the multiplier is given by,

= Ac² cos² ωct ...(1)



The output of the multiplier is given as the input to the integrator, which acts as an LPF. Therefore the LPF produces only the low frequency component at the output.

= Ac² [(1 + cos 2ωct)/2]
= Ac²/2 + (Ac²/2) cos 2ωct ...(2)

In equation (2), the first term is a DC term and the second term is a second harmonic. The LPF filters out the second term, so only the first term is obtained at the output:

= Ac²/2 ...(3)

The output of the integrator is given to the decision-making device, where it is compared with the threshold value, producing the output logic 1 (i.e. binary '1').

Case 2

If the received signal is zero, then the outputs of the multiplier, the integrator and the decision-making device are all zero. Therefore the output is logic '0' (i.e. binary '0').
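The coherent detection steps just described (multiply by the local carrier, integrate over Tb, compare with a threshold) can be sketched in a few lines of Python; again the signal parameters are illustrative assumptions only.

import numpy as np

def ask_demodulate(signal, fc=8.0, fb=1.0, fs=1000.0, threshold=0.25):
    """Coherent detection: multiply by the local carrier, integrate over each
    bit interval Tb (here a simple mean), then apply a threshold decision."""
    t = np.arange(len(signal)) / fs
    product = signal * np.cos(2 * np.pi * fc * t)        # multiplier stage
    spb = int(fs / fb)                                   # samples per bit
    bits = []
    for k in range(len(signal) // spb):
        integral = product[k * spb:(k + 1) * spb].mean() # integrator / LPF
        bits.append(1 if integral > threshold else 0)    # decision device
    return bits

# Build a test OOK signal directly (carrier present only during 1-bits)
fs, fb, fc = 1000.0, 1.0, 8.0
tx_bits = np.array([1, 0, 1, 1, 0])
t = np.arange(0, len(tx_bits) / fb, 1 / fs)
s = np.repeat(tx_bits, int(fs / fb)) * np.cos(2 * np.pi * fc * t)

print(ask_demodulate(s))   # -> [1, 0, 1, 1, 0]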

Bit time (interval ) Tb

The bit interval is the time required to send one single bit. It is the
reciprocal of the bit rate.

Bit rate

Bit rate is the number of bits transmitted (or) sent in one second.
It is expressed in bits per second (bps).

Relation between bit rate & bit interval,

Bit rate = 1/Bit interval = 1/Tb = fb

Baud rate

Baud rate = fb/N ...(1)

For ASK, we use one bit (0 or 1) to represent one symbol. Therefore, the rate of change of the ASK waveform (baud) is the same as the rate of change of the binary input (bps); thus the bit rate equals the baud rate.

Baud = fb/N = fb/1 = fb

Where N = number of bits = 1

2.8.5 Bandwidth of ASK

The bandwidth of ASK in terms of bit rate is given by,

BW = (fc + fa) − (fc − fa) ...(1)

Where, fa = fundamental frequency of the binary input = fb/2

BW = (fc + fb/2) − (fc − fb/2)
   = fc + fb/2 − fc + fb/2
BW = fb

For ASK, the bandwidth is equal to the bit rate.

Advantages

(1) Simple techniques

(2) Easy to generate and detect.

Disadvantages

(1) It is very sensitive to noise.

(2) It is used at very low bit rates upto 100 bits/sec.


Digital Communication

2.9 FREQUENCY SHIFT KEYING

Definition

The frequency of a sinusoidal carrier is shifted between two


discrete values according to the binary symbol (0 (or) 1).

2.9.1 Mathematical representation

FSK is sometimes called binary FSK (BFSK). The general expression for FSK is,

VFSK(t) = Vc cos{2π(fc + Vm(t)Δf)t} ...(1)

Where,
Vc = Peak analog carrier amplitude
fc = Analog carrier centre frequency
Vm(t) = Binary input (i.e. logic 1 or logic 0)
Δf = Peak shift in the analog carrier frequency

From equation (1) it can be seen that the peak shift in the carrier frequency (Δf) is proportional to the amplitude of the binary input signal Vm(t).

Case 1

For a logic '1' input, Vm(t) = +1 V. Equation (1) becomes,

VFSK(t) = Vc cos{2π(fc + 1·Δf)t}
VFSK(t) = Vc cos{2π(fc + Δf)t} ...(2)

Case 2

For a logic '0' input, Vm(t) = −1 V. Equation (1) becomes,

VFSK(t) = Vc cos{2π(fc − 1·Δf)t}
VFSK(t) = Vc cos{2π(fc − Δf)t} ...(3)


ANALOG AND DIGITAL COMMUNICATION

In BFSK, the centre frequency fc is shifted up and down in the frequency domain by the binary input signal, as shown in Figure 2.8.

[Space frequency fs (logic 0) at fc − Δf and mark frequency fm (logic 1) at fc + Δf on the frequency axis]

Figure 2.8 FSK in frequency domain

When the binary input signal changes from a logic 0 to a logic 1


and vice versa, the output frequency shifts between two frequencies,

(1) Space (or) logic 0 frequency (fs)

(2) Mark (or) logic 1 frequency (fm)

2.9.2 Frequency Deviation (Δf)

It is half the difference between the mark and space frequencies,

Δf = |fm − fs| / 2 ...(4)

Where,
Δf = frequency deviation (Hz)
|fm − fs| = absolute difference between the mark and space frequencies.
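A minimal numpy sketch of binary FSK generation consistent with equations (2) and (3). The centre frequency, deviation and bit rate are arbitrary illustrative values. Note that this simple switched form changes frequency abruptly at bit boundaries, which is exactly the phase-discontinuity problem addressed by CP-FSK in Section 2.10.

import numpy as np

def bfsk_modulate(bits, fc=10.0, df=2.0, fb=1.0, vc=1.0, fs=1000.0):
    """BFSK: transmit fc + df (mark) for a 1 and fc - df (space) for a 0."""
    spb = int(fs / fb)                       # samples per bit
    t = np.arange(len(bits) * spb) / fs
    freq = np.where(np.repeat(bits, spb) == 1, fc + df, fc - df)
    return t, vc * np.cos(2 * np.pi * freq * t)

t, s = bfsk_modulate(np.array([1, 0, 1]))
print(s.shape)    # (3000,)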

2.9.3 Graphical representation

Figure 2.9 shows the graphical representation of FSK modulation.

[Binary input 1 0 1 0 1 versus time; the carrier signal; and the FSK output switching between the mark frequency fm (for 1) and the space frequency fs (for 0)]

Figure 2.9 FSK-Output Waveform

Where,
fm - mark frequency
fs - space frequency

2.9.4 FSK generation

The Figure 2.10 below shows the FSK generation circuit.

[NRZ binary input → FSK modulator (VCO) → FSK output]

Figure 2.10 Block diagram of FSK generator

ˆˆ The VCO acts as the FSK generator; the input binary data is given as the control input of the VCO.

ˆˆ If no binary input is applied (i.e. there is no input signal), the VCO generates its centre frequency, equal to the carrier frequency.

ˆˆ For a logic 1 input, the VCO output frequency shifts to the mark frequency fm (i.e. fc + Δf).

ˆˆ For a logic 0 input, the VCO output frequency shifts to the space frequency fs (i.e. fc − Δf).

ˆˆ We conclude that the VCO output frequency changes back and forth between the space and mark frequencies.

2.9.5 FSK-detection

There are three methods to demodulate the FSK-Signal.

(1) Non – coherent FSK demodulator

(2) Coherent FSK demodulation

(3) PLL-FSK demodulator

2.9.5.1 Non-Coherent FSK demodulator

The Figure 2.11 below shows the FSK demodulator circuit.

[FSK input → power splitter → two band-pass filters (one tuned to fs, one to fm) → envelope detectors → comparator → output (original) data]

Figure 2.11 Block Diagram of FSK demodulator
Digital Communication

ˆˆ FSK demodulation is quite simple with a circuit such as the one shown in Figure 2.11.

ˆˆ The FSK input signal is simultaneously applied to the inputs of both band pass filters (BPFs) through a power splitter.

ˆˆ Each filter passes only the mark (or only the space) frequency on to its respective envelope detector.

ˆˆ The envelope detectors, in turn, indicate the total power in each pass band, and the comparator responds to the larger of the two powers.

ˆˆ If the non-inverting input is greater than the inverting input, the output is taken as logic 1, and vice versa for logic 0.

ˆˆ This type of FSK detection is referred to as non-coherent detection: no frequency involved in the demodulation process is synchronized in phase, in frequency, or both with the incoming FSK signal.

2.9.5.2 Coherent FSK receiver

ˆˆ Figure 2.12 shows the block diagram for a coherent FSK receiver.

[FSK input → power splitter → two multipliers (each with a locally generated carrier) → low pass filters → comparator → output (original) data]

Figure 2.12 Block Diagram of a Coherent FSK receiver


ANALOG AND DIGITAL COMMUNICATION

ˆˆ The FSK input signal is simultaneously applied to the inputs of both multipliers through a power splitter.

ˆˆ Locally generated reference frequencies are applied as the other inputs of the multipliers. These frequencies are not the same as the transmitted reference frequencies, and it is impractical to reproduce a local reference that is coherent with both of them, so coherent FSK detection is seldom used.

ˆˆ The multiplier outputs are passed through low pass filters and the filter outputs are applied to a comparator.

ˆˆ The comparator responds to the larger of the two powers.

ˆˆ If the non-inverting input is greater than the inverting input, then the output is logic 1, and vice versa for logic 0.

2.9.5.3 PLL-FSK demodulator

Figure 2.13 PLL FSK Demodulator

ˆˆ The most common circuit used for demodulation of BFSK is the phase locked loop (PLL), which is shown in Figure 2.13.

ˆˆ Generally, the natural frequency of the PLL is made equal to the centre frequency of the FSK modulator (i.e. the carrier frequency fc) before applying the FSK input.

ˆˆ Two inputs are applied to the phase comparator: one is the received BFSK signal and the other is the VCO output.

ˆˆ The comparator compares the two and produces an error voltage.

ˆˆ The received BFSK input signal will have a frequency of fm or fs (mark frequency or space frequency).

ˆˆ As the FSK frequency shifts between the mark and space frequencies, the dc error voltage at the output of the phase comparator follows the frequency shift.

ˆˆ Corresponding to the two input frequencies, there are two values of dc error voltage (0 or 1). Thus we get binary data at the output, as shown in Figure 2.14.

Figure 2.14 Waveform of PLL-FSK Demodulator

Binary FSK has a poor error performance as compared to the PSK


(or) QAM –system . so it is not used for high performance digital radio
system.

ˆˆ BFSK is used in low performance , low cost, asynchronous data


modems which are used for data communications over analog voice
– band telephone lines.
ANALOG AND DIGITAL COMMUNICATION

Bit rate

Bit rate is the number of bits transmitted (or sent) in one second. It is expressed in bits per second (bps).

Bit rate = 1/Bit interval = 1/Tb = fb

Baud rate

Baud rate = fb/N

For BFSK we use one bit (0 or 1) to represent one symbol. Therefore, the rate of change of the FSK waveform (baud) is the same as the rate of change of the binary input (bps); thus the bit rate equals the baud rate.

Baud rate = fb/1 = fb

Where, N → number of bits = 1

2.9.6 Bandwidth of BFSK

The bandwidth of BFSK in terms of bit rate is given by,

Bandwidth = (fc + fa) − (fc − fa) ...(1)

Where,
fa = fb/2 = fundamental frequency of the binary input

BW = (fc + fb/2) − (fc − fb/2)
   = fc + fb/2 − fc + fb/2
BW = fb ...(2)

For the FSK system, the bandwidth is equal to the baud rate.

Digital Communication

2.9.7 Advantages of BFSK

• Implementation is simple and inexpensive.
• FSK is less affected by noise than BASK.

2.9.8 Disadvantages of BFSK

• The probability of error for a BFSK signal is higher than for BPSK:
  P(e) = (1/2) erfc(√(Eb/2N0))
• More bandwidth is required.
• The error rate is higher compared to BPSK.

2.10 MINIMUM SHIFT KEYING OR CONTINUOUS-PHASE FREQUENCY SHIFT KEYING

Continuous-phase frequency shift keying (CP-FSK), or minimum shift keying, is a binary FSK in which the mark and space frequencies are synchronized with the input binary bit rate. Synchronous simply implies that there is a precise time relationship between the two; it does not mean they are equal.

With CP-FSK, the mark and space frequencies are selected such that they are separated from the centre frequency by an exact multiple of one-half the bit rate [fm and fs = n(fb/2), where n is any integer]. This ensures a smooth phase transition in the analog output signal when it changes from the mark to the space frequency or vice versa.

Figure 2.15 shows the non-continuous FSK-waveform.

Figure 2.15 Non-Continuous FSK Waveform


ANALOG AND DIGITAL COMMUNICATION

It can be seen that when the input changes from a logic 1 to a logic 0 and vice versa, there is an abrupt phase discontinuity in the analog signal. When this occurs, the demodulator has trouble following the frequency shift; consequently, an error may occur.

Figure 2.16 shows the CP-FSK output waveform.

When the output frequency changes, it is a smooth, continuous transition. Consequently, there are no phase discontinuities. CP-FSK has a better bit-error performance than conventional binary FSK for a given signal to noise ratio.

The disadvantage of CP-FSK is that it requires synchronization circuits and is therefore more expensive to implement.
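One common way to obtain the continuous phase described above is to integrate the instantaneous frequency instead of switching between oscillators. The sketch below is illustrative only, with arbitrary parameter values and Δf = fb/2 as in MSK.

import numpy as np

def cpfsk_modulate(bits, fc=10.0, df=0.5, fb=1.0, fs=1000.0):
    """Continuous-phase FSK: integrate the instantaneous frequency so the
    phase never jumps when the bit (and hence the frequency) changes."""
    spb = int(fs / fb)                                    # samples per bit
    freq = np.where(np.repeat(bits, spb) == 1, fc + df, fc - df)
    phase = 2 * np.pi * np.cumsum(freq) / fs              # running phase integral
    return np.cos(phase)

s = cpfsk_modulate(np.array([1, 0, 1, 1]))
print(s.shape)   # (4000,)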

2.11 PHASE SHIFT KEYING (PSK)

This is another form of digital continuous wave (CW) modulation. Phase shift keying (PSK) is the most efficient of the three modulation methods and is therefore used for high bit rates.

In PSK, the phase of the sinusoidal carrier is changed according to the data bit to be transmitted.
Digital Communication

Binary PSK

¾¾ PSK is an M-ary digital modulation scheme similar to conventional phase modulation, except that with PSK the input is a binary digital signal and there are a limited number of possible output phases.

¾¾ The relation between the number of bits and the number of possible conditions is given by,

M = 2^N ...(1)

¾¾ The simplest form of PSK is binary phase shift keying (BPSK), where N = 1 and M = 2. Therefore with BPSK, two phases (2¹ = 2) are possible for the carrier: one phase represents logic 1 and the other phase represents logic 0.

¾¾ As the input digital signal changes state (i.e. from '1' to '0' or from '0' to '1'), the phase of the output carrier shifts between two angles that are separated by 180°.

¾¾ Hence, other names for BPSK are "phase reversal keying" (PRK) and "biphase modulation".

¾¾ BPSK is a form of square-wave modulation of a continuous wave (CW) signal.

2.11.1 Mathematical representation

BPSK can be represented mathematically as,

VBPSK(t) = Vm(t) Vc sin(2πfct) ...(1)

Where,
Vm(t) = a bipolar NRZ signal used to represent the digital data (Vm(t) = ±1)

When a binary '0' is to be transmitted,
VBPSK(t) = −Vc sin(2πfct) ...(2)

When a binary '1' is to be transmitted,
VBPSK(t) = Vc sin(2πfct) ...(3)

2.11.2 Graphical representation

Figure 2.17 shows the graphical representation of BPSK


modulation (or) generation

Figure 2.17 BPSK-Output waveform

2.11.3 BPSK Transmitter

ˆˆ Figure 2.18 shows the simplified block diagram of a BPSK transmitter. The balanced modulator acts as a phase reversing switch.

ˆˆ The binary data signal (0s and 1s) is converted from a unipolar signal into a bipolar NRZ (non-return to zero) signal by a level converter.

ˆˆ The output of the level converter is then applied to a multiplier (balanced modulator) as one of its inputs. The other input to the multiplier is the carrier signal sin ωct.


[Binary data input → level converter (unipolar to bipolar NRZ, ±V) → balanced modulator (second input: carrier sin ωct from the reference carrier oscillator via a buffer) → band pass filter → PSK modulated output]

Figure 2.18 Block Diagram of BPSK-transmitter


ˆˆ The output of balanced modulator (multiplier) is either carrier
signal (sin wct) (or) phase shifted carrier signal (-sin ωct), depending
on binary input logic 1 (or) logic 0 respectively.
ˆˆ The operation of balanced modulator is explained as follows for the
binary inputs logic 1 and logic 0.
2.11.3.1 Balanced Ring Modulator (Balanced Modulator)
ˆˆ Figure 2.19 Shows the balanced ring modulator circuit

Figure 2.19 Balanced ring modulator circuit


ANALOG AND DIGITAL COMMUNICATION

Operation of the circuit

ˆˆ The operation is explained with the assumptions that the diodes acts
as perfect switches and that they are switched ON and OFF by the
digital data signal.

ˆˆ The operation can be divided into two different modes.

Operation with binary input = logic 1

ˆˆ If the binary input is equal to logic 1 then the equivalent circuit is as


shown in Figure 2.20 . The diodes D1 and D2 are ON (forward biased)
while diodes D3 and D4 are OFF(Reverse Biased).

Figure 2.20 Equivalent circuit of Balanced Ring modulator


for the binary input ‘I’

ˆˆ With the polarity shown, the carrier voltage is developed across the
transformer T2 in phase with the carrier voltage across T1.

ˆˆ Hence, the output signal is in phase with the carrier input signal.

Operation with binary input = logic 0

• If the binary input is equal to logic 0 then the equivalent circuit is as


shown in Figure 2.21. The diodes D1 and D2 are reverse biased and
remains OFF, whereas D3 and D4 are forward biased and remains
ON.
Digital Communication

ˆˆ With the polarity shown, the carrier voltage is developed across the
transformer T2 is out of phase with the carrier voltage across T1.

ˆˆ Hence , the output signal is out - of phase with the carrier input
signal.

Figure 2.21 Equivalent circuit of Balanced ring modulator for the


binary input ‘O’

Output Waveforms

Figure 2.22 BPSK- Output Waveform


ANALOG AND DIGITAL COMMUNICATION

Truth table

Binary input | Output phase
Logic 0      | 180°
Logic 1      | 0°

Table 2.1 Truth Table for BPSK

Phasor diagram

[−sin ωct at 180° (logic 0) and +sin ωct at 0° (logic 1)]

Figure 2.23 Phasor Diagram for BPSK

Constellation diagram

[Two points on the horizontal axis: logic 0 at 180°, logic 1 at 0°]

Figure 2.24 Constellation diagram for BPSK

2.11.4 Binary PSK detection (or) BPSK receiver

• Figure 2.25 below shows the simplified block diagram of a BPSK receiver.

Figure 2.25 Block Diagram of BPSK-Receiver

• The input BPSK signal can be +sin ωct or −sin ωct.

• The coherent carrier recovery circuit detects and regenerates a carrier signal that is both frequency and phase coherent with the original transmit carrier.

• The balanced modulator is a product detector; its output is the product of the two inputs (the BPSK signal and the recovered carrier).

• The low-pass filter (LPF) separates the recovered binary data from the complex demodulated signal.

The mathematical expression for the demodulation process is as follows.

Case 1

Balanced modulator output = BPSK input × regenerated carrier.

For a BPSK input signal of +sin ωct (logic 1), the output of the balanced modulator is,

Output = (sin ωct)(sin ωct) = sin² ωct
       = (1/2)(1 − cos 2ωct)
       = 1/2 − (1/2) cos 2ωct
         (constant term) (second harmonic term)

The second term is filtered out by the LPF, which passes only the positive voltage (+1/2 V). A positive voltage represents a demodulated output of logic 1.

Case 2

• For a BPSK input of −sin ωct (logic 0), the output of the balanced modulator is,

Output = (−sin ωct)(sin ωct) = −sin² ωct
       = −(1/2)(1 − cos 2ωct)
       = −1/2 + (1/2) cos 2ωct
         (constant term) (second harmonic term)

The second term is filtered out, leaving only the negative voltage (−1/2 V). A negative voltage represents a demodulated output of logic 0.

In both cases, the LPF output is applied to the level detector and the clock recovery circuit. At the output of the level detector we get the following:

+1/2 V → logic 1
−1/2 V → logic 0

Thus the binary signal is obtained at the output.
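The product-detection arithmetic above (sin² ωct averaging to +1/2 and −sin² ωct to −1/2) can be checked with a short numpy sketch; the carrier, bit rate and sampling values are illustrative assumptions.

import numpy as np

fs, fb, fc = 1000.0, 1.0, 8.0
bits = np.array([1, 0, 0, 1])
spb = int(fs / fb)
t = np.arange(len(bits) * spb) / fs

# Modulator: bipolar NRZ (+1/-1) multiplied by the carrier sin(wc t)
nrz = np.repeat(2 * bits - 1, spb)
bpsk = nrz * np.sin(2 * np.pi * fc * t)

# Receiver: multiply by the recovered carrier, low-pass (mean over a bit),
# then decide on the sign: +1/2 -> logic 1, -1/2 -> logic 0
product = bpsk * np.sin(2 * np.pi * fc * t)
levels = product.reshape(len(bits), spb).mean(axis=1)
print(np.round(levels, 2))            # ~[ 0.5 -0.5 -0.5  0.5]
print((levels > 0).astype(int))       # [1 0 0 1]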

Advantages

(i) BPSK has a Bandwidth which is lower than that of a BFSK – signal.

(ii) BPSK has the better performance of all the system in the presence of
noise .It gives the minimum possibility of error.

(iii) BPSK has a very good noise immunity.

Disadvantage

The only disadvantage of BPSK is that generation


and detection of BPSK is not easy. It is quite complicated,
because the synchronous (coherent) demodulation is used to recover
the original signal from BPSK signal.

Bit rate

Bit rate is the number of bits transmitted (or sent) in one second. It is expressed in bits per second (bps).

Bit rate = 1/Bit interval = 1/Tb = fb

Baud rate

Baud rate = fb/N

For BPSK we use one bit (0 or 1) to represent one symbol. Therefore, the rate of change of the PSK waveform (baud) is the same as the rate of change of the binary input (bps); thus the bit rate equals the baud rate.

Baud rate = fb/1 = fb, where N = 1 is the number of bits.

Bandwidth of BPSK

The bandwidth of BPSK in terms of bit rate is given by,

Bandwidth = (fc + fa) − (fc − fa) ...(1)

Where, fa = fb/2 = fundamental frequency of the binary input

BW = (fc + fb/2) − (fc − fb/2)
   = fc + fb/2 − fc + fb/2
BW = fb ...(2)

For the PSK system, the bandwidth is equal to the baud rate.


ANALOG AND DIGITAL COMMUNICATION

2.11.5 Comparison of Binary Modulation Systems

S.No  | Parameter                          | Binary ASK                  | Binary FSK                   | Binary PSK
i.    | Variable characteristic            | Amplitude                   | Frequency                    | Phase
ii.   | Bandwidth (Hz)                     | fb                          | fb                           | fb
iii.  | Noise immunity                     | Low                         | High                         | High
iv.   | Error probability                  | High                        | Low                          | Low
v.    | Performance in presence of noise   | Poor                        | Better than ASK              | Better than FSK
vi.   | Complexity                         | Simple                      | Moderately complex           | Very complex
vii.  | Bit rate                           | Suitable up to 100 bits/sec | Suitable up to 1200 bits/sec | Suitable for high bit rates
viii. | Detection method                   | Envelope                    | Envelope                     | Coherent

2.12 DIFFERENTIAL PHASE-SHIFT KEYING (DPSK)

The differential phase shift keying can be treated as the


non- coherent version of PSK. DPSK does not need a synchronous
(coherent) carrier at the demodulator.

DPSK is an alternative form of digital modulation where the


binary input is contained in the difference between two successive
signaling elements rather than the absolute phase.

2.12.1 Differential BPSK Transmitter

Figure 2.26 shows a simplified block diagram of a differential BPSK transmitter.
Digital Communication

ˆˆ An incoming information bit is XNORed with the preceding bit prior to entering the BPSK modulator (balanced modulator).

ˆˆ For the first data bit, there is no preceding bit with which to compare it. Therefore, an initial reference bit is assumed.

ˆˆ If the initial reference bit is assumed to be a logic 1, the output from the XNOR circuit is simply the complement of that shown in the timing diagram.

ˆˆ Figure 2.27 shows the relationship between the input data, the XNOR output data, and the phase at the output of the balanced modulator.

ˆˆ The first data bit is XNORed with the reference bit. If they are the same, the XNOR output is a logic 1; if they are different, the XNOR output is a logic 0.

ˆˆ The balanced modulator operates the same as a conventional BPSK modulator: a logic 1 produces +sin ωct at the output and a logic 0 produces −sin ωct at the output.
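A minimal Python sketch of the XNOR-based differential encoding and decoding described above; the reference bit is assumed to be logic 1, as in the text.

def dbpsk_encode(data_bits, reference=1):
    """XNOR each incoming bit with the previously transmitted bit."""
    encoded, prev = [], reference
    for b in data_bits:
        prev = 1 - (prev ^ b)          # XNOR
        encoded.append(prev)
    return encoded

def dbpsk_decode(encoded_bits, reference=1):
    """Compare each received bit (phase) with the previous one."""
    decoded, prev = [], reference
    for b in encoded_bits:
        decoded.append(1 - (prev ^ b)) # same phase -> 1, different -> 0
        prev = b
    return decoded

data = [1, 0, 1, 1, 0, 0]
tx = dbpsk_encode(data)
print(tx, dbpsk_decode(tx))            # decoded output equals the input data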

2.12.2 Differential BPSK –Receiver

Figure 2.28 and Figure 2.29 shows the block diagram and timing
sequence for a DBPSK receiver.

Figure 2.28 Block Diagram of DBPSK-Detector


ANALOG AND DIGITAL COMMUNICATION

Figure 2.29 Timing Diagram

The received signal is delayed by one bit time and then compared with the next signaling element in the balanced modulator. If they are the same, a logic 1 (+ voltage) is generated. If they are different, a logic 0 (− voltage) is generated.

Balanced modulator output

(+sin ωct)(+sin ωct) = 1/2 − (1/2) cos 2ωct
(−sin ωct)(−sin ωct) = 1/2 − (1/2) cos 2ωct
(−sin ωct)(+sin ωct) = −1/2 + (1/2) cos 2ωct
(+sin ωct)(−sin ωct) = −1/2 + (1/2) cos 2ωct

If the reference phase is incorrectly assumed, only the first demodulated bit is in error.

Bandwidth

Thus the bandwidth of DPSK is = fb. Similar to that of BPSK


only one bit is transmitted at a time (ie) N =1

Advantages

1. Simple technique with which it can be implemented.



2. It has a lower bandwidth requirement as compared to that of BPSK.

3. It does not need a synchronization carrier at the demodulator.

Disadvantage

1. The effect of noise is higher than BPSK.

2. The error rate in DBPSK is higher than that in BPSK.

2.12.3 Bandwidth Consideration of BPSK and DBPSK

The bandwidths of BPSK and DPSK are the same. Mathematically, the output of a BPSK modulator is proportional to

Output = (sin ωat)(sin ωct)

Where, ωat = 2πfat = 2π(fb/2)t and ωct = 2πfct

Output = sin 2π(fb/2)t · sin 2πfct
       = (1/2) cos 2π(fc − fb/2)t − (1/2) cos 2π(fc + fb/2)t

The output frequency spectrum extends from fc − fb/2 to fc + fb/2, and the minimum bandwidth is

BW = (fc + fb/2) − (fc − fb/2) = 2(fb/2)
BW = fb
ANALOG AND DIGITAL COMMUNICATION

2.13 QUADRATURE PHASE SHIFT KEYING

QPSK is an M-ary encoding technique in which two successive bits in a bit stream are combined together to form a message, and each message is represented by a distinct value of phase shift of the carrier.

For QPSK, N = number of bits = 2, and

M = number of possible conditions = 2^N = 2² = 4

With two bits, there are four possible conditions: 00, 01, 10, 11. Therefore, with QPSK, the binary input data are combined into groups of two bits called dibits.

In the modulator, each dibit code generates one of the four possible output phases (+45°, +135°, −45° and −135°).

2.13.1 Graphical Representation

Figure 2.30 Graphical Representation of QPSK-Output


Digital Communication

2.13.2 QPSK Transmitter

ˆˆ A block diagram of the QPSK modulator is shown in Figure 2.31. Two bits (a dibit) are clocked into the bit splitter. After both bits have been serially inputted, they are simultaneously outputted in parallel.

ˆˆ One bit is directed to the I-channel and the other to the Q-channel, each as one input to a balanced modulator.

ˆˆ The carrier oscillator generates the carrier signal, which is applied directly as the second input to the I balanced modulator; a 90° phase-shifted version of the carrier is applied as the second input to the Q balanced modulator.

Figure 2.31 Block Diagram of QPSK - Transmitter (or) Modulator

ˆˆ It can be seen that once a dibit has been split into the I and Q channels, the operation is the same as in a BPSK modulator.

ˆˆ A QPSK modulator is two BPSK modulators combined in parallel.

ˆˆ The balanced modulator outputs are as follows.


ANALOG AND DIGITAL COMMUNICATION

I-balanced modulator

Case 1

logic 1 = 1 volt , carrier = sin ωct

The output of balanced modulator = +sinωct

Case 2

logic 0 = -1 volt , carrier = sinωct,

The output of balanced modulator = -sin ωct

Q - Balanced modulator

Case 1

logic 1 =1 V ,carrier= cos ωct (phase shifted),

The output of balanced modulator = Cos wct.

Case 2

logic 0 =-1 V, carrier =cos ωct (phase shifted)

The output of balanced modulator= -cos ωct.

Linear summer

When the linear summer combines the two quadrature (90° out of phase) signals, there are four possible resultant phasors, given by these expressions:

(+sin ωct) + (+cos ωct) → +45° ...(1)
(−sin ωct) + (+cos ωct) → +135° ...(2)
(+sin ωct) + (−cos ωct) → −45° ...(3)
(−sin ωct) + (−cos ωct) → −135° ...(4)


Digital Communication

Truth table

Binary input (I Q) | QPSK output phase
0 0 | −135°
0 1 | +135°
1 0 | −45°
1 1 | +45°

Table 2.2 Truth Table for QPSK

The output of the linear summer is analog in nature; therefore it can be transmitted directly over the channel.

Figure 2.32 (a) Phasor Diagram, (b) Constellation Diagram for QPSK
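A short Python sketch that reproduces the dibit-to-phase mapping of Table 2.2, treating sin ωct as the 0° reference and cos ωct as +90° (illustrative only).

import math

# Dibit (I, Q) -> I*sin(wct) + Q*cos(wct) with I, Q mapped to +/-1,
# reproducing the phases in Table 2.2 (+45, +135, -45, -135 degrees)
def qpsk_phase(i_bit, q_bit):
    i = 1 if i_bit else -1            # I channel: modulates sin(wct)
    q = 1 if q_bit else -1            # Q channel: modulates cos(wct)
    return math.degrees(math.atan2(q, i))

for i_bit in (0, 1):
    for q_bit in (0, 1):
        print(i_bit, q_bit, qpsk_phase(i_bit, q_bit))
# 0 0 -135.0 | 0 1 135.0 | 1 0 -45.0 | 1 1 45.0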

2.13.3 QPSK receiver

The block diagram of a QPSK receiver is shown in Figure 2.33.

Figure 2.33 Block Diagram of QPSK-receiver

• Consider the received input (cos ωct − sin ωct); how the QPSK receiver operates on this input and detects the output dibit (01) is explained below.

Power splitter

The incoming QPSK signal may have any one of the four possible output phases shown in equations 1, 2, 3 and 4. For example, consider that the QPSK signal (cos ωct − sin ωct) is received. The power splitter directs the input QPSK signal to the I and Q product detectors and also to the carrier recovery circuit.

Carrier recovery circuit

The carrier recovery circuit reproduces the original transmit carrier oscillator signal. The recovered carrier must be frequency and phase coherent with the transmit reference carrier. This recovered carrier is applied directly as one input of the I product detector, and a 90° phase-shifted version is applied as one input of the Q product detector.
Digital Communication

Product detector

The QPSK signal is demodulated in the I and Q product detectors, which regenerate the original I and Q data bits as follows. Mathematically, the demodulation process is:

I-product detector

The received QPSK signal (cos ωct − sin ωct) is one input to the I product detector and the other input is the recovered carrier (sin ωct). The output of the I product detector is,

I = (cos ωct − sin ωct)(sin ωct)
  = cos ωct · sin ωct − sin² ωct
  = (1/2) sin 2ωct − 1/2 + (1/2) cos 2ωct

The double-frequency terms are filtered out, leaving

I = −1/2 V
I = logic 0

Q-product detector

Again, the received QPSK signal (cos ωct − sin ωct) is one input to the Q product detector and the other input is the recovered carrier shifted 90° in phase (cos ωct). The output of the Q product detector is,

Q = (cos ωct − sin ωct)(cos ωct)
  = cos² ωct − sin ωct · cos ωct
  = 1/2 + (1/2) cos 2ωct − (1/2) sin 2ωct

The double-frequency terms are filtered out, leaving

Q = +1/2 V
Q = logic 1

The demodulated I and Q bits (0 and 1 respectively) correspond to the constellation diagram and truth table for the QPSK modulator shown in Figure 2.32.

Bit Combining circuit

The outputs of the product detectors are fed to the bit combining
circuit , where they are converted from parallel I and Q data channels to
a single binary output data stream.

Clock recovery

Clock recovery is the process of recovering the clock signal from the received signal; the recovered clock is synchronized with the transmitter clock. Using this clock, a single binary output data stream is obtained.
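The I and Q product-detector results above (−1/2 and +1/2 for the received signal cos ωct − sin ωct) can be verified numerically with a few lines of numpy; the carrier and sampling values are illustrative assumptions.

import numpy as np

fs, fc = 1000.0, 8.0
t = np.arange(int(fs)) / fs                    # one symbol interval
rx = np.cos(2 * np.pi * fc * t) - np.sin(2 * np.pi * fc * t)   # received (cos - sin)

i_out = (rx * np.sin(2 * np.pi * fc * t)).mean()   # I product detector + LPF
q_out = (rx * np.cos(2 * np.pi * fc * t)).mean()   # Q product detector + LPF

print(round(i_out, 2), round(q_out, 2))            # -0.5  0.5
print(int(i_out > 0), int(q_out > 0))              # I = 0, Q = 1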

2.13.4 Bandwidth considerations of QPSK

With QPSK, because the input data are divided into two channels, the bit rate in either the I or the Q channel is equal to one-half of the input data rate (fb/2).

The output of the balanced modulators can be expressed mathematically as,

Output = (sin ωat)(sin ωct)

Where, ωat = 2πfat = 2π(fb/4)t and ωct = 2πfct
(for QPSK, fa = (fb/2)/2 = fb/4)

∴ Output = sin 2π(fb/4)t · sin 2πfct
         = (1/2) cos 2π(fc − fb/4)t − (1/2) cos 2π(fc + fb/4)t

The output frequency spectrum extends from fc − fb/4 to fc + fb/4 and the minimum bandwidth fN is,

BW = (fc + fb/4) − (fc − fb/4) = 2(fb/4) = fb/2

∴ Bandwidth = fb/2

Advantages of QPSK

1. Very good noise immunity


2. Baud rate is half the bit rate therefore more effective utilization
of the available bandwidth of the transmission channel.
3. Low error probability.
4. Due to these advantages the QPSK is used for very high bit rate
data transmission.
Disadvantages

The generation and detection of QPSK is complex.

2.13.5 QPSK is better than PSK

The QPSK is better than BPSK because,

(1) Due to multilevel modulation used in QPSK, it is possible to increase


the bit –rate to double the bit rate of PSK without increasing the
bandwidth.

(2) The noise immunity of QPSK system is same as that of PSK-system.

(3) Available channel bandwidth is utilized in a better way by the QPSK


system than PSK-system.
ANALOG AND DIGITAL COMMUNICATION

2.13.6 Comparison of BPSK & QPSK

S.No | Characteristic | BPSK | QPSK
1. | Variable characteristic | Phase | Phase
2. | Type of modulation | Two level | Four level (phase)
3. | Type of representation | A binary bit is represented by one phase state | A group of two binary bits is represented by one phase state
4. | Bit rate / Baud rate | Bit rate = Baud rate | Bit rate = 2 × Baud rate
5. | Detection method | Coherent | Coherent
6. | Complexity | Complex | Very complex
7. | Applications | Suitable for applications that need high bit rates | Suitable for very high bit rates
Table 2.3 Comparison of BPSK and QPSK


2.14 8-PSK SYSTEM

With 8-PSK, three bits are encoded at a time, forming tribits and producing eight different output phases. With 8-PSK, N = 3 and M = 8. To encode eight different phases, the incoming bits are encoded in groups of three, called tribits (2³ = 8).

2.14.1 8-PSK Transmitter

A block diagram of an 8-PSK modulator is shown in Figure 2.34.
Digital Communication

I  C  | Output
0  0  | −0.541 V
0  1  | −1.307 V
1  0  | +0.541 V
1  1  | +1.307 V

Table 2.4 I-Channel Truth table

Q  C̄  | Output
0  0  | −0.541 V
0  1  | −1.307 V
1  0  | +0.541 V
1  1  | +1.307 V

Table 2.5 Q-Channel Truth Table

[Fig 2.35 PAM levels: +1.307 V, +0.541 V, 0 V, −0.541 V, −1.307 V]

ˆˆ The incoming serial bit stream enters the bit splitter, where it is converted to a parallel, three-channel output: the I (in-phase) channel, the Q (in-quadrature) channel, and the C (control) channel.

ˆˆ Consequently, the bit rate in each of the three channels is fb/3.

ˆˆ The bits in the I and C channels enter the I-channel 2-to-4 level converter, and the bits in the Q and C̄ channels enter the Q-channel 2-to-4 level converter.

ˆˆ The 2-to-4 level converters are parallel-input digital-to-analog converters (DACs); with two input bits, four output voltages are possible.

ˆˆ The algorithm for the DACs is quite simple. The I (or Q) bit determines the polarity of the output analog signal (logic 1 = +V and logic 0 = −V), whereas the C (or C̄) bit determines the magnitude (logic 1 = 1.307 V and logic 0 = 0.541 V). Consequently, with two magnitudes and two polarities, four different output conditions are possible.

For example

Consider the input bit stream to the bit splitter is 000 (tribit, Q I C).

I – channel

ˆˆ The inputs to the I-channel 2-to-4 level converter are I = 0 and C = 0, so the output is −0.541 V.

ˆˆ The two inputs to the I-channel product modulator are −0.541 and sin ωct. The output is,

I = (−0.541) sin ωct = −0.541 sin ωct

Q-channel

ˆˆ The inputs to the Q-channel 2-to-4 level converter are Q = 0 and C̄ = 1, so the output is −1.307 V.

ˆˆ The two inputs to the Q-channel product modulator are −1.307 and cos ωct. The output is,

Q = (−1.307)(cos ωct) = −1.307 cos ωct

ˆˆ The outputs of the I and Q product modulators are combined in the linear summer to produce the modulated output,

Summer output = −0.541 sin ωct − 1.307 cos ωct

ˆˆ For the remaining tribit codes (001, 010, 011, 100, 101, 110 and 111), the procedure is the same.
Digital Communication

Phasor diagram

[Figure 2.35(a) Phasor diagram for 8-PSK. The eight phasors, referenced to sin ωct (0°) and cos ωct (+90°), are:
(000) −0.541 sin ωct − 1.307 cos ωct, (001) −1.307 sin ωct − 0.541 cos ωct,
(010) +0.541 sin ωct − 1.307 cos ωct, (011) +1.307 sin ωct − 0.541 cos ωct,
(100) −0.541 sin ωct + 1.307 cos ωct, (101) −1.307 sin ωct + 0.541 cos ωct,
(110) +0.541 sin ωct + 1.307 cos ωct, (111) +1.307 sin ωct + 0.541 cos ωct]
Truth Table

Binary input (Q I C) | 8-PSK output phase
0 0 0 | −112.5°
0 0 1 | −157.5°
0 1 0 | −67.5°
0 1 1 | −22.5°
1 0 0 | +112.5°
1 0 1 | +157.5°
1 1 0 | +67.5°
1 1 1 | +22.5°

Table 2.6 Truth table for 8-PSK
ANALOG AND DIGITAL COMMUNICATION

Constellation Diagram

Figure 2.36 Constellation Diagram for 8-PSK
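A short Python sketch of the tribit-to-phase mapping, using the 2-to-4 level converter rules above (the I or Q bit sets the polarity, the C or C̄ bit sets the magnitude); it reproduces the phases in Table 2.6. This is an illustrative sketch, not the book's circuit.

import math

def pam_level(polarity_bit, magnitude_bit):
    """2-to-4 level converter: I/Q bit gives the sign, C (or C-bar) the size."""
    mag = 1.307 if magnitude_bit else 0.541
    return mag if polarity_bit else -mag

def eight_psk_phase(q, i, c):
    i_pam = pam_level(i, c)           # I channel uses I and C
    q_pam = pam_level(q, 1 - c)       # Q channel uses Q and C-bar
    return math.degrees(math.atan2(q_pam, i_pam))

for q in (0, 1):
    for i in (0, 1):
        for c in (0, 1):
            print(q, i, c, round(eight_psk_phase(q, i, c), 1))
# e.g. 0 0 0 -> -112.5, 1 1 1 -> 22.5 (matching Table 2.6)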

2.14.2 8 -PSK Receiver

A block diagram of an 8-PSK receiver is as shown in Figure 2.37.

Figure 2.37 Block diagram of 8- PSK receiver

• The power splitter directs the input 8-PSK signal to the I and Q
product detectors and the carrier recovery circuit.

• The carrier recovery circuit reproduces the original reference oscillator


signal.

• The incoming 8-PSK signal is mixed with the recovered carrier in the
I- product detector and with a Quadrature carrier in the Q-product

detector.

• The output of the product detectors are 4-level PAM-signals that are
fed to the 4-to-2 level analog to digital converters (ADCs).

• The outputs from the I-channel 4-to-2 level converter are the I and C
bits , whereas the outputs from the Q-channel 4-to-2 level converter
are the Q and C bits.

• The parallel –to-serial logic circuit converts the I/C and Q/C bit pairs
to serial Q, I and C output data streams.

For example

Consider the received 8-PSK signal -0.541 sin ωct -1.307 cos ωct

I-channel

The inputs to the I-channel product detector are the received 8-PSK signal, i.e. -0.541 sin ωct - 1.307 cos ωct, and the recovered carrier sin ωct.

The output of the product detector is,

= (-0.541 sin ωct - 1.307 cos ωct) sin ωct
= -0.541 sin² ωct - 1.307 sin ωct cos ωct
= -0.541 (1 - cos 2ωct)/2 - (1.307/2)[sin(ωc + ωc)t + sin(ωc - ωc)t]
= -0.2705 + 0.2705 cos 2ωct - 0.6535 sin 2ωct - 0.6535 sin 0
             (filtered out)      (filtered out)     (= 0)
= -0.2705 V

I = logic 0, C = logic 0

Q-channel

The inputs to Q-channel product detectors are cos ωct and


- 0.541 sin ωct -1.307 cos ωct.

The output of the product detector is,

= (-0.541 sin ωct - 1.307 cos ωct) cos ωct
= -0.541 sin ωct cos ωct - 1.307 cos² ωct
= -(0.541/2)[sin(ωc + ωc)t + sin(ωc - ωc)t] - 1.307 (1 + cos 2ωct)/2
= -0.2705 sin 2ωct - 0.2705 sin 0 - 0.6535 - 0.6535 cos 2ωct
     (filtered out)     (= 0)                 (filtered out)
= -0.6535 V

Q = logic 0, C̄ = logic 1

Therefore, the received output data stream is 000. For the remaining tribits, the procedure is the same.
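The product detection worked out above can also be verified numerically. The following sketch is only an illustration (the 1 kHz carrier and the use of a simple average as the low-pass filter are assumptions, not values from the text); it multiplies the received 8-PSK signal by the recovered sin and cos carriers and averages the products.

    import numpy as np

    fc = 1000.0                                  # assumed carrier frequency for the illustration
    t = np.arange(0, 0.1, 1 / (100 * fc))        # an integer number of carrier cycles

    # received 8-PSK signal for tribit 000 (from the transmitter example)
    rx = -0.541 * np.sin(2 * np.pi * fc * t) - 1.307 * np.cos(2 * np.pi * fc * t)

    # product detectors followed by averaging (a crude low-pass filter)
    i_out = np.mean(rx * np.sin(2 * np.pi * fc * t))   # about -0.2705 = -0.541/2
    q_out = np.mean(rx * np.cos(2 * np.pi * fc * t))   # about -0.6535 = -1.307/2

    print(i_out, q_out)   # both negative, so I = 0 and Q = 0; the magnitudes give C and C-bar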

Bandwidth consideration for 8-PSK systems

The output of the product modulator can be expressed


mathematically as follows,

Output = (sin ωat)(sin ωct)

where ωa = 2πfa is the highest fundamental frequency of the modulating (PAM) signal and ωc = 2πfc is the carrier frequency.

For 8-PSK, the bit rate in each of the I, Q and C channels is fb/3; therefore the highest fundamental frequency is

fa = (fb/3)/2 = fb/6

Output = sin 2π(fb/6)t · sin 2πfct
       = (1/2) cos 2π(fc - fb/6)t - (1/2) cos 2π(fc + fb/6)t

The output frequency spectrum extends from fc - fb/6 to fc + fb/6, and the minimum bandwidth is

BW = (fc + fb/6) - (fc - fb/6) = 2fb/6

BW = fb/3
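The same double-sideband argument gives a minimum bandwidth of fb/N for N bits per symbol, which the short sketch below evaluates for a few PSK orders (the 48 kbps input is only an illustrative figure, reused in a solved problem later in this unit).

    def min_bandwidth_mpsk(fb, n):
        # Following the derivation above: channel bit rate fb/n,
        # highest fundamental fa = fb/(2n), so BW = 2*fa = fb/n.
        fa = fb / (2 * n)
        return 2 * fa

    fb = 48_000   # example input bit rate in bps
    for name, n in [("BPSK", 1), ("QPSK", 2), ("8-PSK", 3), ("16-PSK", 4)]:
        print(f"{name}: baud = {fb / n:.0f}, minimum BW = {min_bandwidth_mpsk(fb, n):.0f} Hz")

For 8-PSK this gives 16 kHz, i.e. fb/3 as derived above.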

2.15 16-PSK

• 16- PSK is an M-ary encoding technique where M = 16, There are 16


different output phases possible

• With 16-PSK, four bits (called a quadbit) are combined, producing 16 different phases (where n = 4 and M = 2⁴ = 16)

• The minimum bandwidth and baud equal to one-fourth the bit rate
(fb/4)

• For 16-PSK, the angular separation between adjacent output phases is only 22.5°. Therefore a 16-PSK signal can undergo only an 11.25° phase shift during transmission and still retain its integrity.

• Figure 2.38 and the Table 2.7 shows the constellation diagram and
the truth table for 16-PSK respectively.

Bit code    Phase
0000        11.25°
0001        33.75°
0010        56.25°
0011        78.75°
0100        101.25°
0101        123.75°
0110        146.25°
0111        168.75°
1000        191.25°
1001        213.75°
1010        236.25°
1011        258.75°
1100        281.25°
1101        303.75°
1110        326.25°
1111        348.75°

Fig 2.38 Constellation diagram for 16-PSK

Table 2.7 Truth table for 16-PSK

2.16 QUADRATURE –AMPLITUDE MODULATION

• In all the PSK methods discussed till now, one symbol is distinguished from another by phase; all the symbols transmitted using BPSK, QPSK or M-ary PSK are of the same amplitude.
• The ability of a receiver to distinguish one signal vector from another in the presence of noise depends on the distance between the vector end points.
• This suggests that the noise immunity will improve if the signal vectors differ not only in phase but also in amplitude.
• Such a system is called an amplitude and phase shift keying system.
• In this system the direct modulation of carriers in quadrature (i.e. cos ωct and sin ωct) is involved; therefore this system is called quadrature amplitude phase shift keying (QAPSK), or simply QASK.
It is also known as Quadrature amplitude modulation (QAM).

Types of QAM
Depending on the number of bits per message the QAM –signals
are classified as follows,
Name       Bits per symbol    Number of symbols
4-QAM      2                  2² = 4
8-QAM      3                  2³ = 8
16-QAM     4                  2⁴ = 16
32-QAM     5                  2⁵ = 32
64-QAM     6                  2⁶ = 64
2.16.1 8-QAM –Transmitter
¾¾ 8-QAM is an M-ary encoding technique where M=8, unlike 8-PSK,
the output signal from an 8-QAM modulator is not a constant
amplitude signal.
¾¾ Figure 2.39 shows the block diagram of an 8-QAM transmitter. The only
difference between the 8-QAM transmitter and the 8-PSK transmitter
is the omission of the inverter between the C-channel and the
Q-product modulator.
¾¾ As with 8-PSK, the incoming data are divided into groups of three bits (tribits). The I, Q and C bit streams each have a bit rate equal to one-third of the incoming data rate (fb/3).

Figure 2.39 Block of diagram of 8-QAM Transmitter



¾¾ The I and Q bits determine the polarity of the PAM signal at the output of the 2-to-4 level converters, and the C bit determines the magnitude. Because the C bit is fed uninverted to both the I- and the Q-channel 2-to-4 level converters, the magnitudes of the I and Q PAM signals are always equal.

¾¾ Their polarities depend on the logic condition of the I and Q bits and, therefore, may be different.

¾¾ The truth tables for the I- and Q-channel 2-to-4 level converters are identical.

Truth table
I/Q C Output
0 0 -0.541 V
0 1 -1.307 V
1 0 +0.541 V
1 1 +1.307 V

Table 2.8 Truth Table for 2-4 level converters

Binary input 8-QAM output

Q   I   C     Amplitude    Phase
0   0   0     0.765 V      -135°
0   0   1     1.848 V      -135°
0   1   0     0.765 V      -45°
0   1   1     1.848 V      -45°
1   0   0     0.765 V      +135°
1   0   1     1.848 V      +135°
1   1   0     0.765 V      +45°
1   1   1     1.848 V      +45°

Table 2.9 Truth Table for 8-QAM transmitter



Figure 2.40 Phasor Diagram Figure 2.41 Constellation Diagram

Figure 2.42 Output phase and Amplitude Versus time relationship


for 8-QAM
For example

Consider the input bit stream to the bit splitter is 000.

I-channel

The inputs to the I-channel 2-to-4 level converter are I = 0 and C = 0; the output of the 2-to-4 level converter is -0.541 V.

The two inputs to the I-channel product modulator are -0.541 V and sin ωct. The output is,

I = (-0.541)(sin ωct) = -0.541 sin ωct



Q-channel

The inputs to the Q-channel 2-to-4 level converter are Q = 0 and C = 0; the output is -0.541 V.
The two inputs to the Q-channel product modulator are -0.541 V and cos ωct. The output is
Q = (-0.541)(cos ωct) = -0.541 cos ωct
The outputs from the I and Q –channel product modulators are
combined in the linear summer and produce a modulated output of
Summer output =-0.541 sin ωct -0.541 cos ωct
(or)
= 0.765 sin (ωct - 135°)
For the remaining tribit codes, the procedure is the same.
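A minimal Python sketch of this tribit-to-output mapping (our own illustration, with the C bit fed uninverted to both channels as stated above) reproduces the amplitudes and phases of Table 2.9.

    import math
    from itertools import product

    def two_to_four(data_bit, c_bit):
        # polarity from the I or Q bit, magnitude from the C bit (Table 2.8)
        magnitude = 1.307 if c_bit == 1 else 0.541
        return magnitude if data_bit == 1 else -magnitude

    for q, i, c in product((0, 1), repeat=3):
        i_level = two_to_four(i, c)   # in 8-QAM the C bit is fed uninverted to both channels
        q_level = two_to_four(q, c)
        amplitude = math.hypot(i_level, q_level)
        phase = math.degrees(math.atan2(q_level, i_level))
        print(f"QIC={q}{i}{c}: {amplitude:.3f} V at {phase:+.0f} deg")

The tribit 000 gives 0.765 V at -135°, as in the worked example.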
2.16.2 8-QAM Receiver
An 8-QAM receiver is almost identical to the 8-PSK receiver.
The differences are the PAM levels at the output of the product
detectors and the binary signals at the output of the analog –to digital
converters.
Because there are two transmit amplitudes possible with 8-QAM
that are different from those achievable with 8-PSK , the four demodulated
PAM –levels in 8-QAM are different from those in 8-PSK. Therefore,
the conversion factor for the analog-to- digital converters must also be
different.
Also, with 8-QAM the binary output signals from the I-channel
analog-to-digital converter are the I and C bits, and the binary output
signals from the Q-channel analog to digital converter are Q and C-bits.
Bandwidth consideration for 8-QAM
Bandwidth required for 8-QAM is the same as in 8-PSK.

The minimum BW = fb/3

2.17 16 QAM

As with 16-PSK, 16-QAM is an M-ary system where M = 16. The input data are acted on in groups of four (2⁴ = 16). With 16-QAM, both the phase and the magnitude of the transmitted carrier are varied.

2.17.1 16-QAM Transmitter

The block diagram of 16 QAM transmitter is as shown in Figure


2.43. The input binary data are divided into four channels: I, I’, Q, Q’.
The bit rate in each channel is equal to one-fourth of the input bit rate
(fb/4).

Bit Splitter

Four bits are serially clocked into the bit splitter; they are then output simultaneously and in parallel on the I, I', Q and Q' channels.

Figure 2.43 Block diagram of 16-QAM Transmitter

2- to 4- level converter

The I and Q bits determine the polarity at the output of the 2-to-4 level converters (a logic 1 = positive and a logic 0 = negative). The I' and Q' bits determine the magnitude (a logic 1 = 0.821 V and a logic 0 = 0.22 V). Consequently, each 2-to-4 level converter generates a 4-level PAM signal.

Two polarities and two magnitudes are possible at the output of


each 2- to -4 level converters. They are ± 0.22 V and ±0.821V .

Balanced Modulator (Product Modulator)

The PAM signals modulate the in-phase and quadrature


carriers in the balanced modulator (product modulator). Four outputs
are possible for each product modulator

For the I-product modulator, they are +0.821 sin ωct, -0.821 sin ωct, +0.22 sin ωct and -0.22 sin ωct.

For the Q-product modulator, they are +0.821 cos ωct, -0.821 cos ωct, +0.22 cos ωct and -0.22 cos ωct.

Summer

The linear summer combines the outputs from the I and Q channel product modulators and produces the 16 output conditions necessary for 16-QAM.

Figure 2.44 shows the truth table, phasor diagram and constellation diagram for 16-QAM.

For example

For a quadbit input of 0000, determine the output amplitude and phase of the 16-QAM modulator.

I - Channel

The two inputs applied to 2 to 4 level converters are I = 0 and I’ =


0. The output is -0.22 V.

The two inputs to the I-channel product modulator are -0.22 V and sin ωct. The output is,

I = -0.22 sin ωct

Q - Channel

The two inputs applied to 2- to -4 level converters are Q = 0 and


Q’ = 0. The output is -0.22 V.

The two inputs to the Q-channel product modulator are -0.22 V and cos ωct. The output is,

Q = -0.22 cos ωct

Summer output = -0.22 sin ωct - 0.22 cos ωct
              = 0.311 sin (ωct - 135°)

For the remaining quadbit codes, the procedure is the same.
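The following sketch (an illustration of the mapping just described, not a circuit from the text) computes the amplitude and phase for every quadbit and can be used to check Table 2.10.

    import math
    from itertools import product

    def two_to_four(data_bit, prime_bit):
        # I or Q bit gives the polarity; I' or Q' bit gives the magnitude
        # (logic 1 -> 0.821 V, logic 0 -> 0.22 V)
        magnitude = 0.821 if prime_bit == 1 else 0.22
        return magnitude if data_bit == 1 else -magnitude

    for q, qp, i, ip in product((0, 1), repeat=4):
        i_level = two_to_four(i, ip)
        q_level = two_to_four(q, qp)
        amplitude = math.hypot(i_level, q_level)
        phase = math.degrees(math.atan2(q_level, i_level))
        print(f"Q Q' I I' = {q}{qp}{i}{ip}: {amplitude:.3f} V at {phase:+.0f} deg")

The quadbit 0000 gives 0.311 V at -135°, matching the worked example.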



Truth table

Binary input           16-QAM output
Q   Q'  I   I'         Amplitude    Phase
0   0   0   0          0.311 V      -135°
0   0   0   1          0.850 V      -165°
0   0   1   0          0.311 V      -45°
0   0   1   1          0.850 V      -15°
0   1   0   0          0.850 V      -105°
0   1   0   1          1.161 V      -135°
0   1   1   0          0.850 V      -75°
0   1   1   1          1.161 V      -45°
1   0   0   0          0.311 V      135°
1   0   0   1          0.850 V      165°
1   0   1   0          0.311 V      45°
1   0   1   1          0.850 V      15°
1   1   0   0          0.850 V      105°
1   1   0   1          1.161 V      135°
1   1   1   0          0.850 V      75°
1   1   1   1          1.161 V      45°

Table 2.10 Truth table for 16-QAM

2.17.2 Bandwidth Considerations of 16-QAM

With 16- QAM, because the input data are divided into four
channels, the bit rate in the I, I’, Q, Q’ channel is equal to one-fourth of
the binary input data rate (fb/4).

The balanced modulator output can be represented mathematically as

Output = (x sin ωat)(sin ωct)

where ωa = 2πfa, ωc = 2πfc and x = ±0.22 or ±0.821.

The bit rate in each of the I, I', Q and Q' channels is fb/4, so the highest fundamental frequency of the PAM signal is

fa = (fb/4)/2 = fb/8

Output = x sin 2π(fb/8)t · sin 2πfct
       = (x/2) cos 2π(fc - fb/8)t - (x/2) cos 2π(fc + fb/8)t

∴ The output frequency spectrum extends from fc - fb/8 to fc + fb/8.

∴ Minimum bandwidth = (fc + fb/8) - (fc - fb/8) = 2fb/8

BW = fb/4

2.18 CARRIER RECOVERY (PHASE REFERENCING)

Definition

Carrier recovery is the process of extracting a phase-coherent


reference carrier from a received signal. This is sometimes called phase
referencing.

Need for carrier recovery circuit

In ASK and FSK systems the carrier phase is constant; only the amplitude or the frequency of the carrier is changed, respectively. So, at the receiver, a separately generated carrier can be multiplied with the received signal to obtain the original information.

But , in phase shift keying (PSK) system, the phase of the carrier
is changed according to the instantaneous value of modulating signal.

In PSK , the binary data were encoded as a precise phase of the


transmitted carrier.

To correctly demodulate the data, a phase- coherent carrier was


recovered and compared with the received carrier in a product detector.

To determine the absolute phase of the received carrier, it is


necessary to produce a carrier at the receiver that is phase coherent
with the transmit reference oscillator. This is the function of the carrier
recovery circuit.

With PSK and QAM, the carrier is suppressed in the balanced modulators and therefore is not transmitted; consequently, at the receiver, the carrier cannot simply be tracked with a standard PLL.

With suppressed-carrier systems such as PSK and QAM, a more sophisticated carrier recovery method is necessary, such as

(1) the squaring loop method, and

(2) the Costas loop method.

2.18.1 Squaring loop method

A common method of achieving carrier recovery for BPSK is the squaring loop. The block diagram of the squaring loop carrier recovery circuit is shown in Figure 2.45.

Figure 2.45 Block Diagram of Squaring carrier recovery for a BPSK

Band pass filter


The received BPSK signal is given to the BPF. The BPF reduces the spectral width of the received noise and delivers the required signal to the squarer circuit.
Squarer circuit
The squaring circuit removes the modulation and generates the second harmonic of the carrier frequency. With BPSK, only two output phases are possible: +sin ωct and -sin ωct.
Mathematically, the squaring loop circuit operation can be
described as follows,

Case 1: If the received signal is +sin ωct, the output of the squaring circuit is given by,

Squarer output = (+sin ωct)(+sin ωct)
              = sin² ωct = (1/2)(1 - cos 2ωct)
              = 1/2 - (1/2) cos 2ωct
                (the dc term 1/2 is filtered out)
              → cos 2ωct component

Case 2: If the received signal is -sin ωct, the output of the squaring circuit is given by,

Squarer output = (-sin ωct)(-sin ωct)
              = sin² ωct = (1/2)(1 - cos 2ωct)
              = 1/2 - (1/2) cos 2ωct
                (the dc term 1/2 is filtered out)
              → cos 2ωct component

In both cases, the output from the squaring circuit contained a


constant voltage (+1/2 V) and a signal at twice the carrier frequency
(cos 2 ωct). The constant voltage is removed by filtering, leaving only
cos 2 ωct.
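The effect of the squarer can be seen numerically with the sketch below (the 1 kHz carrier, the bit pattern and the FFT-based check are our own illustrative assumptions): squaring a ±sin ωct BPSK signal leaves a spectral line at exactly twice the carrier frequency, independent of the data.

    import numpy as np

    fc = 1000.0                       # assumed carrier frequency
    fs = 100 * fc                     # sampling rate
    t = np.arange(0, 0.05, 1 / fs)

    # BPSK: carrier multiplied by a +/-1 data pattern (0 or 180 degree phase)
    data = np.where(np.floor(t / 0.005) % 2 == 0, 1.0, -1.0)
    bpsk = data * np.sin(2 * np.pi * fc * t)

    squared = bpsk ** 2               # sin^2 = 1/2 - (1/2) cos 2wct, so the data falls away

    spectrum = np.abs(np.fft.rfft(squared))
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    print("strongest non-dc component at", freqs[np.argmax(spectrum[1:]) + 1], "Hz")  # 2000 Hz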

Phase-locked loop (PLL)

The output of the squarer is applied to the PLL, which phase-tracks the second harmonic component in the squarer output. The output of the VCO is then applied as the input to a frequency divider network.

Frequency divider

A frequency divider divides the input frequency by a value N. Here a divide-by-2 network is used, and the phase-reference carrier for the product detectors is obtained.

2.18.2 Costas loop method

A second method of carrier recovery is the Costas, or quadrature, loop shown in Figure 2.46.

Figure 2.46 Block diagram of Costas loop carrier recovery circuit

Power splitter
The power splitter circuit directs the input received PSK signal to
the I and Q Balanced modulators (or) product detectors.
I-balanced modulator
It has two inputs: one is the output of the power splitter (i.e. the received signal) and the other is the VCO output.
It produces the in-phase (I) signal for the balanced product detector.
Q-balanced modulator
It also has two inputs: one is the received signal and the other is the 90° phase-shifted VCO output.
It produces the 90° phase-shifted quadrature (Q) signal for the balanced product detector.
Loop filter
The output of the balanced product detector is the product of the I and Q signals, and it is applied as the input to the loop filter.
The loop filter is designed with its cut-off around the carrier frequency (i.e. the ±ωc components are removed) and produces an error voltage.
VCO
The error voltage from the loop filter acts as the control voltage for the VCO and sets the frequency it produces.
Once the frequency of the VCO is equal to the suppressed carrier frequency, the VCO is in the lock-in condition.
The output of the VCO acts as the carrier signal and is applied to the I- and Q-balanced modulators in the receiver circuit.

2.19 CLOCK RECOVERY CIRCUIT


Clock recovery is the process of extracting the phase coherent
clock from the received signal.
2.19.1 Need of clock recovery circuit
Any digital radio system requires precise timing or clock synchronization between the transmit and receive circuitry. Because of this, it is necessary to generate clocks at the receiver that are synchronous with those at the transmitter, in order to reproduce the original data.

Figure 2.48 Clock recovery circuit
Figure 2.49 Output waveforms for the clock recovery circuit
Operation

Figure 2.48 Shows the simple circuit that is commonly used to


recover clocking information from the received data.

The recovered data are delayed by one-half a bit time and then
compared with the original data in an XOR-circuit.

The frequency of the clock that is recovered with this method is


equal to the received data rate(fb).

Figure 2.49 Shows the relationship between the data and the
recovered clock timing.

From the figure, we conclude that as long as the received data contain a substantial number of transitions (1/0 sequences), the recovered clock is maintained.

If the received data were to undergo an extended period of successive 1's or successive 0's (i.e. no transitions), the recovered clock would be lost.

To prevent this from occurring , the data are scrambled at the


transmitter end and descrambled at the receive end.

Scrambling introduces transitions (pulses) into the binary


signal using a prescribed algorithm , and the descrambler uses the same
algorithm to remove the transitions.
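The delay-and-XOR idea can be illustrated with a few lines of Python (the bit pattern and the oversampling factor are arbitrary assumptions; in a real receiver the XOR pulses would be used to synchronize a local clock oscillator rather than printed).

    bits = [1, 0, 1, 1, 0, 0, 1, 0]            # toy received data
    samples_per_bit = 8                        # assumed oversampling for the illustration
    data = [b for b in bits for _ in range(samples_per_bit)]

    half_bit = samples_per_bit // 2
    delayed = [0] * half_bit + data[:-half_bit]      # data delayed by one-half bit time

    pulses = [d ^ q for d, q in zip(data, delayed)]  # XOR gives a pulse at every transition

    print("data  :", "".join(map(str, data)))
    print("pulses:", "".join(map(str, pulses)))
    # A half-bit-wide pulse appears after each 0->1 or 1->0 transition; a long run of
    # identical bits produces no pulses, which is why the data are scrambled.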
2.20 Comparison of various digital communication systems

Columns in order: BASK | BFSK | MSK | BPSK | DPSK | QPSK | M-ary PSK | QAM

1. Variable characteristic: Amplitude | Frequency | Frequency | Phase | Phase | Phase | Phase | Amplitude and phase

2. Bits per symbol: One | One | Two | One | One | Two | N bits | N bits

3. Number of possible symbols: Two | Two | Four | Two | Two | Four | M = 2ᴺ | M = 2ᴺ

4. Symbol duration: Tb | Tb | 2Tb | Tb | 2Tb | 2Tb | NTb | NTb

5. Minimum bandwidth: 2fb | 4fb | 1.5fb | 2fb | fb | fb | 2fb/N | 2fb/N

6. Performance in presence of noise: Poor | Better than ASK | Better | Better than ASK | Better than FSK | Better | Better | Better than ASK

7. Error possibility: High | Lower than ASK | Low | Lower than FSK | Low | Low | Low | Low

8. Complexity: Simple | Moderately complex | Complex | Complex | More complex | More complex | More complex | More complex

9. Detection method: Non-coherent | Coherent | Coherent | Coherent | Non-coherent | Coherent | Coherent | Coherent

10. Minimum Euclidean distance: √Eb | √(2Eb) | 2√Eb | 2√Eb | 2√Eb | 2√Eb | 2√Eb sin(π/M) | 0.4√Eb for M = 16

11. Equation of transmitted signal:
BASK: s(t) = √(2Ps) cos 2πf0t for symbol '1'; s(t) = 0 for symbol '0'
BFSK: s(t) = √(2Ps) cos(2πf0t + d(t)Ωt), where Ω is the frequency shift
MSK: s(t) = √(2Ps) bE(t) cos(πt/2Tb) cos 2πf0t + √(2Ps) bO(t) sin(πt/2Tb) sin 2πf0t
BPSK: s(t) = √(2Ps) b(t) cos 2πf0t
DPSK: s(t) = √(2Ps) d(t) cos 2πf0t, where d(t) is the differentially coded bit stream
QPSK: s(t) = √(2Ps) cos[2πf0t + (2m+1)π/4], m = 0, 1, 2, 3
M-ary PSK: s(t) = √(2Ps) cos[2πf0t + (2m+1)π/M], m = 0, 1, 2, ..., M-1
QAM: s(t) = K1√(0.2Ps) cos 2πf0t + K2√(0.2Ps) sin 2πf0t, where K1, K2 = ±1 or ±3 for M = 16

SOLVED TWO MARKS

1. What do you mean by ASK?

ASK (Amplitude Shift Keying) is a modulation technique which


converts digital data to analog signal. In ASK, the two binary
values (0, 1) are represented by two different amplitudes of the
carrier signal.

2. What do you mean by FSK?

FSK (Frequency Shift Keying) also a modulation technique which


converts digital data to analog signal. In FSK, the two binary
values (0, 1) are represented by two different frequencies near
the carrier frequency.

3. What do you mean by PSK?

PSK (Phase Shift Keying) also a modulation technique which


converts digital data to analog signal. In PSK, the two binary
values (0, 1) are represented by two different phases (0° or 180°)
of the carrier phase.

4. What are antipodal signals?


In BPSK, the two symbols are transmitted with the help of the following signals,
Symbol '1': s1(t) = √(2P) cos(2πf0t)
Symbol '0': s2(t) = √(2P) cos(2πf0t + π)
Observe that the above two signals differ only in a relative phase shift of 180°. Such signals are called antipodal signals.

5. What is correlator?

A correlator is a coherent receiver. It correlates the received noisy signal f(t) with a locally generated replica of the known signal x(t). Its output is given as,

r(t) = ∫₀ᵀ f(t) x(t) dt

Matched filter and correlator are functionally the same.

6. Define minimum shift keying.


Minimum shift keying uses two orthogonal signals to transmit binary '0' and '1'. The difference between these two frequencies is the minimum possible. Hence there are no abrupt changes in amplitude, and the modulated signal is continuous and smooth.

7. What do you mean bit rate and baud rate?


The rate at which data (bits) are transmitted is called bit rate.
That is number of bits transmitted per second. Unit is bps(bits
per second).
The rate at which signal elements (pulses) are transmitted is
called baud rate (modulation rate). This means number of signal
elements(pulses) transmitted per second. Unit is bauds.

8. Differentiate Binary PSK and QPSK.


Binary PSK
1. Two different phases are used to represent two binary values.
2. Each signal element represents only one bit.
QPSK
1. Four different phases are used to represent the four possible dibits (two-bit combinations).
2. Each signal element represents two bits.

9. Give the difference between standard FSK and MSK.

Sr. No FSK MSK



1. The two frequencies are in- The difference between the
teger multiple of base band two frequencies is minimum
frequency and at the same and at the same time they are
time they are orthogonal. orthogonal.
2. BW = 4fb BW = 1.5 fb
3.     This is binary modulation.            This is quadrature modulation.

10. Compare binary PSK with QPSK.

Sr. No BPSK QPSK



1.     One bit forms a symbol.               Two bits form a symbol.
2.     Two possible symbols.                 Four possible symbols.
3.     Minimum bandwidth is twice fb.        Minimum bandwidth is equal to fb.
4.     Symbol duration = T.                  Symbol duration = 2T.

11. Compare QASK and QPSK.


Sr. No   Parameter                   QPSK                                    QASK
1.       Modulation                  Quadrature phase                        Quadrature amplitude and phase
2.       Location of signal points   All signal points placed on the         Signal points placed symmetrically
                                     circumference of a circle               about the origin
3.       Distance between two        0.15√Eb for 16 symbols and              0.4√Eb for 16 symbols
         signal points               2√Eb for 4 symbols
4.       Complexity                  Relatively simpler                      Relatively complex
5.       Noise immunity              Better than QASK                        Poorer than QPSK, but better than M-ary PSK
6.       Error probability           Less than QASK                          Higher than QPSK, lower than M-ary PSK
7.       Type of demodulation        Coherent                                Coherent

12. Differentiate coherent and non-coherent methods.


Coherent (synchronous) detection

In coherent detection, the local carrier generated at the receiver


is phase locked with the carrier at the transmitter. The detection
is done by correlating received noisy signal and locally generated
carrier. The coherent detection is a synchronous detection.
Non-coherent (envelope) detection
This type of detection does not need receiver carrier to be
phase locked with transmitter carrier. The advantage of such a
system is that the system becomes simple, but the drawback is
that error probability increases. The different digital modulation
techniques are used for specific application areas. The choice is
made such that the transmitted power and channel bandwidth
are best exploited.

13. What are the advantages of QPSK as compared to BPSK?


1) For the same bit error rate, the bandwidth required by QPSK
is reduced to half as compared to BPSK.
2) Because of reduced bandwidth, the information transmission
rate of QPSK is higher.
3) Variation in OQPSK amplitude is not much. Hence carrier
power almost remains constant.

14. Given a channel with an intended capacity of 30 Mbps and a bandwidth of 5 MHz, what is the signal-to-noise ratio required in order to achieve this capacity?

Given
I = 30 Mbps
B = 5 MHz

Solution
According to the Shannon-Hartley theorem,

I = B log2(1 + S/N)

30 × 10⁶ = 5 × 10⁶ log2(1 + S/N)

(30 × 10⁶)/(5 × 10⁶) = log2(1 + S/N)

6 = log2(1 + S/N)

1 + S/N = 2⁶ = 64

S/N = 63, or 17.99 dB
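The same calculation can be written as a small Python check (the function name is our own).

    import math

    def snr_required(capacity_bps, bandwidth_hz):
        # Shannon-Hartley: solve C = B log2(1 + S/N) for S/N
        return 2 ** (capacity_bps / bandwidth_hz) - 1

    snr = snr_required(30e6, 5e6)
    print(snr, "=", 10 * math.log10(snr), "dB")   # 63.0, i.e. about 17.99 dB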

15. For an 8-PSK system operating with an information bit rate of 48 kbps, determine (a) the baud, (b) the minimum bandwidth and (c) the bandwidth efficiency.

Given:
fb = 48 kbps, N = 3

Solution
(a) Baud = fb/N = 48000/3 = 16000 baud

(b) Minimum bandwidth B = fb/N = 48000/3 = 16000 Hz

(c) Bandwidth efficiency = Transmission bit rate (bps) / Minimum bandwidth (Hz)
                        = 48000 bps / 16000 Hz = 3 bits/cycle

16. Derive the bandwidth for FSK modulated signal.


The minimum bandwidth for FSK is given by

B = |(fs + fb) - (fm - fb)|
  = |fs - fm| + 2fb                                ...(1)

The frequency deviation is

Δf = |fm - fs| / 2, so that |fs - fm| = 2Δf        ...(2)

Substituting (2) in (1),

B = 2Δf + 2fb

B = 2(Δf + fb)

where
B  - minimum Nyquist bandwidth (Hz)
fb - input bit rate (bps)
Δf - frequency deviation (Hz)
17. Draw the phasor diagram of QPSK.

The QPSK phasor diagram consists of four phasors of equal amplitude separated by 90°, one for each dibit (00, 01, 10, 11).

18. Define information capacity and bit rate.


Information capacity
It is a measure of how much information can be propagated
through a communications system and it is a function of
bandwidth and transmission time. Information capacity
represents the number of independent symbols that can be
carried through a system in a given unit of time.
Bit rate
It is the number of bits transmitted during one second. It is
expressed in bits per second (bps).

19. What is the relation between bit rate and baud for a FSK sys-
tem?

The bit time equals the time of an FSK signaling element, and
the bit rate equals the baud.
Baud = fb / N
Number of bits encoded into each signaling element N = 1
Baud = fb
Where, fb - Input bit rate (bps)

20. Draw ASK and PSK waveforms for a data stream 01101001.

21. What are the advantages of QPSK?


Advantages of QPSK:
(i) Low error probability,
(ii) Very good noise immunity,
(iii) For the same bit error rate, the bandwidth required by QPSK
is reduced to half as compared to BPSK.
(iv) Because of reduced bandwidth, the information transmission
rate of QPSK is higher.

22. Find the capacity of a channel having 50 kHz bandwidth and producing an SNR of 1023 at the output.

Given
S/N = 1023
B = 50 kHz

Solution
Rmax = B log2(1 + S/N)
     = 50000 × log2(1 + 1023)
     = 50000 × log2(1024)
     = 50000 × 10
     = 500,000 bits/sec
23. What is the purpose of limiter in FM receiver?
In an FM system, the message information is transmitted by
variations of the instantaneous frequency of a sinusoidal carrier
wave, and its amplitude is maintained constant.
Any variation of the carrier amplitude at the receiver input must
result from noise or interference.
An amplitude limiter, following the IF section is used to remove
amplitude variations by clipping the modulated wave at the IF
section.

24. Draw 8-QAM phasor diagram.


(a) 4-QAM: one amplitude, four phases (dibits 00, 01, 10 and 11).
(b) 8-QAM: two amplitudes, four phases (tribits 000 to 111).

25. Determine the peak frequency deviation and min-


imum bandwidth, for a BFSK signal with a mark frequency of 49
kHz, a space frequency of 51 kHz and an input bit rate of 3kbps.

Given fm = 49 KHz
fs = 51 KHz
fb = 3 Kbps
Solution
(a) Peak frequency deviation

Δf = |fm - fs| / 2
   = |49 × 10³ - 51 × 10³| / 2
   = 1 kHz

(b) Minimum bandwidth

B = 2(Δf + fb)
  = 2(1000 + 3000) = 8000 Hz
  = 8 kHz
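A short Python check of the two FSK relations used above (the helper name is our own).

    def bfsk_parameters(f_mark_hz, f_space_hz, bit_rate_bps):
        # peak frequency deviation and minimum bandwidth for binary FSK
        deviation = abs(f_mark_hz - f_space_hz) / 2
        bandwidth = 2 * (deviation + bit_rate_bps)
        return deviation, bandwidth

    print(bfsk_parameters(49e3, 51e3, 3e3))   # (1000.0, 8000.0): 1 kHz deviation, 8 kHz bandwidth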

26. What bandwidth is needed to transmit a 4 kHz voice signal using AM?

Bandwidth of AM = 2fm = 2 × 4 = 8 kHz



REVIEW QUESTIONS:
PART - A
1. What do you mean by FSK?
2. What is M-ary encoding?
3. State Shannon’s Limit for channel capacity theorem. Give an
example.
4. Draw the block diagram of BFSK transmitter.
5. Define Bandwidth efficiency.
6. Draw the constellation diagram of QPSK signal.
7. Draw 8-QAM phasor diagram.
8. Determine the peak frequency deviation and minimum bandwidth
for a binary FSK signal with a mark frequency of 49 KHz, a space
frequency of 51 KHz.
9. What is Shannon limit for information capacity?
10. What is binary phase shift keying?
11. Draw ASK and PSK waveforms forms for a data stream 110101.
12. What are the advantages of QPSK?
13. What is the relation between bit rate and baud for a FSK system?
14. Draw the ASK and FSK signals for the binary signal s(t) = 1011001.
15. A typical dial-up telephone connection has a bandwidth of 3 kHz and a signal-to-noise ratio of 30 dB. Calculate the Shannon limit.
16. What are the two types of carrier recovery circuit?
17. Write down the expression for peak frequency deviation of FSK.
18. What is the need for synchronization?
19. What is the bandwidth requirement of FSK?
20. Write down the bit error rate expression of a QPSK system.
21. Draw the block diagram of QPSK transmitter.
22. Differentiate between PSK from DPSK
23. What are the advantages of PSK over FSK?
24. Determine the bandwidth and baud for the FSK signal with a mark
frequency of 49 kHz and a space frequency of 51 kHz and a bit rate
of 2 kbps.
25. Write the differences between PSK and FSK.

PART – B
1. Draw the block diagram of a QPSK transmitter and explain. Derive the
bandwidth requirement of a QPSK system.
2. Draw the block diagram of a non-coherent receiver for detection of
binary FSK signals and derive the probability of symbol error for a

non-coherent FSK system.


3. (i) Determine the baud rate and minimum bandwidth necessary to
pass a 10 Kbps binary signal using amplitude shift keying.
(ii) Explain quadrature amplitude modulation with the help of
relevant diagrams.
4. (i) Derive an expression for baud rate in PSK and FSK systems.
(ii) Explain the generation and detection of QPSK signals.
5. With neat schematic diagram, explain the balanced ring modulator of
BPSK.
6. (i). Describe the two techniques of achieving carrier recovery circuit.
(ii). Explain in detail about 8 – QAM transmitter and receiver.
7. With relevant diagram explain the method of synchronous detection
of FSK signal. What should be the relationship between bit rate and
frequency shift for a better performance.
8. With neat diagram explain the working of a DPSK transmitter. What
are the advantages of DPSK over PSK.
9. Explain the generation and detection of PSK system with the help of
block diagrams
10. Describe the coherent detection procedure of M-ary PSK and obtain
the expression for the probability of symbol error.
11. (i) Discuss the principle of operation of FSK- Transmitter.
(ii). Write a note on QPSK.
12. (i). Discuss the principle of operation of FSK - receiver.
(ii). Write a note on DPSK.
13. What is carrier recovery? Discuss how carrier recovery is achieved by
the squaring loop and Costas loop circuits.
14. (i) Compare the different digital modulation schemes in terms of
bandwidth bit error rate and efficiency.
(ii) For the DPSK modulator, determine the output phase sequence for the following input bit sequence: 1100 1100 1110 10. Assume that the reference bit = 1.
15. (i). Determine the minimum bandwidth and baud for a BPSK
modulator with a carrier frequency of 80 MHz and an input bit rate
fb= 1 Mbps. Sketch the output spectrum.
(ii) Discuss the differences between PSK and differential PSK
Data Communication: History of Data Communication - Standards
Organizations for Data-Communication- Data Communication
Circuits - Data Communication Codes - Error Detection and
Correction Techniques - Data communication Hardware - serial and
parallel interfaces. Pulse Communication: Pulse Amplitude Modulation
(PAM) – Pulse Time Modulation (PTM) – Pulse code Modulation (PCM)
- Comparison of various Pulse Communication System (PAM – PTM –
PCM).

DATA COMMUNICATION Unit 3


3.1 INTRODUCTION

‰‰ Data is defined as information which is stored in digital form. A single unit of data is called a datum.

‰‰ Data communication is the process of transferring digital


information between two points.

3.1.1 Types of Data

‰‰ Data can be alphabetic, numeric or symbolic, and may consist of any of the following: microprocessor op-codes, control codes, user addresses, program data or database information.

‰‰ At the source or destination the data are in digital form, but during transmission they may be in analog or digital form.

‰‰ In its simplest form, a data communication network can consist of two computers connected to each other through a public telecommunication network, as shown in Figure 3.1.

‰‰ Data communication systems are used for interconnecting all types of digital computing equipment, the Internet, etc.

‰‰ In this unit we are going to discuss about data communication and


networking.

‰‰ The aim of data communication and networking is to facilitate the


exchange of data such as audio, text and video between any points
in world.

‰‰ The transfer of data takes place over a computer


network. A network is like a highway over which the data travels
smoothly.

Definition of data communication

•• The word data refers to information which is presented in a form that is agreed upon by the users and creators of the data.

•• Data communication is the exchange of data between two devices via some form of transmission medium, such as a coaxial cable.

Characteristics of data communication system

The characteristics of a data communication system are

1. Delivery 2. Accuracy 3. Timeliness


3.2 HISTORY OF DATA COMMUNICATIONS

‰‰ The data communications began in 1837 when the telegraph was


invented and when the Morse code was developed.

‰‰ The basic symbols for the Morse code were dots and dashes. Various
combinations of dots and dashes were used for representing various
letters, numbers, punctuation marks etc.

‰‰ The first telegraph was invented in England by Sir Charles Wheatstone and Sir William Cooke. In 1844 the first telegraph line between Baltimore and Washington D.C. was established.

‰‰ In 1849 the first slow-speed telegraph printer was invented. A high-speed version of it was developed in 1860.

‰‰ In 1874 Emile Baudot invented a telegraph multiplexer. It enabled simultaneous transmission of up to six different messages from telegraph machines on the same line.
‰‰ The telephone was invented in 1876 by Alexander Graham Bell. In 1899 Marconi was successful in sending radio telegraph messages.
‰‰ In 1920 the first commercial radio stations were installed.
‰‰ The first special purpose computer was developed by Bell
laboratory in 1940 using electromechanical relays.
‰‰ The first general purpose computer was developed jointly by Harvard
university and IBM.
‰‰ In 1951 the first mass-produced electronic computer was launched. Since then the number of mainframe computers, small business computers and personal computers has increased rapidly.
‰‰ So the need for data communication has increased exponentially.
3.3 COMPONENTS OF DATA COMMUNICATION SYSTEM
If we specifically consider the communication between two com-
puters then the data communication system is as shown in figure 3.2

It has the following five components

1. Message 4. Receiver and

2. Sender 5. Protocol

3. Medium

Description

1. Message

¾¾ Message is nothing but information or data which is to be sent from


one point to the other.

¾¾ A message can be in the form of sound, text, number, pictures, video


or combination of them.

2. Sender

Sender is a device which sends the message. Examples of


sender are: computer, workstation, video camera, telephone handset
etc.

3. Medium

‰‰ It is the physical path over which the message travels from the
sender to the receiver.

‰‰ The examples of transmission medium are co-axial cable, twisted


pair wire, fiber optic cable, radio waves (used in terrestrial or satellite
communication).

4. Receiver

It is the device which receives the message. Examples of receiver


are : computer, TV receiver, workstation, telephone handset etc.

5. Protocol

‰‰ Protocol is defined as the set of rules which govern data communication.

‰‰ The connection of two devices takes place via the communication


medium, but the actual communication between them will take
place with the help of a protocol.

3.4 STANDARD ORGANIZATIONS FOR DATA COMMUNICATIONS


Some of the standard organizations for data communication are
as follows

1. International Standard Organization(ISO)

‰‰ ISO is the international organization for standardization. It


creates sets of rules and standards for graphics, document exchange
etc.

‰‰ ISO endorses and coordinates the work of the other standard


organizations.

2. Consultative Committee for International Telephony And Teleg-


raphy (CCITT)

‰‰ The CCITT is now a standard organization for the United Nations.

‰‰ Many government authorities and representatives are members of


CCITT .

‰‰ CCITT develops the recommended sets of rules and standards for


telephone and telegraph communications.

‰‰ CCITT has developed three sets of specifications.

1. V series for MODEM interfacing.

2. X series for data communication

3. Q series for Integrated Services Digital Network (ISDN).

3. American National Standards Institute (ANSI)

ANSI is the official standard agency for United States.

4. Institute of Electrical and Electronics Engineers (IEEE)

IEEE is a U.S. based professional organization of electronics ,


computer and communications engineers.

5. Electronic Industries Association (EIA)

‰‰ EIA is a U.S. organization . It establishes and recommends industrial


standards.

‰‰ EIA has developed the RS (recommended standard) series of


standards for data and telecommunications.

6. Standards Council of Canada (SCC)

SCC is the official standards agency for Canada . It has similar


responsibilities to those of ANSI.
3.5. DATA COMMUNICATION CIRCUITS

‰‰ Figure 3.3 shows a simplified block diagram of a data communication


network.

‰‰ It consists of a source of digital information called primary


station, a transmission medium and a destination called secondary
station.

‰‰ The primary or host is very often a mainframe computer. It has its


own peripherals and local terminals.

‰‰ The secondary station or destination is connected to the primary


station via a transmission medium.

‰‰ The transmission media can be free space radio transmission which


includes terrestrial microwave or satellite communication.

‰‰ The transmission medium can be metallic cable, coaxial cable or optical fibre cable.

‰‰ DTE (data terminal equipment) is a general term which is used


to describe the interface equipment to take digital signals from
computers and terminals and to convert them into a form which is
more suitable for transmission.

‰‰ DCE (data communications equipment) is an equipment which


converts digital signals to analog signals and interfaces the data

equipment to the analog transmission medium.

‰‰ DCE is basically a modulator /demodulator i.e.. a MODEM.

‰‰ The data transmission can be carried out using one of the


following techniques.

1. Serial transmission

2. Parallel transmission.

‰‰ Parallel transmission allows the transfer of more than one bit simultaneously. Parallel transmission takes place only between the host and the DTE.

‰‰ Serial transmission transfers the data serially, bit-by-bit , one bit at


a time.

‰‰ All long distance transmissions are serial type.

3.6 DATA TRANSMISSION

Data Transmission means movement of data which is in the form


of bits between two or more digital devices. The data transmission takes
place over some physical medium from one computer to the other.

There are two ways of transmitting the bits. They are:

1. Parallel Transmission

2. Serial Transmission.

3.6.1 Transmission Mode

Various modes of data transmission are shown in figure 3.4

Figure 3.4 Modes of data transmission: data transmission is divided into parallel transmission and serial transmission; serial transmission is further divided into synchronous and asynchronous transmission.

As seen from figure 3.4. serial transmission and parallel


transmission are the two basic types of transmission.

The serial transmission is further classified into two types


namely synchronous and asynchronous transmission.

3.6.2 Parallel Transmission

‰‰ In parallel transmission of data , all the bits of a byte are transmitted


simultaneously on separate wires as shown in figure 3.5
Figure 3.5 Parallel transmission of data (the eight bits of a byte are carried simultaneously on eight separate wires from the transmitter to the receiver)

‰‰ This type of transmission requires multiple circuits for interconnecting the two devices.

‰‰ Parallel transmission is possible practically if the two devices are


close to each other.

‰‰ For example parallel transmission takes place between a computer


and its printer.

‰‰ Figure 3.5 shows the parallel transmission of an 8-bit digital data.

‰‰ This will require eight wires for connection between a transmitter


and a receiver.

‰‰ With increase in the number of receivers , the number of wires will


increase to an unmanageable number.

Advantages of parallel transmission

(i) The advantage of parallel transmission is that all the data bits will
be transmitted simultaneously. Therefore the time required for the

transmission of an N-bit word is only one clock cycle.


(ii) The serial transmission will require N number of cycles for the
transmission of same word.
(iii) Due to this the clock frequency can be kept low without affecting
the speed of operation . For serial transmission the clock frequency
cannot be low.
Disadvantage
To transmit an N-bit word, we need N number of wires. With
increase in the number of users, these wires will be too many to
handle. The serial transmission uses only one wire , for connecting the
transmitter to the receiver. Hence practically the serial transmission is
always preferred.
3.6.3 Serial Transmission
‰‰ In serial transmission, the bits of a byte are serially transmitted one
after the other as shown in figure 3.6
‰‰ The byte to be transmitted is first stored in a shift register. Then
these bits are shifted from MSB to LSB bit by bit in synchronization
with the clock . Bits are shifted right by one position per clock cycle.
‰‰ The bit which falls out of the shift register is transmitted. Hence LSB
is transmitted first.
‰‰ For serial transmission only one wire is needed between the
transmitter and the receiver. Hence serial transmission is preferred
for long distance data communication. This is the advantage of serial
transmission over parallel transmission.
Figure 3.6 Serial transmission (the byte to be transmitted is held in a shift register at the transmitter and shifted out bit by bit, LSB first, over a single wire)

‰‰ The serial transmission has a serious drawback. As only one bit


is transmitted per clock cycle, it requires a time corresponding to
8-clock cycles to transmit one byte . (The parallel transmission needs
only one clock cycle to transmit a byte).The time can be reduced by
increasing the clock frequency.
Advantages of serial transmission
1. Only one wire is required.
2. Reduction in cost due to less number of conductors.
Disadvantages
1. The speed of data transfer is low
2. To increase the speed of data transfer, it is necessary to increase the
clock frequency.
Application
It is used for computer to computer communication , specially
long distance communication.
Types of serial transmission
There are two types of serial transmission namely,
1. Asynchronous transmission.
2. Synchronous transmission.
3.6.4 comparison of Serial and Parallel Transmission
S.No Parameter Parallel Transmission Serial transmission
1 Number of wires required N wires 1 wire
to transmit N bits.
2 Number of bits transmit- N bits 1 bit
ted simultaneously
3 Speed of data transfer Fast Slow
4 Cost Higher due to more number Low, since only one
of conductors. wire is used.
5    Application                 Short distance communication,        Long distance computer to
                                 such as computer to printer          computer communication.
                                 communication.

3.7 CONFIGURATIONS
Data communication circuits can be generally categorized as two
point and multipoint circuits.

A network is two or more devices connected to each other through


connecting links.

There are possible ways to connect the devices .They are follows :

1. Point to point connection

2. Multipoint connection.

3.7.1 Point –to-Point connection

‰‰ A point-to-point connection provides a dedicated link between two


devices as shown in figure 3.7.

‰‰ Entire capacity of the link is reserved for transmission between these


two devices only.

‰‰ It is possible to connect the two devices by means of a pair of wires


(see Figure 3.7(a)) or using a microwave or satellite link as shown
in Figure 3.7(b).

(a) Wire link

Figure 3.7 Point to point connection



3.7.2 Multipoint Connection

A multipoint connection is also called as a multidrop connection.


In such a connection more than two devices share a single link
as shown in figure 3.8
In the multipoint connection the channel capacity is shared. If many devices share the link simultaneously, it is called a spatially shared connection.
But if the users share it in turn, then it is a time-shared connection.

3.8 TOPOLOGIES

‰‰ The topology (or) architecture of a data communication network


identifies the manner in which various locations within the network
are interconnected.
‰‰ Most commonly used topologies are
1. Point-to-point
2. The star
3. The bus or the multidrop
4. A ring or a loop
5. Mesh
These topologies are shown in figure 3.9

Figure 3.9 Various topologies: (a) point to point (two stations), (b) star (remote stations connected to a central host), (c) bus or multidrop (stations sharing a common communication medium), (d) ring or loop, (e) mesh.
3.9 TRANSMISSION MODES

There are four modes of transmission for data communication


circuits.

Transmission Modes

Simplex Half duplex Full duplex Full/full duplex

3.9.1 Simplex Mode

‰‰ In these systems the information is communicated in only one


direction. For example the radio or TV broadcasting systems can only
transmit. They cannot receive .

‰‰ In data communication system the simplex communication takes


place as shown in Figure 3.10

‰‰ The communication from CPU to monitor or keyboard to CPU is


unidirectional.

Fig 3.10 Simplex mode of Data transmission

‰‰ Keyboard and traditional monitors are examples of simplex devices.

3.9.2 Half Duplex Systems

‰‰ These systems are bi-directional, (i.e) they can transmit as well as


receive but not simultaneously.

‰‰ At a time these systems can either transmit or receive, for example a


trans receiver or walky talky set.

‰‰ A data communication system working in the half duplex mode is


shown in figure 3.11

‰‰ Each station can transmit and receive , but not at the same time.
When one device is ending the other one is receiving and vice versa.

Figure 3.11 Half duplex mode of Data Transmission

In half-duplex transmission, the entire capacity of the channel is taken over by whichever device is transmitting (sending).

3.9.3 Full Duplex Mode

‰‰ These are truly bi-directional systems as they allow the


communication to take place in both the directions simultaneously.

‰‰ These systems can transmit as well as receive simultaneously, for


example the telephone systems.

‰‰ A full duplex data communication system is as shown in Figure 3.12

‰‰ Each station can transmit and receive simultaneously.

Fig 3.12 Full duplex mode of Data Transmission

‰‰ In full duplex mode, signals going in either direction share the full

capacity of link.

‰‰ The link may contain two physically separate transmission paths one
for sending and another for receiving.

‰‰ Otherwise the capacity of channel is divided between signals


travelling in both directions.
3.9.4 Full/full Duplex Mode
‰‰ In this mode , the transmission is possible in both the directions at
the same time but not between the same two stations.
‰‰ That means say one station is transmitting to a second station and
receiving from a third station at the same time.
(Example-Conference cells, Video downloading using torrents)

3.10 DATA COMMUNICATION CODES

‰‰ We can define the data communication codes as the prescribed bit


sequences used for encoding characters and symbols.
‰‰ The codes are also called as the character sets, character codes,
symbol codes or character languages.
The three most common codes are
1. Baudot code
2. ASCII code and
3. EBCDIC code
Need of Code

‰‰ A binary digit or bit can represent only two symbols, as it has only two states, "0" and "1".

‰‰ But, this is not enough for communication between two


computers because there we need many more symbols for
communication. These symbols are required to represent.
1. 26 alphabets with capital and small letters
2. Numbers from 0 to 9

3. Punctuation marks and other symbols.


‰‰ Therefore instead of using only single binary bits, a group of bits is
used as a code to represent a symbol.
3.10.1 ASCII-(American Standard Code for Information Interchange)
‰‰ It was defined by the American National Standards Institute (ANSI). It is a 7-bit code with 2⁷ = 128 possible combinations, all of which have defined meanings.
‰‰ The ASCII code set consists of 94 printable characters. SPACE and
DEL characters, and 32 control symbols. These are all showed in
Tables 3.1 and 3.2.
Table 3.1 ASCII code set

        Bit 7:   0     0     0      0    1    1    1    1
        Bit 6:   0     0     1      1    0    0    1    1
        Bit 5:   0     1     0      1    0    1    0    1
Bits 4321        0     1     2      3    4    5    6    7
0000   0        NUL   DLE   SPACE   0    @    P    `    p
0001   1        SOH   DC1   !       1    A    Q    a    q
0010   2        STX   DC2   "       2    B    R    b    r
0011   3        ETX   DC3   #       3    C    S    c    s
0100   4        EOT   DC4   $       4    D    T    d    t
0101   5        ENQ   NAK   %       5    E    U    e    u
0110   6        ACK   SYN   &       6    F    V    f    v
0111   7        BEL   ETB   '       7    G    W    g    w
1000   8        BS    CAN   (       8    H    X    h    x
1001   9        HT    EM    )       9    I    Y    i    y
1010   A        LF    SUB   *       :    J    Z    j    z
1011   B        VT    ESC   +       ;    K    [    k    {
1100   C        FF    FS    ,       <    L    \    l    |
1101   D        CR    GS    -       =    M    ]    m    }
1110   E        SO    RS    .       >    N    ^    n    ~
1111   F        SI    US    /       ?    O    _    o    DEL

Looking at the table 3.1 we come to know that the co-ordinates of


character “K” are (4,B) therefore its code is 1001011.

Other symbols and their meanings

‰‰ The control symbols are codes reversed for special functions. The
control symbols are as listed in Table 3.2 CR(carriage Return) and
LF(line feed) are the symbols used for basic operations of printers or
displays.

‰‰ The symbols ACK (Acknowledgement) or NAK (Negative


Acknowledgement) are used for the error control.

‰‰ And the symbols STX (Start of Text) and ETX (End of Text) are used for grouping of data characters.

‰‰ The outer symbols and their meanings are as listed in Table 3.2.

Table 3.2 Control symbols used in ASCII code set


ACK Acknowledgement FF Form Feed
BEL Bell FS File Separator
BS Backspace GS Group Separator
CAN Cancel HT Horizontal Tabulation
CR Carriage Return LF Line Feed
DC1 Device Control 1 NAK Negative Acknowledgement
DC2 Device Control 2 NUL Null
DC3 Device Control 3 RS Record Separator
DC4 Device Control 4 SI Shift-In
DEL Delete SO Shift-Out
DLE Data Line Escape SOH Start of Heading
EM End of Medium STX Start of Text
ENQ Enquiry SUB Substitute Character
EOT End of Transmission SYN Synchronous Idle
ESC Escape US Unit separator
ETB End of Transmission VT Vertical Tabulation
ETX End of Text
Use of parity bit in ASCII code

‰‰ ASCII is a 7 bit code but the eighth bit is often used. This is called
as parity bit. The parity bit is used to detect the errors introduced
during transmission.

‰‰ The parity bit is generally added in the most significant bit (MSB)
position.

‰‰ The 8-bit ASCII code word format has been shown in figure 3.13

8 bits

MSB-parity bit P 7-bit ASCII Code word

Figure 3.13 An ASCII word with parity bit included in the MSB
position
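A minimal Python sketch of this 8-bit format (the function name and the choice of even parity are our own; odd parity is equally common) builds the code word for the character 'K' discussed above.

    def ascii_with_parity(ch, even=True):
        # 7-bit ASCII code word with a parity bit added in the MSB position
        code = ord(ch) & 0x7F                 # e.g. 'K' -> 1001011
        ones = bin(code).count("1") % 2       # number of 1s in the code word, modulo 2
        msb = ones if even else ones ^ 1      # even parity: total number of 1s becomes even
        return (msb << 7) | code

    print(format(ascii_with_parity('K'), '08b'))   # 11001011: parity bit 1 followed by 1001011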

3.10.2 EDCDIC-Extended Binary Coded Decimal Interchange Code

‰‰ This is an 8-bit code. However all the possible 256 combinations are
not used.

‰‰ There is no parity bit used to check error in the basic code set. The
EBCDIC code set is as shown in table 3.3.

Table 3.3 EBCDIC code


0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
Bit
2 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
Numbers

3 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
4567 0 1 2 3 4 5 6 7 8 9 A B C D E F
0000 0 NUL DLE SP & 0
0001 1 SOH SBA / a j A J 1
0010 2 STX EUA SYN b k s B K S 2
0011 3 ETX IC c / t C L T 3
0100 4 d m u D M U 4
0101 5 HT NL e n v E N V 5
0110 6 ETB f o w F O W 6
0111 7 ESC EOT g p x G P X 7
1000 8 h q y H Q Y 8
1001 9 EM i r z I R Z 9
1010 A ! !

1011 B $ #
1100 C DUP RA < * % @
1101 D SF ENQ NAK ( ) --
1110 E FM + ; > =
1111 F ITB SUB | ⌐ ? -

b0 b1 b2 b3 b4 b5 b6 b7

MSB LSB

For example , the EBCDIC code for representing “SP” is

b0 b1 b2 b3 b4 b5 b6 b7
SP = 1 0 1 0 0 0 0 0

3.10.3 Baudot Code

¾¾ This is a 5 bit code used for the teleprinters. With 5 bits 32


combinations are possible but in this code there are 58 symbols.

¾¾ Therefore the same code is used for two symbols using letter shift/
figure shift keys which change the meaning of a code.

¾¾ Table 3. 4 shows baudot codes. This code is normally transmitted


asynchronously.

Lower Upper Code word Lower Upper Code word


No No
case case 5 4 3 2 1 case case 5 4 3 2 1
1 A - 0 0 0 1 1 17 Q 1 1 0 1 1 1
2 B ? 1 1 0 0 1 18 R 4 0 1 0 1 0
3 C : 0 1 1 1 0 19 S . 0 0 1 0 1
4 D WRU 0 1 0 0 1 20 T 5 1 0 0 0 0
5 E 3 0 0 0 0 1 21 U 7 0 0 1 1 1
6 F 1 0 1 1 0 1 22 V - 1 1 1 1 0
7 G & 1 1 0 1 0 23 W 2 1 0 0 1 1
8 H # 1 0 1 0 0 24 X / 1 1 1 0 1
9 I 8 0 0 1 1 0 25 Y 6 1 0 1 0 1
Au-

10 J dible 0 1 0 1 1 26 Z + 1 0 0 0 1

signal

11   K   (   0 1 1 1 1      27   Carriage return   0 1 0 0 0
12 l ) 1 0 0 1 0 28 Line feed 0 0 0 1 0
13 M . 1 1 1 0 0 29 Letters 1 1 1 1 1
14 N . 0 1 1 0 0 30 Figures 1 1 0 1 1
15 O 9 1 1 0 0 0 31 Space 0 0 1 0 0
16 P 0 1 0 1 1 0 32 Not used 0 0 0 0 0

Table 3.4 Baudot Code

3.10.4 Extended ASCII

™™ In order to make the size of each code word 1 byte (8 bits) the ASCII
patterns are augmented by an extra 0 at the left (MSB position).

™™ The extra bit is called as parity bit. The first byte in the extended
ASCII is 00000000 and the last one is 0111 1111.

3.10.5 Unicode

”” It is a 16-bit code which can represent up to 2¹⁶ (65,536) symbols.

”” It allows the representation of English language in the coded form


directly.

”” Different sections of the code are allocated to symbols from


different languages. Some sections are reserved for graphical and
special symbols.

3.10.6 ISO

ISO stands for the International Organization for Standardization. It has designed a code of 32 bits.

This code is capable of representing 2³², i.e. 4,294,967,296 symbols.

These many symbols are enough to represent any symbol.

3.10.7 Bar Codes

‰‰ Bar codes are seen in almost every consumer item sold in the

modern stores across the world . The bar code is a series of black
bars separated by white spaces.

‰‰ The widths of the bars represent binary 1’s and 0’s and the bar
pattern represents the cost of that item.

The bar codes may contain some additional information such as

1. Inventory management and control

2. Shipping and receiving

3. Management and control

4. Security access

5. Document

6. Production counting

7. Automatic billing

Figure 3.14 (a) shows a typical bar code and Figure 3.14 (b) shows
the bar code structure.

Refer Figure 3.14 (b) various fields in the bar code structure are
as follows

1. Start Margin

This is the first field in the bar code structure. It consists of a unique sequence of bars and spaces which identifies the beginning of the data field.

2. Data Characters

The data character corresponds to the format or symbology


used by the bar code. The data in this field is serially encoded and is
extracted from the card with an optical scanner. The photodetector
in the scanner senses the reflected light and converts it into binary
signals.

3. Check Characters

Each character has a check character value associated with it.


The value of the characters within a message are added together and the
sum is divided by a constant. The stop character field indicates the end
of data and stop margin contain a unique pattern of black and white
bars to indicate that the bar code has ended. A number of bar code
formats are being used out of which the most common code is known as
code 39.
3.11 INTRODUCTION TO ERROR DETECTION AND CORRECTION
TECHNIQUES (ERROR CONTROL)
3.11.1 Need of Error Control

‰‰ When transmission of digital signals takes place between two systems such as computers, as shown in Figure 3.15, the signal gets contaminated due to the addition of "noise" to it.

‰‰ The noise can introduce an error in the binary bits travelling from one system to the other. That means a 0 may change to 1 or a 1 may change to 0.

‰‰ These errors can become a serious threat to the accuracy of the digital system. Therefore it is necessary to detect and correct the errors.

‰‰ The error detection and correction are collectively called error control.

Figure 3.15 Noise contaminating the binary signal

How to detect and correct errors ?

‰‰ For the detection and/or correction of these errors, one or more extra bits are added to the data bits at the time of transmission.

‰‰ These extra bits are called parity bits. They allow the detection, or sometimes the correction, of the errors.

‰‰ The data bits along with the parity bits form a code word.

Error control techniques

The error control techniques can be divided into two types

1. Error detection techniques

2. Error correction techniques

The error detecting techniques are capable of only detecting the


errors. They cannot correct the errors.

The error correcting techniques are capable of detecting as well as


correcting the errors.

3.11.2 Types of Errors

The errors introduced in the data bits during their transmission can be categorized as:

1. Content errors

2. Flow integrity errors

1. Content errors

The content errors are nothing but errors in the contents


of a message e.g. a “0” may be received as “1” or vice versa. Such
errors are introduced due to noise added into the data signal during its
transmission.

2. Flow integrity errors

Flow integrity errors means missing blocks of data. It is possible that a data block may be lost in the network or delivered to a wrong destination.

Depending on the number of bits in error in a message, we can classify the errors into two types as:

1. Single bit error    2. Burst errors

Single bit error

The term single bit error suggests that only one bit in the given
data unit such as byte is in error.

That means only one bit will change from 1 to 0 or 0 to 1, as


shown in Figure. 3.16

Transmitted byte: 0 1 1 1 0 0 1 1   --(medium)-->   Received byte: 0 1 1 1 0 0 1 0
                                                                   (error in the last bit)
Figure 3.16 Single bit error

Burst errors

If two or more bits in a data unit such as a byte change from 1 to 0 or from 0 to 1, then burst errors are said to have occurred.

The length of the burst is measured from the first corrupted bit to the last corrupted bit. Some of the bits in between may not have been corrupted.

Burst errors are illustrated in Figure 3.17.


Transmitted byte: 0 1 1 1 0 0 1 1   --(medium)-->   Received byte: 0 1 0 1 1 0 0 1
                                                    length of burst error = 5 bits
                                                    (from the first to the last corrupted bit)

Figure 3.17 Burst errors

3.12 ERROR DETECTION TECHNIQUE

‰‰ Error detection is the process of monitoring the received data and finding out when a transmission error has occurred.

‰‰ When a code word is transmitted , one or more number of


transmitted bits will be reversed (0 to 1 or vice-versa) due to
transmission impairments.

‰‰ Thus errors will be introduced.

‰‰ It is possible for the receiver to detect these errors if the received code
word (corrupted) is not one of the valid code words.

‰‰ When the errors are introduced , the distance between the


transmitted and received code words will be equal to the number of
errors as illustrated in figure 3.18.

Transmitted code word   1 0 1 0 1 1 0 0     1 1 1 0 1 0 1 1     0 0 1 0 0 1 0 1
Received code word      1 1 1 0 1 1 0 0     0 1 1 1 1 0 1 1     1 0 1 1 0 0 0 1
Number of errors               1                   2                   3
Distance                       1                   2                   3

Figure 3.18
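The distance referred to in Figure 3.18 is the Hamming distance, i.e. the number of bit positions in which two code words of equal length differ. As an illustration (not part of the original text), the following short Python sketch reproduces the three distances shown above:

def hamming_distance(word_a, word_b):
    # Number of bit positions in which two equal-length code words differ
    return sum(1 for a, b in zip(word_a, word_b) if a != b)

pairs = [("10101100", "11101100"),
         ("11101011", "01111011"),
         ("00100101", "10110001")]
for tx, rx in pairs:
    print(tx, rx, "distance =", hamming_distance(tx, rx))   # distances 1, 2, 3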

‰‰ Hence to detect the errors at the receiver , the valid code words should
be separated by a distance of more than 1.

‰‰ Otherwise the incorrect received code words will also become some
other valid code words and the error detection will be impossible.

‰‰ The number of errors that can be detected depends on the


distance between any two valid code words.

‰‰ The purpose of error detection is not to prevent errors from


occurring but to prevent the undetected errors from occurring.

The most common error detection techniques are as follows:

1. Redundancy
2. Echoplex
3. Exact-count encoding
4. Parity
5. Checksum
6. VRC and LRC
7. Cyclic Redundancy Check (CRC)

Figure 3.19 Error detection techniques

‰‰ Let us discuss these techniques one by one.

3.12.1 Redundancy Technique for Error Detection

Redundancy involves transmitting each character twice. If the two received copies of a character do not match, then a transmission error is said to have occurred.

3.12.2 Echoplex Technique for Error Detection

 This technique is used in those data communication systems where human operators enter data manually from a keyboard.

 Echoplex needs a full duplex operation. Each character is transmitted immediately after it has been typed into the transmit terminal.

 As soon as the character is received by the receiver, the receiver transmits it back to the transmitter, where it appears on the screen.

 When the character appears on the screen, the operator gets a verification that the character has been received by the destination terminal.

 If there is an error, then a wrong character will be displayed on the transmit screen. If this happens, the operator sends a backspace, removes the erroneous character and then retypes the correct character.

 The advantage of this technique is that it requires simple circuitry. But its disadvantage is that an error may creep in when a correctly received character becomes erroneous on its journey back to the transmitter.

 Another disadvantage is that it depends heavily on the human operator to detect and correct errors.

Finally, the Echoplex system needs a duplex channel even though useful information is flowing only in one direction.

3.12.3 Parity Technique for Error Detection

¾¾ The simplest technique for detecting errors is to add an extra bit, known as a parity bit, to each word being transmitted.

¾¾ As shown in Figure 3.20, generally the MSB of an 8-bit word is used as the parity bit and the remaining 7 bits are used as data or message bits.

MSB LSB
P d6 d5 d4 d3 d2 d1 d0
7-data bits
Parity bit

Figure 3.20 Format of a transmitted word with parity

¾¾ The parity of the 8-bit transmitted can be either even parity or


odd parity.

¾¾ Even parity means the number of 1’s in the given word including
the parity bit should be even (2,4,6…).

¾¾ Odd parity means the number of 1’s in the given word including
the parity bit should be odd(1,3,5..).

Use of parity bit to decide parity

‰‰ The parity bit can be set to 0 or 1 depending on the type of parity


required.

‰‰ For odd parity this bit is set to 1 or 0 at the transmitter such that
the number of “1 bits” in the entire word is odd. This is illustrated
in Figure 3.21(b).

‰‰ For even parity this bit is set 1 or 0 such that the number of “1 bits”
in the entire word is even. This is illustrated in Figure 3.21(a).
(a) Even parity                         (b) Odd parity
P   Data bits                           P   Data bits
0   1 0 0 1 0 1 1                       1   1 0 0 1 0 1 1
1   0 0 1 0 0 1 1                       0   1 0 0 0 1 1 0

(a) Inclusion of a parity bit to obtain even parity
(b) Inclusion of a parity bit to obtain odd parity

Figure 3.21

How does error detection take place?

1. The parity checking at the receiver can detect the presence of an


error if the parity of the received signal is different from the expected
parity. That means if it is known that the parity of the transmitted
signal is always going to be “even” and if the received signal has an
odd parity then the receiver can conclude that the received signal is
not correct. This is as shown in figure 3.22

2. When a single error or an odd number of errors occur during


transmission the parity of the code word changes. Parity of the
received code word is checked at the receiver and change in parity
indicates that error is present in the received word. This is as shown
in figure 3.22

3. If the presence of an error is detected, then the receiver will ignore the received byte and request the transmitter to retransmit the same byte.

                                  P   Message bits        Parity   Receiver's decision
Transmitted code                  0   1 0 0 1 0 1 1 0     Even     Correct word
Received code with one error      0   0 0 0 1 0 1 1 0     Odd      Incorrect word
Received code with three errors   0   0 1 0 0 0 1 1 0     Odd      Incorrect word

Figure 3.22 The receiver detects the presence of error if the


number of errors is odd i.e. 1,3,5 ….
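The generation and checking of the parity bit can be summarised in a few lines of Python. The sketch below is purely illustrative (the function names are not from the text); it uses even parity and also shows why an even number of errors goes undetected:

def even_parity_bit(data_bits):
    # Parity bit chosen so that the total number of 1s (data + parity) is even
    return sum(data_bits) % 2

def parity_ok(received_word):
    # received_word = [parity bit] + data bits; accepted if the total number of 1s is even
    return sum(received_word) % 2 == 0

data = [1, 0, 0, 1, 0, 1, 1, 0]
word = [even_parity_bit(data)] + data
print(parity_ok(word))      # True  -> accepted (no error)
word[3] ^= 1                # single-bit error
print(parity_ok(word))      # False -> error detected
word[5] ^= 1                # a second error: the parity looks correct again
print(parity_ok(word))      # True  -> undetected (even number of errors)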

When does parity technique fail to detect the errors ?

If the number of errors introduced in the transmitted code is two


or any even number , then the parity of the received code word will not
change. It will still remain even as shown in figure 3.23 and the receiver
will fail to detect the presence of errors.

                                  P   Message bits        Parity   Receiver's decision
Transmitted code                  0   1 0 0 1 0 1 1 0
Received code with two errors     0   1 1 1 1 0 1 1 0     Even     No error (errors not detected)

Figure 3.23 The receiver cannot detect the presence of error if the
number of errors is even i.e. 2,4,6 ….

Limitations of parity checking

1. Thus the simple parity checking method has its limitations . It is


not suitable for detection of multiple errors (two , four, six etc ) i.e. The
parity checking method is not useful in detecting the burst error.

2. The other limitation of parity checking method is that it cannot


reveal the location of erroneous bit. It cannot correct the error either.

Table 3.5 Example for Even Parity

P 7 6 5 4 3 2 1
H 0 1 0 0 1 0 0 0
O 1 1 0 0 1 1 1 1
L 1 1 0 0 1 1 0 0
E 1 1 0 0 0 1 0 1

Note that the parity bits are selected in order to obtain an even
parity for each row (i.e. for each letter)

3.12.4 Checksum Technique for Error Detection

¾¾ As discussed in the previous section, simple parity cannot detect two


or even number of errors within the same word (or) burst errors.

¾¾ One way to overcome this problem is to use a sort of two dimensional


parity.

¾¾ As each word is transmitted, it is added to the previously sent word


and the sum is retained at the transmitter as shown in Figure 3.24

Word A 1 0 1 1 0 1 1 1
+
Word B 0 0 1 0 0 0 1 0

Sum 1 1 0 1 1 0 0 1

Figure 3.24 Concept of checksum

Each successive word is added in this manner to the previous sum. At the end of the transmission, the sum accumulated up to that time (called a checksum) is sent.

Errors normally occur in bursts. The parity check method is not useful in detecting the errors under such conditions, but the checksum error detection method can be used successfully in detecting such errors.

In this method a “checksum” is transmitted along with every block


of data bytes . In this method an eight bit accumulator is used to add 8
bit bytes of a block of data to find the “checksum byte” .The carries of the
MSB are ignored while finding out the checksum byte. The generation
of checksum will be clear if you refer to the example 3.1.

Example 3.1 Find the checksum of the following message.

10110001, 10101011, 00110101, 10100001

Solution

          1 0 1 1 0 0 0 1
        + 1 0 1 0 1 0 1 1
        + 0 0 1 1 0 1 0 1
        + 1 0 1 0 0 0 0 1
        -----------------
          0 0 1 1 0 0 1 0     ← Checksum byte

Note that the carries out of the MSB have been ignored while writing the checksum byte.
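A small Python sketch (an illustration following the convention of Example 3.1, not code from the text) that forms the 8-bit checksum by adding the data bytes and discarding the carries out of the MSB:

def checksum8(data_bytes):
    # 8-bit accumulator: add all bytes and keep only the low 8 bits (carries ignored)
    return sum(data_bytes) & 0xFF

block = [0b10110001, 0b10101011, 0b00110101, 0b10100001]
print(format(checksum8(block), '08b'))       # 00110010, as in Example 3.1

# Receiver-side check: recompute the checksum and compare it with the transmitted byte
received = block[:]                          # pretend the same bytes arrived intact
print(checksum8(received) == 0b00110010)     # True -> no error detected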

How to detect error using the checksum byte?



1. After transmitting a block of data bytes (say 8-data bytes) the


“checksum” bytes is also transmitted. The checksum byte is
regenerated at the receiver separately by adding the received bytes.

2. The regenerated checksum byte is then compared with the


transmitted one. If both are identical then there is no error. If they
are different then the errors are present in the block of received data
bytes.

3. Sometimes the 2’s complement of the checksum is transmitted


instead of the checksum itself. The receiver will accumulate all the
bytes including the 2’s complement of the checksum. If there is no
error, the contents of the accumulator should be zero after
accumulation of the 2’s complement of the checksum byte.

Advantage of the checksum method

The advantage of this method over the simple parity checking


method is that the data bits are “ mixed up “ due to the 8 bit addition.
Therefore checksum represents the overall data block. In checksum
therefore, there is 255 to 1 chance of detecting random errors.

3.12.5 VRC and LRC Technique for Error Detection

When a large number of binary words are being transmitted or


received in succession, the resulting collection of bits is considered as
a block of data, with rows and columns as shown in figure 3.25. The
parity bits are produced for each row and column of such block of data.
Characters                     C   O   M   P   U   T   E   R    LRC bits (even parity)
                 b1            1   1   1   0   1   0   1   0         1
                 b2            1   1   0   0   0   0   0   1         1
7-bit ASCII      b3            0   1   1   0   1   1   1   0         1
codes            b4            0   1   1   0   0   0   0   0         0
(message bits)   b5            0   0   0   1   1   1   0   1         0
                 b6            0   0   0   0   0   0   0   0         0
                 b7            1   1   1   1   1   1   1   1         0
VRC bits (even parity)         1   1   0   0   0   1   1   1         1

The VRC bits make the parity of each column even; the LRC bits make the parity of each row even.

Figure 3.25 Vertical and longitudinal parity check bits



The two sets of parity bits so generated are known as :

1. Longitudinal Redundancy Check (LRC) bits

2. Vertical Redundancy Check (VRC) bits

The LRC bits indicate the parity of rows and VRC bits indicate the
parity of columns as shown in figure 3.25

The vertical Redundancy Check (VRC) bits

As shown in Figure 3.26 the VRC bits are parity bits


associated with the ASCII code for each character. Each VRC bit will make
the parity of its corresponding column “an even parity” .For example
consider column 1 corresponding to character “C” .The ASCII code for
the character C is,

Character      C
b1             1
b2             1      ← Column 1 of the data block
b3             0
b4             0
b5             0
b6             0
b7             1
VRC bit   →    1      ← VRC bit = 1 to make the parity of the first column even

Figure 3.26

Therefore the eighth bit b8, which is the VRC bit, is made "1" to make the parity even.

The longitudinal Redundancy Check (LRC) bits

The LRC bits are parity bits associated with the data block of
figure 3.25 Each LRC bit will make the parity of the corresponding row,
an even parity. For example, consider row 1 of Figure 3.27.

Row 1:  b1   1 1 1 0 1 0 1 0   1  ← LRC bit to make the parity of the row even

Figure 3.27

How to locate the bit in error ?

Even a single error in any bit will result in an incorrect LRC in one of the rows and an incorrect VRC in one of the columns. The bit which is common to that row and that column is the bit in error.

However, there is still a limitation on the block parity code: multiple errors in rows and columns can only be detected, they cannot be corrected. This is because it is not possible to locate the bits which are in error. This will become clear when you solve Example 3.2.

Example 3.2 The following bit stream is encoded using VRC,LRC and
even parity. Locate and correct the error if it is present.

11000011 11110011 10110010 00001010

00101010 00101011 10100011 01001011

11100001

Solution

1. Figure 3.28 shows the received data block alongwith the LRC and
VRC bits.

2. Note that the parity bits corresponding to row 1 and column 5 indicate wrong parity. Therefore the bit at the intersection of the first row and the fifth column (the encircled bit, i.e. the first bit of the fifth byte) is incorrect. Thus, using VRC and LRC, it is possible to locate and correct a single bit in error.

                 Byte  Byte  Byte  Byte  Byte  Byte  Byte  Byte   LRC bits
                  1     2     3     4     5     6     7     8     (even parity)
          b1      1     1     1     0    (0)    0     1     0         1       ← wrong parity
          b2      1     1     0     0     0     0     0     1         1
Data      b3      0     1     1     0     1     1     1     0         1
block     b4      0     1     1     0     0     0     0     0         0
          b5      0     0     0     1     1     1     0     1         0
          b6      0     0     0     0     0     0     0     0         0
          b7      1     1     1     1     1     1     1     1         0
VRC bits
(even parity)     1     1     0     0     0     1     1     1         1
                                           ↑
                                     wrong parity

The first bit of the fifth byte (shown in brackets) is in error.

Figure 3.28 Received data block along with VRC and LRC-bits
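A minimal Python sketch (illustrative only, not from the text) that rebuilds the row (LRC) and column (VRC) parities of a received block and locates a single bit error at the intersection of the failing row and column. It assumes the layout of Example 3.2: each received byte is one column of seven data bits plus its VRC bit, and the last byte carries the LRC bits:

def locate_single_error(received_bytes):
    # received_bytes: list of 8-bit strings; the last one is the LRC byte.
    # Returns (row, column) of the erroneous bit (zero-indexed), or None if all parities are even.
    columns = [[int(b) for b in byte] for byte in received_bytes]
    bad_col = [c for c, col in enumerate(columns) if sum(col) % 2 != 0]
    bad_row = [r for r in range(8) if sum(col[r] for col in columns) % 2 != 0]
    if bad_row and bad_col:
        return bad_row[0], bad_col[0]          # intersection of failing row and column
    return None

block = ["11000011", "11110011", "10110010", "00001010",
         "00101010", "00101011", "10100011", "01001011", "11100001"]
print(locate_single_error(block))              # (0, 4): first bit (b1) of the fifth byte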

3.12.6 Cyclic Redundancy Check (CRC)

‰‰ This technique is more powerful than the parity check and checksum error detection techniques.

‰‰ CRC is based on binary division. A sequence of redundant bits called


CRC or CRC remainder is appended at the end of a data unit such
as byte.

‰‰ The resulting data unit after adding CRC remainder becomes


exactly divisible by another predetermined binary number.

‰‰ At the receiver, this data unit is divided by the same binary


number.

‰‰ There is no error if this division does not yield any remainder. But a
non-zero remainder indicates presence of errors in the received data
unit.

‰‰ Such an erroneous data unit is then rejected.



3.12.6.1 Procedure to Obtain CRC

The redundancy bits used by CRC are derived by following the


procedure given below:

1. Divide the data unit by a predetermined divisor.

2. Obtain the remainder. It is the CRC.

3.12.6.2 Requirements of CRC

A CRC will be valid if and only if it satisfies the following requirements:

1. It should have exactly one less bit than divisor.

2. Appending the CRC to the end of the data unit should


result in the bit sequence which is exactly divisible by the divisor.

3.12.6.3 CRC Generator

The CRC generator is shown in Figure 3.29


                     n bits appended
Data | 0 0 ............ 0    ÷   Divisor (n+1 bits)   →   Remainder = CRC (n bits)

Code word:   Data | CRC

Figure 3.29 CRC generator

The stepwise procedure in CRC generation is as follows

Step 1:

Append a string of n 0s to the data unit, where n is one less than the number of bits in the predetermined divisor (which is n+1 bits long).

Step 2:

Divide the newly generated data unit in step 1 by the divisor .This
is a binary division.

Step 3:

The remainder obtained after the division in step 2 is the n bit


CRC.

Step 4:

This CRC will replace the n 0s appended to the data unit in step
1, to get the code word to be transmitted as shown in figure 3.29

3.12.6.4 CRC Checker

Figure 3.30 shows the CRC checker.


Received code word = Data | CRC   ÷   Divisor (n+1 bits)   →   Remainder

(If the remainder is 0 then there are no errors.)

Figure 3.30 CRC checker

¾¾ The code word received at the receiver consists of data and CRC.

¾¾ The receiver treats it as one unit and divides it by the same (n+1) bit
divisor which was used at the transmitter.

¾¾ The remainder of this division is then checked.

¾¾ If the remainder is zero, then the received code word is error free and
hence should be accepted.

¾¾ But a non-zero remainder indicates the presence of errors, hence the corresponding code word should be rejected.
Example 3.3 The code word is received as 1100 1001 01011 .Check
whether there are errors in the received code word , if the divisor is
10101 .(The divisor corresponds to the generator polynomial).
Solution
‰‰ As we know the code word is formed by adding the dividend and the
remainder
‰‰ This code word will have an important property that it will be
completely divisible by the divisor.
‰‰ Thus at the receiver we have to divide the received code word by, the
same divisor and check for the remainder.
‰‰ If there is no remainder then there are no errors. But if there is re-
mainder after division, then there are errors in the received code
word.
‰‰ Let us use this technique and find if there are errors.
Data word : 1100 1001 01011
Divisor : 10101
Mod-2 (XOR) division of the received code word by the divisor, bringing down one bit at a time:

1 1 0 0 1 ⊕ 1 0 1 0 1 = 0 1 1 0 0    → bring down 0 → 1 1 0 0 0
1 1 0 0 0 ⊕ 1 0 1 0 1 = 0 1 1 0 1    → bring down 0 → 1 1 0 1 0
1 1 0 1 0 ⊕ 1 0 1 0 1 = 0 1 1 1 1    → bring down 1 → 1 1 1 1 1
1 1 1 1 1 ⊕ 1 0 1 0 1 = 0 1 0 1 0    → bring down 0 → 1 0 1 0 0
1 0 1 0 0 ⊕ 1 0 1 0 1 = 0 0 0 0 1    → bring down 1, 0, 1, 1 → 1 1 0 1 1
1 1 0 1 1 ⊕ 1 0 1 0 1 = 0 1 1 1 0

Remainder = 1 1 1 0 (non-zero)

The non-zero remainder shows that there are errors in the


received code word.

Generation of CRC code

The generation of CRC code is clear after solving the example 3.4

Example 3.4 Generate the CRC code for the data word of 110010101.
The divisor is 10101.

Solution

Given: Data word : 110010101

Divisor : 10101

The number of data bits m = 9.

The dividend is formed by appending zeros to the data word; here five zeros (equal to the number of bits in the divisor) are appended, so the code word will be N = 9 + 5 = 14 bits long.

Dividend:   1 1 0 0 1 0 1 0 1   0 0 0 0 0
            (data word)         (5 additional zeros)

Carry out the mod-2 (XOR) division, bringing down one bit at a time:

1 1 0 0 1 ⊕ 1 0 1 0 1 = 0 1 1 0 0    → bring down 0 → 1 1 0 0 0
1 1 0 0 0 ⊕ 1 0 1 0 1 = 0 1 1 0 1    → bring down 1 → 1 1 0 1 1
1 1 0 1 1 ⊕ 1 0 1 0 1 = 0 1 1 1 0    → bring down 0 → 1 1 1 0 0
1 1 1 0 0 ⊕ 1 0 1 0 1 = 0 1 0 0 1    → bring down 1 → 1 0 0 1 1
1 0 0 1 1 ⊕ 1 0 1 0 1 = 0 0 1 1 0    → bring down 0, 0 → 1 1 0 0 0
1 1 0 0 0 ⊕ 1 0 1 0 1 = 0 1 1 0 1    → bring down 0 → 1 1 0 1 0
1 1 0 1 0 ⊕ 1 0 1 0 1 = 0 1 1 1 1    → bring down 0 → 1 1 1 1 0
1 1 1 1 0 ⊕ 1 0 1 0 1 = 0 1 0 1 1    → bring down 0 → 1 0 1 1 0
1 0 1 1 0 ⊕ 1 0 1 0 1 = 0 0 0 1 1

Remainder = 0 0 0 1 1 (i.e. 11)
Code word

In CRC the required code word is obtained by writing the data word followed by the remainder, i.e. the remainder replaces the appended zeros:

∴ Code word = 1 1 0 0 1 0 1 0 1 0 0 0 1 1
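A short Python sketch (an illustration of the mod-2 division used in these examples, not code from the text) that computes the CRC remainder and forms the transmitted code word. The number of appended zeros is passed in explicitly so that it can match the five-zero convention of Example 3.4:

def mod2_remainder(bits, divisor_bits):
    # Mod-2 (XOR) long division; after the loop only the last len(divisor)-1
    # positions can be non-zero, and they hold the remainder.
    dividend = [int(b) for b in bits]
    divisor = [int(b) for b in divisor_bits]
    for i in range(len(dividend) - len(divisor) + 1):
        if dividend[i] == 1:                       # subtract (XOR) the divisor where the leading bit is 1
            for j, d in enumerate(divisor):
                dividend[i + j] ^= d
    return dividend

data, divisor = "110010101", "10101"
appended = data + "0" * 5                          # Example 3.4 appends five zeros
rem = ''.join(map(str, mod2_remainder(appended, divisor)[-5:]))
print(rem)                                         # 00011
codeword = data + rem
print(codeword)                                    # 11001010100011 -> transmitted code word

# Receiver check: dividing the received code word leaves an all-zero remainder if it is error-free
print(any(mod2_remainder(codeword, divisor)))      # False -> no errors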

Undetected errors in CRC

‰‰ CRC cannot detect all types of errors.

‰‰ The probability of error detection and the types of detectable errors


depends on the choice of divisor.

Example 3.5 Calculate the CRC for the frame 110101011 with the generator polynomial x^4 + x + 1, and write the transmitted frame.

Solution

The generator polynomial actually acts as the divisor in the


process of CRC generation.

∴ Data word : 1 1 0 1 0 1 0 1 1

Divisor : x^4 + 0·x^3 + 0·x^2 + x + 1 → 1 0 0 1 1

The number of data bits m = 9. Five zeros (equal to the number of bits in the divisor) are appended to the data word to form the dividend, so the code word will be N = 14 bits long.

Dividend:   1 1 0 1 0 1 0 1 1   0 0 0 0 0
            (data word)         (5 additional zeros)

Carry out the mod-2 (XOR) division, bringing down one bit at a time:

1 1 0 1 0 ⊕ 1 0 0 1 1 = 0 1 0 0 1    → bring down 1 → 1 0 0 1 1
1 0 0 1 1 ⊕ 1 0 0 1 1 = 0 0 0 0 0    → bring down 0, 1, 1, 0, 0, 0 → 1 1 0 0 0
1 1 0 0 0 ⊕ 1 0 0 1 1 = 0 1 0 1 1    → bring down 0 → 1 0 1 1 0
1 0 1 1 0 ⊕ 1 0 0 1 1 = 0 0 1 0 1    → bring down 0 → 0 1 0 1 0

Remainder = 0 1 0 1 0

Code word

The code word is given by:


110101011 0 1 0 1 0
Data word Remainder

3.13 ERROR CORRECTION TECHNIQUES

There are three methods of error correction, as follows:

1. Symbol substitution
2. Retransmission (ARQ)
3. Forward error correction (FEC)

3.13.1 Symbol substitution

‰‰ This technique was designed to be used in human environment.

‰‰ If a character received is in error then a unique character such as a


reverse question mark (?) is substituted for the erroneous character.

‰‰ For example, if the message "Come" has an error in the first character, then it would be displayed as "?ome".

‰‰ The human operator can discern the correct


message by inspection and retransmission is not necessary.

‰‰ But if the message is Rs. ?000,000, then the operator cannot determine the correct character, so retransmission is essential.

3.13.2 Forward Error Correction (FEC)

Error correction techniques

‰‰ In the error correction technique, code words are generated at the transmitter by adding a group of parity bits or check bits, as shown in Figure 3.31. The source generates the data in the form of binary symbols. The

encoder accepts these bits and adds the check bits to them to
produce the code words.

‰‰ These code words are transmitted towards the receiver. The check bits are then used by the decoder to detect and correct the errors.


Data source --(data bits)--> Encoder --(code words = data bits + check bits)--> Decoder --(data bits)--> Destination

Figure 3.31 Error correction technique

‰‰ The encoder of figure 3.31 adds the check bits to the data bits,
according to the prescribed rule. This rule will be dependent on the
type of code being used.

‰‰ The decoder separates out the data and check bits. It uses the
parity bits to detect and correct errors if they are present in the
received code words.

‰‰ The data bits are then applied to the destination.

‰‰ In FEC the receiver searches for the most likely correct word.

‰‰ When an error is detected, the distance between the received invalid code word and all the possible valid code words is measured.

The nearest valid code word (the one having the minimum distance) is the most likely correct version of the received code word, as shown in Figure 3.32.

Received code word:  1 1 0 0 1 1 0 0

Valid code word 1:   1 1 0 1 1 1 0 0    distance = 1
Valid code word 2:   1 1 1 0 1 1 0 1    distance = 2
Valid code word 3:   1 1 1 1 0 1 0 0    distance = 3

Figure 3.32 Concept of forward error correction



In Figure 3.32 the valid code word 1 has the minimum distance
(1), hence it is the most likely correct code word.
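A minimal Python sketch (illustrative, not from the text) of this minimum-distance decision: the receiver compares the received word with every valid code word and picks the nearest one, reproducing the choice in Figure 3.32:

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def fec_decode(received, valid_codewords):
    # Choose the valid code word at minimum Hamming distance from the received word
    return min(valid_codewords, key=lambda cw: hamming(received, cw))

valid = ["11011100", "11101101", "11110100"]
print(fec_decode("11001100", valid))      # 11011100 -> distance 1, the most likely correct word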

3.13.3 Automatic Repeat Request (ARQ) Technique

™™ There are two basic systems of error detection and correction. The
first on being the forward error correction (FEC) system which has
been discussed in previous section. The second one is the automatic
repeat request (ARQ) system.

In the ARQ system of error control , when an error is detected,


a request is made for the retransmission of that signal. Therefore a
feedback channel is required for sending the request for retransmission.

The ARQ systems differ from FEC systems in three important


respects. They are as follows:

1. In an ARQ system, fewer check bits (parity bits) need to be sent. This will increase the (k/n) ratio for an (n, k) block code transmitted using the ARQ system.

2. A return transmission path and additional hardware in order to


implement repeat transmission of codewords will be needed.

3. The bit rate of forward transmission must make allowance for the
backward repeat transmission.

Basic ARQ system

The block diagram of the basic ARQ system is as shown in


Figure 3.33
Message input → Encoder → Input buffer and controller → Forward transmission channel → Detector → Output buffer and controller

The ACK/NAK signals travel back from the detector to the input controller over the return transmission channel (feedback path).

Figure 3.33 Block diagram of the basic ARQ system



Operation of ARQ system

¾¾ The encoder produces code words for each message signal at its
input. Each code word at the encoder output is stored temporarily
and transmitted over the forward transmission channel.

¾¾ At the destination a decoder will decode the code words and look for errors.

¾¾ The decoder will output a “positive acknowledgement “ (ACK) if no


errors are detected and it will output a negative acknowledgement
(NAK) if errors are detected.

¾¾ On receiving a negative acknowledgement (NAK) signal via the


return transmission path the “controller” will retransmit the
appropriate word from the words stored by the input buffer.

¾¾ A particular word may be retransmitted only once or it may be


retransmitted twice or more number of times.

¾¾ The output controller and buffer on the receiver side assemble the
output bit stream from the code words accepted by the decoder.

Error probability on the return path

The bit rate of the return transmission which involves the return
transmission of ACK/NAK signal is low as compared to the bit rate of
the forward transmission . Therefore the error probability of the return
transmission is negligibly small.

3.13.3.1 Types of ARQ system

The three types of ARQ systems are

1 .Stop-and-wait ARQ system

2. Go back N ARQ and

3. Selective repeat ARQ



3.13.3.2 Stop and wait ARQ system

™™ The block diagram for the stop-and-wait ARQ system is as shown in Figure 3.34. This method is the simplest ARQ system.

™™ In Figure 3.34. X1.X2…etc are code words. The transmitter sends


the first code word X1 during time TW. This code word reaches the
receiver after a delay time td which is proportional to the distance
between the transmitter and receiver.

Transmitter:  X1 ----------► X2 ----------► X2 (code word X2 is retransmitted)
                  ◄-- ACK         ◄-- NAK
Receiver:        X1 (no error)   X2 (error detected)

Figure 3.34 Stop and wait ARQ system

™™ At the receiver the detector searches for error. As error is not found
it sends the positive acknowledgement signal ACK back to the
transmitter.

™™ After receiving this ACK signal, the transmitter will send the next code word X2. This time the receiver detects an error and sends a negative acknowledgement signal NAK to the transmitter.

™™ On reception of this NAK signal, the transmitter will retransmit the


code word X2.

The major disadvantage of this system is that the transmitter has


to wait for the ACK and NAK signals .This wastes a lot of transmitter
time.
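The stop-and-wait behaviour can be summarised in a few lines of Python (a toy simulation written only for illustration; the channel model and the names used are assumptions, not part of the text): the transmitter sends one code word, waits for ACK or NAK, and repeats the same code word whenever a NAK comes back.

def stop_and_wait(codewords, channel):
    # channel(word) returns True if the word arrives error-free (receiver sends ACK),
    # and False if the receiver detects an error (receiver sends NAK).
    log = []
    for word in codewords:
        while True:
            log.append(word)             # (re)transmit the current code word
            if channel(word):            # wait for ACK/NAK before sending the next word
                break
    return log

# Example: the first transmission of X2 is corrupted, so X2 is sent twice
outcomes = iter([True, False, True])
print(stop_and_wait(["X1", "X2"], lambda w: next(outcomes)))   # ['X1', 'X2', 'X2']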

3.13.3.3 Go back N ARQ


Transmitter:  X1 X2 X3 X4 X5 X6 X7 | X3 X4 X5 X6 X7 X8 ...   (retransmission of code words begins at X3)
Receiver:     X1 X2 X3 X4 X5 X6 X7 | X3 X4 X5 X6 X7 X8 ...
                    ↑ error detected in X3, NAK sent; the code words received after X3 are discarded

Figure 3.35 Go back N ARQ system

The block diagram for the go back n ARQ system is as shown in


figure 3.35.

¾¾ The major difference between this and the previous system is that
the transmitter does not wait for ACK signal for the transmission of
next code word.

¾¾ It transmits the code words continuously as long as it does not


receive the “NAK” signal.

¾¾ When the receiver detects an error in the third code word X3 as shown
in Figure 3.35. The receiver sends the NAK signal.

¾¾ But this signal takes some time to reach the transmitter by that time
the transmitter has transmitted code words upto X7.

¾¾ On reception of the NAK signal the transmitter will retransmit all the
code words from X3 onwards. The receiver discards all the code words
it has received after X3 i.e. X3 to X7. It will then receive all the code
words that are retransmitted by the transmitter.

3.13.3.4 Selective –repeat ARQ system

±± The block diagram of selective repeat ARQ system is as shown in


Figure 3.36.



Transmitter:  X1 X2 X3 X4 X5 X6 X7 | X3 | X8 X9 X10 ...   (only X3 is retransmitted)
Receiver:     X1 X2 X3 X4 X5 X6 X7 | X3 | X8 X9 X10 ...
                    ↑ error detected in X3, NAK sent

Figure 3.36 Selective repeat ARQ system

‰‰ In this system as well, the transmitter does not wait for the ACK signal before transmitting the next code word. It transmits the code words continuously till it receives the "NAK" signal from the receiver.

‰‰ The receiver sends the “NAK” signal back to the transmitter as soon
as it detects an error in the received code word. For example the
receiver detects an error in the third code word X3.

‰‰ By the time this “NAK” signal reaches the transmitter, it had


transmitted the code words upto X7 as shown in Figure 3.36

‰‰ On reception of “NAK” signal, the transmitter will retransmit only


the code word X3 and then continues with the sequence X8,X9…. As
shown in figure 3.36.

The code words X4,X5,X6 and X7 received by the receiver are not
discarded by the receiver . The receiver receives the retransmitted code
word in between the regular code words. Therefore the receiver will have
to maintain the code words sequentially.

Hence the selective repeat ARQ system is the most efficient


but the most complex system, of all the ARQ systems.
3.14 Data Communication Hardware

‰‰ Figure 3.37 shows the block schematic of a multipoint data


communication circuit using the bus topology. This arrangement is

used by most of the data communication circuits.

‰‰ The hardware and associated circuitry which connects the


host computer to the remote computer terminal is called a data
communication link.

‰‰ The station containing mainframe is called as primary or host and


the other stations are called secondaries or remotes.

‰‰ This type of arrangement is called as centralized network.

‰‰ The primary station consists of the mainframe computer , a line


control unit (LCU) and a modem.

‰‰ Each secondary station, consists of an LCU, modem and terminal


equipment such as computer terminals and printers.
[Figure: RS-232 serial interface arrangement. The primary station consists of the mainframe computer connected through a parallel (mux channel) interface to the front-end processor (FEP), which acts as the DTE, and then through an RS-232 serial interface to the data modem (DCE). The modem is connected over the transmission medium (4-wire, full duplex) to the data modems of secondary stations 1 and 2; in each secondary station the data modem (DCE) connects through an RS-232 interface to a line control unit (DTE) serving terminal equipment such as CT and ROP.]

Figure 3.37 Multipoint data communication circuit block diagram



‰‰ The mainframe is the host of the network and the application


program is stored on it.

‰‰ The primary station in Figure 3.37 can store, process or retransmit the data received by it from the secondary stations. It also stores software for database management.

‰‰ The LCU at the primary station is more complicated than the LCU at the secondary stations.

‰‰ The primary LCU controls the data traffic to and from various
circuits having different characteristics.

‰‰ The secondary LCU is used for directing the data traffic between one
data link and a few terminal devices. All these devices operate at the
same speed using the same character code.

‰‰ The LCU which has software associated with it is called as a front


end processor (FEP). The LCU at the primary station is generally a
FEP.

3.14.1 Line Control Unit (LCU)

Functions of LCU

‰‰ LCU has many important functions. Some of the important functions


of LCU are as follows:

1. At primary station it provides an interface between host computer


and other circuits.

2. LCU directs flow of input and output data between different links and
their application programs.

3. It performs parallel to serial and serial to parallel conversion of data.

4. LCU has circuitry for error detection and error correction.

5. The data link control (DLC) characters are inserted and deleted in
the LCU.

±± The LCU operates on the digital data so it is called as Data Terminal


Equipment (DTE).

±± Inside LCU, there is one circuit which performs most of the tasks
mentioned above. It is called as UART when the transmission is of
asynchronous type and it is called as USRT when synchronous
transmission is being used.

3.14.2 Universal Asynchronous Receiver/Transmitter (UART)

UART is used for asynchronous data transmission between the DTE and DCE. The asynchronous data format is used and no clocking information is transferred between the UART and the modem.

Functions of UART

‰‰ Some of the important functions of UART are

1. Serial to parallel and parallel to serial data conversions.

2. Error detection by inserting and checking parity bits.

3. Inserting and detecting start and stop bits.

‰‰ UART is made of two parts: UART transmitter and UART receiver.

‰‰ Before transfer of data in either direction, it is essential to program


a control word into the UART control register. This control word
indicates the nature of data, number of data bits, whether parity is
used and if used whether it is an even parity or an odd parity.

‰‰ Figure 3.38 shows how to program the control word for various
functions. The control word set up the data, parity and stop bit
steering logic circuit.

NPB    1 = no parity bit (RPE disabled)
       0 = parity bit used
POE    1 = even parity
       0 = odd parity
NSB    1 = 2 stop bits
       0 = 1 stop bit

NDB2   NDB1    Bits/word
  0      0        5
  0      1        6
  1      0        7
  1      1        8

Note: when NDB2 NDB1 = 1 1 and NSB = 1, 1.5 stop bits are used.


Figure 3.38 Control word
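The control word tells the UART how to frame each character. A small Python sketch (illustrative only; the function and its names are not from the text) that builds one asynchronous frame, start bit first, from a character, the number of data bits, the parity selection and the number of stop bits:

def uart_frame(char, data_bits=8, parity='even', stop_bits=1):
    # Asynchronous frame: 1 start bit (0), LSB-first data bits, optional parity bit, stop bit(s) (1)
    bits = [(ord(char) >> i) & 1 for i in range(data_bits)]      # data, LSB first
    frame = [0] + bits                                           # start bit
    if parity is not None:
        p = sum(bits) % 2                                        # even-parity bit
        frame.append(p if parity == 'even' else 1 - p)           # odd parity inverts it
    frame += [1] * stop_bits                                     # stop bit(s); line returns to mark
    return frame

print(uart_frame('A'))   # 'A' = 0x41 -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]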

3.14.3 UART Transmitter

Figure 3.39 (a) shows the block schematic of UART transmitter.

‰‰ The UART sends a transmit buffer empty (TBMT) signal to the DTE to indicate that it is ready to receive data.

‰‰ When the DTE senses an active TBMT signal, it sends a parallel


data character to the transmit data lines TD0 – TD7 and strobes them
into the transmit buffer register with the transmit data strobe signal
(TDS).

‰‰ When the transmit end of character (TEOC) signal becomes


active, the contents of transmit buffer register are transferred to the
transmit shift register.

‰‰ The data pass through the steering logic circuit where the start, stop
and parity bits are picked up.

‰‰ The data is loaded into the transmit shift register , and then it is
serially outputted on the transmit serial output (TSO) pin. The bit
rate of outputted data is equal to the transmit clock frequency (TCP)

‰‰ When the data in the transmit shift register is being sequentially


clocked out, the DTE loads the next character into the buffer register.

‰‰ This process continues till the DTE transfers all its data. This is
shown in Figure 3.39 (b).

[Figure: UART transmitter. Parallel input data TD0-TD7 from the LCU is strobed by TDS into the transmit buffer register (TBR). The control register (loaded via CS) holds NPB, POE, NSB, NDB1 and NDB2, which set up the parity generator and the data-, parity- and stop-bit steering logic. The assembled frame is loaded into the transmit shift register and shifted out serially on TSO at the TCP clock rate; the buffer-empty logic sets TBMT in the status word register, which is read via SWE.]

Figure 3.39(a) Block diagram of UART transmitter



Figure 3.39 (b) Timing diagram of UART transmitter

3.14.4 UART Receiver

Figure 3.40 (a) Shows the simplified block diagram of a UART receiver.

[Figure: UART receiver. The receive serial input (RSI) passes through the start-bit verify and timing circuits (clocked by RCP) into the receive shift register (RSR). A parity checker checks the received parity bit; the assembled character is transferred to the receive buffer register (RBR) and read out in parallel on RD0-RD7 to the LCU when RDE is active. The status word register (SWR) holds the RPE, RFE, RDA and ROR flags, read via SWE and cleared via RDAR.]

Figure 3.40 (a) Block diagram of UART-receiver



‰‰ The same control word which is used by the transmitter is used


by the receiver to determine the number of stop bits, data bits and
the parity bit information for the UART receiver.

Timing diagram of UART receiver

‰‰ Refer Figure 3.40 (b) to understand various operations of receiver.


The UART receiver ignores the idle time line 1’ s.

‰‰ As soon as a valid start bit is detected by the start-bit verification circuit, the data character is clocked serially into the receiver shift register.

‰‰ The parity check circuit checks the parity bit if it is used.

‰‰ When one complete character is loaded, into the shift register, the
complete character is transferred to the buffer register using parallel
data transfer.

‰‰ The receive data available (RDA) flag is set in the status word
register.

‰‰ In order to read the status register, the DTE observes the status word
enable (SWE). If this line is found active the character is read from
the buffer register by placing an active condition on the receive data
enable (RDE) pin.

‰‰ Once the data reading is over the DTE places an active signal on the
receive data available reset (RDAR) pin. Which resets the RDA pin.

‰‰ Meanwhile the next character is received and clocked into the receiver shift register, and this process is repeated until all the data have been received.

Figure 3.40 (b) Timing diagram-UART receiver



3.14.5 Universal Synchronous Receiver/Transmitter (USRT)

The USRT is used for synchronous data transmission between the DTE and DCE. In synchronous transmission, clocking information is transferred between the USRT and the modem. Each transmission begins with a unique SYN character.

Functions of USRT

The important functions of USRT are as follows:

1. Serial to parallel and parallel to serial data conversion.

2. Error detection.

3. Insertion and detection of SYN character.

Figure 3.41 (b) shows the block diagram of USRT transceiver. It


operates very similar to the UART . Hence we will discuss only the
differences.

USRT does not use start and stop bits. Instead, unique SYN characters are loaded into the transmit and receive SYN registers before the data is transmitted.

The programming information for the control word is shown in


Figure 3.41 (a)

NPB    1 = no parity bit (RPE disabled)
       0 = parity bit used
POE    1 = even parity
       0 = odd parity

NDB2   NDB1    Bits/word
  0      0        5
  0      1        6
  1      0        7
  1      1        8

Figure 3.41 (a) USRT control word



3.14.6. USRT Transmitter

‰‰ The bit rate of transmit clock signal (TCP) is adjusted as per


requirement. The desired SYN character is loaded from the parallel
input pins (DB0-DB7) into the transmit SYN register with the help of
transmit SYN strobe (TSS).

‰‰ Data is loaded into the transmit data register from DB0-DB7 with the
help of transmit data strobe (TDS) signal.

‰‰ The next character to be transmitted is taken out from the


transmit data register if the TDS pulse comes during the
transmission of the present character.

‰‰ But if TDS pulse does not come, the next transmitted character is
extracted from the transmit SYN register and the SYN character
transmitted signal (SCT) is set.

‰‰ The transmit buffer empty (TBMT) signal is used for requesting the
next character from the DTE.

‰‰ The serial output data appears on the transmit serial out (TSO) pin.

Figure 3.41 (b) USRT transceiver



3.14.7 USRT Receiver

‰‰ The bit rate of the receiver clock signal (RCP) is adjusted as per
requirement and the desired SYN character is loaded into the receive
SYN register from DB0-DB7 with the help of receive SYN strobe (RSS).

‰‰ When the receiver reset input goes from high to low, the receiver is placed in the search mode, in which the serially received data is checked bit by bit to find the SYN character.

‰‰ After clocking each bit into the receive shift register, its contents are compared with those of the receive SYN register. If they are identical, it shows that a SYN character has been found and the SYN character received (SCR) output is set.

‰‰ This character is transferred into the receiver buffer register and the
receiver is placed into the character mode.

‰‰ In this mode, the receive data is checked character-by-character and


the receive flags are set accordingly.

3.15 SERIAL INTERFACE

‰‰ In order to ensure proper data flow between LCU and modem, a


serial interface is introduced.

‰‰ The aim of using this interface is to coordinate the flow of data,


control signals and timing information between the DTE and DCE.

‰‰ In this section we are going to discuss the most widely used serial interface, RS-232.

‰‰ The different serial interface available for serial data communication


are,

1.RS-232, 2.RS-449, 3.RS-530 ,4.RS-422 and 5.RS-423.



3.15.1 The EIA 232/V.24 Standard

‰‰ One of the most common interface standards is RS-232 standard. RS


means recommended standards . RS-232 standard is published by
the Electronic Industry Association (EIA).

‰‰ The latest revision of these standards is the third one so it is called


as RS-232 C standard. This includes the original RS-232 standards.

‰‰ The V.24 is almost a similar standard as the EIA 232 which is


recommended by the ITU-T (CCITT) standards.

‰‰ In the standards the terminal or computer is called a Data Terminal


Equipment (DTE) and the modem is called a Data Circuit Terminating
Equipment (DCE).

‰‰ To specify the exact nature of interface between the DTE and DCE
following characteristics are used:

1. Mechanical

2. Electrical

3. Functional

4. Procedural

‰‰ The mechanical characteristics specify the physical connection


between the DTE and DCE . Typically , the data signal, control signal,
timing signal and ground signal are bundled into a cable with a terminator
connector male or female at each end.

‰‰ The mechanical specification for EIA-232/V.24 for a 9 pin and 25


pin connector is shown in figure 3.42 (a) and 3.42 (b).

‰‰ The electrical characteristics specify the voltage levels and timing of


voltage changes. Both DTE and DCE must use the same code (e.g.
NRZ or RZ).

‰‰ These characteristics also determine the data rates and distance


that can be achieved.

‰‰ The functional characteristics specify which circuits are connected


to each of the pins in the 9 pins or 25 pins , and what they mean. The
Figure 3.43 shows 9 pins that are nearly always used.

Functions can be classified into groups like data, control, timing


and electrical ground.

The procedural characteristics is the protocol, that is, the legal


sequence of events . The protocol is based on action –reaction pairs.

For e.g. when the DTE (Terminal )asserts a request to send, the
DCE ( modem) replies with a clear to send. Similar action-reaction pair
exist for other circuits as well.

3.15.2 The RS-232C Voltage Levels

™™ The binary data can be represented by two different voltages. A


binary “1” is called as mark and a binary “0” is called as space. As
per the RS-232C standards the voltage ranges for representing mark
and space are well defined.

™™ A "mark", i.e. binary "1", is represented by any voltage between -3 V and -25 V.

™™ A "space", i.e. binary "0", is represented by any voltage between +3 V and +25 V, as shown in Figure 3.44.

™™ The voltage range between -3 V and +3 V is treated as an invalid range. There should not be any mark or space voltages within this 6 V wide zone at any time.

™™ Thus to send a ‘mark’ the selected voltage level should be close to


-25 volts and to send a ‘space’ the voltage level should be close to
+ 25 volts.
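A tiny Python sketch (purely illustrative; the thresholds come from the RS-232C levels described above) that classifies a received line voltage as mark, space or invalid:

def rs232_level(voltage):
    # RS-232C receiver decision: mark (logic 1) for -3 V to -25 V,
    # space (logic 0) for +3 V to +25 V, otherwise the level is invalid
    if -25.0 <= voltage <= -3.0:
        return "mark (1)"
    if 3.0 <= voltage <= 25.0:
        return "space (0)"
    return "invalid"

for v in (-12.0, +12.0, 1.5):
    print(v, "->", rs232_level(v))    # mark, space, invalid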

3.15.3 RS-232 Signals

Basic communication of a signal from source to receiver requires


only a single wire and ground wire. The Rs-232 standard however makes
a provision for various handshaking arrangements. There are actually
19 signals used for interfacing . The RS-232 standard uses at the most
25 wires in a cable and connector. The standard divides these signals
into four groups. They are:

1. Data group

2. Control group

3. Timing group

4. Ground

Table 3.6 shows the RS-232 signals divided into the four
categories.

Table 3.6 V.24/EIA-232-F interchange circuits

EIA-232  V.24    Name                                      Direction (to)   Function

Data Signals
BA       103     Transmitted Data                          DCE              Transmitted by DTE
BB       104     Received Data                             DTE              Received by DTE
SBA      118     Secondary Transmitted Data                DCE              Transmitted by DTE
SBB      119     Secondary Received Data                   DTE              Received by DTE

Control Signals
CA       105     Request to Send                           DCE              DTE wishes to transmit
CB       106     Clear to Send                             DTE              DCE is ready to receive; response to Request to Send
CC       107     DCE Ready                                 DTE              DCE is ready to operate
CD       108.2   DTE Ready                                 DCE              DTE is ready to operate
CE       125     Ring Indicator                            DTE              DCE is receiving a ringing signal on the channel line
CF       109     Received Line Signal Detector             DTE              DCE is receiving a signal within appropriate limits on the channel line
CG       110     Signal Quality Detector                   DTE              Indicates whether there is a high probability of error in the received data
CH       111     Data Signal Rate Selector                 DCE              Selects one of two data rates
CI       112     Data Signal Rate Selector                 DTE              Selects one of two data rates
CJ       133     Ready for Receiving                       DCE              On/off flow control
SCA      120     Secondary Request to Send                 DCE              DTE wishes to transmit on the reverse channel
SCB      121     Secondary Clear to Send                   DTE              DCE is ready to receive on the reverse channel
SCF      122     Secondary Received Line Signal Detector   DTE              Same as 109, for the reverse channel
RL       140     Remote Loopback                           DCE              Instructs the remote DCE to loop back signals
LL       141     Local Loopback                            DCE              Instructs the DCE to loop back signals
TM       142     Test Mode                                 DTE              Local DCE is in a test condition

Timing Signals
DA       113     Transmitter Signal Element Timing         DCE              Clocking signal; transitions to ON and OFF occur at the center of each signal element
DB       114     Transmitter Signal Element Timing         DTE              Clocking signal; both 113 and 114 relate to signals on circuit 103
DD       115     Receiver Signal Element Timing            DTE              Clocking signal for the received data

Ground
AB       102     Signal Ground / Common Return             -                Common ground reference for all circuits

Functions of various RS-232 lines

Let us now understand the operation of different lines (signals) of


RS-232 standard.

Data

Transmitted data and received data lines

‰‰ These are the lines which actually carry the message bits between
the DTE and DCE ends.

‰‰ The transmitted data line (number 2) is defined by RS-232 standard


to carry a signal from the DTE end to DCE end.

‰‰ And the received data line is used to carry a signal from DCE to DTE
end. The data is transmitted serially on these lines.

Secondary data transmit and receive lines

‰‰ The secondary lines are provided to transmit and receive data for
those applications in which there are two channels in each direction.

‰‰ Out of two the first channel is a primary high speed high perfor-
mance channel and the second channel is used for carrying some
less critical message.

‰‰ This message can be like how many errors have been detected or
about the condition of the data link etc. Most applications do not use
these lines but some DTE and DCE equipment can make use of it.

Control lines

‰‰ This is the largest group of lines in RS-232 standard. There are in


all 12 control lines, nine out of which are used for the control of the
primary channel, and three for the secondary channel.

‰‰ Out of the nine primary control lines, six are for the DCE to DTE
direction and the remaining three are for the DTE to DCE direction.

Timing lines

Timing lines are used in those applications in which a timing


signal must be sent along with the data. These timing signals are used
for the receiver synchronization .Most of the RS-232 installations are
asynchronous and hence do not use these lines.

Ground

‰‰ There are two ground lines, namely the signal ground (shown in Table 3.6) and the protective ground. The protective ground is connected to the chassis of the equipment. It provides protection to the user against shocks.

‰‰ The signal ground establishes a 0-V point for the RS-232


transmission. The voltages that are transmitted on the other wires

are measured with respect to signal ground.

‰‰ The signal ground may or may not be connected to the protective ground, depending on which arrangement gives better performance and minimizes the noise.

3.15.4. Control Lines used for Handshaking

‰‰ The control lines for handshaking are request to send (RTS), clear to
send (CTS),data set ready (DSR) and data terminal ready (DTR).

‰‰ Here “data set” is a communication box which acts as DCE and “data
terminal” is the computer or terminal which is a DTE.

‰‰ The other three control lines are “ring indicator”, “received line signal
detector” and the “signal quality detector”.

‰‰ The received line signal detector shows the presence of data coming
into the DCE from telephone lines. It also shows whether the qual-
ity of the received signal is adequate for low error performance and
indicates it via the “signal quality detector”.

‰‰ If a telephone line is being used, then the DCE can tell the DTE that someone is calling when it detects the ringing signal on the line. It uses the "ring indicator" to show this.

3.15.5 Advantages of RS-232 Interface

1. The RS-232 interface provides a reliable means of communication.

2. It is a low cost interface.

3. It is suitable for low baud rate slow systems typically upto 20,000
bauds.

4. Use of control and handshake lines, ensures a successful


communication.

5. The standard voltage levels for mark and space makes it possible to
reduce the interference due to noise.

6. RS-232 is available in IC form which makes it compact, cost


effective and easy to use.

3.15.6 Limitations of RS-232 Interface

Though the RS-232 is among the most common standards in use,


it has the following limitations

1. RS-232 specifications are for distances less than 50 ft. To increase


the distance, the baud rate has to be reduced.

2. The noise interference for single-ended signals is very high. To improve noise immunity it is necessary to have the mark and space voltages close to ±25 V. This is difficult in modern digital systems which use a 5 V supply.

3. The highest baud rate is 20,000 baud for distance less than 50 ft.
This is too slow.

4. Multiple users cannot share the same wire in RS-232 . It is designed


for two users connected directly to each other.

5. Less flexibility: Many systems need greater flexibility .That means


many interconnections should be allowed and the one which is best
suited to the given application be selected.

Some additional standards are used to overcome the


limitations of RS-232 system. These other interfaces are designed to
provide superior performance to RS-232 in one or more areas.

3.16 CENTRONICS PARALLEL INTERFACE

‰‰ Parallel interface allows the user to transfer data between two


devices with eight or more bits at a time.

‰‰ Parallel transmission is also called as serial by word transmission.

‰‰ The advantage of parallel transmission is that the speed of data


transfer is increased as compared to serial transmission.

‰‰ But then the number of wires, number of connections, complexity



everything will go up.

‰‰ Another advantage of parallel transmission is that most


computer terminals and peripheral equipments process data
internally in parallel.

‰‰ So if we use the parallel interface, then it is not necessary to convert


data from serial to parallel form or vice versa.

‰‰ The parallel interface is used between devices that are placed in


close proximity with each other.

Figure 3.45 Shows the parallel interface between a computer and


a printer.

Computer --(data lines d0-d7; control lines STB, AF, PRIME, SLCTIN)--> Printer
Computer <--(status lines ACK, BUSY, PO, SLCT, ERROR)-- Printer

Figure 3.45 A parallel interface between a computer and printer

3.16.1 Data Lines

These are eight parallel data lines d0 to d7. All these are
unidirectional lines taking data from computer to printer.

Characters are transmitted from the computer to the printer in the form of seven-bit ASCII (with the eighth bit reserved for parity), or in the form of the eight-bit extended ASCII or EBCDIC codes.

3.16.2 Control Lines

There are four unidirectional control line to send the control


information from computer to printer. The control lines are:

1. Strobe (STB)

2. Autofed (AF)

3. Prime (PRIME) and

4. Select In (SLCTIN).

1. Strobe (STB) line

This is a negative edge triggered signal used by the computer to


direct the printer to accept data.

2. Autofed (AF) signal

‰‰ It is an active low signal.

‰‰ It indicates whether the printer automatically performs a line


feed function after receiving carriage return character from the
computer.

‰‰ If (AF) is low (active) then the printer responds to the carriage return
character by performing a carriage return and a line feed.

3 Prime (PRIME) control signal

‰‰ This line is also called a initialize line. It is an active low line used by
the computer to clear printer’s memory which includes the printer
programming and print buffer.

‰‰ When printer detect an active low condition on this line, it returns


to its original position.

4. Select (SLCTIN)

‰‰ This line is not used very frequently.

‰‰ When used, a low signal on this should be seen by the printer before
accepting the data from the computer.

‰‰ Many printer connect this line permanently to ground.

3.16.3 Status Lines

‰‰ These are unidirectional lines used for conveying the information



from printer to computer Via these lines, the printer tells computer
what the printer is doing.

‰‰ The status lines are: ACK ,BUSY,PO,SLCT and ERROR .

1. Acknowledge (ACK)

The printer activates this line after it receives an active STB signal from the computer. It tells the computer that the printer is ready to receive another character from the computer.

2. Busy

““ It is an active high line which becomes 1 when the printer is busy


and cannot accept data from the computer .Following are the
conditions that makes a printer busy.

1. Printer is inputting data from data lines or printer’s data buffer


is full.

2. The printer is printing or processing the data.

3. The printer is off.

4. The printer’s ERROR line is low.

3. Paper out (PO)

It becomes active when the printer is out of paper. When the PO line is activated (high), the ERROR line is also activated (low).

4. Select (SLCT)

This is an active high line which indicates whether the printer is selected
or not.

The printer activates this line when it is on line.

5. Error (ERROR)

This status line is an active low line. It is used to indicate a


printer problem.

This lines becomes active under the following operating


conditions.

1. The printer is off line.

2. The printer is out of paper.

3. Some other problem.

3.17 INTRODUCTION TO PULSE MODULATION

Introduction

In the pulse Modulation, the carrier is a train of discrete pulses


rather than being a sine wave.

Definition

In pulse modulation, some parameter of a carrier pulse train is


varied in accordance with the message signal.

3.17.1 Classification of Pulse Modulation

Figure 3.46 Classifications of Pulse Modulation



PAM - Pulse Amplitude Modulation          PCM  - Pulse Code Modulation
PTM - Pulse Time Modulation               DPCM - Differential Pulse Code Modulation
PWM - Pulse Width Modulation              DM   - Delta Modulation
PPM - Pulse Position Modulation           ADM  - Adaptive Delta Modulation

Analog Pulse modulation

‰‰ In analog pulse modulation, a periodic pulse train is used
as the carrier wave, and some characteristic of each pulse
(e.g., amplitude, duration or position) is varied in a continuous
manner in accordance with the corresponding sample value of the
message signal.

‰‰ Here, information is transmitted basically in analog form, but the


transmission takes place at discrete times

Digital Pulse Modulation

‰‰ In digital pulse modulation, the message signal is represented in a


form that is discrete in both time and amplitude, thereby permitting
its transmission in digital form as a sequence of coded pulses.

3.17.2 Sampling Process

The sampling process is a process of converting a continuous


time signal into an equivalent discrete-time signal

‰‰ In sampling process, an analog signal is converted into a corresponding


sequence of samples that are usually spaced uniformly in time

‰‰ It is an operation that is basic to digital signal processing and digital


communication.

‰‰ The continuous time signal x(t) is applied at the input of a multiplier.


The other input to the multiplier is a train of pulses. This signal is
called sampling signal.

‰‰ The sampling signal s(t) is a discrete train of unit-amplitude pulses with
a period of 'Ts' seconds.

‰‰ The time 'Ts' is called the sampling time (or) sampling period, and
its reciprocal fs = 1/Ts is called the sampling frequency (or) sampling
rate. This ideal form of sampling is called instantaneous sampling.

Sampling Theorem

A continuous time signal can be completely represented by its
samples and recovered back if the sampling frequency is at least twice
the highest frequency content of the signal.

(ie) fs ≥ 2 fm

Where,

fs = Sampling frequency

fm = Modulating signal frequency

Nyquist rate

‰‰ When the sampling rate becomes exactly equal to 2fm samples/sec
for a signal of bandwidth fm, it is called the 'Nyquist rate'.

‰‰ It is the minimum sampling rate required to represent the


continuous signal faithfully in its sampled form

fs = 2 fm samples/sec

Nyquist Interval

It is the time interval between any two adjacent samples when the
sampling rate is the Nyquist rate:

Ts = 1/(2fm) sec
Aliasing

If the sampling frequency fs is less than the Nyquist rate, then a
distortion called "aliasing" is introduced in the spectrum of the sampled
signal. The aliasing effect is clearly shown in figure 3.49.

Figure 3.49 Power spectral density of sampled signal
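The Nyquist condition and the alias frequency can also be checked numerically. The short Python sketch below is illustrative only (the function names are not from the text); it assumes a pure tone of frequency f_in sampled at fs and folds the tone into the range 0 to fs/2.

    # Minimal sketch: checking the Nyquist criterion and predicting the alias frequency.
    # Function names (nyquist_ok, alias_frequency) are illustrative, not from the text.

    def nyquist_ok(fs, fm):
        """True when the sampling rate satisfies fs >= 2*fm (no aliasing)."""
        return fs >= 2 * fm

    def alias_frequency(fs, f_in):
        """Apparent (aliased) frequency of a tone f_in sampled at fs."""
        f = f_in % fs               # fold into one sampling interval
        return min(f, fs - f)       # reflect into 0 .. fs/2

    fs, fm = 8_000, 4_000            # 8 kHz sampling, 4 kHz message band
    print(nyquist_ok(fs, fm))            # True  -> fs = 2*fm (Nyquist rate)
    print(alias_frequency(fs, 5_000))    # 3000 Hz alias for a 5 kHz tone

Running it with fs = 8 kHz and a 5 kHz tone reproduces the 3 kHz alias frequency worked out in solved problem 1 later in this chapter.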

3.18 PULSE AMPLITUDE MODULATION (PAM)

Definition

The amplitude of the pulse carrier is changed in proportion with


the instantaneous amplitude of the modulating signal.

Types of PAM

Depending upon the shape of the pulse, there are two
types of PAM:

1. Natural PAM

2. Flat -top PAM



Why flat top PAM is widely used ?

i. Flat-top PAM is the most popular and widely used form. The reason
for using flat-top PAM is that during transmission, noise interferes
with the top of the transmitted pulses, and this noise can be easily
removed if the PAM pulse has a flat top.

ii. In natural PAM, amplitude detection of the received pulse is not exact,
but in flat-top PAM the amplitude detection of the received pulse is
exact because the noise (or) distortion is removed easily.

iii. The electronic circuitry needed to perform natural sampling is
somewhat complicated because the pulse-top shape has to be
maintained. These complications are reduced in flat-top PAM.

Therefore, flat-top sampled PAM is widely used.

3.18.1 Generation of Natural PAM

The generation of natural PAM and its output waveform is as


shown in figure 3.50 and 3.51 respectively

‰‰ The modulating signal x(t) is passed through a LPF which will band
limit this signal to fm. Band limiting is necessary to avoid the “aliasing
effect” in the sampling process.

‰‰ The pulse train generator generates a train of pulses with frequency fs
such that fs ≥ 2fm; thus the Nyquist criterion is satisfied.

‰‰ Uniform (or) natural sampling takes place at the multiplier block to


generate the PAM-Signal

3.18.2 Flat-Top PAM

‰‰ A sample and hold circuit, shown in figure 3.52, is used to produce
flat-top sampled PAM.

‰‰ A sample and hold circuit consists of two field effect transistor (FET)
switches and a capacitor

‰‰ The input modulating signal x(t) is applied to the charging switch,
and the sampling pulses are connected to the gate terminals G1
and G2 of the two FET (charging and discharging) switches.

‰‰ At the sampling instant, sampling switch is closed for a short


duration by a short pulse applied to a gate G1 of the transistor.

‰‰ During this period, the capacitor ‘c’ quickly charge upto a voltage
equal to the instantaneous sample value of the incoming signal x(t)

‰‰ Now, the sampling switch is opened and capacitor 'C' holds the charge.
The discharge switch is then closed by a pulse applied to gate G2.
Due to this, capacitor 'C' is discharged to zero volts.

‰‰ The discharge switch is then opened and thus the capacitor has no
voltage. This process is repeated and the flat-top PAM signal is
generated, as shown in figure 3.53.
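A flat-top PAM signal can be imitated numerically by holding each sample for one sampling period. The sketch below is a minimal illustration, assuming a 50 Hz test tone and illustrative function names; it is not a circuit-level model of the FET sample-and-hold described above.

    import numpy as np

    # Minimal numerical sketch of flat-top PAM: a sample-and-hold that grabs x(t)
    # every Ts seconds and holds that value until the next sampling instant.

    def flat_top_pam(x, fs_sample, fs_sim):
        """Hold each sample of x for one sampling period Ts = 1/fs_sample."""
        hold_len = int(fs_sim / fs_sample)          # simulation points per Ts
        y = np.empty_like(x)
        for start in range(0, len(x), hold_len):
            y[start:start + hold_len] = x[start]    # flat top: constant over Ts
        return y

    fs_sim = 10_000                                  # "continuous" time resolution
    t = np.arange(0, 0.02, 1 / fs_sim)
    x = np.sin(2 * np.pi * 50 * t)                   # 50 Hz modulating signal
    pam = flat_top_pam(x, fs_sample=500, fs_sim=fs_sim)   # 500 Hz >= 2 * 50 Hz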

3.19 PULSE WIDTH MODULATION

Definition

The width of the carrier pulse varies in proportion with the


amplitude of modulating signal.

3.19.1 PWM-Generator

‰‰ The figure 3.54 shows the circuit that generates PWM signal

‰‰ The amplitude and frequency of the PWM wave remains constant.

‰‰ As the amplitude of the modulating signal changes, the width of the
pulse also varies: when the amplitude is more positive, the pulse width
is larger, and when the amplitude is more negative, the pulse width
is narrower.

‰‰ Therefore, the information is contained in the width variations.


‰‰ Amplitude variations due to additive noise do not affect the
performance of PWM generation.
‰‰ Thus PWM is more immune to noise than PAM

3.20 PULSE POSITION MODULATION

The amplitude and width of the pulses are kept constant but the
position of each pulse is varied in accordance with the amplitude of the
sampled values of the modulating signal. The circuit shown in figure
3.54 is also used to generate the PPM-signal
‰‰ The PPM-Signal can be generated from PWM signal.
‰‰ To generate PPM signal, the PWM pulse obtained at the output
of the comparator are used as the trigger input to monostable
multivibrator.
‰‰ The monostable is triggered on negative edge of PWM. The output
of monostable goes high. This high voltage remains high for a fixed
period and then turns low.
‰‰ The highest amplitude of the modulating signal produces the widest
PWM pulse; therefore the PPM pulse moves farthest to the right.
‰‰ The lowest amplitude of the modulating signal produces the narrowest
PWM pulse; therefore the PPM pulse moves farthest to the left.
‰‰ It is also more immune to noise than PAM.
PAM is used as an intermediate form of modulation with PSK, QAM
and PCM, although it is seldom used by itself. PWM and PPM are used
in special-purpose communication systems, mainly for the military, but
are seldom used for commercial digital transmission systems. PCM is by
far the most prevalent form of pulse modulation.
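As a rough numerical illustration of the above, the sketch below maps a normalised sample amplitude to a pulse width (PWM) and places the PPM pulse at the trailing edge of that width. The linear mapping and the names used are assumptions for illustration, not a standard.

    # Minimal sketch of how PWM and PPM encode a sample value inside one slot of
    # duration Ts.  The mapping (width/position proportional to the normalised
    # amplitude) is an illustrative assumption.

    def pwm_width(sample, ts, vmin=-1.0, vmax=1.0):
        """Pulse width grows linearly with the sample amplitude."""
        norm = (sample - vmin) / (vmax - vmin)       # 0 .. 1
        return norm * ts

    def ppm_position(sample, ts, vmin=-1.0, vmax=1.0):
        """PPM pulse placed where the PWM pulse would end (its trailing edge)."""
        return pwm_width(sample, ts, vmin, vmax)

    ts = 1e-3                                        # 1 ms slot
    for s in (-1.0, 0.0, +1.0):
        print(f"sample {s:+.1f}: width {pwm_width(s, ts)*1e3:.2f} ms, "
              f"position {ppm_position(s, ts)*1e3:.2f} ms into the slot")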

3.21 PULSE CODE MODULATION (PCM)

Introduction

PCM is the preferred method of communication within the


public switched telephone network because with PCM it is easy to
combine digitised voice and digital data into a single, high speed digital
signal and propagate it over either metallic (or) optical fibre cables.

‰‰ PCM is the only digitally encoded modulation technique, that is


commonly used for digital transmission.

‰‰ In PCM system, the message signal is first sampled and then


amplitude of each sample is rounded-off to the nearest one of a finite
set of allowable values known as “Quantization” levels, so that both
time and amplitude are in the discrete form.

‰‰ PCM is essentially an analog to digital conversion process, where the
information contained in the instantaneous samples of an analog
signal is represented by digital codes and transmitted as a serial
bit stream.

Figure 3.56 shows a simplified block diagram of a single-channel,


simplex (one-way only) PCM system.

Figure 3.56 PCM-System


3.21.1 PCM-Transmitter

‰‰ The basic operations performed in the transmitter of a PCM system



are sampling, quantizing and encoding. The quantizing and encoding


operations are usually performed in the same circuit which is called
an analog to digital converter.

Band Pass filter

The BPF prior to the sampler is included to prevent aliasing of the
message signal by band limiting it to fm, so that a proper sampling rate
can be used at the PCM transmitter.

Sample and Hold circuit

To ensure perfect reconstruction of the message signal at the
receiver, the sampling frequency fs must be at least twice the highest
frequency component fm of the message signal, in accordance with the
sampling theorem, i.e., fs ≥ 2fm.

‰‰ The sample and hold circuit periodically samples the analog input
signal and converts those samples to multilevel PAM-signal.

Analog to Digital Converter (ADC)

Quantizer

‰‰ The process of making signal discrete in amplitude (PAM) by


approximating the sampled signal to the nearest predefined (or)
representation level is called quantization.

‰‰ When the step size between any two adjacent levels is the same
throughout the signal range, the quantization is called uniform
quantization.

‰‰ Generally, however, non-uniform quantization is preferred for most
practical purposes because it provides finer resolution (better protection)
for low-level signals, which occur far more often than large-amplitude
samples.

Encoder

‰‰ The function of encoder is to encode the discrete set of samples.


The process of allocating some digital code to each level is called
encoding. The obtained codes are transmitted as a bit stream.

‰‰ For transmission, Gray code is preferred because only one bit changes
between adjacent quantization levels.

‰‰ As a result, a single bit error in the received PCM code word generally
produces only a small error in the recovered analog signal.

Parallel -to -serial converter

‰‰ The PCM codes are parallel binary data; transmitting them directly
would require n parallel wires for n-bit data, which increases the
transmission cost, and parallel transmission over long distances is
impractical because running n wires adds complexity.

‰‰ Therefore, the n-bit binary data is converted into serial data by a
parallel to serial converter and serial transmission is carried out.
This reduces complexity and transmission cost because only one
wire is used (e.g., an RS-232 cable).

Transmission path

‰‰ The PCM transmission path is the path between the PCM
transmitter and the PCM receiver over which the PCM signal travels.

‰‰ The PCM-signals are transmitted for long distance with the help of
regenerative repeaters.

3.21.2 Regenerative Repeaters

The three basic operations performed by regenerative repeaters
are:

i. Equalization

ii. Timing and

iii. Decision making



Figure 3.57 Block diagram of regenerative repeater

‰‰ Figure 3.57 shows the block diagram of a regenerative repeater.
Repeaters are placed along the transmission path sufficiently close to
each other.

‰‰ The equalizer shapes the received pulses so as to compensate for


the effects of amplitude and phase distortions produced by non-ideal
transmission characteristics of the channel.

‰‰ The timing circuitry provides a periodic pulse train which is derived


from the received pulses, for sampling the equalized pulses at the
instants of time where the signal-to-noise ratio is a maximum.

‰‰ The decision device makes a decision about whether the equalized
PCM wave at its input has a '0' value (or) a '1' value at the instant of
sampling.

‰‰ At the output of the decision device, we get a clean PCM-signal


without any trace of noise

3.21.3 PCM receiver

• A PCM-signal contaminated with noise is available at the receiver


input.

• The regeneration circuit at the receiver will separate the PCM-


pulses from noise and will reconstruct the original PCM signal. The
reconstructed PCM-signal is then passed through a serial-to-parallel
converter.

• The serial to parallel converter converts the received bit stream
into n-bit parallel binary data, which is applied to the DAC circuit as
input.

• The DAC performs the inverse of the ADC operation and produces the
output signal.

‰‰ The hold circuit produces the corresponding PAM signal at its
output.

‰‰ The PAM signal passes through the LPF to recover the analog signal
x(t). The low pass filter is called a reconstruction filter and its cut-off
frequency is equal to the message signal bandwidth fm.

3.21.4 Sampling

There are two basic techniques used to perform the sampling


function:

1. Natural sampling

2. Flat-top sampling.

Natural sampling

‰‰ "Natural sampling is done when the tops of the sample pulses retain
their natural shape during the sampling interval".

‰‰ In natural sampling, the frequency spectrum of the sampled output


is different from that of an ideal sample.

‰‰ The amplitude of the frequency components generated from narrow,


finite-width sample pulses decreases for the higher harmonics in a
(sin x)/x manner. It alters the information frequency spectrum re-
quiring the use of frequency equalizers before recovery by a low pass
filter.

‰‰ Figure 3.58 below shows natural sampling.



Figure 3.58 Natural sampling: (a) input waveform, (b) sample waveform,
(c) output waveform


Flat Top Sampling
‰‰ Flat- top sampling sometimes called rectangular sampling.
‰‰ “In flat-top sampling, top of the samples remains constant and is
equal to the instantaneous value of the analog signal.”
‰‰ Flat- top sampling is most commonly used for sampling voice signals
in PCM systems.
‰‰ Flat -top sampling is accomplished in a sample-and-hold circuit.
‰‰ In Flat-top sampling, the input voltage is sampled with a narrow
pulse and then kept relatively constant until the next sample is tak-
en.
‰‰ In the flat top sampling, the sampling process alters the frequency
spectrum and introduces an error called aperture error.
‰‰ When the amplitude of the sampled signal changes during the sample
pulse time, aperture error may occur. This aperture error prevents
the recovery circuit in the PCM receiver from exactly reproducing the
original analog signal voltage.
‰‰ Flat-top sampling introduces less aperture distortion than natural
sampling.
‰‰ Aperture error is compensated by using equalizers.
‰‰ Figure 3.59 shows Flat-top sampling.

Figure 3.59 Flat-top sampling: (a) input waveform, (b) sample waveform,
(c) output waveform

Sample and Hold


‰‰ The schematic diagram of a sample-and-hold circuit is shown in fig-
ure 3.60.

Figure 3.60 Schematic diagram of a sample-and-hold circuit (voltage
follower Z1, FET switch Q1, hold capacitor C1, voltage follower Z2)

‰‰ The FET acts as a simple analog switch. When Q1 is turned ON, it
offers a low-impedance path to deposit the analog sample voltage across
capacitor C1. The time during which Q1 is ON is called the aperture time
or acquisition time.

‰‰ Essentially C1 is the hold circuit. When Q1 is OFF, C1 does not have a


complete path to discharge through it and hence stores the sampled
voltage.
‰‰ The storage time of the capacitor is known as the A/D conversion
time because, during this time that the ADC converts the sample
voltage to a PCM code.
‰‰ The acquisition time should be very short to ensure that a
minimum change occurs in the analog signal while it is being depos-
ited across C1.
‰‰ If the input to the ADC changes while the conversion is being
performed, aperture distortion results.
‰‰ Aperture distortion is reduced by sample and hold circuit by having
a short aperture time and keeping the input to the ADC relatively
constant.
‰‰ Figure below shows the input analog signal, the sampling pulse, and
the waveform developed across C1.

Figure 3.61 Input and output waveforms of the sample-and-hold circuit,
showing the aperture (acquisition) time, the conversion time during which
C1 holds the sample, and the droop as C1 discharges
‰‰ It is important that the output impedance of voltage follower Z1 and the
ON resistance of Q1 be as small as possible, so that the RC charging time
constant of the capacitor is kept very short, allowing the capacitor to
charge or discharge rapidly during the short acquisition time.

‰‰ The inter-electrode capacitance between the gate and drain of the
FET is placed in series with C1 when the FET is OFF. Hence it acts
as a capacitive voltage-divider network.
‰‰ The gradual discharge across the capacitor during the
conversion time is called droop; it is caused by the capacitor
discharging through its own leakage resistance and the input
impedance of voltage follower Z2. Hence, the input impedance of Z2
and the leakage resistance of C1 should be as high as possible.
‰‰ The voltage followers Z1 and Z2 isolate the sample and-hold circuit
from the input and output circuitry.
Sampling Rate:

‰‰ The Nyquist sampling theorem establishes the minimum


sampling rate(fs) that can be used for a given PCM system.
‰‰ For a sample to be recovered accurately in a PCM receiver, each
cycle of the analog input signal (fa) must be sampled at least twice.
Consequently, the minimum sampling rate is equal to twice the
highest audio input frequency.
‰‰ If fs is less than two times fa, an impairment called alias or foldover
distortion occurs. The minimum Nyquist sampling rate is expressed
mathematically as,
fs ≥ 2fa
Where , fs - minimum Nyquist sample rate (Hz).
fa - maximum analog input frequency (Hz)
Figure 3.62 Output spectrum of the sample-and-hold circuit with no
aliasing (fs ≥ 2fa)



Figure 3.63 Aliasing (foldover) distortion (fs < 2fa)


‰‰ A sample-and-hold circuit is a nonlinear device (mixer) which has
two inputs: one is the sampling pulse and another one is analog
input signal. Hence, nonlinear mixing (heterodyning) occurs between
these two signals.
‰‰ Figure 3.62 shows the frequency-domain representation of the
output spectrum from a sample-and-hold circuit. The output in-
cludes the two original inputs (the audio and the fundamental fre-
quency of the sampling pulse), their sum and difference frequencies
(fs ± fa), all the harmonics of fs and fa (2fs, 2fa, 3fs, 3fa and so on), and
their associated cross products (2fs ± fa, 3fs ± fa and so on).
‰‰ The sampling pulse is made up of a series of harmonically relat-
ed sine waves. Each of these sine waves is amplitude modulated
by the analog signal and generates sum and difference frequencies
symmetrical around each of the harmonics of fs.
‰‰ Each sum and difference frequency generated is separated from its
respective center frequency by fa. As long as fs is at least twice fa,
none of the side frequencies from one harmonic will spill into the
sidebands of another harmonic, and aliasing does not occur.
‰‰ Figure 3.63 shows the results when an analog input frequency
greater than fs/2 modulates fs. The side frequencies from one
harmonic fold over into the sideband of another harmonic. The
frequency which folds over is an alias of the input signal (hence
the names “aliasing” or “foldover distortion”), it cannot be removed
through filtering or any other technique.

‰‰ In a PCM transmitter, the input bandpass filter is called an anti-aliasing
or anti-foldover filter. Its upper cutoff frequency is chosen such
that no frequency higher than one-half the sampling rate is allowed
to enter the sample-and-hold circuit, hence eliminating foldover
distortion.

‰‰ In PCM, the analog input signal is sampled, then converted to a serial


binary code. The binary code is transmitted to the receiver, where it
is converted back to the original analog signal.

‰‰ The binary codes used for PCM are n-bit code words, where n may be
any positive integer greater than 1. The codes currently used for PCM
are sign-magnitude codes, where the most significant bit (MSB) is the
sign bit and the remaining bits are used for the magnitude.

3.21.5 Resolution

‰‰ The magnitude of a quantum is also called the resolution. The
resolution is equal to the voltage of the minimum step size, which is
equal to the voltage of the least significant bit (Vlsb) of the PCM code.

‰‰ The resolution is the minimum voltage other than 0 V. In
figure 3.64, the resolution for the PCM code is 1 V. The smaller the
magnitude of a quantum, the better (smaller) the resolution
and the more accurately the quantized signal will resemble the
original analog sample.

Figure 3.64 (a) Analog input signal, (b) sample pulses, (c) PAM signal,
(d) PCM code (3-bit sign-magnitude codes, from 111 = +3 V down to
011 = -3 V; the three samples shown produce the codes 110, 001 and 111)
‰‰ As shown in figure 3.64 above, each sample voltage is rounded off
(quantized) to the closest available level and then converted to its
corresponding PCM code.
‰‰ The quantized PAM signal in the transmitter is the same PAM signal as
reproduced in the receiver. Therefore, any round-off errors in the
transmitted signal are reproduced when the code is converted back to
analog in the receiver. This error is called the quantization error (Qe).
‰‰ The quantization error is equivalent to additive white noise as it al-
ters the signal amplitude. This quantization error is also called quan-
tization noise (Qn).
‰‰ In figure 3.64 above, the first sample occurs at time t1, when the
input voltage is exactly +2 V. The PCM code that corresponds to +2 V
is 110, and there is no quantization error. Sample 2 occurs at time t2,
when the input voltage is -1 V. The PCM code that corresponds to
-1 V is 001, and again there is no quantization error.
‰‰ To determine the PCM code for some particular sample voltage,
simply divide the sample voltage by the resolution, convert the
quotient to an n-bit binary code, and then add the sign bit.
‰‰ In figure 3.64 above, for sample 3, the voltage at t3 is approximately
+2.6 V. The folded PCM code is,

(sample voltage) / (resolution) = 2.6 / 1 = 2.6

‰‰ There is no PCM code for +2.6; therefore, the magnitude of the sample
is rounded off to the nearest value, which is +3 V, or 111. The
rounding-off process results in a quantization error of 0.4 V.
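The procedure just described (divide by the resolution, round to the nearest level, prefix the sign bit) can be written as a short Python sketch. The helper name pcm_encode and the clipping rule are illustrative assumptions, not part of the text.

    # Minimal sketch of the 3-bit sign-magnitude PCM coding of figure 3.64:
    # divide the sample by the resolution, round to the nearest level, prefix
    # the sign bit (1 = +, 0 = -).

    def pcm_encode(sample, resolution=1.0, magnitude_bits=2):
        level = round(abs(sample) / resolution)          # nearest quantization level
        level = min(level, 2 ** magnitude_bits - 1)      # clip to the top code
        sign = '1' if sample >= 0 else '0'
        code = sign + format(level, f'0{magnitude_bits}b')
        qe = level * resolution - abs(sample)            # quantization error
        return code, qe

    for v in (+2.0, -1.0, +2.6):
        code, qe = pcm_encode(v)
        print(f"{v:+.1f} V -> {code}, quantization error {qe:+.1f} V")
    # +2.0 V -> 110 (no error), -1.0 V -> 001 (no error),
    # +2.6 V -> 111 with a 0.4 V quantization error, matching the text.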

The quality of the PAM signal can be improved by

i). Using a PCM code with more bits.

ii). Reducing the magnitude of a quantum

iii). Improving the resolution.

iv). Sampling the analog signal at a faster rate.

3.21.6 Quantization

Quantization is the process of converting an infinite number of


possibilities to a finite number of conditions. Analog signals contain an
infinite number of amplitude possibilities. Thus, converting an analog
signal to PCM-code with a limited number of combinations requires
quantization.

Linear analog to digital converter (or) Linear quantizer

‰‰ Figure 3.65 below shows the input-versus-output transfer function


for a linear analog-to digital converter which is also called a linear
quantizer.

‰‰ Figure 3.65 below shows that for a linear (ramp) analog input signal,
the quantized signal is a staircase function. Thus, the maximum
quantization error is the same for any input signal magnitude, as
shown in the figure.

‰‰ The step size of the staircase signal is constant for linear Quantizer.
Figure 3.65 Input versus output transfer function for a linear ADC,
showing the quantized staircase output and the quantization error
(Qe = ±1/2 LSB at most)


Classifications of quantization process:

Quantization: Uniform (midtread type / midrise type) and Non-Uniform.

Based on step size, the quantization process can be classified into


two types as,

(i) Uniform Quantization


(ii) Non-Uniform Quantization.

Uniform Quantization
In uniform Quantization the ‘step size’ is same (or) constant
throughout the input range.
There are two types of uniform Quantizer
1. Symmetric Quantizer of midtread type
2. Symmetric Quantizer of midrise type.
Midtread Quantizer
‰‰ In midtread Quantizer, the origin lies in the middle of a tread of stair
case like graph.
‰‰ The figure 3.66 below shows the corresponding input-output
characteristics of a uniform Quantizer midtread type.
Midrise Quantizer
‰‰ In midrise Quantizer, the origin lies in the middle of a rising part of
the stair case like graph.
‰‰ The figure 3.66 below shows the corresponding input-output
characteristics of a uniform Quantizer midrise type.
Non-Uniform Quantization
‰‰ In non-uniform quantization, the quantizer characteristic is
non-linear.
‰‰ The step size is not constant; it varies depending on the amplitude
of the input signal, and such quantization is known as non-uniform
quantization.
‰‰ The step size is varied according to signal level to keep signal to noise
ratio high.

Figure 3.66 Uniform quantization: (a) midtread type, (b) midrise type
(input-output staircase characteristic, decision levels, overload levels and
the resulting quantization error of at most ±δ/2)

3.21.7 Dynamic Range (DR)


DR is the ratio of the largest possible magnitude to the smallest possible
magnitude that can be decoded by the digital to analog converter in the
receiver:

Dynamic range, DR = Vmax / Vmin

Where
Vmin = the quantum value (resolution)
Vmax = the maximum voltage magnitude

Signal to Quantization Noise ratio (SQR)


‰‰ In a 3-bit PCM code scheme, the worst possible signal voltage to
quantization noise voltage ratio occurs when the input signal is at its
minimum amplitude (101 = +1 or 001 = -1). Since the quantization
error Qe equals half the resolution,

SQR = Resolution / Qe = Vlsb / (Vlsb / 2) = 2

‰‰ For the maximum amplitude input signal of 3 V (either 111 or 011),
the maximum quantization noise is also equal to the resolution
divided by 2. Hence the SQR for a maximum input signal is

SQR(max) = 3 / 0.5 = 6

in dB, SQR(max) = 20 log 6 = 15.6 dB

‰‰ Quantization error is caused by digitizing an analog sample. The
signal to quantization noise ratio is expressed as the ratio of average
signal power to average noise power.
‰‰ For linear PCM codes, the signal power to quantizing noise power
ratio (sometimes called signal to distortion ratio (or) signal to noise
ratio) is

SQR(dB) = 10 log [ (v^2/R) / ((q^2/12)/R) ]

where,
R = resistance (ohms)
v = rms signal voltage (volts)
q = quantization interval (volts)
v^2/R = average signal power (watts)
(q^2/12)/R = average quantization noise power (watts)

If the resistances are equal,

SQR(dB) = 10 log [ v^2 / (q^2/12) ] = 10 log 12 + 20 log (v/q)
        ≈ 10.8 + 20 log (v/q)        (since 10 log 12 = 10.79 ≈ 10.8)
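A quick numerical check of the expression above, with illustrative values of v and q (a minimal sketch, not from the text):

    import math

    # Check: SQR(dB) = 10*log10((v^2/R)/((q^2/12)/R)) = 10*log10(12) + 20*log10(v/q)

    def sqr_db(v_rms, q):
        return 10 * math.log10((v_rms ** 2) / (q ** 2 / 12))

    v_rms, q = 1.0, 0.01              # 1 V rms signal, 10 mV quantization interval
    print(round(sqr_db(v_rms, q), 1))                                   # 50.8 dB
    print(round(10 * math.log10(12) + 20 * math.log10(v_rms / q), 1))   # same result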
Solved problems
1. For a PCM system with a maximum audio input frequency of
4kHz,determine the minimum sample rate and the alias frequency
produced if a 5kHz audio signal were allowed to enter the sample and
hold circuit.
Solution
Given that f a = 4 kHz
Audio signal frequency = 5 kHz
Using Nyquist sampling theorem,
Sampling rate f s ≥ 2 f a ; since f a = 4kHz
Therefore,f s = 8kHz
Alias frequency
= f s − ( Audio signal frequency in sample & hold circuit )
= 8 kHz − 5kHz
= 3kHz

2. For PCM coding scheme the analog voltage is +1.8 V, resolution 1V


then determine the quantized voltage, quantization error (Q).
Solution
Given that Analog sample voltage = 1.8 V
Resolution = 1 V
Quantization level = (Analog sample voltage) / (Resolution)
= +1.8 V / 1 V = 1.8
= 2 (rounded off), i.e., quantized voltage = 2 V
Quantization error, Qe = (Quantized level) - (Original sample voltage)
Qe = 2 - 1.8 = 0.2 V
3. For a PCM system with the following parameters, determine (a) Mini-
mum sampling rate (b) Minimum number of bits used in PCM code
(c) Resolution (d) Quantization error (e) Coding efficiency.

Maximum input frequency = 4kHz


Maximum decoded voltage at the receiver = ± 2.55 V
Minimum dynamic range = 46 dB
Given that: fa = 4 kHz ; DR = 46 dB ; Vmax = 2.55 V

(a) Minimum sample rate:
fs ≥ 2fa
fs = 2 (4 kHz) = 8 kHz

(b) Dynamic range:
DR(dB) = 20 log (Vmax / Vmin)
46 dB = 20 log (Vmax / Vmin)
2.3 = log (Vmax / Vmin)
Vmax / Vmin = 10^2.3 = 199.5, i.e., DR = 199.5
We know that, for the minimum number of bits, 2^n - 1 ≥ DR.
Then the minimum number of bits (n) can be calculated as

n = log (DR + 1) / log 2 = log (199.5 + 1) / log 2 = 7.63

n = 8

Therefore 8 bits must be used for the magnitude; including the sign bit,
totally 9 bits are required.

(c) Resolution:
Resolution = Vmax / (2^n - 1) = 2.55 / (2^8 - 1) = 2.55 / 255 = 0.01 V

(d) Maximum quantization error:
Qe = Resolution / 2 = 0.01 V / 2 = 0.005 V

(e) Coding efficiency:
Coding efficiency = (minimum number of bits, including sign bit) /
(actual number of bits, including sign bit) × 100
= (7.63 + 1) / (8 + 1) × 100 = 8.63 / 9 × 100 = 95.89 %
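The same arithmetic can be reproduced in a few lines of Python. Variable names are illustrative; the printed values differ from the text only in rounding.

    import math

    # Re-working solved problem 3 (fa = 4 kHz, DR = 46 dB, Vmax = 2.55 V).

    fa, dr_db, vmax = 4_000, 46, 2.55

    fs_min = 2 * fa                                   # (a) 8 kHz
    dr = 10 ** (dr_db / 20)                           # (b) about 199.5
    n_exact = math.log2(dr + 1)                       #     about 7.6 magnitude bits
    n = math.ceil(n_exact)                            #     8 magnitude bits (+1 sign bit)
    resolution = vmax / (2 ** n - 1)                  # (c) 0.01 V
    qe_max = resolution / 2                           # (d) 0.005 V
    efficiency = (n_exact + 1) / (n + 1) * 100        # (e) about 96 %

    print(fs_min, round(dr, 1), round(n_exact, 2), n,
          round(resolution, 3), qe_max, round(efficiency, 2))
    # The text rounds the last two intermediate figures to 7.63 bits and 95.89 %.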

3.22 DIFFERENTIAL PULSE CODE MODULATION (DPCM)

In PCM, successive samples are taken in which there is little


difference between the amplitudes of two samples. This necessitates
transmitting several identical PCM codes, which is redundant.

To overcome this redundancy, differential PCM is
introduced. In DPCM, the difference in amplitude between two successive
samples is transmitted rather than the actual sample.

3.22.1 DPCM Transmitter

Figure 3.67 shows a simplified block diagram of DPCM


Transmitter

‰‰ The analog input signal is first band limited to avoid the aliasing effect
and then applied as one input of the differentiator (or) subtractor. The
previous sample value, obtained from the integrator, is applied as the
other input to the differentiator. Initially the integrator output is zero
(assumed).

‰‰ The differentiator compares the two inputs and produces the
difference signal, which is still analog; this difference is PCM encoded
and transmitted.

‰‰ The sampler produce the PAM signal and then the analog to digital
converter produce the parallel binary bits.

‰‰ These parallel binary bits are converted into serial PCM and then
transmitted through the channel.

‰‰ In DPCM the number of bits required to represent the PAM signal is


less when compared to PCM because the difference only transmitted
instead of transmitting actual samples.

‰‰ The currently transmitted DPCM value serves as the previous sample
for the next comparison. It is therefore applied as input to the binary
adder, converted to analog by the DAC and then passed through the
integrator.

‰‰ The integrator acts as an LPF and reproduces the previous sample value
in analog form.

‰‰ Now the second sample is applied as one input of differentiator and


another input is the integrator output.

‰‰ The same process is repeated until all the samples are
transmitted.

3.22.2 DPCM receiver

Figure 3.68 Shows the simplified block diagram of a DPCM


-receiver

‰‰ The received serial DPCM- Signal is converted into analog signal by


the serial to parallel converter and Digital to analog converters.

‰‰ Initially the hold circuit output is zero; the adder circuit adds the current
sample and the previous sample value to produce the analog output.

‰‰ The current sample is the received PAM value and the previous sample
is the hold circuit output; the adder circuit output is the analog output,
(i.e.) the original signal is recovered.

‰‰ The function of the DPCM receiver is simply explained as: each received
sample is converted back to analog, stored, and then summed with
the next sample received.
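A minimal numerical sketch of this transmitter/receiver pair is given below. It assumes a first-order predictor (the previously reconstructed sample, i.e., the integrator output) and a simple rounding quantizer for the difference; names and test values are illustrative.

    # Minimal first-order DPCM sketch: the predictor is the previously
    # reconstructed sample, and the difference is quantized by rounding.

    def dpcm_encode(samples, step=1.0):
        codes, prediction = [], 0.0                  # integrator output starts at 0
        for x in samples:
            diff = x - prediction                    # differentiator / subtractor
            q = round(diff / step)                   # quantized difference (integer code)
            codes.append(q)
            prediction += q * step                   # local decoder feeds the predictor
        return codes

    def dpcm_decode(codes, step=1.0):
        out, value = [], 0.0                         # hold circuit starts at 0
        for q in codes:
            value += q * step                        # adder: previous value + difference
            out.append(value)
        return out

    samples = [0.2, 1.1, 2.3, 2.9, 2.4, 1.0]
    codes = dpcm_encode(samples)                     # [0, 1, 1, 1, -1, -1]
    print(codes, dpcm_decode(codes))                 # reconstructed: 0, 1, 2, 3, 2, 1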

3.23 Delta Modulation

Delta modulation uses a single bit PCM code to achieve digital
transmission of analog signals.

With Conventional PCM, each code is a binary representation


of both the sign and the magnitude of a particular sample. Therefore,
multiple-bit codes are required to represent the many values that the
sample can be.

With delta modulation, rather than transmit a coded


representation of the sample, only a single bit is transmitted, which
simply indicates whether that sample is larger (or) smaller than the
previous sample.

The algorithm for a delta modulation system is quite simple. If
the current sample is smaller than the previous sample, a logic 0 is
transmitted; if the current sample is larger than the previous sample, a
logic 1 is transmitted.

3.23.1 Delta Modulation transmitter

Figure 3.69 shows the block diagram of delta modulation


transmitter.

• Analog input is sampled and converted to a PAM-signal, which is


compared with the output of DAC. The output of DAC is a voltage
equal to the regenerated magnitude of the previous sample, which
was stored in the up-down counter as a binary number.

• The up-down counter is incremented (or) decremented depending on


whether the previous sample is Larger (or) smaller than the current
sample.

• The up-down counter is clocked at a rate equal to the sample rate.


Therefore, the up-down counter is updated after each comparison

• Initially, the up-down counter is at zero and the DAC is outputting 0 V.
The first sample is taken, converted to a PAM signal, and compared with
zero volts.

• The output of the comparator is a logic 1 condition (+V),


indicating that the current sample is larger in amplitude than the
previous sample.

• On the next clock pulse, the up-down counter is incremented to a


count of 1. The DAC now outputs a voltage equal to the magnitude of
the minimum step size

• The steps change value at a rate equal to the clock frequency (sample
rate)

• The up/down counter is incremented by 1 step size for each clock
until the DAC output exceeds the analog sample. Once the DAC output
exceeds the sample amplitude, the up-down counter will begin counting
down until the output of the DAC drops below the sample amplitude.

• Figure 3.70 shows the ideal operation of a delta modulator encoder.


In the idealized situation, the DAC output follows the input signal.

• Each time the up-down counter is incremented, a logic 1 is


transmitted, and each time the up-down counter is decremented, a
logic 0 is transmitted.

Figure 3.70 Delta Modulation output
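The encoder behaviour described above can be sketched in a few lines of Python, with the up-down counter and DAC collapsed into a single accumulator; the step size and test sequence are illustrative assumptions.

    # Minimal sketch of a linear delta modulator: one bit per sample, comparing
    # the input with a staircase approximation that moves by a fixed step.

    def delta_modulate(samples, step=0.5):
        bits, approx = [], 0.0                       # DAC / up-down counter output
        for x in samples:
            bit = 1 if x > approx else 0             # comparator decision
            approx += step if bit else -step         # counter up (+d) or down (-d)
            bits.append(bit)
        return bits

    def delta_demodulate(bits, step=0.5):
        approx, out = 0.0, []
        for bit in bits:
            approx += step if bit else -step         # identical accumulator at receiver
            out.append(approx)                       # an LPF would smooth this staircase
        return out

    x = [0.1, 0.6, 1.2, 1.5, 1.4, 0.9, 0.3]
    bits = delta_modulate(x)                         # [1, 1, 1, 0, 1, 0, 0]
    print(bits, delta_demodulate(bits))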



3.23.2 Delta- Modulation Receiver

Figure 3.71 shows the block diagram of a delta modulation re-


ceiver. The receiver is almost identical to the transmitter except for the
comparator.

• As the logic 1s and 0s are received, the up/down counter is
incremented or decremented accordingly. Consequently the output of the
DAC in the decoder is identical to the output of the DAC in the transmitter.
The output of the LPF is the original transmitted signal.

Advantage of Delta Modulation over Conventional PCM

With Delta Modulation, each sample requires the transmission of


only one bit, therefore the bit rates associated with delta modulation are
lower than conventional PCM systems.

There are two problems associated with delta modulation that do


not occur with conventional PCM,

i. Slope overload noise

ii. Granular noise

Slope overload noise

Slope over load noise occurs when the step size (d) is too small
compared to large variation in the input signal. Figure 3.72 below shows
the distortion in delta modulation.
• Consider the slope of the analog signal x(t) and the slope of the
approximated (staircase) signal x'(t).
• Due to the small step size (d), the slope of the approximated signal x'(t)
will be small; the maximum slope of x'(t) is

slope of x'(t) = d / Ts = d·fs
• If the slope of the analog signal x (t) is much higher than that of x’(t)
over a long duration than x’(t) will not be able to follow x(t), at all.
• The difference between x(t) and x’(t) is called as the slope overload
distortion.
• Thus the slope overload error occurs when slope of x(t) is much larger
than slope of x’(t).
• Slope overload error can be reduced by,
i. Increasing the step size of ‘d’ or increasing slope of the
approximated signal x’(t).
ii. Increasing the sampling frequency ‘fs’
However with increase in ‘d’, the granular noise increases and if fs
is increased, signalling rate and bandwidth requirement are increased.
One of the best way to reduce slope overload error is to detect
the overload condition and increase the step size when overloading is
detected.

Granular noise

• Granular noise occurs when the step size (d) is too large compared to
small variations in the input signal. That is for very small variations
in the input signal, the staircase signal is changed by large amount
‘d’ because of large step size.

• When the input signal is almost flat, the staircase signal x’(t) keeps
on oscillating by ± d around the signal.

• The error between the input and approximated staircase signal is


called granular noise. To reduce the granular noise the step size
should be as small as possible.

• However this will increase the slope overload distortion.

3.24 ADAPTIVE DELTA MODULATOR (ADM)

¾¾ In the linear delta modulator the step size 'd' is fixed. If it is made
variable, then the slope overload distortion and granular noise can
both be controlled.

¾¾ A delta modulator in which the step size is adapted according to the
level of the input signal is known as an "adaptive delta modulator".

¾¾ There are various types of ADM- system available depending on the


type of scheme used for adjusting the step size.

¾¾ In one type a discrete set of value is provided for the step size whereas
in another type a continuous range of step size variation is provided.

3.24.1 Adaptive Delta modulation Transmitter

Figure 3.73 shows the simplified block diagram of Adaptive delta


modulation.

If you compare this block diagram with that of the linear delta
modulator, you will find that, except for the counter being replaced by
the digital processor, the remaining blocks are identical.

Operation

• In response to the Kth clock pulse, the processor generates a step
which is equal in magnitude to the step generated in response to the
previous (K-1)th clock pulse.

• If the direction of both the steps is same than the processor will
increase the magnitude of the present step by ‘d’. If the directions
are opposite then the processor will decrease the magnitude of the
present step size by ‘d’

• The output of the ADM system is given as,

S0(K) = +1 if x(t) > x'(t) just before the Kth clock pulse

and, S0(K) = -1 if x(t) < x'(t) just before the Kth clock pulse



Where

x(t) = analog input signal

x'(t) = staircase approximation of x(t)

• Then the step size at the sampling instant 'K' is given by

d(K) = d(K-1)·S0(K) + d·S0(K-1) ...(1)

Where

d(K) = Step size at Kth clock pulse

d(k-1) = Step size at (K-1)th clock pulse

S0 (K) = Output at Kth clock pulse

S0(K-1) = Output at (K-1)th clock pulse

‘d’ = Basic step size

For example

Let K = 6, consider the 6th clock pulse

K-1 = 5, d(K-1) = d(5) = d

S0 (K) = S0 (6) = +1 , S0 (K-1) = S0 (5) = +1

Substitute these values in equation (1)

\ d (6) = d (5) S0 (6) + d S0 (5)

= d(1) + d(1) = 2 d

Due to the adjustable step size, the slope overload and
granular noise problems are reduced. The ADM system also has a lower
bit rate than the PCM system; therefore, the bandwidth required is less
than that of a comparable PCM system.
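Equation (1) can be exercised directly in code. The sketch below assumes the initial step and initial output bit shown in the comments; these initial conditions and the function name are illustrative.

    # Sketch of the adaptive step-size rule of equation (1):
    #     d(K) = d(K-1) * S0(K) + d * S0(K-1)
    # where S0 is the +/-1 output and d is the basic step size.

    def adm_encode(samples, d=0.2):
        bits, approx = [], 0.0
        prev_step, prev_s0 = d, +1                    # assumed initial conditions
        for x in samples:
            s0 = +1 if x > approx else -1             # comparator output S0(K)
            step = abs(prev_step) * s0 + d * prev_s0  # equation (1)
            approx += step                            # staircase approximation x'(t)
            bits.append(s0)
            prev_step, prev_s0 = step, s0
        return bits

    print(adm_encode([0.1, 0.5, 1.2, 2.2, 2.0, 0.5]))   # [1, 1, 1, 1, -1, -1]

Tracing the loop shows the step magnitude growing by d while the input keeps rising (same-direction bits) and shrinking by d when the direction reverses, exactly as described above.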

3.24.2 ADM- Receiver

Figure 3.74 shows the block diagram of ADM receiver

• The ADM-Signal is first converted into a DM-Signal with the help of


step size control logic and then applied to DM-receiver.

• (i.e.) If the received ADM signal is logic 1, the step size is increased by
'd'; otherwise, for logic 0, the step size is decreased by 'd'. The
corresponding analog output is generated by the DAC.

• At the output of low pass filter we get the original signal back.
3.25 Comparison of various pulse communication systems
Parameter-wise comparison of PAM, PWM and PPM:

1. Variable characteristic of the carrier pulse:
   PAM - amplitude of the carrier pulse is varied by the modulating voltage;
   PWM - width of the carrier pulse is varied by the modulating voltage;
   PPM - position of the carrier pulse is varied by the modulating voltage.
2. Bandwidth of the transmission channel:
   PAM - depends on the width of the pulse, BW ≥ 1/(2t), where t = maximum width of the pulse;
   PWM - depends on the rise time of the pulse, BW ≥ 1/(2tr), where tr = rise time of the pulse;
   PPM - depends on the rise time of the pulse, BW ≥ 1/(2tr).
3. Noise interference: PAM - maximum; PWM - minimum; PPM - minimum.
4. Information is contained in: PAM - amplitude variations; PWM - width variations; PPM - position variations.
5. Need for a synchronization pulse: PAM - not necessary; PWM - not necessary; PPM - necessary.
6. Complexity of generation and detection: PAM - complex; PWM - simple; PPM - simple.
7. Transmitted power: PAM - varies with the amplitude of the pulse; PWM - varies with the width of the pulse; PPM - constant.
8. Similarity with other modulation systems: PAM - amplitude modulation; PWM - frequency modulation; PPM - phase modulation.
9. Output waveform: see figure.
3.26 Comparison of Source Coding methods

Parameter-wise comparison of PCM, Delta Modulation (DM), ADM and DPCM:

1. Number of bits: PCM - can use 4, 8 or 16 bits per sample; DM - uses only one bit per sample; ADM - only one bit is used to encode one sample; DPCM - more than one bit, but fewer than PCM.
2. Levels and step size: PCM - the number of levels depends on the number of bits, and the step size is fixed; DM - step size is fixed and cannot be varied; ADM - step size varies (is adapted) according to the signal variation; DPCM - a fixed number of levels is used.
3. Quantization error and distortion: PCM - quantization error depends on the number of levels used; DM - slope overload distortion and granular noise are present; ADM - quantization error is present but the other errors are absent; DPCM - slope overload distortion and quantization noise are present.
4. Bandwidth of the transmission channel: PCM - highest bandwidth is required since the number of bits is high; DM - lowest bandwidth is required; ADM - lowest bandwidth is required; DPCM - bandwidth required is lower than PCM.
5. Feedback: PCM - no feedback in transmitter or receiver; DM - feedback exists; ADM - feedback exists; DPCM - feedback exists in the transmitter.
6. Complexity of implementation: PCM - system is complex; DM - simple; ADM - simple; DPCM - simple.
7. Signal to noise ratio: PCM - good; DM - poor; ADM - better than DM; DPCM - fair.
8. Areas of application: PCM - audio and video telephony; DM - speech and images; ADM - speech and images; DPCM - speech and video.

SOLVED TWO MARKS


PART - A
1. What is Pulse Amplitude modulation?

PAM is the pulse amplitude modulation. In pulse


amplitude modulation, the amplitude of a carrier consisting of
a periodic train of rectangular pulses is varied in proportion to
sample values of a message signal.

2. Define Pulse Position modulation?

The position of a constant-width pulse within a prescribed time


slot is varied according to the amplitude of the sample of the
analog signal. This is known as pulse position modulation (PPM).

3. Define Pulse Width modulation?

The width of a constant-amplitude pulse is varied proportional


to the amplitude of the analog signal at the time the signal is
sampled. This is known as pulse width modulation. PWM is also
called as pulse duration modulation (PDM) or pulse length mod-
ulation (PLM).

4. What is meant by PCM?

Pulse code modulation (PCM) is a method of signal coding in


which the message signal is sampled, the amplitude of each
sample is rounded off to the nearest one of a finite set of
discrete levels and encoded so that both time and amplitude
are represented in discrete form. This allows the message to be
transmitted by means of a digital waveform.

5. Define PCM. List out the advantages and disadvantages of PCM.


It is a type of pulse modulation technique where the information
is transmitted in the form of code words. The essential
operations in PCM transmitter are sampling, quantizing and

encoding.
Advantages:
(i) High noise immunity.,
(ii) Private and secured communication is possible through the
use of encryption.
Disadvantages:
(i) Increased transmission bandwidth.
(ii) Increased system complexity.

6. What are the advantages of digital transmission?

(i) Channel coding is used, the errors can be detected and


corrected in the receivers
(ii) Because of the advances in digital IC technologies and high
speed computers, digital communication systems are simpler
and cheaper compared to analog systems.

7. Draw PWM and PPM waveforms

8. What is meant by DPCM?

In PCM, successive samples are taken in which there is


little difference between the amplitudes of two samples. This
necessitates transmitting several identical PCM codes, which is
redundant.
In DPCM, the difference in the amplitude of two successive

samples is transmitted rather than actual sample.


9. A PCM system uses sampling frequency of 16 k samples/s. Then,
find out the maximum frequency of the signal up to which the
signal can be perfectly reconstructed.
A continuous time signal can be completely represented by its
samples and recovered back if the sampling frequency is at least twice
the highest frequency content of the signal, i.e., fs ≥ 2fm. Here fs = 16 k
samples/sec, so fm ≤ fs/2 = 8 kHz.

10. Define Nyquist sampling theorem.

When the sampling rate becomes exactly equal to


2fm samples/sec for a signal bandwidth fm. Hence it is called
Nyquist rate.

It is the minimum sampling rate required to represent the con-


tinuous signal faithfully in its sampled form
fs = 2fm samples/sec

11. Find the capacity of a channel having 50 kHz bandwidth and


produces SNR of 1023 at the output.

Given
S/N = 1023
B = 50 kHz

Solution
Rmax = B log2 (1 + S/N) = 50000 × log2 (1 + 1023)
= 50000 × log2 (1024) = 50000 × 10
= 500000 bits/sec
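The same result can be checked with a short computation (a minimal sketch):

    import math

    # Shannon-Hartley capacity: C = B * log2(1 + S/N)
    def channel_capacity(bandwidth_hz, snr_linear):
        return bandwidth_hz * math.log2(1 + snr_linear)

    print(channel_capacity(50_000, 1023))     # 500000.0 bits/s, since log2(1024) = 10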
12. A PCM system uses sampling frequency of 16 k samples/s.
Then, find out the maximum frequency of the signal upto
which the signal can be perfectly reconstructed.
A continuous time signal can be completely represented in its
samples and received back .If the sampling frequency is twice of
the highest frequency content of the signal. ie.,
fs ≥ 2 fm
Here,
fs = sampling frequency = 16 k samples/sec

fm = maximum frequency of the continuous signal
16 × 10^3 ≥ 2fm
fm ≤ 8 × 10^3 Hz = 8 kHz

13. Determine the Nyquist rate for analog input frequency of (a)
4 kHz (b) 10 kHz
Solution
Nyquist rate = 2 fm
Where fm - Signal frequency
(a) Nyquist rate = 2 x 4 = 8 kHz
(b) Nyquist rate = 2 x 10 = 20 kHz

14. What is meant by fading?
Fading is the decrease in signal strength that occurs as the signal
propagates through the channel.

15. What is aliasing? What is the effect of aliasing?


The phenomenon of a high-frequency in the spectrum of the
original signal g(t) seemingly taking on the identity of a lower
frequency in the spectrum of the sampled signal g(t) is called
aliasing or fold over.
Due to aliasing, the output of the reconstruction filter
depends on both the amplitude and phase components of the
original spectrum G(f), making an exact analysis of the output
difficult and resulting in distortion.

16. Define quantizing process.

The conversion of an analog sample of the signal into discrete (digital)
form is called the quantizing process. Graphically, the quantizing process
means that a straight line representing the relation between the input
and the output of a linear continuous system is replaced by a staircase
characteristic.

17. Define quantization error?

Quantization error is the difference between the output and input
values of the quantizer.

18. What is nyquist rate?

The minimum sampling rate of 2W sample per second for a


signal bandwidth of W hertz is called the nyquist rate.

19. How many Hamming bits are required for an ASCII character
'D'

Given
For the character 'D' the ASCII code (from the table) is
1 0 0 0 1 0 0
m = 7
Formula: 2^r ≥ m + r + 1 (r = number of redundant bits)
Let r = 1: 2^1 ≥ 7 + 1 + 1 → 2 ≥ 9 (false)
Let r = 2: 2^2 ≥ 7 + 2 + 1 → 4 ≥ 10 (false)
Let r = 3: 2^3 ≥ 7 + 3 + 1 → 8 ≥ 11 (false)
Let r = 4: 2^4 ≥ 7 + 4 + 1 → 16 ≥ 12 (true)
Hence the number of Hamming bits is r = 4
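The trial-and-error search above is equivalent to the following short sketch (the function name is illustrative):

    # Smallest r with 2**r >= m + r + 1 (Hamming redundancy condition).
    def hamming_bits(m):
        r = 1
        while 2 ** r < m + r + 1:
            r += 1
        return r

    print(hamming_bits(7))    # 4, as computed above for the 7-bit ASCII character 'D'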

20. Calculate odd and even parity bits for the EBCDIC character
'G'.

The Hex value for the character 'G' is C7 (from the EBCDIC
table), whose binary equivalent is,
C 7
1100 0111 (8bits)

The parity bit at the MSB position is given as p 1100 0111.
For odd parity: 0 1100 0111
For even parity: 1 1100 0111

21.
Calculate the odd and even parity bits for ASCII character
W.
Solution:
The Hex value for the character W is 57 (from the ASCII table),
whose binary equivalent is
5 7
101 0111 (7 bits)
During transmission, p101 0111 gets transmitted. If odd parity
is used, the value of p will be 0, since the number of 1's in 101 0111
is 5 (odd). For even parity, the value of p will be 1, i.e., to make an even
number of 1's in 101 0111.

Odd parity: 0 101 0111          Even parity: 1 101 0111
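Both parity examples can be verified with a short sketch (names are illustrative):

    # Compute the parity bit to prepend to a bit string.
    def parity_bit(bits, even=True):
        ones = bits.count('1')
        if even:                                   # even parity: total 1s must be even
            return '0' if ones % 2 == 0 else '1'
        return '1' if ones % 2 == 0 else '0'       # odd parity: total 1s must be odd

    word = '1010111'                               # ASCII 'W' = 57H
    print(parity_bit(word, even=False) + word)     # odd parity  -> 01010111
    print(parity_bit(word, even=True) + word)      # even parity -> 11010111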

22. Define error detection and correction

Error-Detecting Codes: which is used to determine when an


error has occurred or not.
Error-Correcting Codes: It includes some extraneous
information which helps the receiver to determine when an error
has occurred and which bit is in error.

23. What is data terminal equipment? Give examples.

DTE can be virtually any binary digital device that generates,


transmits, receives, or interprets data messages.
DTEs contain the hardware and software necessary to
establish and control communications between endpoints in a
data communications system.
Examples of DTEs: video display terminals, printers, and
personal computers.

24. What is forward error correction?


It is the type of error correction scheme, where the errors are
detected and corrected without retransmission but by adding the
redundant bit to the message before transmission commences.

25. What is the need for error control coding?

Need for Error Control:


• When digital signals are transmitted from one place to
another, transmission errors will occur, it can be caused
by electrical interference from natural sources, such as
lightning as well as from man-made sources, such as
motors, generators, power lines and fluorescent lights.
• To maintain the quality of the signals transmitted, error
control mechanism has to be implemented.

26. Mention the difference between line encoding and channel


encoding.

In channel encoding, the quantized digital sequence is converted
into digital code words.

In line encoding, the digital data is converted into waveforms.

27. Mention any two error control codes.

Vertical Redundancy Check (VRC) and


Cyclic Redundancy Check (CRC)

28. Define sampling theorem.

A continuous time signal can be completely represented in its


sample and received back. If the sampling frequency is twice of
the highest frequency content of the signal.
fs ≥ 2 fm
Where fs = sampling frequency
fm = modulating frequency

REVIEW QUESTIONS
PART – A
1. What do you mean by non-linear encoding in PCM system?
2. What is the advantage of differential PCM?
3. What are the types of data transmission?
4. Mention the usage of Scrambler and Descrambler
5. Differentiate between error detection and correction
6. Find the minimum sampling frequency for a signal having frequency
components from 10 MHz to 10.2 MHz, in order to avoid aliasing.
7. What are the types of pulse modulation systems?
8. List the methods for error correction.
9. What is pulse stuffing?
10. What is meant by fading?
11. Mention any two error control codes.
12. Define sampling theorem.
13. Determine the Nyquist rate for analog input frequency of
(a) 4KHz (b) 10 KHz.
14. List any two data communication standard organization.
15. What is the need for error control coding?
16. What is meant by differential pulse code modulation?
17. What are the advantages of digital transmission?
18. What is data terminal equipment? Give examples.
19. What is forward error correction?
20. What is meant by ASCII code?
21. Which error detection technique is simple and which one is more
reliable?
22. Give some of the alternative names for data communication codes.
23. What are the two types of noises present in Delta modulation
system?
24. Explain why the quantization noise cannot be removed
completely in PCM. How do you reduce this noise?

PART – B
1. With the neat block diagram, explain the concept of UART
transceiver operation.
2. What are the parallel interfaces? Describe in detail about
centronics parallel interfaces.
3. With block diagram explain the PCM transmitter and receiver.
4. Describe delta modulation system. What are its limitations? How
can they be overcome?
5. (i). Explain any two data communication codes presently used
for character encoding.
(ii). Give brief notes on error detection.
6. With neat block diagram explain the data communication
hardware.
7. Define PWM and explain one method of generating PWM.
8. Describe the processing steps to converts a k bit message word
to n-bit code word (n>k). Introduce a error and demonstrate how
a error can be corrected with an example.
9. Draw the block diagram and explain the principle of operation
of a PCM system. A binary channel with bit rate =36000 bits/
sec is available for . PCM voice transmission. Find number of
bits per sample, number of quantization levels and sampling fre-
quency assuming highest frequency component of voice signal is
3.2 KHz.
10. (i) Write a note on data communication codes.
(ii) Explain serial and parallel interfaces in detail.
11. Explain in detail about error detection and correction.
12. Explain the standard organization for data communication.
13. Describe the mechanical, electrical and functional
characteristics of the RS-232 interface.
14. Draw the block diagram and describe the operation of a del-
ta modulator. What are its advantages and disadvantages
compared to a PCM system?
15. Draw the transmitter and receiver block diagram of differential
PCM and describe its operation.
16. The PCM system has the following parameters: maximum analog
input frequency is 4 kHz, maximum decoded voltage at the receiver
is ±2.55 V, minimum dynamic range is 46 dB. Determine
(i) Minimum sample rate (ii) Minimum number of bits used in the
PCM code (iii) Resolution and (iv) Quantization error.
17. (i) Draw the block diagram of typical DPCM system an explain.
(ii) In a binary PCM system, the output signal to quantization
noise ratio is to be held to a minimum of 40 dB. Determine the
number of required levels, and find the corresponding out signal
to quantization noise ratio.
Entropy, Source encoding theorem, Shannon-Fano coding,
Huffman coding, mutual information, channel capacity, channel coding
theorem, Error Control Coding, linear block codes, cyclic codes,
convolution codes, Viterbi decoding algorithm.

SOURCE CODING AND ERROR CONTROL CODING (Unit 4)
4.1 INTRODUCTION TO INFORMATION THEORY

ˆˆ Information theory allows us to determine the information
content in a message signal, leading to different source coding
techniques for efficient transmission of messages.
ˆˆ Information theory is used for mathematical modelling and
analysis of communication systems.
ˆˆ With information theory and this modelling of communication
systems, the following two main questions are resolved:
i. the irreducible complexity below which a signal cannot
be compressed;
ii. the transmission rate required for reliable communication
over a noisy channel.
ˆˆ In this chapter the concept of information entropy, channel
capacity, information rate etc., and source coding techniques
are discussed.
Discrete information source
ˆˆ A discrete information source has only a finite set of symbols;
this set is called the source alphabet, and the elements of the set
are called symbols or letters.
ˆˆ A Discrete Memory less Source (DMS) can be characterized by
the list of symbols, the probability assigned to these symbols
and the specification of the rate of generating these symbols by
the source.
Uncertainty
Information is related to the probability of occurrence of event.
More is the uncertainty, more is the information associated with it.
The following example related to uncertainty (or) surprise.
Example
1. Sun rises in the east
Here the uncertainty is zero because there is no surprise in the
statement; the probability of occurrence of the sun rising in the east is
always 1.
2. Sun does not rise in the east
Here the uncertainty is high, because the statement would carry
maximum information: the event is practically impossible.

4.1.1 Definition of Information (Measure of Information)

Consider a communication system which transmits
messages m1, m2, ... with probabilities P1, P2, ... The amount of information
carried by the message mk with probability Pk is given as,
Amount of information, Ik = log2(1/Pk)
Properties of information
1. If there is more uncertainty about the message, the information carried
is also more.
2. If the receiver knows the message being transmitted, the amount of
information carried is zero.
3. If I1 is the information carried by message m1 and I2 is the information
carried by message m2, then the amount of information carried jointly by
m1 and m2 is I1 + I2.
4. If there are M = 2^N equally likely messages, then the amount of
information carried by each message will be N bits.
(These properties are illustrated in the short sketch below.)
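As a quick numerical check of these properties, the following minimal Python sketch (added for illustration only; the probabilities are assumed values) evaluates Ik = log2(1/Pk) and verifies that the information of two independent messages adds:

    from math import log2

    def information_content(p):
        # Amount of information (in bits) carried by a message of probability p
        return log2(1 / p)

    p1, p2 = 1/4, 1/2
    print(information_content(p1))        # 2.0 bits: less likely -> more information
    print(information_content(p1 * p2))   # 3.0 bits = I1 + I2 for independent messages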

4.1.2 Concept of Amount of Information


Let us assume a communication system in which the allowable
messages are m1, m2, ... with probabilities of occurrence P1, P2, ... Let the
transmitter select the message mk of probability Pk.
ˆˆ Assume that the receiver has correctly identified the message. Then
by the definition of the term information,the system has conveyed an
amount of information Ik given by
IK=log21/Pk
ˆˆ The concept of amount of information is also essential to
examine with some care the suggestion of the above equation. It can
be first noted that while Ik is an entirely dimensionless number,by
convention,the unit assigned is the bit.
ˆˆ Therefore by an example,if Pk=1/4,Ik= log24 = 2 bits. The unit bit is
employed principally as a reminder that in the above equation the
base of the logarithm is 2. (When the natural logarithmic base is used,
the unit is the nat, and when the base is 10, the unit is the Hartley or
decit. The use of such unit in the present case is analogous to unit
radian used in angle measure and decibel used in connection with
power ratios.)
ˆˆ The use of base 2 is especially convenient when binary PCM is
employed ,If the 2 possible binary digit(bits) may occur with equal
likelihood,each with a probability 1/2,then the correct identification
of the binary digit conveys an amount of informational I=log22=1 bit.
ˆˆ In the past, the term bit was used as an abbreviation for the phrase
binary digit. When there is uncertainty whether the word bit is
intended as the unit of information or as an abbreviation for binary digit,
the binary digit is referred to as a binit.
ˆˆ Assume there are M equally likely and independent messages with
M = 2^N, where N is an integer. In this case the information in each
message is
I = log2 M = log2 2^N = N log2 2 = N bits
ˆˆ To identify each message by binary PCM code word ,the number of
binary digits required for the each of the 2N message is also N.Hence
in this case the information in each message,as measured in bits, is
numerically the same as the number of binits needed to encode the
messages.
ˆˆ When Pk = 1, only one possible message is allowed. In this instance,
since the receiver knows the message, there is really no need for
transmission. We find that Ik = log2 1 = 0. As Pk decreases from 1
to 0, Ik increases monotonically, going from 0 to infinity. Therefore, a
greater amount of information has been conveyed when the receiver
correctly identifies a less likely message.
ˆˆ When two independent messages mk and ml are correctly identified, we
can readily prove that the amount of information conveyed is the sum
of the information associated with each of the messages individually.
The individual information amounts are
Ik = log2(1/Pk)
Il = log2(1/Pl)
• As the messages are independent, the probability of the composite
message is Pk Pl, and the corresponding information content of the
composite message is

I = log2[1/(Pk Pl)] = log2(1/Pk) + log2(1/Pl) = Ik + Il


Problem 1
A source produces one of the four possible symbols
during each interval having probabilities p1=1/2,P2=1/4,P3=P 4=1/8.
obtain the information content of each of these symbols.

Solution
We know that the information content of each symbol is given as,
Ik = log2(1/Pk)
Thus we can write
I1 = log2(1/p1) = log2(1/(1/2)) = log2(2) = 1 bit
I2 = log2(1/p2) = log2(1/(1/4)) = log2(4) = 2 bits
I3 = log2(1/p3) = log2(1/(1/8)) = log2(8) = 3 bits
I4 = log2(1/p4) = log2(1/(1/8)) = log2(8) = 3 bits

Problem 2
Calculate the amount of information,if it is given that pk=1/2.

Solution
The amount of information is
Ik = log2(1/pk) = log10(1/pk) / log10(2)
With pk = 1/2,
Ik = log10(2)/log10(2) = 1 bit
or, directly,
Ik = log2(1/(1/2)) = log2(2) = 1 bit
Problem 3
Calculate the amount of information ,if binary digits occur with
equal likelihood in a binary pcm system.

Solution
We know that in binary PCM there are two binary levels, i.e., 1 or 0.
Therefore the probabilities are
P1(level 0) = P2(level 1) = 1/2
Hence the amount of information carried is
I1 = log2(1/(1/2)) = log2(2) = log10(2)/log10(2) = 1 bit
I2 = log2(1/(1/2)) = log2(2) = log10(2)/log10(2) = 1 bit
I1 = I2 = 1 bit

Thus,the correct identification of the binary digits in binary PCM
carries 1 bit of information

4.2 ENTROPY OR AVERAGE INFORMATION


• In a practical communication system, entropy is defined as the average
information per message. It is denoted by ‘H’ and its unit is bits/
message.
• Entropy must be as high as possible in order to ensure maximum
transfer of information.
• For a quantitative representation of the average information per
symbol, we make the following assumptions:
i. The source is stationary, so that the probabilities remain
constant with time.
ii. Successive symbols are statistically independent and come from
the source at an average rate of r symbols per second.
Expression for entropy


• Consider that we have M different messages.
• Let these message be m1m2m3,...mM and their probabilities p1p2p3....
pM respectively.
• Suppose that a sequence of L message is transmitted,Then if L is very
very large then we may say that,
P1 L messages of m1 are transmitted.
P2 L messages of m2 are transmitted
PmL messages of mM are transmitted.
• The information-due to message m1 will be,

 1 
I 1 = log 2  
 p1 
• Since ,there are P1 L number of ml,the total information due to all
message of ml will be,
 1 
I 1 (total ) = P1L log 2  
 p1 

• Similarly, the total information due to all messages of m2 will be,

I2(total) = P2 L log2(1/p2), and so on.

• Therefore ,the total information carried due to sequence of L mes-


sages will be,
I (total ) = I 1(total ) + I 2(total ) + ...Im(total )
I (total ) = p1L log 2 (1 / p1 ) + P2L log 2 (1 / p2 ) + .... + PM L log 2 (1 / pM ) ... (1)

• The average information per message will be,


Total information
Average information =
Number of messages
I (total )
=
L
• The average information is represented by entropy, which is
represented by H. Thus,

I (total )
Entropy,H=
L
From equation (1), dividing by L, we can rewrite the above equation as,

Entropy, H = p1 log2(1/p1) + p2 log2(1/p2) + ... + pM log2(1/pM)    ... (2)

We can write the above equation using the Σ (summation) sign as follows:

Entropy, H = Σ_{k=1}^{M} pk log2(1/pk)
or
H = − Σ_{k=1}^{M} pk log2 pk
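A short Python sketch of this formula follows (illustrative only; the probabilities used in the example are assumed):

    from math import log2

    def entropy(probabilities):
        # H = sum of p * log2(1/p) over all messages, in bits/message
        return sum(p * log2(1 / p) for p in probabilities if p > 0)

    print(entropy([0.5, 0.25, 0.125, 0.125]))   # 1.75 bits/message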

4.2.1 Properties of Entropy


1. Entropy (H) is zero if the event is sure or impossible,
i.e., H = 0 if pk = 0 or 1.
2. When pk = 1/M for all M symbols, the symbols are equally
likely. For such a source the entropy is given by,
H = log2 M
3. The upper bound on entropy is given by,
Hmax ≤ log2 M
These above properties can be proved as.
Property 1
Calculate entropy when pK=0 and when pk=1
Proof
We know that

M
 1 
H = ∑ pk log 2  
k =1  pk 
• When pk = 1, the above equation becomes,
H = Σ pk log2(1/pk) = 1 · log2(1) = log10(1)/log10(2)
  = 0    [since log10 1 = 0]

• Next consider the second case,when pk=0,Instead of putting
pk=0,directly,Let us consider the limiting case,(i.e.;)
M
 1 
H = ∑P K log 2  
K =1  pk 

• With pk tends to zero,the above equation will be,


M
1
H = ∑ lim P k log 2  
K =1
Pk →0
 Pk 

• The right hand side of the above equation will be zero when pk → 0.
Hence the entropy will be zero, i.e.,
H = 0
Therefore, entropy is zero both for a certain message and for an impossible message.
Property 2
When pk = 1/M, all M symbols are equally likely. For such a
source the entropy is given by H = log2 M.
Proof
We know that the probability of M number of equally likely
messages is
1
P =
M
• This probability is same for all M messages,(i.e.,)

1
P1 = P2 = P3 = ...PM = ... (1)
M

• Entropy is given by,


M
 1 
H = ∑P K log 2  
K =1  pk 
 1   1   1 
= p1 log 2   + P2 log 2   + ...PM log 2  
 p1   p2   pM 
Putting the probabilities from equation (1), we get,

1 1 1
H = log 2 M + log 2 M + ... log 2 M
M M M

• In the above equation,there are M number of terms in summation.


Hence after adding these terms above equation becomes,
H=log2M
Property 3
The upper bound on entropy is given as Hmax ≤ log2 M. Here ‘M’ is the
number of messages emitted by the source.

Proof
• To prove the above property, the following property of the natural
logarithm is used:

ln x ≤ x − 1  for x > 0    ...(1)

• Let us consider any two probability distribution{p1,p2,...pm} and {q1,q2,...


qm} on the alphabet X={x1,x2,...xM} of the discrete memory less source.

• Consider the term


M
q 
∑P k log 2  k  .It can be written as,
K =1  Pk 

 qk 
log10  
 qk  M  Pk 
M

∑ Pk log 2  ∑ k
= P
log102
K =1  Pk  K =1

Multiply the RHS by log10e and rearrange terms as follows;

q 
log10  k 
q  log10 e  Pk 
M M

∑ pk log 2  k  = ∑ Pk 2
K =1  Pk  K =1 log10 log10 e
M
q 
= ∑ Pk log 2 e log e  k 
K =1  Pk 

 qk   qk 
Here log e   = In   Hence above equation becomes,
 Pk   Pk 

M
q  M
q 
∑P k log 2  k  = log 2 e ∑ Pk In  k 
K =1  Pk  K =1  Pk 
• Form the equation(1) we can write

 qk   qk 
In  ≤ − 1
 pk   pk 

• Therefore above equation becomes,


M
 qk  M
 qk 
∑ pk log 2 
pk
 ≤ log 2
e
∑p k  − 1
K =1   K =1  pk 
M
≤ log 2e ∑ (q
K =1
k − pk )

M M

≤ log 2e  ∑ qk − ∑ pk 
 K =1 K =1 
M M
• Note that ∑q
k =1
k = 1 as well as ∑P
k =1
k =1
• Hence above equation becomes,
M
q 
∑P k log 2  k  ≤ 0
K =1  pk 

Now consider that qk = 1/M for all k; that is, all symbols in the alphabet
are equally likely.
Then above equation becomes,

Σ_{k=1}^{M} Pk [log2 qk + log2(1/Pk)] ≤ 0
∴ Σ_{k=1}^{M} Pk log2 qk + Σ_{k=1}^{M} Pk log2(1/Pk) ≤ 0
∴ Σ_{k=1}^{M} Pk log2(1/Pk) ≤ − Σ_{k=1}^{M} Pk log2 qk
                           = Σ_{k=1}^{M} Pk log2(1/qk)

Replace qk = 1/M in the above equation:
Σ_{k=1}^{M} Pk log2(1/Pk) ≤ Σ_{k=1}^{M} Pk log2 M = log2 M Σ_{k=1}^{M} Pk

We know that Σ_{k=1}^{M} Pk = 1, hence the above equation becomes,

Σ_{k=1}^{M} Pk log2(1/Pk) ≤ log2 M

The L H S of above equation is entropy H(X) with arbitrary probability


distribution.(i.e.,)

H ( X ) ≤ log 2 M

Hence proved. The maximum value of entropy is,

H max ( X ) = log 2 M

Problem 1
In binary PCM, if ‘0’ occurs with probability 1/4 and ‘1’ occurs with
probability 3/4, calculate the amount of information
carried by each binit.

Solution

Here,given that binary ‘0’ has P(x1)=1/4


And binary ‘1’ has P(x2)=3/4

Then the amount of information is given as,


I(xi) = log2[1/P(xi)]
With P(x1) = 1/4,
I(x1) = log2(4) = log10(4)/log10(2) = 2 bits
With P(x2) = 3/4,
I(x2) = log2(4/3) = log10(4/3)/log10(2) = 0.415 bits

Here it may be observed that binary ‘0’ has probability 1/4 and it
carries 2 bits of information, whereas binary ‘1’ has probability 3/4 and it
carries 0.415 bits of information.
Thus, this reveals the fact that if the probability of occurrence is less, then
the information carried is more, and vice versa.

Problem 2
If there are M equally likely and independent symbols, then prove
that the amount of information carried by each symbol will be
I(xi) = N bits, where M = 2^N and N is an integer.

Solution
Since it is given that all the M symbols are equally likely and
independent, the probability of occurrence of each symbol must be
1/M.
We know that amount of information is given as,
1
I ( x i ) = log 2 ... (1)
P ( xi )
Here,Probability of each message is,

1
P ( xi ) =
M
Hence ,equation(1) will be,

I ( x i ) = log 2 M ... ( 2)

Further, we know that M = 2^N; hence equation (2)
becomes

I(xi) = log2 2^N = N log2 2 = N bits    [since log2 2 = 1]

Hence, the amount of information carried by each symbol will be ‘N’
bits. We also know that M = 2^N means that there are ‘N’ binary digits
(binits) in each symbol.
This indicates that when the symbols are equally likely and coded with
an equal number of binary digits (binits), the information carried by
each symbol (measured in bits) is numerically the same as the number of
binits used for each symbol.

Problem 3
Prove the statement stated as under “if a receiver knows the
message being transmitted,the amount of information carried will be
zero”.
Solution
Here it is stated that the receiver “knows” the message. This means
that only one message is transmitted. Thus, the probability of occurrence of
this message will be P(xi) = 1, because there is only one message and its
occurrence is certain (the probability of a certain event is ‘1’). The amount of
information carried by this type of message will be,
I(xi) = log2[1/P(xi)] = log10(1)/log10(2)
Substituting P(xi) = 1,
I(xi) = 0 bits
This proves the statement that if the receiver knows the message, the
amount of information carried will be zero.
Also, as P(xi) is decreased from 1 to 0, I(xi) increases monotonically
from 0 to infinity. This shows that the amount of information conveyed
is greater when the receiver correctly identifies a less likely message.

Problem 4
Verify the following expression:

I(xi xj) = I(xi) + I(xj), if xi and xj are independent.

Solution
If xi and xj are independent, then we know that

P(xi xj) = P(xi) P(xj)
Also, I(xi xj) = log2[1/P(xi xj)]
             = log2[1/(P(xi) P(xj))]
             = log2[1/P(xi)] + log2[1/P(xj)]
I(xi xj) = I(xi) + I(xj)

Problem 5
A discrete source emits one of five symbols once every millisecond
with probabilities 1/2,1/4,1/8,1/16 and 1/16 respectively. Determine
the source entropy and information rate

Solution
We know that the source entropy is given as
m
1
H ( x ) = ∑ P ( X i ) log 2
i =1 P ( xi )
H(X) = Σ_{i=1}^{5} P(xi) log2[1/P(xi)] bits/symbol
(or) H(X) = (1/2) log2 2 + (1/4) log2 4 + (1/8) log2 8 + (1/16) log2 16 + (1/16) log2 16
(or) H(X) = 1/2 + 1/2 + 3/8 + 1/4 + 1/4 = 15/8
(or) H(X) = 1.875 bits/symbol
The symbol rate r = 1/Tb = 1/10^-3 = 1000 symbols/sec.
Therefore, the information rate is expressed as

R = r H(X) = 1000 × 1.875 = 1875 bits/sec
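The same result can be checked numerically with a small Python sketch (added for illustration; values taken from this problem):

    from math import log2

    probs = [1/2, 1/4, 1/8, 1/16, 1/16]
    H = sum(p * log2(1 / p) for p in probs)   # entropy in bits/symbol
    r = 1 / 1e-3                              # one symbol every millisecond
    print(H, r * H)                           # 1.875 bits/symbol, 1875.0 bits/sec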

Problem 6
The probabilities of the five possible outcomes of an experiment
are given as
1 1 1 1
P ( x1 ) = , P ( x 2 ) = , P ( x 3 ) = , P ( x 4 ) = P ( x 5 ) =
2 4 8 16
Determine the entropy and the information rate if there are 16 outcomes
per second.
Solution
The entropy of the system is given as

5
1
H ( X ) = ∑ P ( x i ) log 2 bits / symbol
i =1 P ( xi )
1 1 1 1 1 15
(or ) H ( X ) = log 2 2 + log 2 4 + log 2 8 + log 2 16 + log 2 16 =
2 4 8 16 16 8
H ( X ) = 1.875bits / outcome

Now, rate of information r=16 outcomes/sec.


Therefore,rate of information R will be

R = rH ( X ) = 16 × (15 / 8 ) = 30bits /sec


Problem 7
An analog signal is band limited to fm Hz and sampled at the Nyquist
rate. The samples are quantized into four levels. Each level represents
one symbol, so there are four symbols. The probabilities of these
four levels (symbols) are P(x1) = P(x4) = 1/8 and P(x2) = P(x3) = 3/8. Obtain the
information rate of the source.
Solution
We are given four symbols with probabilities P(x1) = P(x4) = 1/8 and
P(x2) = P(x3) = 3/8. The average information H(X) (or entropy) is expressed as,

1 1 1 1
H ( X ) = P ( x1 ) log 2 + P ( x 2 ) log 2 + P ( x 3 ) log 2 + P ( x 4 ) log 2
P ( x1 ) P ( x2 ) P ( x3 ) P (x4 )

Substituting all the given values we get


1 3 8 3 8 1
H (X ) = log 2 8 + log 2 + log 2 + log 2 8
8 8 3 8 3 8
(or ) H ( X ) = 1.8bits / symbols
It is given that the signal is sampled at Nyquist rate for fm Hz band
limited signal is,
Nyquist rate =2 fm samples/sec
Since every sample generated one source symbol,
Therefore,symbols per second,r=2 fm symbols/sec
Information rate is given by : R=r H(x)
Putting values of r and H(X) in this equation,we get
R=2 fm symbols/sec X 1.8 bits/symbols
=3.6 fm bits/sec.

In this example there are four levels. Those four levels may be
coded using binary PCM as show in Table 6.1
Symbol or
S.No Probability Binary digits
level
1 Q1 1/8 00
2 Q2 3/8 01
3 Q3 3/8 10
4 Q4 1/8 11
Table 6.1
Hence, two binary digits (binits) are required to send each
symbol, and symbols are sent at the rate of 2fm symbols/sec. Therefore, the
transmission rate of binary digits will be: binit rate = 2 binits/symbol × 2fm
symbols/sec = 4fm binits/sec. Because one binit is capable of conveying
one bit of information, the above coding scheme is capable of
conveying 4fm bits of information per second. But in this example we
have obtained that we are transmitting only 3.6fm bits of information per
second. This means that the information carrying ability of binary PCM
is not completely utilized by this transmission scheme.
4.3 SOURCE CODING TO INCREASE AVERAGE INFORMATION
PER BIT

4.3.1 Source coding theorem(Shannon’s first theorem)


• Coding offers the most significant application of the information
theory
• The main purpose of the coding is to improve the efficiency of the
communication system.
• Coding is a procedure for mapping a given set of messages
{m1, m2, ..., mN} into a new set of encoded messages {c1, c2, ...,
cN} in such a way that the mapping is one to one, i.e., for each
message there is only one encoded message. This is called “source
coding”.
• The device which performs source coding is called a source encoder.
• The main problem of the coding technique is the development of an
efficient source encoder.
The primary requirements are:
1. The code words produced by the source encoder should be
binary in nature.
2. The source code should be unique in nature: every code
word should represent a unique message.


Let there be L messages emitted by the source.
The probability of the kth message is pk and the number of bits
assigned to this message is nk. Then the average number of bits (N̄) in
the code words of the messages is given by,

N̄ = Σ_{k=0}^{L-1} pk nk    ... (1)

Let N̄min be the minimum possible value of N̄. Then the coding efficiency of the
source encoder is defined as,

η = N̄min / N̄    ... (2)

• If the coding efficiency (η) approaches unity, then the source encoder
is said to be efficient.
• In other words, N̄min ≤ N̄, and the coding efficiency is maximum when
N̄min ≈ N̄.
• The value of N̄min can be determined with the help of Shannon’s first
theorem, called the source coding theorem.
Statement
Given a discrete memoryless source of entropy H, the average
code word length N̄ for any distortionless source encoding is bounded
as,
N̄ ≥ H    ...(3)
Note:
i. The entropy H indicates the fundamental limit on the average
number of bits per symbol (N̄); this limit says that the average
number of bits per symbol cannot be made smaller than the
entropy H.
ii. Hence N̄min = H, and we can write the efficiency of the source encoder
from equation (2) as,

η = H/N̄    ....(4)

4.3.2 Code Redundancy


• It is a measure of the redundancy of bits in the encoded message sequence.
It is given by,
Redundancy, γ = 1 − code efficiency
             = 1 − η    ... (5)

NOTE: Redundancy should be as low as possible.
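A small Python sketch tying these definitions together (the probabilities and code-word lengths below are assumed purely for illustration):

    from math import log2

    def code_statistics(probs, lengths):
        # Average code-word length, coding efficiency and redundancy
        H = sum(p * log2(1 / p) for p in probs)
        N_bar = sum(p * n for p, n in zip(probs, lengths))
        efficiency = H / N_bar
        return N_bar, efficiency, 1 - efficiency

    # Example source and an assumed set of code-word lengths
    print(code_statistics([0.4, 0.2, 0.2, 0.1, 0.1], [2, 2, 2, 3, 3]))
    # -> (2.2, 0.9645..., 0.0354...)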

4.4 DATA COMPACTION

For efficient signal transmission, the redundant information should
be removed from the signal prior to transmission. This operation, performed
with no loss of information, is ordinarily carried out on the signal in digital
form. It is referred to as data compaction or lossless data compression.
• Basically, data compaction is achieved by assigning short
descriptions to the most frequent outcomes of the source output and
longer descriptions to the less frequent outcomes.
• The various source coding schemes for data compaction are:
(i) Prefix coding.
(ii) Shannon-Fano coding.
(iii) Huffman coding.
(iv) Lempel-Ziv coding.
• There are two variable-length coding algorithms which are used
to increase the efficiency of the source encoder. They are
1. The Shannon-Fano algorithm
2. Huffman coding.

4.5 SHANNON-FANO CODING

Need
(i) If the probabilities of occurrence of all the messages are not
equally likely, then the average information or entropy is reduced,
and as a result the information rate is reduced.
(ii) This problem can be solved by coding the messages with
different numbers of bits.
NOTE
(i) Shannon-Fano coding is used to encode the messages
depending upon their probabilities.
(ii) This algorithm assigns fewer bits to a highly probable
message and more bits to rarely occurring messages.

Procedure

An efficient code can be obtained by the following simple procedure,
known as the Shannon-Fano coding algorithm.

Step 1: List the source symbols in order of decreasing probability.

Step 2: Partition the set into two subsets that are as close to equiprobable
as possible; assign 0 to the upper set and 1 to the lower
set.

Step 3: Continue this process, each time partitioning the sets with
probabilities as nearly equal as possible, until further partitioning is not
possible.
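A minimal recursive Python sketch of this procedure is given below (illustrative only). The partition rule used, splitting where the upper group's total probability is closest to half, is one reasonable reading of Step 2; different tie-breaking choices lead to the equally efficient alternative codes shown as Methods I and II in Problem 1.

    def shannon_fano(symbols):
        # symbols: list of (symbol, probability); returns {symbol: code string}
        codes = {s: "" for s, _ in symbols}

        def split(group):
            if len(group) <= 1:
                return
            total = sum(p for _, p in group)
            running, best_i, best_diff = 0.0, 1, float("inf")
            for i, (_, p) in enumerate(group[:-1], start=1):
                running += p
                diff = abs(2 * running - total)   # distance from an equal split
                if diff < best_diff:
                    best_i, best_diff = i, diff
            upper, lower = group[:best_i], group[best_i:]
            for s, _ in upper:
                codes[s] += "0"                   # Step 2: 0 to the upper set
            for s, _ in lower:
                codes[s] += "1"                   # and 1 to the lower set
            split(upper)                          # Step 3: repeat on each subset
            split(lower)

        split(sorted(symbols, key=lambda sp: sp[1], reverse=True))  # Step 1
        return codes

    print(shannon_fano([("x1", 0.4), ("x2", 0.2), ("x3", 0.2), ("x4", 0.1), ("x5", 0.1)]))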

Problem 1

A discrete memory less source has symbols x1,x2,x3,x4,x5 with


probabilities of 0.4,0.2,0.1,0.2,0.1 respectively. Construct a Shannon
Fano code for the source and calculate code efficiency ‘ η ’.

Solution

Step 1:Arrange the given probabilities in descending order.

Given Probabilities

P1=0.4,P2=0.2,P3=0.1,P4=0.2,P5=0.1.

Probabilities in descending order

Symbols Probabilities
x1 0.4
x2 0.2
x3 0.2
x4 0.1
x5 0.1
Step 2:

The initial partitioning can be done in two ways.(i.e) we can split


as equiprobable in two methods.

Method 1:

Sym- Prob- Stage Stage Stage 3 Codeword No of bits per


bol ability 1 2 message(nk)
x1 0.4 0 0 00 2
x2 0.2 0 1 01 2
x3 0.2 1 0 10 2
x4 0.1 1 1 0 110 3
x5 0.1 1 1 1 111 3

N̄ = Σ_{k=1}^{L} Pk nk = Σ_{k=1}^{5} Pk nk

= P1n1 + P2n 2 + P3n 3 + P4n 4 + P5n 5


= ( 0.4 × 2) + ( 0.2 × 2) + ( 0.1 × 3 ) + ( 0.2 × 2) + ( 0.1 × 3 )
= 0.8 + 0.4 + 0.3 + 0.4 + 0.3
= 2.2 bits/symbol.

Method II
No of
Symbol Probability Stage 1 Stage Stage Code bits per
2 3 word message
(nk)
x1 0.4 0 0 1
x2 0.2 1 0 0 100 3
x3 0.2 1 0 1 101 3
x4 0.1 1 1 0 110 3
x5 0.1 1 1 1 111 3

Table 6.3
5
N = ∑P n
K =1
k k

= P1n1 + P2n 2 + P3n 3 + P4n 4 + P5n 5


= ( 0.4 × 1) + ( 0.2 × 3 ) + ( 0.1 × 3 ) + ( 0.2 × 3 ) + ( 0.1 × 3 )
= 0.4 + 0.6 + 0.3 + 0.6 + 0.3
= 2.2 bits/symbol.

Step 3: Entropy of source (H)

M
1
H = ∑ Pk log 2  
K =1  Pk 
5
1
= ∑ Pk log 2  
K =1  Pk 
1 1 1 1 1
= P1 log 2   + p2 log 2   + P3 log 2   + P4 log 2   + P5 log 2  
 P1   p2   P3   P4   P5 
 1   1   1   1   1 
= 0.4 log 2   + 0.2 log 2   + 0.1log 2   + 0.2 log 2   + 0.1 log 2  
 0.4   0.2   0.1   0.2   0.1 
 1   1   1   1   1 
log10   log10   log10   log10   log10  
= 0.4  0.4  + 0.2  0.2  + 0.1  0.1  + 0.2  0.2  + 0.1  0.1 
log10 2 log10 2 log10 2 log10 2 log10 2
= ( 0.4 × 1.3219 ) + ( 0.2 × 2.3219 ) + ( 0.1 × 3.3219 ) + ( 0.2 × 2.3219 ) + ( 0.1 × 3.3219 )
= 0.52876 + 0.46439 + 0.33219 + 0.46439 + 0.33219
= 2.12192 bits/ssymbol.

(iii) Efficiency
η = H/N̄ = 2.12192/2.2 = 0.9645
η = 96.45 %
4.6 HUFFMAN CODING


• Huffman coding assigns different numbers of binary digits to the
messages according to their probabilities of occurrence.
• With Huffman coding, one binary digit carries almost one bit
of information, which is the maximum information that can be
conveyed by one digit.
Procedure
Step 1:The messages are arranged in an order of decreasing
probabilities for example x3 and x4 have lowest probabilities and
hence they are put at the bottom in the column of stage -1.
Step 2:The two messages of lowest probabilities are assigned binary ‘0’
and’1’.
Step 3:The two lowest probabilities in stage I are added and the sum
is placed in stage II,such that probabilities are in descending
order.
Step 4: Now last two probabilities are assigned 0 and 1 and they are
added. The sum of last two probabilities placed in stage III such
that probabilities are in descending order. Again’0’ and’1’ is
assigned to the last two probabilities
Step 5: This process is continued till the last stage contains only two
values. These two values are assigned digits 0 and 1 and no further
repetition is required. This results in the construction of a tree known
as the Huffman tree.

Step 6: Start encoding with the last stage, which consists of exactly two
ordered probabilities. Assign 0 as the first digit in the code words
for all the source symbols associated with the first probability, and assign 1
to the second probability.

Step 7: Now go back and assign 0 and 1 to the second digit for the two
probabilities that were combined in the previous stage, retaining
all assignments made in that stage.

Step 7: Now go back and assign 0 and 1 to the second digit for the two
probabilities that were combined in the previous step retaining
all assignments made in that stage.

Step 8: Keep regressing this way until first column is reached
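A compact Python sketch of this procedure using a priority queue is shown below (illustrative only; the actual 0/1 assignments may differ from the tables that follow because of tie-breaking, but the code-word lengths and the average length are the same).

    import heapq

    def huffman(symbols):
        # symbols: dict {symbol: probability}; returns {symbol: code string}
        heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(symbols.items())]
        heapq.heapify(heap)
        counter = len(heap)                       # tie-breaker so dicts are never compared
        while len(heap) > 1:
            p1, _, codes1 = heapq.heappop(heap)   # take the two lowest probabilities
            p2, _, codes2 = heapq.heappop(heap)
            for s in codes1:
                codes1[s] = "0" + codes1[s]       # prefix 0 to one group ...
            for s in codes2:
                codes2[s] = "1" + codes2[s]       # ... and 1 to the other (Steps 6-7)
            heapq.heappush(heap, (p1 + p2, counter, {**codes1, **codes2}))
            counter += 1
        return heap[0][2]

    print(huffman({"x1": 0.30, "x2": 0.25, "x3": 0.20, "x4": 0.12, "x5": 0.08, "x6": 0.05}))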


Problem 1
A discrete memoryless source has 6 symbols x1, x2, x3, x4, x5, x6
with probabilities 0.30, 0.25, 0.20, 0.12, 0.08, 0.05 respectively. Construct
a Huffman code, calculate its efficiency, and also calculate the redundancy of
the code.
Solution
From the assignments made at each stage of the Huffman tree, we can write the code
words for the respective probabilities as follows.

Stage I Stage
Xi Stage II Stage IV Stage V
P(xi) III
x1 0.30 0.30 0.30 0.45 0.55
x2 0.25 0.25 0.25 0.30 0.45
x3 0.20 0.20 0.25 0.25
x4 0.12 0.13 0.20
x5 0.08 0.12
x6 0.05

Number of
Message Probability Code word
bits nk
x1 0.3 00 2
x2 0.25 01 2
x3 0.2 11 2
x4 0.12 101 3
x5 0.08 1000 4
x6 0.05 1001 4

Table 6.5
(iii) To find the efficiency η, we have to calculate the average code word length (N̄)
and the entropy (H).
M
N = ∑P n
K =1
k k where nk is code word
6
= ∑P n
K =1
k k

= P1n1 + P2n 2 + P3n 3 + P4n 4 + P5n 5 + P6n 6


= 0.30 × 2 + 0.25 × 2 + 0.20 × 2 + 0.12 × 3 + 0.08 × 4 + 0.05 × 4
= 0.60 + 0.50 + 0.40 + 0.36 + 0.32 + 0.20
= 2.38 bits/symbol
Entropy

M
1
H = ∑ Pk log 2  
K =1  Pk 
6
1
= ∑ Pk log 2  
K =1  Pk 
1 1 1 1 1 1
= P1 log 2   + P2 log 2   + P3 log 2   + P4 log 2   + P5 log 2   + P6 log 2  
 P1   P2   P3   P4   P5   P6 
 1   1   1   1   1   1 
0.30 log 2   + 0.25 log 2   + 0.20 log 2   + 0.12 log 2   + 0.08 log 2   + 0.05 log 2  
 0.30   0.25   0.20   0.12   0.08   0.05 
log 0.30 log 0.25 log 0.20 log 0.12 log 0.08 log 0.05
0.30 10 + 0.25 10 + 0.20 10 + 0.12 10 + 0.08 10 + 0.05 10
log10 2 log10 2 log10 2 log10 2 log10 2 log10 2
= 0.521 + 0.5 + 0.4643 + 0.367 + 0.2915 + 0.216
= 2.3598 bits of information/message

To obtain the code efficiency (η)

η = H/N̄ = 2.3598/2.38 = 0.99
η = 99 %
Redundancy of the code (γ)

γ = 1 − η = 1 − 0.99
  = 0.01

Problem 2

A discrete memoryless source X has four symbols x1, x2, x3 and x4
with probabilities P(x1) = 1/2, P(x2) = 1/4, P(x3) = P(x4) = 1/8. Construct a
Shannon-Fano code and show that it has the optimum property that ni = I(xi)
and that the code efficiency is 100 %.
Solution
Given
P(x1) = 1/2, P(x2) = 1/4, P(x3) = P(x4) = 1/8, and ni = I(xi).
We know that I(xi) = log2[1/P(xi)]. Therefore,
I(x1) = log2[1/(1/2)] = log2(2) = log10(2)/log10(2) = 1
I(x2) = log2[1/(1/4)] = log2(4) = 2
I(x3) = log2[1/(1/8)] = log2(8) = 3
I(x4) = log2[1/(1/8)] = log2(8) = 3
Symbol Probability Stage 1 Stage 2 Stage 3 Code No of bits per


word message(nk)
x1 1/2 0 0 1
x2 1/4 1 0 10 2
x3 1/8 1 1 0 110 3
x4 1/8 1 1 1 111 3
We know that,
entropy,
4  1 
H ( X ) = ∑ P ( x i ) log 2   (or )
i =1  P ( xi ) 
4
= ∑ P ( xi ) I ( xi )
i =1

= P ( x1 ) I ( x1 ) + P ( x 2 ) I ( x 2 ) + P ( x 3 ) I ( x 3 ) + P ( x 4 ) I ( x 4 )
1  1  1  1 
=  × 1 +  × 2  +  × 3  +  × 3 
2  4  8  8 
1 1 3 3
= + + +
2 2 8 8
= 1.75 bits/message

Average code word length(N)

M M
N = ∑ P n (or )∑ P ( x ) n
K =1
k k
i =1
i i

= P ( x1 ) n1 + P ( x 2 ) n 2 + P ( x 3 ) n 3 + P ( x 4 ) n 4
1  1  1  1 
=  × 1 +  × 2  +  × 3  +  × 3 
 2   4   8   8 
= 1.75 bits/syymbol
code efficiency
H (X ) 1.75
η= = =1
N 1.75
η = 100 %
Problem 3
A DMS has five equally likely symbols. Construct a Shannon-Fano
code for X and calculate the efficiency of the code. Construct another
Shannon-Fano code and compare the results. Repeat for the Huffman
code and compare the results.

Solution

(i)A Shannon fano code[by choosing two approximately equi-
probable (0.4 versus 0.6) sets] is constructed as follows

Symbol  Probability  Stage 1  Stage 2  Stage 3  Code word  No of bits per message (nk)
x1 0.2 0 0 00 2
x2 0.2 0 1 01 2
x3 0.2 1 0 10 2
x4 0.2 1 1 0 110 3
x5 0.2 1 1 1 111 3

Entropy

5  1 
H ( X ) = ∑ P ( x i ) log 2  
i =1  P ( xi ) 
Here all five probabilitie es are same(i.e.,) 0.2 so we can write,
 1 
H ( X ) = 5 × P ( x i ) log 2  
 P ( xi ) 
 1 
= 5 × 0.2 × log 2  
 0.2 
 1 
0.2 log10  
= 5×  0.2 
log10 ( 2)
H ( X ) = 2.32 bits/message
Average code word length N ( )


5
N = ∑ Pk nk
k =1

= ( 0.2 × 2) + ( 0.2 × 2) + ( 0.2 × 2) + ( 0.2 × 3 ) + ( 0.2 × 3 )


= 0.4 + 0.4 + 0.4 + 0.6 + 0.6
= 2.4 bits/symbol.
H (X )
coding efficiency η=
N
2.32
= = 0.967
2.4
η = 96.7 %

(ii) Another method for Shannon fano code[by choosing another two
approximately equiprobable (0.6 versus 0.4) sets]is constructed as
follows
Stage Stage Stage Cord No of bits per
Symbol Probability
1 2 3 word message (nk)
x1 0.2 0 0 00 2
x2 0.2 0 1 0 010 3
x3 0.2 0 1 1 011 3
x4 0.2 1 0 10 2
x5 0.2 1 1 11 2

The entropy H(X) is same as in the previous method,(i.e.,)

5  1 
H ( X ) = ∑ P ( x i ) log 2  
i =1  P ( xi ) 
= 2.32 bits/message
Average code word length N ( )


5 5
N = ∑ P n (or ) ∑ P ( x ) n
K =1
k k
i =1
i i

= P ( x1 ) n1 + P ( x 2 ) n 2 + P ( x 3 ) n 3 + P ( x 4 ) n 4 + P ( x 5 ) n 5
= ( 0.2 × 2) + ( 0.2 × 3 ) + ( 0.2 × 3 ) + ( 0.2 × 2) + ( 0.2 × 2)
= 0.4 + 0.6 + 0.6 + 0.4 + 0.4
= 2.4 bits/symbol
Coding efficiency (η)
η = H(X)/N̄ = 2.32/2.4 = 0.967
η = 96.7 %

Since, average code word length is same as that for the code of
part(i), the efficiency is same.
(iii)The huffman code is constructed as follows

Stage 1
xi Stage II Stage III Stage IV
P(xi)
x1 0.2 (01) 0.4 (1) 0.4 (1) 0.6 (0)
x2 0.2 (000) 0.2 (01) 0.4 (00) 0.4 (1)
x3 0.2 (001) 0.2 (000) 0.2 (01)
x4 0.2 (10) 0.2 (001)
x5 0.2 (11)

Symbol Probability Code word Length


x1 0.2 01 2
x2 0.2 000 3
x3 0.2 001 3
x4 0.2 10 2
x5 0.2 11 2

The average code word length


M 5
N = ∑ P n (or ) N = ∑ P ( x ) n
K =1
k k
i =1
i i

= P ( x1 ) n1 + P ( x 2 ) n 2 + P ( x 3 ) n 3 + P ( x 4 ) n 4 + P ( x 5 ) n 5
Here all probabilities have the same value (0.2), so
N̄ = 0.2 × [n1 + n2 + n3 + n4 + n5]
  = 0.2 × [2 + 3 + 3 + 2 + 2]
  = 0.2 × 12
  = 2.4 bits/symbol
The entropy and efficiency are also the same as for the Shannon-Fano code,
since the code word lengths are the same.
Entropy
5  1 
H ( X ) = ∑ P ( x i ) log 2  
i =1  P ( xi ) 
 1   1   1 
= P ( x1 ) log 2   + P ( x 2 ) log 2   + P ( x 3 ) log 2  
 P ( x1 )   P ( x2 )   P ( x3 ) 
 1   1 
+P ( x 4 ) log 2   + P ( x 5 ) log 2  
 P (x4 )   P ( x5 ) 
Here all five probabilities have the same value 0.2, so we can write,
H(X) = 5 × P(x1) log2[1/P(x1)]
     = 5 × 0.2 log2(1/0.2)
     = 5 × 0.2 × log10(1/0.2)/log10(2)
     = 2.32 bits/message
Coding efficiency (η)
η = H(X)/N̄ = 2.32/2.4 = 0.967
η = 96.7 %

Problem 4
A discrete memoryless source (DMS) has five symbols x1, x2, x3, x4
and x5 with P(x1) = 0.4, P(x2) = 0.19, P(x3) = 0.16, P(x4) = 0.15, P(x5) = 0.1.

(i) Construct the Shannon-Fano code for X and calculate the efficiency of the
code.
(ii) Repeat for the Huffman code and compare the results.
Solution
(i) The Shannon-Fano code is constructed as follows.

Symbol  Probability  Stage 1  Stage 2  Stage 3  Code word  No of bits per message (nk)
x1 0.4 0 0 00 2
x2 0.19 0 1 01 2
x3 0.16 1 0 10 2
x4 0.15 1 1 0 110 3
x5 0.1 1 1 1 111 3

Entropy

5  1 
H ( X ) = ∑ P ( x i ) log 2  
i =1  P ( xi ) 
 1   1   1 
= P ( x1 ) log 2   + P ( x 2 ) log 2   + P ( x 3 ) log 2  
 P ( x1 )   P ( x2 )   P ( x3 ) 
 1   1 
+P ( x 4 ) log 2   + P ( x 5 ) log 2  
 P (x4 )   P ( x5 ) 
 1   1   1 
= 0.4 log 2   + 0.19 log 2   + 0.16 log 2  
 0.4   0.19   0.16 
 1   1 
+0.15 log 2   + 0.1 log 2  
 0 .15   0.1 
H(X) = 2.15 bits/symbol
The average code word length is
N̄ = (0.4 × 2) + (0.19 × 2) + (0.16 × 2) + (0.15 × 3) + (0.1 × 3) = 2.25 bits/symbol
Code efficiency (η)
η = H(X)/N̄ = 2.15/2.25 = 0.956
η = 95.6 %

(ii) Huffman code


Huffman code is constructed as follows
Stage I
Xi Stage II Stage III Stage IV
P(xi)
x1 0.4 (1) 0.4 (1) 0.4 (1) 0.6 (0)
x2 0.19 (000) 0.25 (01) 0.35 (00) 0.4 (1)
x3 0.16 (001) 0.19 (000) 0.25 (01)
x4 0.15 (010) 0.16 (001)
x5 0.1 (011)

Entropy H(X)
Entropy H(X) of Huffman code is same as that for the Shannon-
Fano code.

5  1 
H ( X ) = ∑ P ( x i ) log 2  
i =1  P ( xi ) 
 1   1   1 
= P ( x1 ) log 2   + P ( x 2 ) log 2   + P ( x 3 ) log 2  
 P ( x1 )   P ( x2 )   P ( x3 ) 
 1   1 
+P ( x 4 ) log 2   + P ( x 5 ) log 2  
 P (x4 )   p ( x5 ) 
 1   1   1 
= 0.4 log 2   + 0.19 log 2   + 0.16 log 2  
 0.4   0.19   0.16 
 1   1 
+0.15 log 2   + 0.1 log 2  
 0.15   0.1 
H ( X ) = 2.15 bits/message
Symbol P(X) Code word Length(nk)


x1 0.4 1 1
x2 0.19 000 3
x3 0.16 001 3
x4 0.15 010 3
x5 0.1 011 3

Average code word length N ( )


5
N = ∑ P ( xi ) ni
i =1

= P ( x1 ) n1 + P ( x 2 ) n 2 + P ( x 3 ) n 3 + P ( x 4 ) n 4 + P ( x 5 ) n 5
= ( 0.4 × 1) + ( 0.19 × 3 ) + ( 0.16 × 3 ) + ( 0.15 × 3 ) + ( 0.1 × 3 )
N = 2.2 bits/symbol
Code efficiency ( η)
H 2.15
η= = = 0.977
N 2.2
η = 97.7 %
Construct a Huffman code for a discrete memoryless source emitting
seven symbols x1 to x7 with their probabilities given below, and calculate
the coding efficiency.

x1    x2    x3    x4    x5    x6    x7
0.05  0.15  0.2   0.05  0.15  0.3   0.1

Solution
Arranging the symbols in decreasing order and obtain the
Huffman code as follows
Stage I Stage
Xi Stage III Stage IV Stage V Stage VI
P(Xi) II
x6 0.3 (00) 0.3 (00) 0.3 (00) 0.3 (00) 0.4 (1) 0.6 (0)
x3 0.2 (10) 0.2 (10) 0.2 (10) 0.3 (10) 0.3 (00) 0.4 (1)
x2 0.15 (010) 0.15 (010) 0.2 (11) 0.2 (10) 0.3 (01)
x5 0.15 (011) 0.15 (011) 0.15 (010) 0.2 (11)
x7 0.1 (110) 0.1 (110) 0.15 (011)
x1 0.05 (1110) 0.1 (111)
x4 0.05 (1111)

Message Probability Code Length


x1 0.05 1110 4
x2 0.15 010 3
x3 0.2 10 2
x4 0.05 1111 4
x5 0.15 011 3
x6 0.3 00 2
x7 0.1 110 3
Table 6.15
Average codeword length (N̄)
N̄ = Σ_{i=1}^{7} P(xi) ni
  = P(x1)n1 + P(x2)n2 + P(x3)n3 + P(x4)n4 + P(x5)n5 + P(x6)n6 + P(x7)n7
  = (0.05 × 4) + (0.15 × 3) + (0.2 × 2) + (0.05 × 4) + (0.15 × 3) + (0.3 × 2) + (0.1 × 3)
N̄ = 2.6 bits/symbol
Entropy H(X)
7  1 
H ( X ) = ∑ P ( x i ) log 2  
i =1  P ( xi ) 
 1   1   1   1 
= 0.05 log 2   + 0.15 log 2   + 0.2 log 2   + 0.05 log 2  
 0.05   0.15   0.2   0.05 
 1   1   1 
+0.15 log 2   + 0.3 log 2   + 0.1 log 2  
 0.15   0.3   0.1 
H ( X ) = 2.57 bits/message
Coding efficiency ( η )
H 2.57
η= = = 0.9885
N 2.6
η = 98.85 %

Problem 7
A discrete memoryless source has the alphabet given below.
Compute two different Huffman codes for this source, and for each of
the two codes find
(i) the average code-word length, and
(ii) the variance of the code-word length over the
ensemble of source symbols.

Symbol       S0    S1    S2    S3    S4
Probability  0.55  0.15  0.15  0.10  0.05

Solution
The two different Huffman codes are obtained by placing the com-
bined probability as high as possible or as low as possible.
1. Placing combined probability as high as possible

Stage Stage Stage


Symbol Stage III
I P(xi) II IV
0.55
S0 0.55 (0) 0.55 (0) 0.55 (0)
(0)
0.45
S1 0.15 (100) 0.15 (11) 0.3 (10)
(1)
S2 0.15 (101) 0.15 (100) 0.15 (11)
S3 0.1 (110) 0.15 (101)
S4 0.05 (111)
Symbol  Probability  Code word  nk


s0 0.55 0 1
s1 0.15 100 3
s2 0.15 101 3
s3 0.1 110 3
s4 0.05 111 3
(i) Average code-word length
N̄ = Σ_{k=0}^{4} Pk nk
  = (0.55 × 1) + (0.15 × 3) + (0.15 × 3) + (0.1 × 3) + (0.05 × 3)
  = 1.9 bits/symbol
(ii) Variance of the code
σ² = Σ_{k=0}^{4} Pk (nk − N̄)²
   = 0.55(1 − 1.9)² + 0.15(3 − 1.9)² + 0.15(3 − 1.9)² + 0.1(3 − 1.9)² + 0.05(3 − 1.9)²
   = 0.99
2. Placing combined probability as low as possible
Stage I Stage Stage
Symbol Stage IV
P(Xi) II III
s0 0.55 (0) 0.55 (0) 0.55 (0) 0.55 (0)
s1 0.15 (11) 0.15 (11) 0.3 (10) 0.45 (1)
s2 0.15 (100) 0.15 (100) 0.15 (11)
s3 0.1 (1010) 0.15 (101)
s4 0.05 (1011)
(i) Average code-word length
N̄ = Σ_{k=0}^{4} Pk nk
  = (0.55 × 1) + (0.15 × 2) + (0.15 × 3) + (0.1 × 4) + (0.05 × 4)
  = 1.9 bits/symbol
(ii) Variance of the code
σ² = Σ_{k=0}^{4} Pk (nk − N̄)²
   = 0.55(1 − 1.9)² + 0.15(2 − 1.9)² + 0.15(3 − 1.9)² + 0.1(4 − 1.9)² + 0.05(4 − 1.9)²
   = 1.29

Method                 Average code-word length   Variance
As high as possible    1.9                        0.99
As low as possible     1.9                        1.29
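The two placements can be compared numerically; a short Python sketch (using the code-word lengths read off the two tables above, added for illustration):

    probs = [0.55, 0.15, 0.15, 0.10, 0.05]
    lengths_high = [1, 3, 3, 3, 3]   # combined probability placed as high as possible
    lengths_low  = [1, 2, 3, 4, 4]   # combined probability placed as low as possible

    for lengths in (lengths_high, lengths_low):
        N_bar = sum(p * n for p, n in zip(probs, lengths))
        variance = sum(p * (n - N_bar) ** 2 for p, n in zip(probs, lengths))
        print(N_bar, round(variance, 2))
    # -> 1.9 0.99 and 1.9 1.29: same average length, smaller variance for the first placement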

4.7 MUTUAL INFORMATION

The mutual information of a channel is defined as the amount of
information transferred when xi is transmitted and yj is received. It is
represented by I(xi, yj):

I(xi, yj) = log2 [ P(xi/yj) / P(xi) ]  bits    ... (1)

Here I(xi, yj) is the mutual information, P(xi/yj) is the conditional
probability that xi was transmitted given that yj is received, and P(xi) is the
probability of symbol xi being transmitted.
The average mutual information is represented by I(X;Y). It is
measured in bits/symbol, and is defined as the amount of source
information gained per received symbol. It is given by

I(X;Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(xi, yj) I(xi, yj)    ... (2)

Substituting (1) in (2),

I(X;Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(xi, yj) log2 [ P(xi/yj) / P(xi) ]
Properties of Mutual information


(i) The mutual information of a channel is symmetric.

I ( X ;Y ) = I (Y ; X )
(ii) The mutual information can be expressed in terms of the entropies of
the channel input or channel output and the conditional entropies.

I ( X ;Y ) = H ( X ) − H ( X /Y )
I (Y ; X ) = H (Y ) − H (Y / X )
where , H (X /Y ) and H (Y / X ) are conditional entropies.

(iii) The mutual information is always positive.

I ( X ;Y ) ≥ 0

(iv) The mutual information is related to the joint entropy H(X,Y) by


following relation,

I ( X ;Y ) = H ( X ) + H (Y ) − H ( X ,Y )
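Property (iv) gives a convenient way of computing I(X;Y) numerically. A minimal Python sketch follows (the joint probability matrix used is assumed purely for illustration):

    from math import log2

    def mutual_information(joint):
        # joint[i][j] = P(xi, yj); returns I(X;Y) = H(X) + H(Y) - H(X,Y) in bits
        px = [sum(row) for row in joint]
        py = [sum(col) for col in zip(*joint)]
        H = lambda dist: sum(p * log2(1 / p) for p in dist if p > 0)
        return H(px) + H(py) - H([p for row in joint for p in row])

    joint = [[0.4, 0.1],
             [0.1, 0.4]]               # assumed joint distribution of a noisy binary channel
    print(mutual_information(joint))   # about 0.278 bits/symbol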

Property 1
The mutual information of a channel is symmetric.
(i.e.,) I(X;Y)=I(Y;X)
Proof
Let us consider some standard relationships from probability
theory.These are as follows
X 
P ( X i ,Y j ) = P  i  P (Y j ) ... (1)
 Yj 
 
 Yj 
and P ( X i ,Y j ) = P   P ( Xi ) ... ( 2)
 Xi 
From equation (1) and (2) we can write,
X   Yj 
P i
 Yj  P (Y j ) = P   P ( Xi ) ... ( 3 )
   Xi 

Therefore,the average mutual information is given by


 X 
P  i 
m n
  Y j  
I ( X ;Y ) = ∑ ∑ P ( X i ,Y j ) log 2   
i =1 j =1
 P ( Xi ) 
 
 

Hence we can write I(X;Y) as follows;


  Yj  
P  
m n X
I ( X ;Y ) = ∑ ∑ P ( X i ,Y j ) log 2   i  
i =1 j =1
 P (Y ) 
 j

 

By considering equation (3) the above equation can be written as,

 X 
P  i 
m n
  Y j  
I (Y ; X ) = ∑ ∑ P ( X i ,Y j ) log 2   
i =1 j =1
 P ( Xi ) 
 
 
= I ( X ;Y )

Thus the mutual information of the discrete memory less channel is


symmetric
Property 2
I (X;Y)=H (X)-H (X/Y)
I (Y;X)=H (Y)-H (Y/X)
Proof: H(X/Y) is the conditional entropy, and it is given as,

m n
1
H ( X /Y ) = ∑ ∑ P ( X i ,Y j ) log 2 ... (1)
i =1 j =1 P ( X i /Y j )

H(X/Y) is the information or uncertainty in X after Y is received. In
other words, H(X/Y) is the information lost in the noisy channel. It is the
average conditional self-information.

We know that,average mutual information is given as,
 X 
P  i 
m n
  Y j  
I ( X ;Y ) = ∑ ∑ P ( X i ,Y j ) log 2   
i =1 j =1
 P ( Xi ) 
 
 

Let us write the above equation as,

I(X;Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(Xi, Yj) log2 [1/P(Xi)]
         − Σ_{i=1}^{m} Σ_{j=1}^{n} P(Xi, Yj) log2 [1/P(Xi/Yj)]

From equation (1), the above equation can be rewritten as,

I(X;Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(Xi, Yj) log2 [1/P(Xi)] − H(X/Y)    ... (2)

We know that the standard probability relation which is given as follows

∑ P ( X ,Y ) = P ( X )
j =1
i j i

Hence equation (2) will be,


m
1
I ( X ;Y ) = ∑ P ( X i ) log 2 − H ( X /Y ) ... ( 3 )
i =1 P ( Xi )
First term of the above equatiion represents entropy.(i.e.,)
m
1
H ( X ) = ∑ P ( X i ) log 2 ... ( 4 )
i =1 P ( Xi )
Hence equation(3) becomes,
I ( X ;Y ) = H ( X ) − H ( X /Y ) ... ( 5 )

Here note that I(X;Y) is the average information transferred


per symbol across the channel. It is equal to source entropy minus
information lost in the noisy channel is given by above equation.
  Yj  
P  
m n Xi  
I (Y ; X ) = ∑ ∑ P ( X i ,Y j ) log 2  
i =1 j =1
 P (Y ) 
 j

 
m n
1
= ∑ ∑ P ( X i ,Y j ) log 2
j =1 i =1 P (Y j )
m n
1
−∑ ∑ P ( X i ,Y j ) log 2 ... ( 6 )
i =1 j =1 P (Y j / X i )
The conditional entropy H(Y/X) is given as,
m n
1
/X)=∑ ∑ P ( X i ,Y j ) log 2
H(Y/ ... ( 7 )
i =1 j =1 P (Y j / X i )

Here H (Y/X) is the uncertainty in Y when X was transmitted.


With this result, equation (6) becomes,
m n
1
I (Y ; X ) = ∑ ∑ P ( X i ,Y j ) log 2 − H (Y / X ) ... ( 8 )
j =1 i =1 P (Y j )
By using th
he standard probability equation,
m

∑ P ( X ,Y ) = P (Y )
i =1
i j j ... ( 9 )

Hence equation(8) becomes,


n
1
I (Y ; X ) = ∑ P (Y j ) log 2 − H (Y / X )
j =1 P (Y j )

1 n
H (Y ) = ∑ P (Y j ) log 2
We know that j =1 P (Y j )
Hence first term of above equation represents H (Y).Hence above
equation becomes,

I (Y ; X ) = H (Y ) − H (Y / X ) ... (10 )

Property 3
I(X;Y) ≥ 0
Solution
Average mutual information can be written as,
m n  P (X ) 
I ( X ;Y ) = ∑ ∑ P ( X i ,Y j ) log 2  i
 ... (1)
 P ( X i /Y j ) 
i =1 j =1
 
Using Bayes’ rule, P(Xi)/P(Xi/Yj) = P(Xi) P(Yj)/P(Xi, Yj), and using
log2 p = log2 e × ln p (i.e., log2 p = ln p / ln 2), we can write equation (1) as

−I(X;Y) = (1/ln 2) Σ_{i=1}^{m} Σ_{j=1}^{n} P(Xi, Yj) ln [ P(Xi) P(Yj) / P(Xi, Yj) ]    ... (2)
Also we know that
In α ≤ α − 1
There fore we have

1 m n  P ( X i ) P (Y j ) 
-I (Y ; X ) ≤ P (
∑ ∑ i j  P X ,Y
X ,Y )  − 1 
In 2 i =1 j =1
 ( i j ) 

1 m n m n 
−I (Y ; X ) ≤  ∑ ∑ P ( X i ) P (Y j ) − ∑ ∑ P ( X i ,Y j )  ... ( 3 )
In 2  i =1 j =1 i =1 j =1 
Since

m n m n

∑ ∑ P ( X i ) P (Y j ) = ∑ P ( X i )∑ P (Y j ) = (1)(1)
i =1 j =1 i =1 j =1

m n m  n  m

∑ ∑ P ( X ,Y ) = ∑ ∑ P ( X ,Y ) = ∑ P ( X ) = 1
i j i j i
i =1 j =1  j =1 i =1  i =1

Equation (3)) reduces to


-I ( X ;Y ) ≤ 0
I ( X ;Y ) ≥ 0

Hence proved.
Property 4
I(X;Y) =H (X) +H (Y)-H (X,Y)
Solution
We know the relation
H(X/Y) = H(X,Y) − H(Y)    ...(1)

Mutual information is given by
I(X;Y) = H(X) − H(X/Y)    ...(2)
Substituting equation (1) in (2),
I(X;Y) = H(X) + H(Y) − H(X,Y)
Thus the required relation is proved.
Problem 1
Verify the following expression

H (X,Y) =H (X/Y) +H (Y)


Solution
We know that
P(Xi, Yj) = P(Xi/Yj) P(Yj), and
Σ_{i=1}^{m} P(Xi, Yj) = P(Yj)

Also ,we have


H(X,Y) = − Σ_{j=1}^{n} Σ_{i=1}^{m} P(Xi, Yj) log P(Xi, Yj)
       = − Σ_{j=1}^{n} Σ_{i=1}^{m} P(Xi, Yj) log [ P(Xi/Yj) P(Yj) ]
       = − Σ_{j=1}^{n} Σ_{i=1}^{m} P(Xi, Yj) log P(Xi/Yj)
         − Σ_{j=1}^{n} [ Σ_{i=1}^{m} P(Xi, Yj) ] log P(Yj)
       = H(X/Y) − Σ_{j=1}^{n} P(Yj) log P(Yj)
       = H(X/Y) + H(Y)
Hence proved.

4.8 CHANNEL CAPACITY

• The mutual information I(X;Y) represents a measure of the average
information per symbol transmitted in the system.
• A suitable measure for efficiency of transmission of information may
be introduced by comparing the actual rate and the upper bound of
the rate of information transmission for a given channel.
• Shannon has introduced a significant concept of channel capacity
defined as the maximum of mutual information.
• Thus,the channel capacity C is given by
C=Max I(X;Y)=Max[H(X)-H(X/Y)] ...(1)
• I(X;Y) is the difference of two entropies and C is max I (X;Y). Hence,


sometimes the unit of I(X,Y) and C is taken as bits/sec.
• The transmission efficiency or channel efficiency is defined as

η = actual transinformation / maximum transinformation
(or)
η = I(X;Y) / max I(X;Y) = I(X;Y) / C    ... (2)

• The redundancy of the channel is defined as

R = 1 − η = [C − I(X;Y)] / C    ... (3)
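A one-line Python sketch of equations (2) and (3) (the numbers used are assumed purely for illustration):

    def channel_efficiency(I_xy, C):
        # Transmission efficiency and redundancy of a channel
        eta = I_xy / C
        return eta, 1 - eta

    print(channel_efficiency(0.278, 0.5))   # assumed I(X;Y) and C -> (0.556, 0.444)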
4.9 MAXIMUM ENTROPY FOR A CONTINUOUS (GAUSSIAN) CHANNEL
• The probability density function of a Gaussian source is given as

P(x) = (1/(σ√(2π))) e^(−x²/(2σ²))

where σ² = average power of the source.
The maximum entropy is computed as follows:

h(x) = ∫_{−∞}^{∞} P(x) log2 [1/P(x)] dx = − ∫_{−∞}^{∞} P(x) log2 P(x) dx

     = − ∫_{−∞}^{∞} P(x) log2 [ (1/(σ√(2π))) e^(−x²/(2σ²)) ] dx

     = ∫_{−∞}^{∞} P(x) [ log2(σ√(2π)) + (x²/(2σ²)) log2 e ] dx    [since log2(AB) = log2 A + log2 B]

     = (1/2) log2(2πσ²) ∫_{−∞}^{∞} P(x) dx + (log2 e/(2σ²)) ∫_{−∞}^{∞} x² P(x) dx

     [ ∫_{−∞}^{∞} P(x) dx = 1, from the properties of a pdf ]
     [ ∫_{−∞}^{∞} x² P(x) dx = σ², from the definition of variance ]

     = (1/2) log2(2πσ²) + (1/2) log2 e

h(x) = (1/2) log2(2πeσ²)
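The result can be evaluated directly; a short Python sketch (unit variance assumed for the example, purely for illustration):

    from math import log2, pi, e

    def gaussian_differential_entropy(variance):
        # Maximum entropy h(x) = 0.5 * log2(2*pi*e*sigma^2) of a source of average power sigma^2
        return 0.5 * log2(2 * pi * e * variance)

    print(gaussian_differential_entropy(1.0))   # about 2.05 bits for sigma^2 = 1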

4.10. CHANNEL CODING THEOREM OR SHANNON’S THEOREM

• The information is transmitted through the channel with rate ‘R’


called information rate.
• Shannon’s theorem says that it is possible to transmit
information with an arbitrarily small probability of error provided
that the information rate ‘R’ is less than or equal to a rate ’C’, called
channel capacity.
• Thus the channel capacity is the maximum information rate with
which the error probability is within the tolerable limits.
Statement
• There exists a coding technique such that the output of the source
may be transmitted over the channel with a probability of error in the
received message which may be made arbitrarily small.
Explanation
• This theorem says that if R≤C,it is possible to transmit information
without any error even if the noise is present.
• Coding techniques are used to detect and correct the errors.


Negative statement of channel coding theorem
• Given a source of ‘M’ equally likely messages, with M >> 1, which is
generating information at a rate ‘R’, then if
R > C,
the probability of error is close to unity for every possible set of M
transmitter signals.
• Hence, the negative statement of Shannon’s theorem says that if
R>C,then every message will be in error.

4.10.1 Channel Capacity Theorem (or) Shannon-Hartley Theorem
(or) Information Capacity Theorem

• When Shannon’s theorem of channel capacity is applied specifically
to a channel in which the noise is Gaussian, it is known as the Shannon-
Hartley theorem.
• It is also called the information capacity theorem.
Statement of theorem
• The channel capacity of a white, band limited Gaussian channel is

C = B log2(1 + S/N) bits/sec

Where
B →is the channel bandwidth,
S →is the signal power
N →is the total noise power within the channel bandwidth

• The noise power within the channel bandwidth is obtained by integrating
the noise power spectral density over the band:

N = ∫_{−B}^{B} (No/2) df

Here B is the bandwidth and No/2 is the power spectral density of white
noise; hence the noise power becomes

N = No B
[Figure 4.1 Noisy communication channel: source X, additive noise N, received signal Y at the destination]


Consider a source x and receiver y. As x and y are dependent.

H [ x , y ] = H [y ] + H [ x / y ] ... ( 2)
The noise is added to the system is Gaussian in nature
As source is independent of noise
H [ x , y ] = H [ x ] + H [N ] ... ( 3 )

As y depends on x and N so
Y = f ( x , N ) and Y=x+N
Therefore, H [ x , y ] = H (x , N ) ... ( 4 )
Combining equation(2),(3) and (4)
H [y ] + H [ x / y ] = H [ x ] + H [N ]
H [ x ] − H [ x / y ] = H [y ] − H [N ] ... ( 5 )

We Know that property of mutual information,i.e.,


H [x ] − H [x / y ] = I [x;y ]
Hence , equation (5)becomes,
I [ x ; y ] = H [y ] − H [N ]
Channel capacity ⇒ C = max I[x; y]
                     = max H(y) − max H(N)    ... (6)
As the noise is Gaussian,
max H(N) = H(N) = log2 √(2πe σ²_N)    ... (7)
where σ²_N = N = noise power.
max H(y) = log2 √(2πe σ²_y)    ... (8)
where σ²_y = power at the receiver
           = S + N
           = signal power + noise power
Substituting equations (7) and (8) in (6),

C = log2 √(2πe σ²_y) − log2 √(2πe σ²_N)
  = log2 √(2πe(S + N)) − log2 √(2πeN)
  = log2 [ 2πe(S + N) / (2πeN) ]^(1/2)
  = (1/2) log2 (1 + S/N)

If the signal is band limited, it is sampled at the Nyquist rate of 2B samples
per second, so

C = 2B × (1/2) log2(1 + S/N)
C = B log2(1 + S/N) bits/sec

where B is the channel bandwidth. Since the noise power is N = No B,

C = B log2(1 + S/(No B)) bits/sec

• This is channel capacity of band limited white gaussian noise.
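A small Python sketch of the Shannon-Hartley formula (illustrative only; the bandwidth and SNR shown are the values used in Problem 4 later in this section):

    from math import log2

    def channel_capacity(bandwidth_hz, snr_linear):
        # Shannon-Hartley capacity C = B * log2(1 + S/N), in bits/sec
        return bandwidth_hz * log2(1 + snr_linear)

    print(channel_capacity(3400, 1000))   # about 33888 bits/sec for a 30 dB voice channel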


4.10.2 Tradeoff Between Bandwidth and Signal to Noise Ratio
• Channel capacity of the Gaussian channel is given as,
 S
C = B log 2 1 + 
 N 
Above equation shows that the channel capacity depends on two factors.
i. Band width(B) of the channel
ii. Signal to Noise ratio(S/N)
Noiseless channel has infinite capacity
If there is no noise in the channel,then N=0. Hence S/N=∞.Such
a channel is called noiseless channel. Then capacity of such a channel
will be

C = B log 2 (1 + ∞ ) = ∞
Thus the noiseless channel has infinite capacity.

Infinite band width channel has limited capacity


• Now, even if the bandwidth ‘B’ is made infinite, the channel capacity
remains limited. This is because, as the bandwidth increases, the noise
power (N) also increases. The noise power is given as
N = No B

• Due to this increase in noise power, the signal to noise ratio (S/N)
decreases. Hence even if B approaches infinity, the capacity does not
approach infinity; as B → ∞, the capacity approaches an upper limit.
This upper limit is given as,

C∞ = lim_{B→∞} C = 1.44 S/No
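The saturation of capacity with increasing bandwidth can also be seen numerically; a short Python sketch (signal power and noise density set to 1 in arbitrary units, purely for illustration):

    from math import log2

    S, No = 1.0, 1.0                     # assumed signal power and noise PSD
    for B in (1, 10, 100, 1000, 10000):
        C = B * log2(1 + S / (No * B))   # capacity with noise power N = No*B
        print(B, round(C, 4))
    print("upper limit 1.44*S/No =", 1.44 * S / No)
    # C grows with B but saturates near 1.44*S/No = S/(No*ln 2)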
Problem 1
The data is to be transmitted at the rate of 10000 bits/sec over a
channel having band width B=3000 Hz .Determine the signal to noise
ratio required. If the band width is increased to 10000 Hz ,then determine
the signal to noise ratio.
Solution
The data is to be transmitted at the rate of 10,000 bits /sec.Hence
channel capacity must be atleast 10000 bits/sec for error-free
transmission.
Channel capacity(C) =10000 bits/sec
Band width(B) =3000Hz
The channel capacity of Gaussian channel,
 S
C = B log 2 1 + 
 N
Putting the values,
 S
10000=3000log 2 1 + 
 N
S
∴ =9
N

Now if the band width is B=10000Hz,then,


 S
10000 = 10000 log 2 1 + 
 N
S
∴ =1
N
S
Here , B=3000 =9
N
S
B=10000 =1
N

• The above results show that when the bandwidth is increased to 10,000 Hz,
the required signal to noise ratio is reduced by nine times.
• This means the required signal power is reduced when the bandwidth
is increased.

Problem 2
Channel capacity is given by

C = B log2(1 + S/N) bits/sec    ... (1)

In the above equation, when the signal power is fixed and white
Gaussian noise is present, the channel capacity approaches an upper limit
as the bandwidth ‘B’ is increased. Prove that this upper limit is given as

C∞ = lim_{B→∞} C = 1.44 S/No = (1/ln 2)(S/No)

State Shannon’s information capacity theorem and derive the
expression for the limiting capacity of the channel.
Solution
We Know that,noise power is given as,
N=NoB
Putting this value in equation (1)we get,

 S 
C = B log 2 1 + 
 NoB 
By rearranging the above equation we get,

C = (S/No) · (No B/S) log2 (1 + S/(No B))
  = (S/No) log2 (1 + S/(No B))^(No B / S)

Let us apply the limit as B → ∞:

C∞ = lim_{B→∞} C = (S/No) lim_{B→∞} log2 (1 + S/(No B))^(No B / S)

In the above equation put x = S/(No B). Then as B → ∞, x → 0, i.e.,

C∞ = (S/No) lim_{x→0} log2 (1 + x)^(1/x)

Here we use the standard relation lim_{x→0} (1 + x)^(1/x) = e, so the
above equation becomes

C∞ = (S/No) log2 e = (S/No) · log10 e / log10 2
   = 1.44 S/No

This is the required equation. It gives the upper limit on channel


capacity as band width B approaches infinity.
Problem 3
A black and white TV picture consists of about 2 × 10^6 picture
elements with 16 different brightness levels, all equally probable. If
pictures are repeated at the rate of 32 per second, calculate the average rate
of information conveyed by this TV picture source. If the SNR is 30 dB, what
is the bandwidth required to support the transmission of the
resultant video signal?

Solution
Given
Picture elements =2x106
Source levels(symbols) =16 i.e.,M=16
Picture repetition rate =32/sec.
(S/N)_dB = 30 dB
(i) The source symbol entropy(H)
Source emits any one of the 16 brightness levels. Here M=16. These
levels are equiprobable. Hence entropy of such source is given by,
H=log2M
=log216
=4 bits/symbol(level)
(ii)Symbol rate(r)
Each picture consists of 2 × 10^6 picture elements, and 32 such
pictures are transmitted per second. Hence the number of picture elements
per second will be
r = 2 × 10^6 × 32 symbols/sec
  = 64 × 10^6 symbols/sec

(iii) Average information rate (R):


Information rate of the source is given by

R = rH = 64 × 106 × 4bits /sec.


=2.56 × 108bits /sec.
This is the average rate of by TV picture.

(iv) Required bandwidth for S/N = 30 dB
We know that (S/N)_dB = 10 log10(S/N)
∴ 30 = 10 log10(S/N)
∴ S/N = 1000
Channel coding theorem states that information can be received
without error if,
R≤C
R = 2.56 × 10^8 bits/sec and C = B log2(1 + S/N)

2.56 × 10^8 ≤ B log2(1 + S/N)
i.e., 2.56 × 10^8 ≤ B log2(1 + 1000)

(or) B ≥ 2.56 × 10^8 / log2(1001), i.e., B ≥ 25.68 MHz

Therefore, the transmission channel must have a bandwidth of
25.68 MHz to transmit the resultant video signal.

Problem 4
A voice grade telephone channel has a band width of 3400 Hz.
If the signal to noise ratio(SNR) on the channel is 30 dB,determine the
capacity of the channel. If the above channel is to be used to transmit
4.8 kbps of data determine the minimum SNR required on the channel.
Solution:
Given data: channel bandwidth B = 3400 Hz, (S/N)_dB = 30 dB.
We know that
(S/N)_dB = 10 log10(S/N)
∴ 30 = 10 log10(S/N)
log10(S/N) = 3
∴ S/N = 1000
(i) To calculate capacity of the channel


Capacity of the channel is given as,
 S
C = B log 2 1 + 
 N
=3400log 2 (1 + 1000 )
=33.888 kbits/sec.

(ii) To obtain the minimum S/N for 4.8 kbps data
Here the data rate is 4.8 kbps. From the channel coding theorem,
R ≤ C

 S
Here R=4.8 kbps and C=Blog 2 1 + 
 N
Hence above equation becomes,

 S
4.8 kbps ≤ Blog 2 1 + 
 N
 S
i .e., 4800 ≤ 3400log 2 1 + 
 N 
 S
i .e., log 2 1 +  ≥ 1.41176
 N
 S
log10 1 + 
 N  ≥ 1.41176
log10 2
S
∴ ≥ 1.66
N
S 
This means   = 1.66 to transmit data at the rate of 4.8kbps
 N min

Problem 5
For an AWGN channel with 4.0 kHz bandwidth, the noise spectral density No/2 is 1.0 picowatt/Hz and the signal power at the receiver is 0.1 mW. Determine the maximum capacity, as also the actual capacity, of the above AWGN channel.
Solution:
Given: B = 4000 Hz, S = 0.1 × 10⁻³ W, No/2 = 1 × 10⁻¹² W/Hz

(i) To obtain the actual capacity
    C = B log2(1 + S/N) bits/sec
Noise power N = NoB = 2 × 10⁻¹² × 4000 = 8 × 10⁻⁹ W
    C = 4000 log2(1 + 0.1 × 10⁻³ / 8 × 10⁻⁹)
      = 4000 log2(12501)
      = 4000 log10(12501) / log10 2
    C = 54.44 kbits/sec

(ii) To obtain the maximum capacity
The signal power is S = 0.1 × 10⁻³ W.
The maximum capacity of a Gaussian channel is given as
    C∞ = lim(B→∞) C = 1.44 S/No
Here No/2 = 1 × 10⁻¹² W/Hz, so No = 2 × 10⁻¹² W/Hz. Hence the above equation becomes
    C∞ = 1.44 × (0.1 × 10⁻³)/(2 × 10⁻¹²)
       = 72 × 10⁶ bits/sec, or 72 Mbits/sec
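The three worked problems above can be cross-checked with a few lines of Python; the sketch below simply re-evaluates the same formulas with the values taken from the problem statements (the function name is my own, not part of the text).

    import math

    def shannon_capacity(B_hz, snr):
        """C = B*log2(1 + S/N) in bits/s."""
        return B_hz * math.log2(1 + snr)

    # Problem 3: bandwidth needed for R = 2.56e8 bits/s at S/N = 1000
    print("B_min =", 2.56e8 / math.log2(1 + 1000) / 1e6, "MHz")        # about 25.68

    # Problem 4: capacity of a 3400 Hz telephone channel at S/N = 1000
    print("C =", shannon_capacity(3400, 1000) / 1e3, "kbits/s")        # about 33.89

    # Problem 5: actual and limiting capacity of the AWGN channel
    S, No, B = 0.1e-3, 2e-12, 4000        # No = 2 x (No/2) = 2e-12 W/Hz
    print("C =", shannon_capacity(B, S / (No * B)) / 1e3, "kbits/s")   # about 54.44
    print("C_inf =", 1.44 * S / No / 1e6, "Mbits/s")                   # 72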

4.11 ERROR CONTROL CODING

• When data passes through the channel, errors are introduced in the data because channel noise interferes with the signal: the effective signal power is reduced and errors appear in the received bits.
Figure 4.2 Digital communication system with channel encoding
(Input message bits → Channel encoder → Modulator → Channel with noise → Demodulator → Channel decoder → Message bits)


• The channel encoder adds extra (redundant) bits to the message bits. The encoded signal is transmitted through the noisy channel.
• The channel decoder identifies the extra (redundant) bits and, using them, detects and corrects any errors present in the received message bits.
• The data rate increases due to the extra redundant bits, and the system becomes slightly more complex because of the coding techniques.

4.11.1 Definitions and Principles


i. Code word: The encoded block of ‘n’ bits is called a code word. It
consists of message bits and redundant bits.
ii. Block length: The number of bits ‘n’ after coding is called as block
length of the code.
iii. Code rate: The ratio of message bits (k) and encoded output bits (n)
is called code rate (r)
k
r =
n
iv. Channel data rate: It is the bit rate at the output of the encoder. If the bit rate at the input of the encoder is Rs, then the channel data rate, i.e., the bit rate at the output of the encoder, will be
    channel data rate (Ro) = (n/k)·Rs

v. Hamming distance: The Hamming distance between two code vectors is defined as the number of positions in which they differ.
For example, let X = 110 and Y = 101. The two code vectors differ in the second and third bits. Hence the Hamming distance between X and Y is 2, i.e., d(X,Y) = d = 2.
vi. Minimum distance (dmin): The minimum distance of a linear block code is defined as the smallest Hamming distance between any pair of code words. For a linear code this is the same as the smallest Hamming weight of the difference between any pair of code words.
The following list gives the minimum distance required for each error-handling capability of the code (see the sketch after this list):
1. To detect up to 's' errors per word, dmin ≥ s + 1
2. To correct up to 't' errors per word, dmin ≥ 2t + 1
3. To correct up to 't' errors and detect s > t errors per word, dmin ≥ (t + s + 1)
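A minimal Python illustration of these definitions follows. The helper names are my own, and the three-word code at the end is only an illustrative example, not a code from the text.

    from itertools import combinations

    def hamming_distance(x, y):
        """Number of positions in which two equal-length code words differ."""
        return sum(a != b for a, b in zip(x, y))

    print(hamming_distance("110", "101"))          # 2, as in the example above

    def minimum_distance(code):
        """Smallest Hamming distance over all distinct pairs of code words."""
        return min(hamming_distance(x, y) for x, y in combinations(code, 2))

    # dmin = 4 here, so this toy code can detect up to 3 errors and correct 1
    print(minimum_distance(["000000", "011101", "101110"]))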

4.12 LINEAR BLOCK CODES

• We consider an (n, k) linear block code in which k message bits and (n − k) parity or check bits are transmitted.
• The total number of bits at the output of the channel encoder is 'n'.
• The figure below illustrates this concept.
Figure 4.3 Linear block codes
(A k-bit message block m1 m2 m3 ... mk enters the channel encoder; the code block output is an n-bit code word consisting of the k message bits followed by the (n − k) parity bits c1 c2 c3 ... c(n-k).)
Systematic code: In a systematic block code, the message bits appear at the beginning of the code block output and the parity/check bits follow, as shown in the figure. In a non-systematic code it is not possible to differentiate the message bits from the parity bits; they are mixed together.

Linear code: A code is said to be linear if the sum of any two code vectors produces another code vector.
• A code word consists of k message bits, denoted by m1, m2, ..., mk, and (n − k) parity (or check) bits, denoted by c1, c2, ..., c(n-k).
• The sequence of message bits is applied to a linear block encoder to produce an n-bit code word. The elements of this code word are x1, x2, ..., xn.
• We can express this code word mathematically as
    X = (m1, m2, ..., mk, c1, c2, ..., c(n-k))    ... (1)

• The code vector represented by equation (1) can also be represented mathematically as
    X = [M : C]    ... (2)
where M = k message vector (the k message bits)
and C = (n − k), or q, parity vector (the parity bits), with q = n − k.
• A block code generator generates the parity vector (the parity bits) required to be added to the message bits to generate the code words. The code vector X can also be represented as under:
    X = MG    ... (3)
where X = code vector of size 1 × n
      M = message vector of size 1 × k
      G = generator matrix of size k × n
Representation of the code vector:
    [X]1×n = [M]1×k [G]k×n    ... (4)
• The generator matrix depends on the type of linear block code used. The generator matrix is generally represented as under:
    [G] = [Ik : P]    ... (5)
where Ik = k × k identity matrix
and P = k × (n − k) coefficient matrix, (or) P = k × q coefficient matrix, where q = n − k.
Therefore,
    Ik = [1 0 ... 0]
         [0 1 ... 0]
         [. .     .]
         [0 0 ... 1] (k×k)
and
    P = [P11 P12 ... P1q]
        [P21 P22 ... P2q]
        [ .   .       . ]
        [Pk1 Pk2 ... Pkq] (k×q)    ... (6)

• The parity vector can be obtained as
    C = MP    ... (7)
Substituting the matrix form, we obtain
    [c1, c2, ..., cq]1×q = [m1, m2, ..., mk]1×k [P11 P12 ... P1q]
                                               [P21 P22 ... P2q]
                                               [ .   .       . ]
                                               [Pk1 Pk2 ... Pkq] (k×q)    ... (8)
By solving the above matrix equation, we get the parity bits as under:
    c1 = m1P11 ⊕ m2P21 ⊕ m3P31 ⊕ ... ⊕ mkPk1
    c2 = m1P12 ⊕ m2P22 ⊕ m3P32 ⊕ ... ⊕ mkPk2    ... (9)
    c3 = m1P13 ⊕ m2P23 ⊕ m3P33 ⊕ ... ⊕ mkPk3
Similarly, we can obtain the expressions for the remaining parity bits.
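Equation (9) is simply a modulo-2 matrix product, as the Python sketch below illustrates. The 3 × 3 coefficient matrix used here is a hypothetical example of the same shape as the one appearing in Problem 1 further below; numpy is an implementation choice, not part of the text.

    import numpy as np

    def parity_bits(M, P):
        """C = M.P over GF(2): each parity bit is an XOR of selected message bits."""
        return (np.array(M) @ np.array(P)) % 2

    P = [[0, 1, 1],          # example coefficient matrix (k = 3, q = 3)
         [1, 0, 1],
         [1, 1, 0]]

    print(parity_bits([1, 0, 1], P))   # c1 = m2^m3, c2 = m1^m3, c3 = m1^m2 -> [1 0 1]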

4.13 HAMMING CODES


Hamming codes are linear block codes. The family of (n, k)
Hamming codes for q ≥ 3 is defined by the following expressions:
Code word length n = (2^q − 1) and number of message bits k = 2^q − q − 1, where q = (n − k).

Figure 4.4 Code word structure of a Hamming code (message bits followed by parity bits)

1. Block length: n = 2^q − 1
2. Number of message bits: k = 2^q − q − 1 = n − q
3. Number of parity bits: (n − k) = q, where q ≥ 3, i.e., the minimum number of parity bits is 3
4. The minimum distance dmin = 3
5. The code rate (efficiency) r = k/n = (2^q − q − 1)/(2^q − 1) = 1 − q/(2^q − 1)
   If q >> 1, then the code rate r ≈ 1.

The general structure of the Hamming code is shown in figure 4.4. A short numerical illustration of these parameters is given below.
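The following sketch merely tabulates n, k and the code rate from the expressions above for a few values of q (the helper name is my own).

    def hamming_parameters(q):
        """Return (n, k, rate) for the (2^q - 1, 2^q - q - 1) Hamming code."""
        n = 2**q - 1
        k = n - q
        return n, k, k / n

    for q in range(3, 8):
        n, k, r = hamming_parameters(q)
        print(f"q = {q}: ({n},{k}) Hamming code, rate = {r:.3f}")

For q = 3 this gives the familiar (7,4) code with rate 0.571, and the rate approaches 1 as q grows, as stated above.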

4.13.1 Error Detection and Correction Capabilities of Hamming


Code
Considering the minimum distance, we have dmin = 3.
1. The number of errors that can be detected per word = 2,
   since dmin ≥ (s + 1), ∴ 3 ≥ s + 1, ∴ s ≤ 2
2. The number of errors that can be corrected per word = 1,
   since dmin ≥ (2t + 1), ∴ 3 ≥ (2t + 1), ∴ t ≤ 1
Thus with dmin = 3, it is possible to detect up to 2 errors and to correct up to only 1 error.
There is another way of expressing the relationship between the message bits and the parity check bits of a linear block code. Let H denote an (n − k) × n matrix defined as under:
    H = [P^T | I(n-k)]
where P^T is an (n − k) × k matrix representing the transpose of the coefficient matrix P, and I(n-k) is an (n − k) × (n − k) identity matrix.


The transpose of the coefficient matrix can be obtained by interchanging the rows and columns of the coefficient matrix P:

    P^T = [P11 P21 ... Pk1]
          [P12 P22 ... Pk2]
          [ .   .       . ]
          [P1q P2q ... Pkq] (q×k)    ... (1)

Therefore, the matrix H is given by

    H = [P11 P21 ... Pk1 : 1 0 0 ... 0]
        [P12 P22 ... Pk2 : 0 1 0 ... 0]
        [P13 P23 ... Pk3 : 0 0 1 ... 0]
        [ .   .       .  : . . .     .]
        [P1q P2q ... Pkq : 0 0 0 ... 1] ((n−k)×n)

Problem 1
The generator matrix for a (6,3) block code is given below. Find all
the code vectors of this code.

1 0 0 | 0 1 1 
G = 0 1 0 | 1 0 1 
0 0 1 | 1 1 0 

Solution
The code is an (n, k) = (6, 3) block code; hence, in this case, n = 6 and k = 3. This means that the message block size k is 3 and the length of the code vector, i.e., n, is 6. For obtaining the code vectors, we shall follow the steps given below:
(i) First, we separate out the identity matrix Ik and the coefficient matrix P.
We know that the generator matrix is given by
    G = [Ik : P]
Comparing this with the given generator matrix, we obtain

1 0 0 
I k = I 3×3 = 0 1 0 
0 0 1 

0 1 1 
and Pk ×q = P3×3 = 1 0 1 
1 1 0 

As the size of the message block is k = 3, there are eight possible message blocks:
(0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0), (1,1,1).

Now let us obtain the relation between parity vectors c, message


vectors M and the coefficient matrix P as under:
0 1 1 
[c1,c 2 ,c 3 ] = [m1,m2 ,m3 ] 1 0 1  (1)
1 1 0 
Now let us obtain the code words for all the message words.
The solution of equation (1) is given by
c1 = (m1 × 0 ) ⊕ (m2 × 1) ⊕ (m3 × 1) = m2 ⊕ m3 ... ( 2)
c 2 = (m1 × 1) ⊕ (m2 × 0 ) ⊕ (m3 × 1) = m1 ⊕ m3 ... ( 3 )
c 3 = (m1 × 1) ⊕ (m2 × 1) ⊕ (m3 × 0 ) = m1 ⊕ m2 ... ( 4 )

By substituting the values of m1,m2 , and m3 into these equa-


tions, it is possible for us to obtain the parity bits c1,c 2 and c 3
1. For the message word (m1,m2 , m3 ) = ( 0, 0, 0 ) we have
c1 = 0 ⊕ 0 = 0
c2 = 0 ⊕ 0 = 0
c3 = 0 ⊕ 0 = 0
∴ c1,c 2 ,c 3 = ( 0, 0, 0 )
The complete code word for this message word is given by
m1 m2 m3 c1 c2 c3
Code word = 0 0 0 0 0 0

message parity
2. For the second message vector (i.e.,) (m1,m2 , m3 ) = ( 0, 0,1) we have

c1 = 0 ⊕ 1 = 1
c2 = 0 ⊕1 = 1
c3 = 0 ⊕ 0 = 0
∴ c1,c 2 ,c 3 = (1,1, 0 )
The complete code for this message word is given by

m1 m2 m3 c1 c2 c3
Code word = 0 0 1 1 1 0

message parity

3. Similarly, we can obtain the code words for the remaining message
words. All these code words have been given in table below
S.No Message vectors Parity bits Complete code vector
m1 m2 m3 c1 c2 c3 m1 m2 m3 c1 c2 c3
1 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 1 1 1 0 0 0 1 1 1 0
3 0 1 0 1 0 1 0 1 0 1 0 1
4 0 1 1 0 1 1 0 1 1 0 1 1
5 1 0 0 0 1 1 1 0 0 0 1 1
6 1 0 1 1 0 1 1 0 1 1 0 1
7 1 1 0 1 1 0 1 1 0 1 1 0
8 1 1 1 0 0 0 1 1 1 0 0 0
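The table above can be regenerated mechanically. The sketch below multiplies every 3-bit message by the generator matrix of this problem over GF(2); numpy and the loop structure are implementation choices of this sketch, not part of the text.

    import numpy as np
    from itertools import product

    # Generator matrix G = [I3 : P] of the (6,3) code in Problem 1
    G = np.array([[1, 0, 0, 0, 1, 1],
                  [0, 1, 0, 1, 0, 1],
                  [0, 0, 1, 1, 1, 0]])

    for m in product([0, 1], repeat=3):
        x = (np.array(m) @ G) % 2           # X = M.G over GF(2)
        print(m, "->", "".join(map(str, x)))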

Problem 2
The parity check matrix of a particular (7,4) linear block code is
given by,
1 1 1 0 1 0 0
[H ] = 1 1 0 1 0 1 0 
1 0 1 1 0 0 1 

(i) Find the generator matrix (G).


(ii) List all the code vectors
(iii) What is the minimum distance between the code vectorsw?
(iv) How many errors can be detected? How many errors can be
corrected ?

Solution
First, let us obtain the P^T matrix.
P^T is the transpose of the coefficient matrix P. The given parity check matrix H is an (n − k) × n matrix, (or) a q × n matrix, where q = n − k.
It is given that the code is a (7,4) Hamming code. Therefore, we have
    n = 7, k = 4 and q = 3
Hence the parity check matrix [H] will be a 3 × 7 matrix, i.e.,
    [H] = [P^T | I(n-k)]
where P^T is an (n − k) × k matrix and I(n-k) is an (n − k) × (n − k) identity matrix.

1 1 1 0 | 1 0 0 
We have [H ]3×3 = 1 1 0 1 | 0 1 0 
1 0 1 1 | 0 0 1  3×7

The transpose matrix PT is given by


1 1 1 0 
P = 1 1 0 1 
T

- 1 0 1 1  3×4

The P matrix can be obtained by interchanging the rows and columns of the transposed matrix P^T.

1 1 1
1 1 0 
We have P =
1 0 1
 
0 1 1  4×3

Generator matrix (G)
The generator matrix G is a k × n matrix; so, here, it will be a 4 × 7 matrix. Thus, we have G = [Ik | P], where Ik is a k × k, i.e., 4 × 4, identity matrix.
Substituting the 4 × 4 identity matrix and the coefficient matrix, we obtain
we obtain,
1 0 0 0 | 1 1 1 
0 1 0 0 | 1 1 0 
G= 
0 0 1 0 | 1 0 1 
 
0 0 0 1 | 0 1 1 
This is the required generator matrix.
Now, let us obtain the parity bits for each message vector.
The parity bits can be obtained by using the following expression:
    C = MP
Therefore, [c1, c2, c3]1×3 = [m1, m2, m3, m4]1×4 [P]4×3

    [c1, c2, c3]1×3 = [m1, m2, m3, m4]1×4 [1 1 1]
                                          [1 1 0]
                                          [1 0 1]
                                          [0 1 1] (4×3)
Solving, we obtain
    c1 = (m1 × 1) ⊕ (m2 × 1) ⊕ (m3 × 1) ⊕ (m4 × 0) = m1 ⊕ m2 ⊕ m3
    c2 = (m1 × 1) ⊕ (m2 × 1) ⊕ (m3 × 0) ⊕ (m4 × 1) = m1 ⊕ m2 ⊕ m4
    c3 = (m1 × 1) ⊕ (m2 × 0) ⊕ (m3 × 1) ⊕ (m4 × 1) = m1 ⊕ m3 ⊕ m4
For example, for the message word (m1, m2, m3, m4) = (0 1 0 1):
    c1 = 0 ⊕ 1 ⊕ 0 = 1
    c2 = 0 ⊕ 1 ⊕ 1 = 0
    c3 = 0 ⊕ 0 ⊕ 1 = 1
Hence, the parity bits are c1c2c3 = 101.
Therefore, the complete code word for the message word 0101 is

Complete code word = 0 1 0 1 1 0 1


Message Parity

Similarly, we can obtain the code words for the other message
vectors and the corresponding parity bits and code words are given in
table given below. The weight of the code word is also given.
Code words for the (7,4) Hamming code
S.No | m1 m2 m3 m4 | c1 c2 c3 | x1 x2 x3 x4 x5 x6 x7 | Weight of the code vector
1. 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2. 0 0 0 1 0 1 1 0 0 0 1 0 1 1 3
3. 0 0 1 0 1 0 1 0 0 1 0 1 0 1 3
4. 0 0 1 1 1 1 0 0 0 1 1 1 1 0 4
5. 0 1 0 0 1 1 0 0 1 0 0 1 1 0 3
6. 0 1 0 1 1 0 1 0 1 0 1 1 0 1 4
7. 0 1 1 0 0 1 1 0 1 1 0 0 1 1 4
8. 0 1 1 1 0 0 0 0 1 1 1 0 0 0 3
9. 1 0 0 0 1 1 1 1 0 0 0 1 1 1 4
10. 1 0 0 1 1 0 0 1 0 0 1 1 0 0 3
11. 1 0 1 0 0 1 0 1 0 1 0 0 1 0 3
12. 1 0 1 1 0 0 1 1 0 1 1 0 0 1 4
13. 1 1 0 0 0 0 1 1 1 0 0 0 0 1 3
14. 1 1 0 1 0 1 0 1 1 0 1 0 1 0 4
15. 1 1 1 0 1 0 0 1 1 1 0 1 0 0 4
16. 1 1 1 1 1 1 1 1 1 1 1 1 1 1 7

We know that the minimum distance dmin is equal to the minimum weight of any non-zero code vector. Looking at the table above, we obtain
    dmin = 3
The number of errors that can be detected is given by
    dmin ≥ s + 1, i.e., 3 ≥ s + 1, or s ≤ 2
Similarly, dmin ≥ 2t + 1 gives 3 ≥ 2t + 1, or t ≤ 1, which means that at most one error can be corrected.
Thus, for this (7,4) linear block code, at most two errors can be detected and at most one error can be corrected.

4.13.2 Encoder of (7,4) Hamming Code

Figure 4.5 Encoder of the (7,4) Hamming code
(The input sequence fills a message register m4 m3 m2 m1; three modulo-2 adders form the check bits c3 c2 c1 in a parity bit register, and an output switch S sends the message bits followed by the parity/check bits as the code word.)


The encoder for (7,4) Hamming code is shown in figure 4.5. This
encoder produces all the code words corresponding to various message
words listed in table given above
The parity check or check bits (c 3 ,c 2 ,c1 ) are being generated for
each message word (m 4 ,m3 ,m2 ,m1 ) . The parity bits are obtained from
the message bits by means of modulo-2-adders. The output switch S is
first connected to the message register to transmit all the message bits
in a code word. Then it is connected to the parity bit register to transmit
the corresponding parity bits. Thus, we get a 7 bit code word at the out-
put of the switch.

4.14 SYNDROME DECODING FOR BLOCK CODES

1. Basic concept
The decoding of linear block codes is done by using a special technique called syndrome decoding, which reduces the memory requirement of the decoder. The two steps involved are: (i) error detection in the received code word, and (ii) error correction. The syndrome decoding technique is explained as follows.

2. Practical Assumptions
(i) Let X represent the transmitted code word and Y represent the re-
ceived code word.

(ii) Then if X =Y, no errors in the received signal and if X ≠ Y, then some
errors are present.
3. Detection of Error
a. For an (n,k) linear block code, there exists a parity check matrix of size
(n − k ) × n . We know that,
Parity check matrix, H = [P^T : I(n-k)] of size (n−k)×n, (or) H = [P^T : Iq] of size q×n

Where PT represents transpose of P matrix and I n −k is the identity


matrix.
b. The transpose of the parity check matrix is given by
    H^T = [   P   ]        [  P ]
          [-------]  (or)  [----]
          [I(n-k) ]        [ Iq ]   of size n × q
where P is the coefficient matrix.


c. The transpose of parity check matrix (HT) exhibits a very important
property (i.e.,) XH T = ( 0, 0,.....0 )

This means that the product of any coder vector X and the
transpose of the parity check matrix will always be 0.
We shall use this property for the detection of errors in received
code words as under:
At the receiver, we have
If YH T = ( 0, 0,...0 ) , then Y = X ( i.e.,) there is no error
But, if YH T ≠ ( 0, 0,...0 ) , then Y ≠ X ( i.e.,) error exists in the received
code word
4. Syndrome and its use for error detection
The syndrome is defined as the non-zero output of the
product YHT. Thus, the non-zero syndrome represents some errors
present in the received code word Y. The syndrome is represented by S
and is mathematically given as,
    S = YH^T
or  [S]1×(n−k) = [Y]1×n [H^T]n×(n−k)
(or) [S]1×q = [Y]1×n [H^T]n×q



Thus, when S = 0, there are two possibilities: either Y = X, i.e., there is no error in the received signal, or Y ≠ X but Y happens to be some other valid code word (other than X).
Figure 4.6 Two different possibilities for S = 0
Thus, an all-zero syndrome indicates that there is no error, and a non-zero element in the syndrome indicates the presence of errors. But sometimes, even if all the syndrome elements are zero, an error exists; this is the second possibility shown in figure 4.6.
5. Error Vector (E)
i. For the n-bit transmitted and received code words X and Y
respectively, let us define an n-bit error vector E such that its non-
zero elements represents the locations of errors in the received code
vector Y as shown in figure 4.7
ii. The encircled entries in figure indicate the presence of errors.

Transmitted code vector X: 0 0 1 1 1 1 0
Received code vector Y:    1 0 0 1 0 1 0
Error vector E:            1 0 1 0 1 0 0
Figure 4.7 The non-zero elements of the error vector represent the locations of errors in the received code vector Y
The elements in the code word vector Y can be obtained by using
the modulo 2 additions, as under:
Y = X ⊕E
From figure 4.7 we can write that,
Y = [0 ⊕ 1, 0 ⊕ 0,1 ⊕ 1,1 ⊕ 0,1 ⊕ 1,1 ⊕ 0, 0 ⊕ 0]
or Y= [1, 0, 0,1, 0,1, 0]
iii. The principle of modulo-2-addition can be applied in a slightly differ-
ent way as under:
X =Y ⊕E
From figure 4.7 we can write that

X = [1 ⊕ 1, 0 ⊕ 0, 0 ⊕ 1,1 ⊕ 0, 0 ⊕ 1,1 ⊕ 0, 0 ⊕ 0]
X = [0, 0,1,1,1,1, 0]

6. Relation between syndrome and error vector
We know that S = YH^T and Y = X ⊕ E. Then
    S = [X ⊕ E]H^T = XH^T ⊕ EH^T
But XH^T = 0. Therefore, S = 0 ⊕ EH^T,
or  S = EH^T
This is the relation between the syndrome S and the error vector.

Problem 1
For the code vector X = (0 1 1 1 0 0 0) and the parity check matrix H given below, prove that
    XH^T = (0, 0, ..., 0)
    H = [1 1 1 0 1 0 0]
        [1 1 0 1 0 1 1]
        [1 0 1 1 0 0 1] (3×7)
Solution
The transpose of the given parity matrix H, can be obtained by
interchanging the rows and columns and under:

1 1 1
1 1 0 

1 0 1
T  
H = 0 1 1
1 0 0
 
0 1 0
0 1 1  7×3

Also the product XHT is given by
1 1 1
1 1 0 

1 0 1
 
XH = ( 0111000 )1×7 0
T
1 1
1 0 0
 
0 1 0
0
 1 1  7×3

= ( 0 × 1) ⊕ (1 × 1) ⊕ (1 × 1) ⊕ (1 × 0 ) ⊕ ( 0 × 1) ⊕ ( 0 × 0 ) ⊕ ( 0 × 0 )
( 0 × 1) ⊕ (1 × 1) ⊕ (1 × 0 ) ⊕ (1 × 1) ⊕ ( 0 × 0 ) ⊕ ( 0 × 1) ⊕ ( 0 × 1)
( 0 × 1) ⊕ (1 × 0 ) ⊕ (1 × 1) ⊕ (1 × 1) ⊕ ( 0 × 0 ) ⊕ ( 0 × 0 ) ⊕ ( 0 × 1)
or XH T = 0 ⊕ 1 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0, 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0,
0 ⊕ 0 ⊕1 ⊕1 ⊕ 0 ⊕ 0 ⊕ 0
or XH = [0, 0, 0]1×3
T


This proves that, for a valid code word, the product XH^T = (0, 0, 0).

Problem 2
The parity check matrix of a (7,4) Hamming code is as under:
1 1 0 1 1 0 0
H = 1 1 1 0 0 1 1 
1 0 1 1 0 0 1 
Calculate the syndrome vector for single bit errors.
Solution
We know that syndrome vector is given by
S = EH T = [E ]1×7 H T 
7×3
Therefore, syndrome vector will be represented by a 1 × 3 matrix
Thus, S1×3 = [E ]1×7 H 
T
7×3
Now let us write various error vectors
Various error vectors with single bit errors are shown in table
given below The bolded/encircled bits represent the locations of errors

S.No Error Vectors Bit in Error


1. 1 0 0 0 0 0 0 First

2. 0 1 0 0 0 0 0 Second

3. 0 0 1 0 0 0 0 Third

4. 0 0 0 1 0 0 0 Fourth

5. 0 0 0 0 1 0 0 Fifth

6. 0 0 0 0 0 1 0 Sixth

7. 0 0 0 0 0 0 1 Seventh

Let us calculate the syndrome corresponding to each error vector.


(i) We have [S ]1×3 = [E ]1×7 H T  7×3

Substituting [E ] = (10 0 0 0 0 0 ) and H T we obtain

1 1 1 
1 1 0 
 
1 0 1 
[S ] = [10 0 0 0 0 0]1×7 0 1 1 
1 0 0 
 
0 1 0 
0 1 1 
 
Therefore, [S ] = [1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0,1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0
1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0]
Here, [S ] = [1, 1, 1]
This is the syndrome for the first bit in error.
(ii) For the second bit in error, we have

1 1 1 
1 1 0 
 
1 0 1 
[S ] = [0 1 0 0 0 0 0] 0 1 1 
1 0 0 
 
0 1 0 
0 1 1 
 
The erefore, [S ] = [0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0, 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0
0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0]
Here, [S ] = [1, 1, 0]

(iii) Similarly, we can obtain the other syndromes as shown in table


below
(iv) Here, it may be noted that the first row of table represent an error
vector with no errors. The corresponding syndrome is (0,0,0)
(v) Table below shows that the syndrome vectors for various error
vectors.

Syndrome vector
S.No Error vectors with single bit errors
“S”
1. 0 0 0 0 0 0 0 0 0 0
2. 1 0 0 0 0 0 0 1 1 1 ←1st Row of HT

3. 0 1 0 0 0 0 0 1 1 0 ←2nd Row of HT

4. 0 0 1 0 0 0 0 1 0 1 ←3rd Row of HT

5. 0 0 0 1 0 0 0 0 1 1 ←4th Row of HT

6. 0 0 0 0 1 0 0 1 0 0 ←5th Row of HT

7. 0 0 0 0 0 1 0 0 1 0 ←6th Row of HT

8. 0 0 0 0 0 0 1 0 1 1 ←7th Row of HT

4.14.1 Error correction using syndrome vector


Let’s see how single bit errors can be corrected using syndrome
decoding.
Error correction using syndrome vector
(i) For a transmitted code vector X = (0 1 0 0 1 1 0), we obtain the received code vector Y = (0 1 1 0 1 1 0); let there be an error in the 3rd position.
(ii) We calculate the corresponding syndrome vector
    S = YH^T
(iii) Since XH^T = 0, this syndrome also satisfies S = EH^T.
(iv) From the syndrome vector, we obtain the error vector (from the table of syndromes of single-bit error patterns).
(v) From the error vector, we obtain the transmitted signal (or correct vector) as
    X = Y ⊕ E
A minimal sketch of this procedure is given after this list.
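The sketch below assumes the (7,4) parity check matrix H = [P^T | I3] from Problem 2 of Section 4.13 and assumes at most one bit is in error; the look-up table maps every single-bit error pattern to its syndrome, exactly as described above. Function names and the use of numpy are my own choices.

    import numpy as np

    # Parity check matrix of the (7,4) Hamming code from Problem 2 of Section 4.13
    H = np.array([[1, 1, 1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0, 1, 0],
                  [1, 0, 1, 1, 0, 0, 1]])

    def syndrome(y):
        return tuple((np.array(y) @ H.T) % 2)

    # Look-up table: syndrome of every single-bit error pattern
    table = {}
    for i in range(7):
        e = np.zeros(7, dtype=int)
        e[i] = 1
        table[syndrome(e)] = e

    def correct(y):
        s = syndrome(y)
        if s == (0, 0, 0):
            return np.array(y)                   # no detectable error
        return (np.array(y) + table[s]) % 2      # X = Y (+) E, modulo-2

    # Transmitted 0 1 0 0 1 1 0, third bit received in error:
    print(correct([0, 1, 1, 0, 1, 1, 0]))        # -> [0 1 0 0 1 1 0]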

Problem 2
To clear the above concept of error correction using syndrome
vector, let us consider one particular example. For this, let us use the
following parity check matrix:

1 1 1 0 1 0 0 
H = 1 1 0 1 0 1 1 
1 0 1 1 0 1 1  3×7

Solution
(i) First, we obtain the received code vector ‘Y’
Assuming X = (0 1 0 0 1 1 0) to be the transmitted code vector, let the received code vector be obtained by assuming that the third bit is in error. Thus,
    Y = (0 1 1 0 1 1 0)
Here, the third bit represents the error.
(ii) Next let us determine the corresponding syndrome vector
We know that,
Syndrome S = YH T
1 1 1 
0 1 1 
 
1 0 1 
 
S [ 0 1 1 0 1 1 0 ] 1 1 0 
0 0 1 
 
0 1 0 
1 1 0 
 
We have S = [ 0 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0, 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 0,
0 ⊕ 1 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0]
or S = [1, 0,1]

This is the syndrome vector for the given received signal. This
corresponds to the 3rd row of the transpose matrix HT.
(iii) But S = YH^T = EH^T.
Therefore, EH^T = [1, 0, 1]
(iv) Let us obtain the error vector for the syndrome vector S = [1, 0, 1]
From the table the error vector corresponding to the syndrome
(1, 0, 1) is given by
E = (0 0 1 0 0 0 0)

This shows that the error is present in the third position of the
received code vector Y.
(v) To obtain the correct vector: the vector X can be obtained as under:
    X = Y ⊕ E
Substituting the values of Y and E, we obtain
    X = [0 1 1 0 1 1 0] ⊕ [0 0 1 0 0 0 0]
or  X = [0 1 0 0 1 1 0]
This is the same as the transmitted code vector.


4.14.2 Syndrome Vector for (n,k) block codes
1. Block Diagram
The Block diagram of a syndrome decoder for (n, k) block codes
for correcting errors is shown in figure 4.8

Figure 4.8 Block diagram of a syndrome decoder for (n, k) block codes (for correcting errors)
(The received code word Y enters an n-bit register and a syndrome calculator S = YH^T; the syndrome S addresses a look-up table of error patterns E, and the selected error vector is added modulo-2 to Y to give the corrected code vector X = Y ⊕ E.)

2. Working Operation
The received n-bit code word Y is stored in an n-bit register. This code vector is then applied to a syndrome calculator to compute the syndrome S = YH^T. In order to obtain the syndrome, the transposed parity check matrix H^T is stored in the syndrome calculator. The (n − k)-bit syndrome vector S is applied to a look-up table containing the error patterns. An error pattern is selected corresponding to the particular syndrome S generated at the output of the syndrome calculator. The selected error pattern E is then added (modulo-2 addition) to the received signal Y to generate the corrected code vector X.
Therefore, X = Y ⊕ E

Problem 1
An error control code has the following parity check matrix

1 0 1 1 0 0 
H = 1 1 0 0 1 0 
0 1 1 0 0 1 

(i) Determine the generator matrix G


(ii) Find the code word that begins with 101
(iii) Decode the received code word 1 1 0 1 1 0 . Comment on
error detection capability of this code.
Solution
From the parity check matrix [H]3×6, it is obvious that this is a (6,3) linear block code. Therefore, n = 6, k = 3 and (n − k) = q = 3.
(i) We know that the parity check matrix is given by
    [H] = [P^T : Iq] of size q × n
or  [H]3×6 = [P^T : I3] of size 3 × 6

Using the given expression for H, we obtain


1 0 1 1 0 0
H = 1 1 0 0 1 0 
0 1 1 0 0 1 
1 0 1
or P = 1
T
1 0 
0 1 1  3×3
Therefore, the coefficient matrix P (the transpose of P^T) is given by
    P = [1 1 0]
        [0 1 1]
        [1 0 1]
We know that the generator matrix is given by,
1 0 0 1 1 0
 
G =  I k : Pk ×(q )  = 0 1 0 0 1 1 
k ×n
0 0 1 1 0 1 
This is the required generator matrix
(ii ) The message vector is given by
M = [1 0 1]
The three parity bits are obtained by using the following
standard expression
Therefore, C = MP
[c1 c 2 c 3 ] = [m1 m2 m3 ] [P ]
1 1 0 
or [c1 c 2 c 3 ] = [m1 m2 m3 ] 0 1 1 
1 0 1 [3× 6]
or c1 = (m1 × 1) ⊕ (m2 × 0 ) ⊕ (m3 × 1) = m1 ⊕ m3
Substituting m1 = 1 and m3 = 1, we obtain
c1 = 1 ⊕ 1 = 0
Similarly, c 2 = (m1 × 1) ⊕ (m2 × 1) ⊕ (m3 × 0 ) = m1 ⊕ m2
Substituting m1 = 1 and m2 = 0, we obtain
c2 = 1 ⊕ 0 = 1
and c 3 = (m1 × 0 ) ⊕ (m2 × 1) ⊕ (m3 × 1) = m2 ⊕ m3
or c3 = 0 ⊕1 = 1
Therefore the parity word is C= [0, 1, 1]
Hence, the complete code word is given by
    X = [1 0 1 : 0 1 1]
         (M)    (C)
(iii) The received code word is Y = [1 1 0 1 1 0].
Therefore, the syndrome is given by S = YH^T.
Substituting for Y and H^T, we obtain

1 1 0 
0 1 1 
 
1 0 1 
S = [1 1 0 1 1 0]  
1 0 0 
0 1 0 
 
0 0 1 
or S = [1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0,1 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 1 ⊕ 0, 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0]
or S = [0, 1, 1]
This is same as the second row of the transpose matrix HT, which
indicates that there is an error in the second bit of the received signal
(i.e.,)
Error

Y=1 1 0 1 1 0

Therefore, the correct code word X = 1 0 0 1 1 0 [Ans]


The correct code word is obtained by replacing the second bit by a 0.
(iv) It is possible to verify that this code has minimum distance dmin = 3. The relation between dmin and the number of errors that can be detected is
    dmin ≥ s + 1
For dmin = 3, we have 3 ≥ s + 1, or s ≤ 2.
This means that up to two errors can be detected. Also dmin ≥ 2t + 1,
or 3 ≥ 2t + 1, or t ≤ 1.
This means that up to one error can be corrected.

Problem 2
Given a (7,4) Hamming code whose generator matrix is given by
1 0 0 0 1 0 1
0 1 0 0 1 1 1 
G=
0 0 1 0 1 1 0
 
0 0 0 1 0 1 1

(i) Find all the code words


(ii) Find the parity check matrix

Solution
(i) First, we obtain the P matrix from the generator matrix.
(ii) Then, we obtain the parity bits for each message vector using the
expression,
C = MP
(iii) Next, we obtain all the possible code words as
X = [M : C]
(iv) Lastly we obtain the transpose of P matrix (i.e.,) PT and obtain the
parity check matrix as : [H] = [PT | In-k]

(i) First let us obtain the P matrix

1 0 0 0 1 0 1
0 1 0 0 1 1 1 
Given generator matrix G =
0 0 1 0 1 1 0
 
0 0 0 1 0 1 1
Therefore, the P matrix is given by
1 0 1 
1 1 1 
P = 
1 1 0 
 
0 1 1  4×3
(ii ) Next we obtain the parity check bits
Thee parity bits can be obtained using the following expressiion:
C = MP
1 0 1 
1 1 1 
or [c1 c 2 c 3 ] = [m1 m2 m3 ] = 1 1 0 
 
0 1 1  4×3
Solving, we obtain
we c1 = m1 ⊕ m2 ⊕ m3
c 2 = m2 ⊕ m3 ⊕ m 4
c 3 = m1 ⊕ m2 ⊕ m 4
Using these equations, we can obtain the parity bits for each
messsage vector. For example, let the message word be
m1m2m3m 4 = 0101
Therefore, we write c1 = 0 ⊕ 1 ⊕ 0 = 1

c2 = 1 ⊕ 0 ⊕1 = 0
c3 = 0 ⊕1 ⊕1 = 0
Hence, the corresponding parity bits arre c1c 2c 3 = 100
Therefore, the compelete code word for the message word 0101 is
given by t

Complete codeword = 0 1 0 1 1 0 0

Message Parity
bits bits
Similarly, we can obtain the codewords for the remaining
message words. All the message vectors, the corresponding parity bits
and codewords are given in table

Table Code vectors for all the message vectors

S.No Message vector M Parity bits c Code words X


m4 m3 m2 m1 c4 c4 c4 X6 X5 X4 X3 X2 X1 X0
1. 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2. 0 0 0 1 1 0 1 0 0 0 1 1 0 1
3. 0 0 1 0 0 1 1 0 0 1 0 0 1 1
4. 0 0 1 1 0 1 0 0 0 1 1 0 1 0
5. 0 1 0 0 0 1 1 0 1 0 0 0 1 1
6. 0 1 0 1 1 1 0 0 1 0 1 1 1 0
7. 0 1 1 0 1 0 0 0 1 1 0 1 0 0
8. 0 1 1 1 0 0 1 0 1 1 1 0 0 1
9. 0 0 0 0 1 1 0 1 0 0 0 1 1 0
10. 1 0 0 1 0 1 1 1 0 0 1 0 1 1
11. 1 0 1 0 0 0 1 1 0 1 0 0 0 1
12. 1 0 1 1 1 0 0 1 0 1 1 1 0 0
13. 1 1 0 0 1 0 1 1 1 0 0 1 0 1
14. 1 1 0 1 0 0 0 1 1 0 1 0 0 0
15. 1 1 1 0 0 1 0 1 1 1 0 0 1 0
16. 1 1 1 1 1 1 1 1 1 1 1 1 1 1
(iv) Lastly, let us obtain the parity check matrix.
The parity check matrix [H] is a 3 × 7 matrix, i.e.,
    H = [P^T : I(n-k)]
The transpose matrix P^T is given by
    P^T = [1 1 1 0]
          [0 1 1 1]
          [1 1 0 1] (3×4)

1 1 1 0 1 0 0 
Therefore, we have H = P T
: I n −k  = 0 1 1 1 0 1 0 
1 1 0 1 0 0 1  3×7
This is the required check matrix.

Problem 3
For a systematic linear block code, the three parity check digits
c4, c5 and c6 are given by
c 4 = m1 ⊕ m2 ⊕ m3
c 5 = m1 ⊕ m2
c 6 = m1 ⊕ m3

(i) Construct generator matrix


(ii) Construct code generated by this matrix
(iii) Determine the error correcting capability.
(iv) Decode the received words 101100 and 000110

Solution
(i) First, we obtain parity matrix P and generator matrix G
(ii) Then we obtain the values C4, C5, C6 for various combinations of m1,
m2, m3 and obtain dmin and from the value of dmin, we calculate the
error detecting and correcting capability.
(iv) Lastly, we decode the received words with the help of syndromes
listed in the decoding table.
(i) First, let us obtain the parity matrix P and the generator matrix G.
We know that the relation between the check (parity) bits, the message bits and the parity matrix P is given by:

[c 4 c 5 c 6 ]1×3 = [m1 m2 m3 ]1×3 [P ]3×3 ... (1)


 P11 P12 P13 
[c 4 c 5 c 6 ]1×3 = [m1 m2 m3 ]1×3 P21 P22 P23  ... ( 2)
P31 P32 P33 
c 4 = P11m1 ⊕ P21m2 ⊕ P31m3 

Therefore, we have c 5 = P12m1 ⊕ P22m2 ⊕ P32m3  ... ( 3 )
c 6 = P13m1 ⊕ P23m2 ⊕ P33m3  
Comparing equation (iii ) with the given n equations for c 4 ,c 5 ,c 6 ,
we obtain

P11 = 1 P12 = 1 P13 = 1


P21 = 1 P22 = 1 P23 = 0
P31 = 1 P32 = 0 P33 = 1
Hence, the parity matrix is:
    P = [1 1 1]
        [1 1 0]
        [1 0 1] (3×3)
This is the required parity matrix. The generator matrix is given by
    G = [Ik : P] = [I3 : P3×3]
    G = [1 0 0 1 1 1]
        [0 1 0 1 1 0]
        [0 0 1 1 0 1]    [Ans]
(ii ) Now let us obtain the codewords
It is given that,
c 4 = m1 ⊕ m2 ⊕ m3
c 5 = m1 ⊕ m2
c 6 = m1 ⊕ m3
Using the equations we can obtain the check bits for various
combinations of the bits m1,m2 and m3 . After that the corresponding
codewords are obtained as shoown in table
For m1m2m3 = 0 0 1, we have
c 4 = m1 ⊕ m2 ⊕ m3 = 0 ⊕ 0 ⊕ 1 = 1
c 5 = m1 ⊕ m2 = 0 ⊕ 0 = 0
    c6 = m1 ⊕ m3 = 0 ⊕ 1 = 1
Therefore c4c5c6 = 101 and the code word is given by
    Code word for m1 m2 m3 = 0 0 1:  (m1 m2 m3 c4 c5 c6) = (0 0 1 1 0 1)
Similarly, the other codewords are obtained. They are listed in table
below
S.No Message Check Code Vector (or) Code

Vector Bit Code words weight W(X)


m1 m2 m3 c4 c5 c6 m1 m2 m3 c4 c5 c6
1. 0 0 0 0 0 0 0 0 0 0 0 0 0
2. 0 0 1 1 0 1 0 0 1 1 0 1 3
3. 0 1 0 1 1 0 0 1 0 1 1 0 3
4. 0 1 1 0 1 1 0 1 1 0 1 1 4
5. 1 0 0 1 1 1 1 0 0 1 1 1 4
6. 1 0 1 0 1 0 1 0 1 0 1 0 3
7. 1 1 0 0 0 1 1 1 0 0 0 1 3
8. 1 1 1 1 0 0 1 1 1 1 1 0 4

(iii) Now, let us find the error correcting capability.
The error correcting capability depends on the minimum distance dmin. From the table, dmin = 3.
Therefore, the number of detectable errors satisfies dmin ≥ s + 1,
or 3 ≥ s + 1, or s ≤ 2.
So, at most two errors can be detected.
Also dmin ≥ 2t + 1, so 3 ≥ 2t + 1, or t ≤ 1.
Thus, at most one error can be corrected.
(iv) Let us now decode the received words.
The first received code word is 101100. But this code word does not exist in the code word table, which shows that an error must be present in the received code vector. Let us represent the received code word as under:
Y1 = [1 0 1 1 0 0]
The syndrome for this codeword is given by,
S = Y1H T

1 1 1 
1 1 0 
 
1 0 1 
or S = [1 0 1 1 0 0]  
1 0 0 
0 1 0 
 
0 0 1 
S = [1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 ⊕ 0,1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0,1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0]
or S = [1 1 0]
Thus, the syndrome of the received word is [1 1 0], which is the same as the second row of H^T (the second syndrome in the decoding table). Hence the corresponding error pattern is
    E = [0 1 0 0 0 0]
and the correct word can be obtained as under:
    X1 = Y1 ⊕ E = [1 0 1 1 0 0] ⊕ [0 1 0 0 0 0]
    X1 = [1 1 1 1 0 0]
This is the corrected transmitted word.
Similarly, we can perform the decoding of 0 0 0 1 1 0.
Let Y2 = 000110 be the second received code word. This too is not a valid code word listed in the code word table. The syndrome for this can be obtained as under:
    S = Y2 H^T
or  S = [0 0 0 1 1 0] [1 1 1]
                      [1 1 0]
                      [1 0 1]
                      [1 0 0]
                      [0 1 0]
                      [0 0 1]
or  S = [1 1 0]

The error pattern corresponding to this syndrome is obtained


from the decoding table as under
E = [0 1 0 0 0 0 ]
X 2 = Y2 ⊕ E = [ 0 0 0 1 1 0 ] ⊕ [ 0 1 0 0 0 0 ]
or X 2 = [0 1 0 1 1 0]

This is the correct transmitted word



Problem 4
For a (6,3) code, the generator matrix G is given by
1 0 0 1 0 1
G = 0 1 0 0 1 1 
0 0 1 1 1 0
(i) Realize an encoder for this code.
Solution
(i) First, we obtain the expressions for the parity bits.
The parity bits can be obtained by using the expression
    C = MP
or  [c1, c2, c3]1×3 = [m1, m2, m3]1×3 [P]3×3
The parity matrix P can be obtained from the generator matrix as under:
1 0 0 1 0 1 
G = 0 1 0 0 1 1 
0 0 1 1 1 0 
1 0 1 
Therefore, [P ] = 0 1 1 
1 1 0  3×3
1 0 1 
Also, [c1,c 2 ,c 3 ] = [m1,m2 ,m3 ] 0 1 1 
1 1 0 
c1 = m1 ⊕ m3 

From above, we get c 2 = m2 ⊕ m3  ... (1)
c 3 = m1 ⊕ m2 
Now, let us draw the encoder.
The encoder is obtained to implement the expresssions given in
equation (1) as shown in figure below

(Encoder: the input sequence fills a message register m3 m2 m1; three modulo-2 adders implement equation (1) and produce the check bits c3 c2 c1 in a parity bit register, and an output switch S sends the message bits followed by the parity/check bits as the code word.)

4.15 CYCLIC CODES


Definition
Cyclic codes are also linear block codes. A binary code is said to
be a cyclic code if it exhibits the following properties:
(i) Linearity Property
(ii) Cyclic Property
(i) Linearity Property
A code is said to be linear if the sum of any two code words is also a code word. This property means that cyclic codes are linear block codes.
(ii) Cyclic Property
A linear block code is said to be cyclic if every cyclic shift of a code word produces another code word. Let (x0, x1, ..., x(n-1)) be an n-bit code word of an (n, k) linear block code. This code word is circularly shifted right by 1 bit at a time; every n-bit word obtained by such circular right shifts is again a code word. This is called the cyclic property of cyclic codes.
4.15.1 Code Word Polynomial
The cyclic property suggests that it is possible to treat the elements of a code word of length n as the coefficients of a polynomial of degree (n − 1). Thus, the code word
    [x0, x1, ..., x(n-1)]
can be expressed in the form of a code word polynomial as under:
    X(p) = x0 + x1·p + x2·p² + ... + x(n-1)·p^(n-1)
where p is an arbitrary real variable and X(p) is a polynomial of degree (n − 1).

4.15.2 Generator Polynomial for the cyclic code


The generator polynomial of a cyclic code is represented by G(p). It is used for generating the cyclic code words from the message bits. The code word polynomial can be written as
    X(p) = M(p)·G(p)
where M(p) = message polynomial of degree (k − 1) or less:
    M(p) = m0 + m1·p + m2·p² + ... + m(k-1)·p^(k-1)
and G(p) = generator polynomial of degree (n − k), given by
    G(p) = 1 + g1·p + g2·p² + ... + g(n-k-1)·p^(n-k-1) + p^(n-k)
The generator polynomial can also be expressed in summation form as
    G(p) = 1 + Σ (i = 1 to n−k−1) gi·p^i + p^(n-k)

This expression is quite useful while realizing the encoder for the cyclic codes. The other important point about the generator polynomial is that its degree is equal to the number of parity (check) bits in the code word.
4.15.3 Generation of Non-Systematic code words
The non-systematic code words can be obtained by multiplication
of message polynomial with the generator polynomial as under:

X1 ( p ) = M 1 ( p ) .G ( p )
X 2 ( p ) = M 2 ( p ) .G ( p )
X 3 ( p ) = M 3 ( p ) .G ( p ) .............and so on.

4.15.4 Generation of systematic code words
There are three steps involved in the encoding process for a systematic (n, k) cyclic code. They are as under:
(i) We multiply the message polynomial M(p) by p^(n-k) to get p^(n-k)·M(p). This multiplication is equivalent to shifting the message bits by (n − k) bit positions.
(ii) We divide the shifted message polynomial p^(n-k)·M(p) by the generator polynomial G(p) to obtain the remainder C(p):
    C(p) = rem[ p^(n-k)·M(p) / G(p) ] = rem[ p^q·M(p) / G(p) ], where q = n − k
(iii) We add the remainder polynomial C(p) to the shifted message polynomial p^(n-k)·M(p) to obtain the code word polynomial X(p):
    X(p) = p^(n-k)·M(p) ⊕ C(p)
The code word polynomial X(p) so obtained is exactly divisible by the generator polynomial G(p).
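These three steps are the familiar CRC-style encoding. The sketch below implements them for bit lists (MSB first); the function names are my own, and the final line reproduces the systematic (7,4) code word worked out in the next problem.

    def gf2_remainder(dividend, divisor):
        """Remainder of GF(2) polynomial division; polynomials as bit lists, MSB first."""
        rem = list(dividend)
        for i in range(len(dividend) - len(divisor) + 1):
            if rem[i]:
                for j, g in enumerate(divisor):
                    rem[i + j] ^= g
        return rem[-(len(divisor) - 1):]          # last (n-k) bits

    def systematic_cyclic_encode(msg, gen):
        """Steps (i)-(iii): X = p^(n-k).M(p) followed by rem[p^(n-k).M(p) / G(p)]."""
        q = len(gen) - 1                          # number of parity bits
        parity = gf2_remainder(list(msg) + [0] * q, gen)
        return list(msg) + parity

    G = [1, 0, 1, 1]                              # G(p) = p^3 + p + 1
    print(systematic_cyclic_encode([0, 1, 0, 1], G))   # -> [0, 1, 0, 1, 1, 0, 0]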

Problem 1

The generator polynomial for a (7,4) Cyclic Hamming code is given


by, G ( p ) = p 3 + p + 1. determine all the systematic and non-systematic
code vectors.

Solution

First, let us obtain the non-systematic code vectors.

(i) This is a (7,4) cyclic Hamming code. Therefore, the message vectors
are going to be 4 bit long. There will be total 24 = 16 message vectors.
Let us consider any message code vector as under

M = (m3 m2 m1 m0 ) = ( 0 1 0 1)

Therefore, the message polynomial is given by

M ( p ) = m3 p 3 + m2 p 2 + m1 p + m0 ... (1)

Substituting the values of m3, m2, m1 and m0, we get

M ( p ) = p2 + 1

(ii) The generator polynomial is given by,

G ( p ) = p3 + p + 1
(iii) The non-systematic cyclic code word is obtained as
    X(p) = M(p)·G(p) = (p² + 1)(p³ + p + 1) = p⁵ + p³ + p² + p³ + p + 1
(or) X(p) = p⁵ + p³(1 ⊕ 1) + p² + p + 1
But 1 ⊕ 1 = 0, so the two p³ terms cancel:
    ∴ X(p) = p⁵ + p² + p + 1 = 0p⁶ + p⁵ + 0p⁴ + 0p³ + p² + p + 1
Note that the degree of the code word polynomial is at most (n − 1) = 6. The code word corresponding to this polynomial is
    X = (0 1 0 0 1 1 1)
This is the code word for the given message word. Similarly, we can obtain the other non-systematic code words.
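The same multiplication can be done on bit lists. The short GF(2) polynomial multiplication below (a sketch, names my own) reproduces the non-systematic code word just obtained; the 7-bit code word is the result padded with a leading 0.

    def gf2_poly_mul(a, b):
        """Multiply two GF(2) polynomials given as bit lists, MSB first."""
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            if ai:
                for j, bj in enumerate(b):
                    out[i + j] ^= bj
        return out

    # M(p) = p^2 + 1 (message 0101), G(p) = p^3 + p + 1
    print(gf2_poly_mul([1, 0, 1], [1, 0, 1, 1]))   # -> [1, 0, 0, 1, 1, 1] = p^5 + p^2 + p + 1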
The systematic code word is obtained as follows.
(i) We multiply M(p) by p^(n-k):
    p^(n-k)·M(p) = p³(p² + 1) = p⁵ + p³ = p⁵ + 0p⁴ + p³ + 0p² + 0p + 0
and divide it by G(p) = p³ + 0p² + p + 1.
The division is carried out as under: the first quotient term is p², and p²·G(p) = p⁵ + p³ + p²; adding this (modulo-2) to the dividend leaves p² + 0p + 0, whose degree is less than that of G(p), so the division ends.
Thus the results of the division are as under:
    Quotient polynomial Q(p) = p² + 0p + 0
    Remainder polynomial C(p) = p² + 0p + 0 ← represents the parity bits
    i.e., C = (1 0 0)
(ii) We obtain the code word polynomial X(p).
The code word polynomial is obtained by adding p^(n-k)·M(p) to the remainder polynomial C(p):
    X(p) = [p^(n-k)·M(p)] ⊕ C(p)
         = [0p⁶ + p⁵ + 0p⁴ + p³ + 0p² + 0p + 0] ⊕ [p² + 0p + 0]
or  X(p) = 0p⁶ + p⁵ + 0p⁴ + p³ + p² + 0p + 0
The code word vector is therefore
    X = (m3 m2 m1 m0 : c2 c1 c0) = (0 1 0 1 : 1 0 0)
S.No | Message bits M = m3 m2 m1 m0 | Systematic code vector X = m3 m2 m1 m0 c2 c1 c0
1 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 1 0 0 0 1 0 1 1
3 0 0 1 0 0 0 1 0 1 1 0
4 0 0 1 1 0 0 1 1 1 0 1
5 0 1 0 0 0 1 0 0 1 1 1
6 0 1 0 1 0 1 0 1 1 0 0
7 0 1 1 0 0 1 1 0 0 0 1
8 0 1 1 1 0 1 1 1 0 1 0
9. 1 0 0 0 1 0 0 0 1 0 1
10. 1 0 0 1 1 0 0 1 1 1 0
11. 1 0 1 0 1 0 1 0 0 1 1
12. 1 0 1 1 1 0 1 1 0 0 0
13. 1 1 0 0 1 1 0 0 0 1 0
14. 1 1 0 1 1 1 0 1 0 0 1
15. 1 1 1 0 1 1 1 0 1 0 0
16. 1 1 1 1 1 1 1 1 1 1 1
4.15.5 Generator and parity check matrices of cyclic codes
Cyclic codes are linear block codes. Therefore, we can define the generator and parity check matrices for cyclic codes as well. The generator matrix G has a size of k × n, i.e., it has k rows and n columns. Let the generator polynomial G(p) be given by
    G(p) = p^q + g(q-1)·p^(q-1) + ... + g2·p² + g1·p + 1
We multiply both sides by p^i, i.e.,
    p^i·G(p) = p^(i+q) + g(q-1)·p^(i+q-1) + ... + g1·p^(i+1) + p^i
where i = (k − 1), (k − 2), ..., 2, 1, 0.
The above equation gives the polynomials for the rows of the generator matrix; it is therefore possible to obtain the generator matrix from this equation.

Problem 1
For a (7,4) cyclic code, determine the generator matrix if
G(p) = 1+ p+ p3
Solution
Here, n = 7 and k = 4, hence q = n − k = 3, and G(p) = 1 + p + p³.
(i) We multiply both sides of G(p) by p^i, i = (k − 1), ..., 1, 0:
    p^i·G(p) = p^(i+3) + p^(i+1) + p^i,  i = (k − 1), ..., 1, 0
But k = 4, so i = 3, 2, 1, 0.
(ii) By substituting these values of i into the above equation we get four different polynomials, which correspond to the four rows of the generator matrix as under:
Row No.1 : i = 3 → p 3G ( p ) = p 6 + p 4 + p 3
Row No.2 : i = 2 → p 2G ( p ) = p 5 + p 3 + p 2
Row No.3 : i = 1 → p G ( p ) = p4 + p2 + p
Row No.4 : i = 0 → G ( p ) = p3 + p + 1
The generator matrix for an (n, k) code is of size k × n. Therefore, for the (7,4) cyclic code, the generator matrix will be a 4 × 7 matrix. The polynomials corresponding to the four rows, written out in full, are therefore as under:
    Row No. 1: i = 3 → p⁶ + 0p⁵ + p⁴ + p³ + 0p² + 0p + 0
    Row No. 2: i = 2 → 0p⁶ + p⁵ + 0p⁴ + p³ + p² + 0p + 0
    Row No. 3: i = 1 → 0p⁶ + 0p⁵ + p⁴ + 0p³ + p² + p + 0
    Row No. 4: i = 0 → 0p⁶ + 0p⁵ + 0p⁴ + p³ + 0p² + p + 1
These polynomials can be converrted into generator matrix G as under
 p 6 p 5 p 4 p 3 p 2 p1 p 0 
 
1 0 1 1 0 0 0
G=0 1 0 1 1 0 0
 
0 0 1 0 1 1 0
0 0 0 1 0 1 1  4×7

This is the required generator matrix.
Cyclic codes are a subclass of linear block codes. Therefore, their code vectors can be obtained by using the generator matrix as under:
    X = MG
where M = 1 × k message vector.

Problem 2
For the generator matrix of the previous example, determine all
the possible code vectors
Solution:
All the code vectors can be obtained by using the following
g
expression:
X = MG
Let consider any 4-bit message vector. M = (m3 m2 m1 m0 ) = (1010 )
1 0 1 1 0 0 0 
0 1 0 1 1 0 0 
Therefore, X = [1010]  
0 0 1 0 1 1 0 
 
0 0 0 1 0 1 1 
Therefore, X = [1⊕0⊕0⊕0, 0⊕0⊕0⊕0, 1⊕0⊕1⊕0, 1⊕0⊕0⊕0, 0⊕0⊕1⊕0, 0⊕0⊕1⊕0, 0⊕0⊕0⊕0]
Therefore, we have X = [1 0 0 1 : 1 1 0]
Similarily, the other code vectors can be obtained.

4.15.6 Systematic form of Generator Matrix


We know that the generator matrix in the systematic form is given by
    G = [Ik : Pk×(n-k)] of size k × n
This means that there are k rows in the generator matrix. Let us represent the row number (in general) by i. The ith row of the generator matrix is represented by
    ith row of G → p^(n-i) + Ri(p), where i = 1, 2, ..., k    ... (1)

Now, we divide p(n-i) by the generator matrix G(p). The result of
the division is expressed as,
p (n − i ) Remainder
= Quotient + ... ( 2)
G ( p) G ( p)
Let Remainder = Ri ( p )
Quotient = Qi ( p )
Substitute this into equation ( 2) , we obtain
p (n − i ) R ( p)
= Qi ( p ) + i ... ( 3 )
G ( p) G ( p)

Simplifying this equation, we obtain


p (n −i ) = Qi ( p ) G ( p ) ⊕ Ri ( p ) : where i = 1, 2,....k ... ( 4 )
In mod-2 additions, the addition and subtraction will yield the
same result.
∴ p (n − i ) ⊕ R i ( p ) = Q i ( p ) G ( p )
From equation (1) the above expression represents the ith row the
systematic generator matrix.

Problem 1
For systematic (7,4) cyclic code, determine the generator matrix
and parity check matrix. Given; G(p)=p3+p+1
Solution
(i) The ith row of the generator matrix is given by the equation
    p^(n-i) ⊕ Ri(p) = Qi(p)·G(p), where i = 1, 2, ..., k    ... (1)
(ii) It is given that the cyclic code is a systematic (7, 4) code.
Therefore, n = 7, k = 4 and (n − k) = 3.
Substituting these values into the above expression, we obtain
    p^(7-i) ⊕ Ri(p) = Qi(p)·(p³ + p + 1),  i = 1, 2, ..., 4
(iii ) With i = 1, the above equation is given by
p 6 ⊕ Ri ( p ) = Qi ( p ) ( p 3 + p + 1) ... ( 2)
Let us obtain the value of Qi ( p ) . The quotient Qi ( p ) can be
obtained by dividing p (n − i ) by G ( p ) as per equation ( 2) . Therefore,
to obtain Qi ( p ) ,
let us divide p 6 by ( p 3 + p + 1) .
The division takes place as under: the quotient term p³ gives p³·G(p) = p⁶ + p⁴ + p³, which (after mod-2 addition) leaves p⁴ + p³; the quotient term p gives p·G(p) = p⁴ + p² + p, leaving p³ + p² + p; and the quotient term 1 gives G(p) = p³ + p + 1, leaving the remainder p² + 1.

Here the quotient polynomial Qi ( p ) = p 3 + p + 1


and the remainder polynomial Ri ( p ) = p 2 + 0 p + 1
Substituting these values into equation (ii ) , we obtain
p 6 ⊕ Ri ( p ) = ( p 3 + p + 1) ( p 3 + p + 1)
= p6 + p4 + p3 + p4 + p2 + p + p3 + p + 1
= p 6 + 0 p 5 + (1 ⊕ 1) p 4 + (1 ⊕ 1) p 3 + p 2 + (1 ⊕ 1) p + 1
= p6 + 0 p5 + 0 p 4 + 0 p3 + p2 + 0 p + 1
∴ 1st Row polynomial ⇒ p 6 + 0 p 5 + 0 p 4 + 0 p 3 + p 2 + 0 p + 1
∴ 1st Row elements ⇒ 1 0 0 0 1 0 1
Using the same procedure, we can obtain the polynomials for the other rows of the generator matrix as under:
    i = 2: 2nd Row polynomial ⇒ p⁵ + p² + p + 1
    i = 3: 3rd Row polynomial ⇒ p⁴ + p² + p
    i = 4: 4th Row polynomial ⇒ p³ + p + 1
These polynomials can be transformed into the generator matrix as under:
p 6 p 5 p 4 p 3 p 2 p1 p 0
Row 1 → 1 0 0 0 1 0 1 
Row 2 → 0 1 0 0 1 1 1 
G=
Row 3 → 0 0 1 0 1 1 0 
 
Row 4 → 0 0 0 1 0 1 1  4×7
This is the required generator matrix.
The parity check matrix is given by
    H = [P^T : I3×3]
The transpose matrix P^T is obtained by interchanging the rows and columns of the P matrix:
    P^T = [1 1 1 0]
          [0 1 1 1]
          [1 1 0 1] (3×4)
Hence the parity check matrix is given by
    H = [1 1 1 0 1 0 0]
        [0 1 1 1 0 1 0]
        [1 1 0 1 0 0 1] (3×7)
This is the required parity check matrix.
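As a consistency check (not part of the original solution), every row of G must be orthogonal over GF(2) to every row of H. The sketch below verifies G·H^T = 0 for the matrices just derived.

    import numpy as np

    G = np.array([[1, 0, 0, 0, 1, 0, 1],
                  [0, 1, 0, 0, 1, 1, 1],
                  [0, 0, 1, 0, 1, 1, 0],
                  [0, 0, 0, 1, 0, 1, 1]])

    H = np.array([[1, 1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 1, 0, 1, 0],
                  [1, 1, 0, 1, 0, 0, 1]])

    print((G @ H.T) % 2)   # all-zero 4x3 matrix: every code word X = MG satisfies X.H^T = 0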

4.15.7 Encoder for cyclic codes


The encoder for an (n, k) cyclic code is shown in figure 4.10. This encoder is useful for generating systematic cyclic codes.
Working Operation of the encoder
The flip-flops (F/F) are used for construction of a shift
register. Operation of all these flip-flops is controlled by an external
clock. The flip-flop contents will get shifted in the direction of the arrow
corresponding to each clock pulse. The feedback switch is connected to
the message input. All the flip-flops are initialized to zero state. First k
message bits are shifted to the transmitter and also shifted into the shift
register. After shifting the k message bits the shift register will contain
the (n-k) parity (or check) bits. Hence after shifting k message bits, the
feedback switch is open circuited and the output switch is thrown to
parity bit position. Now with every shift, the parity bits are transmitted
over the channel. Thus, the encoder generated the code words in the
format shown in figure 4.9
Figure 4.9 Format of the code word (n bits in total: k message bits followed by (n − k) parity bits)


The encoder thus performs the division operations and generates
the remainder. The remainder is nothing but the parity bits. When all
the message bits are shifted out, what remains inside the shift register is
remainder. The encoder also consists of modulo 2 adders. The output
of the coefficient multipliers (i.e.,) g1, g2....etc are added to the flip-flop
outputs to generate the parity bits.
Figure 4.10 Encoder for an (n, k) cyclic code
(An (n − k)-stage feedback shift register of flip-flops with modulo-2 adders; coefficient multipliers g1, g2, ..., g(n-k-1) set the feedback taps, a feedback switch connects the message input, and an output switch sends the message bits and then the parity bits as the code word to the transmitter.)



Problem 1
Draw the encoder for a (7,4) cyclic Hamming code generated by
the generator polynomial G ( p ) = 1 + p + p
3

Solution
The generator polynomial is given by
G ( p ) = p3 + 0 p2 + p + 1 ... (1)
The generator polynomial of an (n ,k ) cyclic code is expressed as
under:
    G(p) = 1 + Σ (i = 1 to n−k−1) gi·p^i + p^(n-k)    ... (2)
For a (7, 4) cyclic Hamming code, n = 7 and k = 4.
Therefore, G(p) = 1 + Σ (i = 1 to 2) gi·p^i + p³
Hence, G(p) = p³ + g2·p² + g1·p + 1    ... (3)
Comparing equations (1) and (3), we obtain
g1 = 1 and g 2 = 0
Thus the encoder for a ( 7, 4 ) Hamming code is shown in figure below

Encoder for the (7,4) cyclic Hamming code
(A 3-stage feedback shift register with taps g1 = 1 and g2 = 0; the feedback switch connects the message input, and the output switch sends the message bits followed by the parity bits as the code word to the transmitter.)
4.15.8 Syndrome calculator for cyclic codes
Figure shows the syndrome calculator, where there are (n-
k) stages of the feedback shift register to generate (n-k) syndrome
vector. Initially, the output switch is connected to position 1 and all
the flip-flops are in their Reset mode. As soon as all the received bits
are shifted into the shift register, its contents will contain the desired (n − k)-bit syndrome vector S. Once we know the syndrome S, we can determine the corresponding error pattern E and then make the appropriate corrections. After shifting all the incoming bits of the signal Y, the output
switch is transferred to position 2 and clock pulses are applied to the
shift register to out the syndrome. The following example will make the
concept of syndrome calculation obvious

Figure 4.11 Syndrome calculator for (n, k) cyclic codes
(The received code vector Y is shifted into an (n − k)-stage feedback shift register with taps g1, g2, ..., g(n-k-1); the register contents S0, S1, ..., S(n-k-1) form the syndrome output.)


4.15.9 Decoder for cyclic codes
Once the syndrome is calculated, an error pattern E is detected
corresponding to this syndrome.

Figure 4.12 General form of a decoder for cyclic codes
(The received vector enters, through switches Sin, a buffer register and a syndrome register with feedback connections; the syndrome drives an error pattern detector (combinational logic circuit), and through switches Sout the detected error pattern is added modulo-2 to the buffered received vector to give the corrected code vector X.)
This error vector is then added (modulo-2-addition) to the received
code word Y, to get the corrected code vector X at the output.
Therefore, corrected code vector X ′ = Y ⊕ E
Working operation of the decoder
The switches Sin are closed and Sout are opened. The received
code vector Y is then shifted to the buffer register and syndrome
register. After shifting all the n bits of received code vector Y, the
syndrome register holds the corresponding syndrome vector. Then,
the contents of the syndrome register are given to the error pattern
detector. A particular error pattern will be detected for each syndrome
vector present in the syndrome register. Then the switches Sin are opened
and Sout are closed. The contents of the buffer register, error register and

syndrome register are then shifted. The received code vector Y (which is
stored in the buffer register), is then added with the error vector E (which
is stored in the error register) bit by bit to obtain the corrected code word
X at decoder output.
Advantages of Cyclic codes
The advantage of cyclic codes over most of the other codes are as
under:
(i) They are easy to encode
(ii) They possess a well defined mathematical structure which has led to
development of very efficient decoding schemes for them
(iii) The methods that are to be used for error detection and correction
are simpler and easy to implement.
(iv) These methods do not require look-up table decoding
(v) It is possible to detect the error bursts using the cyclic codes.
Drawbacks of cyclic codes
Even though the error detection is simpler, the error correc-
tion is slightly more complicated. This is due to the complexity of the
combinational logic circuit used for error correction.

4.16 CONVOLUTIONAL CODES

The main difference between the block codes and the convolu-
tional (or recurrent) codes may be listed below
(i) Block codes
In block codes, the block of n bits generated by the encoder in a particular time unit depends only on the block of k message bits within that time unit. Generally, the values of k and n are large.
(ii) Convolutional code
In the convolutional codes, the block of n bits generated by the
encoder at given time depends not only on the k message bits within that
time unit, but also on the preceding ‘L’ blocks of the message bits (L>1).
Generally, the values of k and n will be small.
Application of convolutional code
Like block codes, the convolutional codes can be designed to
either detect or correct errors. However, because data is usually
retransmitted in blocks, the block codes are more suitable for error
detection and the convolutional codes are more suitable for error
correction.

Encoding and decoding


Encoding of the convolutional codes can be accomplished using
simple shift register. Several practical methods have been developed for
decoding. The convolutional codes perform as well or better than the
block codes in many error control applications
4.16.1 Convolutional Encoder
Figure 4.13 Convolutional encoder
(The message input enters the current-bit position m of a shift register whose previous two bits m1 and m2 form the state; two modulo-2 adders produce x1 and x2, and a commutator switch interleaves them into the encoded bit stream.)


• Whenever a message bit is entered into position m, the new values of x1 and x2 are computed depending upon m, m1 and m2. Here m1 and m2 store the previous two successive message bits.
• The convolutional encoder of figure 4.13 has n = 2, k = 1 and L = 2. It therefore generates n = 2 encoded bits per message bit as under:
    x1 = m ⊕ m1 ⊕ m2
    x2 = m ⊕ m2    ... (1)
• The commutator switch selects these encoded bits alternately to produce the stream of encoded bits as under:
    X = x1 x2, x1' x2', x1'' x2'', ...    ... (2)
• The output bit rate is twice the input bit rate. (A small sketch of this encoder is given below.)
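A small behavioural sketch of this encoder in Python follows (the function name and interface are my own); the generator taps (1,1,1) and (1,0,1) correspond directly to x1 = m ⊕ m1 ⊕ m2 and x2 = m ⊕ m2 of equation (1).

    def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
        """Rate-1/2 convolutional encoder: two mod-2 adders over (m, m1, m2)."""
        m1 = m2 = 0                      # shift-register contents, initially zero
        out = []
        for m in bits:
            window = (m, m1, m2)
            x1 = sum(g * b for g, b in zip(g1, window)) % 2
            x2 = sum(g * b for g, b in zip(g2, window)) % 2
            out += [x1, x2]              # commutator interleaves x1 and x2
            m1, m2 = m, m1               # shift the register
        return out

    print(conv_encode([1, 0, 0, 1, 1]))  # encoded stream for the message 1 0 0 1 1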
4.16.2 Important Definitions
1. The code rate (r)
The code rate of the encoder of figure 4.13 is expressed as
    r = k/n
Here, k = number of message bits taken at a time = 1
      n = number of encoded bits per message bit = 2
Therefore, r = 1/2.

2. Constraint length (k)
It is defined as the number of shifts over which a single message bit can influence the output of the encoder. For the encoder of figure 4.13, the constraint length is k = 3 bits, since a single message bit influences the encoder output for three successive shifts. At the fourth shift it has no effect on the output.
3. Code Dimension
The code dimension of a convolutional code depends on n and
k. Here k represents the number of message bits taken at a time by
the encoder, n is the number of encoded bits per message bit. The code
dimension therefore represented by (n,k).
4.16.3 Analysis of convolutional encoder
4.16.3.1 Time Domain approach
The time-domain behaviour of a binary convolutional encoder may be
defined in terms of n impulse responses. Let the impulse response of the
adder generating x1 in figure 4.13 be given by the sequence
{g0(1), g1(1), ............, gL(1)}. Similarly, let the sequence {g0(2), g1(2), ............, gL(2)}
denote the impulse response of the adder generating x2. These
impulse responses are also called the generator sequences of the code.
Let (m0, m1, m2, .....) denote the message sequence entering the encoder
one bit at a time (starting from m0). The encoder generates two
output sequences by performing convolutions of the message sequence
with the impulse responses. The bit sequence x1 is given by

xi(1) = Σ gl(1) mi−l ,   l = 0 to L,   i = 0, 1, 2, ....            ... (1)

Similarly, the other bit sequence x2 is given by

xi(2) = Σ gl(2) mi−l ,   l = 0 to L,   i = 0, 1, 2, ....            ... (2)

Then these bit sequences are multiplexed with the help of the
commutator switch to produce the following output:

X = {x0(1) x0(2)  x1(1) x1(2)  x2(1) x2(2)  ...}

where x(1) = {x0(1), x1(1), x2(1), .....}
and   x(2) = {x0(2), x1(2), x2(2), .....}
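A small Python sketch of this convolution view is given below; the helper names conv_output and conv_encode are illustrative only.

def conv_output(message, g):
    # one output stream: x_i = sum over l of g_l * m_(i-l)  (mod 2)
    return [sum(g[l] * message[i - l]
                for l in range(len(g)) if 0 <= i - l < len(message)) % 2
            for i in range(len(message) + len(g) - 1)]

def conv_encode(message, generators):
    streams = [conv_output(message, g) for g in generators]
    # interleave the streams: x0(1) x0(2) x1(1) x1(2) ...
    return [bit for column in zip(*streams) for bit in column]

print(conv_encode([1, 0, 0, 1, 1], [[1, 1, 1], [1, 0, 1]]))
# -> [1,1, 1,0, 1,1, 1,1, 0,1, 0,1, 1,1]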
Problem 1
From the convolutional figure below determine the following
(i) Dimension of the code
(ii) Code rate
(iii) Constraint length
(iv) Generating sequence
(v) The encoded sequence for the input message (10011)

[Figure: the message input feeds a two-stage shift register (FF1 holding m1, FF2 holding m2); the upper mod-2 adder sums m, m1 and m2, the lower mod-2 adder sums m and m2, and a commutator switch interleaves the two adder outputs.]
Solution
The given encoder can be drawn in standard form as shown
Given message sequence (m0m1m2m3m 4 ) = (10011)

[Figure: encoder redrawn in standard form. Register stages m0, m1, m2 (FF1, FF2); the top adder has taps g0(1) = 1, g1(1) = 1, g2(1) = 1 and produces x1 = xi(1); the bottom adder has taps g0(2) = 1, g1(2) = 0, g2(2) = 1 and produces x2 = xi(2); the commutator interleaves x1 and x2 to form the encoded output.]
(i) Dimension of the code


Note that encoder takes one message bit at a time. Hence k =1. It
generates two bits for every message bit
Therefore, n = 2 so dimension = (n, k) = (2,1).
(ii) Code rate


Code rate r = k / n = 1/2

(iii) Constraint length (k)


Observe that every message bit affects the output bits for three
successive shifts; hence the constraint length is K = 3 bits.

(iv) Generating sequence (or) Impulse response


In the figure, x1 (or xi(1)) is obtained by adding all three register bits; hence the
generating sequence gi(1) is given by

gi(1) = {1, 1, 1}
where g0(1) = 1 indicates the connection of bit m
      g1(1) = 1 indicates the connection of bit m1
      g2(1) = 1 indicates the connection of bit m2
Similarly, x2 (or xi(2)) is obtained by adding the first and last bits. Hence the
generating sequence is given by
gi(2) = {1, 0, 1}
where g0(2) = 1 indicates the connection of bit m, g1(2) = 0 indicates that m1
is not connected, and g2(2) = 1 indicates the connection of bit m2.
(v) The output sequence may be obtained as follows:
(a) To obtain the bit stream xi(1)
We have   xi(1) = Σ gl(1) mi−l ,   l = 0 to L,   i = 0, 1, 2, ...
Substituting i = 0, we get
x0(1) = g0(1) m0 = 1 × 1 = 1                    [Here g0(1) = 1 and m0 = 1]
Similarly, substituting i = 1, we get
x1(1) = g0(1) m1 + g1(1) m0
      = (1 × 0) + (1 × 1) = 1                   (mod-2 addition)
We can obtain the other code bits in a similar manner as under:
x2(1) = g0(1) m2 + g1(1) m1 + g2(1) m0
      = (1 × 0) + (1 × 0) + (1 × 1)
      = 0 + 0 + 1 = 1                           (mod-2 addition)
x3(1) = g0(1) m3 + g1(1) m2 + g2(1) m1
      = (1 × 1) + (1 × 0) + (1 × 0) = 1
x4(1) = g0(1) m4 + g1(1) m3 + g2(1) m2
      = (1 × 1) + (1 × 1) + (1 × 0)
      = 1 + 1 + 0 = 0                           (mod-2 addition)
x5(1) = g1(1) m4 + g2(1) m3                     [m5 is not available]
      = (1 × 1) + (1 × 1)
      = 1 + 1 = 0                               (mod-2 addition)
x6(1) = g2(1) m4 = 1 × 1 = 1
Hence, the code bits obtained at the output of the top adder are
x0(1) x1(1) x2(1) x3(1) x4(1) x5(1) x6(1) = (1 1 1 1 0 0 1)

(b) To obtain the bit stream xi(2)

The bit stream xi(2) is obtained at the output of the bottom adder. It is given by
xi(2) = Σ gl(2) mi−l ,   l = 0 to L,   i = 0, 1, 2, 3, ...
In this equation we substitute gl(2) = g0(2), g1(2) or g2(2) (i.e., l = 0, 1, 2) and
mi−l = m0, m1, .....
Substituting values of i, we get
x0(2) = g0(2) m0 = 1 × 1 = 1
Similarly, substituting i = 1, we get
x1(2) = g0(2) m1 + g1(2) m0 = (1 × 0) + (0 × 1) = 0 + 0 = 0
Similarly, we can obtain the remaining code bits as under:
x2(2) = g0(2) m2 + g1(2) m1 + g2(2) m0
      = (1 × 0) + (0 × 0) + (1 × 1) = 1
x3(2) = g0(2) m3 + g1(2) m2 + g2(2) m1
      = (1 × 1) + (0 × 0) + (1 × 0) = 1
x4(2) = g0(2) m4 + g1(2) m3 + g2(2) m2
      = (1 × 1) + (0 × 1) + (1 × 0) = 1
x5(2) = g1(2) m4 + g2(2) m3
      = (0 × 1) + (1 × 1) = 1
x6(2) = g2(2) m4 = 1 × 1 = 1
Hence, the code bits obtained at the output of the bottom adder are
x0(2) x1(2) x2(2) x3(2) x4(2) x5(2) x6(2) = (1 0 1 1 1 1 1)

4. The output encoded sequence


By interleaving the code bits at the outputs of the two adders, we
can get the encoded sequence at the encoder output as under:
Encoded sequence = x0(1) x0(2)  x1(1) x1(2)  ...  x6(1) x6(2)
Substituting the values, we get
Codeword = [11 10 11 11 01 01 11]
Note that the message sequence of length k = 5 bits produces an output
coded sequence of length 14 bits.
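The same result can be checked quickly with numpy, since each output stream is simply the mod-2 convolution of the message with a generator sequence (a sketch, not part of the original solution):

import numpy as np

m  = np.array([1, 0, 0, 1, 1])        # message (m0 ... m4) = 1 0 0 1 1
g1 = np.array([1, 1, 1])              # impulse response of the top adder
g2 = np.array([1, 0, 1])              # impulse response of the bottom adder

x1 = np.convolve(m, g1) % 2           # -> [1 1 1 1 0 0 1]
x2 = np.convolve(m, g2) % 2           # -> [1 0 1 1 1 1 1]
codeword = np.ravel(np.column_stack((x1, x2)))
print(codeword)                       # -> 1 1 1 0 1 1 1 1 0 1 0 1 1 1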
4.16.3.2 Transform domain approach
We know that the convolution in time domain is transformed into
the multiplication of Fourier transforms in the frequency domain. We
can use this principle in the transform domain approach.
In this process, the first step is to replace each path in the
encoder by a polynomial whose coefficients are the
respective elements of the impulse response.
The input top adder output path of the encoder can be expressed
in terms of the polynomial as under:

G (1) ( p ) = g 0(1) + g1(1) p + g 2(1) p 2 + .... + g L (1) p L ... (1)

The variable p denotes a unit delay operation and the power of


p defines the number of time units by which the associated bit in the
impulse response has been delayed with respect to the first bit (i.e.) g0(1)
Similarly the polynomial corresponding to the input-bottom ad-
der-output path for the encoder is given by
G (2) ( p ) = g 0(2) + g1(2) p + g 2(2) p 2 + .... + g L (2) p L ... ( 2)

The polynomials G(1)(p) and G(2)(p) are called the generator
polynomials of the code.
From the generator polynomials, we can obtain the codeword
polynomial as under
Code word polynomial corresponding to top adder is given by
x (1) ( p ) = G (1) ( p ) .m ( p )
where m ( p ) = Message polynomial
=m0 + m1 p + m2 p 2 + ....m L −1 p L −1
and the code word polynomial corresponding to the bottom adder is
given by x(2)(p) = G(2)(p) · m(p). Once we get the code polynomials, it is
possible to obtain the corresponding output sequences simply from
the individual coefficients. This has been illustrated in the following
problem.

Problem 1
Determine the codeword for the convolutional encoder of the previous problem
for the message signal (1 0 0 1 1), using the transform domain approach. The
impulse response of the input–top adder–output path is (1, 1, 1) and that
of the input–bottom adder path is (1, 0, 1).
Solution
First, let us write the generator polynomial G(1)(p)
The impulse response of the input–top adder–output path of the
convolutional encoder is (1, 1, 1). Therefore, we have

g 0(1) = 1
g1(1) = 1
and g 2(1) = 1

Therefore, the generator polynomial G(1)(p) is given by,


G(1)(p) = g0(1) + g1(1) p + g2(1) p²
or G(1)(p) = 1 + p + p²                              ... (1)

The given message is (m0 m1 m2 m3 m4) = (1 0 0 1 1). Therefore, the
message polynomial is given by
M(p) = m0 + m1 p + m2 p² + m3 p³ + m4 p⁴
or M(p) = 1 + p³ + p⁴                                ... (2)
Now, we find the code word polynomial for the top adder:
x(1)(p) = G(1)(p) · M(p)
        = (1 + p + p²)(1 + p³ + p⁴)
        = 1 + p³ + p⁴ + p + p⁴ + p⁵ + p² + p⁵ + p⁶
        = 1 + p + p² + p³ + (1+1)p⁴ + (1+1)p⁵ + p⁶
or x(1)(p) = 1 + p + p² + p³ + p⁶      (addition is mod-2)        ... (3)

Now, we obtain the generator polynomial G(2)(p)


The impulse response of the input–bottom adder–output path of the
convolutional encoder is (1, 0, 1). Therefore,
g0(2) = 1,  g1(2) = 0,  and  g2(2) = 1
Therefore, the generator polynomial G(2)(p) is given by
G(2)(p) = g0(2) + g1(2) p + g2(2) p²
or G(2)(p) = 1 + p²                                  ... (4)

The codeword polynomial for the bottom adder is given by
x(2)(p) = G(2)(p) · M(p)
Substituting equations (2) and (4), we get
x(2)(p) = (1 + p²)(1 + p³ + p⁴)
or x(2)(p) = 1 + 0·p + p² + p³ + p⁴ + p⁵ + p⁶        (addition is mod-2)
Next, let us obtain the code sequences from the coefficients of the
code word polynomials.
From x(1)(p) = 1 + p + p² + p³ + p⁶, the code sequence at the output of the
top adder is (1 1 1 1 0 0 1).
From x(2)(p) = 1 + 0·p + p² + p³ + p⁴ + p⁵ + p⁶, the code sequence at the
output of the bottom adder is (1 0 1 1 1 1 1).
Interleaving the two sequences gives

Codeword = 11 10 11 11 01 01 11

Advantage of Transform Domain Approach


Here, it may be noted that the code words obtained using the time-domain
approach and the transform-domain approach are identical. However, the
computation using the transform-domain approach demands less effort
than the time-domain approach.
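Because polynomial multiplication over GF(2) is just the mod-2 convolution of the coefficient lists, the transform-domain computation can be sketched in a few lines of Python (the helper name gf2_polymul is illustrative):

def gf2_polymul(a, b):
    # product of two GF(2) polynomials; coefficients ordered [p^0, p^1, ...]
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

G1 = [1, 1, 1]        # G1(p) = 1 + p + p^2
G2 = [1, 0, 1]        # G2(p) = 1 + p^2
M  = [1, 0, 0, 1, 1]  # M(p)  = 1 + p^3 + p^4

print(gf2_polymul(G1, M))   # [1, 1, 1, 1, 0, 0, 1] -> x(1)(p) = 1 + p + p^2 + p^3 + p^6
print(gf2_polymul(G2, M))   # [1, 0, 1, 1, 1, 1, 1] -> x(2)(p) = 1 + p^2 + p^3 + p^4 + p^5 + p^6

The coefficient lists are exactly the code sequences found above.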
4.16.4 Graphical representation for convolutional encoding
For convolutional encoding, there are three different graphical
representations that are widely used. They are related to each other and
are as under:
(i) The code tree
(ii) The code trellis
(iii) The state diagram
States of the encoder


The two previous message bits stored in m1 and m2 decide the state.
The incoming message bit m changes the state of the encoder and changes
the outputs x1 and x2. When a new message bit is entered into m, the
contents of m1 and m2 define the new state, and according to the new state
the outputs x1 and x2 also change.
Assume the initial values of m1 and m2 are zero, (i.e.,) m1 m2 = 0 0,
and the states are labelled as follows:

m2 m1 State of
encoder
0 0 a
0 1 b
1 0 c
1 1 d
4.16.4.1 THE CODE TREE
Let us draw the code tree for (2, 1) encoder. We assume that the
register has been cleared so that it contains all zeros, (i.e.,) Initial state
m2 m1 = 0 0. Let us consider the input message sequence m = 0 1 0.
(i) When the input message bit m = 0 (first bit)
x1 = m ⊕ m1 ⊕ m2
=0+0+0
=0
x 2 = m ⊕ m2 = 0 + 0 = 0
Therefore
x1x 2 = 0 0
(Register contents before the shift: m m1 m2 = 0 0 0, giving x1 x2 = 0 0; after the shift the register again holds 0 0 and the oldest bit is discarded.)

Figure 4.14 Code tree


The values of x1 x2 = 0 0 are transmitted to output and register
contents are shifted to right by one bit. The new state is formed. The
code tree has been drawn in figure 4.14. It begins at a branch point on
node ‘a’ which represents the initial state. Hence if m =0, we should take
the upper branch from node ‘a’ to obtain the output x1 x2 = 0 0. The new
state of the encoder is m2 m1 = 0 0 (or) a.
(ii) When the input message bit m =1 (second bit)
When m = 1, x1 and x 2 can be determined as,
x1 = m ⊕ m1 ⊕ m2
= 1⊕ 0 ⊕ 0 = 1
x 2 = m ⊕ m2
= 1⊕ 0 = 1
(Register contents before the shift: m m1 m2 = 1 0 0, giving x1 x2 = 1 1; after the shift the register holds m1 m2 = 1 0, i.e., m2 m1 = 0 1.)

The values of x1 x2 = 1 1 are transmitted to output and contents


of the register are shifted to right by one bit. The next state is formed.
Now m2 m1 = 01 (i.e.,) b state. Hence we should take the lower branch,
(since m = 1) from a to b to obtain output x1 x2 = 11. This operation is
illustrated in table in second row
(iii) When the input message bit m = 0 (Third bit)

when m = 0, x1 and x 2 can be found as,


x1 = m ⊕ m1 ⊕ m2
= 0 ⊕1⊕ 0 = 1
x 2 = m ⊕ m2
=0+0 =0

(Register contents before the shift: m m1 m2 = 0 1 0, giving x1 x2 = 1 0; after the shift the register holds m1 m2 = 0 1, i.e., m2 m1 = 1 0.)
[Code tree: starting from node a, the upper branch at every node corresponds to m = 0 and the lower branch to m = 1; each branch is labelled with the output bits x1 x2, and the node reached gives the new state.]

Figure 4.15 Code tree for the (2,1) encoder

The values of x1 x2 = 1 0 are transmitted to output and


register contents are shifted right by one bit. The next state is formed.
Now m2 m1 = 1 0 (i.e.,) c state. For 3rd bit, as m = 0 we should take the
upper branch from node b to node c to obtain the output x1 x2 = 1 0. The
complete code tree for convolutional encoder is shown in figure 4.15
4.16.4.2 Code Trellis


Figure shows a more compact graphical representation which is
popularly known as code trellis. Here, the nodes on the left denote the
four possible current states and the nodes on the right are the resulting
next state.
Current state        Input m = 0 (solid line)        Input m = 1 (dotted line)
a = 00               output 00, next state a         output 11, next state b
b = 01               output 10, next state c         output 01, next state d
c = 10               output 11, next state a         output 00, next state b
d = 11               output 01, next state c         output 10, next state d

Figure 4.16 Code trellis for the (2,1) convolutional encoder
A solid line represents the state transition of branch m = 0 and
dotted line represents the branch m =1. Each branch is labelled with the
resulting output bits x1 x2.
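The trellis can also be generated directly from the encoder equations; the short sketch below (state labels and print format are illustrative) tabulates, for every current state (m2, m1) and input m, the output x1x2 and the next state:

STATE = {(0, 0): 'a', (0, 1): 'b', (1, 0): 'c', (1, 1): 'd'}

for (m2, m1), label in STATE.items():
    for m in (0, 1):
        x1 = m ^ m1 ^ m2                 # top adder output
        x2 = m ^ m2                      # bottom adder output
        nxt = STATE[(m1, m)]             # next state: (m2', m1') = (m1, m)
        print(f"{label} --m={m}--> {nxt}   output x1x2 = {x1}{x2}")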
4.16.4.3 State Diagram
Figure 4.17 shows the state diagram for the encoder. We can
obtain this state diagram from the code trellis by coalescing the left and
right sides of the trellis. The self-loops at the nodes a and d represent the
state transitions a-a and d-d.
[State diagram: nodes a, b, c and d, with self-loops at a and d; each transition is labelled with the resulting output code word x1 x2.]

Figure 4.17 State diagram for the (2,1) convolutional encoder
A solid line represents the state transition for m0 = 0 and the
dotted line represents the state transition for m0 = 1. Each branch is
labelled with the resulting output bits x1 x2.

4.17. DECODING METHODS OF CONVOLUTIONAL CODES

There are three methods for decoding the convolutional codes.


They are as under:
(i) Viterbi Algorithm
(ii) Feedback decoding
(iii) Sequential decoding
Step-by-step encoding of the message sequence m = 0 1 0 (in the code tree, an upward branch indicates input 0):

S.No   Input    Register after       Outputs                          New state    Transmitted
       bit m    entry of m           (x1 = m⊕m1⊕m2, x2 = m⊕m2)        (m2 m1)      output x1 x2
1      0        m m1 m2 = 0 0 0      x1 = 0⊕0⊕0 = 0, x2 = 0⊕0 = 0     0 0 (a)      0 0
2      1        m m1 m2 = 1 0 0      x1 = 1⊕0⊕0 = 1, x2 = 1⊕0 = 1     0 1 (b)      1 1
3      0        m m1 m2 = 0 1 0      x1 = 0⊕1⊕0 = 1, x2 = 0⊕0 = 0     1 0 (c)      1 0

After each step the corresponding branch is added to the code tree (for m = 0, 0 1 and 0 1 0).
4.17.1 Viterbi Algorithm (Maximum Likelihood Decoding)


The Viterbi algorithm operates on the principle of maximum
likelihood decoding and achieves optimum performance. The maximum
likelihood decoder has to examine the entire received sequence Y and
find a valid path which has the smallest Hamming distance from Y. But
there are 2^N possible paths for a message sequence of N bits. Before
describing the Viterbi algorithm for the decoding of convolutional codes,
it is necessary to define certain important terms.
Metric
It is defined as the Hamming distance of each branch of
each surviving path from the corresponding branch of Y (received
signal). The metric is defined by assuming that 0’s and 1’s have the
same transmission error probability. It is the discrepancy between the
received signal Y and the decoded signal at particular node.
Surviving path
(i) Let the received signal be represented by Y. The viterbi decoder as-
signs a metric to each branch of each surviving path.
(ii) By summing the branch metrices we get the path metric. To under-
stand more about viterbi algorithm, let us solve the example given
below.
Problem 1
Given the convolutional encoder of figure 4.13 and a received signal
Y = 11 01 11, show the first three branches of the valid paths
emerging from the initial node a0 in the code trellis.
Solution
The received or input signal Y = 11 01 11.
Let us consider the trellis diagram of figure for the convolutional
encoder. It shows that for the current a, the next state will be a or b
depending on the message bit m0 = 0 or 1. We have redrawn these two
branches in figure where a0 represents the initial state and a1 and b1
represent the next possible states. The solid line represents the branch
for m0=0 and dotted line represents the branch for m0=1.

[First step: from the initial node a0, the solid branch a0–a1 (m0 = 0, encoded output 00) has branch metric (2) against the received pair Y = 11, and the dotted branch a0–b1 (m0 = 1, encoded output 11) has branch metric (0). The branch metric is the difference between the encoded signal and the corresponding received signal Y, and the encircled number at the end of each branch is the running path metric obtained by adding the branch metrics from a0.]
Figure 4.18 First step in Viterbi’s algorithm
In figure , the number in the brackets, written below the
branches represent the branch metric which is obtained by taking the
difference between the encoded signal and corresponding received signal
Y. For example, for the branch a0 to a1, the encoded signal is 00 and the received
signal is 11; the discrepancy between these two signals is 2, hence the
branch metric is (2). For the path a0–b1, the encoded signal (11) is the same
as the received signal, hence the branch metric is (0).
The encircled numbers at the right hand end of each branch
represents the running path metric which is obtained by summing the
branch metrices from a0. For example, the running path metric for the
branch a0 a1 =2 and that for the branch a0b1 is 0.
When the next part of inputs bits (i.e.,) Y=01 are received at the
nodes a1 and b1, then four possible branches emerge from these two
nodes are as shown in figure . The next four possible states are a2, b2,
c2 and d2.
The numbers in the brackets written below each branch repre-
sent the branch metric. For example, for branch a1 a2, the branch metric
is (1) which is obtained by taking the difference between the encoded
signal 00 and received signal 01. The running path metric for the same
branch is 3 which is obtained by adding the branch metrices from a0
[(2)+(1)=3 from a0 to a1 and a1 to a2]
[Second step: when the next pair of received bits Y = 01 arrives, four branches emerge from nodes a1 and b1 towards the next states a2, b2, c2 and d2; the branch metrics are shown in brackets and the running path metrics at the ends of the branches.]

Figure 4.19 Second step in the Viterbi algorithm

Similarly, the path metric for the path a0 a1 b2 is 3, that of the path
a0 b1 d2 is 0, and so on. The Viterbi algorithm applied to the first three
received pairs Y = 11, 01, 11 is shown in figure 4.20.

[All paths for Y = 11 01 11: branch metrics are shown in brackets and the running path metrics are marked at the nodes a3, b3, c3 and d3.]

Figure 4.20 Paths and their path metrics for the Viterbi algorithm
Choosing the Paths of Smaller Metric


1. There are different paths shown in figure . We must carefully see the
path metric for each path. For example, the path a0 a1 a2 a3 has the
path metric equal to 5. The other paths and path metric are as listed
in the table
Table 4.1

S.No    Path                         Path Metric    Decision
1.      a0 a1 a2 a3                  5              ✗
2.      a0 a1 a2 b3 (survivor)       3              ✓
3.      a0 a1 b2 c3                  4              ✗
4.      a0 a1 b2 d3                  4              ✗
5.      a0 b1 c2 a3 (survivor)       2              ✓
6.      a0 b1 c2 b3                  4              ✗
7.      a0 b1 d2 c3 (survivor)       1              ✓
8.      a0 b1 d2 d3 (survivor)       1              ✓

Now, observe the two paths arriving at node b3; the path a0 a1 a2 b3
has the smaller metric (3).

2. Hence, regardless of what happens subsequently, this path will have
a smaller Hamming distance from Y than the other path arriving at
b3. Thus, this path is more likely to represent the actual transmitted
sequence. Therefore, we discard the larger-metric paths arriving at the
nodes a3, b3, c3 and d3, leaving in total 2^kL = 4 surviving paths, marked
with a (✓) sign in table 4.1.
The paths marked by a (✗) in the table and the figure are larger-metric
paths and hence they are discarded; the path with the smaller metric
arriving at each node is declared the survivor at that node. Note that
there is one survivor for each node (table 4.1).
Important Point: Thus the surviving paths are a0 a1 a2 b3, a0 b1 c2 a3,
a0 b1 d2 c3 and a0 b1 d2 d3. None of the surviving path metrics is equal
to zero. This shows the presence of detectable errors in the received signal Y.
Figure 4.22 depicts the continuation of figure 4.20 for a complete
message of N = 12 bits. All the discarded branches and all labels except
the running path metrics have been omitted for the sake of simplicity.
If two paths have the same metric, either of them may be
continued; under such circumstances the choice of survivor is arbitrary.
The maximum likelihood path follows the thick line from a0 to a12, as
shown in figure 4.22. The final value of the path metric is 2, which shows
that there are at least two transmission errors present in Y.
Y   = 11 01 11 00 01 10 00 11 11 10 11 00

[Trellis for the complete message: the maximum-likelihood path is shown as a thick line from a0 to a12, with the running path metrics marked at the nodes.]

Y+E = 11 01 01 00 01 10 01 11 11 10 11 00    (assumed transmitted sequence)
M   = 1  1  0  1  1  1  0  0  1  0  0  0     (decoded signal)

Figure 4.22 Complete sequence for a message of N = 12 bits


The decoder assumes that the corresponding transmitted sequence is
Y + E; this sequence and the decoded message sequence M have been
written below the trellis.

From figure 4.22, we observe that at node a12 only one path has
arrived, with metric 2. This path is shown by a dark line. Since
this path has the lowest metric, it is the surviving path and the signal Y is
decoded from this path. Wherever this path follows a solid line the message
bit is 0, and wherever it follows a dotted line the message bit is 1.
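A compact hard-decision Viterbi decoder for this rate-1/2, constraint-length-3 encoder can be sketched as below. It keeps one survivor per state and returns the lowest-metric path at the end; the function name and the assumption that decoding starts in state a = 00 are illustrative choices, not taken from the text.

def viterbi_decode(received):
    # state = (m2, m1); branch(state, m) -> (next_state, (x1, x2))
    def branch(state, m):
        m2, m1 = state
        return (m1, m), (m ^ m1 ^ m2, m ^ m2)

    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: float('inf') for s in states}
    metric[(0, 0)] = 0                       # encoder starts in state a = 00
    path = {s: [] for s in states}

    pairs = [tuple(received[i:i + 2]) for i in range(0, len(received), 2)]
    for r in pairs:
        new_metric = {s: float('inf') for s in states}
        new_path = {s: [] for s in states}
        for s in states:
            if metric[s] == float('inf'):
                continue
            for m in (0, 1):
                nxt, out = branch(s, m)
                d = (out[0] != r[0]) + (out[1] != r[1])   # branch metric
                if metric[s] + d < new_metric[nxt]:       # keep the survivor
                    new_metric[nxt] = metric[s] + d
                    new_path[nxt] = path[s] + [m]
        metric, path = new_metric, new_path

    best = min(states, key=lambda s: metric[s])
    return path[best], metric[best]

# A received sequence with two bit errors relative to the all-zero codeword:
print(viterbi_decode([0, 1, 0, 0, 1, 0, 0, 0]))   # -> ([0, 0, 0, 0], 2)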
4.17.2 Metric Diversion Effect
For a large number of message bits to be decoded, the
storage requirement of the decoder becomes extremely large. This
problem can be reduced by making use of the metric diversion effect,
which limits the required memory storage.
4.17.3 Free distance and coding gain
The error detection and correction capability of the block and cy-
clic codes is dependent on the minimum distance, dmin between the code
vectors. But, in case of convolutional code the entire transmitted se-
quence is to be considered as a single code vector. Therefore, the free
distance (dfree) is defined as the minimum distance between the code
vectors. But, the minimum distance between the code vectors is same as
the minimum weight of the code vector. Hence, the free distance is equal
to minimum weight of the code vector.
Therefore,
Free distance dfree = Minimum distance
=Minimum weight of code vectors
If X represents the transmitted signal, then the free distance is
given by
dfree = [W (X)]min and X is non-zero
Just as the minimum distance decides the capability of the
block or cyclic codes to detect and correct errors, the free distance
decides the error control capability of the convolutional code.
Coding gain (A)
The coding gain (A) is defined as the ratio of (Eb/N0) of the uncoded
signal to (Eb/N0) of the coded signal required for the same error
performance. The coding gain is used for comparing different coding
techniques.

Coding gain A = (Eb/N0) uncoded / (Eb/N0) coded = r dfree / 2

where r = code rate and dfree = the free distance
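As a numerical illustration (the values r = 1/2 and dfree = 5 are assumed for the example, not taken from the text):

import math

r, d_free = 1/2, 5
A = r * d_free / 2                       # = 1.25 as a power ratio
print(A, 10 * math.log10(A), "dB")       # -> 1.25, about 0.97 dB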
Problem 1
Using the encoder shown below, an all-zero sequence is generated and
sent over a binary symmetric channel. The received sequence is
01 00 10 00... There are two errors in this sequence (at the second and
fifth positions). Show that this double error can be corrected by
application of the Viterbi algorithm.

[Encoder: the input feeds a shift register (stages s2, s3) connected to two mod-2 adders whose outputs are commutated to form the output.]
Solution
The trellis diagram for the encoder shown in figure is shown in
figure
[Code trellis for the given encoder: current states a = 00, b = 01, c = 10 and d = 11 on the left and the next states on the right; solid branches correspond to input 0 and dotted branches to input 1, each labelled with its output bits.]

From the above trellis diagram, the Viterbi decoding
diagram below is drawn.
[Viterbi decoding diagram for the received sequence 01 00 10 00: branch metrics are shown in brackets and running path metrics at the nodes a4, b4, c4 and d4; the maximum-likelihood path is a0–a1–a2–a3–a4.]
From the Viterbi diagram, we can write the possible
paths for each state a4, b4, c4 and d4; the running path metric for each
path is as shown below.

State Possible Paths Running Path Metric


a4 a0 - a1 - a2 - a3 - a4 2 P
a0 - a1 - b2 - c3 - a4 5 x
b4 a0 - a1 - a2 - a3 - b4 4 x
a0 - b1 - c2 - a3 - b4 5 x
a0 - b1 - d2 - a3 - b4 5 x
a0 - a1 - b2 - c3 - b4 3 P
c4 a0 - a1 - a2 - b3 - c4 3 P
a0 - b1 - c2 - b3 - c4 4 x
a0 - b1 - d2 - d3 - c4 3 x
a0 - a1 - b2 - d3 - c4 6 x

State Possible Paths Running Path Metric


d4 a0 - a1 - a2 - b3 - d4 3 P
a0 - b1 - c2 - b3 - d4 4 x
a0 - b1 - d2 - d3 - d4 3 x
a0 - a1 - b2 - d3 - d4 9 x

Out of the possible paths listed above, we select four survivor


paths having the minimum value of running path metric. The survivor
paths are marked by (P) sign.
They are as under:

S.No Paths Path Metric


1. a4 → a0 - a1 - a2 - a3 - a4 2
2. b4 → a0 - a1 - b2 - c3 - b4 3
3. c4 → a0 - a1 - a2 - b3 - c4 3
4. d4 → a0 - a1 - a2 - b3 - d4 3
Out of these survivor paths, the path having the minimum running
path metric (equal to 2) is the path a0 - a1 - a2 - a3 - a4. Hence the
encoded signal corresponding to this path is given by
a4 → 00 00 00 00
This corresponds to the received signal 01 00 10 00.

Received signal → 0 1 0 0 1 0 0 0
Decoded signal  → 0 0 0 0 0 0 0 0

The two errors are eliminated.

This shows that Viterbi algorithm can correct the errors present
in the received signal.
Problem 2
For the convolutional encoder arrangement shown in figure draw
the state diagram and hence trellis diagram. Determine output digit
sequence for the data digits 1 1 0 1 0 1 0 0. What are the dimensions of
the code (n, k) and constraint length?

Solution
(i) To obtain dimension of the code:
Observe that one message bit is taken at a time in the encoder of
figure . Hence the dimension of the code is (n,k) =(3,1)

[Encoder: a three-stage register m, m1, m2; the output xi(1) is taken directly from m, xi(2) is the mod-2 sum of m and m2, and xi(3) is the mod-2 sum of m and m1; the three outputs are commutated to form the output sequence.]

(ii) Constraint length


Here note that every message bit affects three output bits. Hence
Constraint length K = 3 bits
(iii) To obtain the code trellis and state diagram
Let the states of the encoder be as given in table figure above
shows the code trellis of the given encoder.
[Code trellis: current states a, b, c, d on the left and next states on the right; solid branches correspond to input 0 and dotted branches to input 1, each labelled with its three output bits.]

The nodes in the code trellis can be combined to form the state
diagram shown in the figure below.

[State diagram: nodes a, b, c and d, with self-loops at a and d; each transition is labelled with the output code word; bold lines represent input 0 and dotted lines represent input 1.]

(iv) To obtain output sequence


(a) Obtain generator polynomials
The generating sequence can be written for xi(1) from the given
figure above
gi(1) = { 1, 0, 0} since only m is connected
Similarly, the generating sequence for xi(2) will be
gi(2) = { 1, 0, 1} since m and m2 are connected

Similarly, generating sequence for xi(3) will be


gi(3) = { 1, 1, 0} Since m and m1 are connected


Hence the corresponding generating polynomials can be written as,
G(1) (p) = 1
G(2) (p) = 1+p2
G(3) (p) = 1+p
(b) Obtain message polynomials
The given message sequence is
m = { 1 1 0 1 0 1 0 0}
Hence the message polynomial will be,
M (p) = 1 + p + p3 + p5
(c) Obtain output for gi(1)
The first sequence xi(1) is given as
xi(1) = G(1)(p) · M(p) = 1 · (1 + p + p³ + p⁵)
      = 1 + p + p³ + p⁵
Hence the corresponding sequence will be
{xi(1)} = {1 1 0 1 0 1}
(d) Obtain output for gi(2)
The second sequence xi(2) is given as
xi(2) = G(2)(p) · M(p) = (1 + p²)(1 + p + p³ + p⁵)
      = 1 + p + p² + p⁷               (mod-2 addition)
Hence the corresponding sequence will be
xi(2) = {1 1 1 0 0 0 0 1}
(e) Obtain output for gi(3)
The third sequence xi(3) is given as
xi(3) = G(3)(p) · M(p) = (1 + p)(1 + p + p³ + p⁵)
      = 1 + p² + p³ + p⁴ + p⁵ + p⁶     (mod-2 addition)
Hence the corresponding sequence is
xi(3) = {1 0 1 1 1 1 1}
(f) To multiplex the three output sequences
The three sequences xi(1), xi(2) and xi(3) are made equal in length
(i.e., 8 bits); hence zeros are appended to the sequences xi(1) and xi(3). These
sequences are shown below:
xi(1) = {1 1 0 1 0 1 0 0}
xi(2) = {1 1 1 0 0 0 0 1}
xi(3) = {1 0 1 1 1 1 1 0}
The bits from the above three sequences are multiplexed:
{xi} = {111 110 011 101 001 101 001 010}
Problem 3

A rate 1/3 convolution encoder has generating vectors as


g1 = ( 1 0 0), g2 = ( 1 1 1) and g3 = ( 1 0 1).

(i) Sketch the encoder configuration

(ii) Draw the code tree, state diagram and trellis diagram

Solution

(i) To draw encoder configuration

To determine the dimensions of the code: this is a rate 1/3 code.

We know that rate = k/n = 1/3, ∴ k = 1, n = 3
Here k = 1 and n = 3. This means each message bit generates
three output bits. There will be three stage shift register. It will contain
m, m1 and m2.

1st output x1 will be generated due to g1 =(100)

Since g1 = (100), x1 = m

2nd output x2 will be generated due to g2 =(111)

Since g2 = (111), x2 = m ⊕ m1 ⊕ m2

3rd output x3 will be generated due to g3 = (101)

Since g3 = (101), x3 = m ⊕ m2

The figure below shows the encoder diagram based on the above discussion.

[Encoder: a three-stage register m, m1, m2; x1 = m, x2 = m ⊕ m1 ⊕ m2 (first mod-2 adder), x3 = m ⊕ m2 (second mod-2 adder); the three outputs x1, x2 and x3 are commutated to form the output sequence.]

(ii) To draw code tree, state diagram and trellis diagram


(a) To obtain trellis diagram
The two bits m2 m1 in the shift register will indicate the state of
the encoder. Let these states be defined as follows:
m2 m1 State
0 0 a
0 1 b
1 0 c
1 1 d
The table below shows the state transition calculations
(x1 = m, x2 = m ⊕ m1 ⊕ m2, x3 = m ⊕ m2):

S.No   Current state (m2 m1)   Input m   Outputs x1 x2 x3   Next state (m2 m1)
1      a = 00                  0         0 0 0              0 0 (i.e., a)
                               1         1 1 1              0 1 (i.e., b)
2      b = 01                  0         0 1 0              1 0 (i.e., c)
                               1         1 0 1              1 1 (i.e., d)
3      c = 10                  0         0 1 1              0 0 (i.e., a)
                               1         1 0 0              0 1 (i.e., b)
4      d = 11                  0         0 0 1              1 0 (i.e., c)
                               1         1 1 0              1 1 (i.e., d)

A trellis diagram based on the above table is shown in the figure below.

[Code trellis: current states a, b, c, d (m2 m1) on the left and next states on the right; solid branches correspond to m = 0 and dotted branches to m = 1, each labelled with its output bits x1 x2 x3 from the table.]
(b) To obtain the state diagram

If we combine the nodes in the trellis diagram, we obtain the state
diagram shown below.

[State diagram: nodes a, b, c and d, with self-loops at a (output 000) and d (output 110); each transition is labelled with its output code word.]
(c) To obtain code tree


Code tree can be constructed with help of state diagram. The
following steps are performed
i. Start with any node (normally node a)
ii. Draw its next states for m = 0 and 1
iii. For every state determine next states for m = 0 and 1
iv. Repeat step 3 till code tree starts repeating
Assumptions: Upward movement in the code tree indicates m = 0;
downward movement indicates m = 1.
Based on above procedure, the code tree is developed as shown in figure
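The same procedure can be expressed as a short recursive sketch (names are illustrative); from each state the m = 0 branch is listed first, matching the upward branch of the code tree:

STATE = {(0, 0): 'a', (0, 1): 'b', (1, 0): 'c', (1, 1): 'd'}

def expand(state, depth, prefix=""):
    if depth == 0:
        return
    m2, m1 = state
    for m in (0, 1):                       # m = 0 first (upper branch)
        out = (m, m ^ m1 ^ m2, m ^ m2)     # x1, x2, x3
        nxt = (m1, m)
        print(f"{prefix}m={m} -> output {''.join(map(str, out))} -> state {STATE[nxt]}")
        expand(nxt, depth - 1, prefix + "    ")

expand((0, 0), 3)                          # three stages, after which the tree repeats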
[Code tree for the rate-1/3 encoder: starting at node a, the upper branch at every node corresponds to m = 0 and the lower branch to m = 1; each branch is labelled with its three output bits, and the pattern of branches repeats after the third stage.]

In the figure above we observe that the code tree repeats after the
third stage. This is because each input bit influences the encoder output
for only three successive shifts.
Solved Two Marks


1. Define information theory.
Information theory allows us to determine the information
content in a message signal leading to different source coding techniques
for efficient transmission of message.
2. Define discrete memory less source.
A memory less source is one for which each symbol produced is
independent of the previous symbols. A Discrete Memory less Source
(DMS) can be Characterized by the list of the symbols,the probability
assigned to these symbols and the specification of the rate of generating
these symbols by the source.
3. Define uncertainty
• Information is related to the probability of occurrence of the event.
The more the uncertainty, the more the information associated with it.
• The following examples relate to uncertainty (or surprise).
Example
1. Sun rises in the east: Here the uncertainty is zero, because there is no
surprise in the statement. The probability of occurrence of the sun
rising in the east is always 1.
2. Sun does not rise in the east: Here the uncertainty is high, because
there is maximum surprise and maximum information, as it is not
possible.
4. Define amount of information.
The amount of information transmitted through the message mk
with probability Pk is given as,

1
Amount of information Ik = log 2  
 Pk 

5. Define entropy
Entropy is defined as the average information produced by the source
per individual message or symbol in a particular interval.
M
1
Entropy H=∑ Pk log 2  
k =1  Pk 

6. List out the properties of entropy.


1. Entropy (H) is Zero,if the event is sure or it is impossible
(i.e.,) H =0 if Pk=0 or 1

2. When Pk = 1/M for all the M symbols, the symbols are equally
likely. For such a source the entropy is given by
H = log2 M.
3. Upper bound on entropy is given by,
Hmax≤ log2 M.

7. Define information rate.


If the source generates r messages (symbols) per second, then the
information rate is defined to be
R = r H = average number of bits of information per second

R = r (messages/second) × H (information/message)

where R = information rate,
      H = entropy or average information,
      r = the rate at which symbols are generated.
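As a small numerical illustration (the four symbol probabilities and the symbol rate r = 1000 symbols/second are assumed values):

import math

P = [0.5, 0.25, 0.125, 0.125]                 # assumed symbol probabilities
H = sum(p * math.log2(1 / p) for p in P)      # entropy = 1.75 bits/symbol
r = 1000                                      # symbols per second
print(H, r * H)                               # -> 1.75, R = 1750.0 bits/s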
8. Define Source coding.
Coding is a procedure for mapping a given set of messages or
information {m1, m2, ..., mN} into a new set of encoded messages {c1, c2, ...,
cN} in such a way that the transformation is one to one, (i.e.,) for each
message there is only one encoded message. This is called "source
coding".
9. State Shannon's first theorem.
Given a discrete memoryless source of entropy H, the average
code word length N for any distortionless source encoding is bounded as
N ≥ H

10. Define code redundancy.


• It is the measure of the redundant bits in the encoded message
sequence.
• It is given by,
Redundancy r = 1 − code efficiency
             = 1 − η
11. Define mutual information.


Mutual information I(X;Y) of a channel is defined as the amount
of information transferred when xi is transmitted and yj is received. It is
represented by I(xi, yj):

I(xi, yj) = log2 [ P(xi / yj) / P(xi) ]  bits

12. Define average mutual information.


Average mutual information is defined as the amount of source
information gained per received symbol. It is given by
m n
I ( X ;Y ) = ∑ ∑ P ( X i ,Y j ) I ( X i ,Y j )
i =1 j =1

13. Give Properties of mutual information.


(i) The mutual information of a channel is symmetric.

I ( X ;Y ) = I (Y ; X )

(ii) The mutual information can be expressed in terms of entropies of


channel input or channel output and conditional entropies.

I ( X ;Y ) = H ( X ) − H ( X /Y )
=H (Y ) − H (Y / X )
Where,H ( X /Y ) and H (Y / X ) are conditional entropies.
(iii) The mutual information is always positive.

I ( X ;Y ) ≥ 0
(iv) The mutual information is related to the joint entropy H(X,Y) by
following relation,

I ( X ;Y ) = H ( X ) + H (Y ) − H ( X ,Y )
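These relations can be checked numerically; the sketch below assumes a binary symmetric channel with crossover probability p = 0.1 and equiprobable inputs (values chosen only for illustration):

import math

def H(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

p = 0.1
joint = [0.5 * (1 - p), 0.5 * p,      # P(x=0, y=0), P(x=0, y=1)
         0.5 * p, 0.5 * (1 - p)]      # P(x=1, y=0), P(x=1, y=1)
I = H([0.5, 0.5]) + H([0.5, 0.5]) - H(joint)   # I(X;Y) = H(X)+H(Y)-H(X,Y)
print(I)                                       # -> about 0.531 bits/symbol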

14. Define channel capacity.


The channel capacity C is given by
C = max I ( X ;Y ) = max H ( X ) − H ( X /Y ) 

• I(X;Y) is the difference of two entropies and C is max I(X;Y).Hence,


sometimes the unit of I(X;Y) and C is taken as bits/sec.
15. Define channel efficiency.
The transmission efficiency or channel efficiency is defined as
η = actual transinformation / maximum transinformation
(or)
η = I(X;Y) / max I(X;Y) = I(X;Y) / C

16. State channel coding theorem.


This theorem says that if R ≤ C,it is possible to transmit
information without any error even if the noise is present.
17. State channel capacity theorem.
The channel capacity of a white band limited Gaussian channel
is,
 S
C = B log 2 1 +  bits /sec.
 N

Where B is the channel bandwidth,


S is the signal power
N is the total noise power within the channel band width
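For example, for an assumed channel with B = 3100 Hz and S/N = 1000 (about 30 dB), roughly a telephone line:

import math

B, SNR = 3100, 1000
C = B * math.log2(1 + SNR)
print(C)          # -> about 30898 bits/sec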

18. Define code word, block length and code


(i) Code word: The encoded block of “n” bits is called a code word. It
consists of message bits and redundant bits.
(ii) Block length: The number of bits “n” after coding is called as block
length of the code
(iii) code rate: The ratio of message bits (k) and encoded output bits (n)
is called code rate (r)
k
r =
n

19. Define Hamming distance.


Hamming distance between two code vectors is defined as the
number of positions in which they differ.
For example, X = 110 and Y =101. The two code vectors differ in
second and third bits. Hence hamming distance between X and Y is “2”
(i.e.,) d(X, Y) = d =2

20. Define systematic code.


Systematic code: In a systematic block code, the message bits appear
at the beginning of the code block, followed by the parity/check
bits. In a non-systematic code it is not
possible to differentiate the message bits and parity bits; they are mixed
together.
21. Define linear code
Linear code: A code is said to be linear if the modulo-2 sum of any two
code vectors is also a code vector.
22. Define cyclic codes
Cyclic codes are also linear block codes. A binary code is said to
be a cyclic code if it exhibits the following properties:
(i) Linearity Property
(ii) Cyclic Property
(i) Linearity Property
A code is said to be linear if the sum of any two code words is also a
code word. This property states that the cyclic codes are linear block
codes.
(ii) Cyclic Property
A linear block code is said to be cyclic if every cyclic shift of a code word
produces another code word. Let (x0, x1, ........, xn-1) be an n-bit code word
of an (n, k) linear block code. Each circular right shift of this code word by
one bit produces another valid code word. This is called the cyclic
property of the cyclic codes.
23. What are the advantages of cyclic codes?
The advantage of cyclic codes over most of the other codes are as
under:
(i) They are easy to encode
(ii) They possess a well defined mathematical structure which has
led to development of very efficient decoding schemes for them.
(iii) The methods that are to be used for error detection and correc-
tion are simpler and easy to implement.


(iv) These methods do not require look-up table decoding.
(v) It is possible to detect the error bursts using the cyclic codes.
24. What is the drawback of cyclic codes?
Even though the error detection is simpler, the error
correction is slightly more complicated. This is due to the complexity of
the combinational logic circuit used for error correction.
25. What are Golay codes ?
The (23,12) Golay code is a very special type of binary code. This
code is capable of correcting any combination of three or less than three
random errors in the block of 23 bits. The number of message bits out
of 23 is 12. This code has a minimum distance of 7. It satisfies the Ham-
ming bound for t = 3. The (23,12) Golay code is the only known code
which is capable of correcting three errors. The (23, 12) Golay code is
generated by one of the following two polynomials
g1(p) = 1 + p2 + p4 + p5 + p6 + p10 + p11
or g2 (p) =1 + p +p5 + p6 + p7 + p9 + p11
In fact, both g1 (p) and g2 (p) are factors of 1 + p23
Thus, (1 + p23) = (1+p) g1(p) g2(p)
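This factorization can be verified with a few lines of GF(2) polynomial arithmetic (a sketch; the helpers gf2_mul and poly are illustrative):

def gf2_mul(a, b):
    # product of two GF(2) polynomials, coefficients ordered [p^0, p^1, ...]
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def poly(powers, degree):
    c = [0] * (degree + 1)
    for k in powers:
        c[k] = 1
    return c

g1 = poly([0, 2, 4, 5, 6, 10, 11], 11)       # 1 + p^2 + p^4 + p^5 + p^6 + p^10 + p^11
g2 = poly([0, 1, 5, 6, 7, 9, 11], 11)        # 1 + p + p^5 + p^6 + p^7 + p^9 + p^11
product = gf2_mul(gf2_mul([1, 1], g1), g2)   # (1 + p) g1(p) g2(p)
print(product == poly([0, 23], 23))          # -> True, i.e. equals 1 + p^23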
26. What is the difference between block codes and convolutional
codes?
The main difference between the block codes and the convolu-
tional (or recurrent) codes may be listed as under:
(i) Block codes
In block codes n bits generated by the encoder in a particular
time unit depends only on the block of k message bits within that time
unit.
(ii) Convolutional code
In the convolutional codes, the block of n bits generated by the
encoder at any given time depends not only on the k message bits within
that time, but also on the preceding ‘L’ blocks of the message bits (L>1).
Generally, the values of k and n will be small.
27. Write the applications of convolutional code.
Like block codes, the convolutional codes can be designed to ei-
ther detect or correct errors. However, because data is usually retrans-
mitted in blocks, the block codes are more suitable for error detection
and the convolutional codes are more suitable for error correction.
28. What is constraint length?


It is defined as the number of shifts over which a single message bit can
influence the output of the encoder. For the encoder of figure 4.13, the
constraint length is K = 3 bits, since a single message
bit influences the encoder output for three successive shifts. At the fourth
shift it has no effect on the output.

29. What is code dimension ?


The code dimension of a convolutional code depends on n and
k. Here k represents the number of message bits taken at a time by the
encoder, and n is the number of encoded bits per message bit. The code
dimension is therefore represented by (n, k).
Review Questions
PART A
1. Define channel capacity
2. Differentiate: Uncertainity, information and entropy
3. State the properties of entropy
4. Find the entropy of a source emitting symbols x, y, z with probabilities of
1/5, 1/2, 1/3 respectively.
5. A source emits four symbols with probabilities P0 = 0.4, P1 = 0.3, P2
= 0.2, P3 =0.1. Find out the amount of information obtained due to
these four symbols.
6. State the channel capacity theorem.
7. Find the entropy of an event of throwing a die.
8. State Shannon's first theorem
9. What is meant by self information?
10. State the syndrome properties
11. Define Hamming distance.
12. What are the reasons to use an interleaver in a turbo code?
13. Define constraint length
14. What is meant by cyclic code?
15. Define trellis diagram
16. Give the difference between linear block code and cyclic code.
17. What are Convolutional codes
18. What are the advantages of the Viterbi decoding technique?
19. Define turbo code.
PART B
1. State and prove the properties of mutual information
2. (a) Explain briefly the source coding theorem
(b) Given five symbols S0, S1,S2, S3 and S4 with their
respective probabilities 0.4,0.2,0.2,0.1,0.1. Use Huffman’s encoding
for symbols and find the average code word length. Also prove that it
satisfies source coding theorem
3. Explain in detail the Viterbi algorithm for decoding of convolutional
codes with a suitable example
4. Consider the generation of (7,4) Cyclic code by the generator polyno-
mial g(x) = 1 + x + x3
(i) Calculate the code word for the message sequence (1001) and
construct systematic generator matrix ‘G’


(ii) Draw the diagram of encoder and syndrome calculator generated
by the polynomial
5. Explain syndrome decoding in linear block codes with example.
6. Construct a convolution encoder for the following specifications: rate
efficiency 1/2 , constraint length 3, the connection from the shift
register to the modulo-2 adders are described by the following equations:
g1(x) = 1 + x + x2, g2(x) = 1 + x2. Determine the output code words for
the message (10011)
7. Explain turbo decoding in details.
8. Explain the Viterbi algorithm taking a suitable example.
9. Discuss in detail about cyclic codes.
10. Discuss in detail about Convolutional codes and compare with block
codes.
11. (i) Find a (7,4) systematic cyclic code for the message 1101 using a
generator 1 + x + x3
(ii)Find the message vector for corresponding to the cyclic coded
vector 11001010 using a generator polynomial 1+ x + x3
Advanced Mobile Phone System (AMPS) - Global System for Mobile


Communications (GSM) - Code division multiple access (CDMA) –
Cellular Concept and Frequency Reuse - Channel Assignment and Handoff -
Overview of Multiple Access Schemes - Satellite Communication -
Bluetooth
Unit 5
MULTI-USER RADIO COMMUNICATION
5.1. INTRODUCTION

5.1.1. First Generation System


During the early 1980s, cellular technologies were analog, and
the 1G cellular phone systems were designed for analogue voice
communication only. The following are examples of 1G cellular analog
radio systems:
• Advanced mobile phone system (AMPS) in the United States.
• Total access communication systems (TACS) in the United Kingdom.
• Nippon advanced mobile telephone system (NAMTS) in Japan.
Definitions
Simplex System: Simplex systems utilize simplex channels
therefore the communication is unidirectional. The first user can
communicate with the second user. However, the second user cannot
communicate with the first user.
Example One example of such a system is a pager.
Half Duplex System
Half-duplex radio systems, which use half-duplex radio channels, allow
non-simultaneous bidirectional communication. The first user can
communicate with the second user, but the second user can communicate
with the first user only after the first user has finished his conversation. At
any instant, a user can only transmit or receive information.
Example A walkie-talkie is an example of a half-duplex system which uses
‘push to talk’ and ‘release to listen’ type of switches.
Full Duplex System
Full duplex systems allow two way simultaneous communications.
Both the users can communicate to each other simultaneously.
Example A telephone conversation.
This can be done by providing two simultaneous but separate
channels to both the users. This is possible by one of the two following
methods.
Frequency Division Duplex (FDD) FDD supports two-way radio
communication by using two distinct radio channels. One frequency


channel is transmitted downstream from the BS to the MS (forward
channel).

A second frequency is used in the upstream direction and


supports transmission from the MS to the BS (reverse channel).
Because of the pairing of frequencies, simultaneous transmission in both
directions is possible. To mitigate self-interference between upstream and
downstream transmissions, a minimum amount of frequency separation
must be maintained between the frequency pair, as shown in Figure 5.1.(a)
and (b).
Time Division Duplex (TDD)
TDD uses a single frequency band to transmit signals in both
the downstream and upstream directions. TDD, operates by toggling
transmission directions over a time interval. This toggling takes place very
rapidly and is imperceptible to the user.
Mobile Stations (MS)
Mobile handsets, which are used by a user to communicate with
another user.
Cell
Each cellular service area is divided into small regions called cells (5 to 20 km).
Base Stations (BS)
Each cell contains an antenna, which is controlled by a small office.
Mobile Switching Center (MSC)
Each base station is controlled by a switching office, called mobile
switching center.
5.2 ADVANCED MOBILE PHONE SYSTEM(AMPS)

5.2.1. History
• AT &T Bell Labs developed the first cellular telephone system in the
late 1970’s.
• The first AMPS system was deployed in Chicago to cover approximately
2100 square miles in 1983.
• A total of 40 MHz spectrum in the 800 MHz band was allocated by the
FCC (Federal Communication Commission).
• In 1989, additional 10 MHz (called ‘Extended Spectrum’) was allocated.
• Large cells and omni-directional BS antennas were used.
5.2.2. AMPS Frequency Allocation
• AMPS uses a 7-cell reuse pattern with cell splitting and sectoring (120
degrees).
• It requires S/I (Signal to Interference ratio) of 18 dB for satisfactory
system performance.
• It uses frequency modulation (FM) for radio transmission.
• Mobile - BS (reverse link) uses frequencies between 824 - 849 MHz.
• BS - Mobile (forward link) uses frequencies between 869 - 894 MHz.
• Separation for forward and reverse channel is 45 MHz.
• Figure 5.2 shows complete Advanced Mobile Phone Service(AMPS) fre-
quency spectrum.
[Frequency allocation: reverse channels (mobile unit transmit, base station receive) occupy 824–849 MHz and forward channels (base station transmit, mobile unit receive) occupy 869–894 MHz; the A and B carrier blocks span channel numbers 1–799 and 991–1023.]
Figure 5.2. Complete Advanced Mobile Phone Service (AMPS)
frequency spectrum
5.2.3 Mobile unit Transmit and Receive Carrier frequency


The mobile unit's transmit carrier frequency in MHz for any
channel N is calculated as follows:

ft = 0.03 N + 825,              for 1 ≤ N ≤ 799              ... (1)

ft = 0.03 (N − 1023) + 825,     for 990 ≤ N ≤ 1023           ... (2)

The mobile unit's receive frequency is obtained by simply adding 45 MHz
to the transmit frequency:
fr = ft + 45 MHz
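A small sketch of this rule (the function name and the rounding are illustrative; frequencies are in MHz):

def amps_frequencies(N):
    if 1 <= N <= 799:
        ft = 0.03 * N + 825
    elif 990 <= N <= 1023:
        ft = 0.03 * (N - 1023) + 825
    else:
        raise ValueError("channel number outside the AMPS range")
    fr = ft + 45                  # base station transmits 45 MHz higher
    return round(ft, 2), round(fr, 2)

print(amps_frequencies(1))        # -> (825.03, 870.03)
print(amps_frequencies(333))      # -> (834.99, 879.99)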

5.2.4 AMPS Control Channels
Each BS (Base station) has one control channel transmitter
(broadcasts on the forward control channel) and one control
channel receiver (that listens on the reverse control channel for any
cellular phone trying to set up a call). Each BS has 8 or more FM
duplex voice channels and each BS supports 57 voice channels.
Forward Voice Channel
It carries the portion of the telephone conversation originating
from the landline telephone network caller and going to the mobile
user.
Reverse Voice Channels (RVC)
It carries the portion of the telephone conversation originating
from the mobile user and going to landline telephone network
caller.
Forward Control Channel (FCC)
Each BS continuously transmits digital FSK data on the
forward control channel at all times so that idle mobile users can
lock into the strongest FCC wherever they are.
Reverse Control Channel (RCC)
The BS reverse control channel receiver constantly monitors
transmissions from mobile users that are locked on to the matching
FCC.
Number of Control Channels
There are 21 control channels and they are scanned to find
the best serving base station.
A wired user calls a mobile user
1. The call arrives at the MSC (Mobile Switching Center) and a paging
message is sent out with the mobile's MIN (Mobile Identification
Number) simultaneously on every BS forward control channel
in the system.
2. When the target mobile receives the page, it responds with an ACK
transmission on the reverse control channel.
3. The MSC then directs the BS to assign a FVC (Forward Voice
Channel)and RVC (Reverse Voice Channel) pair to the mobile so
that this call can take place on a dedicated voice channel.
4. The BS (Base Station) also assigns the mobile a Supervisory Au-
dio Tone(SAT tone) and a Voice Mobile Attenuation Code (VMAC)
as it moves the call to the voice channel.
5. The mobile automatically changes its frequency to the assigned
voice channel pair.
Mobile user places a call
1. It transmits a message on the RCC (Reverse Control
Channel) containing its MIN (Mobile Identification Number),
Electronic Serial Number (ESN),Station Class Mark (SCM) and
the destination telephone number.
2. If received correctly by the BS, this information is sent to the
MSC, which checks whether the mobile is properly registered,
connects the mobile to the PSTN (Public Switched Telephone
Network), assigns the call to a forward and reverse voice
channel pair with a specific SAT and VMAC, and commences the
conversation.
3. The MSC uses scanning receivers called "locate receivers" in nearby
BSs to determine the signal level of a particular user who needs
a handoff.
4. Each mobile reports its MIN and ESN during the brief
registration transmission so that the MSC can validate and
update the customer list.

[Forward control channel word format: a busy/idle bit (1 bit), bit synchronization (10 bits), word synchronization (11 bits), followed by word A and word B, each 40 bits and each repeated five times (repeats #1 to #5).]
[Voice channel format (forward and reverse): bursts of binary FSK data embedded in the FM voice, each burst consisting of a dotting sequence (DOT 1 = 101-bit dotting sequence, DOT 2 = 37-bit dotting sequence), an 11-bit synchronization word (SYNC) and 40-bit message words W1 ... W11 (WN = message word N; N = number of repeated message words).]

(a) Forward control channel

(b) Reverse control channel
Figure 5.3 Control Channel formats

5.2.5. AMPS Identification Codes


1. Mobile Identification Number (MIN)
• It is a 34-bit binary code.
• It is a 10-digit telephone number.
• The MIN is comprised of a three-digit area code, a three-digit prefix (exchange
number), and a four-digit subscriber (extension) number.


• The exchange number is assigned to the cellular operating company.
2. Electronic Serial Number (ESN)
It is a 32 bit binary code permanently assigned to each mobile unit.
This numberis unique and positively identifies a specific unit.
3. Station Class Mark (SCM)
It is a four bit identification code and it also specifies the maximum
radiated power for the unit.
4. System Identifier (SID)
It is a 15-bit binary code issued by the FCC to an operating
company when it grants the licence to provide AMPS cellular service in an
area.
5. Digital Colour Code (DCC) and a Supervisory Audio Tone (SAT)
Assigned to base stations by local operating companies in order to
distinguish one base station from another station.
5.2.6. Overview of AMPS
1) Multiple Access Techniques used - FDMA
2) Duplexing -FDD
3) Channel BW -30 KHz
4) Reverse Channel Frequency -824 - 849 MHz
5) Forward Channel Frequency -869 - 894 MHz
6) Voice Modulation -FM
7) Data rate on control/voice channels -10 Kbps
8) Number of Channels -666 - 832
9) Coverage radius by 1 BS -2-25 Km

5.3 GLOBAL SYSTEM FOR MOBILE (GSM)

5.3.1. Introduction
• The development of GSM started in early 1980’s for Europe’s Mobile
infrastructure.
• The first step was to establish a team with the title "Group Special Mobile"
(hence the term "GSM", which today stands for Global System for Mobile
Communications) to develop a set of common standards.
• GSM became popular very quickly because it provided improved speech
quality and, through a uniform international standard, made it possi-
ble to use a single telephone number and mobile unit around the world.
• The European Telecommunications Standardization Institute (ETSI)


adopted the GSM standard in 1991, and GSM is now used in 135
countries.
5.3.2. The Goals of GSM
• Improved spectrum efficiency.
• International roaming.
• Low-cost mobile sets and base stations.
• High-quality speech.
• Compatibility with ISDN and other telephone company services.
• Support for new services.
• QoS.
5.3.3. GSM system architecture

Figure 5.4 GSM Architecture


The best way to create a manageable communications system is to
divide it into various subgroups that are interconnected using standard-
ized interfaces. A GSM network can be divided into three groups shown in
Figure 5.4
1. The mobile station (MS)
2. The base station subsystem (BSS)
3. The network subsystem
(1) Mobile Station (MS)


• A mobile station may be referred to as a handset, a mobile, a portable
terminal or mobile equipment (ME).
• It also includes a subscriber identity module (SIM) that is normally re-
movable and comes in two sizes.
• Each SIM card has a unique identification number called IMSI (Inter-
national Mobile Subscriber Identity).
• In addition, each MS is assigned a unique hardware identification
called IMEI (International Mobile Equipment Identity).
(2) Base Station Subsystem (BSS)
The base station subsystem (BSS) is made up of the base station
controller (BSC) and the base transceiver station (BTS).
(3) Base Transceiver Station (BTS)
• GSM uses a series of radio transmitters called BTSs to connect the
mobiles to a cellular network.
• Their tasks include channel coding/ decoding and encryptions/
decryptions.
• A BTS is comprised of radio transmitters and receivers, antennas, the
interface to the PCM facility, etc.
• The BTS may contain one or more transceivers to provide the required
call handling capacity.
• A cell site may be Omni directional or split into typically three
directional cells.
(4) Base Station Controller (BSC)
• A group of BTSs are connected to a particular BSC which manages the
radio resources for them.
• The primary function of the BSC is call maintenance.
• The mobile stations normally send a report of their received signal
strength to the BSC every 480ms.
• With this information the BSC decides to initiate handovers to other
cells, change the BTS transmitter power, etc.
(5) Network Subsystem
The mobile switching center (MSC)
• Acts like a standard exchange in a fixed network and additionally
provides all the functionality needed to handle a mobile subscriber.
• The main functions are registration, authentication, location updating
and handovers and call routing to a roaming subscriber.


• The signalling between functional entities (registers) in the network
subsystem uses Signaling System7 (SS7).
• If the MSC also has a gateway function for communicating with other
networks, it is called Gateway MSC (GMSC).
The home locations register (HLR)
• A database used for management of mobile subscribers. It stores the
international mobile subscriber identity (IMSI), mobile station ISDN
number (MSISDN) and current visitor location register (VLR) address.
• The main information stored there concerns the location of each mobile
station in order to be able to route calls to the mobile subscribers man-
aged by each HLR.
• The HLR also maintains the services associated with each MS.
• One HLR can serve several MSCs.
The visitor locations register (VLR)
• Contains the current location of the MS and selected administrative
information from the HLR, necessary for call control and provision of
the subscribed services, for each mobile currently located in the geo-
graphical area controlled by the VLR.
• VLR is connected to one MSC and is normally integrated into the MSC’s
hardware.
The authentication center (AuC)
• A protected database that holds a copy of the secret key stored in each
subscriber’s SIM card, which is used for authentication and encryption
over the radio channel.
• The AuC provides additional security against fraud. It is normally
located close to each HLR within a GSM network.
The equipment identity register (EIR)
• The EIR is a database that contains a list of all valid mobile station
equipment within the network, where each mobile station is identified
by its international mobile equipment identity (IMEI).
The EIR has three databases
White list: for all known, good IMEIs
Black list: for bad or stolen handsets
Grey list: for handsets/IMEIs that are uncertain
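As a simple illustration of how the EIR's three lists might be consulted, the sketch below uses plain Python sets; the IMEI values and the function name are hypothetical, not part of any GSM specification.

```python
# Illustrative EIR lookup against the white/black/grey lists (hypothetical data).
WHITE_LIST = {"490154203237518"}   # known, good IMEIs
BLACK_LIST = {"359881030314356"}   # bad or stolen handsets
GREY_LIST  = {"352099001761481"}   # uncertain handsets, to be tracked

def check_imei(imei: str) -> str:
    """Classify a handset's IMEI the way an EIR query conceptually would."""
    if imei in BLACK_LIST:
        return "deny service (black-listed)"
    if imei in GREY_LIST:
        return "allow but monitor (grey-listed)"
    if imei in WHITE_LIST:
        return "allow service (white-listed)"
    return "unknown IMEI"

print(check_imei("490154203237518"))
```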
Operation and Maintenance Center (OMC)


• The OMC is a management system that assists the network operator in
maintaining satisfactory operation of the GSM network.
• Hardware redundancy and intelligent error detection mechanisms help
prevent network down-time.
• The OMC is responsible for controlling and maintaining the MSC, BSC
and BTS.
• It can be in charge of an entire public land mobile network (PLMN) or
just some parts of the PLMN.
5.3.4. Interfaces and protocols

Figure 5.5 GSM protocols are basically divided into three layers

Layer 1 Physical layer


• Enables physical transmission (TDMA, FDMA, etc.)
• Assessment of channel quality.
Layer 2 Data link layer
• Multiplexing of one or more layer 2 connections on control/ signalling
channels.
• Error detection (based on HDLC).
• Flow control.
• Transmission quality assurance.
• Routing.
Layer 3 Network layer
• Connection management (air interface).
• Management of location data.
• Subscriber identification.
• Management of added services (SMS, call forwarding, conference, calls,
etc.)
The air interface Um
• The air interface for GSM is known as the Um interface.
• The International Telecommunication Union (ITU), which manages in-
ternational allocation of radio spectrum (among many other functions),
has allocated the following bands: GSM900:


Uplink: 890 - 915 MHz (mobile station to base station)
Downlink: 935 - 960 MHz (base station to mobile station).
B interface: between MSC and VLR (uses MAP/TCAP protocols)
C interface: between MSC and HLR (MAP/TCAP)
D interface: between HLR and VLR (MAP/TCAP)
E interface: between two MSCs (MAP/TCAP + ISUP/TUP)
F interface: between MSC and EIR (MAP/TCAP)
G interface: between VLRs (MAP/TCAP)
5.3.5. GSM logical channels
• Several logical channels are mapped onto the physical channels.
• The organization of logical channels depends on the application and
the direction of information flow (uplink/downlink or bidirectional).
• A logical channel can be either a traffic channel (TCH), which carries
user data, or a signalling channel for call establishment. GSM channel
classifications are shown in Fig.
• The signalling channels on the air interface are used for call establish-
ment, paging, call maintenance, synchronization, etc.
• There are 3 groups of signalling channels

Figure 5.6 GSM Logical Channels


(i) Broadcast Channels (BCH)
Carry only downlink information and are responsible mainly for
synchronization and frequency correction. This is the only channel type
enabling point-to-multipoint communications in which short messages
are simultaneously transmitted to several mobiles.
The BCHs include the following channels
Broadcast Control Channel (BCCH)
• General information, cell specific; e.g. location area code (LAC), network
operator, access parameters, list of neighboring cells, etc.
• The MS receives signals via the BCCH from many BTSs within the
same network and or different networks.
Frequency Correction Channel (FCCH)
• Downlink only; correction of MS frequencies; transmission of
frequency standard to MS; it is also used for synchronization of an
acquisition by providing the boundaries between time slots and the
position of the first time- slot of a TDMA frame.
Synchronization Channel (SCH)
• Downlink only; frame synchronization (TDMA frame number) and
identification of base station.
• The valid reception of one SCH burst will provide the MS with all the
information needed to synchronize with a BTS.
(ii). Common Control Channels (CCCH)
• A group of uplink and downlink channels between the MS and the
BTS.
• These channels are used to convey information from the network to
MSs and provide access to the network.
The CCCHs include the following channels
Paging Channel (PCH)
• Downlink only; the MS is informed by the BTS for incoming calls via
the PCH.
Access Grant Channel (AGCH)
• Downlink only; BTS allocates a TCH or SDCCH to the MS, thus
allowing the MS access to the network.
Random Access Channel (RACH)


• Uplink only; allows the MS to request an SDCCH in response to a page
or due to a call; the MS chooses a random time to send on this channel.
• This creates a possibility of collisions with transmissions from other
MSs.
• The PCH and AGCH are transmitted in one channel called the paging
and access grant channel (PAGCH).
• They are separated by time.
(iii). Dedicated Control Channels (DCCH)
Responsible for example roaming, handovers, encryption, etc. The
DCCHs include the following channels.
Stand-Alone Dedicated Control Channel (SDCCH)
Communications channel between MS and the BTS; signaling
during call setup before a traffic channel (TCH) is allocated.
Slow Associated Control Channel (SACCH)
Transmits continuous measurement reports (e.g. field strengths) in
parallel with operation of a TCH or SDCCH; needed for 'non-urgent'
procedures, e.g. handover decisions, radio measurement data, power
control (downlink only) and timing advance; always allocated and used
in parallel to a TCH or SDCCH.
Fast Associated Control Channel (FACCH)
• Similar to the SDCCH, but used in parallel to operation of the TCH; if
the data rate of the SACCH is insufficient, ‘borrowing mode’ is used:
Additional bandwidth is borrowed from the TCH; this happens for mes-
sages associated with call establishment authentication of the sub-
scriber, handover decisions, etc.
• Almost all of the signaling channels use the 'normal burst' format,
except for the RACH (Random Access Burst), FCCH (Frequency
Correction Burst) and SCH (Synchronization Burst) channels.
5.3.6. Call setup

Figure 5.7 Block diagram for Call setup


1. The incoming call is passed from the fixed network to the gateway MSC
(GMSC)
2. Then, based on the IMSI numbers of the called party, its HLR is deter-
mined
3. The HLR checks for the existence of the called number. Then the
relevant VLR is requested to provide a mobile station roaming number
(MSRN)
4. This is transmitted back to the GMSC
5. Then the connection is switched through to the responsible MSC
6. Now the VLR is queried for the location range and reachability status
of the mobile subscriber
7. If the MS is marked reachable, a radio call is enabled and executed in
all radio zones assigned to the VLR
8. The mobile subscriber's telephone responds to the page request
from the current radio cell.
9. All necessary security procedures are executed
10. If this is successful, the VLR indicates to the MSC that the call can be
completed.
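Steps 1 to 5 of this routing procedure can be sketched in a few lines of Python; the dictionaries stand in for the HLR and VLR databases, and all subscriber names and numbers below are made up for illustration.

```python
# Minimal sketch of mobile-terminated call routing (steps 1-5 above).
# The HLR maps the called subscriber to the VLR currently serving it;
# that VLR hands back a temporary Mobile Station Roaming Number (MSRN).
HLR = {"subscriber-A": "VLR-2"}                 # subscriber -> serving VLR
VLR = {"VLR-2": {"subscriber-A": "MSRN-7421"}}  # VLR -> roaming numbers

def route_incoming_call(called_party: str) -> str:
    serving_vlr = HLR[called_party]             # steps 2-3: GMSC asks the HLR
    msrn = VLR[serving_vlr][called_party]       # step 3: HLR asks the VLR for an MSRN
    # Steps 4-5: the MSRN goes back to the GMSC, which switches the call
    # through to the MSC responsible for that VLR area.
    return f"route call via {serving_vlr} using {msrn}"

print(route_incoming_call("subscriber-A"))
```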
5.3.7 GSM network features


(a) Roaming:
• The roaming feature allows a user to make and receive calls in any
GSM network and to use the same user- specific services worldwide.

(b) Handover
• In a cellular network, the radio and fixed voice connections are not
permanently allocated for the duration of a call. Handover, or handoff
as it is called in North America, means switching an ongoing call to a
different channel or cell.
• There are four different types of handovers in GSM, which involve
transferring a connection between:
• Channels (timeslots) in the same cell (intra-BTS handover)
• Cells under the control of the same BSC (inter-BTS handover)
• Cells under the control of different BSCs, but belonging to the
same MSC (inter-BSC handover)
• Cells under the control of different MSCs (inter-MSC handover)
• The first two types of handover involve only one base station
controller (BSC). To save signalling bandwidth, they are managed by
the BSC without involving the MSC, except to notify it upon completion
of the handover.
• The last two types of handover are handled by the MSCs involved.
Note:
• Handovers can be initiated by either the BSC or the MSC (as a means
of traffic load balancing).
• During its idle timeslots, the mobile scans the broadcast control
channel of up to 16 neighbouring cells, and forms a list of the six
best candidates for possible handover, based on the received signal
strength.
• This information is passed to the BSC and MSC, at least once per sec-
ond, and is used by the handover algorithm.
(c) Short Message Service (SMS)
• SMS offers message delivery (similar to two-way paging) that is
guaranteed to reach the MS. If the GSM telephone is not turned on, the
message is held for later delivery. Each time a message is delivered to
an MS; the network expects to receive an acknowledgement from this
MS that the message was correctly received.


• Without a positive acknowledgement the network will re-send the mes-
sage or store it for later delivery. SMS supports messages up to 160
characters in length that can be delivered by any GSM network around
the world, wherever the MS is able to roam.
(d) Call Waiting (CW)
• CW is a network-based feature that must also be supported by the
GSM telephone (MS).
• With CW, GSM users with a call in progress will receive an audible
beep to alert them that there is an incoming call for the MS.
• The incoming call can be accepted, sent to voice mail or rejected. If the
incoming call is rejected, the caller will receive a busy signal.
• Once the call is accepted, the original call is put on hold to allow a con-
nection to the new incoming call.
(e) Call Hold (CH)
• CH must be supported by the MS and the network.
• It allows the MS to ‘park’ an ‘in progress call’, to make additional calls
or to receive incoming calls.
(f) Call Forwarding (CF)
• This is a network-based feature that can be activated by the MS.
• CF allows calls to be sent to other numbers under conditions defined
by the user.
• These conditions can be either unconditional or dependent on certain
criteria (no answer, busy, not reachable).
(g) Calling Line ID
• Calling Line ID must be supported by the GSM network and the tel-
ephone.
• The GSM telephone displays the originating telephone number of in-
coming calls.
• This feature requires the caller’s network to deliver the calling line ID
(telephone no.) to the GSM network.
(h) Mobility Management (MM)
• The GSM network keeps track of which mobile telephones are powered
on and active in the network.
• To provide as efficient call delivery as possible, the network keeps track
of the last known location of the MS in the VLR and HLR.
(i) Authentication
• Authentication normally takes place when the MS is turned on with
each incoming call and outgoing call.
• A verification that the Ki (security code) stored in the AuC matches
the Ki stored in the SIM card of the MS completes this process.
• The user must key in a PIN code on the handset in order to activate the
hardware before this automatic procedure can start.
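The idea of this authentication can be sketched as a challenge-response exchange. GSM's real A3 algorithm is operator-specific and is not shown here; the sketch below uses HMAC-SHA256 purely as a stand-in to show that both sides derive a response from the shared Ki and a random challenge, and only the responses travel over the air.

```python
# Challenge-response sketch of SIM/AuC authentication (illustrative only;
# HMAC-SHA256 is a stand-in for the operator's A3 algorithm).
import hashlib
import hmac
import os

KI = b"shared-secret-key"          # hypothetical Ki, stored in SIM and AuC

def signed_response(ki: bytes, rand: bytes) -> bytes:
    return hmac.new(ki, rand, hashlib.sha256).digest()

rand = os.urandom(16)                         # network sends a random challenge
sres_from_sim = signed_response(KI, rand)     # computed inside the SIM
sres_expected = signed_response(KI, rand)     # computed in the AuC/HLR
print("authenticated:", hmac.compare_digest(sres_from_sim, sres_expected))
```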

5.4 CDMA

5.4.1 Introduction
What is CDMA?
• CDMA (Code-Division Multiple Access) is a channel access method
used by various radio communication technologies.
• It is a form of multiplexing, which allows numerous signals to occupy
a single transmission channel, optimizing the use of available band-
width.
• The technology is used in ultra-high-frequency (UHF) cellular telephone
systems in the 800-MHz and 1.9-GHz bands.
5.4.2 Principle Operation
CDMA employs analog-to-digital conversion (ADC) in combination
with spread spectrum technology. Audio input is first digitized into binary
elements. The frequency of the transmitted signal is then made to vary
according to a defined pattern (code), so it can be intercepted only by a
receiver whose frequency response is programmed with the same code, so
it follows exactly along with the transmitter frequency.
There are trillions of possible frequency-sequencing codes, which
enhances privacy and makes cloning difficult.
5.4.3 General Specification of CDMA
• Rx: 869-894 MHz, Tx: 824-849 MHz
• 20 channels spaced 1250 kHz apart (798 users/channel)
• QPSK/Offset QPSK (OQPSK) modulation scheme
• 1.2288 Mbps bit rate
• IS-95 standard
• Operates at both 800 and 1900 MHz frequency bands.
5.4.4 Characteristics of CDMA


• These systems were designed using spread spectrum because of its
security and resistance to jamming.
• CDMA can effectively reject narrow band interference.
• CDMA devices use a rake receiver, which exploits multipath delay com-
ponents to improve the performance of the system.
• In a CDMA system, the same frequency can be used in every cell, be-
cause channelization is done using the pseudo-random codes.
• Reusing the same frequency in every cell eliminates the need for fre-
quency planning in a CDMA system.
• CDMA systems use the soft hand off, which is undetectable and pro-
vides a more reliable and higher quality signal.
• Different spreading codes can be used in CDMA, depending on the type
of system. There are two types of system:
1. Synchronous system and
2. Asynchronous system.
• In a synchronous system, orthogonal codes can be used.
• In an asynchronous system, a pseudo-random noise (PN) code or a
Gold code is used instead.
• In order to minimize mutual interference in DS-CDMA, the spreading
codes with less cross-correlation should be chosen.
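A quick numerical check of this point: the sketch below (codes chosen purely for illustration) correlates two length-8 Walsh codes at zero offset and after a one-chip slip, showing why orthogonal codes suit only synchronized links and why asynchronous links prefer PN or Gold codes with low cross-correlation.

```python
# Two length-8 Walsh (orthogonal) codes: perfectly orthogonal when chip
# timing is aligned, but strongly correlated once one code slips by a chip.
w_a = [1, 1, -1, -1, 1, 1, -1, -1]
w_b = [1, -1, -1, 1, 1, -1, -1, 1]

def correlate(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

aligned = correlate(w_a, w_b)                  # 0 -> orthogonal
shifted = correlate(w_a, w_b[1:] + w_b[:1])    # one-chip cyclic offset
print("aligned cross-correlation:", aligned)   # 0
print("offset cross-correlation :", shifted)   # -8 (bad for asynchronous links)
```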
1. Synchronous DS-CDMA
• Synchronous CDMA Systems are realized in Point to Multi-point
Systems.
For example: Forward Link (Base Station to Mobile Station) in
Mobile Phone.
Figure 5.8 Synchronous DS-CDMA


2. Asynchronous DS-CDMA
• In an asynchronous CDMA system, orthogonal codes have poor cross-
correlation.
• Unlike the signal from the base station, the signal from the mobile
station to the base station is asynchronous.
• In an asynchronous system, mutual interference increases somewhat,
so other codes such as PN codes or Gold codes are used.

Figure 5.9 Asynchronous DS-CDMA


5.4.5 Spread Spectrum Technique of CDMA
• CDMA is based around the use of direct sequence spread spectrum
techniques.
• Essentially CDMA is a form of spread spectrum transmission which
uses spreading codes to spread the signal out over a wider bandwidth
than would normally be required.
• By using CDMA spread spectrum technology, many users are able to
use the same channel and gain access to the system without causing
undue interference to each other.
There are two types of spread spectrum
1. Frequency Hopped Spread Spectrum (FHSS)
2. Direct Sequence Spread Spectrum (DSSS)
• The key element of code division multiple access CDMA is its use of
a form of transmission known as direct sequence spread spectrum,
DSSS.
• Direct sequence spread spectrum is a form of transmission that looks
very similar to white noise over the bandwidth of the transmission.
However, once received and processed with the correct descrambling


codes, it is possible to extract the required data.
• When transmitting a CDMA spread spectrum signal, the required data
signal is multiplied with what is known as a spreading or chip code
data stream. The resulting data stream has a higher data rate than the
data itself. Often the data is multiplied using the XOR (exclusive OR)
function.

Figure 5.10 CDMA spreading: the information bit '1' XORed with the
spreading code '10010' gives the result '01101'
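The operation in Figure 5.10 can be reproduced in a few lines; the sketch below is only illustrative and uses the figure's 5-chip code, showing that XORing the received chips with the same code recovers the original bit.

```python
# Spreading and despreading one data bit with a 5-chip code, as in Fig. 5.10.
def spread(bit: int, code: list[int]) -> list[int]:
    # XOR the data bit (repeated once per chip) with the spreading code.
    return [bit ^ chip for chip in code]

def despread(chips: list[int], code: list[int]) -> int:
    recovered = [c ^ chip for c, chip in zip(chips, code)]
    # All recovered chips agree on the original bit; majority vote for safety.
    return 1 if sum(recovered) > len(recovered) // 2 else 0

code = [1, 0, 0, 1, 0]          # spreading (chip) code from the figure
tx = spread(1, code)            # -> [0, 1, 1, 0, 1], matching the figure
print("transmitted chips:", tx)
print("recovered bit    :", despread(tx, code))
```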

1. Frequency Hopped Spread Spectrum (FHSS)


• The fundamental concept of this technique is to break a message into
fixed-size blocks. It is used for secure communication in military
(battle) environments.
• The data in each block is transmitted in sequence, each block on a
different carrier frequency.
• A pseudorandom code is used to generate a unique frequency hopping
sequence.
• In frequency hopping, the sequence of frequencies has to be known by
both the transmitter and the receiver prior to the beginning of the
transmission.
• The transmitter sends one block on one radio frequency carrier, then
switches (hops) to the next frequency in the sequence, and so on.
• As soon as a block is received, the receiver switches to the next
frequency in the sequence.
• Each transmitter in the system has a different hopping (switching)
sequence to prevent interference with other subscribers using the same
radio channel frequency.
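A minimal sketch of the shared hop pattern just described: both ends seed the same pseudorandom generator, so the receiver switches frequencies in step with the transmitter. All numeric values (base frequency, spacing, number of channels, seed) are illustrative and not taken from any standard.

```python
# Sketch of a frequency-hopping schedule shared by transmitter and receiver.
import random

def hop_sequence(seed: int, n_hops: int, base_hz: float = 902e6,
                 spacing_hz: float = 500e3, n_channels: int = 50):
    rng = random.Random(seed)                 # shared seed = shared hop pattern
    return [base_hz + rng.randrange(n_channels) * spacing_hz
            for _ in range(n_hops)]

tx_hops = hop_sequence(seed=1234, n_hops=5)
rx_hops = hop_sequence(seed=1234, n_hops=5)   # receiver follows the same hops
print([f"{f / 1e6:.1f} MHz" for f in tx_hops])
print("receiver in step with transmitter:", tx_hops == rx_hops)
```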


Figure 5.11 Block diagram Frequency Hopped Spread Spectrum


2. Direct Sequence Spread Spectrum (DSSS)
• In this system, a high bit rate pseudorandom code is added to the
low bit rate information signal to generate a high bit rate pseudorandom
signal, closely resembling noise, that contains both the pseudorandom
code and the original data signal.
• The pseudorandom code must be known to both the Transmitter and
the intended receiver.
• When the receiver detects the direct sequence transmission, it simply
subtracts the pseudorandom signal from the composite received signal to
extract the information.
• In CDMA the radio frequency bandwidth is divided into a few broad-
band radio channels that have a much higher bandwidth than the digi-
talized voice signal.
• The voice signal is used to generate the high bit rate signal, which is
transmitted in such a way that it occupies the entire broadband radio
channel.
• Adding the high bit rate pseudorandom signal to the voice information
makes the signal more robust and less susceptible to interference
(allowing low power transmission and less expensive receivers).


Figure 5.12 Direct Sequence Spread Spectrum


5.4.6 Advantages
• Efficient practical utilization of fixed frequency spectrum.
• Flexible allocation of resources.
• Many users of CDMA use the same frequency, TDD or FDD may be
used.
• Multipath fading may be substantially reduced because of large signal
bandwidth.
• No absolute limit on the number of users, Easy addition of more users.
• Impossible for hackers to decipher the code sent
• Better signal quality
• No sense of handoff when changing cells
• CDMA is compatible with other cellular technologies this allows for
nationwide roaming.
5.4.7 Disadvantages
• As the number of users increases, the overall quality of service
decreases.
• Self-jamming.
• Near- Far- problem arises.
5.4.8 Uses of CDMA
• One of the early applications for code division multiplexing is in GPS.
This predates and is distinct from its use in mobile phones.
• The Qualcomm standard IS-95, marketed as CDMA One.
• The Qualcomm standard IS-2000, known as CDMA2000. This standard


is used by several mobile phone companies, including the Globalstar
satellite phone network.
• The UMTS 3G mobile phone standard, which uses W-CDMA.
• CDMA has been used in the OmniTRACS satellite system for
transportation logistics.
5.4.9 Drawbacks caused by the CDMA near far problem
One of the problems encountered with CDMA is known as the “Near
Far” problem. This CDMA near far problem is a key element in CDMA and
as a result close control of the power within CDMA handsets is required.
CDMA near far problem
• The CDMA near far problem arises because handsets may be anywhere
within the particular cell boundaries. Some handsets will be close to
the base station, whereas others will be much further away.
CDMA near far problem solution
• The CDMA near far problem is a serious problem, and requires an
effective means of overcoming the problem for CDMA to operate
correctly.
• The schemes used to overcome the CDMA near far problem utilise fast
and accurate power control systems.
• The CDMA near far problem is resolved in systems such as CDMA
One, CDMA2000 and W-CDMA by using sophisticated power control
schemes to ensure that the power levels at the base station fall within
a given band. Although there are some penalties to be paid for these
schemes to overcome the CDMA near far problem, they operate well
and enable significant gains to be made by using CDMA over previous
technologies.
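The fast power-control remedy described above can be pictured as a simple feedback loop; the step size, target level and path loss in the sketch below are illustrative values, not parameters from IS-95, CDMA2000 or W-CDMA.

```python
# Sketch of the fast power-control idea used against the near-far problem:
# the base station compares each handset's received power with a target
# and commands 1 dB up/down steps. All values are illustrative.
def power_control_step(received_dbm: float, target_dbm: float,
                       tx_power_dbm: float, step_db: float = 1.0) -> float:
    """Return the handset's new transmit power after one control command."""
    if received_dbm > target_dbm:
        return tx_power_dbm - step_db     # near handset: power down
    return tx_power_dbm + step_db         # far handset: power up

# A handset arriving 6 dB hot steps down toward the target, then dithers
# about 1 dB around it.
tx, path_loss, target = 20.0, -90.0, -76.0
for _ in range(8):
    rx = tx + path_loss
    tx = power_control_step(rx, target, tx)
print(f"settled transmit power: {tx:.0f} dBm")
```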

5.5 CELLULAR CONCEPTS


5.5.1. Principles of Cellular Network
Cellular radio is a technique that was developed to increase
the capacity available for mobile radio telephone service. Prior to the
introduction of cellular radio, mobile radio telephone service was only
provided by a high-power transmitter/receiver. A typical system would
support about 25 channels with an effective radius of about 80 km. The
way to increase the capacity of the system is to use lower-power systems
with shorter radius and to use numerous transmitters/receivers.
5.5.2 Cellular Network Organization


The essence of a cellular network is the use of multiple low-power
transmitters, on the order of 100 W or less. Because the range of such a
transmitter is small, an area can be divided into cells, each one served by
its own antenna. Each cell is allocated a band of frequencies and is served
by a base station, consisting of transmitter, receiver, and control unit.
Adjacent cells are assigned different frequencies to avoid
interference or crosstalk. However, cells sufficiently distant from each oth-
er can use the same frequency band. The first design decision to make is
the shape of cells to cover an area. A matrix of square cells would be the
simplest layout to define (figure 5.13(a)). However, this geometry is not
ideal. If the width of a square cell is d, then a cell has four neighbors at a
distance d and four neighbors at a distance √2 d. As a mobile user within
a cell moves toward the cell’s boundaries, it is best if all of the adjacent
antennas are equidistant. This simplifies the task of determining when to
switch the user to an adjacent antenna and which antenna to choose. A
hexagonal pattern provides for equidistant antennas (figure 5.13 (b)). The
radius of a hexagon is defined to be the radius of the circle that circum-
scribes it (equivalently, the distance from the center to each vertex; also
equal to the length of a side of a hexagon). For a cell radius R, the distance
between the cell centre and each adjacent cell centre is d =√3R.

Figure 5.13 Cellular Geometries


5.5.3 Frequency Reuse
In a cellular system, each cell has a base transceiver. The transmission
power is carefully controlled (to the extent that it is possible in
the highly variable mobile communication environment) to allow
communication within the cell using a given frequency band while
limiting the power at that frequency that escapes the cell into adjacent
cells. Nevertheless, it is not practical to attempt to use the same frequency


band in two adjacent cells. Instead, the objective is to use the same fre-
quency band in multiple cells at some distance from one another. This
allows the same frequency band to be used for multiple simultaneous
conversations in different cells. Within a given cell, multiple
frequency bands are assigned, the number of bands depending on the traffic
expected. A key design issue is to determine the minimum separation
between two cells using the same frequency band, so that the two cells
do not interfere with each other. Various patterns of frequency reuse are
possible, figure 5.14 shows some examples. If the pattern consists of N
cells and each cell is assigned the same number of frequencies, each cell
can have K/N frequencies, where K is the total number of frequencies al-
lotted to the system. For AMPS, K = 395, and N = 7 is the smallest pattern
that can provide sufficient isolation between two uses of the same fre-
quency. This implies that there can be at most 57 frequencies per cell on
average.
In characterizing frequency reuse, the following parameters are
commonly used:
D= minimum distance between centers of cells that use the same frequen-
cy band (called co channels)
R= radius of a cell
d= distance between centers of adjacent cells (d = √3R)
N = number of cells in a repetitious pattern (each cell in the pattern uses a
unique set of frequency bands), termed the reuse factor.
Fig. 5.14 Frequency Reuse pattern


In a hexagonal cell pattern, only the following values of N are possible:
N = I² + J² + (I × J)
where I, J = 0, 1, 2, 3, ...
Hence, possible values of N are 1, 3, 4, 7, 9, 12, 13, 16, 19, 21, and so
on. The following relationship holds:
D/R = √(3N)
The frequency reuse factor (FRF) can be expressed as
FRF = N / C
where N = total number of full duplex channels in an area
C = total number of full duplex channels in a cell.
A hexagonal cell has exactly six equidistant neighbouring cells, and the
lines joining the centre of any cell with the centres of its neighbouring
cells are separated by multiples of 60°; hence only certain cluster sizes
and cell layouts are possible. In order to connect cells without gaps
between adjacent cells, the geometry of the hexagon is such that the number
of cells per cluster can have only values that satisfy the equation,

N = i² + ij + j²

Where, N = number of cells per cluster.


i, j = non-negative integer values
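These relations can be tabulated with a short script; a minimal sketch, using the AMPS figure K = 395 quoted earlier, lists the valid cluster sizes, the reuse ratio D/R = √(3N) and the average channels per cell K/N.

```python
# Valid cluster sizes N = i^2 + i*j + j^2, reuse ratio D/R = sqrt(3N),
# and average channels per cell K/N for the AMPS example (K = 395).
from math import sqrt

def cluster_sizes(max_ij: int = 4):
    return sorted({i * i + i * j + j * j
                   for i in range(max_ij + 1)
                   for j in range(max_ij + 1) if i + j > 0})

K = 395                                   # total channels in the system
for N in cluster_sizes():
    print(f"N = {N:2d}  D/R = {sqrt(3 * N):5.2f}  channels/cell ~ {K // N}")
```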
Figure 5.15 Locating First Tier Co-Channel cells

The process of finding the tier with the nearest co-channel cells is
as follows and shown in figure 5.15:
(i) Move i cells through the centres of successive cells.
(ii) Turn 60° in a counter-clockwise direction.
(iii) Move j cells forward through the centres of successive cells.
5.5.3.1 Increasing Capacity
In time, as more customers use the system, traffic may build up so
that there are not enough frequency bands assigned to a cell to handle its
calls. A number of approaches have been used to cope with this situation,
including the following:
5.5.3.2 Adding new channels
Typically, when a system is set up in a region, not all of the
channels are used, and growth and expansion can be managed in an
orderly fashion by adding new channels.
5.5.3.3 Frequency borrowing
In the simplest case, frequencies are taken from adjacent cells by
congested cells. The frequencies can also be assigned to cells dynamically.
5.5.4. Interference
The two major types of interference occurs within the
cellular telephone system are co-channel interference and adjacent
channel interference.
5.5.4.1 Co-channel interference


In a given coverage area, two cells using the same set of frequencies
are called co-channel cells, and interference between them is called
co-channel interference. To reduce co-channel interference, a certain
minimum distance must be maintained between co-channel cells.

Figure 5.16 Co-Channel Interference


Figure 5.16 shows co-channel interference. The base station
located in cell A of cluster 1 is using frequency f1 , and at the same
time, the base station in cell A of cluster 2 is using the same frequency.
Although the two cells are in different clusters, they both use the A-group
of frequencies. The mobile unit in cluster 2 is receiving the same frequency
from two different base stations in two different clusters. Although the
mobile unit is under the control of the base station in cluster 2, the signal
from cluster 1 is received at a lower power level as co-channel interference.
Fig 5.17 Co-Channel reuse Ratio


In a cellular system where all cells are approximately the same size,
co-channel interference is dependent on the radius (R) of the cells and
the distance to the center of the nearest co-channel cell (D) as shown in
figure 5.17. Increasing the D/R ratio (sometimes called the co-channel reuse
ratio) increases the spatial separation between co-channel cells relative to
the coverage distance. Therefore, increasing the co-channel reuse ratio (Q) can
reduce co-channel interference.
For a hexagonal geometry, the co-channel reuse ratio is determined
by
Q = D/R
where Q = co-channel reuse ratio (unitless)
D = a distance to center of the nearest co-channel cell (kilometers)
R = a cell radius (kilometers)
The smaller the value of Q, the larger will be the channel
capacity. However, a large value of Q reduces co-channel interference
and, thus, improves the overall transmission quality.

5.5.4.2. Adjacent-Channel Interference


Adjacent-channel interference occurs when transmissions from
adjacent channels (channels next to one another in the frequency do-
main) interfere with each other.
Adjacent-channel interference results from imperfect filters in receivers
that allow nearby frequencies to enter the receiver. Adjacent-channel in-
terference is most prevalent when an adjacent channel is transmitting very
close to a mobile unit's receiver at the same time the mobile unit is trying
to receive transmissions from the base station on an adjacent frequency.
This is called the near-far effect and is most prevalent when a mobile
unit is receiving a weak signal from the base station. Adjacent-channel
interference is illustrated in Figure 5.18. Mobile unit 1 is receiving
frequency f1 from base station A. At the same time, base station A is
transmitting frequency f2 to mobile unit 2. Because mobile unit 2 is much
farther from the base station than mobile unit 1, f2 is transmitted at a
much higher power level than f1.
Fig 5.18 Adjacent Channel interference


Mobile unit 1 is located very close to the base station, and f2 is
located next to f1 in the frequency spectrum (i.e., the adjacent channel);
hence, mobile unit 1 receives f2 at a much higher power level than f1.
Because of the high power level, the filters in mobile unit 1 cannot block
all the energy from f2, and the signal intended for mobile unit 2 interferes
with mobile unit 1's reception of f1. f1 does not interfere with mobile unit
2's reception because f1 is received at a much lower power level than f2.
Using precise filtering and making careful channel assignments, adjacent-
channel interference can be minimized in receivers. Maintaining a reason-
able frequency separation between channels in a given cell can also reduce
adjacent-channel interference. However, if the reuse factor is small, the
separation between adjacent channels may not be sufficient to maintain
an adequate adjacent-channel interference level.

5.5.5. Cell splitting and Cell sectoring


There are two methods of increasing the capacity of a cellular
telephone system,
1. Cell splitting
2. Sectoring.
Cell splitting allows for an orderly growth of a cellular system,
whereas sectoring utilizes directional antennas to reduce co-channel
and adjacent-channel interference and allow channel frequencies to be
reassigned (reused).
5.5.5.1 Cell splitting


In practice, the distribution of traffic and topographic features is
not uniform, and this presents opportunities of capacity increase. Cells
in areas of high usage can be split into smaller cells. Generally, the
original cells an; about 6.5 to 13 km in size. The smaller cells
can themselves be split; however, 1.5-km cells are close to the
practical minimum size as a general solution (but see the subsequent
discussion of microcells). To use a smaller cell, the power level used must
be reduced to keep the signal within the cell. Also, as the mobile units
move, they pass from cell to cell, which requires transferring of the call
from one base transceiver to another. This process is called a hand off. As
the cells get smaller, these handoffs become much more frequent.
Figure 5.19 indicates schematically how cells can be divided to
provide more capacity. A radius reduction by a factor of F reduces the
coverage area and increases the required number of base stations by a
factor of F².
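A one-line calculation makes the F² scaling concrete; the site counts below are illustrative.

```python
# Cell splitting: cutting the cell radius by a factor F shrinks each cell's
# area by F^2, so covering the same region needs about F^2 times as many
# base stations.
def base_stations_after_split(current_sites: int, radius_factor: float) -> int:
    return round(current_sites * radius_factor ** 2)

print(base_stations_after_split(current_sites=10, radius_factor=2))  # 40
print(base_stations_after_split(current_sites=10, radius_factor=4))  # 160
```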
5.5.5.2. Cell sectoring
With cell sectoring, a cell is divided into a number of wedge shaped
sectors, each with its own set of channels, typically 3 or 6 sectors per
cell. Each sector is assigned a separate subset of the cell’s channels, and
directional antennas at the base station are used to focus on each sector.
Fig 5.19 (a) Cell splitting, (b) 120° cell sectoring and (c) 60° cell sectoring
5.5.6. Channel assignment or allocation
Channel assignment affects the performance of the system, espe-
cially when it comes to handoffs. There are several channel assignment
strategies. We will discuss two basic types:


a) Fixed Channel Assignment(FCA)
b) Dynamic Channel Assignment (DCA).
a) Fixed Channel Assignment (FCA)
In this channel assignment, channels are pre-allocated to different
cells meaning that each cell is assigned a specific number of channels and
the frequencies of these channels are set. Such a channel assignment has
the following aspects:
• Any call attempt in a cell after all channels of that cell become
occupied gets blocked (meaning that the caller gets a signal indicating
that all channels are occupied).
• Very simple and requires the least amount of processing.
• A variation of this method is the Borrowing Strategy
• Cells in this strategy are allowed to borrow channels from adjacent
cells if their channels are fully occupied while adjacent cells have free
channels
• The MSC (Mobile Switching Center) monitors the process and gives permis-
sion to the borrowing cell to borrow channels, ensuring that (i) the donating
cell is not affected by the borrowing process, and (ii) no interference will
occur by moving the channel from one cell to another.
b) Dynamic Channel Assignment (DCA)
In this channel assignment, channels are NOT pre-allocated to any
cells meaning that any channel can be allocated to any desired cell during
the operation of the system. Such a channel assignment has the following
aspects:
• MSC monitors all cells and all channels
• Each time a call request is made, serving BS requests a channel from
the MSC
• MSC runs an algorithm that takes into account
• Possibility of future blocking in cells
• Frequency being used for channel
• The reuse distance of the channel
MSC assigns a channel only if it is not in use and if it will not cause
co-channel interference with any cell in range. This algorithm provides
higher capacity (less blocking) but requires huge computational power.
• MSC collects real-time data on channel occupancy, traffic distribution,
and radio signal strength indicators (RSSI).
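A minimal sketch of the reuse-distance check an MSC might run before granting a channel under DCA; the cell positions, reuse distance and channel numbers below are made up for illustration.

```python
# Dynamic channel assignment check: grant a channel only if no cell within
# the reuse distance already uses it (illustrative data, distances in km).
from math import dist

cells = {"A": (0.0, 0.0), "B": (3.0, 0.0), "C": (9.0, 0.0)}
in_use = {"A": {1, 2}, "B": {3}, "C": {1}}      # channels currently assigned
REUSE_DISTANCE = 5.0                            # minimum co-channel separation

def can_assign(cell: str, channel: int) -> bool:
    for other, channels in in_use.items():
        if other == cell:
            continue
        if channel in channels and dist(cells[cell], cells[other]) < REUSE_DISTANCE:
            return False                        # would cause co-channel interference
    return channel not in in_use[cell]

print(can_assign("B", 1))   # False: cell A (3 km away) already uses channel 1
print(can_assign("C", 2))   # True : nearest user of channel 2 is far enough away
```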


5.5.7 Handoff Strategies
Handoff (H.O.) is the process of transferring an active call from one
cell to another as the mobile unit moves from the first cell to the other
cell without disconnecting the call. The amount of received power by the
mobile phone or the amount of received power by the tower or both are
usually used to determine whether a handoff is necessary or not.
So handoff is the way to maintain the call connection during a change
in base station. It happens in two different manners: one is soft hand-
off and the other is hard handoff.
a. Soft handoff defines the ability to select instantaneous received sig-
nals from a variety of base stations. It allows calls to continue
without termination or any interference. A soft handoff normally
takes approximately 200 ms. Generally, hard handoff occurs in GSM
(Global System for Mobile) and soft handoff occurs in CDMA (Code
Division Multiple Access).
b. Hard handoff is applicable in a GSM system. It occurs when the
mobile station is disconnected from the serving base station 1 before
connecting with the neighboring base station 2.
In the case of hard handoffs, a Mobile terminal is served by
only one base station (or by only one access network in the case of the
vertical handoff) at a time. It connects with the new base station or the
new network only after having broken its connection with the serving base
station. This is referred to as “break before make connection”
• Most systems give higher priority to handoff over call initiation (it
is more annoying to have an active call disconnected than to have
a new call blocked).
• Handoffs must be completed successfully, must occur as
infrequently as possible, and must be unnoticeable to the user, i.e.,
the user should not notice the handoff.
To meet these requirements, two power levels are defined:
• Minimum acceptable signal to maintain the call (PMinimum to maintain call)
• This is the minimum power received by the mobile phone or tower
that allows the call to continue.
• Once the signal drops below this level, it becomes impossible to
maintain the active call because the signal is too weak (noise level
becomes high relative to signal level).
• Handoff threshold (PThreshold)
• This power limit is usually selected to be a few dB (5 dB to 10 dB)
above the minimum acceptable signal to maintain the call.
• The margin Δ = PThreshold - PMinimum to maintain call should
not be too large or too small.
If it is too large, unnecessary handoffs will occur because the hand-
off threshold is high and will be reached very often even while the mobile
phone is still deep inside the serving cell.
• Unnecessary handoffs put a lot of strain (a lot of work) on the MSC and
system and reduce network capacity because of the need of free chan-
nels in other cells.
• Too small, calls may get dropped before a successful handoff takes
place because not enough time is available for the handoff where the
signal power will drop very quickly from the handoff threshold to the
minimum power to maintain a call.
The following two figures, 5.20 and 5.21, show two handoff
situations. In the first case, a successful handoff takes place where the
mobile phone is switched from one tower to another, while in the second
case, the signal power drops to the minimum value needed for maintaining
a call and the call is dropped without a handoff.

Figure 5.20 Successful Handoff


Figure 5.21 Unsuccessful Handoff

A major problem with this approach to handoff decision is that the


received signals of both base stations often fluctuate. When the mobile is
between the base stations, the effect is to cause the mobile to wildly switch
links with either base station. The base stations bounce the link with the
mobile back and forth. Hence the phenomenon is called ping-ponging.
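One common way to curb this ping-ponging is to add a hysteresis margin on top of the threshold comparison described above: switch only when the neighbour is stronger than the serving cell by at least a margin. The sketch below is illustrative, and the dB values are not taken from any particular standard.

```python
# Handoff decision with a hysteresis margin to suppress ping-ponging.
def should_handoff(serving_dbm: float, neighbour_dbm: float,
                   minimum_dbm: float = -102.0, delta_db: float = 6.0) -> bool:
    if serving_dbm < minimum_dbm:
        return True                          # signal too weak to keep the call
    return neighbour_dbm > serving_dbm + delta_db

print(should_handoff(serving_dbm=-88.0, neighbour_dbm=-86.0))   # False: within margin
print(should_handoff(serving_dbm=-95.0, neighbour_dbm=-87.0))   # True : margin exceeded
```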
Four basic steps involved in handoff process
1. Initiation: Either the mobile unit or the network finds the need for a
handoff and initiates the necessary network procedures.
2. Resource reservation: Appropriate network procedures reserve the
resources required to support the handoff (i.e., a voice channel and
control channel).
3. Execution: The actual handover of control from one base station to
another base station takes place.
4. Completion: Unnecessary network resources are relinquished and
made available to other mobile subscribers.
5.6 MULTIPLE ACCESS TECHNIQUES FOR WIRELESS
COMMUNICATION
In wireless communication systems it is often desirable to allow the
subscriber to send simultaneously information to the base station while
receiving information from the base station.
A cellular system divides any given area into cells where a
mobile unit in each cell communicates with a base station. The main aim
in the cellular system design is to be able to increase the capacity of the
channel i.e. to handle as many calls as possible in a given bandwidth with
a sufficient level of quality of service. There are several different ways to
allow access to the channel. This includes mainly the following
1. Frequency Division Multiple-Access (FDMA)


2. Time Division Multiple-Access (TDMA)
3. Code Division Multiple-Access (CDMA)
4. Space Division Multiple-Access (SDMA)
Table 1 Multiple Access techniques in different wireless
communication systems

FDMA, TDMA and CDMA are the three major multiple access
techniques that are used to share the available bandwidth in a wireless
communication system.
Depending on how the available bandwidth is allocated to the users
these techniques can be classified as narrowband and wideband systems.
Narrowband Systems
The term narrowband is used to relate the bandwidth of the
single channel to the expected coherence bandwidth of the channel. The
available spectrum is divided in to a large number of narrowband
channels. The channels are operated using FDD. In narrow band FDMA,
a user is assigned a particular channel which is not shared by other users
in the vicinity and if FDD is used then the system is called FDMA/FDD.
Narrow band TDMA allows users to use the same channel but allocates
a unique time slot to each user on the channel, thus separating a small
number of users in time on a single channel. For narrow band TDMA,
there generally are a large number of channels allocated using either FDD
or TDD, each channel is shared using TDMA. Such systems are called
TDMA/FDD and TDMA/TDD access systems.

Wideband Systems
In wide band systems, the transmission bandwidth of a single
channel is much larger than the coherence bandwidth of the channel.
Thus, multipath fading does not greatly affect the received signal within
a wideband channel, and frequency selective fades occur only in a small
fraction of the signal bandwidth.
5.6.1. Frequency Division Multiple Access
This was the initial multiple-access technique for cellular sys-
tems in which each individual user is assigned a pair of frequencies
while making or receiving a call, as shown in figure 5.22. One frequency
is used for the downlink and one for the uplink. This is called frequency di-
vision duplexing (FDD). That allocated frequency pair is not used in the
same cell or adjacent cells during the call so as to reduce the co-channel
interference. Even though the user may not be talking, the spectrum
cannot be reassigned as long as a call is in place. Different users can use
the same frequency in the same cell except that they must transmit at
different times.

Fig 5.22 The basic concept of FDMA

The features of FDMA


The FDMA channel carries only one phone circuit at a time. If an
FDMA channel is not in use, then it sits idle and it cannot be used by
other users to increase share capacity. After the assignment of the voice
channel the BS and the MS transmit simultaneously and continuous-
ly. The bandwidths of FDMA systems are generally narrow i.e. FDMA is
usually implemented in a narrow band system. The symbol time is large
compared to the average delay spread. The complexity of the FDMA mobile
systems is lower than that of TDMA mobile systems. FDMA requires tight
filtering to minimize the adjacent channel interference.
FDMA/FDD in AMPS
The first U.S. analog cellular system, AMPS (Advanced Mobile Phone
System) is based on FDMA/FDD. A single user occupies a single channel
while the call is in progress, and the single channel is actually two sim-
plex channels which are frequency duplexed with a 45 MHz split. When a
call is completed or when a handoff occurs the channel is vacated so that
another mobile subscriber may use it. Multiple or simultaneous users
are accommodated in AMPS by giving each user a unique signal. Voice
signals are sent on the forward channel from the base station to the
mobile unit, and on the reverse channel from the mobile unit to the base
station. In AMPS, analog narrowband frequency modulation (NBFM) is
used to modulate the carrier.
FDMA/TDD in CT2
Using FDMA, CT2 system splits the available bandwidth into
radio channels in the assigned frequency domain. In the initial call setup,
the handset scans the available channels and locks on to an unoccupied
channel for the duration of the call. Using TDD (Time Division Duplexing),
the call is split into time blocks that alternate between transmitting and
receiving.
FDMA and Near-Far Problem
The near-far problem is one of detecting or filtering out a
weaker signal amongst stronger signals. The near-far problem is
particularly difficult in CDMA systems where transmitters share
transmission frequencies and transmission time. In contrast, FDMA and
TDMA systems are less vulnerable. FDMA systems offer different kinds
of solutions to near-far challenge. Here, the worst case to consider is
recovery of a weak signal in a frequency slot next to strong signal. Since
both signals are present simultaneously as a composite at the input of a
gain stage, the gain is set according to the level of the stronger signal; the
weak signal could be lost in the noise floor. Even if subsequent stages have
a low enough noise floor to provide adequate sensitivity, the weak signal
may already have been masked at the front end.
5.6.2 Time Division Multiple Access
In digital systems, continuous transmission is not required
because users do not use the allotted bandwidth all the time. In such
cases, TDMA is a complimentary access technique to FDMA. Global
Systems for Mobile communications (GSM) uses the TDMA technique. In
TDMA, the entire bandwidth is available to the user but only for a finite
period of time. In most cases the available bandwidth is divided into fewer
channels compared to FDMA and the users are allotted time slots during
which they have the entire channel bandwidth at their disposal, as shown
in figure 5.23.

Figure 5.23 Basic concept of TDMA

TDMA requires careful time synchronization since users share the


bandwidth in the frequency domain. Since the number of channels is small,
inter-channel interference is almost negligible. TDMA uses different time
slots for transmission and reception. This type of duplexing is referred to
as Time division duplexing (TDD).
The features of TDMA


TDMA shares a single carrier frequency with several users where
each user makes use of non overlapping time slots. The number of time
slots per frame depends on several factors such as modulation technique,
available bandwidth etc. Data transmission in TDMA is not continuous
but occurs in bursts. This results in low battery consumption since the
subscriber transmitter can be turned OFF when not in use. Because of a
discontinuous trans- mission in TDMA the handoff process is much sim-
pler for a subscriber unit, since it is able to listen to other base stations
during idle time slots. TDMA uses different time slots for transmission and
reception; thus duplexers are not required. TDMA has the advantage that it
is possible to allocate different numbers of time slots per frame to different
users. Thus bandwidth can be supplied on demand to different users by
concatenating or reassigning time slots based on priority.
TDMA/FDD in GSM
As discussed earlier, GSM is widely used in Europe and other parts
of the world. GSM uses a variation of TDMA along with FDD. GSM digi-
tizes and compresses data, then sends it down a channel with two other
streams of user data, each in its own time slot. It operates at either the 900
MHz or 1800 MHz frequency band. Since many GSM network operators
have roaming agreements with foreign operators, users can often continue
to use their mobile phones when they travel to other countries.
TDMA/TDD in DECT
DECT is a pan European standard for the digitally enhanced cord-
less telephony using TDMA/TDD. DECT provides 10 FDM channels in the
band 1880-1900 MHz. Each channel supports 12 users through TDMA
for a total system load of 120 users. DECT supports handover; users can
roam over from cell to cell as long as they remain within the range of the
system. DECT antenna can be equipped with optional
spatial diversity to deal with multipath fading.
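The 120-user figure quoted for DECT follows directly from the numbers above, as the short check below shows.

```python
# DECT capacity check: 10 FDM carriers, each shared by 12 users via TDMA/TDD.
fdm_channels = 10
users_per_channel = 12
print("total simultaneous users:", fdm_channels * users_per_channel)   # 120
```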
5.6.3 Space Division Multiple Access
SDMA utilizes the spatial separation of the users in order to opti-
mize the use of the frequency spectrum. A primitive form of SDMA is when
the same frequency is reused in different cells in a cellular wireless
network. The radiated power of each user is controlled by space division
multiple access. SDMA serves different users by using spot
beam antenna. These areas may be served by the same frequency or differ-
ent frequencies. However for limited co-channel interference it is required
that the cells be sufficiently separated. This limits the number of cells a
region can be divided into and hence limits the frequency re-use factor. A
more advanced approach can further increase the capacity of the network.
This technique would enable frequency re-use within the cell. In a practi-
cal cellular environment it is improbable to have just one transmitter fall
within the receiver beam width. Therefore it becomes imperative to use
other multiple access techniques in conjunction with SDMA. When differ-
ent areas are covered by the antenna beam, the frequency can be reused, in
which case TDMA or CDMA is employed within each beam; for different
frequencies, FDMA can be used.
5.6.4 Code Division Multiple Access
In CDMA, the same bandwidth is occupied by all the users, however
they are all assigned separate codes, which differentiates them from each
other (shown in figure 5.24). CDMA utilize a spread spectrum technique
in which a spreading signal is used to spread the narrow band message
signal.

Figure 5.24 Basic concepts of CDMA

Direct Sequence Spread Spectrum (DS-SS) is the most commonly


used technology for CDMA. In DSSS, the message signal is multiplied by a
Pseudo Random Noise Code. Each user is given his own codeword which
is orthogonal to the codes of other users and in order to detect the user,
the receiver must know the codeword used by the transmitter. There are,
however, two problems in such systems which are discussed in the sequel.
1. CDMA/FDD in IS-95
In this standard, the frequency range is: 869-894 MHz (for Rx) and
824-849 MHz (for Tx). In such a system, there are a total of 20 channels
and 798 users per channel. For each channel, the bit rate is 1.2288 Mbps.
For orthogonality, it usually combines 64 Walsh-Hadamard codes and
an m-sequence.
2. CDMA and Self-interference Problem
In CDMA, self-interference arises from the presence of delayed
replicas of signal due to multipath. The delays cause the spreading
sequences of the different users to lose their orthogonality, as by design
they are orthogonal only at zero phase offset. Hence in despreading a given
user’s waveform, nonzero contributions to that user’s signal arise from the
transmissions of the other users in the network. This is distinct from both
TDMA and FDMA, wherein for reasonable time or frequency guard bands,
respectively, orthogonality of the received signals can be preserved.
3. CDMA and Near-Far Problem
The near-far problem is a serious one in CDMA. This problem arises
from the fact that signals closer to the receiver of interest are received with
smaller attenuation than are signals located further away.
Therefore the strong signal from the nearby transmitter will mask
the weak signal from the remote transmitter. In TDMA and FDMA, this is
not a problem since mutual interference can be filtered. In CDMA,
However, the near-far effect combined with imperfect orthogonality
between codes (e.g. due to different time shifts), leads to substantial
interference. Accurate and fast power control appears essential to ensure
reliable operation of multiuser DS-CDMA systems.
5.6.5 Hybrid Spread Spectrum Techniques
The hybrid combinations of FHMA, CDMA and SSMA result in
hybrid spread spectrum techniques that provide certain advantages. These
hybrid techniques are explained below,
1. Hybrid FDMA/CDMA (FCDMA)


An alternative to the CDMA technique in which the available
wideband spectrum is divided into a smaller number of sub spectra with
smaller bandwidths. The smaller sub channels become narrow band
CDMA systems with processing gain lower than the original CDMA
system. In this scheme the required bandwidth need not be contiguous
and different user can be allotted different sub spectrum bandwidths
depending on their requirements. The capacity of this hybrid FCDMA
technique is given by the sum of the capacities of a system operating in
the sub spectra.
2. Hybrid Direct Sequence/Frequency Hopped Multiple Access
Techniques (DS/FHMA)
A direct sequence modulated signal whose center frequency is made
to hop periodically in a pseudo random fashion is used in this technique.
One of the advantages of this technique is that it avoids the near-far effect.
However, frequency hopped CDMA systems are not adaptable to the soft
handoff process since it is difficult to synchronize the frequency hopped
base station receiver to the multiple hopped signals.
3. Time and Code Division Multiple Access (TCDMA)
In this TCDMA method different cells are allocated different
spreading codes. In each cell, only one user per cell is allotted a
particular time slot. Thus at any time only one user is transmitting in
each cell. When a handoff takes place the spreading code of that user is
changed to the code of the new cell. TCDMA also avoids near-far effect as
the number of users transmitting per cell is one.
4. Time Division Frequency Hopping (TDFH)
This technique has been adopted for the GSM standard, where the
hopping sequence is predefined and the subscriber is allowed to hop only
on certain frequencies which are assigned to a cell. The subscriber can
hop to a new frequency at the start of a new TDMA frame, thus avoiding
a severe fade or erasure event on a particular channel. This technique has
the advantage in severe multipath or when severe channel interference
occurs.
5.6.6 Comparison SDMA/TDMA/FDMA/CDMA

Idea
• SDMA: Segment space into cells/sectors.
• TDMA: Segment sending time into disjoint time-slots, demand driven or fixed patterns.
• FDMA: Segment the frequency band into disjoint sub-bands.
• CDMA: Spread the spectrum using orthogonal codes.
Terminals
• SDMA: Only one terminal can be active in one cell/one sector.
• TDMA: All terminals are active for short periods of time on the same frequency.
• FDMA: Every terminal has its own frequency, uninterrupted.
• CDMA: All terminals can be active at the same place at the same moment, uninterrupted.
Signal separation
• SDMA: Cell structure, directed antennas.
• TDMA: Synchronization in the time domain.
• FDMA: Filtering in the frequency domain.
• CDMA: Code plus special receivers.
Advantages
• SDMA: Very simple, increases capacity per km².
• TDMA: Established, fully digital, flexible.
• FDMA: Simple, established, robust.
• CDMA: Flexible, less frequency planning needed, soft handover.
Disadvantages
• SDMA: Inflexible, antennas typically fixed.
• TDMA: Guard space needed (multipath propagation), synchronization difficult.
• FDMA: Inflexible, frequencies are a scarce resource.
• CDMA: Complex receivers, needs more complicated power control for senders.
Comment
• SDMA: Only useful in combination with TDMA, FDMA or CDMA.
• TDMA: Standard in fixed networks, used in many mobile networks together with FDMA/SDMA.
• FDMA: Typically combined with TDMA (frequency hopping patterns) and SDMA (frequency reuse).
• CDMA: Still faces some problems, higher complexity, lowered expectations; will be integrated with TDMA/FDMA.
5.7 SATELLITE COMMUNICATION

Introduction
• A satellite can 'see' a very large area of the earth. Hence the satellite
can form the star point of a communication net, linking many users
together simultaneously, including users who are widely separated
geographically
• A satellite communication system is economical only where the
system is used continuously and by a large number of users
Block Diagram of a satellite communication
• The block diagram of a satellite communication system is as shown in
figure 5.25
Figure 5.25 Block diagram of a satellite communication
(Transmitting earth station with a highly directional parabolic dish antenna → 6 GHz uplink → satellite transponder → 4 GHz downlink → receiving earth station)


The important components of satellite communication are
1. Uplink model
2. Downlink model
3. Transponder
Uplink Model
The signal which is transmitted upwards to the satellite is
called the "uplink" and it is normally at a frequency of 6 GHz.
Downlink Model
The signal which is transmitted back to the receiving earth station
is called the "downlink" and it is normally at a frequency of
4 GHz.
Transponder
• Thus a satellite has to receive, process and transmit the signal. All
these functions are performed by a unit called “satellite transponder”
• A communication satellite has two sets of transponders, each set having
12 transponders, making a total of 24 transponders.
• Each transponder has a bandwidth of 36 MHz which is sufficient to
handle at least one TV-channel
• The uplink signal received by a transponder is weak and downlink
signal transmitted by the transponder is strong. Therefore, to avoid
interference between them, the uplink and downlink frequencies are
selected to be of different values.
• The operation of the satellite takes place at very high signal frequencies
in the microwave frequency range.
The typical bands of frequencies used for communication
satellites are as follows (uplink/downlink):
1. C-band - 6/4 GHz
2. Ku-band - 14/11 GHz
3. Ka-band - 30/20 GHz
• One of the advantages of operating at such a high frequency is
reduction in the size of antennas and other components of the system.
• Multiple access methods such as FDMA, TDMA and CDMA are used
to allow the access of a satellite to the maximum number of earth
stations.
• The power requirement of a satellite is satisfied by solar panels and a
set of nickel cadmium batteries, carried by the satellite itself.
5.8 SATELLITE SYSTEM LINK MODELS

A satellite system consists of three sections


1. An uplink
2. A Transponder
3. A downlink
5.8.1 Uplink Model

Figure 5.26 Block diagram of uplink model
(Baseband inputs → MUX → modulator with carrier oscillator → BPF → up converter (mixer, local oscillator and BPF) → high power amplifier → transmitting antenna)
Multiplexer
The baseband signals are first multiplexed, i.e., combined into a single
composite signal, and applied as the input to the modulator, i.e., as the
modulating signal.
Modulator
The modulator combines (mixes) the modulating signal and the
high frequency carrier signal and produces the modulated signal. This
modulation takes place at an intermediate frequency, which is lower than
the actual uplink frequency.
BPF
The band pass filter passes the intermediate frequency (IF) signal to the up
converter.
Up Converter
• It consists of a mixer, a local oscillator and a BPF. Since modulation takes
place at a frequency lower than the actual uplink frequency, a frequency
up converter is used to increase the frequency to the level of the uplink
frequency.
• The mixer output contains several frequency components, out of which
the "sum" component is selected by the BPF that follows the mixer.
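A minimal numerical sketch of the up-conversion step is given below; the 70 MHz IF and 5.93 GHz local-oscillator values are assumed for illustration only, chosen so that the selected sum component falls at the 6 GHz uplink frequency.

# Up-converter sketch: an ideal mixer produces sum and difference frequencies;
# the band-pass filter keeps the sum, which becomes the 6 GHz uplink carrier.
# The IF and LO values below are assumed for illustration.
f_if = 70e6          # modulated intermediate frequency (Hz)
f_lo = 5.93e9        # local-oscillator frequency (Hz)

mixer_products = (f_lo + f_if, f_lo - f_if)   # sum and difference components
f_uplink = max(mixer_products)                # BPF selects the sum component
print(f"selected uplink frequency: {f_uplink/1e9:.2f} GHz")   # -> 6.00 GHz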
Power Amplifier
The up converted signal is then passed through a power amplifier to
raise the signal to an adequate power level.
Antenna
• The transmitter output is coupled to the antenna. The antenna
transmits this signal at the uplink frequency to the satellite
transponder
• A highly directional parabolic dish antenna is used as a transmitting
antenna
5.8.2 Transponder
The combination of a transmitter and receiver in the satellite is
known as transponder. The block diagram of transponder is as shown in
figure 5.27

Figure 5.27 Block diagram of transponder
(Satellite antenna → diplexer → 6 GHz received signal → low noise amplifier → mixer with 2 GHz local oscillator → 4 GHz signal → high power amplifier → diplexer → satellite antenna)


• The basic functions of a satellite transponder are
1. Amplification
2. Frequency Translation
Diplexer
• It is used to separate the uplink and downlink frequencies. The 6 GHz
uplink signal received by the satellite antenna is routed to the low noise
amplifier.
• The 4 GHz signal from the power amplifier is connected to the antenna for
transmission.
Low Noise Amplifier
The low noise amplifier amplifies the received signal while adding very
little noise, and its output is applied as the input to the mixer.
Frequency converter
• The mixer and local oscillator together act as a frequency translator which
translates the 6 GHz uplink frequency into the 4 GHz downlink frequency.
• The difference frequency at the mixer output is taken and then applied as
the input to the power amplifier.
Power amplifier
The power amplifier increases the signal level and its output is applied to the
diplexer.
Satellite antenna
The same antenna is used for transmission and reception; by using
widely spaced frequencies for transmit and receive, interference between them
is avoided.
In practice one transponder handles only a single signal, but a satellite
cannot be installed for just one channel. Hence most satellites carry more
than one transponder, typically 12 to 24.
5.8.3 Downlink Model
Figure 5.28 below shows the satellite downlink model. It is basically
the receiver section of the earth station.
Figure 5.28 Block diagram of downlink model
(From satellite → receiving antenna → BPF → LNA → down converter (mixer, local oscillator and BPF) → demodulator → De-MUX → baseband outputs)


Antenna
A parabolic reflector horn type antenna is used for receiving the
signals. Thus it can receive the downlink signals from the satellite.
Band Pass filter (BPF)
The received signal is then passed through a band pass filter (BPF)
which allows only the downlink frequency signal to pass through a low
noise amplifier (LNA).
Low Noise Amplifier
Low noise amplifier is a specially designed amplifier that produces
a very low noise voltage. It is operated at extremely low temperatures to


minimize the thermal noise generation.
Down Converter
• The amplified signal is then passed through a down converter. It
consists of a mixer, a local oscillator and a BPF
• The frequency of the signal at the mixer output is equal to the difference
between the local oscillator frequency and the downlink frequency (4 GHz)
• The BPF after the mixer selects only the difference frequency, which is
equal to the intermediate frequency (IF)
Demodulator
The IF signal is then applied to the demodulator, which recovers the
original baseband signal.
Demultiplexer
The baseband signals are separated by the demultiplexer and
connected to various subscribers. These can be different telephone
channels.
5.9 EARTH STATION (OR) GROUND STATION

• A simplified block diagram of an earth station is as shown in figure


5.29
• The earth station is supposed to communicate with the satellite to
convey information from users to satellite and back from the satellite
to the users.
• In the early days of satellite communication, earth stations were located
in remote country locations, away from the cities. This was due to the huge
size of the antennas and their critical requirements
• But today earth stations are much less complex and the antennas used
are smaller in size. So many earth stations are found to be located on
top of tall buildings.
• The receiver section is nothing but the downlink model whereas the
transmitter section is the uplink model.
• A special microwave device called a diplexer is used for coupling the
transmitter output and the receiver input to the common antenna
• At any time the diplexer couples the antenna to either the transmitter or
the receiver and isolates the two sections from each other
Figure 5.29 Block diagram of Earth station
(Receiver section: antenna → diplexer (4 GHz) → BPF → LNA → mixer with local oscillator → BPF → demodulator → De-MUX → baseband signals.
Transmitter section: baseband signals → MUX → modulator → BPF → mixer with local oscillator → BPF → power amplifier → diplexer (6 GHz) → antenna.)
5.10 KEPLER’S LAWS

• A satellite remains in it’s orbit because the “Centrifugal force” caused


by it’s rotation around the earth is exactly balanced out by the earth’s
“gravitational pull”
• In all three laws discovered by kepler. They describe the shape of the
orbit, the velocity of the planet and the distance of the planet with
respect to sun
• The simplified statement of keplers three laws are as follows
1. Kepler’s First law
• It states that a satellite will orbit a primary body (earth) in an elliptical
orbit

Figure 5.30 Kepler’s First Law

• We can define the "eccentricity" of the ellipse, in terms of its semi-major
axis α and semi-minor axis β, as follows
ε = √(α² − β²) / α
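As a quick worked example (with assumed axis lengths), the eccentricity can be evaluated as follows.

import math

# Eccentricity from the semi-major (alpha) and semi-minor (beta) axes;
# the axis lengths are assumed values used only for illustration.
alpha, beta = 42164.0, 42000.0        # km
epsilon = math.sqrt(alpha**2 - beta**2) / alpha
print(f"eccentricity = {epsilon:.3f}")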
2. Kepler’s second law


• This law is also called as the law of areas

Figure 5.31 Kepler’s Second Law

• It states that during equal intervals of time, a satellite will sweep out
equal areas in the orbital plane, focused at the barycentre
Area A1 = Area A2

• This statement is true if and only if the velocity V1 corresponding to the
area A1 travel distance is greater than the velocity V2 corresponding to the
area A2 travel distance
• We can say that the velocity of the satellite will be greatest at the point
of closest approach to the earth (this point is called the "perigee") and the
satellite will travel at its slowest speed at the farthest point from the earth
(this point is called the "apogee").
3. Kepler’s third law
• This is also known as the ‘Harmonic law’
• This law states that the square of the periodic time of orbit is
proportional to the cube of the mean distance between the primary
and the satellite.
• This mean distance is equal to the semi-major axis. Hence, Kepler's
third law can be expressed mathematically as follows
α = A p^(2/3)
Where,
α = semi-major axis in kilometers
p = mean solar earth days (orbital period)
A = a unitless constant
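A worked example of this law is sketched below. The constant A ≈ 42241.1 is the value commonly quoted when α is in kilometres and p is in mean solar earth days; since the text does not give A, treat it as an assumption used only for illustration.

# Worked example of Kepler's third law, alpha = A * p**(2/3).
# A ~ 42241.1 is the constant commonly quoted for alpha in km and p in mean
# solar earth days (assumed here, since the text leaves A unspecified).
A = 42241.1
p_geo = 0.9972696          # one sidereal day expressed in mean solar days

alpha = A * p_geo ** (2.0 / 3.0)
print(f"geosynchronous semi-major axis ~ {alpha:.0f} km")          # ~ 42164 km
print(f"height above the earth's surface ~ {alpha - 6378:.0f} km") # ~ 35786 km
# (6378 km is the earth's equatorial radius; the result matches the
#  ~36,000 km height quoted for geostationary satellites.)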
The simplified statements of Kepler's three laws are as follows:


i. The planets move in an elliptical orbits with the sun at one focus of the
ellipse
ii. The line joining the sun and a planet sweeps out equal areas in equal
intervals of time.
iii. If we divide the square of the time of revolution of a planet by the cube
of its mean distance from the sun, then the quotient that we obtain is the
same for all the planets.
5.11 SATELLITE ORBITS
• In space, satellites move in certain specific paths. These paths are called
orbits. A satellite stays in an orbit because the two forces acting on it,
namely the outward centrifugal force and the inward gravitational force,
are balanced
• The selection of particular orbit depends on the following factors
i. Transmission path loss
ii. Delay time
iii. Earth coverage area
iv. Time period for which the satellite should be visible
Types of orbits
i. Synchronous orbit
ii. Polar orbit
iii. Inclined orbit
1. Synchronous orbit
• These satellites are at a height of about 36,000 km from the earth's
surface
• The orbit is parallel to the equator. Therefore it is also called an
"equatorial orbit" or "geostationary orbit"
• Angular velocity of the satellite = angular velocity of the earth's rotation
• Communication satellites are generally placed in such equatorial orbits
Figure 5.32 Synchronous orbit
(Satellite in an equatorial orbit, parallel to the equator of the earth)
Disadvantages
• Powerful rockets are required to launch a satellite in the orbit
• The satellites placed in these orbits cannot establish communication in
the polar region of the earth
2. Polar Orbit
• It passes over the N and S poles
• It’s height is 900-1000 km above earth
• It is used for navigation and remote sensing satellites
Figure 5.33 Polar orbit
(Satellite in an orbit passing over the N and S poles of the earth)

3. Inclined Orbits
• It orbits the earth at a particular angle of inclination, as shown in figure 5.34
• It provides communication coverage of polar regions
• Used for domestic communication
• This orbit is not used very frequently. The height of the inclined orbit is
generally set to cover the area of interest
Figure 5.34 Inclined orbit
(Satellite in an orbit inclined at an angle to the equatorial plane of the earth)
5.12 SATELLITE ELEVATION CATEGORIES


Satellites are generally categorized into three types depending on
their height above the earth
1. Low earth orbit (LEO) satellite
2. Medium earth orbit (MEO) satellite
3. Geosynchronous Earth orbit (GEO) Satellite
LEO satellites
• They revolve around the earth in orbits which are 500 to 2000 km
above the earth.
• They travel at a very high speed in order to avoid falling back to the
earth
• Most LEO satellites operate in the 1 GHz to 2.5 GHz frequency range.
Characteristics of LEO-Satellites
1. Low orbit height
2. One revolution is completed in 1 to 1.5 hours
3. Low launching cost
4. Low path-loss
5. Less transmitter power
6. Smaller antennas are to be used
7. Less weight
8. Small round trip delay
9. Covers smaller area of earth
10. Short life-span
MEO satellite
• They revolve around the earth in orbits which are 5000 km to
15000 km above the earth.
• They are used for the global positioning system (GPS)
• These satellites generally operate in the 1.2 GHz to 1.66 GHz frequency
range
Characteristics of MEO
1. Orbit height: Medium
2. Time taken for one revolution : 2 to 4 hours
3. Moderate launching cost
4. Moderate path loss
5. Moderate round trip delay
6. More earth surface is covered as compared to LEO-satellite


7. Longer life-span
Geostationary Satellite
• The satellites orbiting in equatorial orbit are called geostationary
satellites (or) Geosynchronous satellites
• These satellites are at about 36000 km above the earth surface
• They travel at the same angular rate as the rotation of the earth, and hence
complete one revolution around the earth in one day (24 hours). This is the
reason why geostationary satellites appear to be stationary
• These satellites operate in the frequency range of 2 to 18 GHz
Characteristics of GEO
1. The solar cells get the solar radiation for almost 99% of the orbit pe-
riod. Therefore energy storage is not necessary
2. The effect of magnetic field is absent
3. Three communication satellites can cover the entire surface of the earth
4. Mostly used as communication satellites
Disadvantages
1. These satellites need a high power transmitter and a more sensitive
receiver due to heavy path loss
2. Advanced technology is necessary for launching and maintenance
3. Propagation delay is long, nearly 500 to 600 ms
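The delay figure can be checked with a short calculation, assuming a slant range of roughly 36,000 km and free-space propagation: one earth-satellite-earth hop takes about 240 ms, so a two-way exchange (question and reply) sees roughly 480 ms, consistent with the 500 to 600 ms quoted above.

# Propagation delay for a geostationary link (assumed slant range of 36,000 km).
c = 3.0e8                         # speed of light (m/s)
slant_range_m = 36_000e3          # earth-station-to-satellite range (assumed)

one_way = slant_range_m / c       # earth station to satellite      ~120 ms
hop = 2 * one_way                 # uplink + downlink (one hop)     ~240 ms
echo = 2 * hop                    # two-way exchange (there and back) ~480 ms
print(f"one-way {one_way*1e3:.0f} ms, single hop {hop*1e3:.0f} ms, "
      f"round trip {echo*1e3:.0f} ms")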
5.13 SATELLITE FREQUENCY PLANS AND ALLOCATIONS


S.No | Frequency Band | Uplink Frequency (GHz) | Downlink Frequency (GHz) | Bandwidth (GHz) | Applications
1 | UHF-band | 0.3 | 0.2 | 0.02 | Military applications
2 | S-band | 4 | 2 | 0.5 | TV transmission used by Doordarshan to transmit 14 different language channels
3 | C-band | 6 | 4 | 0.5 | TV broadcast
4 | X-band | 8 | 6 | 0.5 | Ship and aircraft
5 | Ku-band | 14 | 11 | 0.5 | TV broadcast, non-military applications
6 | Ka-band (commercial) | 30 | 20 | 3 | Commercial broadcasting
7 | Ka-band (military) | 31 | 21 | 1 | Military
8 | V-band | 50 | 40 | 1 | Non-military applications
5.14 BLUETOOTH
5.14.1 Introduction

The concept behind Bluetooth is to provide a universal


short-range wireless capability. Using the 2.4-GHz band, available
globally for unlicensed low-power uses, two Bluetooth devices within 10m
of each other can share up to 720 kbps of capacity. Bluetooth is intended
to support an open-ended list of applications, including data (e.g., sched-
ules and telephone numbers), audio, graphics, and even video. For exam-
ple, audio devices can include headsets, cordless and standard phones,
home stereos, and digital MP3 players. The following are examples of some
of the capabilities Bluetooth can provide to consumers:

• Make calls from a wireless headset connected remotely to a cell phone.
• Eliminate cables linking computers to printers, keyboards, and the mouse.
• Hook up MP3 players wirelessly to other machines to download music.
• Set up home networks so that a couch potato can remotely monitor the air conditioning, the oven, and the children's Internet surfing.
• Call home from a remote location to turn appliances on and off, set the alarm, and monitor activity.
• Figure 5.35 shows the connection of some peripheral devices using a Bluetooth link.

5.14.2 Bluetooth Application

Bluetooth is designed to operate in an environment of many


users. Up to eight devices can communicate in a small network called a
piconet. Ten of these piconets can coexist in the same coverage range
of the Bluetooth radio. To provide security, each link is encoded and
protected against eavesdropping and interference.

Bluetooth provides support for three general application areas using
short-range wireless connectivity:

• Data and voice access points
Bluetooth facilitates real-time voice and data transmissions by
providing effortless wireless connection of portable and stationary
communications devices.

• Cable replacement
Bluetooth eliminates the need for numerous, often
proprietary, cable attachments for connection of practically any kind
of communication device. Connections are instant and are maintained
even when devices are not within line of sight. The range of each radio is
approximately 10 m but can be extended to 100 m with an optional
amplifier.

• Ad hoc networking
A device equipped with a Bluetooth radio can establish an instant
connection to another Bluetooth radio as soon as it comes into
range.

• Connection of peripheral devices
- Loudspeaker, joystick, headset
- Support of ad-hoc networking
- Small devices, low cost
- Bridging of networks, e.g., GSM via mobile phone - Bluetooth - laptop

5.14.3 Bluetooth Standards Documents

The Bluetooth standards present a formidable bulk, well over
1500 pages, divided into two groups: core and profile. The core
specifications describe the details of the various layers of the Bluetooth
protocol architecture, from the radio interface to link control. Related
topics are covered, such as inter-operability with related technologies,
testing requirements, and a definition of various Bluetooth timers and
their associated values.

The profile specifications are concerned with the use of Bluetooth


technology to support various applications. Each profile specification
discusses the use of the technology defined in the core specifications to
implement a particular usage model. The profile specification includes
a description of which aspects of the core specifications are mandatory,
optional, and not applicable. The purpose of a profile specification is
to define a standard of interoperability so that products from different
vendors that claim to support a given usage model will work together. In
general terms, profile specifications fall into one of two categories:

• Cable replacement or wireless audio

The cable replacement profiles provide a convenient means


for logically connecting devices in proximity to one another and for
exchanging data. For example, when two devices first come within range
of one another, they can automatically query each other for a common
profile. This might then cause the end users of the device to be alerted,
or cause some automatic data exchange to take place. The wireless audio
profiles are concerned with establishing short-range voice connections.

5.14.4 Protocol Architecture

• Bluetooth is a layered protocol architecture. It consists of
- Core protocols
- Cable replacement and telephony control protocols
- Adopted protocols

• Core protocols form a five-layer stack consisting of the following
elements:

1. Radio

Specifies details of the air interface, including frequency, the use of


frequency hopping, modulation scheme, and transmit power.
2. Baseband

Concerned with connection establishment within a piconet,


addressing, packet format, timing, and power control.

3. Link manager protocol (LMP)

Responsible for link setup between Bluetooth devices and ongoing


link management. This includes security aspects such as authentication
and encryption, plus the control and negotiation of baseband packet sizes.

4. Logical link control and adaptation protocol (L2CAP)

Adapts upper-layer protocols to the baseband layer. L2CAP


provides both connection less and connection-oriented services.

5. Service discovery protocol (SDP)

Device information, services, and the characteristics of the


services can be queried to enable the establishment of a connection
between two or more Bluetooth devices.

Figure 5.36 Bluetooth Protocol Stack
(From bottom to top: Bluetooth radio, baseband, Link Manager Protocol (LMP) with the host-controller interface, Logical Link Control and Adaptation Protocol (L2CAP), and the audio and control paths. Above L2CAP sit the cable replacement protocol RFCOMM, the telephony control protocol TCS BIN, SDP, and the adopted protocols PPP, IP, UDP/TCP, OBEX and WAP/WAE, together with vCard/vCal and AT commands.)

AT = attention sequence (modem prefix)


IP = Internet Protocol

OBEX = Object Exchange Protocol

PPP = Point-to-Point Protocol

RFCOMM = Radio frequency communications

SDP = Service Discovery Protocol

TCP =Transmission Control Protocol

TCS BIN = Telephony control specification-binary

UDP = User Datagram Protocol

vCal = virtual calendar

vCard = virtual card

WAE =Wireless application Environment

WAP =Wireless Application Protocol

• Cable replacement protocol
RFCOMM is the cable replacement protocol included in the
Bluetooth specification. RFCOMM presents a virtual serial port
that is designed to make replacement of cable technologies as
transparent as possible. Serial ports are one of the most common
types of communications interfaces used with computing and
communications devices. Hence, RFCOMM enables the
replacement of serial port cables with the minimum of
modification of existing devices. RFCOMM provides for binary data
transport and emulates EIA-232 control signals over the Bluetooth
baseband layer. EIA-232 (formerly known as RS-232) is a widely
used serial port interface standard.

• Telephony control protocol
Telephony control specification - binary (TCS BIN) is a
bit-oriented protocol that defines the call control signaling for
the establishment of speech and data calls between Bluetooth
devices. In addition, it defines mobility management procedures
for handling groups of Bluetooth TCS devices.

• Adopted protocols
- PPP: The Point-to-Point Protocol is an Internet standard protocol
for transporting IP datagrams over a point-to-point link.
- TCP/UDP/IP: These are the foundation protocols of the TCP/IP
protocol suite.
- OBEX: The Object Exchange Protocol is a session-level
protocol developed by the Infrared Data Association (IrDA) for the
exchange of objects. OBEX provides functionality similar to that
of HTTP, but in a simpler fashion. It also provides a model for
representing objects and operations. Examples of content formats
transferred by OBEX are vCard and vCalendar, which provide the
format of an electronic business card and personal calendar
entries and scheduling information, respectively.
- WAE/WAP: Bluetooth incorporates the Wireless Application
Environment and the Wireless Application Protocol into its
architecture.

Usage Models

A number of usage models are defined in the Bluetooth
profile documents. In essence, a usage model is a set of protocols that
implement a particular Bluetooth-based application. Each profile defines
the protocols and protocol features supporting a particular usage model.

• File transfer: The file transfer usage model supports the transfer of
directories, files, documents, images, and streaming media formats.
This usage model also includes the capability to browse folders on a
remote device.
Figure 5.37 Usage Models

• Internet Bridge: With this usage model, a PC is wirelessly
connected to a mobile phone or cordless modem to provide dial-up
networking and fax capabilities. For dial-up networking, AT
commands are used to control the mobile phone or modem, and
another protocol stack (e.g., PPP over RFCOMM) is used for data
transfer. For fax transfer, the fax software operates directly over
RFCOMM.

• LAN access: This usage model enables devices on a piconet to
access a LAN. Once connected, a device functions as if it were
directly connected (wired) to the LAN.

• Synchronization: This model provides device-to-device
synchronization of PIM (personal information management)
information, such as phone book, calendar, message, and note
information. IrMC (Ir mobile communications) is an IrDA protocol
that provides a client/server capability for transferring updated PIM
information from one device to another.

• Three-in-one phone: Telephone handsets that implement this usage
model may act as a cordless phone connecting to a voice base station,
as an intercom device for connecting to other telephones, and as a
cellular phone.

• Headset: The headset can act as a remote device's audio input and
output interface.

5.14.5 Piconets and Scatternets

• Piconet

- Basic unit of Bluetooth networking: a piconet, consisting of a
master and from one to seven active slave devices. The radio
designated as the master makes the determination of the channel
(frequency-hopping sequence) and phase (timing offset, i.e., when to
transmit) that shall be used by all devices on this piconet. The radio
designated as master makes this determination using its own device
address as a parameter, while the slave devices must tune to the same
channel and phase. A slave may only communicate with the master
and may only communicate when granted permission by the master.
A device in one piconet may also exist as part of another piconet and
may function as either a slave or master in each piconet.

Figure 5.38 Piconet and Scatternet


MULTI-USER RADIO COMMUNICATION

• Scatternet
- A device in one piconet may exist as master or slave in another
piconet. This form of overlapping is called a scatternet.
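The master/slave rules described above can be summarised in a small illustrative data structure; the class and device names below are hypothetical and serve only to show the "one master, at most seven active slaves, overlapping membership forms a scatternet" relationships.

# Minimal, hypothetical sketch of the piconet rules described above.
class Piconet:
    MAX_ACTIVE_SLAVES = 7

    def __init__(self, master):
        self.master = master      # the master's device address fixes the hop sequence
        self.slaves = []

    def add_slave(self, device):
        if len(self.slaves) >= self.MAX_ACTIVE_SLAVES:
            raise RuntimeError("a piconet supports at most 7 active slaves")
        self.slaves.append(device)

p1 = Piconet("phone")
p1.add_slave("headset")
p1.add_slave("laptop")

p2 = Piconet("laptop")        # the laptop is a slave in p1 and the master in p2,
p2.add_slave("printer")       # so p1 and p2 overlap to form a scatternet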

Figure 5.39 Master/Slave Relationships

Figure 5.40 Wireless Network Configurations

• Advantages of the piconet/scatternet scheme
- It allows many devices to share the same physical area
- It makes efficient use of bandwidth


5.14.7 Bluetooth Radio Specification

The Bluetooth Radio (layer) is the lowest defined layer of the


Bluetooth specification. It defines the requirements of the Bluetooth
transceiver device operating in the 2.4 GHz ISM band. The Bluetooth air
interface is based on three power classes:

• Power Class 1: designed for long range (~100m), max output


power of 20 dBm,

• Power Class 2: ordinary range devices (~10m), max output power


of 4 dBm,

• Power Class 3: short range devices (~10cm), with a max output


power of 0 dBm.

The radio uses frequency hopping to spread the energy
across the ISM spectrum in 79 hops displaced by 1 MHz, starting at
2.402 GHz and stopping at 2.480 GHz. Some countries use the 79 RF
channels, whereas countries like Japan use 23 channels. Currently, the
SIG (Special Interest Group) is working to harmonize this 79-channel radio
to work globally and has instigated changes within Japan, Spain, and
other countries. Also, the Bluetooth radio module uses GFSK (Gaussian
Frequency Shift Keying), where a binary one is represented by a positive
frequency deviation and a binary zero by a negative frequency deviation.
The bandwidth-bit period product BT is set to 0.5 and the modulation
index must be between 0.28 and 0.35. The receiver must have a sensitivity
level for which a bit error rate (BER) of 0.1% is met. For Bluetooth this
means an actual sensitivity level of -70 dBm or better.
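The channel plan is easy to reproduce: f_k = 2402 + k MHz for k = 0 to 78. The sketch below lists these channels and generates an illustrative pseudo-random hop sequence; note that the real Bluetooth hop-selection kernel derives its sequence from the master's device address and clock, so the random choice here is only a stand-in.

import random

# The 79 Bluetooth RF channels: f_k = 2402 + k MHz, k = 0..78 (2.402-2.480 GHz).
channels_mhz = [2402 + k for k in range(79)]
print(channels_mhz[0], "...", channels_mhz[-1])       # 2402 ... 2480

# Illustrative pseudo-random hop sequence (NOT the actual Bluetooth
# hop-selection kernel, which uses the master's address and clock).
rng = random.Random(0xB1)
hop_sequence = [rng.choice(channels_mhz) for _ in range(10)]
print("example hop sequence (MHz):", hop_sequence)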

5.14.8 Baseband Specification

The baseband is the physical layer of Bluetooth. It
manages physical channels and links, apart from other services
like error correction, data whitening, hop selection and Bluetooth
security. As mentioned previously, the basic radio is a hybrid spread
spectrum radio. Typically, the radio operates in a frequency-hopping
manner in which the 2.4 GHz ISM band is broken into 79 channels of
1 MHz each that the radio randomly hops through while transmitting and receiving
data. A piconet is formed when one Bluetooth radio connects to another


Bluetooth radio.

Both radios then hop together through the 79


channels. The Bluetooth radio system supports a large number of
piconets by providing each piconet with its own set of random hopping
patterns. Occasionally, piconets will end up on the same channel.
When this occurs, the radios will hop to a free channel and the data are
retransmitted (if lost). The Bluetooth frame consists of a transmit
packet followed by a receive packet. Each packet can be composed of
multiple slots (1, 3, or 5) of 625 µs. A single-slot frame typically
hops at 1,600 hops/second. Multi-slot frames allow higher data rates
because of the elimination of the turn-around time between packets and
the reduction in header overhead.
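The slot arithmetic above can be verified directly: with 625 µs slots, a single-slot packet stream corresponds to 1/625 µs = 1,600 hops per second, while 3- and 5-slot packets occupy one hop frequency for 1.875 ms and 3.125 ms respectively.

# Slot/frame arithmetic from the baseband description above.
slot = 625e-6                                   # one slot = 625 microseconds
print(f"hop rate for single-slot packets: {1/slot:.0f} hops/s")   # -> 1600

# Multi-slot packets (1, 3 or 5 slots) stay on one hop frequency for the
# whole packet, which is why they raise the effective data rate.
for slots in (1, 3, 5):
    print(f"{slots}-slot packet occupies {slots*slot*1e3:.3f} ms")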

ADVANTAGES

• Low power consumption
• Works in noisy environments
• No line-of-sight restriction
• Reliable and secure
• The 2.45 GHz band ensures universal compatibility, and it also complies
with airline regulations
• The qualification and logo program ensures higher quality
• Very robust, as the radio hops fast and uses short packets

DISADVANTAGES

• Too many unfeasible applications so do we really need it?

• No handoff / handover capability

• Initial stages so it needs to prove its worth

• A few analog or FH cordless phones have been designed to operate in the
2.4 GHz band. Certainly interference exists between them, but the more
serious effects would be exerted on analog 2.4 GHz cordless phones

TECHNICAL SPECIFICATIONS

Table 5.2 Technical Specifications


Comparison of 802.11 and Bluetooth

802.11 | Bluetooth
Represents the Internet | Represents a "faux" internet
Has already proved itself | Still to prove itself
Widespread connectivity | Connects at close proximity
Solved Two Marks

Part-A

1. What is AMPS and in what way it differs from D-Amps?


AMPS is a purely analog cellular telephone system developed by
Bell Labs and used in North America and other countries. D-AMPS,
on the other hand, is a backward-compatible digital version of
AMPS.

2. What is 1G and 2G?


First generation use analog FM for speech transmission
(i) AMPS→Advanced mobile phone system.
(ii) ETACS→European Total Access Communication System
(iii) NTT→Nippon Telephone and Telegraph
Second generation use digital system.
(i) GSM→Global System for Mobile.
(ii) IS-136→Interim Standard 136.
(iii) PDC→Personal Digital Cellular.
(iv) IS-95→Interim Standard 95 code Division Multiple Access.

3. Define MS,BS and MSC.


MS→Mobile Station – A station in the cellular radio service
intended for use, e.g., hand-held units (portables) or units installed in
vehicles (mobiles).
BS→Base Station – A fixed station in a mobile radio system used
for radio communication with the MS.
MSC→Mobile switching centre-co ordinates the routing of calls in
a service area.
4. What is meant by frequency reuse?
Physical Separation of two cells is sufficiently wide means the same
subset of frequencies can be used in both cells. So the spectrum
is efficiently utilized.
5. Define Hand off and mode of Hand-off.


The process of transferring as MS from one BS to another BS
is known as simply Hand-off (or) Hand-over.
Mode of Hand-off
1. MCHO-Mobile controlled Hand-off
2. NCHO-network Controlled Hand-off
3. MAHO-Mobile Assisted Hand-off
6. What are the types of Hand-off?
1. Hard HO→Mobile monitors BS and new cell is allocated to a call
with strong signal.
2. Soft HO→MS with 2 or more calls at the same time and find
which one is the strongest signal BS, then MSC automatically
transfers the calls to that BS.
Advantages: 1. Fast and loss less
2. Efficient use of spectrum.
7. Write the principles of cellular network.
If a given set of frequencies (or) radio channels can be reused
without increasing the interference , then the large geographical
area is covered by a single high power transmitter.
The Large cell is divided into smaller cells; each allocated a
subset of frequencies. For small area, low power transmitter with
lower antennas is used.
8. Define cell, cluster.
Each cellular base station is allocated a group of radio chan-
nels to be used with a small geographical area called a cell.
A group of cells that use a different set of frequencies in each
cell is called a cluster.
9. What do you mean by foot print, dwell time?
The actual radio coverage of a cell is known as the foot print.
The time over which a call may be maintained within a cell without
hand-off is called the dwell time.
10. Define Frequency reuse ratio.
Q = D/R = (Distance between the centres of the nearest co-channel cells) /
(Radius of the cell)
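A brief worked example is given below. The relation D = R·√(3N) used for the reuse distance assumes the ideal hexagonal cell geometry (a standard result, stated here as an aside since the definition above only gives the ratio itself); the cell radius is an assumed value.

import math

# Worked example for Q = D/R.  D = R*sqrt(3*N) holds for the ideal
# hexagonal-cell layout; R is an assumed cell radius.
R = 2.0                     # assumed cell radius, km
for N in (4, 7, 12):        # cluster sizes
    D = R * math.sqrt(3 * N)
    print(f"N={N}: D={D:.2f} km, Q=D/R={D/R:.2f}")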

11. Define FDMA, TDMA and CDMA.


FDMA→ The total bandwidth is divided into non-overlapping fre-
quency sub-bands.
TDMA→Divides the radio spectrum into time slots and in each slot
only one user is allowed to either transmit or receive
CDMA→Many users share the same frequency same time with dif-
ferent coding.
12. State the principle of CDMA?
* Many users share the same frequency.
* Each user is assigned a different spreading code.
13. Write the goal of GSM-Standard.
*Better and more efficient technical solution for wireless commu-
nication.
*Single Standard was to be realized all over Europe enabling roam-
ing across borders.
14. What is mobility management ?
Mobility management deals with two important aspects;
Hand-off management and location management . Hand-off man-
agement maintains service continuity when an MS migrates out of
its current BS into the footprint of another BS. To do this it is nec-
essary to keep track of the user’s current location. The procedure
performed for this purpose is known as location management.
15. What is the maximum number of callers in each cell in a GSM?
In a multiframe, 8 users can transmit in 8 slots. As 124 such
channels are sent simultaneously using TDMA, the total
number of callers in a cluster is 124 x 8 = 992. As the reuse factor is 7 in
GSM, the maximum number of callers in a cell is (124 x 8)/7 ≈ 141.
Review Question
Part-A
1. What is AMPS and in what way it differs from D-Amps?
2. What is 1G and 2G?
3. Define MS,BS and MSC.
4. What is meant by frequency reuse?
5. Define Hand off and mode of Hand-off.
6. What are the types of Hand-off?
7. Write the principles of cellular network.
8. Define cell, cluster.
9. What do you mean by foot print, dwell time?
10. Define Frequency reuse ratio.
11. Define FDMA, TDMA and CDMA.
12. State the principle of CDMA?
13. Write the goal of GSM-Standard.
14. What is mobility management ?
15. What is the maximum number of callers in each cell in a GSM?

PART-B
1. Explain briefly the principle of cellular networks.
2. Compare TDMA, FDMA and CDMA.
3. Discuss on 1G of mobile network (or) AMPS.
4. Discuss the effects of multipath propagation on CDMA-technique.
5. Enumerate on (i) GSM architecture
(ii) GSM-Channels
6. Explain code division multiple Access (CDMA) and compare its
performance with TDMA.
7. Explain in detail about the GSM-logical channels.
8. Write short notes on
(i) Frequency reuse
(ii) Channel alignment
(iii) Hand-off
9. Write short notes on Bluetooth technology.
10. Discuss various multiple access techniques.
