
Information Theory & Coding

Channel Coding

Professor Dr. A.K.M Fazlul Haque


Dept. of Electronics and Telecommunication Engineering
(ETE)
Daffodil International University
Channel Coding

Note:

Channel coding is a technique used in digital communications to ensure a transmission is received with minimal or no errors.
Channel Coding in Digital Communication Systems

Channel Capacity

In electrical engineering, computer science, and information theory, channel capacity is the tight upper bound on the rate at which information can be reliably transmitted over a communications channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.
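
As a worked illustration (a minimal sketch, not part of the original slides), the Shannon-Hartley theorem gives this capacity for a band-limited channel with additive white Gaussian noise as C = B log2(1 + S/N):

    import math

    def channel_capacity(bandwidth_hz, snr_linear):
        # Shannon-Hartley: C = B * log2(1 + S/N), in bits per second
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Example values (assumed): a 3 kHz telephone channel with 30 dB SNR (S/N = 1000)
    print(round(channel_capacity(3000, 1000)))   # 29902 bps, about 30 kbps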

Channel Coding

There are two types of channel:

1. Noisy channel
2. Noiseless channel

Channel coding can be divided into two major classes:

1. Waveform coding by signal design
2. Structured sequences by adding redundancy

Waveform Coding

Waveform coding deals with transforming waveforms into "better waveforms" that are robust to channel distortion, hence improving detector performance.

• Examples:

– Antipodal signaling
– Orthogonal signaling
– Bi-orthogonal signaling
– M-ary signaling
– Trellis-coded modulation

Structured Sequences

Structured sequences deal with transforming sequences into "better sequences" by adding structured redundancy (redundant bits). The redundant bits are used to detect and correct errors, hence improving the overall performance of the communication system. (A small error-correcting code is sketched after the list below.)

• Examples:

– Linear codes
– Hamming codes
– BCH codes
– Cyclic codes
– Reed-Solomon codes
– Non-linear codes
– Convolutional codes
– Turbo codes
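
As a concrete example of a structured sequence, here is a minimal sketch of a (7,4) Hamming code in Python: 3 parity bits are added to 4 data bits, which is enough to detect and correct any single-bit error. The positions-1-to-7 bit ordering is one common convention, assumed here, not necessarily the notation used in the course.

    # (7,4) Hamming code: 4 data bits -> 7-bit codeword that corrects 1 bit error
    def hamming74_encode(d):                 # d = [d1, d2, d3, d4]
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

    def hamming74_decode(c):
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3      # 0 = no error, else 1-based error position
        if syndrome:
            c[syndrome - 1] ^= 1             # flip the erroneous bit
        return [c[2], c[4], c[5], c[6]]      # recover the data bits

    word = hamming74_encode([1, 0, 1, 1])
    word[4] ^= 1                             # inject a single-bit channel error
    print(hamming74_decode(word))            # [1, 0, 1, 1] - error corrected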


Orthogonal Signals

• Definition:

    \int_0^{T_b} p_i(t)\, p_j(t)\, dt =
    \begin{cases}
      C, & i = j \\
      0, & i \neq j
    \end{cases}

• This means that the signals are perpendicular to each other in M-dimensional space. (A numerical check of this condition is sketched below.)

• For a correlative receiver, this means that each incoming signal can be compared with a model of the signal, and the best match is the symbol that was sent.
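
A quick numerical check of the orthogonality definition above (a sketch; the sine/cosine pair over one bit period T_b is my choice of example signals):

    import math

    Tb, N = 1.0, 100000                  # bit period and number of integration steps
    dt = Tb / N

    def inner_product(p_i, p_j):
        # Approximates the integral of p_i(t) * p_j(t) over [0, Tb]
        return sum(p_i(k * dt) * p_j(k * dt) * dt for k in range(N))

    p1 = lambda t: math.sin(2 * math.pi * t / Tb)
    p2 = lambda t: math.cos(2 * math.pi * t / Tb)

    print(inner_product(p1, p2))   # ~0.0 -> orthogonal (i != j)
    print(inner_product(p1, p1))   # ~0.5 -> the constant C (i == j)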
Multi-Amplitude Shift Keying (MASK)

• Send multiple amplitudes to denote different signals

• Typical signal configuration (the full amplitude grid is sketched below):

– ±p(t)
– ±3p(t)
– ...
– ±(M−1)p(t)
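
A sketch of that amplitude grid, assuming the standard odd-integer levels implied by the list above:

    # M-ASK amplitude multipliers of p(t): +/-1, +/-3, ..., +/-(M-1)
    def mask_levels(M):
        return [2 * k - (M - 1) for k in range(M)]

    print(mask_levels(4))   # [-3, -1, 1, 3]
    print(mask_levels(8))   # [-7, -5, -3, -1, 1, 3, 5, 7]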


Multi-Phase

• Binary Phase Shift Keying (BPSK):

    1: f_1(t) = p(t) \cos(\omega_c t)
    0: f_0(t) = p(t) \cos(\omega_c t + \pi)

• M-ary PSK:

    p_k(t) = p(t) \cos\!\left( \omega_c t + \frac{2 \pi k}{M} \right), \quad k = 0, 1, \ldots, M - 1

(Figure: BPSK and M-ary PSK signal constellations shown as points on the Re/Im plane.)
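
The M-ary PSK constellation follows directly from the phase formula above; a minimal sketch (QPSK, i.e. M = 4, chosen for illustration):

    import cmath, math

    # One unit-amplitude constellation point at phase 2*pi*k/M for each symbol k
    def mpsk_constellation(M):
        return [cmath.exp(1j * 2 * math.pi * k / M) for k in range(M)]

    for point in mpsk_constellation(4):      # QPSK
        print(round(point.real, 3), round(point.imag, 3))
    # (1, 0), (0, 1), (-1, 0), (0, -1) on the Re/Im plane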
Quadrature Amplitude Modulation (QAM)

• Amplitude-phase shift keying (APK):

    p_k(t) = p(t) \left[ a_k \cos(\omega_c t) + b_k \sin(\omega_c t) \right]
           = p(t)\, r_k \cos(\omega_c t + \theta_k)

where

    r_k = \sqrt{a_k^2 + b_k^2}, \qquad \theta_k = -\tan^{-1}\!\left( \frac{b_k}{a_k} \right)
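
A sketch of the rectangular-to-polar conversion above, computing the envelope r_k and phase θ_k from the quadrature amplitudes (the sample point (a_k, b_k) = (1, 1) is assumed for illustration):

    import math

    def qam_polar(a_k, b_k):
        r_k = math.hypot(a_k, b_k)           # r_k = sqrt(a_k^2 + b_k^2)
        theta_k = -math.atan2(b_k, a_k)      # theta_k = -tan^-1(b_k / a_k)
        return r_k, theta_k

    print(qam_polar(1, 1))   # (1.414..., -0.785...): amplitude sqrt(2), phase -45 degrees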
Sampling

In signal processing, sampling is the reduction of a continuous signal to a discrete signal. A common example is the conversion of a sound wave (a continuous signal) to a sequence of samples (a discrete-time signal).

• A sample is a value or set of values at a point in time and/or space.

• A sampler is a subsystem or operation that extracts samples from a continuous signal.

• A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points.
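
A sketch of such an ideal sampler (the 100 Hz sine and the 800 samples/s rate are assumed values; note the rate is well above twice the signal frequency):

    import math

    f, fs = 100, 800                              # signal frequency (Hz), sampling rate (samples/s)
    x = lambda t: math.sin(2 * math.pi * f * t)   # the continuous signal

    # Ideal sampling: take the instantaneous value every 1/fs seconds
    samples = [round(x(n / fs), 3) for n in range(8)]
    print(samples)   # [0.0, 0.707, 1.0, 0.707, 0.0, -0.707, -1.0, -0.707]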
PAM
Note:

Pulse amplitude modulation has some applications, but it is not used by itself in data communication. However, it is the first step in another very popular conversion method called pulse code modulation.
Quantized PAM signal
Quantizing by Using Sign and Magnitude
PCM
From Analog Signal to PCM Digital Code
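
Tying the last few slides together, a minimal sketch of the analog-to-PCM chain: sample (PAM), quantize, and encode each sample as a sign-and-magnitude code. The helper name pcm_encode, the 4-bit code, and the signal values are assumptions for illustration:

    import math

    def pcm_encode(value, max_amp=1.0, bits=4):
        # Sign-and-magnitude quantization: 1 sign bit + (bits - 1) magnitude bits
        levels = 2 ** (bits - 1) - 1                       # magnitude levels per sign (here 7)
        mag = min(int(abs(value) / max_amp * levels + 0.5), levels)
        sign = '1' if value < 0 else '0'
        return sign + format(mag, '0%db' % (bits - 1))

    x = lambda t: math.sin(2 * math.pi * 100 * t)          # analog signal
    codes = [pcm_encode(x(n / 800)) for n in range(4)]     # PAM samples -> PCM codes
    print(codes)   # ['0000', '0101', '0111', '0101']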
Noisy Channel Coding Theorem

In information theory, the noisy-channel coding theorem (sometimes Shannon's theorem) establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel. This result was presented by Claude Shannon in 1948 and was based in part on earlier work and ideas of Harry Nyquist and Ralph Hartley.
Shannon Capacity

Note:

The Shannon limit or Shannon capacity of a communications channel is the theoretical maximum information transfer rate of the channel for a particular noise level.
Nyquist Theorem

Note:

According to the Nyquist theorem, the sampling rate must be at least 2 times the highest frequency.
Nyquist Theorem
Example 4

What sampling rate is needed for a signal with a bandwidth of 10,000 Hz (1,000 to 11,000 Hz)?

Solution
The sampling rate must be twice the highest frequency in the signal:

Sampling rate = 2 × 11,000 = 22,000 samples/s

Example 5

A signal is sampled. Each sample requires at least 12 levels of precision (+0 to +5 and −0 to −5). How many bits should be sent for each sample?

Solution
We need 4 bits: 1 bit for the sign and 3 bits for the value. A 3-bit value can represent 2³ = 8 levels (000 to 111), which is more than what we need. A 2-bit value is not enough, since 2² = 4. A 4-bit value is too much, because 2⁴ = 16.
Example 6

We want to digitize the human voice. What is the bit rate, assuming 8 bits per sample?

Solution
The human voice normally contains frequencies from 0 to 4,000 Hz.

Sampling rate = 4,000 × 2 = 8,000 samples/s

Bit rate = sampling rate × number of bits per sample
= 8,000 × 8 = 64,000 bps = 64 kbps
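
The three worked examples above can be reproduced in a few lines (a sketch; only the numbers from the slides are used):

    import math

    # Example 4: sampling rate = 2 x highest frequency
    print(2 * 11000)                        # 22000 samples/s

    # Example 5: 1 sign bit + enough bits for the 6 magnitude levels 0..5
    print(1 + math.ceil(math.log2(6)))      # 4 bits per sample

    # Example 6: bit rate = sampling rate x bits per sample
    print(2 * 4000 * 8)                     # 64000 bps = 64 kbps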
Note:

We can always change a band-pass signal to a low-pass signal before sampling. In this case, the sampling rate is twice the bandwidth.
Transmission Mode

Parallel Transmission

Serial Transmission
Data Transmission
Parallel Transmission
Serial Transmission
Note:

In asynchronous transmission, we
send 1 start bit (0) at the beginning
and 1 or more stop bits (1s) at the end
of each byte. There may be a gap
between each byte.
Note:

Asynchronous here means "asynchronous at the byte level," but the bits are still synchronized; their durations are the same.
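
A sketch of this framing rule (assuming 8 data bits per byte sent least-significant-bit first, the usual UART convention):

    def frame_byte(byte):
        # Asynchronous framing: 1 start bit (0), 8 data bits LSB first, 1 stop bit (1)
        data_bits = [(byte >> i) & 1 for i in range(8)]
        return [0] + data_bits + [1]

    # 'A' = 0x41; consecutive frames may be separated by idle gaps
    print(frame_byte(ord('A')))   # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]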
Asynchronous Transmission
Note:

In synchronous transmission,
we send bits one after another without
start/stop bits or gaps.
It is the responsibility of the receiver to
group the bits.
Synchronous Transmission
END
