
Que 1- Explain any two fields that use digital image processing.

Ans - Many fields use digital image processing. Applications are commonly categorized
according to the source of the imaging energy, e.g. the gamma-ray, X-ray, ultraviolet,
visible, infrared and microwave bands.
i) X-ray imaging: X-rays are best known for their use in medical diagnostics, but they also
have extensive use in industry and in other areas such as astronomy. X-rays are among the
oldest sources of electromagnetic (EM) radiation used for imaging.
EM waves can be conceptualized as propagating sinusoidal waves of varying
wavelengths, or as a stream of massless particles, each travelling in a wavelike pattern at
the speed of light. Each massless particle carries a certain bundle of energy called a photon.
If spectral bands are grouped according to energy per photon, we obtain a spectrum ranging
from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other.
X-rays for medical and industrial imaging are generated using an X-ray tube, which is
a vacuum tube with a cathode and an anode. The cathode is heated, causing free electrons to
be released. These electrons flow at very high speed to the positively charged anode. When
an electron strikes a nucleus, energy is released in the form of X-ray radiation. The
penetrating power of the X-rays is controlled by the voltage applied across the anode, and the
number of X-rays is controlled by the current applied to the filament in the cathode.
The intensity of the X-rays is modified by absorption as they pass through the patient,
and the resulting energy falling on the film develops it, much as light develops photographic
film. In digital radiography, digital images are obtained by one of two methods:
a) by digitizing X-ray films, or
b) by having the X-rays that pass through the patient fall directly onto devices that
convert X-rays to light. The light signal in turn is captured by a light-sensitive digitizing
system.

ii) Imaging in the microwave band: The dominant application of imaging in the microwave band is
radar. The unique feature of imaging radar is its ability to collect data over virtually any
region at any time, regardless of weather or ambient lighting conditions. Some radar waves
can penetrate clouds, and under certain conditions can also see through vegetation, ice, and
extremely dry sand. In many cases, radar is the only way to explore inaccessible regions of
the Earth's surface.
An imaging radar works like a flash camera in that it provides its own illumination
(microwave pulses) to illuminate an area on the ground and take a snapshot image. Instead
of a camera lens, a radar uses an antenna and digital computer processing to record its
images. In a radar image, one can see only the microwave energy that was reflected back
toward the radar antenna. Fig. 1.9 shows a spaceborne radar image covering a rugged
mountainous area of Southeast Tibet, about 90 km east of the city of Lhasa. In the lower
right corner is a wide valley of the Lhasa River, which is populated by Tibetan farmers and
yak herders and includes the village of Menba. Mountains in this area reach about 5800 m
(19,000 ft.) above sea level, while the valley floors lie about 4300 m (14,000 ft.) above sea
level. Note the clarity and detail of the image, unencumbered by clouds or other atmospheric
conditions that normally interfere with images in the visual band.

Que 2 - Explain the properties and uses of the electromagnetic spectrum.


Ans 2
Properties
1. Electromagnetic waves are propagated by electric and magnetic fields oscillating at
right angles to each other.
2. Electromagnetic waves travel with a constant velocity of 3 × 10^8 m/s in vacuum.
3. Electromagnetic waves are not deflected by electric or magnetic fields.
4. Electromagnetic waves can show interference and diffraction.
5. Electromagnetic waves are transverse waves.
6. Electromagnetic waves may be polarized.
7. Electromagnetic waves need no medium of propagation. The energy from the sun is
received by the earth through electromagnetic waves.
8. The wavelength (λ) and the frequency (ν) of an electromagnetic wave are related by
c = νλ = ω/k
where ω = 2πν is the angular frequency and k = 2π/λ is the wavenumber.
The S.I. unit of frequency is the hertz: 1 Hz = 1 cycle per second.
The S.I. unit of wavelength is the metre. Wavelength is, however, often expressed in
the angstrom unit (Å):
1 Å = 10^-10 m
Also, 1 nanometre = 1 nm = 10^-9 m
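
As a quick numerical illustration of the relation c = νλ (a minimal Python sketch of our
own; the 550 nm example value is illustrative, not from the source):

C = 3.0e8  # speed of light in vacuum, m/s

def frequency(wavelength_m):
    # Frequency in hertz for a given wavelength in metres, from c = nu * lambda.
    return C / wavelength_m

print(frequency(550e-9))  # green light at 550 nm -> about 5.5e14 Hz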
Uses
1. Radio waves (communications)
- TV and FM radio use short wavelengths.
- These require a direct line of sight with the transmitter, since they do not diffract.
- Medium wavelengths travel further because they reflect from layers in the
atmosphere.
2. Satellite signals (microwaves)
- Microwave frequencies pass easily through the atmosphere and clouds.
3. Cooking (microwaves)
- Microwaves are absorbed by water molecules.
- The heated water molecules in turn heat the food.
- Dangers: microwaves are absorbed by living tissue, and the internal heating will
damage or kill cells.
4. Infrared radiation (remote controls, toasters)
- Any object that radiates heat radiates infrared radiation.
- Infrared radiation is absorbed by all materials and causes heating.
- It is used for night vision and security cameras, as infrared radiation can be
detected by day or by night.
- Police use it to catch criminals, and armies use it to detect the enemy.
- Dangers: damage to cells (burns).

5. Ultraviolet
- Over-exposure to UVA and UVB damages surface cells and eyes and can cause cancer.
- There is a problem with current sunscreens, which protect against skin burning from
high UVB but give inadequate protection against free-radical damage caused by UVA.
- Dark skins are not necessarily safer from harm.
- Sun exposure for the skin is best restricted to before 11 am and after 3 pm in the UK
in summer months.
- The sanitary and therapeutic properties of UV have had a marked effect on
architecture, engineering and public health throughout history.
- UVC is germicidal, destroying bacteria, viruses and moulds in the air, in water and on
surfaces.
- UV synthesizes vitamin D in skin, influences the endocrine system and acts as a
painkiller.
- Used in state-of-the-art air-handling units, personal air purifiers and swimming-pool
technology.
- Used to detect forged bank notes: they fluoresce in UV light; real bank notes don't.
- Used to identify items outside the visible spectrum, a technique known as 'black
lighting'.

6. X-rays
- X-rays detect bone breaks.
- X-rays pass through flesh but not through dense material such as bone.
- Dangers: X-rays damage cells and can cause cancers. Radiographer precautions include
wearing lead aprons and standing behind a lead screen to minimise exposure.

7. Gamma rays
- Gamma rays can both cause and treat cancers.
- In high doses, gamma rays kill normal cells and can cause cancers.
- Gamma rays can, however, also be used to kill mutated (cancerous) cells.

Que 3 - Explain the different photographic process models.


Ans : Many different types of materials and chemical processes have been utilized for
photographic image recording. No attempt is made here either to survey the field of
photography or to investigate deeply the physics of photography. Rather, the aim is to
develop mathematical models of the photographic process in order to characterize
quantitatively the photographic components of an imaging system.
1. Monochromatic Photography
The most common material for photographic image recording is the silver halide emulsion.
In this material, silver halide grains are suspended in a transparent layer of gelatine that is
deposited on a glass, acetate or paper backing. If the backing is transparent, a transparency
can be produced; if the backing is a white paper, a reflection print can be obtained. When
light strikes a grain, an electrochemical conversion process occurs, and part of the grain is
converted to metallic silver. A development centre is then said to exist in the grain. In the
development process, a chemical developing agent causes grains with partial silver content
to be converted entirely to metallic silver. Next, the film is fixed by chemically removing the
unexposed grains. The photographic process described above is called a nonreversal
process. It produces a negative image in the sense that the silver density is inversely
proportional to the exposing light. A positive reflection print of an image can be obtained in
a two-stage process with nonreversal materials: first, a negative transparency is produced,
and then the negative transparency is illuminated to expose negative reflection print paper.
The resulting silver density on the developed paper is then proportional to the light intensity
that exposed the negative transparency. A positive transparency of an image can be
obtained with a reversal type of film.
2. Colour Photography
Modern colour photography systems utilize an integral tripack film to produce positive or
negative transparencies. In a cross-section of this film, the first layer is a silver halide
emulsion sensitive to blue light. A yellow filter following the blue emulsion prevents blue
light from passing through to the green and red silver halide emulsions that follow in
consecutive layers, since these are also naturally sensitive to blue light. A transparent base
supports the emulsion layers. Upon development, the blue emulsion layer is converted into a
yellow dye transparency whose dye concentration is proportional to the blue exposure for a
negative transparency and inversely proportional for a positive transparency. Similarly, the
green and red emulsion layers become magenta and cyan dye layers, respectively.
Colour prints can be obtained by a variety of processes. The most common technique is to
produce a positive print from a colour negative transparency onto nonreversal colour paper.
In establishing a mathematical model of the colour photographic process, each emulsion
layer can be considered to react to light as does the emulsion layer of a monochrome
photographic material. To a first approximation, this assumption is correct. However, there
are often significant interactions between the emulsion and dye layers. Each emulsion layer
possesses a characteristic sensitivity. In the chemical development process of the film, a
positive transparency is produced with three absorptive dye layers of cyan, magenta and
yellow dyes.

Que 4 - Define and explain the concepts of dilation and erosion.


Ans : Dilation: With dilation, an object grows uniformly in spatial extent. Generalized
dilation is expressed symbolically as
G(j,k) = F(j,k) ⊕ H(j,k)
where F(j,k), for 1 ≤ j, k ≤ N, is a binary-valued image and H(j,k), for 1 ≤ j, k ≤ L, where L is
an odd integer, is a binary-valued array called a structuring element. For notational
simplicity, F(j,k) and H(j,k) are assumed to be square arrays. Generalized dilation can be
defined mathematically and implemented in several ways. The Minkowski addition definition
is
G(j,k) = ∪ { T(m,n){F(j,k)} : H(m,n) = 1 }
where T(m,n){·} denotes translation by (m,n). It states that G(j,k) is formed by the union of
all translates of F(j,k) with respect to itself, in which the translation distance is the row and
column index of the pixels of H(j,k) that are logical 1s. Fig. 1 illustrates the concept.

Fig 1: Generalized dilation computed by Minkowski addition.

Erosion: With erosion, an object shrinks uniformly. Generalized erosion is expressed
symbolically as
G(j,k) = F(j,k) ⊖ H(j,k)
where H(j,k) is an odd-size L × L structuring element. Generalized erosion is defined to be
G(j,k) = ∩ { T(m,n){F(j,k)} : H(m,n) = 1 }
The meaning of this relation is that the erosion of F(j,k) by H(j,k) is the intersection of all
translates of F(j,k) in which the translation distance is the row and column index of the
pixels of H(j,k) that are in the logical one state. Fig. 2 illustrates this. Fig. 3 illustrates
generalized dilation and erosion.

Fig 2: Generalized Erosion

Fig 3: Generalized dilation and erosion for a 5 × 5 structuring element.
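
To make the two definitions concrete, here is a minimal NumPy sketch (our own
illustration, not from the source): dilation as the union, and erosion as the intersection,
of the translates of F indexed by the logical-1 pixels of H. It assumes Boolean arrays and
measures translation offsets from the centre of H; note that np.roll wraps at the image
borders, whereas a careful implementation would zero-pad, and that for an asymmetric H
some texts define erosion with the reflected structuring element.

import numpy as np

def dilate(F, H):
    # Union of translates of F; offsets are the row/column indices of the
    # logical-1 pixels of H, measured from its centre.
    c = H.shape[0] // 2
    G = np.zeros_like(F, dtype=bool)
    for m in range(H.shape[0]):
        for n in range(H.shape[1]):
            if H[m, n]:
                G |= np.roll(F, (m - c, n - c), axis=(0, 1))
    return G

def erode(F, H):
    # Intersection of the same translates of F.
    c = H.shape[0] // 2
    G = np.ones_like(F, dtype=bool)
    for m in range(H.shape[0]):
        for n in range(H.shape[1]):
            if H[m, n]:
                G &= np.roll(F, (m - c, n - c), axis=(0, 1))
    return G

F = np.zeros((8, 8), dtype=bool)
F[3:5, 3:5] = True                  # a 2 x 2 object
H = np.ones((3, 3), dtype=bool)     # 3 x 3 structuring element
print(dilate(F, H).sum(), erode(F, H).sum())  # the object grows, then shrinks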

Que 5 - What are the two quantitative approaches used for the evaluation of image
features? Explain.
Ans: There are two quantitative approaches to the evaluation of image features:
- prototype performance, and
- figure of merit.

In the prototype performance approach for image classification, a prototype image with
regions (segments) that have been independently categorized is classified by a classification
procedure using various image features to be evaluated. The classification error is then
measured for each feature set. The best set of features is, of course, that which results in the
least classification error.
The prototype performance approach for image segmentation is similar in nature. A
prototype image with independently identified regions is segmented by a segmentation
procedure using a test set of features. Then, the detected segments are compared to the
known segments, and the segmentation error is evaluated. The problems associated with the
prototype performance methods of feature evaluation are the integrity of the prototype data
and the fact that the performance indication is dependent not only on the quality of the
features but also on the classification or segmentation ability of the classifier or segmenter.
The figure-of-merit approach to feature evaluation involves the establishment of some
functional distance measurement between sets of image features, such that a large distance
implies a low classification error, and vice versa. Faugeras and Pratt have utilized the
Bhattacharyya distance figure-of-merit for texture feature evaluation; the method should be
extensible to other features as well. The Bhattacharyya distance (B-distance for simplicity) is
a scalar function of the probability densities of features of a pair of classes S1 and S2,
defined as
B(S1, S2) = -ln ∫ [ p(x|S1) p(x|S2) ]^(1/2) dx
where x denotes a vector containing the individual image feature measurements, with
conditional density p(x|S1).
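
For intuition, the integral has a closed form when the class feature densities are Gaussian.
The following sketch (our own, assuming the standard Gaussian special case for a scalar
feature; the function name and test values are illustrative) shows how the B-distance grows,
implying lower classification error, as the class means separate:

import math

def b_distance_gaussian(mu1, var1, mu2, var2):
    # Closed-form Bhattacharyya distance between two classes whose scalar
    # feature densities are Gaussian N(mu1, var1) and N(mu2, var2).
    avg_var = 0.5 * (var1 + var2)
    return (0.125 * (mu1 - mu2) ** 2 / avg_var
            + 0.5 * math.log(avg_var / math.sqrt(var1 * var2)))

print(b_distance_gaussian(0.0, 1.0, 3.0, 1.0))  # well separated -> ~1.125
print(b_distance_gaussian(0.0, 1.0, 0.5, 1.0))  # overlapping   -> ~0.031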

Que 6 - Explain region splitting and merging, with an example.
Ans: Splitting and merging attempts to divide an image into uniform regions. The basic
representational structure is pyramidal, i.e. a square region of size m × m at one level of a
pyramid has 4 sub-regions of size m/2 × m/2 below it in the pyramid. Usually the algorithm
starts from the initial assumption that the entire image is a single region, then computes the
homogeneity criterion to see if it is TRUE. If FALSE, the square region is split into four
smaller regions. This process is then repeated on each of the sub-regions until no further
splitting is necessary. These small square regions are then merged, if they are similar, to
give larger irregular regions. The problem (at least from a programming point of view) is
that any two regions may be merged if they are adjacent and if the larger region satisfies the
homogeneity criterion, but regions which are adjacent in image space may have different
parents or be at different levels (i.e. differ in size) in the pyramidal structure. The process
terminates when no further merges are possible.

Quad splitting of an image


Although it is common to start with the single-region assumption, it is possible to start at an
intermediate level, e.g. with 16 regions. In the latter case, it is possible that 4 regions may
be merged to form a parent region. For simplicity, assume we start with a single region, i.e.
the whole image. The process of splitting is then simple. A list of the current regions to be
processed, i.e. regions not yet known to be homogeneous, is maintained (the ProcessList).
When a region is found to be homogeneous, it is removed from the ProcessList and placed
on a RegionList.
Algorithm for successive region splitting:
Set ProcessList = IMAGE
Repeat
    Extract the first element of ProcessList
    If the region is uniform then add it to RegionList
    Else split the region into 4 sub-regions and add these to ProcessList
Until (all regions removed from ProcessList)
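
A minimal Python version of this splitting loop (our own sketch: the (row, col, size) region
representation, the threshold value, and the assumption of a square, power-of-two-sized
image are ours), using the standard-deviation homogeneity test defined below:

import numpy as np

def split_regions(image, threshold=10.0, min_size=2):
    # Each region is (row, col, size); start with the whole (square) image.
    process_list = [(0, 0, image.shape[0])]
    region_list = []
    while process_list:
        r, c, s = process_list.pop(0)       # extract the first element
        block = image[r:r + s, c:c + s]
        if block.std() < threshold or s <= min_size:
            region_list.append((r, c, s))   # uniform: keep as a region
        else:                               # else split into 4 quadrants
            h = s // 2
            process_list += [(r, c, h), (r, c + h, h),
                             (r + h, c, h), (r + h, c + h, h)]
    return region_list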

Uniformity is determined on the basis of homogeneity of a property, as in the previous
examples. For a grey-level image, say, a region is said to be statistically homogeneous if the
standard deviation of the intensity is less than some threshold value, where the standard
deviation is given by
σ = [ (1/N) Σ_{i=1}^{N} (x_i - x̄)² ]^(1/2)
and x̄ is the mean intensity of the N pixels x_i in the region. Whereas splitting is quite
simple, merging is more complex. Different algorithms are possible: some use the same test
for homogeneity, but others use the difference in average values. Generally, pairs of regions
are compared, allowing more complex shapes to emerge.

A program in use at Heriot-Watt is spam (split and merge), which takes regions a pair at a
time and uses the difference of averages to judge similarity, i.e. it merges region A with
neighbouring region B if the difference in the average intensities of A and B is below a
threshold.
Algorithm for successive region merging:
Put all regions on ProcessList
Repeat
    Extract each region from ProcessList
    Traverse the remainder of the list to find a similar region (homogeneity criterion)
    If they are neighbours then merge the regions and recalculate the property values
Until (no merges are possible)
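
A matching sketch of the merge pass, in the difference-of-averages style attributed to spam
above (again our own illustration: the block adjacency test, the unweighted averaging of
block means, and the threshold are assumptions):

def merge_regions(image, regions, threshold=10.0):
    # regions: list of (row, col, size) blocks, e.g. from split_regions above.
    def mean(reg):
        r, c, s = reg
        return image[r:r + s, c:c + s].mean()

    def neighbours(a, b):
        # Two axis-aligned square blocks are neighbours if they share an edge.
        ra, ca, sa = a
        rb, cb, sb = b
        rows_overlap = ra < rb + sb and rb < ra + sa
        cols_overlap = ca < cb + sb and cb < ca + sa
        vertical = (ra + sa == rb or rb + sb == ra) and cols_overlap
        horizontal = (ca + sa == cb or cb + sb == ca) and rows_overlap
        return vertical or horizontal

    merged = [[reg] for reg in regions]   # each merged region is a list of blocks
    changed = True
    while changed:                        # repeat until no merges are possible
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if any(neighbours(a, b) for a in merged[i] for b in merged[j]):
                    avg_i = sum(mean(r) for r in merged[i]) / len(merged[i])
                    avg_j = sum(mean(r) for r in merged[j]) / len(merged[j])
                    if abs(avg_i - avg_j) < threshold:
                        merged[i] += merged.pop(j)   # merge region j into i
                        changed = True
                        break
            if changed:
                break
    return merged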
