
Implementation of hybrid image fusion technique for feature enhancement in medical diagnosis

CHAPTER 1

INTRODUCTION

1.1 Image Fusion:

Image fusion is the process of combining multi-source imagery data using
advanced fusion techniques, including fusion frameworks, schemes and algorithms.
The main purpose is to integrate disparate and complementary data to enhance
the information apparent in the images and to increase the reliability of the
interpretation, as shown in Figure 1.1.

Figure 1.1: General block diagram of image fusion

1.2 Motivation:

Fusion leads to more accurate data [1] and increased utility; it can also
improve data quality and widen the applications of these data. Combining the higher
spatial information in one band with the higher spectral information in another dataset makes it possible to create
M.Tech, Dept.of ECE,BCETFW,Kadapa Page 1



‘synthetic’ higher-resolution multispectral datasets and images. With rapid
advancements in technology, it is now possible to obtain information from multi-source
images and produce a high-quality fused image with both spatial and spectral
information. The main aims of image fusion are to:

• Reduce the amount of data
• Retain important information
• Create a new image that is more suitable for human/machine perception or for
  further processing tasks.

It is also stated that fused data provide robust operational performance,
increased confidence, reduced ambiguity, improved reliability and improved
classification [2], and are more suitable for visual perception and for digital
processing. Image fusion is widely applied to digital imagery in applications such as:

1) Medical imaging
2) Microscopic imaging
3) Remote sensing
4) Robotics
5) Battle field surveillance
6) Automated target recognition
7) Guidance and control of autonomous vehicles.

Our project is related to multi-modal medical image fusion. Generally, for a
physician to analyze the condition of a patient, it is often necessary to study
different images such as MRI, CT, PET and SPECT. This is a time-consuming process,
so our idea is to fuse these images into a single image to provide a better diagnosis.


1.3 Objective:

1) Broad objective: To perform multimodal medical image fusion using hybrid
techniques based on the wavelet, curvelet and contourlet transforms.
2) Specific objective: To compute and analyze the performance metrics of the fused
images obtained from the proposed methods (the wavelet-contourlet and
curvelet-contourlet transforms) and compare them with the existing method in terms
of PSNR, MSE and entropy, in order to identify the fusion technique that yields the
best diagnosis.

1.4 Multi Modal Image Fusion:

In recent years, multimodal image fusion algorithms and devices have
evolved into a powerful tool in clinical applications of medical imaging
techniques. They have shown significant achievements in improving the clinical
accuracy of diagnosis based on medical images. The main motivation is to merge the
most relevant information from different sources into a single output, which plays a
crucial role in medical diagnosis.

Medical imaging has gained significant attention due to its predominant role in
health care. Some of the imaging modalities in use today are X-ray, computed
tomography (CT), magnetic resonance imaging (MRI) and magnetic resonance
angiography (MRA). These techniques are used for extracting clinical information;
the information they provide is usually complementary, although some of it is unique
to the specific imaging modality used.

For example,


X-ray: Used to detect fractures and abnormalities in bone position.

CT: Provides more accurate information about calcium deposits, air and
dense structures such as bones (with less distortion), as well as acute bleeds and
tumours. However, it cannot detect physiological changes.

MRI: Using a strong magnetic field and radio-wave energy, it provides information
about the nervous system; structural abnormalities of soft tissue and muscles can be
better visualized.

MRA: Used to evaluate blood vessels and their abnormalities.

PET: Positron emission tomography offers quantitative analyses, allowing
relative changes to be monitored over time as a disease process evolves or in
response to a specific stimulus, by looking at blood flow, metabolism,
neurotransmitters and radio-labelled drugs.

SPECT: Single photon emission computed tomography provides functional and
metabolic information. It helps to diagnose and stage a cancer.

fMRI: Functional magnetic resonance imaging is a functional neuro-imaging
procedure using MRI technology that measures brain activity by detecting changes
associated with blood flow.

Hence, we can see that none of these modalities carries all the
relevant information in a single image, so anatomical and functional medical
images need to be combined for a concise view. For this purpose, multimodal
medical image fusion has been identified as an approach with great potential.
It aims to integrate information from multiple modalities to obtain a more complete
and accurate description of the same object, which facilitates more precise diagnosis


and better treatment. The fused image provides higher accuracy and reliability by
removing redundant information.

Applications of image fusion are found in radiology, molecular and brain
imaging, oncology, diagnosis of cardiac diseases, neuro-radiology and ultrasound.
Multimodal medical image fusion helps in diagnosing diseases and is also cost
effective, minimising storage to a single fused image instead of multiple source images.

Many fusion techniques exist for medical image fusion, but so far none of
them alone provides fully satisfactory results. Fusion techniques are broadly
classified into two groups: spatial domain and spectral (transform) domain.
Spatial-domain methods deal directly with the pixels of an image, which can lead to
spatial distortion in the fused image. They do not provide directional information
and can also introduce spectral distortion, while arithmetic combination loses
original detail through the low contrast of the fused image. This becomes a drawback
for further processing of the fused image, such as classification.

These problems can be overcome in the transform domain, which involves
decomposing the source images into sub-bands that are then selectively processed
using an appropriate fusion algorithm. In frequency-domain methods the image is
first transferred into the frequency domain: the Fourier transform of the image is
computed first, all fusion operations are performed on the transform coefficients,
and the inverse Fourier transform is then applied to obtain the resultant image.

So far the wavelet, curvelet and contourlet transforms are available, but none
of these transforms used individually yields a good fused image. The wavelet
transform provides a multi-resolution fused image, but it fails to capture
curved-edge information. This can be overcome by the curvelet transform, but it has limited


directionality. The contourlet transform provides both directionality and
localization, but its multi-resolution representation is not as good as that of the
wavelet. For this reason, a hybrid fusion method based on combinations of the
wavelet, curvelet and contourlet transforms is proposed for medical image fusion.
For each source image it is proposed to apply combinations of the wavelet,
curvelet and contourlet transforms, taking two transforms at a time. For the fused
images obtained in each case, performance metrics such as entropy, peak
signal-to-noise ratio (PSNR) and mean square error (MSE) are computed so as to
identify the combination of transforms that yields the most informative fused image
and thus a better medical diagnosis. The proposed method will be simulated using
MATLAB.
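The three metrics named above can be sketched in a few lines. This is an illustrative Python/NumPy version (the thesis itself targets MATLAB); the usual 8-bit peak value of 255 is assumed for PSNR.

```python
import numpy as np

def mse(ref, fused):
    """Mean square error between a reference image and the fused image."""
    diff = ref.astype(np.float64) - fused.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    m = mse(ref, fused)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit image: a rough measure
    of how much information the fused image carries."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

Since the true scene is unknown in medical fusion, MSE/PSNR are typically computed against each source image in turn, while entropy is computed on the fused image alone.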

1.5 Organization of Thesis:

This report is organized as follows: Chapter 2 presents a literature survey on
the types of image fusion, the levels of image fusion and fusion techniques.
Chapter 3 describes the existing method, the wavelet-curvelet image fusion
technique, and its limitations. Chapter 4 presents the proposed methods, the
wavelet-contourlet and curvelet-contourlet image fusion techniques. Chapter 5
presents the results and analysis. Chapter 6 draws conclusions and outlines the
future scope of the project.


CHAPTER 2

LITERATURE SURVEY

2.1 Types of Image Fusion:

Based on the input data of the fusion process and also on the purpose of
fusion, image fusion can be classified into the following types:

2.1.1 Multi Modal:


Fusion of images coming from different sensors (CT, MRI, visible, infrared,
ultraviolet, etc.) is called multi-modal image fusion. It is used to decrease the
amount of data and to emphasize band-specific information. Our project focuses on
this type of image fusion.

(a) NMR (b) PET (c) Fused Image

Figure 2.1: Multi modal image fusion

Generally, for a physician to analyze the condition of a patient, it is often
necessary to study different images such as MRI, CT, PET and SPECT. This is a
time-consuming process, so the idea is to fuse these images into a single image to
provide a better diagnosis. The fusion of NMR and PET images is shown in Figure 2.1.

2.1.2 Multifocal Images:

In digital camera applications, when a lens focuses on a subject at a certain
distance, subjects at other distances are not sharply focused. A possible way to
solve this problem is image fusion: one can acquire a series of pictures with
different focus settings and fuse them to produce a single image with extended depth
of field. The fusion of multifocal images is shown in Figure 2.2.


(a)Near focused image (b) Far focused image (c) Fused image

Figure 2.2: Multi focus image fusion

2.1.3 Multi View:

It is defined as fusion of images from the same modality and taken at the same
time but from different viewpoints. A non-blind, shift-invariant image processing
technique that fuses multi-view three-dimensional image data sets into a single, high
quality three-dimensional image is presented in Figure 2.3.

Figure 2.3: Detection results of the motion-based tracker of the first run of the subject
“Alba”, for all camera views

It is effective for

1) Improving the resolution and isotropy in images of transparent specimens.


2) Improving the uniformity of the image quality of partially opaque samples.

2.1.4 Multi Temporal:

It is defined as the fusion of images taken at different times in order to detect
changes between them or to synthesize realistic images of objects which were not
photographed at the desired time. It is illustrated in Figure 2.4.

Figure 2.4: Multi-temporal Wetland Identification and Delineation products


2.2 Levels of Image Fusion:

Figure 2.5: Classification of Levels of Abstraction

The classification of the levels of image fusion is shown in Figure 2.5. It
illustrates the categories of fusion techniques implemented at the appropriate
level of abstraction:

1) Pixel level
2) Feature level
3) Decision level

2.2.1 Pixel Level:


Pixel-based fusion is performed on a pixel-by-pixel basis, generating a fused
image in which the information associated with each pixel is selected from the
corresponding pixels in the source images. It is also called pixel-level image fusion.

2.2.2 Feature Level:

Feature-level fusion first employs a feature extraction stage, which is preceded
by an object segmentation routine applied to only one of the input sensors (denoted
sensor A). The general block diagram of feature-based fusion is shown in Figure 2.6.

Figure 2.6: Feature based fusion

The object segmentation routine is used only to bootstrap the feature selection
process, and hence any method that provides even rough, incomplete object
segmentation can be employed at this stage. Feature fusion techniques are used to
increase the accuracy of the feature measurement. Data fusion techniques at the
feature level rely on feature attribute combination techniques such as Kalman
filtering.


2.2.3 Decision Level:

Symbol-level fusion allows the information from multiple sensors to be
effectively combined at the highest level of abstraction. A common type of
symbol-level fusion is decision fusion. Symbol-level or decision-level fusion is
used to increase the probability of a symbol representing a decision. Data fusion
techniques at the symbol level rely on logical and statistical inference techniques
such as Bayesian analysis, Dempster-Shafer evidential reasoning, and fuzzy set theory.

Figure 2.7: Decision level based fusion


Techniques are required to effectively fuse symbolic data from multiple sensors
for the purpose of identification. The general block diagram of decision-level
fusion is given in Figure 2.7. This can be difficult when the sensors provide
complementary information or different levels of information. The various
parameters of the fusion levels (pixel, feature and decision) are listed in
Table 2.1 below.

Table 2.1: Comparison of various levels of image fusion

Feature                  Pixel Level   Feature Level   Decision Level
Amount of information    Maximum       Medium          Minimum
Information loss         Minimum       Medium          Maximum
Sensor dependence        Maximum       Medium          Minimum
Immunity                 The worst     Medium          The best
Detection performance    The best      Medium          The worst
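The Bayesian combination used at the decision level can be sketched very simply: under a naive independence assumption, each sensor's class likelihoods are multiplied together and renormalised into a posterior. This is an illustrative Python/NumPy sketch, not a full Dempster-Shafer implementation; the function name and array layout are assumptions for illustration.

```python
import numpy as np

def bayes_decision_fusion(likelihoods, prior=None):
    """Naive Bayesian decision fusion. `likelihoods` is a
    (sensors x classes) array of per-sensor class likelihoods.
    They are multiplied together (independence assumption),
    optionally weighted by a prior, and normalised into a
    posterior; the winning class index is returned alongside."""
    L = np.asarray(likelihoods, dtype=np.float64)
    post = L.prod(axis=0)                 # combine sensor evidence
    if prior is not None:
        post = post * np.asarray(prior, dtype=np.float64)
    post = post / post.sum()              # renormalise to a distribution
    return post, int(post.argmax())
```

For example, two sensors that both favour class 0 (likelihoods 0.8 and 0.6) produce a fused posterior that favours class 0 even more strongly than either sensor alone.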

2.3 Pre-processing of Datasets:

Image registration is one of the pre-processing techniques; it aligns the data
sets in an image using a feature-based algorithm. Before performing fusion, the
images have to go through a pre-processing stage.

2.3.1 System level consideration:

The system-level considerations required to implement image fusion are shown in
Figure 2.8. They comprise the following stages:

• Image registration
• Image pre-processing
• Image post-processing


Figure 2.8: System level considerations

Image registration is nothing but the correct alignment of the input images, as
perfectly as possible, in order to obtain good fusion results. Initially the
acquired images are sent to pre-processing; image registration is one of the
pre-processing techniques that aligns the data sets using a feature-based algorithm.

For example, if the sizes of the images vary, then before fusion the images need
to be resized so that both are the same size. This is done by interpolating the
smaller image through row and column duplication.
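The row-and-column duplication described above amounts to nearest-neighbour interpolation. A minimal NumPy sketch (illustrative; the function name is an assumption):

```python
import numpy as np

def resize_by_duplication(img, target_shape):
    """Upsample the smaller image to target_shape by duplicating rows
    and columns (nearest-neighbour interpolation), so that both inputs
    have the same size before fusion."""
    rows = np.linspace(0, img.shape[0] - 1, target_shape[0]).round().astype(int)
    cols = np.linspace(0, img.shape[1] - 1, target_shape[1]).round().astype(int)
    return img[np.ix_(rows, cols)]
```

In practice smoother interpolation (bilinear, bicubic) is often preferred, but simple duplication preserves the original pixel values exactly.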

Pre-processing makes the images best suited to the fusion algorithm. The
post-processing stage depends on the type of display, the fusion system being used
and the preference of the human operator.

2.4 Image Registration:


Image registration is one of the necessary pre-processing techniques and
significantly affects the fusion results. It can also be called image alignment:
the input images are aligned as perfectly as possible in order to produce the best
fusion results. If the input image datasets are not aligned to each other, it is
impossible to obtain good fusion results even if the fusion framework, scheme and
algorithm are optimal. Therefore, it is necessary to align or register the input
images as well as possible prior to the main fusion process.

2.5 Fusion Techniques:

The general requirement of an image fusion process is to preserve all valid and
useful information from the source images while not introducing any distortion into
the resultant fused image. Various methods have been developed to perform image
fusion; they can be divided into two types, spatial-domain methods and
frequency-domain methods.

Image fusion is used extensively in medical imaging, microscopic imaging,
remote sensing, computer vision and robotics. Several approaches can be
distinguished, depending on whether the images are fused in the spatial domain or
the spectral domain, and the actual fusion process can take place at different
levels of information representation.

Image fusion methods can be broadly classified into two groups:

1) Spatial-domain fusion methods.
2) Transform-domain fusion methods.


Spatial-domain methods combine the pixel values of the two or more images to be
fused in a linear or nonlinear way. The simplest form is the weighted-averaging
method, in which the resultant image is obtained by averaging every pair of
corresponding pixels in the input images.

In frequency-domain methods, the input images are first decomposed into
multi-scale coefficients. Various fusion rules are used to select or manipulate
these coefficients, which are then synthesized via the inverse transform to form
the fused image. The fusion techniques are classified as shown in Figure 2.9.

Figure 2.9: Techniques of image fusion

2.5.1 Spatial Domain Fusion Method:

The spatial-domain fusion methods deal directly with the image pixels,
manipulating the pixel values to achieve the desired results:


1) Averaging.
2) Select maximum.
3) Weighted average method.
4) Intensity-hue-saturation (IHS) transform.
5) Brovey transform.
6) Principal component analysis (PCA).

1) Averaging: In this method the resultant image is obtained by averaging every
pair of corresponding pixels in the input images. It is one of the simplest
methods, easy to understand and implement. It works well when the images to be
fused come from the same type of sensor and contain additive noise, and it proves
good in the particular case where the input images have overall high brightness and
high contrast.

K(i, j) = {X(i, j) + Y(i, j)} / 2                                          (2.1)

where X(i, j) and Y(i, j) are the two input images.

Advantages of averaging algorithm:

1. It is a very simple method.
2. It is easy to understand and implement.
3. Averaging works well when the images to be fused come from the same type of
sensor and contain additive noise.
4. The method proves good in the particular case where the input images have
overall high brightness and high contrast.

Disadvantages of averaging algorithm:

1. It leads to undesirable side effects such as reduced contrast.
2. Some noise is easily introduced into the fused image, which consequently
reduces the quality of the resultant image.
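The averaging rule of Eq. 2.1 can be sketched in a few lines; this is an illustrative Python/NumPy version (the thesis targets MATLAB):

```python
import numpy as np

def fuse_average(x, y):
    """Average fusion, Eq. 2.1: K(i, j) = (X(i, j) + Y(i, j)) / 2,
    computed for every pixel at once."""
    return (x.astype(np.float64) + y.astype(np.float64)) / 2.0
```

For display, the result would typically be rounded and cast back to 8-bit.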


2) Select maximum: The greater a pixel's value, the more in focus it is. This
algorithm therefore chooses the in-focus regions from each input image by taking
the greater value for each pixel, resulting in a highly focused output. The value
of each pixel P(i, j) in the two images is compared, and the greater pixel value is
assigned to the corresponding output pixel:

P(i, j) = max{A(i, j), B(i, j)}                                            (2.2)

Advantages of the select-maximum algorithm:

1) It results in a more highly focused output image than the averaging method.

Disadvantages of the select-maximum algorithm:

1) As a pixel-level method it is affected by blurring, which directly affects
the contrast of the image.
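The select-maximum rule described above is a one-liner in NumPy (illustrative sketch):

```python
import numpy as np

def fuse_select_max(a, b):
    """Select-maximum fusion: keep, for each pixel, the larger
    (assumed more in-focus) of the two input values."""
    return np.maximum(a, b)
```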

3) Weighted average method: In this method the resultant fused image is obtained
by taking the weighted average of the intensities of corresponding pixels from both
input images:

P(i, j) = w A(i, j) + (1 − w) B(i, j),   0 ≤ i ≤ m, 0 ≤ j ≤ n              (2.3)

where w is the weight given to the first input image, with 0 ≤ w ≤ 1.
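The weighted-average rule can be sketched as follows (illustrative; with w = 0.5 it reduces to plain averaging):

```python
import numpy as np

def fuse_weighted(a, b, w=0.5):
    """Weighted-average fusion:
    P(i, j) = w * A(i, j) + (1 - w) * B(i, j), with 0 <= w <= 1."""
    if not 0.0 <= w <= 1.0:
        raise ValueError("weight w must lie in [0, 1]")
    return w * a.astype(np.float64) + (1.0 - w) * b.astype(np.float64)
```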

4) Intensity-hue-saturation (IHS) transforms:

Methods based on Intensity, Hue and Saturation (IHS) transform are probably
the most popular approaches used for enhancing the spatial resolution of multi-sensor
images. The IHS method is capable of quickly merging the massive volume of data. It


can transform the colour space from red (R), green (G) and blue (B) to hue (H),
saturation (S) and intensity (I).

The IHS colour transformation effectively separates the spatial (I) and spectral
(H, S) information of a standard RGB image and relates to the human
colour-perception parameters. The mathematical context is expressed by Eq. 2.4,
where I is the intensity, v1 and v2 are intermediate variables needed in the
transformation, and H and S stand for hue and saturation.

    I  = (1/√3) R + (1/√3) G + (1/√3) B
    v1 = (1/√6) R + (1/√6) G − (2/√6) B
    v2 = (1/√2) R − (1/√2) G

    H = tan⁻¹(v2 / v1)

    S = √(v1² + v2²)                                                       (2.4)

There are two ways of applying the IHS technique in image fusion: direct and
substitutional. The first refers to the transformation of three image channels
assigned to I, H and S. The second transforms three channels of the data set
representing RGB into the IHS colour space, which separates the colour aspects
from the average brightness (intensity). The schematic diagram of IHS fusion is
shown in Figure 2.10.


Figure 2.10: Standard IHS fusion process

This corresponds to the surface roughness, its dominant wavelength contribution
(hue) and its purity (saturation). Both the hue and the saturation in this case are
related to the surface reflectivity or composition. One of the components is then
replaced by a fourth image channel which is to be integrated. In many published
studies the channel that replaces one of the IHS components is contrast stretched
to match the latter. A reverse transformation from IHS to RGB, as presented in
Eq. 2.5, converts the data back into the original image space to obtain the fused
image. The IHS technique has become a standard procedure in image analysis. It
serves colour enhancement of highly correlated data, feature enhancement, the
improvement of spatial resolution and the fusion of disparate data sets.


    R = (1/√3) I + (1/√6) v1 + (1/√2) v2
    G = (1/√3) I + (1/√6) v1 − (1/√2) v2
    B = (1/√3) I − (2/√6) v1                                               (2.5)

The use of the IHS technique in image fusion is manifold, but it is based on one
principle: the replacement of one of the three components (I, H or S) of one data
set with another image. Most commonly the intensity channel is substituted.
Replacing the intensity (the sum of the bands) by a higher-spatial-resolution value
and reversing the IHS transformation leads to composite bands. These are linear
combinations of the original (re-sampled) multispectral bands and the
higher-resolution panchromatic band.

A variation of the IHS fusion method applies a stretch to the hue and saturation
components before they are combined and transformed back to RGB. This is called
colour contrast stretching. The IHS transformation can be performed either in one
or in two steps. The two-step approach includes the possibility of contrast
stretching the individual I, H and S channels, with the advantage of producing
colour-enhanced fused imagery. A colour system closely related to IHS is HSV: hue,
saturation and value.

Procedure for the IHS algorithm:

1) Perform image registration (IR) on the PAN and MS (multispectral) images, and
resample the MS image.
2) Convert the MS image from RGB space into IHS space.
3) Match the histogram of PAN to the histogram of the I component.
4) Replace the I component with PAN.
5) Convert the fused MS image back to RGB space.
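Steps 1-5 can be sketched as below. This is an illustrative Python/NumPy version using the linear RGB-to-(I, v1, v2) transform of Eq. 2.4; histogram matching (step 3) is omitted for brevity, and the function name is an assumption.

```python
import numpy as np

# Forward transform matrix of Eq. 2.4: maps an (R, G, B) vector to
# (I, v1, v2). It is orthonormal, so its inverse is its transpose,
# which is exactly the reverse transform of Eq. 2.5.
FWD = np.array([[1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
                [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
                [1/np.sqrt(2), -1/np.sqrt(2),  0.0]])

def ihs_fuse(ms_rgb, pan):
    """IHS substitution fusion: convert the registered MS image to
    (I, v1, v2), replace the intensity I with the PAN band, and
    convert back to RGB."""
    h, w, _ = ms_rgb.shape
    iv = FWD @ ms_rgb.reshape(-1, 3).T.astype(np.float64)  # step 2
    iv[0] = pan.reshape(-1)                                # step 4
    return (FWD.T @ iv).T.reshape(h, w, 3)                 # step 5
```

Substituting the intensity channel while leaving v1 and v2 untouched is what preserves the spectral (hue/saturation) content while injecting the higher spatial resolution.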


Advantages of the IHS algorithm:

1) It provides high spatial quality.
2) It is a simple method for merging image attributes.
3) It provides a better visual effect.
4) It gives good results for the fusion of remote sensing images.

Disadvantages of the IHS algorithm:

1) It produces significant colour distortion with respect to the original image.
2) It suffers from artefacts and noise, which tend to raise the contrast.
3) Its major limitation is that only three bands are involved.

5) Brovey Transform:

The Brovey transform (BT), also known as colour-normalized fusion, is based on
the chromaticity transform and the concept of intensity modulation. It is a simple
method for merging data from different sensors which preserves the relative
spectral composition of each pixel but replaces its overall brightness with that of
the high-spatial-resolution image. It is a combination of arithmetic operations
that normalizes the spectral bands before they are multiplied with the
high-resolution image, retaining the spectral feature of each pixel while
transferring all the luminance information into a high-resolution multi-sensor
image. The formulas used for the Brovey transform are as follows:

Red = (band 1/Σband n) * High Resolution Band

Green = (band 2/Σband n) * High Resolution Band


Blue = (band 3/Σband n) * High Resolution Band
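The three formulas above can be sketched in vectorised form (illustrative; a small epsilon guards against division by zero at black pixels):

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-12):
    """Brovey (colour-normalised) fusion: each MS band is divided by
    the per-pixel sum of all MS bands and multiplied by the
    high-resolution PAN band, preserving relative spectral
    composition while substituting the overall brightness."""
    ms = ms.astype(np.float64)
    total = ms.sum(axis=2, keepdims=True)   # Σ band n, per pixel
    return ms / (total + eps) * pan[..., None]
```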

Spatial-domain methods provide high spatial resolution and are easy to perform,
but they suffer from image blurring and their outputs are less informative; the
spatial distortion becomes a negative factor.

Advantages of the Brovey transform:

1) It increases the contrast in the low and high ends of the image histogram.
2) It is a simple method for merging data from different sensors.
3) It is simple and fast.
4) It provides a superior visual, high-resolution multispectral image.
5) It is very useful for visual interpretation.

Disadvantages of the Brovey transform:

1) Only three bands at a time can be merged from the multispectral scene.
2) It should not be used if preserving the original scene radiometry is important.
3) It ignores the requirement for high-quality synthesis of spectral information
and produces spectral distortion.

6) Principal Component Analysis:

The limitations of IHS led to the development of principal component analysis.
PCA is a general statistical technique that transforms multivariate data with
correlated variables into a set of uncorrelated variables called principal
components. These new variables are obtained as linear combinations of the
original variables. PCA has been widely used in image encoding, image data
compression, image enhancement and image fusion. In the fusion process, the PCA
method generates uncorrelated images (PC1, PC2, …, PCn, where n is the number of
input multispectral bands).


PCA-based fusion is very suitable for merging MS and PAN images. Compared to
IHS fusion, PCA fusion has the advantage that it does not have the three-band
limitation and can be applied to any number of bands at a time, as shown in
Figure 2.11.

Figure 2.11: Standard PCA fusion scheme

PCA is useful for image encoding, image data compression, image enhancement,
digital change detection, multi-temporal dimensionality and image fusion. It is a
statistical technique that transforms a multivariate data set of inter-correlated
variables into a data set of new, uncorrelated linear combinations of the original
variables, generating a new set of orthogonal axes. The computation of the
principal components (PCs) comprises the calculation of:

1) the covariance (unstandardized PCA) or correlation (standardized PCA) matrix;
2) the eigenvalues and eigenvectors;
3) the PCs.

An inverse PCA transforms the combined data back to the original image space.
The use of the correlation matrix implies a scaling of the axes so that the
features receive unit variance, which prevents certain features from dominating
the image because of their large digital numbers. The signal-to-noise ratio (SNR) is


significantly improved by applying the standardized PCA. Better results are
obtained if the statistics are derived from the whole study area rather than from a
subset area.

Two types of PCA can be performed: standard or selective. The former uses all
available bands of the input image; the latter uses only a selection of bands
chosen on the basis of a priori knowledge or the application purpose.

PCA in image fusion has two approaches:

1) PCA of the multichannel image, with replacement of the first principal
component by a different image (principal component substitution, PCS).
2) PCA of all multi-image data channels.

The first version follows the idea of increasing the spatial resolution of a
multichannel image by introducing an image with a higher resolution. The channel
that will replace PC1 is stretched to the variance and average of PC1. The
higher-resolution image replaces PC1, since PC1 contains the information that is
common to all bands while the spectral information is unique to each band.

The second procedure integrates the disparate natures of multi-sensor input
data in one image: the image channels of the different sensors are combined into
one image file and a PCA is calculated from all the channels. The flow diagram is
shown in Figure 2.12.

Procedure for the PCA algorithm:

1. Perform IR on the PAN and MS images, and resample the MS image.
2. Convert the MS bands into PC1, PC2, PC3, … by the PCA transform.
3. Match the histogram of PAN to the histogram of PC1.
4. Replace PC1 with PAN.


5. Convert PAN, PC2, PC3, … back by the reverse PCA.
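Steps 1-5 can be sketched as follows (illustrative Python/NumPy; the substituted PAN band is mean/variance matched to PC1 here in place of histogram matching, and the function name is an assumption):

```python
import numpy as np

def pca_fuse(ms, pan):
    """PCA substitution fusion: project the MS bands onto their
    principal components, replace PC1 with the (statistically
    matched) PAN band, then apply the reverse PCA transform."""
    h, w, n = ms.shape
    X = ms.reshape(-1, n).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    _, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, ::-1]                     # largest eigenvalue first
    pcs = Xc @ vecs                          # forward PCA (step 2)
    p = pan.reshape(-1).astype(np.float64)   # match PAN to PC1 stats (step 3)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p                            # substitute PC1 (step 4)
    return (pcs @ vecs.T + mean).reshape(h, w, n)   # reverse PCA (step 5)
```

Because the eigenvector matrix is orthonormal, the reverse transform is simply multiplication by its transpose followed by re-adding the band means.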

Figure 2.12: Flow diagram of PCA

PCA is used amply in all forms of analysis, from neuroscience to computer
graphics, because it is a simple, non-parametric method of extracting relevant
information from confusing data sets. This technique is applied to the
multispectral bands.

Advantages of PCA Algorithm:

1) This method is very simple to use and the images fused by this method have
high spatial quality.
2) It prevents certain features from dominating the image because of their large
digital numbers.

Disadvantages of PCA Algorithm:

1) It suffers from spectral degradation.
2) This method is highly criticized because of the distortion of the spectral characteristics between the fused images and the original low resolution images.

Drawbacks of Spatial Domain Image Fusion:

In spatial domain methods the high frequency details are injected into an up-sampled version of the MS images. The disadvantage of spatial domain approaches is that they produce spatial distortion in the fused image. They do not give directional information and also lead to spectral distortion, while arithmetic combination loses original details as a result of the low contrast of the fused image. This becomes a negative factor when the fused image is taken up for further processing, such as classification.

The spatial distortion can be handled very well by transform domain approaches to image fusion. Multi-resolution analysis has become a very useful tool for analyzing remote sensing images, and the discrete wavelet transform has become a very useful tool for fusion. Other fusion methods also exist, such as pyramid based and curvelet transform based methods. These methods show better performance in the spatial and spectral quality of the fused image compared to spatial methods of fusion.

2.5.2 Transform Domain Fusion Method:

This is also called the frequency domain method. Transform domain fusion methods can be categorized into two groups:

1) Multi-scale Decomposition
2) Multi-scale Geometric Analysis

Wavelets come under the Multi-scale Decomposition methods, while the curvelet and contourlet methods come under Multi-scale Geometric Analysis. Our proposed idea is based on transform domain methods. The general block diagram representation of spectral domain based image fusion is shown in Figure 2.13.

Figure 2.13: General Diagram for Image Fusion in Transform Domain

Image registration is one of the necessary pre-processing techniques that significantly affect the fusion results. Image registration can also be called image alignment: the input images are aligned as perfectly as possible in order to produce the best fusion results. If the input image datasets are not aligned to each other, it is impossible to obtain good fusion results even if the fusion framework, scheme and algorithm are optimum. Therefore, it is necessary to align or register the input images as accurately as possible prior to the main fusion process.

Example: If the sizes of the images vary, then before fusion the images need to be resized so that both images are of the same size. This is done by interpolating the smaller image by row and column duplication.

In the spectral domain the original image is first transferred into the frequency domain, i.e. the Fourier transform (or one of our proposed transforms) of the image is computed first. An appropriate fusion rule is then applied to the transformed coefficients, and finally the inverse transformation is performed to represent the fused image in the spatial domain.

Multi scale transform based fusion:

(a) High-pass filtering method
(b) Pyramid method: (i) Gaussian pyramid (ii) Laplacian pyramid (iii) Morphological pyramid (iv) Gradient pyramid (v) Ratio of low pass pyramid
(c) Wavelet transforms: (i) Discrete wavelet transform (DWT) (ii) Stationary wavelet transform (iii) Multi-wavelet transform

Multi geometrical image fusion:

a) Curvelet transform
b) Contourlet transform

(a) High Pass Filtering Methods: High pass filtering methods are used for image sharpening in the frequency domain. Because edges and other abrupt changes in intensity are associated with high frequency components, image sharpening can be achieved in the frequency domain by high pass filtering, which attenuates low frequency components without disturbing the high frequency information in the Fourier transform. Some of the popular frequency filtering methods for image sharpening are the High-Pass Filter Additive (HPFA) and High Frequency Modulation (HFM).
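A minimal NumPy sketch of the HPFA idea (HFM is omitted): the high frequency detail of the PAN image, i.e. PAN minus a low-pass version of itself, is added to the resampled MS band. The box filter and the function names here are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple separable box filter used here as the low-pass stage."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, out)
    return out

def hpfa_fuse(ms_band, pan, k=5):
    """High-Pass Filter Additive: inject the PAN high-frequency detail
    (PAN minus its low-pass version) into the resampled MS band."""
    return ms_band.astype(float) + (pan.astype(float) - box_blur(pan, k))
```

A flat (constant) PAN image has no high-frequency detail, so in that case the MS band passes through unchanged.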
(b) Pyramid Methods: The pyramid offers a useful image representation for a number of tasks. It is efficient to compute: indeed, pyramid filtering is faster than the equivalent filtering done with a fast Fourier transform. The information is also available in a format that is convenient to use, since the nodes in each level represent information that is localized in both space and spatial frequency.

There are various types of pyramid transforms. The pyramid transforms considered in this project are the following:

1) Filter Subtract Decimate Pyramid
2) Gradient Pyramid
3) Laplacian Pyramid
4) Ratio Pyramid
5) Morphological Pyramid

Typically, every pyramid transform consists of three major phases:

 Decomposition
 Formation of the initial image for decomposition.
 Recomposition

Decomposition is the process where a pyramid is generated successively at each level of the fusion. The depth of fusion, or number of levels of fusion, is pre-decided; it is chosen based on the size of the input image. The recomposition process, in turn, forms the finally fused image, level wise, by merging the pyramids formed at each level with the decimated input images.

The decomposition phase basically consists of the following steps. These steps are performed l times, l being the number of levels to which the fusion will be performed.

 Low pass filtering: each pyramidal method has a pre-defined filter with which the input images are convolved/filtered.

 Formation of the pyramid for the level from the filtered/convolved input images using Burt's method or Li's method.
 The input images are decimated to half their size; these act as the input image matrices for the next level of decomposition.

Merging of the input images is performed after the decomposition process. The resultant image matrix acts as the initial input to the recomposition process.

Figure 2.14: Pyramid Transform description with an example

The recomposition is the process wherein the resultant image is finally developed from the pyramids formed at each level of decomposition. The various steps involved in the recomposition phase are discussed below. These steps are performed l times, as in the decomposition process, as shown in Figure 2.14.

 The input image to the level of recomposition is undecimated (up-sampled).
 The undecimated matrix is convolved/filtered with the transpose of the filter vector used in the decomposition process.
 The filtered matrix is then merged, by pixel intensity value addition, with the pyramid formed at the respective level of decomposition.
 The newly formed image matrix acts as the input to the next level of recomposition.
 The merged image at the final level of recomposition is the resultant fused image. The flow of the pyramid based image fusion is illustrated by the example in Figure 2.14.

(i) Gaussian Pyramid

Pyramid construction is equivalent to convolving the original image with a set of Gaussian-like weighting functions, one equivalent weighting function per successive pyramid level. The convolution acts as a low-pass filter with the band limit reduced correspondingly by one octave with each level. Because of this resemblance to the Gaussian density function we refer to the pyramid of low-pass images as the "Gaussian pyramid."

(ii) Laplacian Pyramid

The Laplacian pyramid is a fundamental tool in image processing. It is derived from the Gaussian pyramid representation, which is basically a sequence of increasingly filtered and down-sampled versions of an image. The set of difference images between sequential Gaussian pyramid levels, along with the final (most down-sampled) level of the Gaussian pyramid, is known as the Laplacian pyramid of an image. The difference levels are commonly referred to as the detail levels, and the additional level as the approximation level. The Laplacian pyramid transform is specifically designed for capturing image details over multiple scales. Since the Laplacian pyramid represents the edge detail of the image at every level, comparing the corresponding Laplacian pyramid levels of two images makes it possible to obtain a fused image that merges their respective outstanding details and retains as much of the information in the images as possible. The source image is decomposed into a series of resolution spaces, and the choice of integration factor and fusion rule directly affects the final quality of the fused image.
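The decomposition/recomposition cycle just described can be sketched as follows. This is a simplified illustration under stated assumptions: a 3×3 binomial blur plus decimation stands in for the REDUCE filter, nearest-neighbour expansion stands in for EXPAND, the detail levels are merged by choosing the larger-magnitude coefficient, and the approximation level by averaging.

```python
import numpy as np

def _down(img):
    """Blur (1-D binomial kernel applied to rows then columns) and decimate."""
    k = np.array([0.25, 0.5, 0.25])
    p = np.pad(img, 1, mode="edge")
    p = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, p)
    p = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, p)
    return p[::2, ::2]

def _up(img, shape):
    """Nearest-neighbour expansion back to `shape` (a crude EXPAND)."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_fuse(a, b, levels=3):
    """Fuse two registered images: max-|detail| at each Laplacian level,
    average at the coarsest approximation level, then recompose."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    la, lb = [], []
    for _ in range(levels):                      # decomposition phase
        da, db = _down(a), _down(b)
        la.append(a - _up(da, a.shape))
        lb.append(b - _up(db, b.shape))
        a, b = da, db
    fused = (a + b) / 2.0                        # approximation level
    for det_a, det_b in zip(reversed(la), reversed(lb)):  # recomposition
        det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
        fused = _up(fused, det.shape) + det
    return fused
```

Because each detail level stores exactly what the expansion of the next coarser level loses, recomposition inverts decomposition: fusing an image with itself returns the image unchanged.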

(iii) Morphology Pyramids:

Morphological operators make use of the connectedness between pixels, either to improve the spatial arrangement of the pixels or to distort them to extract useful features from a subset of spatially localized pixel features. Filters designed with morphological operators have been successfully applied to the problem of diagnosing brain conditions, to analyze and identify tumours. Morphological operators have been used for fusing images from multiple modalities, such as CT and MR, with varied degrees of success. The success of these operators depends on the size and design of the structuring element, which invariably controls the opening and closing operations in morphological filtering. Among many, the major operators used for fusion are averaging, morphology towers, K-L transforms, and morphology pyramids.

The pyramid is produced by low-pass filtering the image and then sampling to generate the next lower resolution level of the hierarchy. The basis for a morphological pyramid requires a morphological sampling theorem. The overall fusion strategy is shown in Figure 2.15. According to this strategy a morphological pyramid is first produced for each of the input images. Then a morphological difference pyramid is constructed for each of the above pyramids; this is achieved by taking the differences between the morphological images residing at successive levels in the original pyramid. An intermediate pyramid is then constructed by combining information from the two difference pyramids at each level. Finally, reconstruction of the intermediate pyramid, using appropriate morphological operations, produces the required fused image.

Figure 2.15: Morphological Pyramid Fusion

The morphological pyramid construction scheme is applied to each input image. The process which generates each image from its predecessor is called a Pyramid Construction (PC) operation, which is known as a reducing operation, since both resolution and sampling density are decreased. After the PC operation is applied, the new morphological image is sampled to generate the next level of the pyramid. This process is repeated to construct two pyramids, one for the MR data and one for the CT data. As well as being flat, the structuring element, K, is also symmetric, and it is used at each level during the pyramid construction process. St is the sampling lattice corresponding to level t of the pyramid.

(c) Wavelet based fusion:

A mathematical tool developed originally in the field of signal processing can also be applied to fuse image data following the concept of multi-resolution analysis (MRA). Another application is the automatic geometric registration of images, one of the pre-requisites of pixel based image fusion. The wavelet transform creates a summation of elementary functions (wavelets) from arbitrary functions of finite energy. The weights assigned to the wavelets are the wavelet coefficients, which play an important role in determining structure characteristics at a certain scale in a certain location. The interpretation of structures or image details depends on the image scale, which is hierarchically compiled in a pyramid produced during the MRA.

The wavelet transform in the context of image fusion is used to describe differences between successive images provided by the MRA. Once the wavelet coefficients are determined for the two images of different spatial resolution, a transformation model can be derived to determine the missing wavelet coefficients of the lower resolution image. Using these it is possible to create a synthetic image from the lower resolution image at the higher spatial resolution. This image contains the preserved spectral information at the higher resolution, hence showing more spatial detail.

(i) Discrete Wavelet Transform Method: Wavelet transforms are multi-resolution image decomposition tools that provide a variety of channels representing the image features by different frequency sub-bands at multiple scales. The wavelet transform is a well-known technique for analysing signals.

When decomposition is performed, the approximation and detail components can be separated. The 2-D Discrete Wavelet Transform (DWT) converts the image from the spatial domain to the frequency domain. The image is divided by vertical and horizontal lines into the first order of the DWT, and the image can be separated into four parts: LL, LH, HL and HH.

The DWT of a signal x is calculated by passing it through a series of filters. First the samples are passed through a low pass filter with impulse response g, resulting in a convolution of the two:

y[n] = (x ∗ g)[n] = ∑_{k=−∞}^{∞} x[k] g[n − k]                (2.6)

The signal is also decomposed simultaneously using a high-pass filter h. The outputs give the detail coefficients (from the high-pass filter) and the approximation coefficients (from the low-pass filter). It is important that the two filters are related to each other; they are known as a quadrature mirror filter pair.

However, since half the frequencies of the signal have now been removed, half the samples can be discarded according to Nyquist's rule. The filter outputs are then sub-sampled by 2 (note that in Mallat's, and the common, notation the labels are the opposite: g is the high pass and h the low pass filter):

y_low[n] = ∑_{k=−∞}^{∞} x[k] g[2n − k]                (2.7)

y_high[n] = ∑_{k=−∞}^{∞} x[k] h[2n − k]                (2.8)

This decomposition has halved the time resolution, since only half of each filter output characterizes the signal. However, each output has half the frequency band of the input, so the frequency resolution has been doubled. The 2D multi resolution wavelet decomposition is shown in Figure 2.16.

Figure 2.16: 2D multi resolution wavelet decomposition

With the sub-sampling operator ↓k,

(y ↓ k)[n] = y[kn]                (2.9)

the above summation can be written more concisely.

y_low = (x ∗ g) ↓ 2

y_high = (x ∗ h) ↓ 2                (2.10)

However, computing a complete convolution x ∗ g with subsequent down-sampling would waste computation time. The lifting scheme is an optimization where these two computations are interleaved.
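Equations (2.6)–(2.10) can be demonstrated with the Haar filter pair, the simplest quadrature mirror filter. The sketch below is an illustration (the choice g = [1, 1]/√2, h = [1, −1]/√2 is an assumption): one analysis step filters and keeps every second sample, and the matching synthesis step recovers the signal exactly.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_analysis(x):
    """One level of Eqs. (2.7)-(2.8) with Haar filters: each output keeps
    half the samples, as allowed by Nyquist's rule after filtering."""
    x = np.asarray(x, float)
    even, odd = x[0::2], x[1::2]
    low = (even + odd) / SQRT2    # approximation coefficients (low pass g)
    high = (even - odd) / SQRT2   # detail coefficients (high pass h)
    return low, high

def haar_synthesis(low, high):
    """Inverse step: interleave the filtered pair, recovering x exactly."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / SQRT2
    x[1::2] = (low - high) / SQRT2
    return x
```

Cascading `haar_analysis` on the low output reproduces a multi-level filter bank, which is why the input length must be a multiple of 2^n for n levels.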

(ii) Cascading and Filter banks

This decomposition is repeated to further increase the frequency resolution: the approximation coefficients are decomposed with the high and low pass filters and then down-sampled. The three-level filter bank is shown in Figure 2.17.

Figure 2.17 Three-level one-dimensional DWT.

At each level in the above diagram the signal is decomposed into low and high frequencies. Due to the decomposition process the input signal length must be a multiple of 2^n, where n is the number of levels. The general framework for the DWT is shown in Figure 2.18.

M.Tech, Dept.of ECE,BCETFW,Kadapa Page 39


Implement of hybrid image fusion technique for feature enhancement in medical diagnosis

Figure 2.18: General framework for wavelet based image fusion

P_DWT(i, j) = ∑_i ∑_j (DWT(i, j))²                (2.11)

General process of image fusion using DWT:

Step 1: Implement the DWT on both input images to create the wavelet decompositions.

Step 2: Fuse each decomposition level using a different fusion rule.

Step 3: Carry out the inverse discrete wavelet transform on the fused decomposition levels to reconstruct the image; the reconstructed image is the fused image.
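The three steps can be sketched with a single-level 2-D Haar DWT; the Haar wavelet and the average/maximum fusion rules below are illustrative choices, not necessarily those used elsewhere in this work.

```python
import numpy as np

def dwt2_haar(img):
    """One-level 2-D Haar DWT: rows first, then columns, giving the
    LL, LH, HL and HH sub-bands described above."""
    img = np.asarray(img, float)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2          # row-wise low pass
    hi = (img[:, 0::2] - img[:, 1::2]) / 2          # row-wise high pass
    ll = (lo[0::2] + lo[1::2]) / 2; lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2; hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def idwt2_haar(ll, lh, hl, hh):
    """Inverse transform: undo the column step, then the row step."""
    h2, w2 = ll.shape
    lo = np.empty((2 * h2, w2)); hi = np.empty((2 * h2, w2))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((2 * h2, 2 * w2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def dwt_fuse(img1, img2):
    """Steps 1-3: decompose both images, average the approximation (LL),
    take the maximum-magnitude detail coefficients, then invert."""
    c1, c2 = dwt2_haar(img1), dwt2_haar(img2)
    fused = [(c1[0] + c2[0]) / 2]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return idwt2_haar(*fused)
```

The analysis/synthesis pair is exactly invertible for even image dimensions, so the fusion rules are the only place where information from the two inputs is combined.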

Multi directional transform based fusion:

1) Curvelet transform.
2) Contourlet transform.

1) Curvelet Transform: The curvelet transform is a non-adaptive technique for multi-scale object representation; it is also a multi-resolution decomposition technique. The 2D-FFT is applied to the images to obtain the Fourier samples. The Fourier samples are wrapped around the origin. Finally the image is reconstructed by performing the inverse FFT.

The main benefit of curvelets is their capability of representing a curve as a set of superimposed functions of various lengths and widths. The curvelet transform, like the wavelet transform, is a multi-scale transform but, unlike wavelets, it contains directional elements. Curvelets are based on multi-scale ridgelets with band pass filtering to separate the image into disjoint scales. The side length of the localizing windows is doubled at every other dyadic sub-band. The steps followed by the curvelet transform process are explained with the help of the flow diagram shown below in Figure 2.19. However, it gives limited directionality, i.e. 0°, 90°, 180°, 270°.

Figure 2.19 Flow diagram of curvelet transforms

2) Contourlet-based Fusion:

The distribution of the coefficients of the contourlet transform is related to the parameter n-levels given in the DFB stage decomposition, where n-levels is a one-dimensional vector. The parameter n-levels stores the decomposition level of each level of the pyramid for the DFB. If the decomposition level parameter is 0 for the DFB, the DFB will use the wavelet to process the sub-image of the pyramid. If the parameter is lj, the decomposition level of the DFB is 2^lj, which means that the sub-image is divided into 2^lj directions. Corresponding to the vector parameter n-levels, the coefficient Y of the contourlet decomposition is a vector too. The length of Y is equal to length(n-levels) + 1. Y{1} is the low frequency sub-image. Y{i} (i = 2, …, Len) is the directional sub-image obtained by DFB decomposition, where i denotes the i-th level of the pyramid decomposition.

Figure 2.20: General framework for contourlet based image fusion

Fusion methods based on contourlet analysis combine the decomposition coefficients of two or more source images using a certain fusion algorithm. Then, the inverse transform is performed on the combined coefficients, resulting in the fused image. A general scheme for contourlet-based fusion methods is shown in Figure 2.20, where Image 1 and Image 2 denote the input images, CT represents the contourlet transform, and Image F is the final fused image.

In this chapter the literature survey was given. In the next chapter the implementation of the existing method, the Wavelet-Curvelet transform, is presented.

CHAPTER 3

EXISTING FUSION METHOD

3.1 Hybrid Image Fusion Technique:

A single method of fusion may not be efficient, as it always lacks in one point or the other. Therefore there exists the need to develop a method which takes into consideration the advantages of various different fusion rules. Thus hybrid image fusion is used. It performs processing of the image based upon the different fusion rules and then integrates these results together to obtain a single image. The results of the various fusion techniques are extracted and then fused again by implementing a hybrid method, presenting better quality results. A single method may not effectively remove the ringing artifacts and the noise in the source images. These inadequacies result in the development of fusion rules which follow a hybrid algorithm and improve to a great extent the visual quality of the image. Hybrid image fusion therefore leads to a minimum Mean Square Error (MSE) value and a maximum Signal to Noise Ratio (SNR) value.
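The two quality measures named above can be computed directly; a minimal sketch, assuming 8-bit images so that the peak value defaults to 255:

```python
import numpy as np

def mse(ref, img):
    """Mean Square Error between a reference image and a fused image."""
    ref, img = np.asarray(ref, float), np.asarray(img, float)
    return float(np.mean((ref - img) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; lower MSE gives higher PSNR."""
    e = mse(ref, img)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```

A better hybrid fusion result shows up as a smaller `mse` and a larger `psnr` against the reference.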

3.2 Wavelet-Curvelet Fusion Technique:

The existing method is a hybrid of two methods, wavelet based image fusion and curvelet based image fusion (a hybrid of wavelet and curvelet fusion rules). Curvelet based image fusion efficiently deals with curved shapes; therefore its application in medical fields gives better fusion results than those obtained using the wavelet transform alone.

On the other hand, the wavelet transform works efficiently with multi-focus and multispectral images compared to any other fusion rule. It increases the frequency resolution of the image by decomposing it into various bands repeatedly until different frequencies and resolutions are obtained. Thus a hybrid of wavelet and curvelet leads to better results compared to previously existing methods. The flow diagram of the existing method, which is the combination of the wavelet and curvelet transformations, is shown in Figure 3.1.

The flow diagram shows the procedure of combining image 1 and image 2 into a single set of fused wavelet coefficients. The bands obtained are then passed through the curvelet transform, which segments each into various additive components, each of which is a sub-band of the image. These bands are then passed through a tiling operation which divides each band into overlapping tiles.

Figure 3.1 Hybrid fusion of Wavelet-Curvelet transforms

A hybrid of wavelet and curvelet integrates various pixel level rules in a single fused image. Pixel based rules operate on individual pixels in the image but ignore some important details such as edges and boundaries. A wavelet based rule alone may reduce the contrast in some images and cannot effectively remove the ringing effects and noise appearing in the source images. The curvelet method works well with edges, boundaries and curved portions of images using ridgelet transforms.

In the hybrid method, the input images are first decomposed up to level N by passing them through a series of low and high pass filters; this gives the various frequency bands at different resolutions. The low and high pass bands are then subjected to the curvelet transform by decomposing them further into small tiles; these are fused, and the inverse wavelet transform is applied to get full size images. This takes into account the drawbacks of the wavelet transform, effectively removes them using the curvelet transform, and improves the visual quality of the image. In Chapter 4 the operation of the wavelet and curvelet transforms is discussed in detail, where it becomes clear how the medical images are fused using these transformations.

3.3 Algorithm for DWT-Contourlet Transform:

Step 1: Consider two source images.

Step 2: These images are subjected to pre-processing, which includes RGB to gray scale conversion, and image alignment is ensured.

Step 3: The images obtained from Step 2 are first decomposed using the Discrete Wavelet Transform (DWT).

Step 4: We get the fused image in the wavelet domain using the following fusion rules:

1) Fuse the approximate coefficients of the source images using the average method.
2) Fuse the detail coefficients of the source images using the maxima method.

Step 5: Apply the inverse DWT to reconstruct the fused image in the spatial domain.

Step 6: Apply the contourlet transform on the fused image obtained in Step 5.

Step 7: Implement Step 4 again.

Step 8: To get the final hybrid fused image in the spatial domain, apply the inverse contourlet transform.

3.4 Limitation in Existing Method:

The hybrid image fusion of the wavelet-curvelet transform provides better fusion results than the earlier spatial domain transformation techniques, as already discussed in Chapter 2. The existing method can efficiently capture curved information and also provides multi-resolution, but it fails to provide the directionality and anisotropy information which is very important in medical diagnosis. To overcome this drawback, Chapter 4 discusses two proposed hybrid multi modal medical image fusion methods using different combinations of the curvelet, wavelet and contourlet transformations, in order to arrive at the combination which yields the best fused image and thus the best diagnosis.

In this chapter the implementation of the existing method, the Wavelet-Curvelet transform, was explained. In the next chapter the implementations of the proposed methods, the Wavelet-Contourlet and Curvelet-Contourlet transforms, are presented.

CHAPTER 4

PROPOSED METHODOLOGIES

4.1 Proposed Method-1(Wavelet-Contourlet):

The proposed method-1 is the hybrid combination of the discrete wavelet and contourlet transforms, where the limitation of directionality in the wavelet transform is overcome by the contourlet transform.

4.1.1 Pre-processing Stage:

It is necessary to align or register the source images as perfectly as possible prior to the main fusion process in order to produce the best fusion results. The image registration process can also be called image alignment. If the datasets of the input images are not aligned to each other, it is impossible to yield the best fusion results even if the fusion framework, scheme and algorithm are optimum.

4.1.2 Decomposition -1:

After pre-processing, these source images are first decomposed using the Wavelet Transform (WT) in order to realize multi-scale sub-band decompositions with no redundancy. These sub-band coefficients are predominantly the low and high frequency sub-bands of the image. The approximation and detail coefficients obtained after application of an appropriate fusion rule are then reconstructed using the inverse DWT. The entire process carried out in this stage serves to provide significant localization, leading to a better preservation of features in the fused image.

4.1.3 Decomposition -2:

After reconstruction at Stage 1, the fusion algorithm is applied again at Stage 2 in the contourlet domain. The significance of such an approach is to overcome the limitation of directionality in wavelets (in Stage 1). The contourlet transform (CT) is applied in order to achieve angular decompositions.

After applying sub-band decomposition using the CT, a set of coefficients is obtained for both images. These frequency coefficients are fused together based on certain fusion algorithms and reconstructed using the inverse CT. The schematic diagram of our proposed methodology, hybrid image fusion using the wavelet-contourlet transform, is shown in Figure 4.1.

Contourlets offer a high degree of directionality and anisotropy, whereas the wavelet transform works efficiently with multi-focus and multispectral images compared to any other fusion rule. It increases the frequency resolution of the image by decomposing it into various bands repeatedly until different frequencies and resolutions are obtained. Thus a hybrid of wavelet and contourlet leads to better results that can be used for medical diagnosis, compared with the existing methods, which are the individual results of the wavelet and contourlet transforms.

Figure 4.1 Schematic of the proposed fusion method-1

4.2 General Image Fusion Based on DWT:

The DWT comes under the classification of multi-scale decomposition. It is used to map the wavelet transform to the digital world: filter banks are used to approximate the behaviour of the continuous wavelet transform, and the coefficients of these filters are computed using mathematical analysis. A double channel filter bank is used in the discrete wavelet transform (DWT). The general block diagram is shown below in Figure 4.2.

Figure 4.2: General block diagram of DWT

The wavelet transform is used to identify local features in an image. It is also used for the decomposition of two dimensional (2D) signals, such as a 2D gray-scale image, for multi-resolution analysis. The available filter banks decompose the image into two different components, i.e. high and low frequency. When decomposition is carried out, the approximation and detail components can be separated. The 2-D Discrete Wavelet Transform (DWT) converts the image from the spatial domain to the transform domain. The image is divided by vertical and horizontal lines into the first order of the DWT, and the image can be separated into four parts: LL1, LH1, HL1 and HH1.

4.2.1 Wavelet Decomposition of Images:


The wavelet transform separately filters and down-samples the 2-D data (image) in the vertical and horizontal directions (separable filter bank). The input (source) image I(x, y) is filtered by a low-pass filter L and a high-pass filter H in the horizontal direction and then down-sampled by a factor of two (keeping every alternate sample) to create the coefficient matrices IL(x, y) and IH(x, y).

The coefficient matrices IL(x, y) and IH(x, y) are both low-pass and high-pass filtered in the vertical direction and down-sampled by a factor of two to create the sub-bands (sub-images) ILL(x, y), ILH(x, y), IHL(x, y), and IHH(x, y). Wavelet decomposition can be implemented by the two-channel filter bank shown in Figure 4.3.

Figure 4.3: Two level WT decomposition

Keep 1 column out of 2 (down-sampling along columns); keep 1 row out of 2 (down-sampling along rows).

The discrete wavelet transform has the property that the spatial resolution is small in the low-frequency bands but large in the high-frequency bands, because the scaling function is treated as a low-pass filter and the mother wavelet as a high-pass filter in the DWT implementation. The wavelet decomposition and reconstruction proceed row- and column-wise: decomposition is first performed row by row and then column by column, as shown in Figure 4.4.

Figure 4.4 Image decomposition using DWT

The ILL(x, y) sub-band is the original image at the coarser resolution level, which can be considered as a smoothed and sub-sampled version of the original image. Most of the information of the source images is kept in this low-frequency sub-band, which usually contains the slowly varying grey-value information of an image and is therefore called the approximation.

The ILH(x, y), IHL(x, y) and IHH(x, y) sub-bands contain the detail coefficients of an image; their large absolute values correspond to sharp intensity changes and preserve the salient information in the image.

Legend: 1, 2, 3 denote decomposition levels; H denotes high-frequency bands; L denotes low-frequency bands.

Figure 4.5 Different levels of decomposition

There are different levels of decomposition, as shown in Figure 4.5. After one level of decomposition there are four frequency bands, as listed above. By recursively applying the same scheme to the LL sub-band, a multi-resolution decomposition to any desired level can be achieved.
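This recursion on the LL band can be made concrete with a small pure-Python sketch (illustrative only; the function name is our own) that tabulates the sub-band sizes per level:

```python
def decomposition_shapes(size, levels):
    """Sub-band sizes for an N-level dyadic decomposition of a size x size image.

    Each level halves the current LL band and emits three detail bands at
    that scale, so N levels yield 3N detail bands plus one final LL band.
    """
    bands = []
    for _ in range(levels):
        size //= 2                                    # each level halves the LL band
        bands += [("LH", size), ("HL", size), ("HH", size)]
    return [("LL", size)] + bands                     # one LL band + 3N detail bands
```

For a 256 x 256 image and three levels this lists ten bands (3N+1 with N = 3), with the final LL band of size 32 x 32.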

The schematic diagram of wavelet-based image fusion is shown in Figure 4.6. In the wavelet fusion scheme, the two registered source images I1(X1, X2) and I2(X1, X2) are decomposed into approximation and detail coefficients at the required level using the DWT. This can be represented by the following equation:

I(X1, X2) = {W(I1(X1, X2)), W(I2(X1, X2))} (4.1)

where W is the wavelet transform operator. The wavelet coefficients are fused using a fusion rule, and the IDWT is applied to the fused coefficients to obtain the fused image If(X1, X2):

If(X1, X2) = W−1(φ(W(I1(X1, X2)), W(I2(X1, X2)))) (4.2)

where W−1 is the inverse discrete wavelet transform operator and φ is the fusion operator. There are several wavelet fusion rules that can be used to select the wavelet coefficients from the transforms of the images to be fused. The most frequently used is the maximum frequency rule, which selects the coefficients with the maximum absolute values. The wavelet transform concentrates on representing the image at multiple scales and is appropriate for representing linear edges. The multi-level image fusion using DWT is shown in Figure 4.6.

Figure 4.6 Multi-level image fusion using DWT

The fusion rule used in this work simply averages the approximation coefficients and picks the detail coefficient with the largest magnitude in each sub-band. An N-level decomposition finally yields 3N+1 different frequency bands: 3N high-frequency bands and one LL band. The decomposition is carried out until the desired resolution is reached, which depends on the ratio of the spatial resolutions of the source images.

4.2.2 Procedure for Implementation of Wavelet Transform:

Step 1: The images to be fused must be registered to assure that the corresponding
pixels are aligned.

Step 2: These images are decomposed into wavelet transformed images, respectively,
based on wavelet transformation. The transformed images with K -level
decomposition will finally have 3K+1 different frequency bands, which
include one low-frequency portion (ILL) and 3K high-frequency portions
(low-high bands, high-low bands, and high-high bands).

Step 3: The transform coefficients of the different portions or bands are combined according to a certain fusion rule.

a) Fuse approximate coefficients of source image using average method.


b) Fuse detail coefficients of source image using Maxima method.

Step 4: The fused image in the spatial domain is constructed by performing an inverse wavelet transform based on the combined transform coefficients from Step 3.
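The four steps above can be sketched end-to-end (a hedged pure-Python illustration, not the thesis implementation: a single decomposition level, with an averaging/differencing Haar pair standing in for the analysis and synthesis filters, and our own function names):

```python
def haar2(img):
    """Step 2 (compact): one-level separable analysis -> (LL, LH, HL, HH)."""
    def filt(m):
        lo = [[(r[i] + r[i + 1]) / 2 for i in range(0, len(r), 2)] for r in m]
        hi = [[(r[i] - r[i + 1]) / 2 for i in range(0, len(r), 2)] for r in m]
        return lo, hi
    T = lambda m: [list(c) for c in zip(*m)]
    L, H = filt(img)                 # horizontal pass
    LLt, LHt = filt(T(L))            # vertical pass on the low band
    HLt, HHt = filt(T(H))            # vertical pass on the high band
    return T(LLt), T(LHt), T(HLt), T(HHt)

def ihaar2(LL, LH, HL, HH):
    """Step 4: inverse transform.  a=(x+y)/2, d=(x-y)/2  =>  x=a+d, y=a-d."""
    T = lambda m: [list(c) for c in zip(*m)]
    def ifilt(lo, hi):
        return [[v for s, d in zip(a, b) for v in (s + d, s - d)]
                for a, b in zip(lo, hi)]
    L = T(ifilt(T(LL), T(LH)))
    H = T(ifilt(T(HL), T(HH)))
    return ifilt(L, H)

def fuse_dwt(imgA, imgB):
    """Step 3: average the approximation bands (rule a) and pick the
    maximum-magnitude detail coefficient (rule b), then reconstruct."""
    a, b = haar2(imgA), haar2(imgB)
    avg = [[(x + y) / 2 for x, y in zip(ra, rb)] for ra, rb in zip(a[0], b[0])]
    pick = lambda sa, sb: [[x if abs(x) >= abs(y) else y
                            for x, y in zip(ra, rb)] for ra, rb in zip(sa, sb)]
    return ihaar2(avg, *(pick(a[k], b[k]) for k in (1, 2, 3)))
```

Fusing an image with itself returns the image unchanged, a quick sanity check that the analysis/synthesis pair reconstructs perfectly.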

Advantages of wavelet transform:

1) Time and frequency Localization.


2) Provides Multi-scale decomposition.

Limitations of wavelet transform:

1) Poor representation of curved singularities and absence of anisotropic elements.

2) Limited orientation selectivity (only vertical, horizontal and diagonal).

4.3 General Image Fusion Based on Contourlet Transforms:

The contourlet transform is a two-dimensional transform method for image representation. The contourlet transform has the following properties:

1) Multi-resolution: The representation should allow images to be successively approximated, from coarse to fine resolutions.
2) Localization: The basic elements in the representation should be localized in both
the spatial and the frequency domains.
3) Critical sampling: For some applications (e.g., compression), the representation
should form a basis, or a frame with small redundancy.
4) Directionality: The representation should contain basis elements oriented at a
variety of directions, much more than the few directions that are offered by
separable wavelets.
5) Anisotropy: To capture smooth contours in images, the representation should
contain basis elements using a variety of elongated shapes with different aspect
ratios.

Need for contourlet transform: Among these desiderata, the first three are
successfully provided by separable wavelets, while the last two require new
constructions. Moreover, a major challenge in capturing geometry and directionality
in images comes from the discrete nature of the data. For this reason we construct
multi-resolution and multi-direction image expansion using non-separable filter
banks.

Figure 4.7: General block diagram of contourlet

This new construction results in a flexible multi-resolution, local, and directional image expansion using contour segments, and thus it is named the
contourlet transform. It is of interest to study the limit behaviour when such schemes
are iterated over scale and/or direction, which has been analyzed in the connection
between filter banks, their iteration, and the associated wavelet construction. The
general block diagram is shown below in Figure 4.7. The contours of original images
can be captured effectively with a few coefficients by using Contourlet transform.

The contourlet transform provides multi-scale and multi-directional decomposition. It consists of two stages:

 Laplacian Pyramid
 Directional Filter Bank

The overall result is an image expansion using basic elements like contour segments, and thus it is named the contourlet transform. In particular, contourlets have elongated supports at various scales, directions and aspect ratios, which allows them to efficiently approximate a smooth contour at multiple resolutions.

4.3.1 Laplacian Pyramid: In the Laplacian pyramid, the image is first decomposed into four sub-images, and the point discontinuities in those images are captured. Figure 4.8 shows the general representation of the Laplacian pyramid.

Figure 4.8: Block diagram of Laplacian Pyramid

It is similar to wavelet decomposition. In each scaling step the image is first decomposed into four sub-images, separating the low-frequency and high-frequency components. The LP decomposition at each level generates a down-sampled low-pass version of the original and the difference between the original and the prediction, resulting in a band-pass image, as shown in Figure 4.9.

Figure 4.9: Detailed view of contourlet transform

In the LP, at each stage the original image is filtered with a Gaussian filter and down-sampled to obtain a low-pass version of the original image (a blurred image); this low-pass image is then up-sampled and subtracted from the original image to obtain the band-pass image (the high-frequency components). This band-pass image is applied to the directional filter bank to obtain directional information. The decomposition process in the Laplacian pyramid is implemented as shown in Figure 4.10.
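One LP level can be sketched in Python as follows (a minimal, hypothetical version: a 2 x 2 box average stands in for the Gaussian filter, and nearest-neighbour expansion stands in for the up-sampling/prediction step):

```python
def lp_level(img):
    """One Laplacian-pyramid level: low-pass + down-sample, then predict by
    up-sampling and subtract to obtain the band-pass (detail) image."""
    n = len(img)
    # low-pass and down-sample: 2x2 box filter standing in for the Gaussian
    low = [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1]
             + img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4
            for j in range(n // 2)] for i in range(n // 2)]
    # prediction: nearest-neighbour up-sampling of the low-pass image
    up = [[low[i // 2][j // 2] for j in range(n)] for i in range(n)]
    # band-pass = original - prediction (this is what feeds the DFB)
    band = [[img[i][j] - up[i][j] for j in range(n)] for i in range(n)]
    return low, band
```

Adding the prediction back to the band-pass image recovers the original exactly; the LP is redundant, unlike a critically sampled wavelet, but trivially invertible.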

Figure 4.10: Laplacian Pyramid decomposition process

Through this process the Laplacian pyramid separates the low-frequency and high-frequency components. The obtained low-frequency (scaled) sub-band image is further decomposed to obtain the desired fused image.
4.3.2 Directional Filter Bank:
1) The high-frequency components are given to the directional filter bank, which links point discontinuities into linear structures.

Figure 4.11: Frequency partitioning in DFB

2) The high-pass sub-band images are applied to the directional filter bank to further decompose the frequency spectrum, using an n-level iterated tree-structured filter bank as shown in Figure 4.11.
3) By doing this we capture smooth contours and edges at any orientation, and finally combine them with the scale information. Since the directional filter bank (DFB) was designed to capture the high-frequency content (representing directionality) of the input image, the low-frequency content is poorly handled.

Figure 4.12: N-level iterated structured filter bank

4) In fact, with this frequency partition, low frequencies would “leak” into several directional sub-bands; hence the DFB alone does not provide a sparse representation for images. This provides another reason to combine the DFB with a multi-scale decomposition, as shown in Figure 4.12, where the low frequencies of the input image are removed before applying the DFB.

4.4 Fusion Framework:

The fusion framework used in the experiments is shown in Figure 4.13. First,
source images are decomposed into multi-scale and multi-directional components
using contourlet transform, and these components are fused together based on a
certain fusion scheme. Next, inverse contourlet transform is performed in order to
obtain a final fused image.

Figure 4.13 Fusion framework

4.4.1 Fusion Scheme:

The source images are fused according to the fusion scheme and fusion rule that are
described as follows in Figure 4.14.

The source images are decomposed using the contourlet transform in order to obtain multi-scale and multi-directional frequency coefficients. For a decomposition level K, one approximation sub-band and 3K detail sub-bands are produced. In our experiments, a decomposition level of 3 was used, since levels beyond 3 significantly degraded the fusion performance.

Figure 4.14: Fusion scheme

Once the source images are decomposed, the high-frequency components are selected from the PAN source image and injected into the detail sub-bands of the MS source image via the maximum frequency fusion rule, which compares and selects the frequency coefficient with the highest absolute value at each pixel.

4.5 Flow Chart of proposed method-I:

Figure 4.15: Flow diagram of proposed method-1

4.6 Algorithm for DWT-Contourlet Transform:

Step 1: Consider two source images.

Step 2: These images are subjected to pre-processing, which includes RGB to grey-scale conversion and image alignment (registration).

Step 3: The images obtained from step-2 are first decomposed using the Discrete Wavelet Transform (DWT).

Step 4: We obtain the fused image in the wavelet domain using the following fusion rules.

1) Fuse approximate coefficients of source image using average method.


2) Fuse detail coefficients of source image using Maxima method.

Step 5: Apply inverse DWT to reconstruct fused image in spatial domain.

Step 6: Apply contourlet transform on the fused image which is obtained in step-5.

Step 7: Implement step-4 again.

Step 8: To get final hybrid fused image in spatial domain we apply inverse contourlet
transform.

4.7 Proposed Method-2 (Curvelet-Contourlet)

The proposed method-2 is the hybrid combination of the curvelet and contourlet transforms, in which the limited directionality of the curvelet transform is overcome by the contourlet transform.

4.7.1 Pre-processing Stage:

Source images must be aligned or registered as perfectly as possible prior to the main fusion process in order to produce the best fusion results. The image registration process can also be called image alignment. If the input datasets are not aligned to each other, it is impossible to obtain the best fusion results, even when the fusion framework, scheme and algorithm are optimal.

4.7.2 Decomposition -1:

After pre-processing, these source images are first decomposed using the curvelet transform in order to isolate the different frequency components of the image into different planes, without down-sampling as in the traditional wavelet transform. The curvelet transform deals efficiently with curved shapes; therefore its application in medical fields yields better fusion results than the wavelet transform alone. The curvelet method works well with edges, boundaries and curved portions of images by using the ridgelet transform. The obtained approximation and detail coefficients, after application of an appropriate fusion rule, are reconstructed using the inverse curvelet transform. The entire process carried out in this stage provides significant localization, leading to better preservation of features in the fused image.

4.7.3 Decomposition -2:

After reconstruction at stage-1, the fusion algorithm is again applied at stage-2 in the contourlet domain. The significance of such an approach is to overcome the limited directionality of the curvelet transform used in stage-1. The contourlet transform (CT) is applied in order to achieve angular decompositions. After applying sub-band decomposition using the CT, a set of coefficients is obtained for both images.

These frequency coefficients are fused together based on certain fusion algorithms and then reconstructed using the inverse CT. The schematic diagram of the proposed methodology, the hybrid image fusion using the curvelet-contourlet transform, is shown in Figure 4.16.

The contourlet transform (CT) works with a two-dimensional multi-scale and directional filter bank (DFB). In addition, the CT uses an iterated filter bank, which makes it computationally efficient. The DFB creates a nearly perfect directional basis for discrete signals, addressing a major drawback of the wavelet transform.

Figure 4.16: Schematic of the proposed fusion method 2

The proposed algorithm is a hybrid of the contourlet and curvelet transforms. Both have their own features and limitations: the contourlet transform offers a high degree of directionality and anisotropy, whereas the curvelet transform is effective for images containing bounded curves, as it provides smoothing of curves. A hybrid of the contourlet and curvelet transforms therefore leads to better results for the fusion of medical images when compared with the previous methods, i.e. wavelet-curvelet and proposed method-1.

4.8 General Image Fusion Based on Curvelet Transform:

The curvelet transform evolved as a tool for the representation of curved shapes in graphical applications, edge detection and image de-noising.

Need for curvelet: The wavelet transform concentrates on representing the image at multiple scales and is appropriate for representing linear edges. For curved edges, the accuracy of edge localization in the wavelet transform is low, so there is a need for an alternative approach with a high accuracy of curve localization, such as the curvelet transform.
4.8.1 Image Fusion by Discrete Curvelet Transform Method:

The curvelet transform can be performed in three main steps, as shown in Figure 4.17:

1) Sub-band filtering

2) Tiling

3) Ridgelet transform

Figure 4.17: General diagram of curvelet

1. Sub-band filtering: The purpose of this step is to decompose the image into additive components, each of which is a sub-band of that image. This step isolates the different frequency components of the image into different planes without down-sampling, as in the traditional wavelet transform.

Each layer contains details of different frequencies: P0 denotes the low-pass filter, and P1, P2, P3, etc. are the band-pass (high-pass) filters. The original image can therefore be reconstructed from the sub-bands by the following equation:

f = P0(P0 f) + Σs Ps(Ps f) (4.3)
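The additive-planes idea behind Eq. (4.3) can be illustrated with a toy one-dimensional Python sketch (hedged: a 3-tap box smoother stands in for the P0/Ps filters, and the difference-of-smoothings split below is not the exact curvelet filter frame):

```python
def subband_split(f, levels=2):
    """Additive sub-band filtering without down-sampling.

    Each band-pass plane is the difference between consecutive smoothings,
    so the signal is recovered exactly as low + sum(bands).
    """
    def smooth(x):
        # 3-tap moving average with replicated borders
        n = len(x)
        return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3
                for i in range(n)]

    bands, cur = [], list(f)
    for _ in range(levels):
        nxt = smooth(cur)
        bands.append([a - b for a, b in zip(cur, nxt)])  # band-pass plane
        cur = nxt                                        # remaining low-pass
    return cur, bands
```

Because the split telescopes, the low-pass plane plus all band-pass planes reconstructs the input exactly, mirroring the role of the sum in Eq. (4.3).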

2. Tiling: Tiling is the process by which the image is divided into overlapping tiles. Following the sub-band decomposition, each sub-band-filtered image is partitioned into blocks of N×N (N blocks in the horizontal direction and N blocks in the vertical direction). These tiles are small in dimension, so that curved lines are transformed into small, nearly straight lines in the sub-bands P1 and P2. The tiling improves the ability of the curvelet transform to handle curved edges, as observed in Figure 4.18.
3. Renormalization: Renormalization is nothing but centering each dyadic square to the
unit square [0, 1]x[0, 1].
4. Ridgelet analysis: Before the ridgelet transform we need to perform the ridgelet tiling. The renormalized ridges have an aspect ratio of width = length²; these ridges can then be encoded efficiently using the ridgelet transform.

Figure 4.18: Few steps of Image Fusion procedure using DCT

5. Ridgelet transform: This can be viewed as a wavelet analysis in the Radon domain. The transform is primarily a tool for ridge detection or shape detection of the objects in an image. The ridgelet transform divides the frequency domain into dyadic coronae; it samples the s-th corona at least 2^s times in the angular direction, whereas in the radial direction it samples using local wavelets, as shown in Figure 4.19.

Figure 4.19: Ridgelet transform within tiling
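The tiling step can be sketched as a simple block partition in Python (hedged: non-overlapping blocks for brevity, whereas the tiling described above overlaps adjacent tiles; the function name is our own):

```python
def tile(img, n):
    """Partition a sub-band image into a grid of n x n tiles.

    Within a small tile a curved edge is nearly straight, which is exactly
    what the subsequent ridgelet stage exploits.
    """
    size = len(img)
    return [[[row[c:c + n] for row in img[r:r + n]]   # one n x n tile
             for c in range(0, size, n)]              # tile columns
            for r in range(0, size, n)]               # tile rows
```

For a 4 x 4 image and n = 2 this yields a 2 x 2 grid of 2 x 2 tiles; a production curvelet implementation would additionally overlap and window the tiles.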

4.8.2 Procedure for Curvelet Transform:

1) Perform FFT on the original image.


2) Divide the FFT into a collection of tiles.
3) For each tile, apply:

Figure 4.20: Detailed diagram of curvelet


a) Translate tile to the origin.
b) Wrap parallelogram shaped support of tile around the rectangle with centre as
the origin as shown in Figure 4.20.
c) Take the inverse FFT of the wrapped support.
d) Add the curvelet array to the collection of curvelet coefficients.
Merit: Captures curved edges more efficiently than the DWT, as shown in Figure 4.21.

Figure 4.21: Comparison of DWT and Curvelet

Demerits of curvelet transform:

1) Not applicable for discrete domains


2) Shift variant

3) The curvelet transform provides limited directionality, so it handles smooth contours at different orientations poorly.

The curvelet transform was developed initially in the continuous domain via multi-scale filtering followed by a block ridgelet transform on each band-pass image. Later, the second-generation curvelet transform was proposed, defined directly via frequency partitioning without using the ridgelet transform. Both curvelet constructions require a rotation operation and correspond to a 2-D frequency partition based on polar coordinates. This makes the curvelet construction simple in the continuous domain but causes difficulty in implementation for discrete images; for this reason we adopt the contourlet transform.

The contourlet transform has already been discussed in detail.

4.9 Flow Chart of proposed method-2

Figure 4.22: Flow diagram of proposed method-2

4.10 Algorithm for Curvelet-Contourlet Transform:

Step 1: Consider two source images.

Step 2: These images are subjected to pre-processing, which includes RGB to grey-scale conversion and image alignment (registration).

Step 3: The images obtained from step-2 are first decomposed using the curvelet transform to estimate the coefficients. The curvelet transform has four stages:

1. Sub-band decomposition
2. Smooth Partitioning
3. Renormalization
4. Ridgelet Analysis

Step 4: We obtain the fused image in the curvelet domain using the following fusion rules.

1) Fuse approximate coefficients of source image using average method.


2) Fuse detail coefficients of source image using Maxima method.

Step 5: Reconstruct the fused image in the spatial domain (via the inverse curvelet transform).

Step 6: Apply contourlet transform on the fused image which is obtained in step-5.

Step 7: Implement step-4 again.

Step 8: To get final hybrid fused image in spatial domain we apply inverse contourlet
transform.

4.11 Introduction to Matlab:

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. MATLAB stands for matrix laboratory, and was written originally to provide easy access to matrix software developed by the LINPACK (linear system package) and EISPACK (Eigen system package) projects. MATLAB is therefore built on a foundation of sophisticated matrix software in which the basic element is an array that does not require pre-dimensioning, which allows many technical computing problems, especially those with matrix and vector formulations, to be solved in a fraction of the time.

MATLAB features a family of application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow learning and applying specialized technology. These are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, curvelets and contourlets, simulation and many others.

Basic Building Blocks of MATLAB:

The basic building block of MATLAB is the matrix. The fundamental data type is the array; vectors, scalars, and real and complex matrices are handled as specific classes of this basic data type. The built-in functions are optimized for vector operations, and no dimension statements are required for vectors or arrays.

MATLAB Windows: MATLAB works with several windows: the command window, workspace window, current directory window, command history window, editor window, graphics window and online-help window.

Command Window: The command window is where the user types MATLAB
commands and expressions at the prompt (>>) and where the output of those
commands is displayed. It is opened when the application program is launched. All
commands including user-written programs are typed in this window at MATLAB
prompt for execution.

Work Space Window: MATLAB defines the workspace as the set of variables that
the user creates in a work session. The workspace browser shows these variables and
some information about them. Double clicking on a variable in the workspace
browser launches the Array Editor, which can be used to obtain information.

Current Directory Window: The current Directory tab shows the contents of the
current directory, whose path is shown in the current directory window. For example,
in the windows operating system the path might be as follows: C:\MATLAB\Work,
indicating that directory “work” is a subdirectory of the main directory “MATLAB”;
which is installed in drive C. Clicking on the arrow in the current directory window
shows a list of recently used paths.

MATLAB uses a search path to find M-files and other MATLAB related files.
Any file run in MATLAB must reside in the current directory or in a directory that is
on search path.

Command History Window: The Command History Window contains a record of


the commands a user has entered in the command window, including both current and

previous MATLAB sessions. Previously entered MATLAB commands can be selected and re-executed from the command history window by right-clicking on a command or sequence of commands. This is useful for selecting various options in addition to executing the commands, and is a useful feature when experimenting with various commands in a work session.

Editor Window: The MATLAB editor is both a text editor specialized for creating
M-files and a graphical MATLAB debugger. The editor can appear in a window by
itself, or it can be a sub window in the desktop. In this window one can write, edit,
create and save programs in files called M-files.

MATLAB editor window has numerous pull-down menus for tasks such as
saving, viewing, and debugging files. Because it performs some simple checks and
also uses color to differentiate between various elements of code, this text editor is
recommended as the tool of choice for writing and editing M-functions.

Graphics or Figure Window: The output of all graphic commands typed in the
command window is seen in this window.

Online Help Window: MATLAB provides online help for all its built-in functions
and programming language constructs. The principal way to get help online is to use
the MATLAB help browser, opened as a separate window either by clicking on the
question mark symbol (?) on the desktop toolbar, or by typing help browser at the
prompt in the command window.

The help Browser is a web browser integrated into the MATLAB desktop that
displays a Hypertext Mark-up Language (HTML) documents. The Help Browser
consists of two panes, the help navigator pane, used to find information, and the

display pane, used to view the information. Self-explanatory tabs other than navigator
pane are used to perform a search.

MATLAB Commands Used in Our Project:

Clc: This command clears the command window.

Syntax: clc

Char: This command converts to character array.

Syntax: char (array)

Uigetfile: Displays a modal dialog box that lists files in the current directory and
enables the user to select or type the name of a file to be opened. If the filename is
valid and if the file exists, uigetfile returns the filename when the user clicks Open.

Syntax: filename = uigetfile(FilterSpec, DialogTitle, DefaultName)

Imread: This command reads the image from the file specified by filename with the
standard file extension indicated by file type as given below:

Syntax: A=imread (‘filename.filetype’)

Imshow: This command displays image A in a Handle Graphics figure, where A is a greyscale, RGB (true color), or binary image. For binary images, imshow displays pixels with the value 0 (zero) as black and 1 as white.

Syntax: imshow (A)

Imresize: This command resizes an image of any type using the specified interpolation method.

Syntax: B = imresize (A, scale)

Figure: This command creates a figure graphical object.

Syntax: figure(I)

Im2double: This command converts an image to double precision.

Syntax: im2double(image)

Zeros: This command creates array of all zeros.

Syntax: zeros(n)

Ones: This command creates array of all ones.

Syntax: ones(n)

Size: This command returns the size of each dimension of array x in a vector d of length ndims(x).

Syntax: d=size(x)

Rgb2gray: This command converts RGB image or colourmap to grayscale.

Syntax: i=rgb2gray(i)

Imfuse: This command creates a composite of two images A and B.

Syntax: imfuse(A,B)

In this chapter, the implementation of the proposed wavelet-contourlet and curvelet-contourlet methods was explained. In the next chapter, simulation results and comparisons of the performance metrics of the various proposed methods with the existing method are presented.

CHAPTER 5

RESULTS AND ANALYSIS

5.1 Simulation Results of DWT – Contourlet Transform:

In this section we discuss the simulation results of the hybrid Discrete Wavelet Transform and Contourlet Transform for various medical images obtained from different modalities.

Example 1: Fusion of CT and MRI images (size 256 × 256) of a brain with a tumour

In Figure 5.1 we considered CT and MRI images of a brain with a tumour. The first input (source) image is obtained from a CT scan of the brain; CT provides more accurate information about calcium deposits, air, bones, and any blockages. The second input image, considered as the reference, is obtained from an MRI scan of the brain; MRI provides information about the nervous system, soft tissues and muscles. We applied the Discrete Wavelet Transform (DWT) to the images, followed by the contourlet transform.

Figure 5.1: Hybrid DWT-contourlet fusion of brain tumour

We fuse these images using the appropriate fusion rules already discussed in previous chapters. The result obtained from the contourlet transform is in the frequency domain. To reconstruct the final hybrid fused image in the spatial domain, we apply the inverse wavelet transform followed by the inverse contourlet transform.
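The pipeline above (decompose, fuse coefficients, reconstruct) is implemented in MATLAB; to illustrate the core idea, here is a hedged single-level Haar wavelet fusion sketch in Python/NumPy. The Haar filter and the fusion rules (average the approximation band, keep the larger-magnitude detail coefficients) stand in for the full DWT-contourlet chain and are not the thesis code:

```python
import numpy as np

def haar2(x):
    """Single-level 2-D Haar decomposition (orthonormal, even-sized input)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # approximation band
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2: perfect reconstruction."""
    m, n = ll.shape
    x = np.empty((2 * m, 2 * n))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x

def fuse(img1, img2):
    """Average the approximations, keep the larger-magnitude details."""
    c1, c2 = haar2(img1), haar2(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2(ll, *details)

rng = np.random.default_rng(0)
ct = rng.random((8, 8))
fused = fuse(ct, ct)   # fusing an image with itself must return it unchanged
print(np.allclose(fused, ct))
```

Because the Haar transform has perfect reconstruction, fusing an image with itself recovers it exactly, which is a convenient sanity check for any coefficient-domain fusion rule.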

Example 2: Fusion of MRI and PET: Image Size [256 X 256] for FTD (Neuron
Degeneration)

In Figure 5.2 we considered MRI and PET images of a brain with neuron degeneration, which is commonly seen in aged people. The first input is the source image obtained from an MRI scan of the brain. The second input, taken as the reference, is obtained from a PET scan of the brain. PET provides information on blood flow and metabolic activity, but with low spatial resolution. As a result, the anatomical and functional medical images need to be combined for a comprehensive view.


Figure 5.2: Hybrid DWT-contourlet fusion of Alzheimer’s disease

We applied the Discrete Wavelet Transform (DWT) to the images, followed by the contourlet transform, and fused them using the appropriate fusion rules discussed in previous chapters. The result obtained from the contourlet transform is in the frequency domain. To reconstruct the final hybrid fused image in the spatial domain, we apply the inverse wavelet transform followed by the inverse contourlet transform.

Example 3: Fusion of CT and MRI: Image Size [512 x 512] of skull from top view

In Figure 5.3 we considered CT and MRI images of the skull. The first input is the source image obtained from a CT scan of the skull; the second input, taken as the reference, is obtained from an MRI scan of the skull. We applied the Discrete Wavelet Transform (DWT) to the images, followed by the contourlet transform.


Figure 5.3: Hybrid DWT-contourlet fusion of skull

We fuse these images using the appropriate fusion rules. The result obtained from the contourlet transform is in the frequency domain. To reconstruct the final hybrid fused image in the spatial domain, we apply the inverse wavelet transform followed by the inverse contourlet transform.

Example 4: Fusion of MRI-T1 and MRI –T2: Image Size [256x 256] of brain
(cholesterol)

In Figure 5.4 we considered MRI-T1 and MRI-T2 images of the brain, which show the cholesterol level. The first input is the source image obtained from the MRI-T1 scan of the brain; the second input, taken as the reference, is obtained from the MRI-T2 scan. We applied the Discrete Wavelet Transform (DWT) to the images, followed by the contourlet transform, and fused them using the appropriate fusion rules.

Figure 5.4: Hybrid DWT-contourlet fusion of brain (cholesterol)

The result obtained from the contourlet transform is in the frequency domain. To reconstruct the final hybrid fused image in the spatial domain, we apply the inverse wavelet transform followed by the inverse contourlet transform.


Example 5: Fusion of CT and MRI: Image Size [512x 512] of Head

In Figure 5.5 we considered CT and MRI images of the head. The first input is the source image obtained from a CT scan of the head; the second input, taken as the reference, is obtained from an MRI scan of the head. We applied the Discrete Wavelet Transform (DWT) to the images, followed by the contourlet transform.


Figure 5.5: Hybrid DWT-contourlet fusion of head

We fuse these images using the appropriate fusion rules. The result obtained from the contourlet transform is in the frequency domain. To reconstruct the final hybrid fused image in the spatial domain, we apply the inverse wavelet transform followed by the inverse contourlet transform.
5.2 Simulation Results of Curvelet – Contourlet Transform:

In this section we discuss the simulation results of the hybrid Curvelet Transform and Contourlet Transform for various medical images obtained from different modalities.

Example 1: Fusion of CT and MRI: Image Size [256 x 256] having tumour in brain

In Figure 5.6 we considered CT and MRI images of a brain with a tumour. The first input is the source image obtained from a CT scan of the brain; CT provides accurate information about calcium deposits, air, bones, and blockages. The second input, taken as the reference, is obtained from an MRI scan of the brain; MRI provides information about the nervous system, soft tissues, and muscles.


Figure 5.6: Hybrid curvelet-contourlet fusion of brain tumour (panels: CT input 1, MRI input 2, curvelet transform, contourlet transform, inverse contourlet transform)

We applied the curvelet transform to the images, followed by the contourlet transform, and fused them using the appropriate fusion rules discussed in previous chapters. The result obtained from the contourlet transform is in the frequency domain. To reconstruct the final hybrid fused image in the spatial domain, we apply the inverse contourlet transform; there is no need for an explicit inverse curvelet transform, as it is performed internally.

Example 2: Fusion of MRI and PET: Image Size [256 x 256] for FTD (Neuron Degeneration)

In Figure 5.7 we considered MRI and PET images of a brain with neuron degeneration, which is commonly seen in aged people. The first input is the source image obtained from an MRI scan of the brain. The second input, taken as the reference, is obtained from a PET scan of the brain. PET provides information on blood flow and metabolic activity, but with low spatial resolution. As a result, the anatomical and functional medical images need to be combined for a comprehensive view.

Figure 5.7: Hybrid curvelet-contourlet fusion of Alzheimer's disease (panels: MRI input 1, PET input 2, curvelet transform, contourlet transform, inverse contourlet transform)

We applied the curvelet transform to the images, followed by the contourlet transform, and fused them using the appropriate fusion rules discussed in previous chapters. The result obtained from the contourlet transform is in the frequency domain. To reconstruct the final hybrid fused image in the spatial domain, we apply the inverse contourlet transform.

Example 3: Fusion of CT and MRI: Image Size [512x 512] of skull

In Figure 5.8 we considered CT and MRI images of the skull. The first input is the source image obtained from a CT scan of the skull; the second input, taken as the reference, is obtained from an MRI scan of the skull. We applied the curvelet transform to the images, followed by the contourlet transform.

Figure 5.8: Hybrid curvelet-contourlet fusion of skull (panels: CT input 1, MRI input 2, curvelet transform, contourlet transform, inverse contourlet transform)


We fuse these images using the appropriate fusion rules. The result obtained from the contourlet transform is in the frequency domain. To reconstruct the final hybrid fused image in the spatial domain, we apply the inverse contourlet transform.

Example 4: Fusion of MRI-T1 and MRI-T2: Image Size [256 x 256] of brain (cholesterol)

In Figure 5.9 we considered two images obtained from MRI scans: MRI-T1 and MRI-T2. MRI T1-weighted imaging is used to differentiate anatomical structures mainly on the basis of T1 values.

Figure 5.9: Hybrid curvelet-contourlet fusion of brain (cholesterol) (panels: MRI-T1 input 1, MRI-T2 input 2, curvelet transform, contourlet transform, inverse contourlet transform)

Tissues with high fat content (e.g. white matter) appear bright, and compartments filled with water (e.g. CSF) appear dark, which is good for demonstrating anatomy; T2-weighted imaging is the reverse. We applied the curvelet transform to the images, followed by the contourlet transform, and fused them using the appropriate fusion rules. The result obtained from the contourlet transform is in the frequency domain. To reconstruct the final hybrid fused image in the spatial domain, we apply the inverse contourlet transform.

Example 5: Fusion of CT and MRI: Image Size [512x 512] of Head

In Figure 5.10 we considered CT and MRI images of the head. The first input is the source image obtained from a CT scan of the head; the second input, taken as the reference, is obtained from an MRI scan of the head.

Figure 5.10: Hybrid curvelet-contourlet fusion of head (panels: CT input 1, MRI input 2, curvelet transform, contourlet transform, inverse contourlet transform)

We applied the curvelet transform to the images, followed by the contourlet transform, and fused them using the appropriate fusion rules. The result obtained from the contourlet transform is in the frequency domain. To reconstruct the final hybrid fused image in the spatial domain, we apply the inverse contourlet transform.

5.3 Comparison of Results and Analysis

In this section we evaluate the effectiveness of the proposed hybrid schemes, the DWT-Contourlet and Curvelet-Contourlet transforms. Parameters such as Entropy, PSNR, and MSE are used to evaluate effectiveness, and the performance metrics of the proposed methods are compared with those of the existing one. We assume the source images to be in perfect registration. Here we consider different source images such as CT, MRI, and PET of brain tumour, skull, MRI-T1, MRI-T2, and Alzheimer's disease (which is widely seen in aged people).


(a) CT-skull (b) MRI-skull (c) DWT-Curvelet (d) DWT-Contourlet (e) Curvelet-Contourlet

(f) CT-tumour (g) MRI-tumour (h) DWT-Curvelet (i) DWT-Contourlet (j) Curvelet-Contourlet

(k) PET-FTD (l) MRI-FTD (m) DWT-Curvelet (n) DWT-Contourlet (o) Curvelet-Contourlet


(p) MRI-T1 (q) MRI-T2 (r) DWT-Curvelet (s) DWT-Contourlet (t) Curvelet-Contourlet

(u) CT-head (v) MRI-head (w) DWT-Curvelet (x) DWT-Contourlet (y) Curvelet-Contourlet

Figure 5.11: Comparison of existing and proposed methods results

The images shown in Figure 5.11 compare the results obtained from the different techniques. Figures (a) and (b) give the bone and tissue information of the skull; by fusing these images we obtain the complete information using the appropriate hybrid technique, and Figures (c), (d), and (e) show the existing and proposed methods. Similarly, Figures (f) and (g) give the bone and tissue information of a brain with a tumour; by fusing these images we obtain the exact location of the tumour, and Figures (h), (i), and (j) show the existing and proposed methods. Figures (k) and (l) give the soft-tissue information from MRI and the blood-flow and metabolic-activity information from PET; by fusing these images we obtain a combined anatomical and functional view, and Figures (m), (n), and (o) show the existing and proposed methods.

Similarly, Figures (p) and (q) show MRI T1- and T2-weighted images, which are used to differentiate anatomical structures mainly on the basis of T1 and T2 values. In T1-weighted images, tissues with high fat content (e.g. white matter) appear bright and compartments filled with water (e.g. CSF) appear dark, which is good for demonstrating anatomy; T2-weighted imaging is the reverse. By fusing them we obtain the cholesterol deposition in the brain using the appropriate hybrid technique, and Figures (r), (s), and (t) show the existing and proposed methods. Figures (u) and (v) give the bone and tissue information of the head; by fusing these images we obtain the complete information, and Figures (w), (x), and (y) show the existing and proposed methods.

From these results we can observe that the Curvelet-Contourlet output is the most suitable for diagnosis, as its features are more visible than in the remaining methods. The drawback of the existing methods, such as the wavelet and curvelet transforms, is that they do not provide directionality; this is addressed by our proposed methods. Although proposed method 1 (Wavelet-Contourlet) gives better results and provides directionality, it does not capture curved information efficiently. This drawback is overcome by the Curvelet-Contourlet transform, which provides both directionality and efficient capture of curved information. Hence the Curvelet-Contourlet method emerges as the better method; the numerical calculation of the performance metrics is explained below.

5.4 Performance Evaluation of Image Fusion Techniques

The quantitative performance of the various hybrid image fusion techniques in the transform domain can be assessed by the following metrics.

A. Entropy

Entropy measures the amount of information contained in an image. A higher entropy value of the fused image indicates the presence of more information and an improvement in the fused image. If L denotes the total number of grey levels and p = {p0, p1, ..., p(L-1)} is the probability distribution of each level, entropy is defined as

E = − Σ_{i=0}^{L−1} p_i log2(p_i)          (5.1)
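Equation (5.1) in a hedged NumPy sketch, assuming an 8-bit grey-level image (L = 256); the histogram-based probability estimate and base-2 logarithm are the usual choices, and zero-probability bins are skipped since 0·log(0) is taken as 0:

```python
import numpy as np

def entropy(img, levels=256):
    """Shannon entropy (bits/pixel) of a grey-level image, per Eq. (5.1)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()            # probability of each grey level
    p = p[p > 0]                     # skip empty bins: 0 * log(0) -> 0
    return float(-np.sum(p * np.log2(p)))

flat = np.zeros((8, 8), dtype=np.uint8)            # one grey level: no information
two_level = np.array([0, 255] * 32, dtype=np.uint8)  # two equally likely levels
print(entropy(flat), entropy(two_level))
```

A constant image yields 0 bits, while an image split evenly between two grey levels yields exactly 1 bit per pixel, which is a quick sanity check on the implementation.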

B. Mean Square Error

The MSE between an image X and an approximation Y is the squared norm of the difference divided by the number of elements in the image. If i and j are the pixel row and column indices, and M and N are the numbers of rows and columns, the MSE is defined by

MSE = (1 / MN) Σ_{i=1}^{M} Σ_{j=1}^{N} [X_ij − Y_ij]^2          (5.2)

C. Peak Signal to Noise Ratio

PSNR is the ratio between the maximum possible power of a signal and the power of
corrupting noise that affects the fidelity of its representation.

PSNR = 20 log10((2^B − 1) / √MSE)          (5.3)

where B is the number of bits per pixel.
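Equations (5.2) and (5.3) in a hedged NumPy sketch, assuming 8-bit images (B = 8, peak value 255); identical images are reported as infinite PSNR since the MSE is zero:

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two equal-sized images, per Eq. (5.2)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.mean((x - y) ** 2))

def psnr(x, y, bits=8):
    """Peak signal-to-noise ratio in dB, per Eq. (5.3)."""
    peak = 2.0 ** bits - 1.0
    m = mse(x, y)
    if m == 0:
        return float("inf")          # identical images
    return float(20.0 * np.log10(peak / np.sqrt(m)))

ref = np.full((4, 4), 100.0)
noisy = ref + 5.0                    # constant offset of 5 -> MSE = 25
print(mse(ref, noisy), psnr(ref, noisy))
```

A higher PSNR and a lower MSE indicate that the fused image is closer to the reference, which is exactly how the tables below rank the methods.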

5.5 Comparison of performance metrics for Proposed Methods and existing method:

Table 5.1: Performance metrics for Fusion of CT and MRI image size of skull


Figure 5.12: Comparison of PSNR, MSE and Entropy in terms of bar charts for CT
and MRI of skull

Table 5.2: Performance metrics for Fusion of CT and MRI image size having tumour
in brain


Figure 5.13: Comparison of PSNR, MSE and Entropy in terms of bar charts for CT and MRI for brain having tumour

Table 5.3: Performance metrics for Fusion of MRI and PET image size for FTD (Neuron Degeneration)


Figure 5.14: Comparison of PSNR, MSE and Entropy in terms of bar charts for MRI and PET for FTD (Neuron Degeneration)

Table 5.4: Performance metrics for Fusion of CT and MRI image size for head


Figure 5.15: Comparison of PSNR, MSE and Entropy in terms of bar charts for CT
and MRI of Head

Table 5.5: Performance metrics for Fusion of MRI-T1 and MRI-T2 image size [256 x 256] for brain (cholesterol)


Figure 5.16: Comparison of PSNR, MSE and Entropy in terms of bar charts for MRI-T1 and MRI-T2 for brain (cholesterol)

These bar charts and tabulated values give the experimental results obtained on the different images discussed previously. From these we can observe that the Curvelet-Contourlet transform gives better results: its PSNR and entropy values are higher than those of the other transform techniques, and its MSE is lower, satisfying the conditions for better image quality after fusion.

CHAPTER 6

CONCLUSION

In this project, a hybrid technique for image fusion using combinations of the wavelet, curvelet, and contourlet transforms is simulated. The simulated results for the different hybrid combinations of the above-mentioned transforms are tested and compared for various medical image combinations, such as CT, MRI, and PET, and for various input image sizes. In all cases, the Curvelet-Contourlet based hybrid technique is observed to outperform the others, providing a better quality fused image than the other two combinations in terms of PSNR, MSE, and Entropy. Hence the Curvelet-Contourlet based hybrid technique suits medical diagnosis best.

Future scope

In this project we applied fusion to noise-free images; this can be extended so that the features of noisy images can be fused and extracted. We also performed fusion of only two images, which can be further extended to fusing multiple images.



REFERENCES
[1] Jyoti Agarwal, Sarabjeet Singh Bedi, "Implementation of hybrid image fusion technique for feature enhancement in medical diagnosis", Springer, Human-centric Computing and Information Sciences, (2015) 5:3, pp. 1-17.
[2] S. Yang, M. Wang, L. Jiao, R. Wu, and Z. Wang, “Image fusion based on a new
contourlet packet”, Inf. Fusion, vol. 11, no. 2, pp. 78–84, 2010.
[3] J. Nunez, X. Otazu, O. Fors, A. Prades, V. Pala and R. Arbiol, “Multiresolution-based
image fusion with additive wavelet decomposition”, IEEE Transactions on Geo-
science and Remote sensing, vol. 37, no. 3, 1999, pp. 1204-1211.
[4] Sweta Mehta, Bijith Mara, “CT and MRI image fusion using curvelet transform,”
ISSN: 0975 – 6779, nov 12 to oct 13, volume – 02, issue – 02, page 848-852.
[5] Navneet kaur, Madhu Bahl, Harsimran Kaur, “Review On: Image Fusion Using
Wavelet and Curvelet Transform” IJCSIT, Vol. 5 (2), page 2467-2470, 2014.
[6] Deron Ray Rodrigues, “Curvelet Based Image fusion techniques for Medical Images”,
IJRASET Volume 3, Issue 3, pp 902-908, March 2015.
[7] R. J. Sapkal, S. M. Kulkarni, “Image Fusion based on Wavelet Transform for Medical
Application”, IJERA, Vol. 2, Issue 5, September- October 2012, pp.624-627.
[8] S. Ibrahim and M. Wirth, “Visible and IR Data Fusion Technique Using the
Contourlet Transform”, International conference on computational science and
engineering, CSE 09, IEEE, vol. 2, pp. 42-47, 2009.
[9] M. N. Do and M. Vetterli, “The contourlet transform: An efficient directional Multi-
resolution image representation”, IEEE Transactions on Image Processing, vol. 14,
No.12, pp. 2091–2106, 2005.
[10] Miao Qiguang, Wang Baoshul, “A Novel Image Fusion Method Using Contourlet
Transform”, IEEE trans. geosci. remote sens., vol. 43, no. 6, pp. 1391-1402, June
2005.
[11] Huang, Junfeng Gao, Zhongsheng Qian “Multi-focus Image Fusion Using an
Effective Discrete Wavelet Transform Based algorithm measurement”, SCIENCE
REVIEW, Volume 14, No. 2, 2014,102-108.
