
A

PROJECT REPORT
ON
COMPARISON & IMPROVEMENT OF IMAGE FUSION USING WAVELETS
ABSTRACT
Title: Analysis of pixel-level multi-sensor medical image fusion
Aim: The goal of image fusion is to create new images that are more suitable for the purposes of human visual perception, object detection and target recognition.
Description:
The objective of image fusion is to combine source images of the same scene into one composite image that contains a more accurate description of the scene than any individual source image. Image fusion methods can be broadly classified into two categories: spatial domain fusion and transform domain fusion. Fusion methods such as averaging, the Brovey method, principal component analysis (PCA) and IHS-based methods fall under the spatial domain approaches.
In this project, we propose a new multiresolution data fusion scheme based on the principal component analysis (PCA) transform and the pixel-level weights wavelet transform. In order to get a more ideal fusion result, a linear local mapping based on the PCA is used to create a new 'origin' image for the image fusion. The Daubechies wavelet is chosen as the wavelet basis.
Wavelet-based fusion techniques have been reasonably effective in combining perceptually important image features. Shift invariance of the wavelet transform is important in ensuring robust subband fusion.
Applications of image fusion include image classification, aerial and satellite imaging, medical imaging, robot vision, concealed weapon detection, multi-focus image fusion, digital camera applications, and battlefield monitoring.
Experimental results confirm that the proposed algorithm is an effective image sharpening method that maintains the spectral information of the original image well. The proposed technique also performs better than the other methods, being more robust and effective in terms of both subjective visual effects and objective statistical analysis. The performance of the image fusion is evaluated by normalized least square error, entropy, overall cross entropy, standard deviation and mutual information.
Reference:
Multi-sensor image data fusion based on pixel-level weights of wavelet and the PCA transform (IEEE, 2007).
INTRODUCTION TO DIGITAL IMAGE PROCESSING
Image:
A digital image is a computer file that contains graphical information instead of text or a program. Pixels are the basic building blocks of all digital images. Pixels are small adjoining squares in a matrix across the length and width of your digital image. They are so small that you don't see the actual pixels when the image is on your computer monitor.
Pixels are monochromatic. Each pixel is a single solid color that is blended from some combination of the 3 primary colors of Red, Green, and Blue. So, every pixel has a RED component, a GREEN component and a BLUE component. The physical dimensions of a digital image are measured in pixels and commonly called pixel or image resolution. Pixels are scalable to different physical sizes on your computer monitor or on a photo print. However, all of the pixels in any particular digital image are the same size. Pixels as represented in a printed photo become round, slightly overlapping dots.
Pixel Values: As shown in this bitonal image, each pixel is assigned a tonal value, in this example 0 for black and 1 for white.
PIXEL DIMENSIONS are the horizontal and vertical measurements of an image expressed in pixels. The pixel dimensions may be determined by multiplying both the width and the height by the dpi. A digital camera will also have pixel dimensions, expressed as the number of pixels horizontally and vertically that define its resolution (e.g., 2,048 by 3,072). Calculate the dpi achieved by dividing a document's dimension into the corresponding pixel dimension against which it is aligned.
Example:
Fig: An 8" x 10" document that is scanned at 300 dpi has pixel dimensions of 2,400 pixels (8" x 300 dpi) by 3,000 pixels (10" x 300 dpi).
Im$%es i" MATLAB:
The basic data structure in MAT6A is the array, an ordered set of real or
complex elements. This object is naturally suited to the representation of images, real-
valued ordered sets of color or intensity data.
MAT6A stores most images as two-dimensional arrays !i.e., matrices$, in which
each element of the matrix corresponds to a single pixel in the displayed image. !"ixel is
derived from picture element and usually denotes a single dot on a computer display.$
=or example, an image composed of .// rows and 3// columns of different
colored dots would be stored in MAT6A as a .//-by-3// matrix. &ome images, such as
color images, re*uire a three-dimensional array, where the first plane in the third
dimension represents the red pixel intensities, the second plane represents the green pixel
intensities, and the third plane represents the blue pixel intensities. This convention
ma1es wor1ing with images in MAT6A similar to wor1ing with any other type of
matrix data, and ma1es the full power of MAT6A available for image processing
applications.
IMAGE REPRESENTATION
An image is stored as a matrix using standard MATLAB matrix conventions. There are four basic types of images supported by MATLAB:
1. Binary images
2. Intensity images
3. RGB images
4. Indexed images
Binary Images:
In a binary image, each pixel assumes one of only two discrete values: 1 or 0. A binary image is stored as a logical array. By convention, this documentation uses the variable name BW to refer to binary images.
The following figure shows a binary image with a close-up view of some of the pixel values.
Fig: Pixel Values in a Binary Image
Grayscale Images:
A grayscale image (also called gray-scale, gray scale, or gray-level) is a data matrix whose values represent intensities within some range. MATLAB stores a grayscale image as an individual matrix, with each element of the matrix corresponding to one image pixel. By convention, this documentation uses the variable name I to refer to grayscale images.
The matrix can be of class uint8, uint16, int16, single, or double. While grayscale images are rarely saved with a color map, MATLAB uses a color map to display them. For a matrix of class single or double, using the default grayscale color map, the intensity 0 represents black and the intensity 1 represents white. For a matrix of type uint8, uint16, or int16, the intensity intmin(class(I)) represents black and the intensity intmax(class(I)) represents white.
The figure below depicts a grayscale image of class double.
Fig: Pixel Values in a Grayscale Image Define Gray Levels
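These intensity conventions can be checked directly in MATLAB. A small illustrative sketch (cameraman.tif is a standard demo image; the variable names are ours):

I8 = imread('cameraman.tif');      % uint8 grayscale demo image
blackLevel = intmin(class(I8))     % 0 for uint8
whiteLevel = intmax(class(I8))     % 255 for uint8
Id = im2double(I8);                % class double: black is 0, white is 1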
Color Images:
A color image is an image in which each pixel is specified by three values — one each for the red, blue, and green components of the pixel's color. MATLAB stores color images as an m-by-n-by-3 data array that defines red, green, and blue color components for each individual pixel. Color images do not use a color map. The color of each pixel is determined by the combination of the red, green, and blue intensities stored in each color plane at the pixel's location.
Graphics file formats store color images as 24-bit images, where the red, green, and blue components are 8 bits each. This yields a potential of 16 million colors. The precision with which a real-life image can be replicated has led to the commonly used term truecolor image.
A color array can be of class uint8, uint16, single, or double. In a color array of class single or double, each color component is a value between 0 and 1. A pixel whose color components are (0, 0, 0) is displayed as black, and a pixel whose color components are (1, 1, 1) is displayed as white. The three color components for each pixel are stored along the third dimension of the data array. For example, the red, green, and blue color components of the pixel (10,5) are stored in RGB(10,5,1), RGB(10,5,2), and RGB(10,5,3), respectively.
The following figure depicts a color image of class double.
Fig: Color Planes of a Truecolor Image
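The indexing just described can be tried directly. A short sketch (peppers.png is a demo image shipped with MATLAB; the variable names are ours):

RGB = im2double(imread('peppers.png'));
r = RGB(10,5,1);    % red component of the pixel at row 10, column 5
g = RGB(10,5,2);    % green component
b = RGB(10,5,3);    % blue component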
I"+e&e+ Im$%es:
An indexed image consists of an array and a colormap matrix. The pixel values in
the array are direct indices into a colormap. y convention, this documentation uses the
variable name A to refer to the array and map to refer to the colormap.
The colormap matrix is an m-by-3 array of class double containing floating-point
values in the range B/, 8C. ,ach row of map specifies the red, green, and blue components
of a single color. An indexed image uses direct mapping of pixel values to colormap
values. The color of each image pixel is determined by using the corresponding value of
A as an index into map.
A colormap is often stored with an indexed image and is automatically loaded
with the image when you use the imread function. After you read the image and the
colormap into the MAT6A wor1space as separate variables, you must 1eep trac1 of the
association between the image and colormap. %owever, you are not limited to using the
default colormap--you can use any colormap that you choose.
The relationship between the values in the image matrix and the colormap
depends on the class of the image matrix. If the image matrix is of class single or double,
it normally contains integer values 8 through p, where p is the length of the colormap.
The value 8 points to the first row in the colormap, the value . points to the second row,
and so on. If the image matrix is of class logical, uint: or uint8>, the value / points to
the first row in the colormap, the value 8 points to the second row, and so on.
The following figure illustrates the structure of an indexed image. In the figure, the image
matrix is of class double, so the value @ points to the fifth row of the colormap.
Fi%: Pi&el V$l'es I"+e& t! C!l!rm$ E"tries i" I"+e&e+ Im$%es
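Reading an indexed image together with its colormap looks like this (trees.tif ships with the Image Processing Toolbox; an illustrative sketch):

[X, map] = imread('trees.tif');   % X: index array, map: p-by-3 colormap
imshow(X, map)                    % display using the stored colormap
rgb = ind2rgb(X, map);            % convert to a truecolor array if needed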
Digital Image File Types:
The 5 most common digital image file types are as follows:
1. JPEG is a compressed file format that supports 24-bit color (millions of colors). This is the best format for photographs to be shown on the web or as email attachments. This is because the color informational bits in the computer file are compressed (reduced) and download times are minimized.
2. GIF is an uncompressed file format that supports only 256 distinct colors. Best used with web clip art and logo-type images. GIF is not suitable for photographs because of its limited color support.
3. TIFF is an uncompressed file format with 24 or 48 bit color support. Uncompressed means that all of the color information from your scanner or digital camera for each individual pixel is preserved when you save as TIFF. TIFF is the best format for saving digital images that you will want to print. TIFF supports embedded file information, including exact color space, output profile information and EXIF data. There is a lossless compression for TIFF called LZW. LZW is much like 'zipping' the image file because there is no quality loss. An LZW TIFF decompresses (opens) with all of the original pixel information unaltered.
4. BMP is a Windows (only) operating system uncompressed file format that supports 24-bit color. BMP does not support embedded information like EXIF, calibrated color space and output profiles. Avoid using BMP for photographs because it produces approximately the same file sizes as TIFF without any of the advantages of TIFF.
5. Camera RAW is a lossless compressed file format that is proprietary for each digital camera manufacturer and model. A camera RAW file contains the 'raw' data from the camera's imaging sensor. Some image editing programs have their own version of RAW too. However, camera RAW is the most common type of RAW file. The advantage of camera RAW is that it contains the full range of color information from the sensor. This means the RAW file contains 12 to 14 bits of color information for each pixel. If you shoot JPEG, you only get 8 bits of color for each pixel. These extra color bits make shooting camera RAW much like shooting negative film. You have a little more latitude in setting your exposure and a slightly wider dynamic range.
Im$%e C!!r+i"$te S)stems:
Pi&el C!!r+i"$tes
4enerally, the most convenient method for expressing locations in an image is to
use pixel coordinates. In this coordinate system, the image is treated as a grid of discrete
elements, ordered from top to bottom and left to right, as illustrated by the following
figure.
Fi%: T,e Pi&el C!!r+i"$te S)stem
=or pixel coordinates, the first component r !the row$ increases downward, while
the second component c !the column$ increases to the right. "ixel coordinates are integer
values and range between 8 and the length of the row or column.
There is a one-to-one correspondence between pixel coordinates and the coordinates
MAT6A uses for matrix subscripting. This correspondence ma1es the relationship
between an image;s data matrix and the way the image is displayed easy to understand.
=or example, the data for the pixel in the fifth row, second column is stored in the matrix
element !@, .$. Fou use normal MAT6A matrix subscripting to access values of
individual pixels.
=or example, the MAT6A code
I !., 8@$
+eturns the value of the pixel at row ., column 8@ of the image I.
S$ti$l C!!r+i"$tes:
In the pixel coordinate system, a pixel is treated as a discrete unit, uni*uely
identified by a single coordinate pair, such as !@, .$. =rom this perspective, a location
such as !@.3, ...$ is not meaningful.
At times, however, it is useful to thin1 of a pixel as a s*uare patch. =rom this
perspective, a location such as !@.3, ...$ is meaningful, and is distinct from !@, .$. In this
spatial coordinate system, locations in an image are positions on a plane, and they are
described in terms of x and y !not r and c as in the pixel coordinate system$.
The following figure illustrates the spatial coordinate system used for images. 5otice that
y increases downward.
Fi%: T,e S$ti$l C!!r+i"$te S)stem
This spatial coordinate system corresponds closely to the pixel coordinate system
in many ways. =or example, the spatial coordinates of the center point of any pixel are
identical to the pixel coordinates for that pixel.
There are some important differences, however. In pixel coordinates, the upper
left corner of an image is !8,8$, while in spatial coordinates, this location by default is
!/.@,/.@$. This difference is due to the pixel coordinate system;s being discrete, while the
spatial coordinate system is continuous. Also, the upper left corner is always !8,8$ in
pixel coordinates, but you can specify a non default origin for the spatial coordinate
system.
Another potentially confusing difference is largely a matter of convention< the
order of the hori-ontal and vertical components is reversed in the notation for these two
systems. As mentioned earlier, pixel coordinates are expressed as !r, c$, while spatial
coordinates are expressed as !x, y$. In the reference pages, when the syntax for a function
uses r and c, it refers to the pixel coordinate system. )hen the syntax uses x and y, it
refers to the spatial coordinate system.
Di%it$l im$%e r!cessi"%:
Di%it$l im$%e r!cessi"% is the use of computer algorithms to perform image
processing on digital images. As a subfield of digital signal processing, digital image
processing has many advantages over analog image processingG it allows a much wider
range of algorithms to be applied to the input data, and can avoid problems such as the
build-up of noise and signal distortion during processing.
Im$%e +i%iti7$ti!":
An image captured by a sensor is expressed as a continuous function f!x,y$ of two
co-ordinates in the plane. Image digiti-ation means that the function f!x,y$ is sampled
into a matrix with M rows and 5 columns. The image *uanti-ation assigns to each
continuous sample an integer value. The continuous range of the image function f!x,y$ is
split into H intervals. The finer the sampling !i.e., the larger M and 5$ and *uanti-ation
!the larger H$ the better the approximation of the continuous image function f!x,y$.
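A minimal sketch of this sampling and quantization process, with an assumed continuous function f(x, y) evaluated on an M-by-N grid and quantized into K levels:

M = 256; N = 256; K = 8;
[x, y] = meshgrid(linspace(0, 1, N), linspace(0, 1, M));
f = 0.5 + 0.5*sin(2*pi*3*x).*cos(2*pi*2*y);   % example continuous function
q = round(f*(K - 1));                         % quantize into K levels, 0..K-1
imagesc(q); colormap(gray(K)); axis image;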
Im$%e Pre8r!cessi"%:
"re-processing is a common name for operations with images at the lowest level of
abstraction -- both input and output are intensity images. These iconic images are of the
same 1ind as the original data captured by the sensor, with an intensity image usually
represented by a matrix of image function values !brightness$. The aim of pre-processing
is an improvement of the image data that suppresses unwanted distortions or enhances
some image features important for further processing. =our categories of image pre-
processing methods according to the si-e of the pixel neighborhood that is used for the
calculation of new pixel brightness<
o "ixel brightness transformations.
o 4eometric transformations.
o "re-processing methods that use a local neighborhood of the processed
pixel.
o Image restoration that re*uires 1nowledge about the entire image.
Im$%e Se%me"t$ti!":
Image segmentation is one of the most important steps leading to the analysis of
processed image data. Its main goal is to divide an image into parts that have a strong
correlation with objects or areas of the real world contained in the image.Two 1inds of
segmentation
8. #omplete segmentation< This results in set of disjoint regions uni*uely
corresponding with objects in the input image. #ooperation with higher
processing levels which use specific 1nowledge of the problem domain is
necessary.
.. "artial segmentation< in which regions do not correspond directly with image
objects. Image is divided into separate regions that are homogeneous with
respect to a chosen property such as brightness, color, reflectivity, texture, etc.
In a complex scene, a set of possibly overlapping homogeneous regions may
result. The partially segmented image must then be subjected to further
processing, and the final image segmentation may be found with the help of
higher level information.
&egmentation methods can be divided into three groups according to the dominant
features they employ
8. =irst is %l!6$l 9"!.le+%e about an image or its partG the 1nowledge is
usually represented by a histogram of image features.
.. E+%e86$se+ segmentations form the second groupG and
3. +egion-based segmentations
Im$%e e",$"ceme"t
The aim of image enhancement is to improve the interpretability or perception of
information in images for human viewers, or to provide Ibetter; input for other automated
image processing techni*ues. Image enhancement techni*ues can be divided into two
broad categories<
8. &patial domain methods, which operate directly on pixels, and
.. =re*uency domain methods, which operate on the =ourier transform of an image.
7nfortunately, there is no general theory for determining what Igood2 image enhancement
is when it comes to human perception. If it loo1s good, it is goodJ %owever, when image
enhancement techni*ues are used as pre-processing tools for other image processing
techni*ues, then *uantitative measures can determine which techni*ues are most
appropriate.
IMAGE FUSION
I"tr!+'cti!":
In computer vision, Multisensor Im$%e F'si!" is the process of combining
relevant information from two or more images into a single image. The resulting image
will be more informative than any of the input images. In remote sensing applications, the
increasing availability of space borne sensors gives a motivation for different image
fusion algorithms. &everal situations in image processing re*uire high spatial and high
spectral resolution in a single image. Most of the available e*uipment is not capable of
providing such data convincingly. The image fusion techni*ues allow the integration of
different information sources. The fused image can have complementary spatial and
spectral resolution characteristics. ut, the standard image fusion techni*ues can distort
the spectral information of the multispectral data, while merging.
In satellite imaging, two types of images are available. The panchromatic image
ac*uired by satellites is transmitted with the maximum resolution available and the
multispectral data are transmitted with coarser resolution. This will be usually, two or
four times lower. At the receiver station, the panchromatic image is merged with the
multispectral data to convey more information.
Many methods exist to perform image fusion. The very basic one is the high pass filtering
techni*ue. 6ater techni*ues are based on ()T, uniform rational filter ban1, and
6aplacian pyramid.
Multisensor data fusion has become a discipline to which more and more general
formal solutions to a number of application cases are demanded. &everal situations in
image processing simultaneously re*uire high spatial and high spectral information in a
single image. This is important in remote sensing. %owever, the instruments are not
capable of providing such information either by design or because of observational
constraints. Kne possible solution for this is data fusion
St$"+$r+ Im$%e F'si!" Met,!+s:
Image fusion methods can be broadly classified into two - spatial domain fusion and
transform domain fusion. The fusion methods such as averaging, rovey method,
principal component analysis !"#A$ and I%& based methods fall under spatial domain
approaches. Another important spatial domain fusion method is the high pass filtering
based techni*ue. %ere the high fre*uency details are injected into upsampled version of
M& images. The disadvantage of spatial domain approaches is that they produce spatial
distortion in the fused image. &pectral distortion becomes a negative factor while we go
for further processing, such as classification problem, of the fused image. The spatial
distortion can be very well handled by transform domain approaches on image fusion.
The multiresolution analysis has become a very useful tool for analy-ing remote
sensing images. The discrete wavelet transform has become a very useful tool for fusion.
&ome other fusion methods are also there, such as 6aplacian pyramid based, #urvelet
transform based etc. These methods show a better performance in spatial and spectral
*uality of the fused image compared to other spatial methods of fusion.
Alic$ti!"s:
8. Image #lassification
.. Aerial and &atellite imaging
3. Medical imaging
9. +obot vision
@. #oncealed weapon detection
>. Multi-focus image fusion
0. (igital camera application
:. #oncealed weapon detection
L. attle field monitoring
S$tellite Im$%e F'si!":
&everal methods are there for merging satellite images. In satellite imagery we can have
two types of images
"anchromatic images - An image collected in the broad visual wavelength range
but rendered in blac1 and white.
Multispectral images - Images optically ac*uired in more than one spectral or
wavelength interval. ,ach individual image is usually of the same physical area
and scale but of a different spectral band.
The &"KT "A5 satellite provides high resolution !8/m pixel$ panchromatic data while
the 6A5(&AT TM satellite provides low resolution !3/m pixel$ multispectral images.
Image fusion attempts to merge these images and produce a single high resolution
multispectral image.
The standard merging methods of image fusion are based on +ed-4reen-lue !+4$ to
Intensity-%ue-&aturation !I%&$ transformation. The usual steps involved in satellite
image fusion are as follows<
8. +egister the low resolution multispectral images to the same si-e as the
panchromatic image
.. Transform the +,4 and bands of the multispectral image into I%& components
3. Modify the panchromatic image with respect to the multispectral image. This is
usually performed by %istogram Matching of the panchromatic image with
Intensity component of the multispectral images as reference
9. +eplace the intensity component by the panchromatic image and perform inverse
transformation to obtain a high resolution multispectral image.
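A hedged sketch of these steps, using MATLAB's built-in HSV transform as a stand-in for IHS and imhistmatch (Image Processing Toolbox) for the histogram matching; the file names are hypothetical:

ms  = im2double(imread('multispectral.png'));   % low-res RGB multispectral
pan = im2double(imread('panchromatic.png'));    % high-res grayscale pan
ms  = imresize(ms, size(pan));          % step 1: resample MS to pan size
hsv = rgb2hsv(ms);                      % step 2: RGB -> hue/saturation/value
panM = imhistmatch(pan, hsv(:,:,3));    % step 3: match pan to the intensity band
hsv(:,:,3) = panM;                      % step 4: substitute intensity...
fused = hsv2rgb(hsv);                   % ...and invert the transform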
Me+ic$l Im$%e F'si!":
Image fusion has recently become a common term used within medical
diagnostics and treatment. The term is used when patient images in different data formats
are fused. These forms can include magnetic resonance image !M+I$, computed
tomography !#T$, and positron emission tomography !",T$. In radiology and radiation
oncology, these images serve different purposes. =or example, #T images are used more
often to ascertain differences in tissue density while M+I images are typically used to
diagnose brain tumors.
=or accurate diagnoses, radiologists must integrate information from multiple
image formats. =used, anatomically-consistent images are especially beneficial in
diagnosing and treating cancer. #ompanies such as Heosys, MIMvista, IHK,, and
rain6A have recently created image fusion software to use in conjunction with
radiation treatment planning systems. )ith the advent of these new technologies,
radiation oncologists can ta1e full advantage of intensity modulated radiation therapy
!IM+T$. eing able to overlay diagnostic images onto radiation planning images results
in more accurate IM+T target tumor volumes.
FUSION ALGORITHMS:
The details of the wavelet and PCA algorithms and their use in image fusion, along with the simple average fusion algorithm, are described in this section.
Principal Component Analysis:
The PCA involves a mathematical procedure that transforms a number of correlated variables into a number of uncorrelated variables called principal components. It computes a compact and optimal description of the data set. The first principal component accounts for as much of the variance in the data as possible and each succeeding component accounts for as much of the remaining variance as possible. The first principal component is taken to be along the direction with the maximum variance. The second principal component is constrained to lie in the subspace perpendicular to the first. Within this subspace, this component points in the direction of maximum variance. The third principal component is taken in the maximum variance direction in the subspace perpendicular to the first two, and so on. The PCA is also called the Karhunen-Loève transform or the Hotelling transform. The PCA does not have a fixed set of basis vectors like the FFT, DCT, wavelet, etc.; its basis vectors depend on the data set.
Let X be a d-dimensional random vector and assume it to have zero empirical mean. The orthonormal projection matrix V is such that Y = V^T X, with the following constraints: the covariance of Y, i.e., cov(Y), is a diagonal matrix, and the inverse of V is equivalent to its transpose (V^-1 = V^T). Using matrix algebra,

    cov(Y) = V^T cov(X) V                                        ..... (1)

Multiplying both sides of the above eqn by V, one gets

    V cov(Y) = cov(X) V                                          ..... (2)

One could write V as V = [V1, V2, ..., Vd] and cov(Y) as diag(λ1, λ2, ..., λd). Substituting Eqn (1) into Eqn (2) gives

    [λ1 V1, λ2 V2, ..., λd Vd] = [cov(X) V1, cov(X) V2, ..., cov(X) Vd]   ..... (3)

This could be rewritten as

    λi Vi = cov(X) Vi                                            ..... (4)

where i = 1, 2, ..., d and Vi is an eigenvector of cov(X).
PCA Algorithm:
Let the source images (images to be fused) be arranged in two column vectors. The steps followed to project this data into a 2-D subspace are listed below (a MATLAB sketch of the steps follows the list):
1. Organize the data into column vectors. The resulting matrix Z is of dimension 2 x n.
2. Compute the empirical mean along each column. The empirical mean vector Me has a dimension of 1 x 2.
3. Subtract the empirical mean vector Me from each column of the data matrix Z. The resulting matrix X is of dimension 2 x n.
4. Find the covariance matrix C of X, i.e., C = E[XX^T] = cov(X).
5. Compute the eigenvectors V and eigenvalues D of C and sort them by decreasing eigenvalue. Both V and D are of dimension 2 x 2.
6. Consider the first column of V, which corresponds to the larger eigenvalue, to compute P1 and P2 as

    P1 = V(1) / sum(V)   and   P2 = V(2) / sum(V)
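A hedged sketch of these steps, assuming I1 and I2 are registered grayscale images of the same size and class double:

Z = [I1(:)'; I2(:)'];                  % step 1: 2 x n data matrix
Me = mean(Z, 2);                       % step 2: empirical mean
X = Z - repmat(Me, 1, size(Z, 2));     % step 3: remove the mean
C = X*X' / (size(X, 2) - 1);           % step 4: 2 x 2 covariance matrix
[V, D] = eig(C);                       % step 5: eigenvectors and eigenvalues
[~, idx] = max(diag(D));               % column with the larger eigenvalue
v = V(:, idx);
P1 = v(1) / sum(v);                    % step 6: normalized weights,
P2 = v(2) / sum(v);                    % so that P1 + P2 = 1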
Im$%e F'si!" 6) PCA:
The information flow diagram of "#A-based image fusion algorithm is shown in
figure below. The input images !images to be fused$ I8 !x, y$ and I. !x, y$ are arranged in
two column vectors and their empirical means are subtracted. The resulting vector has a
dimension of n x ., where n is length of the each image vector. #ompute the eigenvector
and eigenvalues for this resulting vector are computed and the eigenvectors
corresponding to the larger eigenvalue obtained. The normali-ed components "8 and ".
!i.e., "8 R ". O 8$ using e*uation !3$ are computed from the obtained eigenvector. The
fused image is<
=igure< Information flow diagram in image fusion scheme employing "#A.
Im$%e F'si!" 6) Simle A*er$%e:
This techni*ue is a basic and straightforward techni*ue and fusion could be achieved by
simple averaging corresponding pixels in each input image as<
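In MATLAB this is a one-line operation, assuming registered images of equal size and class double:

If = (I1 + I2) / 2;    % simple-average fusion
imshow(If)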
Im$%e F'si!" 6) W$*elet Tr$"s#!rms:
F!'rier $"$l)sis:
&ignal analysts already have at their disposal an impressive arsenal of tools. "erhaps
the most well-1nown of these is =ourier analysis, which brea1s down a signal into
constituent sinusoids of different fre*uencies. Another way to thin1 of =ourier analysis is
as a mathematical techni*ue for transforming our view of the signal from time-based to
fre*uency-based.
Fi%'re 1
=or many signals, =ourier analysis is extremely useful because the signal2s
fre*uency content is of great importance. &o why do we need other techni*ues, li1e
wavelet analysisS

=ourier analysis has a serious drawbac1. In transforming to the fre*uency domain,
time information is lost. )hen loo1ing at a =ourier transform of a signal, it is impossible
to tell when a particular event too1 place. If the signal properties do not change much
over time ? that is, if it is what is called a stationary signal?this drawbac1 isn2t very
important. %owever, most interesting signals contain numerous non stationary or
transitory characteristics< drift, trends, abrupt changes, and beginnings and ends of
events. These characteristics are often the most important part of the signal, and =ourier
analysis is not suited to detecting them.
S,!rt8Time F!'rier A"$l)sis

In an effort to correct this deficiency, (ennis 4abor !8L9>$ adapted the =ourier
transform to analy-e only a small section of the signal at a time?a techni*ue called
windowing the signal.4abor2s adaptation, called the &hort-Time =ourierTransform
!&T=T$, maps a signal into a two-dimensional function of time and
fre*uency.
Fi%'re 3
The &T=T represents a sort of compromise between the time- and fre*uency-based
views of a signal. It provides some information about both when and at what fre*uencies
a signal event occurs. %owever, you can only obtain this information with limited
precision, and that precision is determined by the si-e of the window. )hile the &T=T
compromise between time and fre*uency information can be useful, the drawbac1 is that
once you choose a particular si-e for the time window, that window is the same for all
fre*uencies. Many signals re*uire a more flexible approach?one where we can vary the
window si-e to determine more accurately either time or fre*uency.
W$*elet A"$l)sis

)avelet analysis represents the next logical step< a windowing techni*ue with
variable-si-ed regions. )avelet analysis allows the use of long time intervals where we
want more precise low-fre*uency information, and shorter regions where we want high-
fre*uency information.
Fi%'re 4
%ere2s what this loo1s li1e in contrast with the time-based, fre*uency-based,
and &T=T views of a signal<
Fi%'re -
Fou may have noticed that wavelet analysis does not use a time-fre*uency region, but
rather a time-scale region. =or more information about the concept of scale and the lin1
between scale and fre*uency, see T%ow to #onnect &cale to =re*uencySU
W,$t C$" W$*elet A"$l)sis D!>

Kne major advantage afforded by wavelets is the ability to perform local analysis,
that is, to analy-e a locali-ed area of a larger signal. #onsider a sinusoidal signal with a
small discontinuity ? one so tiny as to be barely visible. &uch a signal easily could be
generated in the real world, perhaps by a power fluctuation or a noisy switch.
Fi%'re 2
A plot of the =ourier coefficients !as provided by the fft command$ of this signal
shows nothing particularly interesting< a flat spectrum with two pea1s representing a
single fre*uency. %owever, a plot of wavelet coefficients clearly shows the exact location
in time of the discontinuity.
Fi%'re ?
)avelet analysis is capable of revealing aspects of data that
other signal analysis techni*ues miss, aspects li1e trends, brea1down points,
discontinuities in higher derivatives, and self-similarity. =urthermore, because it affords a
different view of data than those presented by traditional techni*ues, wavelet analysis can
often compress or de-noise a signal without appreciable degradation. Indeed, in their brief
history within the signal processing field, wavelets have already proven themselves to be
an indispensable addition to the analyst2s collection of tools and continue to enjoy a
burgeoning popularity today.
W,$t Is W$*elet A"$l)sis>

5ow that we 1now some situations when wavelet analysis is useful, it is worthwhile
as1ing T)hat is wavelet analysisSU and even more fundamentally,
T)hat is a waveletSU
A wavelet is a waveform of effectively limited duration that has an average value of -ero.
#ompare wavelets with sine waves, which are the basis of =ourier analysis.
&inusoids do not have limited duration ? they extend from minus to plus
infinity. And where sinusoids are smooth and predictable, wavelets tend to be
irregular and asymmetric.
Fi%'re 5
=ourier analysis consists of brea1ing up a signal into sine waves of various
fre*uencies. &imilarly, wavelet analysis is the brea1ing up of a signal into shifted and
scaled versions of the original !or mother$ wavelet. Eust loo1ing at pictures of wavelets
and sine waves, you can see intuitively that signals with sharp changes might be better
analy-ed with an irregular wavelet than with a smooth sinusoid, just as some foods are
better handled with a for1 than a spoon. It also ma1es sense that local features can be
described better with wavelets that have local extent.
T,e C!"ti"'!'s W$*elet Tr$"s#!rm:
Mathematically, the process of =ourier analysis is represented by the =ourier
transform<
which is the sum over all time of the signal f!t$ multiplied by a complex exponential.
!+ecall that a complex exponential can be bro1en down into real and imaginary
sinusoidal components.$ The results of the transform are the =ourier coefficients =!w$,
which when multiplied by a sinusoid of fre*uency w yields the constituent sinusoidal
components of the original signal. 4raphically, the process loo1s li1e<
Fi%'re @
&imilarly, the continuous wavelet transform !#)T$ is defined as the sum over all
time of the signal multiplied by scaled, shifted versions of the wavelet function V V
The result of the #)T is a series many wavelet coefficients C, which are a function
of scale and position0
Multiplying each coefficient by the appropriately scaled and shifted wavelet yields the
constituent wavelets of the original signal<
Fi%'re /A
Sc$li"%
)e2ve already alluded to the fact that wavelet analysis produces a time-scale
view of a signal and now we2re tal1ing about scaling and shifting wavelets.
)hat exactly do we mean by scale in this contextS
&caling a wavelet simply means stretching !or compressing$ it.
To go beyond collo*uial descriptions such as Tstretching,U we introduce the scale factor,
often denoted by the letter a.
If we2re tal1ing about sinusoids, for example the effect of the scale factor is very easy to
see<
Fi%'re //
The scale factor wor1s exactly the same with wavelets. The smaller the scale factor, the
more TcompressedU the wavelet.
Fi%'re /1
It is clear from the diagrams that for a sinusoid sin !wt$ the scale factor Wa2 is related
!inversely$ to the radian fre*uency Ww2. &imilarly, with wavelet analysis the scale is
related to the fre*uency of the signal.
S,i#ti"%
&hifting a wavelet simply means delaying !or hastening$ its onset. Mathematically,
delaying a function VV !t$ by k is represented by VV !t-1$
Fi%'re /3
Five Easy Steps to a Continuous Wavelet Transform:
The continuous wavelet transform is the sum over all time of the signal multiplied by scaled, shifted versions of the wavelet. This process produces wavelet coefficients that are a function of scale and position.
It's really a very simple process. In fact, here are the five steps of an easy recipe for creating a CWT:
1. Take a wavelet and compare it to a section at the start of the original signal.
2. Calculate a number C that represents how closely correlated the wavelet is with this section of the signal. The higher C is, the more the similarity. More precisely, if the signal energy and the wavelet energy are equal to one, C may be interpreted as a correlation coefficient. Note that the results will depend on the shape of the wavelet you choose.
Figure 14
3. Shift the wavelet to the right and repeat steps 1 and 2 until you've covered the whole signal.
Figure 15
4. Scale (stretch) the wavelet and repeat steps 1 through 3.
Figure 16
5. Repeat steps 1 through 4 for all scales.
When you're done, you'll have the coefficients produced at different scales by different sections of the signal. The coefficients constitute the results of a regression of the original signal performed on the wavelets.
How to make sense of all these coefficients? You could make a plot on which the x-axis represents position along the signal (time), the y-axis represents scale, and the color at each x-y point represents the magnitude of the wavelet coefficient C. These are the coefficient plots generated by the graphical tools.
Figure 17
These coefficient plots resemble a bumpy surface viewed from above. If you could look at the same surface from the side, you might see something like this:
Figure 18
The continuous wavelet transform coefficient plots are precisely the time-scale view of the signal we referred to earlier. It is a different view of signal data than the time-frequency Fourier view, but it is not unrelated.
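Such a coefficient plot can be generated with the classic Wavelet Toolbox cwt(signal, scales, wavelet) interface; recent MATLAB releases use a different cwt signature, so treat this as an illustrative sketch:

t = linspace(0, 1, 1000);
s = sin(2*pi*10*t);                 % a simple test signal
s(500) = s(500) + 0.5;              % add a small discontinuity
coefs = cwt(s, 1:31, 'db2');        % CWT coefficients for scales 1 to 31
imagesc(coefs); axis xy;
xlabel('Time (sample)'); ylabel('Scale');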
Sc$le $"+ FreB'e"c):
5otice that the scales in the coefficients plot !shown as y-axis labels$ run from 8 to
38. +ecall that the higher scales correspond to the most TstretchedU wavelets. The more
stretched the wavelet, the longer the portion of the signal with which it is being
compared, and thus the coarser the signal features being measured by the wavelet
coefficients.
Fi%'re /@
Thus, there is a correspondence between wavelet scales and fre*uency as revealed by
wavelet analysis<
C 6ow scale a=> #ompressed wavelet OX +apidly changing details OX %igh
fre*uency Ww2.
C %igh scale a=>&tretched waveletOX&lowly changing, coarse featuresOX6ow
fre*uency Ww2.
The Scale of Nature:
It's important to understand that the fact that wavelet analysis does not produce a time-frequency view of a signal is not a weakness, but a strength of the technique.
Not only is time-scale a different way to view data, it is a very natural way to view data deriving from a great number of natural phenomena.
Consider a lunar landscape, whose ragged surface (simulated below) is a result of centuries of bombardment by meteorites whose sizes range from gigantic boulders to dust specks. If we think of this surface in cross-section as a one-dimensional signal, then it is reasonable to think of the signal as having components of different scales — large features carved by the impacts of large meteorites, and finer features abraded by small meteorites.
Figure 20
Here is a case where thinking in terms of scale makes much more sense than thinking in terms of frequency. Inspection of the CWT coefficients plot for this signal reveals patterns among scales and shows the signal's possibly fractal nature.
Figure 21
Even though this signal is artificial, many natural phenomena — from the intricate branching of blood vessels and trees, to the jagged surfaces of mountains and fractured metals — lend themselves to an analysis of scale.
T,e Discrete W$*elet Tr$"s#!rm:
#alculating wavelet coefficients at every possible scale is a fair amount of wor1,
and it generates an awful lot of data. )hat if we choose only a subset of scales and
positions at which to ma1e our calculationsS It turns out rather remar1ably that if we
choose scales and positions based on powers of two?so-called dyadic scales and
positions?then our analysis will be much more efficient and just as accurate. )e obtain
such an analysis from the discrete wavelet transform !()T$.
An efficient way to implement this scheme using filters was developed in 8L:: by
Mallat. The Mallat algorithm is in fact a classical scheme 1nown in the signal processing
community as a two-channel sub band coder. This very practical filtering algorithm
yields a fast wavelet transform ? a box into which a signal passes, and out of which
wavelet coefficients *uic1ly emerge. 6et2s examine this in more depth.
O"e8St$%e Filteri"%: Ar!&im$ti!"s $"+ Det$ils:
=or many signals, the low-fre*uency content is the most important part. It is what
gives the signal its identity. The high-fre*uency content on the other hand imparts flavor
or nuance. #onsider the human voice. If you remove the high-fre*uency components, the
voice sounds different but you can still tell what2s being said. %owever, if you remove
enough of the low-fre*uency components, you hear gibberish. In wavelet analysis, we
often spea1 of approximations and details. The approximations are the high-scale, low-
fre*uency components of the signal. The details are the low-scale, high-fre*uency
components.
The filtering process at its most basic level loo1s li1e this<
Fi%'re 13
The original signal & passes through two complementary filters and emerges as
two signals.
7nfortunately, if we actually perform this operation on a real digital signal, we
wind up with twice as much data as we started with. &uppose, for instance that the
original signal & consists of 8/// samples of data. Then the resulting signals will each
have 8/// samples, for a total of .///.
These signals A and ( are interesting, but we get ./// values instead of the 8///
we had. There exists a more subtle way to perform the decomposition using wavelets. y
loo1ing carefully at the computation, we may 1eep only one point out of two in each of
the two .///-length samples to get the complete information. This is the notion of own
sampling. )e produce two se*uences called cA and c(.
Fi%'re 14
The process on the right which includes down sampling produces ()T
#oefficients. To gain a better appreciation of this process let2s perform a one-stage
discrete wavelet transform of a signal. Kur signal will be a pure sinusoid with
high- fre*uency noise added to it.
%ere is our schematic diagram with real signals inserted into it<
Fi%'re 1-
The MATLAB code needed to generate s, cD, and cA is:

s = sin(20*linspace(0,pi,1000)) + 0.5*rand(1,1000);
[cA,cD] = dwt(s,'db2');

where db2 is the name of the wavelet we want to use for the analysis. Notice that the detail coefficients cD are small and consist mainly of high-frequency noise, while the approximation coefficients cA contain much less noise than does the original signal.

[length(cA) length(cD)]
ans = 501 501

You may observe that the actual lengths of the detail and approximation coefficient vectors are slightly more than half the length of the original signal. This has to do with the filtering process, which is implemented by convolving the signal with a filter. The convolution "smears" the signal, introducing several extra samples into the result.
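The decomposition can be checked by inverting it; the reconstruction error should be at the level of floating-point round-off:

s_rec = idwt(cA, cD, 'db2');   % single-level inverse DWT
max(abs(s - s_rec))            % expected to be negligibly small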
M'ltile8Le*el Dec!m!siti!":
The decomposition process can be iterated, with successive approximations being
decomposed in turn, so that one signal is bro1en down into many lower resolution
components. This is called the wavelet decomposition tree.
Fi%'re 12
6oo1ing at a signal2s wavelet decomposition tree can yield valuable information.
Fi%'re 1?
N'm6er !# Le*els:
&ince the analysis process is iterative, in theory it can be continued indefinitely. In
reality, the decomposition can proceed only until the individual details consist of a single
sample or pixel. In practice, you2ll select a suitable number of levels based on the nature
of the signal, or on a suitable criterion such as entropy.
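The Wavelet Toolbox function wmaxlev suggests the maximum useful level for a given signal length and wavelet, and wavedec performs the multilevel decomposition; for example:

lmax = wmaxlev(1000, 'db2')      % maximum sensible level for 1000 samples
[C, L] = wavedec(s, 3, 'db2');   % 3-level decomposition of the signal s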
W$*elet Rec!"str'cti!":
)e2ve learned how the discrete wavelet transform can be used to analy-e or
decompose, signals and images. This process is called decomposition or analysis. The
other half of the story is how those components can be assembled bac1 into the original
signal without loss of information. This process is called reconstruction, or synthesis. The
mathematical manipulation that effects synthesis is called the inverse discrete wavelet
transforms !I()T$. To synthesi-e a signal in the )avelet Toolbox, we reconstruct it from
the wavelet coefficients<

Fi%'re 15
)here wavelet analysis involves filtering and down sampling, the wavelet
reconstruction process consists of up sampling and filtering. 7p sampling is the process
of lengthening a signal component by inserting -eros between samples<
Fi%'re 1@
The )avelet Toolbox includes commands li1e idwt and waverec that perform
single-level or multilevel reconstruction respectively on the components of one-
dimensional signals. These commands have their two-dimensional analogs, idwt. and
waverec..
Rec!"str'cti!" Filters:
The filtering part of the reconstruction process also bears some discussion, because
it is the choice of filters that is crucial in achieving perfect reconstruction of the original
signal. The down sampling of the signal components performed during the decomposition
phase introduces a distortion called aliasing. It turns out that by carefully choosing filters
for the decomposition and reconstruction phases that are closely related !but not
identical$, we can Tcancel outU the effects of aliasing.
The low- and high pass decomposition filters !6 and %$, together with their
associated reconstruction filters !6; and %;$, form a system of what is called *uadrature
mirror filters<
Fi%'re 3A
Rec!"str'cti"% Ar!&im$ti!"s $"+ Det$ils:
)e have seen that it is possible to reconstruct our original signal from the
coefficients of the approximations and details.
Fi%'re
3/
It is also
possible to
reconstruct the
approximations
and details themselves from their coefficient vectors.
As an example, let2s consider how we would reconstruct the first-level
approximation A8 from the coefficient vector cA8. )e pass the coefficient vector cA8
through the same process we used to reconstruct the original signal. %owever, instead of
combining it with the level-one detail c(8, we feed in a vector of -eros in place of the
detail coefficients
vector<
Fi%'re 31
The process yields a reconstructed approximation A8, which has the same length
as the original signal & and which is a real approximation of it. &imilarly, we can
reconstruct the first-level detail (8, using the analogous process<

Fi%'re 33
The reconstructed details and approximations are true constituents of the original
signal. In fact, we find when we combine them that<
A8 R D8 O S
5ote that the coefficient vectors cA8 and c(8?because they were produced by
(own sampling and are only half the length of the original signal ? cannot directly be
combined to reproduce the signal.
It is necessary to reconstruct the approximations and details before combining
them. ,xtending this techni*ue to the components of a multilevel analysis, we find that
similar relationships hold for all the reconstructed signal constituents.
That is, there are several ways to reassemble the original signal<
Fi%'re 34
Rel$ti!"s,i !# Filters t! W$*elet S,$es:
In the section T+econstruction =iltersU, we spo1e of the importance of choosing
the right filters. In fact, the choice of filters not only determines whether perfect
reconstruction is possible, it also determines the shape of the wavelet we use to perform
the analysis. To construct a wavelet of some practical utility, you seldom start by drawing
a waveform. Instead, it usually ma1es more sense to design the appropriate *uadrature
mirror filters, and then use them to create the waveform. 6et2s see
how this is done by focusing on an example.
#onsider the low pass reconstruction filter !6;$ for the db. wavelet.
W$*elet #'"cti!" !siti!"
Fi%'re 3-
The filter coefficients can be obtained from the dbaux command<
6prime O dbaux!.$
6prime O /.398@ /.@L8@ /.8@:@ Z/./L8@
If we reverse the order of this vector !see wrev$, and then multiply every even
sample by Z8, we obtain the high pass filter %;<
%prime O Z/./L8@ Z/.8@:@ /.@L8@ Z/.398@
5ext, up sample %prime by two !see dyadup$, inserting -eros in alternate
positions<
%7 OZ/./L8@ / Z/.8@:@ / /.@L8@ / Z/.398@ /
=inally, convolve the up sampled vector with the original low pass filter<
%. O conv!%7,6prime$G
plot!%.$

Fi%'re 32
If we iterate this process several more times, repeatedly up sampling and
convolving the resultant vector with the four-element filter vector 6prime, a pattern
begins to emerge<
Fi%'re 3?
The curve begins to loo1 progressively more li1e the db. wavelet. This means
that the wavelet2s shape is determined entirely by the coefficients of the reconstruction
filters. This relationship has profound implications. It means that you cannot choose just
any shape, call it a wavelet, and perform an analysis. At least, you can2t choose an
arbitrary wavelet waveform if you want to be able to reconstruct the original signal
accurately. Fou are compelled to choose a shape determined by *uadrature mirror
decomposition filters.
T,e Sc$li"% F'"cti!":
)e2ve seen the interrelation of wavelets and *uadrature mirror filters. The wavelet
function V is determined by the high pass filter, which also produces the details of the
wavelet decomposition.
There is an additional function associated with some, but not all wavelets. This is
the so-called scaling function . The scaling function is very similar to the wavelet
function. It is determined by the low pass *uadrature mirror filters, and thus is associated
with the approximations of the wavelet decomposition. In the same way that iteratively
up- sampling and convolving the high pass filter produces a shape approximating the
wavelet function, iteratively up-sampling and convolving the low pass filter produces a
shape approximating the scaling function.
M'lti8ste Dec!m!siti!" $"+ Rec!"str'cti!":
A multi step analysis-synthesis process can be represented as<
Fi%'re 35
This process involves two aspects< brea1ing up a signal to obtain the wavelet
coefficients, and reassembling the signal from the coefficients. )e2ve already discussed
decomposition and reconstruction at some length. Kf course, there is no point brea1ing
up a signal merely to have the satisfaction of immediately reconstructing it. )e may
modify the wavelet coefficients before performing the reconstruction step. )e perform
wavelet analysis because the coefficients thus obtained have many 1nown uses, de-
noising and compression being foremost among them. ut wavelet analysis is still a new
and emerging field. 5o doubt, many uncharted uses of the wavelet coefficients lie in wait.
The )avelet Toolbox can be a means of exploring possible uses and hitherto un1nown
applications of wavelet analysis. ,xplore the toolbox functions and see what you
discover.
WAVELET DECOMPOSITION:
Images are treated as two-dimensional signals; they change horizontally and vertically, so 2-D wavelet analysis must be used for images. 2-D wavelet analysis uses the same 'mother wavelets' but requires an extra step at every level of decomposition. The 1-D analysis filtered out the high-frequency information from the low-frequency information at every level of decomposition, so only two subsignals were produced at each level.
In 2-D, the images are considered to be matrices with N rows and M columns. At every level of decomposition, the horizontal data is filtered, then the approximation and details produced from this are filtered on columns.
Fig 1: Decomposition of an Image
At every level, four sub-images are obtained: the approximation, the vertical detail, the horizontal detail and the diagonal detail. Below, the Saturn image has been decomposed to one level. The wavelet analysis has found how the image changes vertically, horizontally and diagonally.
Fig 2: 2-D Decomposition of Saturn Image to level 1
To get the next level of decomposition, the approximation sub-image is decomposed; this idea can be seen in figure 3.
Fig 3: Saturn Image decomposed to Level 3. Only the 9 detail sub-images and the final approximation sub-image are required to reconstruct the image perfectly.
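One level of 2-D decomposition can be computed with dwt2 (Wavelet Toolbox); X is assumed to be a grayscale image matrix, e.g. the demo image obtained with load woman:

[cA, cH, cV, cD] = dwt2(X, 'db2');   % approximation + horiz/vert/diag details
subplot(2,2,1), imagesc(cA), title('Approximation');
subplot(2,2,2), imagesc(cH), title('Horizontal detail');
subplot(2,2,3), imagesc(cV), title('Vertical detail');
subplot(2,2,4), imagesc(cD), title('Diagonal detail');
colormap(gray);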
When compressing with orthogonal wavelets, the energy retained, in percentage, is:

    100 * (vector norm of the coefficients of the current decomposition)^2 / (vector norm of the coefficients of the original signal)^2

The number of zeros in percentage is defined by:

    100 * (number of zeros of the current decomposition) / (number of coefficients)
Fi%'re: I"#!rm$ti!" #l!. +i$%r$m i" im$%e #'si!" sc,eme eml!)i"% m'lti8sc$le
+ec!m!siti!"0
The information flow diagram of wavelet- based image fusion algorithm is shown in
above figure. In wavelet image fusion scheme, the source images I8 !x,y$ and I. !x,y$, are
decomposed into approximation and detailed coefficients at re*uired level using ()T.
The approximation and detailed coefficients of both images are combined using fusion
rule [.
The fused image !If !x, y$$ could be obtained by ta1ing the inverse discrete wavelet
transform !I()T$ as<
If !x, y$ O I()T B[\ ()T !I8 !x,y$$, ()T !I. !x,y$$]C PPP. !@$
The fusion rule used in this project is simply averages the approximation coefficients and
pic1s the detailed coefficient in each sub band with the largest magnitude.
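A hedged sketch of this fusion rule for a single decomposition level, assuming I1 and I2 are registered grayscale images of class double:

[cA1, cH1, cV1, cD1] = dwt2(I1, 'db2');
[cA2, cH2, cV2, cD2] = dwt2(I2, 'db2');
cA = (cA1 + cA2)/2;                          % average the approximations
pick = @(a,b) a.*(abs(a) >= abs(b)) + b.*(abs(a) < abs(b));
cH = pick(cH1, cH2);                         % keep the larger-magnitude
cV = pick(cV1, cV2);                         % detail coefficient in each
cD = pick(cD1, cD2);                         % subband
If = idwt2(cA, cH, cV, cD, 'db2');           % fused image via IDWT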
E"tr!):
,ntropy of grayscale image
, O entropy !I$
,ntropy is a statistical measure of randomness that can be used to characteri-e the
texture of the input image. ,ntropy is defined as -sum !p.Ylog. !p$$ where p contains the
histogram counts returned from imhist. y default, entropy uses two bins for logical
arrays and .@> bins for uint:, uint8>, or double arrays. I can be a multidimensional
image. If I have more than two dimensions, the entropy function treats it as a
multidimensional grayscale image and not as an +4 image. Image can be logical, uint:,
uint8>, or double and must be real, nonempty, and nonsparse. , is double. ,ntropy
converts any class other than logical to uint: for the histogram count calculation so that
the pixel values are discrete and directly correspond to a bin value.
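As a fusion metric, entropy can be compared across the inputs and the fused result (If from the wavelet fusion sketch above); a higher value suggests the fused image carries more information:

E1 = entropy(I1);
E2 = entropy(I2);
Ef = entropy(If);
fprintf('Entropy: I1 = %.3f, I2 = %.3f, fused = %.3f\n', E1, E2, Ef);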
INTRODUCTION TO MATLAB

What Is MATLAB?
MATLAB® is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include:
1. Math and computation
2. Algorithm development
3. Data acquisition
4. Modeling, simulation, and prototyping
5. Data analysis, exploration, and visualization
6. Scientific and engineering graphics
7. Application development, including graphical user interface building.
MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar noninteractive language such as C or FORTRAN.
The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation.
MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis.
MATLAB features a family of add-on application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.
The MATLAB System:
The MATLAB system consists of five main parts:
Development Environment:
This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.
The MATLAB Mathematical Function Library:
This is a vast collection of computational algorithms ranging from elementary functions like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
The MATLAB Language:
This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick and dirty throw-away programs, and "programming in the large" to create complete large and complex application programs.
Graphics:
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications.
The MATLAB Application Program Interface (API):
This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files.
MATLAB WORKING ENVIRONMENT:
MATLAB DESKTOP:
The MATLAB desktop is the main MATLAB application window. The desktop contains five sub-windows: the Command Window, the Workspace Browser, the Current Directory window, the Command History window, and one or more Figure windows, which are shown only when the user displays a graphic.
The Command Window is where the user types MATLAB commands and expressions at the prompt (>>) and where the output of those commands is displayed. MATLAB defines the workspace as the set of variables that the user creates in a work session. The Workspace Browser shows these variables and some information about them. Double-clicking on a variable in the Workspace Browser launches the Array Editor, which can be used to obtain information about, and in some instances edit, certain properties of the variable.
The Current Directory tab above the Workspace tab shows the contents of the current directory, whose path is shown in the Current Directory window. For example, in the Windows operating system the path might be as follows: C:\MATLAB\Work, indicating that the directory "Work" is a subdirectory of the main directory "MATLAB", which is installed in drive C. Clicking on the arrow in the Current Directory window shows a list of recently used paths. Clicking on the button to the right of the window allows the user to change the current directory.
MATLAB uses a search path to find M-files and other MATLAB related files, which are organized in directories in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu of the desktop, and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory.
The Command History window contains a record of the commands a user has entered in the Command Window, including both current and previous MATLAB sessions. Previously entered MATLAB commands can be selected and re-executed from the Command History window by right-clicking on a command or sequence of commands. This action launches a menu from which to select various options in addition to executing the commands. This is a useful feature when experimenting with various commands in a work session.
Usi"% t,e MATLAB E+it!r t! cre$te M8Files:
The MAT6A editor is both a text editor speciali-ed for creating M-files and a
graphical MAT6A debugger. The editor can appear in a window by itself, or it can be a
sub window in the des1top. M-files are denoted by the extension .m, as in pixelup.m. The
MAT6A editor window has numerous pull-down menus for tas1s such as saving,
viewing, and debugging files. ecause it performs some simple chec1s and also uses
color to differentiate between various elements of code, this text editor is recommended
as the tool of choice for writing and editing M-functions. To open the editor , type edit at
the prompt opens the M-file filename.m in an editor window, ready for editing. As noted
earlier, the file must be in the current directory, or in a directory in the search path.
Getti"% :el:
The principal way to get help online is to use the MAT6A help browser, opened as
a separate window either by clic1ing on the *uestion mar1 symbol !S$ on the des1top
toolbar, or by typing help browser at the prompt in the command window. The help
rowser is a web browser integrated into the MAT6A des1top that displays a %ypertext
Mar1up 6anguage!%TM6$ documents. The %elp rowser consists of two panes, the help
navigator pane, used to find information, and the display pane, used to view the
information. &elf-explanatory tabs other than navigator pane are used to perform a search.
Fig: Image 1
Fig: Image 2
Figure: Fused image by simple average
Figure: Fused image by PCA
Figure: Fused image by wavelets
CONCLUSIONS
Pixel-level image fusion using the wavelet transform and principal component analysis has been implemented in PC MATLAB. Different image fusion performance metrics, with and without a reference image, have been evaluated. The simple averaging fusion algorithm shows degraded performance. Image fusion using wavelets with a higher level of decomposition shows better performance in some metrics, while in other metrics the PCA shows better performance. Some further investigation is needed to resolve this issue.
REFERENCES
1. Pajares, Gonzalo & de la Cruz, Jesus Manuel. A wavelet-based image fusion tutorial. Pattern Recognition, 2004, 37, 1855-1872.
2. Varshney, P.K. Multi-sensor data fusion. Elec. Comm. Engg., 1997, 9(12), 245-53.
3. Mallat, S.G. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell., 1989, 11(7), 674-93.
4. Wang, H.; Peng, J. & Wu, W. Fusion algorithm for multisensor image based on discrete multiwavelet transform. IEE Proc. Visual Image Signal Process., 2002, 149(5).
5. Jalili-Moghaddam, Mitra. Real-time multi-focus image fusion using discrete wavelet transform and Laplacian pyramid transform. Chalmers University of Technology, Goteborg, Sweden, 2005. Masters thesis.
6. Daubechies, I. Ten lectures on wavelets. In Regular Conference Series in Applied Maths, Vol. 91, 1992, SIAM, Philadelphia.
7. http://en.wikipedia.org/wiki/Principal_components_analysis
8. Naidu, V.P.S.; Girija, G. & Raol, J.R. Evaluation of data association and fusion algorithms for tracking in the presence of measurement loss. In AIAA Conference on Navigation, Guidance and Control, Austin, USA, August 2003, pp. 11-14.
9. Arce, Gonzalo R. Nonlinear signal processing: A statistical approach. Wiley-Interscience Inc. Publication, USA, 2005.
