
International Journal of Scientific Research Engineering & Technology (IJSRET), ISSN 2278-0882

Volume 3, Issue 6, September 2014

Comparative Analysis of DWT, PCA+DWT and LDA+DWT for Face Recognition
Taqdir
Assistant Professor
Computer Science and Engineering Department,
Guru Nanak Dev University Regional Campus, Gurdaspur - 143521

ABSTRACT
Face detection is a challenging task that must be performed robustly and efficiently regardless of variability in scale, location, orientation, and illumination. The main objective of this paper is therefore to evaluate different attributes of face detection and recognition on an invariant dataset. The DWT, PCA+DWT, and LDA+DWT techniques are evaluated on the basis of RMS error, CC, PSNR, and MSE for recognizing the same face under varying conditions such as expression, location, scale, aging, and lighting.

1. INTRODUCTION
Image processing is a method of converting an image into digital form and performing operations on it. The purpose of these operations is to enhance the quality of the image or to extract useful features or information from it. The input is an image, a video frame, or a photograph, and the output may be an image or a set of features derived from it; in imaging terms, this often involves representing a 3-D scene as a 2-D image. A natural question is what a digital image actually is, since the term recurs throughout image processing. A digital image is a representation of a two-dimensional image as a finite set of digital values called picture elements, or pixels. Pixel values typically represent gray levels, colors, heights, opacities, etc. [2] [6]
Why image processing is needed:
a) Visualization: to observe or identify objects that are not clearly visible.
b) Image sharpening and restoration: to improve or restore the quality of an image.
c) Image retrieval: to find images of interest.
d) Pattern measurement: to measure the various objects of importance in an image.

e) Image recognition: the main part of image processing, which distinguishes the various objects in an image. A face recognition system is a process of identifying or verifying a person from an image or video frame. In simple terms, this is done by comparing the facial features of an image with the images in a database. Features such as nodal points on the face, the distance between the eyes, and the shape of the cheekbones are some of the key features. Among these, some features have more discriminating power than others, and a higher number of features does not necessarily lead to a higher recognition rate. Hence, the selection of optimal features becomes the primary concern: it reduces the feature size and increases the recognition rate. The performance of face recognition also depends on a controlled environment, as image quality degrades with lighting and illumination conditions, noise, and occlusions. [8]
Face recognition is the process of identifying one or more people in images or videos. Most techniques and algorithms for face recognition extract features and compare them against a database or knowledge base to find the best match. It is an important part of many biometric, security, and surveillance systems, as well as image and video indexing systems.
The face is the index of the mind. It is a complex multidimensional structure and requires an intelligent computing technique for recognition. When using automatic systems for face recognition, computers are easily confused by changes in illumination, variations in pose, and changes in the angle of the face. Such techniques are used for security, law enforcement, and authentication purposes, including in detective agencies and the military. [11]
We can recognize faces with computer vision using a variety of models, including:
a) Extracted features and boosted learning algorithms.
b) Principal component analysis models such as eigenfaces.

www.ijsret.org



c) Neural network models such as dynamic link matching.
d) Template matching.

2. FACE RECOGNITION METHODS
The literature distinguishes between three different feature extraction methods: [7]
I. Generic methods based on edges, lines, and curves.
II. Feature-template-based methods.
III. Structural matching methods that take geometrical constraints on the features into consideration.

Method based on template matching: Template-based approaches represent the most popular techniques used to recognize and detect faces. Unlike the geometric-feature-based approaches, the template-based approaches use a feature vector that represents the entire face template rather than only the most significant facial features. Matching is performed on the whole face, which acts as the template.
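As a minimal sketch (not from the paper), whole-face template matching reduces to nearest-neighbor comparison over flattened pixel vectors; the gallery here is hypothetical random data standing in for stored face templates:

```python
import numpy as np

# Hypothetical gallery of 5 flattened face templates (16x16 = 256 pixels).
rng = np.random.default_rng(1)
gallery = rng.random((5, 256))

def match_template(probe, gallery):
    """Template matching over the whole face: each stored face is the
    template, and the match is the gallery face whose pixel vector is
    closest to the probe in Euclidean distance."""
    d = np.linalg.norm(gallery - probe, axis=1)
    return int(np.argmin(d))
```

A probe image taken from (or close to) a stored template is matched back to that template's index.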
Matching using geometric features: The geometric-feature-based approaches are the earliest approaches to face recognition and detection. In these systems the significant facial features are detected, and the distances between them and other geometric characteristics are combined in a feature vector that is used to represent the face.
Eigenface approach: In this approach, the face recognition problem is treated as an intrinsically two-dimensional recognition problem. The system works by projecting face images onto a feature space that represents the significant variations among known faces. These significant features are characterized as the eigenfaces; they are in fact eigenvectors. The goal is to develop a computational model of face recognition that is fast, reasonably simple, and accurate in a constrained environment. The eigenface approach is motivated by information theory [8], leading to the idea of basing face recognition on a small set of image features that best approximates the set of known images. [9]
Holistic matching methods: In the holistic approach, the complete face region is taken as input to the face matching system. The best-known examples of holistic methods are eigenfaces (the most widely used method for face recognition), Principal Component Analysis, Linear Discriminant Analysis, and Independent Component Analysis.
Feature-based (structural) methods: In these methods, local features such as the eyes, nose, and mouth are first extracted, and their locations and local statistics (geometric and/or appearance) are fed into a structural classifier. A big challenge for feature extraction methods is feature "restoration": the system tries to retrieve features that are invisible due to large variations, e.g. head pose when matching a frontal image with a profile image.
Hybrid methods:
Hybrid face recognition systems use a combination of both holistic and feature extraction methods. Generally, 3D images are used in hybrid methods. The image of a person's face is captured in 3D, allowing the system to note the curves of the eye sockets, for example, or the shapes of the chin or forehead. The 3D system usually proceeds in five steps: detection, position, measurement, representation, and matching.
Detection
Capturing a face, either by scanning a photograph or by photographing a person's face in real time.
Position
Determining the location, size and angle of the head.
Measurement
Assigning measurements to each curve of the face to
make a template with specific focus on the outside of the
eye, the inside of the eye and the angle of the nose.
Representation
Converting the template into a code - a numerical
representation of the face.
Matching
Comparing the received data with faces in the existing
database. [3]

3. FACE RECOGNITION TECHNIQUES


Principal Component Analysis (PCA) is widely used in pattern recognition and computer vision. In face recognition, the principal components of the faces in the training set are calculated, and recognition is achieved using projections of the training face images onto those components, giving better recognition of the images. PCA offers low noise sensitivity and great efficiency at smaller data dimensions. However, construction of the covariance matrix is difficult, and unless sufficient training data is given, the invariance cannot be calculated.
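The eigenfaces computation described above can be sketched as follows. This is a hedged illustration, not the paper's MATLAB implementation: the training faces are hypothetical random data, and the small-matrix eigendecomposition trick assumes far fewer samples than pixels.

```python
import numpy as np

# Hypothetical training set: 7 flattened grayscale face images of 32x32 pixels.
rng = np.random.default_rng(0)
train = rng.integers(0, 256, size=(7, 32 * 32)).astype(np.float64)

# 1. Centre the data on the mean face.
mean_face = train.mean(axis=0)
A = train - mean_face

# 2. Eigendecompose the small 7x7 matrix A A^T instead of the huge
#    pixel-space covariance matrix A^T A (the standard eigenfaces trick),
#    then map the eigenvectors back to pixel space.
eigvals, eigvecs_small = np.linalg.eigh(A @ A.T)
order = np.argsort(eigvals)[::-1][:6]        # top n_samples - 1 components
eigenfaces = (A.T @ eigvecs_small[:, order]).T
eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)

# 3. Project any face into the eigenface subspace; recognition then
#    compares these low-dimensional weight vectors instead of raw pixels.
def project(face, k=6):
    return eigenfaces[:k] @ (face - mean_face)

weights = np.array([project(f) for f in train])
```

Each 1024-pixel face is thus reduced to a 6-dimensional weight vector, which is the dimensionality reduction that makes PCA-based recognition efficient.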
Linear Discriminant Analysis (LDA) reduces dimensionality while preserving much of the class information. This technique aims to maximize the between-class variance and minimize the within-class variance. In



high-dimensional data, a small-sample-size problem arises when the number of training images is less than the dimensionality of the sample space. Because LDA assumes a Gaussian distribution of the data, it fails when the discriminatory information lies not in the mean of the data but in its variance.
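A minimal two-class Fisher LDA sketch (illustrative toy data, not the paper's face experiment) makes the scatter-matrix construction concrete:

```python
import numpy as np

# Hypothetical toy data: 2 classes, 10 samples each, 3 features.
rng = np.random.default_rng(2)
X1 = rng.normal(0.0, 1.0, (10, 3))
X2 = rng.normal(3.0, 1.0, (10, 3))

mean1, mean2 = X1.mean(axis=0), X2.mean(axis=0)
overall = np.vstack([X1, X2]).mean(axis=0)

# Within-class scatter Sw and between-class scatter Sb.
Sw = (X1 - mean1).T @ (X1 - mean1) + (X2 - mean2).T @ (X2 - mean2)
Sb = (10 * np.outer(mean1 - overall, mean1 - overall)
      + 10 * np.outer(mean2 - overall, mean2 - overall))

# Fisher criterion: the projection maximizing between-class variance
# relative to within-class variance is the top eigenvector of Sw^-1 Sb.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
```

Projecting both classes onto `w` separates their means while keeping each class compact, which is exactly the maximize-between / minimize-within objective stated above.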
The Discrete Wavelet Transform (DWT) of an image is obtained by transforming the signal's information into coefficients that can be manipulated, analyzed, and reconstructed. A 2-D image is filtered by a low-pass filter and a high-pass filter along the horizontal and vertical directions. One level of decomposition yields four subbands: one smooth subband, LL, and three detail subbands, LH, HL, and HH. Further DWT levels are calculated by passing the previous approximation (LL) coefficients through the low- and high-pass filters again. [10]
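The one-level decomposition described above can be sketched with the Haar filter pair (a minimal illustration; boundary handling and the LH/HL naming convention vary between libraries):

```python
import numpy as np

def haar_dwt2(img):
    """One level of 2-D Haar DWT: filter along rows with the low-pass
    (sum) and high-pass (difference) pair, downsampling by 2, then
    repeat along columns, yielding the LL, LH, HL, HH subbands."""
    img = img.astype(np.float64)
    # Row transform (horizontal direction).
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Column transform (vertical direction) of each half.
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)   # smooth subband
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)   # detail subbands
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH
```

Feeding the LL output back into `haar_dwt2` produces the next decomposition level, as described above.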

4. COMPARATIVE ANALYSIS
In the following table we compare the performance of DWT, PCA+DWT, and LDA+DWT by taking a dataset of seven images per subject from the Yale database, testing the eighth image against the trained set, and calculating the following attributes:
RMS (root mean square error)
CC (correlation coefficient)
PSNR (peak signal-to-noise ratio)
MSE (mean square error)
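These four attributes can be computed between a recognized image and its reference in a few lines of NumPy (a sketch assuming 8-bit grayscale images; not the paper's MATLAB code):

```python
import numpy as np

def compare_images(a, b):
    """Compute MSE, RMS error, PSNR and correlation coefficient
    between two equal-sized grayscale images (assumed 8-bit)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mse = np.mean((a - b) ** 2)            # mean square error
    rms = np.sqrt(mse)                     # root mean square error
    # PSNR in dB, with 255 as the peak value of an 8-bit image.
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    cc = np.corrcoef(a.ravel(), b.ravel())[0, 1]  # correlation coefficient
    return {"MSE": mse, "RMS": rms, "PSNR": psnr, "CC": cc}
```

Identical images give zero MSE/RMS, infinite PSNR, and CC = 1; larger pixel-wise differences drive MSE and RMS up and PSNR down.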

Parameters for Comparative Analysis

Approach       RMS Error   CC      PSNR    MSE
DWT            78.744      0.761   8.053   1.0180e+004
DWT + LDA      78.876      0.587   8.039   1.0214e+004
DWT + PCA      68.881      0.519   7.011   1.0010e+004

5. CONCLUSION
All techniques were implemented in MATLAB. The Yale database, the most common database used for invariant face systems, was used for testing. The evaluation can be extended by making a comparative analysis on the whole Yale database, and the recognition rate can also be found by using Euclidean distance as the classifier.

REFERENCES
1) J. Yang, D. Zhang, A. Frangi and J.-Y. Yang, Two-dimensional PCA: a new approach to appearance-based face representation and recognition, IEEE Trans. Patt. Anal. Mach. Intell.
2) Ligang Zhang and Dian Tjondronegoro, Facial expression recognition using facial movement features, IEEE Transactions on Affective Computing, Vol. 2, No. 4.
3) H. Hotelling, Analysis of a Complex of Statistical Variables into Principal Components, Journal of Educational Psychology, Vol. 24, pp. 498-520, (1933).
4) C. R. Rao, The Use and Interpretation of Principal Component Analysis in Applied Research, Sankhya A, Vol. 26, pp. 329-358, (1964).
5) L. Sirovich and M. Kirby, Low-dimensional Procedure for the Characterization of Human Faces, Journal of the Optical Society of America A - Optics, Image Science and Vision, Vol. 4, No. 3, pp. 519-524, March (1987).
6) J. L. Rodgers and W. A. Nicewander, Thirteen Ways to Look at the Correlation Coefficient, The American Statistician, Vol. 42, No. 1, pp. 59-66, February (1988).
7) K. Fukunaga, Introduction to Statistical Pattern Recognition, 2nd edition, New York: Academic Press, (1990).
8) Paul Ekman, Basic Emotions, University of California, San Francisco, USA, (1990).
9) M. Kirby and L. Sirovich, Application of the Karhunen-Loeve procedure for the characterization of human faces, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 12, No. 1, pp. 103-108, (1990).
10) M. Turk and A. Pentland, Eigenfaces for Recognition, Journal of Cognitive Neuroscience, Vol. 3, No. 1, pp. 71-86, (1991).
