A THESIS REPORT
Submitted in partial fulfillment of the requirements to
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY KAKINADA
Submitted by
K. APARNA JYOTHI
(Regd. No: 14EM1D4702)
2015-2016
DEPARTMENT OF ELECTRONICS & COMMUNICATION
ENGINEERING
CERTIFICATE
This is to certify that this dissertation work is entitled "Fake Biometric Detection Using Software Based Liveness Detection Technique".
EXTERNAL EXAMINER
ACKNOWLEDGEMENT
Above all, I thank my parents. I feel a deep sense of gratitude for my family,
who formed part of my vision. Finally, I thank one and all who have contributed
directly or indirectly to this thesis.
K. Aparna Jyothi
(14EM1D4702)
ABSTRACT
LIST OF CONTENTS
ACKNOWLEDGEMENT i
ABSTRACT ii
LIST OF CONTENTS iii
LIST OF FIGURES v
LIST OF TABLES vi
CHAPTER 1: INTRODUCTION 1
1.1 INTRODUCTION 2
1.2 AIM AND OBJECTIVES 3
1.3 LITERATURE SURVEY 4
4.1 LIVENESS ASSESSMENT IN AUTHENTICATION SYSTEM 25
4.2 SOFTWARE-BASED TECHNIQUES 25
4.3 IMAGE QUALITY ASSESSMENT FOR LIVENESS DETECTION 26
4.4 FULL-REFERENCE IQ MEASURES 28
4.4.1 Different Types of FR-IQ Measures 30
4.5 NO-REFERENCE IQ MEASURES 33
4.6 DUAL-TREE COMPLEX WAVELET TRANSFORM 36
CHAPTER 5: SOFTWARE ASPECTS 39
5.1 INTRODUCTION TO MATLAB 40
CHAPTER 6: RESULTS 47
6.1 TOP MODULE 48
6.2 INPUT PROCESS WINDOW 49
6.3 IDENTIFICATION PROCESS 50
6.4 IQA PARAMETERS 51
6.5 FINAL RESULT 52
6.6 Advantages 53
6.7 Applications 53
CHAPTER 7: CONCLUSION & FUTURE SCOPE 54
7.1 CONCLUSION 55
7.2 FUTURE SCOPE 55
REFERENCES 56
BIBLIOGRAPHY 59
APPENDIX 61
LIST OF FIGURES
LIST OF TABLES
FAKE BIOMETRIC DETECTION USING SOFTWARE BASED LIVENESS DETECTION
TECHNIQUE
CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
over other approaches and presented a case study of different techniques. This
project can be enhanced by reducing the image size using the Dual-Tree Complex
Wavelet Transform (DTCWT).
The objective of this project is to decrease the image size and reduce the
execution time while enhancing the image clarity.
Biometric key generation: In this approach, a key is derived directly from the
biometric signal. The advantage is that there is no need for user-specific keys
or tokens, as required by biometric salting methods, and the scheme is therefore
scalable. A key, parameterized by the biometric B, is stored instead of the
actual biometric itself. The major problem with this approach is achieving
error tolerance in the key. The defining feature of this category is the attempt
to derive robust binary representations (keys) from noisy biometric data
without the use of additional information.
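As an illustration of the key-derivation idea above, the following hedged sketch (all feature values and thresholds are made up, not taken from any real system) binarizes a noisy feature vector against fixed per-feature references, so that small acquisition noise changes few key bits:

```matlab
% Hypothetical sketch: derive a binary key from a real-valued biometric
% feature vector by thresholding each feature at a fixed reference value.
% Small acquisition noise flips few bits, giving some error tolerance.
ref = [0.3 0.8 0.5 0.1 0.9 0.6];        % assumed per-feature thresholds
B   = [0.41 0.92 0.33 0.05 0.97 0.71];  % enrollment sample (hypothetical)
B2  = B + 0.02*[1 -1 1 -1 1 -1];        % noisy re-acquisition of the same trait
key1 = double(B  > ref);                % key derived at enrollment
key2 = double(B2 > ref);                % key derived at verification
hamming = sum(key1 ~= key2);            % small distance: same user, noisy reading
```

Here the noise is small enough that both acquisitions yield an identical key; real schemes additionally use error-correcting codes to absorb larger deviations.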
Such schemes must tolerate some distance (Euclidean, set distance, etc.)
between noisy biometric readings B and B′. Among the different threats
analyzed, the so-called direct or spoofing attacks have motivated the biometric
community to study the vulnerabilities against this type of fraudulent action
in modalities such as the iris, the fingerprint, the face, the signature, and
even the gait, as well as multimodal approaches.
1.4 ORGANISATION OF THE REPORT:
This thesis consists of seven chapters, including the introduction and
conclusions. Chapter 2 describes the existing system. Chapter 3 presents a
conceptual analysis of the proposed system. Chapter 4 covers the implementation
of the project. Chapter 5 describes the software aspects. Chapter 6 presents
the result analysis, advantages and applications. Chapter 7 gives the
conclusion and future scope.
CHAPTER 2
EXISTING SYSTEM
Fig 2.1: Types of attacks detected by hardware-based & software-based liveness detection
Liveness detection methods are usually classified into one of two groups shown in
figure 2.1.
(i) Hardware-based techniques, which add some specific device to the sensor in
order to detect particular properties of a living trait (e.g., fingerprint sweat,
blood pressure, or specific reflection properties of the eye).
(ii) Software-based techniques, in which the fake trait is detected once the
sample has been acquired with a standard sensor (i.e., the features used to
distinguish between real and fake traits are extracted from the biometric
sample, and not from the trait itself).
The two types of methods present certain advantages and drawbacks over each
other and, in general, a combination of both would be the most desirable
protection approach to increase the security of biometric systems.
2.2 DRAWBACKS OF THE EXISTING SYSTEM:
The existing system uses hardware-based techniques to implement biometric
protection. Hardware-based techniques add some specific device to the sensor in
order to detect particular properties of a living trait (e.g., fingerprint
sweat, blood pressure, or specific reflection properties of the eye). As a
coarse comparison, hardware-based schemes usually present a higher fake
detection rate, but the drawbacks of the existing system are:
Higher cost,
Greater complexity,
Vulnerability to artificial traits: some expertise is required to create a
dummy from such a print, but every dental technician has the skills and
equipment to create one. This is an accurate description of how to create a
dummy of the fingerprint. A picture of a stamp that is created using this
method can be found in Figure 2.
In software-based techniques, the fake trait is detected once the sample has
been acquired with a standard sensor (i.e., the features used to distinguish
between real and fake traits are extracted from the biometric sample, and not
from the trait itself). Several advances that originated in both the
cryptographic and biometric communities address this problem.
As a comparison, hardware-based schemes usually present a higher fake
detection rate, while software-based techniques are in general less expensive (as no
extra device is needed), and less intrusive since their implementation is transparent to
the user. Furthermore, as they operate directly on the acquired sample (and not on the
biometric trait itself), software-based techniques may be embedded in the feature
extractor module which makes them potentially capable of detecting other types of
illegal break-in attempts not necessarily classified as spoofing attacks. For instance,
software-based methods can protect the system against the injection of reconstructed
or synthetic samples into the communication channel between the sensor and the
feature extractor.
CHAPTER 3
PROPOSED SYSTEM
This process may use a smart card, username or ID number (e.g. a PIN) to
indicate which template should be used for comparison.
In the first step, reference models for all the users are generated and stored
in the model database.
In the second step, some samples are matched with the reference models to
generate the genuine and impostor scores and to calculate the threshold.
The third step is the testing step.
The latter function can only be achieved through biometrics, since other
methods of personal recognition such as passwords, PINs or keys are ineffective.
The first time an individual uses a biometric system is called enrollment.
During enrollment, biometric information from an individual is captured and
stored. In subsequent uses, biometric information is detected and compared with
the information stored at the time of enrollment. Note that it is crucial that
storage and retrieval within such systems themselves be secure if the biometric
system is to be robust.
In the block diagram of Biometric system, the first block (sensor) is the
interface between the real world and the system; it has to acquire all the necessary
data. It is an image acquisition system, but it can change according to the
characteristics desired.
The second block performs all the necessary pre-processing: it has to
remove background noise from the sensor, enhance the input, apply some
kind of normalization, etc.
In the third block the necessary features are extracted. This is an
important step, as the correct features need to be extracted in an optimal way.
A vector of numbers or an image with particular properties is used to create a
template. A template is a synthesis of the relevant characteristics extracted
from the source. Elements of the biometric measurement that are not used in the
comparison algorithm are discarded in the template to reduce the file size and
to protect the identity of the enrollee. During the enrollment phase, the
template is simply stored somewhere (on a card, within a database, or both).
During the matching phase, the obtained template is passed to a matcher that
compares it with other existing templates, estimating the distance between them
using some algorithm (e.g. Hamming distance).
The matching program analyzes the template against the input. The result is
then output for the specified use or purpose (e.g. entrance to a restricted area).
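The matching step described above can be sketched as follows; the templates and the acceptance threshold are hypothetical values, and the Hamming distance plays the role of the matcher:

```matlab
% Sketch of the matching phase: compare a probe template with a stored one
% using the normalized Hamming distance, accept if below a threshold.
stored = [1 0 1 1 0 0 1 0 1 1];   % enrolled binary template (hypothetical)
probe  = [1 0 1 0 0 0 1 0 1 1];   % freshly acquired template (one bit differs)
d = sum(stored ~= probe) / numel(stored);   % normalized Hamming distance
threshold = 0.2;                  % assumed decision threshold
accepted = d < threshold;         % 1 bit out of 10 differs, so accepted
```

In a deployed system the threshold would be tuned on genuine and impostor score distributions, as described in the enrollment steps above.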
Selection of a biometric for any practical application depends upon the
characteristic measurements and user requirements. We should consider
performance, acceptability, circumvention, robustness, population coverage,
size, and identity-theft deterrence when selecting a particular biometric.
Selection of a biometric based on user requirements also considers sensor
availability, device availability, computational time and reliability, cost,
sensor area and power consumption.
apply the model-based method on these mixed minutiae (including the real and
virtual minutiae).
After that, the reconstructed orientation field is fed into the matching stage,
in combination with conventional minutiae-based matching. In this way we
create an orientation-field fingerprint using minutiae extraction techniques.
There are many different minutia extraction fields for different fingerprint
ridges; here we use mainly three different types of minutia extraction techniques.
Our present proposal is a liveness detection method which uses this technique
to decide whether the person who uses the system is genuine or not. Generally a
person can cheat another person but not a machine; nowadays, however, cheaters
try to fool the system by preparing dummy fingerprints. To protect against
these unwanted users, we use fingerprint recognition with minutiae extraction
fields.
When the fingerprint is complemented with the minutiae, we get more
information, and better performance can be obtained by fusing the results in
biometric systems. The three main minutiae extraction fields used in
fingerprint ridges are given below.
The major minutiae features of fingerprint ridges are: ridge ending,
bifurcation, and short ridge.
The ridge ending is the point at which a ridge terminates. Bifurcations are points at
which a single ridge splits into two ridges. Short ridges (or dots) are ridges which
are significantly shorter than the average ridge length on the fingerprint.
Minutiae and patterns are very important in the analysis of fingerprints since
no two fingers have been shown to be identical.
Matching techniques determine whether two sets of fingerprint ridge detail come
from the same finger. There exist multiple algorithms that do fingerprint
matching in many different ways. Some methods involve
matching minutiae points between the two images, while others look for
similarities in the bigger structure of the fingerprint. In this project we propose a
method for fingerprint matching based on minutiae matching.
However, unlike conventional minutiae matching algorithms our algorithm also
takes into account region and line structures that exist between minutiae pairs.
This allows for more structural information of the fingerprint to be accounted for
thus resulting in stronger certainty of matching minutiae. Also, since most of the
region analysis is pre-processed it does not make the algorithm slower. The
Evidence from the testing of the pre-processed images gives stronger assurance
that using such data could lead to faster and stronger matches.
3.3 IRIS:
This section describes the processing capabilities of the IRIS vision systems,
specifically the IRIS v. The first stage of the architecture embeds sensors, parallel processing
analog and mixed-signal circuitry, control circuitry and memory. This front-stage
is implemented through dedicated bio-inspired chips. The second stage of the IRIS
vision system architecture is a digital microprocessor. The combination of parallel
preprocessing and serial post-processing makes the IRIS systems very efficient;
in particular, the IRIS systems are capable of closing the
sensor-processing-actuation loop at high speed. In this demo, the IRIS v is
used to recognize data matrix codes at a rate of more than 200 codes/sec.
3.3.1 Iris image enhancement:
Iris image enhancement is used to improve the image clarity for the ease of
further operations. Since the iris images acquired from sensors or other media
are not assured of perfect quality, enhancement methods, which increase the
contrast between ridges and furrows and connect the false broken points of
ridges due to an insufficient amount of ink, are very useful for keeping a
higher accuracy in iris recognition. Two methods are used for iris image
enhancement: the first is Histogram Equalization; the second is the Fourier
Transform.
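A minimal sketch of the histogram equalization step, using only basic matrix operations on a synthetic low-contrast image (the pixel values are made up for illustration):

```matlab
% Histogram equalization of an 8-bit image via the cumulative histogram:
% grey levels are remapped so that the output histogram is roughly flat,
% stretching a narrow range of intensities over the full [0, 255] scale.
I = uint8([50 52 54; 56 58 60; 62 64 66]);   % low-contrast synthetic image
h = histc(double(I(:)), 0:255);              % 256-bin intensity histogram
cdf = cumsum(h) / numel(I);                  % normalized cumulative histogram
lut = uint8(round(255 * cdf));               % grey-level mapping table
J = lut(double(I) + 1);                      % equalized image (1-based indexing)
```

After remapping, the nine distinct input values spread across the full range, which is exactly the contrast increase the text describes; MATLAB's built-in histeq performs the same operation.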
Thinning reduces the ridges of the iris image to just a single pixel width
[5, 7]. The requirements of a good thinning algorithm with respect to an iris
are:
The thinned iris image obtained should be of single pixel width with no
discontinuities.
Each ridge should be thinned to its centre pixel.
Noise and singular pixels should be eliminated.
No further removal of pixels should be possible after completion of
thinning process.
One approach uses an iterative, parallel thinning algorithm: in each scan of
the full iris image, the algorithm marks redundant pixels in each small (3x3)
image window, and finally removes all the marked pixels after several scans.
However, it has been found that such an iterative, parallel thinning algorithm
has poor efficiency, although it can produce an ideal thinned ridge map after
enough scans. Another approach uses an all-in-one method to extract thinned
ridges from gray-level iris images directly, tracing along the ridges having
maximum gray intensity value. However, binarization is implicitly enforced,
since only pixels with maximum gray intensity value are retained.
The advancement of each trace step still has a large computational complexity,
although it does not require pixel-by-pixel movement as in other thinning
algorithms. Thus a third method is put forward, which uses the built-in
morphological thinning function in MATLAB to do the thinning; after that, an
enhanced thinning algorithm is applied to obtain an accurately thinned image.
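The built-in referred to above is bwmorph from the Image Processing Toolbox; a minimal usage sketch on a synthetic thick ridge:

```matlab
% Morphological thinning with MATLAB's built-in bwmorph; the 'thin'
% operation with Inf repeats until the result stops changing, reducing
% a thick ridge towards single-pixel width.
bw = zeros(9, 9);
bw(4:6, 2:8) = 1;                 % a synthetic ridge three pixels thick
thin = bwmorph(bw, 'thin', Inf);  % thinned ridge map
```

The enhanced thinning pass described in the next section then removes the remaining erroneous two-pixel-wide locations.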
3.4.2 Enhanced Thinning:
Ridge thinning eliminates the redundant pixels of ridges until the ridges are
just one pixel wide. Ideally, the width of the skeleton should be strictly one
pixel. However, this is not always true: there are still some locations where
the skeleton has a two-pixel width, at erroneous pixel locations. An erroneous
pixel is defined as one with more than two 4-connected neighbours. These
erroneous pixels exist in the fork regions where bifurcations should be detected,
but they have CN = 2 instead of CN > 2. The existence of erroneous pixels may
destroy the integrity of spurious bridges and spurs, exchange the type of
minutiae points, and cause true bifurcations to be missed.
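The crossing number CN mentioned above is computed from the eight neighbours of a pixel on the thinned ridge map; a small self-contained sketch:

```matlab
% Crossing number of the centre pixel of a 3x3 binary window: half the
% number of 0/1 transitions when walking once around the eight neighbours.
% CN = 1 -> ridge ending, CN = 2 -> ridge continuation, CN = 3 -> bifurcation.
W_end = [0 0 0; 0 1 1; 0 0 0];    % ridge ending at the centre
W_bif = [0 1 0; 0 1 0; 1 0 1];    % ridge splitting into three branches
cn = @(W) sum(abs(diff([W(1,1) W(1,2) W(1,3) W(2,3) ...
                        W(3,3) W(3,2) W(3,1) W(2,1) W(1,1)]))) / 2;
cn_end = cn(W_end);               % evaluates to 1
cn_bif = cn(W_bif);               % evaluates to 3
```

An erroneous pixel in a fork region yields CN = 2 here even though a bifurcation is present, which is why the enhanced thinning step is needed first.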
recognition was better than its visible counterpart outdoors. Also, the
recognition results of thermal imagery for both indoors and outdoors were found
to be similar, thus showing that illumination had very little effect on thermal
imagery.
CHAPTER 4
IMPLEMENTATION OF PROJECT
In this case the fake trait is detected once the sample has been acquired with a
standard sensor (i.e., features used to distinguish between real and fake traits are
extracted from the biometric sample, and not from the trait itself).
If a synthetically produced image is directly injected into the communication
channel before the feature extractor, this fake sample will most likely lack
some of the properties found in natural images. Following this
quality-difference hypothesis, in the present
research work we explore the potential of general image quality assessment as a
protection method against different biometric attacks (with special attention to
spoofing). As the implemented features do not evaluate any specific property of a
given biometric modality or of a specific attack, they may be computed on any
image. This gives the proposed method a new multi-biometric dimension which is
not found in previously described protection schemes. In the current state-of-the-art,
the rationale behind the use of IQA features for liveness detection is supported by
three factors:
Image quality has been successfully used in previous works for image
manipulation detection and steganalysis in the forensic field. To a certain
extent, many spoofing attacks, especially those which involve taking a picture
of a facial image displayed on a 2D device (e.g., spoofing attacks with printed
iris or face images), may be regarded as a type of image manipulation which can
be effectively detected, as shown in the present research work, by the use of
different quality features.
In addition to the previous studies in the forensic area, different features
measuring trait-specific quality properties have already been used for Liveness
detection purposes in fingerprint and iris applications. However, even though these
two works give a solid basis to the use of image quality as a protection method in
biometric systems, none of them is general. For instance, measuring the ridge and
valley frequency may be a good parameter to detect certain fingerprint spoofs, but it
cannot be used in iris Liveness detection.
On the other hand, the amount of occlusion of the eye is valid as an iris anti-
spoofing mechanism, but will have little use in fake fingerprint detection. This same
reasoning can be applied to the vast majority of the Liveness detection methods
found in the state-of-the art. Although all of them represent very valuable works
which bring insight into the difficult problem of spoofing detection, they fail to
generalize to different problems as they are usually designed to work on one specific
modality and, in many cases, also to detect one specific type of spoofing attack.
Human observers very often refer to the different appearance of real and fake
samples to distinguish between them. As stated above, the different metrics and
methods designed for IQA intend to estimate in an objective and reliable way the
perceived appearance of images by humans. Different quality measures present
different sensitivities to image artifacts and distortions. For instance, measures like
the mean squared error respond more to additive noise, whereas others such as the
spectral phase error are more sensitive to blur; while gradient-related features react to
distortions concentrated around edges and textures.
Therefore, using a wide range of IQMs exploiting complementary image quality
properties should permit to detect the aforementioned quality differences between
real and fake samples expected to be found in many attack attempts (i.e., providing
the method with multi- attack protection capabilities). All these observations lead us
to believe that there is sound proof for the quality-difference hypothesis and that
image quality measures have the potential to achieve success in biometric protection
tasks.
The same approach, previously used for image manipulation detection and
steganalysis, is implemented here. The input grey-scale image I (of size
N x M) is filtered with a low-pass Gaussian kernel (sigma = 0.5 and size 3 x 3)
in order to generate a smoothed version Î. Then, the quality difference between
the two images (I and Î) is computed according to the corresponding
full-reference IQA metric. This approach assumes that the loss of quality
produced by Gaussian filtering differs between real and fake biometric samples.
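The procedure just described can be sketched directly; the 3x3, sigma = 0.5 kernel is built explicitly, and the mean squared error stands in for whichever full-reference metric is chosen (the input image is a synthetic placeholder):

```matlab
% Full-reference IQA via Gaussian smoothing: filter the image with a 3x3
% Gaussian kernel (sigma = 0.5), then measure the quality loss between the
% original and the smoothed version. MSE stands in for the FR metric.
I = kron(ones(16), [1 0; 0 1]);          % 32x32 synthetic binary test pattern
[x, y] = meshgrid(-1:1, -1:1);
sigma = 0.5;
G = exp(-(x.^2 + y.^2) / (2*sigma^2));
G = G / sum(G(:));                       % normalized low-pass Gaussian kernel
Ihat = conv2(I, G, 'same');              % smoothed version of the input
mse = mean((I(:) - Ihat(:)).^2);         % quality loss between I and Ihat
```

Under the quality-difference hypothesis, this loss value is expected to distribute differently for real and fake samples, and so becomes one feature of the classifier.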
Gaussian filter:
In electronics and signal processing, a Gaussian filter is a filter whose impulse
response is a Gaussian function (or an approximation to it). Gaussian filters have the
properties of having no overshoot to a step function input while minimizing the rise
and fall time. This behavior is closely connected to the fact that the Gaussian filter
has the minimum possible group delay. It is considered the ideal time domain
filter, just as the sinc filter is the ideal frequency domain filter. These
properties are important in areas such as oscilloscopes and digital
telecommunication systems.
Mathematically, a Gaussian filter modifies the input signal by convolution with a
Gaussian function; this transformation is also known as the Weierstrass transform.
The Gaussian function is non-zero for x ∈ (-∞, ∞) and would theoretically
require an infinite window length. However, since it decays rapidly, it is often
reasonable to truncate the filter window and implement the filter directly for narrow
windows, in effect by using a simple rectangular window function. In other cases, the
truncation may introduce significant errors. Better results can be achieved by instead
using a different window function; see scale space implementation for details.
A truncated Gaussian filter behaves like an ordinary finite impulse response
filter, with the only difference that the Fourier transform of the filter
window is explicitly known. Due to the central limit theorem, the Gaussian can
be approximated by several runs of a very simple filter such as the moving average.
The simple moving average corresponds to convolution with the constant B-spline (
a rectangular pulse ), and, for example, four iterations of a moving average yields a
cubic B-spline as filter window which approximates the Gaussian quite well.
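The approximation described above can be demonstrated directly: convolving a normalized box (moving-average) kernel with itself four times yields a cubic B-spline window that is already close to a sampled Gaussian:

```matlab
% Approximating a Gaussian window by iterated moving averages, as the
% central limit theorem suggests: four box filters give a cubic B-spline.
b = ones(1, 5) / 5;     % constant B-spline: a normalized moving-average kernel
k = b;
for n = 1:3
  k = conv(k, b);       % four box filters in total
end
% k is symmetric, unimodal, and sums to one, like a sampled Gaussian.
```

This is often cheaper than evaluating the Gaussian directly, since each box filter can be implemented with a running sum.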
4.4.1 Different Types of FR-IQ Measures:
Error Sensitivity Measures: Traditional perceptual image quality assessment
approaches are based on measuring the errors (i.e., signal differences) between the
distorted and the reference images, and attempt to quantify these errors in a way that
simulates human visual error sensitivity features. Although their efficiency as signal
fidelity measures is somewhat controversial, up to date, these are probably the most
widely used methods for IQA as they conveniently make use of many known
psychophysical features of the human visual system, they are easy to calculate and
usually have very low computational complexity. Several of these metrics have been
included in the 25-feature parameterization proposed in the present work. For
clarity, these features have been classified here into five different groups,
which are listed below:
One example is the Mean Angle-Magnitude Similarity (MAMS).
The Visual Information Fidelity (VIF) metric measures the mutual information
between the reference image and the output of the HVS channel for the test
image. Therefore, to compute the VIF metric, the entire reference image is
required, as quality is assessed on a global basis. On the other hand, the RRED
metric approaches the problem of QA from the perspective of measuring the
amount of local information difference between the reference image and the
projection of the distorted image onto the space of natural images, for a given
sub-band of the wavelet domain.
Unlike the objective reference IQA methods, in general the human visual
system does not require a reference sample to determine the quality level of an
image. Following this same principle, automatic no-reference image quality
assessment (NR-IQA) algorithms try to handle the very complex and challenging
problem of assessing the visual quality of images, in the absence of a reference.
Presently, NR-IQA methods generally estimate the quality of the test image
according to some pre-trained statistical models. Depending on the images used to
train this model and on the a priori knowledge required, the methods are coarsely
divided into one of three trends:
Distortion-specific approaches: These techniques estimate, for example, the
sharpness of the image by computing the difference between the power in the
lower and upper frequencies of the Fourier spectrum. In the HLFI entry in
Table I, i_l, i_h, j_l, j_h are respectively the indices corresponding to the
lower and upper frequency thresholds considered by the method. In the current
implementation, i_l = i_h = 0.15N and j_l = j_h = 0.15M.
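As a hedged illustration (a simplified stand-in for the HLFI, not the exact formula from Table I), the following compares the power inside a low-frequency band of width 0.15N against the rest of the spectrum; a perfectly flat image scores +1, while a checkerboard of pure high frequency scores lower:

```matlab
% Simplified HLFI-style sharpness index: (low - high) frequency power,
% normalized by total power, using the centred 2D Fourier spectrum.
N = 16;
r = round(0.15 * N);                       % low-frequency half-width (0.15N)
c = N/2 + 1;                               % DC position after fftshift

f1 = kron(ones(N/2), [0 1; 1 0]);          % checkerboard: pure high frequency
F1 = abs(fftshift(fft2(f1))).^2;           % centred power spectrum
low1 = sum(sum(F1(c-r:c+r, c-r:c+r)));     % power in the low-frequency band
idx_sharp = (2*low1 - sum(F1(:))) / sum(F1(:));

f2 = 0.5 * ones(N);                        % constant image: only the DC term
F2 = abs(fftshift(fft2(f2))).^2;
low2 = sum(sum(F2(c-r:c+r, c-r:c+r)));
idx_flat = (2*low2 - sum(F2(:))) / sum(F2(:));
```

The ordering of the two scores is what matters for liveness features: recaptured or printed samples tend to lose high-frequency detail and so drift towards the flat-image end of the scale.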
Training-based approaches: Similarly to the previous class of NR-IQA methods,
in these techniques a model is trained using clean and distorted images. Then,
the quality score is computed based on a number of features extracted from the test
image and related to the general model. However, unlike the former approaches,
these metrics intend to provide a general quality score not related to a specific
distortion. To this end, the statistical model is trained with images affected by
different types of distortions. This is the case of the Blind Image Quality Index
(BIQI) described in, which is part of the 25 feature set used in the present work. The
BIQI follows a two-stage framework in which the individual measures of different
distortion-specific experts are combined to generate one global quality score.
Natural Scene Statistic approaches: These blind IQA techniques use a priori
knowledge taken from natural scene distortion-free images to train the initial model
(i.e. no distorted images are used). The rationale behind this trend relies on the
hypothesis that undistorted images of the natural world present certain regular
properties which fall within a certain subspace of all possible images.
If quantified appropriately, deviations from the regularity of natural statistics
can help to evaluate the perceptual quality of an image. This approach is followed by
the Natural Image Quality Evaluator (NIQE) used in the present work. The NIQE is a
completely blind image quality analyzer based on the construction of a quality aware
collection of statistical features (derived from a corpus of natural undistorted
images).
Measured Parameters From An Image:
Image quality assessment can be performed in two different ways: the
full-reference method and the no-reference method. Here we calculate 25
different image quality measures; some of them are listed in the table below.
SC     Structural Content
MD     Maximum Difference
AD     Average Difference
RAMD   R-Averaged Maximum Difference
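The first three measures in the table are simple enough to state directly; a toy sketch with a made-up reference image I and distorted version Ihat:

```matlab
% Structural Content (SC), Maximum Difference (MD) and Average Difference
% (AD) between a reference image I and a distorted version Ihat.
I    = [10 20; 30 40];                 % toy reference image (made-up values)
Ihat = [12 18; 29 44];                 % toy distorted image
SC = sum(I(:).^2) / sum(Ihat(:).^2);   % ratio of signal energies
MD = max(abs(I(:) - Ihat(:)));         % largest pixel deviation
AD = mean(I(:) - Ihat(:));             % mean signed deviation
```

In the full system, I is the acquired sample and Ihat its Gaussian-smoothed version, and these values become three of the 25 features fed to the classifier.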
This redundancy of two provides extra information for analysis, but at the
expense of extra computational power. It also provides approximate
shift-invariance (unlike the DWT), yet still allows perfect reconstruction of
the signal.
The design of the filters is particularly important for the transform to work
correctly, and the necessary characteristics are:
The low-pass filters in the two trees must differ by half a sample period.
The Discrete Wavelet Transform (DWT) has been a founding stone for all
applications of digital image processing: from image denoising to pattern
recognition, passing through image encoding and more. While being a complete and
(quasi-)invertible transform of 2D data, the Discrete Wavelet Transform gives
rise to a phenomenon known as the checkerboard pattern, which means that data
orientation analysis is impossible. Furthermore, the DWT is not shift-invariant,
making it less useful for methods based on the computation of invariant
features. In an attempt to
solve these two problems affecting the DWT, Freeman and Adelson first introduced
the concept of Steerable filters, which can be used to decompose an image into a
Steerable Pyramid, by means of the Steerable Pyramid Transform (SPT).
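The shift-variance of the critically sampled DWT noted above can be seen with a one-level Haar decomposition: shifting the input by one sample does not simply shift the detail coefficients (here the surviving coefficient flips sign):

```matlab
% One-level Haar DWT detail coefficients of an impulse and of the same
% impulse shifted by one sample: the outputs are not shifted copies of
% each other, demonstrating the shift-variance of the critically sampled DWT.
g = [1 -1] / sqrt(2);                         % Haar high-pass filter
x1 = zeros(1, 16); x1(9)  = 1;                % unit impulse
x2 = zeros(1, 16); x2(10) = 1;                % impulse shifted by one sample
d1full = conv(x1, g); d1 = d1full(2:2:end);   % filter, then downsample by 2
d2full = conv(x2, g); d2 = d2full(2:2:end);
% The nonzero coefficient stays in the same position but changes sign,
% so d2 is not a shifted version of d1.
```

The dual-tree construction discussed below avoids this behaviour by running a second tree whose filters are offset by half a sample.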
Thus, a further development of the SPT, involving the use of a Hilbert pair of
filters to compute the energy response, has been accomplished with the Complex
Wavelet Transform (CWT). Similarly to the SPT, in order to retain the whole Fourier
spectrum, the transform needs to be over-complete by a factor of 4, i.e. there are 3
complex coefficients for each real one. While the CWT is also efficient, since it can
be computed through separable filters, it still lacks the Perfect Reconstruction
property. Therefore, Kingsbury also introduced the Dual-Tree Complex Wavelet
Transform (DTCWT), which has the added characteristic of perfect reconstruction
while retaining approximate shift-invariance.
The present work has made several contributions to the state-of-the-art in the
field of biometric security, in particular:
i) it has shown the high potential of image quality assessment for securing
biometric systems.
CHAPTER 5
SOFTWARE ASPECTS
5.1 INTRODUCTION TO MATLAB
In this section we present the basics of working with images in Matlab. We
will see how to read, display, write and convert images. We will also talk about the
way images are represented in Matlab and how to convert between the different
types.
The Matlab command for reading an image is
imread('filename')
Note that we suppress the output with a semicolon, otherwise we will get in the
output all the numbers that make the image. 'filename' is the name of a file in the
current directory or the name of a file including the full path. Try
>> f = imread('chest-xray.tif');
We now have an array f where the image is stored
>> whos f
Name      Size       Bytes     Class
f         494x600    296400    uint8 array
Grand total is 296400 elements using 296400 bytes
f is an array of class uint8, and size 494x600. That means 494 rows and 600
columns. We can see some of this information with the following commands
>> size(f)
ans = 494 600
>> class(f)
ans = uint8
We will talk later on about the different image classes.
Sometimes it is useful to determine the number of rows and columns in an image.
We can achieve this by means of
>> [M, N] = size(f);
To display the image we use imshow
>> imshow(f)
A figure window will open displaying the image.
Note that in the figure toolbar we have buttons that allow us to zoom parts of the
image. The syntax imshow(f, [low high])displays all pixels with values less than or
equal to low as black, all pixels with values greater or equal to high as white. Try
>> imshow(f,[10 50])
Once an image has been labeled, the regionprops command can be used to obtain
quantitative information about the objects:
D = regionprops(L, properties)
There is a lot of useful statistical information about objects that can be
extracted using regionprops.
Finally,
>> imshow(f,[])
sets the variable low to the minimum value of array f and high to its maximum value.
This is very useful for displaying images that have a low dynamic range. This occurs
very frequently with 16-bit images from a microscope.
>> imshow(f(200:260,150:220))
Another MATLAB tool available to display images and do simple image manipulations
is imtool. Try
>> imtool(f)
In the figure window we have now available the following tools: overview, pixel
region, image information, adjust contrast and zoom. Try them.
Images can be written to disk using the function imwrite. Its format is
imwrite(f, 'filename')
with this syntax, filename must include the file format extension. Alternatively
Intensity images
o uint16 [0, 65535] (CCD cameras on microscopes)
o uint8[0, 255] (From your standard digital camera)
o double [-10^308, 10^308]
Binary images (black and white)
o logical, 0 or 1
Raw images typically come in the form of an unsigned integer (uint16 denotes 16-bit
unsigned integer, and uint8 denotes 8-bit unsigned integer). However floating-point
operations (mathematical operations that involve a decimal point, such as log(a)) can
only be done with arrays of class double.
Hence, to work on a raw image, first convert it from uint16 or uint8 to double using
the double function:
>> f = imread('actin.tif');
>> g = double(f);
Now type
>> whos;
to see the different data types associated with each variable. Note that while the data
type changes, the actual numbers after the conversion remain the same.
Many MATLAB image processing operations operate under the assumption that the
image is scaled to the range [0,1]. For instance, when imshow displays a double
image, it displays an intensity of 0 as black and 1 as white. You can automatically
create a scaled double image using mat2gray:
>> h = mat2gray(g);
Certain image processing commands only work with scaled double images.
Finally, we can convert an intensity image into a binary image using the
command im2bw(f, T), where T is a threshold in the range [0, 1]. Matlab converts f
to class double, and then sets to 0 the values below T and to 1 the values above T.
The result is of class logical. See the following example.
We wish to convert the following double image
>> f = [1 2; 3 4]
f =
     1     2
     3     4
to binary such that values 1 and 2 become 0 and the other two values become 1. First
we convert it to the range [0, 1]
>> g = mat2gray(f)
g =      0    0.3333
    0.6667    1.0000
We can convert the previous image to a binary one using a threshold, say, of value
0.6:
>> gb = im2bw(g, 0.6)
gb = 0 0
1 1
Note that we can obtain the same result using relational operators
>> gb = f > 2
gb = 0    0
     1    1
Binary images generated by thresholding often form the basis for extracting
morphological features in microscopy images. In the next section, we will extract
some basic quantitative information about objects in an image by first using
thresholding to generate a binary image and then using the regionprops command to extract quantitative information from the binary image.
Basic Segmentation using Thresholding
Many biological images consist of light objects over a constant dark background (especially those obtained using fluorescence microscopy), such that object and background pixels have gray levels grouped into two dominant modes. One obvious way to extract the objects from the background is to select a threshold T that separates these modes:
g(x,y) = 1 if f(x,y) > T
= 0 otherwise
where g(x,y) is the threshold binary image of f(x,y). We can implement the
thresholding operation in MATLAB by the following function:
g = im2bw(f,T)
The first argument f gives the input image, and the second argument T gives the
threshold value.
Image histograms
We need to choose a threshold value T that properly separates light objects from the
dark background. Image histograms provide a means to visualize the distribution of
grayscale intensity values in the entire image. They are useful for estimating
background values, determining thresholds, and for visualizing the effect of contrast
adjustments on the image (next section). The MATLAB function to visualize image histograms is imhist:
>> f = imread('chest-xray.tif');
>> imhist(f);
The histogram has 256 bins by default. The following command uses 20 bins instead:
>> imhist(f,20);
A good value for T can be obtained by visually inspecting the image histogram; alternatively, the graythresh function computes a threshold automatically using Otsu's method. For example, load and scale the rice image
>> im = imread('rice.png');
>> im = mat2gray(im);
>> level = graythresh(im);
and create a new binary image using the obtained threshold value:
>> imb = im2bw(im, level);
Note that the thresholding operation segments the rice grains quite well. However, a problem in this image is that the rice grains near the bottom of the image aren't segmented well: the background is uneven and darker at the bottom, leading to incorrect segmentation. We'll see a way to correct for this uneven background using image processing later. Using the binary image, we can then calculate region properties of objects in the image, such as area, diameter, etc. An object in a binary image is a set of white pixels (ones) that are connected to each other. We can enumerate all the objects in the image using the bwlabel command:
[L, num] = bwlabel(f)
where L gives the labeled image, and num gives the number of objects. To label the binary image of the rice grains, type:
>> [L, num] = bwlabel(imb);
Extract the area and perimeter of individual objects in the labeled image as follows:
>> D = regionprops(L, 'area', 'perimeter');
The information in D is stored in an object called a structure array. A
structure array is a variable in MATLAB that contains multiple fields for storage of
information. You can access the field information in D as follows:
>> D
D = 151x1 struct array with fields:
Area
Perimeter
Access an individual element in the structure array by referring to its index in parentheses:
>> D(1)
ans = Area: 145
Perimeter: 58.2843
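The fields of the structure array can also be collected into ordinary vectors for further analysis; a brief sketch (the area threshold of 100 pixels is chosen purely for illustration):

```matlab
>> areas = [D.Area];            % gather the Area field of every object into one vector
>> perims = [D.Perimeter];      % likewise for the Perimeter field
>> large = find(areas > 100);   % indices of the grains whose area exceeds 100 pixels
>> length(large)                % count of such grains
```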
CHAPTER 6
RESULT ANALYSIS
6.1 TOP MODULE:
After selecting the Process label, another window opens, shown in figure 6.2; this window is called the Input window.
Based on the parameters calculated for the loaded image, the system determines that the image is fake: several of the parameters take negative values, as shown in figure 6.4, so the loaded image is declared fake, as shown in figure 6.5.
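A minimal sketch of this decision step, assuming the IQA parameter vector x assembled in the appendix source code (the negative-value rule follows the description above; the code itself is illustrative, not the exact implementation):

```matlab
% x holds the computed image-quality measures for the loaded image
if any(x < 0)
    disp('Loaded image is FAKE');
else
    disp('Loaded image is REAL');
end
```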
6.6 ADVANTAGES:
The error rates achieved by the proposed protection scheme are in many
cases lower than those reported by other trait-specific state-of-the-art anti-
spoofing systems.
It is also used in forensic labs for the detection of fingerprints of criminals.
CHAPTER 7
CONCLUSION & FUTURE SCOPE
7.1 CONCLUSION:
REFERENCES
BIBLIOGRAPHY
1. S. Jayaraman and S. Esakkirajan, Digital Image Processing, McGraw Hill Publications.
2. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Pearson Education.
3. S. Sridhar, Digital Image Processing, Oxford Publishers.
4. B. Chanda and D. Dutta Majumder, Digital Image Processing and Analysis, Prentice Hall of India.
5. P. Ramesh Babu, Digital Signal Processing, Scitech Publications.
6. www.scirp.org/journal/wsn
7. jwcn.eurasipjournals.com
APPENDIX
SOURCE CODE
function varargout = DeskGUI(varargin)
% DESKGUI MATLAB code for DeskGUI.fig
% DESKGUI, by itself, creates a new DESKGUI or raises the existing
% singleton*.
% H = DESKGUI returns the handle to a new DESKGUI or the handle to
% the existing singleton*.
% DESKGUI('CALLBACK',hObject,eventData,handles,...) calls the local
% function named CALLBACK in DESKGUI.M with the given input arguments.
% DESKGUI('Property','Value',...) creates a new DESKGUI or raises the
% existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before DeskGUI_OpeningFcn gets called. An
% unrecognized property name or invalid value makes property application
% stop. All inputs are passed to DeskGUI_OpeningFcn via varargin.
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
% instance to run (singleton)".
% See also: GUIDE, GUIDATA, GUIHANDLES
% --- Outputs from this function are returned to the command line.
function varargout = DeskGUI_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% a = imread('icon\a.jpg');
% b=imresize(a,0.4);
% set(handles.input, 'CData', b);
% a = imread('icon\e.jpg');
% b=imresize(a,0.4);
% set(handles.exit, 'CData', b);
% a = imread('icon\h.jpg');
% b=imresize(a,0.4);
% set(handles.help, 'CData', b);
% a = imread('icon\p.jpg');
% b=imresize(a,0.2);
SNR = log(sum(sum((c)/(n*m*MSE))));   % Signal to Noise Ratio
SC = sum(sum((c)/(d)));               % Structural Content
MD = max(mad(a-b));                   % Maximum Difference
AD = sum(sum((a-b)/(n*m)));           % Average Difference
NAE = (mad(a-b))/(mad(a));            % Normalized Absolute Error
NXC = sum(sum((a*b)/c));              % Normalized Cross-Correlation
ed1 = edge(img1,'sobel');             % Sobel edge maps of the two images
ed2 = edge(img2,'sobel');
Ted = mad(ed1-ed2);
TED = sum(Ted);                       % Total Edge Difference
C1 = corner(img1);                    % corner points of the two images
D1 = round(mean(C1));
CD1 = D1(1);
C2 = corner(img2);
D2 = round(mean(C2));
CD2 = D2(1);
TCD = (CD1-CD2)/(max(CD1,CD2));       % Total Corner Difference
SSI = ssim(img1,img2);                % Structural Similarity Index
psnval = psnr(img1,img2,vv);          % Peak Signal to Noise Ratio
warndlg(num2str(psnval))              % warndlg expects a string message
x = [MSE PSNR SNR SC MD AD NAE NXC TED TCD SSI];   % vector of IQA measures
LIST OF ACRONYMS