A Framework for Wireless Sensor Network Localization
using Triangular flips

E. Rama Krishna
Assoc. Professor
VITS, Karimnagar
erroju.ramakrishna@gmail.com
G. Rajinikar
Assoc. Professor
VCE, Warangal
ganta.rajanikar@gmail.com
N. Rajendar Reddy
Assistant Professor
VCE, Warangal
nallarajendar@gmail.com

_____________________________________________________________________________________________________________________________
ABSTRACT:
Localization is regarded as one of the fundamental supporting technologies for many applications of wireless sensor networks. The problem of localizing nodes in Wireless Sensor Networks has been approached through the implementation of different methods. After explaining each method, the characteristics of localization and positioning are introduced to frame the problem. Every technique is classified according to how the solution is approached, and the methods are compared with one another.
The current study presents an overview of localization techniques and surveys the currently available algorithms for localization based on triangulation methods and flip-based improvements.
INTRODUCTION
Schemes for localization in WSNs have been developed over the last 20 years, mostly motivated by military use. Numerous studies have since been performed for civil uses. Researchers have pointed out the influence of noise on the localization process and the importance of various system parameters on the accuracy and efficiency of localization, but there is no consensus on a single best algorithm for localization in sensor networks. The choice depends on the environment and on the specifications of the motes used.
Many applications have a need for
localization, be it for locating people or objects.
Most of the time, data recorded from a wireless
sensor only makes sense if correlated to a
position, for example the temperature recorded in
a given machine room or cold-store. Similarly,
many end-user programs are location-aware, for
example people would like to find the closest bus
stop or mailbox, and emergency services need to
localize persons to be rescued. In the following,
we refer to a person, object or computer coupled
with a wireless sensor to be localized as an
(unknown) node. In both ubiquitous computing
and wireless sensor networks (WSNs),
localization has drawn considerable attention.
The major difference between these two fields lies
in the capabilities of the considered computing
devices. Ubiquitous computing usually considers
devices such as laptops and PDAs that are rather
powerful compared to a wireless sensor.
A sensor node has a very limited memory footprint and little CPU power, and its energy, usually provided by a small battery, is a scarce
resource. As such, localization algorithms for
wireless sensor networks have to be efficient,
both in terms of computation and power
consumption. Another difference between
ubiquitous computing and wireless sensor
networks is that laptops and PDAs have often
been considered mobile while most of the existing
experiments in wireless sensor networks have
concentrated on static networks of sensors. At the
moment, few low-cost localization algorithms
exist that have been specifically designed with
sensor movement in mind.
Nowadays, the simplest off-the-shelf mechanism to determine the location of a mobile node is to use the Global Positioning System (GPS). GPS offers 3D localization based on a direct line of sight to at least four satellites, providing accuracy of about three meters. However, several limitations of GPS call for alternative localization methods. First, GPS is at the moment barely
usable indoors, in cluttered urban areas and
under dense foliage. Second, while the cost for
GPS equipment has been dropping over the years,
it is still not suited for mass-produced cheap
sensor boards, phones and even PDAs. Third, GPS
equipment requires both hardware space and
energy, which are two limiting factors for
integration on miniaturized sensor boards.
To overcome GPS limitations, researchers have
developed fully GPS-free techniques for locating
nodes as well as techniques where few nodes,
commonly called anchors, use GPS to determine
their location and, by broadcasting it, help other
nodes in calculating their own position without
using GPS.

Localization in WSNs
Recent advances in hardware and wireless communication technologies have resulted in the development of low-cost, low-power, multifunctional sensor devices called sensor nodes. These tiny nodes, with sensing, data processing and communication capabilities, collaborate among themselves to establish a multi-hop network called a Wireless Sensor Network (WSN). Important WSN applications involve monitoring the physical world. The location of sensor nodes is crucial information in many sensor network applications, such as environment monitoring and target tracking.
Approaches to the localization problem in WSNs can be grouped as follows.
Centralized localization methods: Centralized localization algorithms require a base station with plenty of computational power to gather network-wide environment information. The base station determines the location of each node from the collected data and transports the results back into the network. Since the information is collected by message exchange between nodes, as the number of nodes in the network increases, centralized localization algorithms suffer from lower energy efficiency, longer delay and heavier network communication traffic. On the other hand, they obtain relatively precise locations, and ordinary nodes carry only a light calculation burden. In general, centralized methods are suitable for small, static networks.
Distributed localization methods: In distributed localization, each node independently determines its location using only limited communication with one-hop or multi-hop neighbor nodes. Distributed methods are characterized by small traffic, an evenly shared calculation burden, little storage requirement and good scalability. However, due to the lack of global information, location accuracy is sensitive to the number of beacon nodes and the distribution of nodes.
Range-based localization methods: A range-based localization method depends on distances or angles between nodes to obtain the locations of unknown nodes. The first step is to obtain distance and angle estimates. A number of approaches, such as time of arrival, time difference of arrival, angle of arrival and received signal strength, have been presented; the reference nodes with known positions used by these methods are called anchors or beacons. Range-based localization methods have the advantage of fine resolution. However, the extra hardware and additional energy consumption restrict the application of range-based methods.
Range-free localization methods: Range-free localization methods use topology and connectivity information for location estimation. Range-free methods have attractive characteristics, such as low cost, small communication traffic, no extra hardware and flexible localization precision. Because of these characteristics, they are regarded as a promising solution to the localization problem in WSNs.
COMPARISON OF LOCALIZATION ALGORITHMS
Localization algorithms are evaluated on the following criteria:
Scalability: Is the localization algorithm scalable with the number of nodes in the network?
Self-organization: Because of the lack of localization infrastructure in a wireless sensor network, a localization algorithm must be self-organizing.
Robustness: Is the localization algorithm immune to node failures and distance estimation errors?
Energy efficiency: Message exchange is an indicator of energy consumption. How much communication overhead is required?
Distributed calculation: Does the localization algorithm use only local information?
Comparing the advantages and disadvantages of different localization algorithms against these criteria, the final aim is to choose a suitable localization algorithm for each application of wireless sensor networks.


Selection of Localization Schemes for Sensor Networks
Summarizing the state of the art, a comparison of the present localization techniques is presented. For this, three aspects have been considered:
The accuracy and resolution of the obtained positional data,
The spatial range of the localization, and
The ability to build power-aware nodes, which is of special interest for sensor networks due to the typical need for low power consumption.
In order to give an overview, these considerations are taken as three axes of a coordinate system and the presented localization techniques are classified accordingly.

Range-based localization methods
A relatively complete description of ad-hoc positioning systems is given in the literature, comparing the DV-Hop (Distance Vector), DV-Distance and Euclidean propagation methods. The first computes an estimate for the length of one hop, while DV-Distance measures the radio signal strength and propagates distances in meters. The Euclidean scheme propagates the true distance to the anchor. DV schemes are suitable in most cases, while the Euclidean scheme is more accurate but costs much more communication.

DV-Distance
Using estimated ranges between neighbours, a node can determine its distance to an anchor. If a node takes advantage of these range estimates with a sufficient number of neighbours, the Euclidean distance to a far-away anchor can be estimated. The precision of this Euclidean method depends on the number of nodes: as the number of nodes increases, better accuracy is obtained.
(Figure: How the Euclidean method works)

DV-Hop
Count the number of hops along the shortest path between any two anchors, and estimate the average length of a single hop by dividing the sum of the distances to the other anchors by the sum of the hop counts. Every anchor computes this estimated hop length and spreads it into the network, so that nodes with unknown position can use this information to estimate a multi-hop range and perform the multilateration algorithm.
How DV-Hop works:
Anchors:
flood the network with their known positions
flood the network with the average hop distance
Nodes:
count the number of hops to the anchors
multiply by the average hop distance
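As a rough illustration of the DV-Hop steps listed above, the following is a minimal sketch; the dictionary-based hop-count tables, node identifiers and the least-squares multilateration helper are assumptions made for this example, not part of the original scheme.

```python
import numpy as np

def avg_hop_distance(anchors, hop_counts):
    """Per-anchor average hop length: sum of distances to the other anchors
    divided by the sum of the hop counts (flooded into the network).
    anchors: dict id -> (x, y); hop_counts: dict (a, b) -> hops between anchors."""
    per_anchor = {}
    for a, pa in anchors.items():
        dist_sum, hop_sum = 0.0, 0
        for b, pb in anchors.items():
            if a == b:
                continue
            dist_sum += np.hypot(pa[0] - pb[0], pa[1] - pb[1])
            hop_sum += hop_counts[(a, b)]
        per_anchor[a] = dist_sum / hop_sum
    return per_anchor

def dv_hop_estimate(anchors, hops_to_anchors, hop_size):
    """Multilaterate an unknown node from multi-hop range estimates.
    hops_to_anchors: dict anchor id -> hop count from the unknown node;
    hop_size: average hop length received from the nearest anchor."""
    ids = list(anchors)
    ranges = {a: hops_to_anchors[a] * hop_size for a in ids}
    # Linearize by subtracting the last anchor's circle equation from the others
    # and solve the over-determined system in a least-squares sense.
    xn, yn = anchors[ids[-1]]
    rn = ranges[ids[-1]]
    A, b = [], []
    for a in ids[:-1]:
        xa, ya = anchors[a]
        A.append([2 * (xa - xn), 2 * (ya - yn)])
        b.append(rn**2 - ranges[a]**2 + xa**2 - xn**2 + ya**2 - yn**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos  # estimated (x, y)
```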
Range-free methods
In earlier descriptions of ad-hoc localization systems, the devices were individually tuned (through a built-in calibration interface or an initial long-life calibration). In sensor networks, where a large number of sensors are used, that cannot be the case. Calamari, an ad-hoc localization system, was therefore developed that also integrates a calibration process. For localization it uses a fusion of RF received-signal-strength information and acoustic time of flight. There is also an interesting definition of a distributed algorithm for random WSNs, in which the minimal density of known nodes is presented. The main objective of that algorithm is to broadcast a request ("Do you hear me?") and compute the estimated location by interpreting the answers of all the known nodes. A related method is called APIT.

APIT
The APIT (Approximate Point In Triangulation) idea is to divide the environment into triangles whose vertices are beaconing nodes. An individual node's presence in or absence from each of those triangles allows the possible location area to be reduced. This goes on until all the possible sets are exhausted or the desired accuracy is reached.
The APIT algorithm is then run at every node:
1. Receive locations from n anchors.
2. For each possible triangle, test whether the node is inside it or not.
3. If yes, add it to the Inside Set.
4. Break if the desired accuracy is reached.
5. Estimate the position as the center of gravity of the intersection of the triangles in the Inside Set.
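The point-in-triangle test and the final grid-based estimate in the loop above could look as follows. This is only a sketch under simplifying assumptions: an exact geometric inside/outside test stands in for APIT's neighbour-based approximation, and the grid resolution is arbitrary.

```python
import itertools
import numpy as np

def inside_triangle(p, a, b, c):
    """Exact point-in-triangle test via signed areas (APIT itself only
    approximates this using neighbour signal-strength comparisons)."""
    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0)

def apit_estimate(node, anchors, grid_step=0.5):
    """Keep the triangles that contain the node (the Inside Set) and return the
    centre of gravity of the grid cells covered by all of them."""
    inside_set = [t for t in itertools.combinations(anchors, 3)
                  if inside_triangle(node, *t)]
    if not inside_set:
        return None
    xs = [p[0] for p in anchors]; ys = [p[1] for p in anchors]
    gx, gy = np.meshgrid(np.arange(min(xs), max(xs), grid_step),
                         np.arange(min(ys), max(ys), grid_step))
    cells = np.ones(gx.shape, dtype=bool)
    for a, b, c in inside_set:
        mask = np.array([[inside_triangle((x, y), a, b, c)
                          for x, y in zip(rx, ry)] for rx, ry in zip(gx, gy)])
        cells &= mask
    if not cells.any():
        return None
    return float(gx[cells].mean()), float(gy[cells].mean())
```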

Two triangulations are related by a flip if they are the only two refinements of a polyhedral subdivision that can only be refined by triangulations. We call such a subdivision an almost-triangulation. The following characterization of almost-triangulations holds: a polyhedral subdivision of a configuration A that is not a triangulation is an almost-triangulation if and only if all its cells are either simplices or have corank one (that is, they have two more elements than their affine dimension), and all the cells which are not simplices share one and the same circuit.
Every flip happens on a circuit, where a circuit is a minimal affinely dependent subset of points; it splits in a unique way into a pair (Z+, Z-) with the property that conv(Z+) ∩ conv(Z-) ≠ ∅.
In the plane there are the following three possibilities, depending on the type of circuit in question. Recall that the type of a circuit (Z+, Z-) is the pair (|Z+|, |Z-|):
1. If the circuit is of type (2,2), that is, it consists of the four vertices of a convex quadrilateral, then the almost-triangulation S has a unique non-simplicial cell, consisting of these four points, because a cell strictly containing this circuit would have corank greater than one. Hence, the two refinements of S are obtained by inserting one or the other diagonal of this quadrilateral. This flip is normally called a diagonal edge flip, and an example of it is depicted in the top part of the figure below.
2. If the circuit is of type (3,1), that is, it consists of a point a in the interior of the triangle whose vertices are the other three points {b, c, d}, then again the almost-triangulation S has a unique non-simplicial cell, consisting of these four points. Its two refinements are obtained by forgetting the interior point a (which produces a non-full triangulation) and by inserting a as a new vertex incident to the three triangles {a,b,c}, {a,c,d} and {a,b,d}. This flip is called an insertion-deletion flip because it inserts or deletes a vertex of the triangulation. This situation is also shown in the figure.
3. If the circuit is of type (2,1), that is, it consists of three collinear points, then one or two cells of S will contain it, depending on whether the collinearity lies in the boundary or the interior of conv(A). In either case, the two refinements are obtained by forgetting the central point of the collinearity or by inserting it (which removes one or two triangles of the triangulation and inserts two or four new ones). The bottom part of the figure shows this flip in the case of an interior collinearity.

Strictly speaking there is a fourth type of flip that can occur, but we do not need to be much concerned with it. If we consider a configuration A with a repeated point, the two copies of this point form a (1,1)-circuit. The flip on this circuit consists merely in changing our mind as to which copy of the point we want to consider a vertex of our triangulation.
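To make the planar case (2,2) concrete, the following sketch shows how a diagonal edge flip could be carried out on a triangulation stored as a set of vertex triples; the representation (frozensets of point labels) is a hypothetical choice for this example, and the code does not check that the quadrilateral is convex, which is what the circuit condition guarantees.

```python
def flip_diagonal(triangulation, edge):
    """Flip the diagonal of the quadrilateral formed by the two triangles
    sharing `edge`. `triangulation` is a set of frozensets of 3 point labels;
    `edge` is a frozenset of 2 labels."""
    adjacent = [t for t in triangulation if edge <= t]
    if len(adjacent) != 2:
        raise ValueError("edge is not flippable: not shared by two triangles")
    a, b = sorted(edge)
    # The vertices opposite the shared edge form the other diagonal.
    c = next(iter(adjacent[0] - edge))
    d = next(iter(adjacent[1] - edge))
    new_triangulation = set(triangulation) - set(adjacent)
    new_triangulation |= {frozenset({c, d, a}), frozenset({c, d, b})}
    return new_triangulation

# Example: the two triangulations of a convex quadrilateral with vertices 1..4
T1 = {frozenset({1, 2, 3}), frozenset({1, 3, 4})}
T2 = flip_diagonal(T1, frozenset({1, 3}))
# T2 == {frozenset({2, 4, 1}), frozenset({2, 4, 3})}
```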
All triangulations of a point set in the plane
are connected by flips
The most natural question to ask about flips is
whether any pair of triangulations of a
configuration A can be connected to one another
by a finite sequence of flips. One can also wonder
about the diameter of the graph of flips, or its
maximum or minimum degree. In this section we
discuss these questions.
A Delaunay triangulation for a set P of points in
the plane is a triangulation DT(P) such that no
point in P is inside the circumcircle of any triangle
in DT(P). Delaunay triangulations maximize the minimum angle over all the angles of the triangles in the triangulation; they tend to avoid skinny triangles.
For a set of points on the same line there is no
Delaunay triangulation (in fact, the notion of
triangulation is undefined for this case). For four
points on the same circle (e.g., the vertices of a
rectangle) the Delaunay triangulation is not
unique: the two possible triangulations that split
the quadrangle into two triangles satisfy the
"Delaunay condition", i.e., the requirement that
the circumcircles of all triangles have empty
interiors. By considering circumscribed spheres,
the notion of Delaunay triangulation extends to
three and higher dimensions. Generalizations are
possible to metrics other than Euclidean.
However in these cases a Delaunay triangulation
is not guaranteed to exist or be unique.

(Figure: A Delaunay triangulation in the plane with circumcircles)
Many algorithms for computing Delaunay triangulations rely on fast operations for detecting when a point is within a triangle's circumcircle and on an efficient data structure for storing triangles and edges. In two dimensions, one way to detect whether a point D lies in the circumcircle of A, B, C is to evaluate a determinant:
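One standard form of this in-circle test, stated here since the determinant itself is not reproduced above, is:

\[
\begin{vmatrix}
A_x - D_x & A_y - D_y & (A_x - D_x)^2 + (A_y - D_y)^2\\
B_x - D_x & B_y - D_y & (B_x - D_x)^2 + (B_y - D_y)^2\\
C_x - D_x & C_y - D_y & (C_x - D_x)^2 + (C_y - D_y)^2
\end{vmatrix} > 0
\]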
Assuming A, B and C to lie counter-clockwise, this is
positive if and only if D lies in the circumcircle.
Effective enumeration of triangulations
There is also the practical problem of computing exactly the number of distinct triangulations for concrete instances of point configurations. One way to list all triangulations is via a depth-first or breadth-first traversal of the graph of flips. By the connectivity results we have just seen, we are guaranteed to visit all triangulations. The trouble with this approach is that we may need a large amount of storage during these traversals (saving all triangulations that have been visited). But there is a memory-efficient method of listing all triangulations of a point set in the plane: the reverse-search enumeration of Avis and Fukuda.
Reverse search is in fact a very general method for listing all vertices of certain directed graphs, under special assumptions. The amount of memory used for book-keeping is very small and is independent of the size of the graph. The actual visit to the nodes is done in depth-first-search order, and the running time of the reverse-search enumeration is polynomial in the number of triangulations. Thus the algorithm has good output-sensitive complexity. For brevity we will not present reverse search in full generality here, but only outline it for the graph of triangulations of point configurations in the plane.
The main point of the algorithm is that, for point configurations in general position and with no cocircular points, the graph of flips can be oriented in such a way that it becomes an acyclic graph with a unique sink, the sink being the Delaunay triangulation (which is unique under the non-cocircularity assumption). From any other triangulation we can reverse back to a parent triangulation by an oriented flip.
For each triangulation we have an adjacency oracle that tells us its flip neighbors. We can also define a successor oracle that assigns to a given triangulation T a unique successor. This should be another triangulation, closer to being the Delaunay triangulation.
We saw that a triangulation is the Delaunay triangulation if each edge is locally Delaunay. Order all possible edges of the point configuration in some way, for instance lexicographically. From a triangulation T, its successor is T' if T' is obtained from T by flipping the first (in our ordering) flippable edge which is not locally Delaunay. Note that such a successor T' exists unless T is already the Delaunay triangulation. All triangulations of a generic hexagon, with their successors, are shown in the figure.
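A rough sketch of such a successor oracle follows; the edge ordering, the point/triangle representation and the in-circle helper (based on the determinant discussed above) are illustrative assumptions, not part of the original reverse-search formulation.

```python
def orient(a, b, c):
    """Twice the signed area of triangle a, b, c (positive if counter-clockwise)."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_circle(a, b, c, d):
    """Positive if d lies inside the circumcircle of the ccw triangle a, b, c."""
    rows = [(p[0]-d[0], p[1]-d[1], (p[0]-d[0])**2 + (p[1]-d[1])**2) for p in (a, b, c)]
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = rows
    return a1*(b2*c3 - b3*c2) - a2*(b1*c3 - b3*c1) + a3*(b1*c2 - b2*c1)

def successor_edge(points, triangulation):
    """Return the lexicographically first flippable edge that is not locally
    Delaunay (flipping it yields the successor triangulation), or None when the
    triangulation is already Delaunay, i.e. the unique sink of the flip graph.
    `triangulation` is a collection of triangles given as index triples."""
    edges = sorted({tuple(sorted(e)) for t in triangulation
                    for e in ((t[0], t[1]), (t[1], t[2]), (t[0], t[2]))})
    for u, v in edges:
        sharing = [t for t in triangulation if u in t and v in t]
        if len(sharing) != 2:
            continue                                   # boundary edge: not flippable
        p = next(x for x in sharing[0] if x not in (u, v))
        q = next(x for x in sharing[1] if x not in (u, v))
        s = in_circle(points[u], points[v], points[p], points[q])
        if orient(points[u], points[v], points[p]) < 0:
            s = -s                                     # in_circle assumes ccw order
        if s > 0:                                      # q inside circumcircle: not locally Delaunay
            return (u, v)
    return None
```

Repeatedly flipping the returned edge (for instance with a diagonal-flip routine like the one sketched earlier) drives any triangulation toward the Delaunay triangulation; reverse search enumerates all triangulations by exploring these oriented arcs backwards from the sink.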
Consider a point configuration A in the plane and an A-polygon R. Let T be a triangulation of R and l a line that intersects T properly, meaning it cuts through the interior of edges. We define the path of T along l, denoted pathl(T), as the unique chain of edges of the triangulation T such that
(a) l properly intersects all edges in the chain,
(b) the chain starts and ends at two boundary edges of R such that the segment of l between the two intersection points lies in the interior of R,
(c) alternating vertices of the chain lie on opposite sides of l, and
(d) the area bounded by the chain and l contains no other points of the configuration.
The main property of paths is that they allow us to identify all triangulations by their paths coming from the line l, and thus we can apply a divide-and-conquer enumeration strategy.

(a) Given a point configuration A in the plane, a triangulation T of an A-polygon R, and a line l that intersects T properly, there always exists a path pathl(T).
(b) Given a point configuration A in the plane, triangulations T1 and T2 of an A-polygon R, and a line l that intersects T1 properly, any two paths pathl(T1) and pathl(T2) with the same start and end edges are either identical or properly intersect each other.
Proof: We can assume that l is a vertical line; otherwise a rotation can be applied. To construct a path, we proceed inductively, starting with the top boundary edge e1 where l first enters the interior of R. Let T1 be the triangle containing e1 and let e2 be the other edge of T1 crossed by l. Call the vertices of T1 p0, p1 and p2 in such a way that e1 = p0p1 and e2 = p0p2.
In the general inductive step, assume that we have already constructed a path e1 = p0p1, e2 = p1p2, ..., ei = p(i-1)pi satisfying the desired conditions. If ei is a bottom boundary edge of R we have finished; otherwise let us show how to continue the process.
Let p' be the third vertex of the triangle of T based on ei and below it. Either p' lies on the opposite side of l from pi, in which case we can continue the path with e(i+1) = pip', or p' is on the same side as pi, in which case we abandon the last edge ei = p(i-1)pi of our provisional path and make p(i-1)p' the new i-th edge. This does not increase the number of edges of the path, but it makes the path reach lower along l than before, so that the process eventually terminates once we hit the boundary of R again to exit (the last edge, by construction, has its endpoints on opposite sides).
To prove the second part of the lemma we proceed by contradiction. Suppose we have two different paths pathl(T1) and pathl(T2) which do not properly intersect. If they have no common vertices, then pathl(T1) lies entirely to the left of pathl(T2). But since pathl(T2) is a path, no edge of pathl(T1) can intersect l. This is a contradiction. Next, suppose the two paths do have a common vertex p, but the successor of p is different in each of the paths, say p' and p'' respectively. The main point is that now the point p' cannot be placed anywhere without violating part (d) of the definition or forcing a proper intersection between the two paths.
CONCLUSION:
The Delaunay triangulation method addresses the practical problem of computing exactly the number of distinct triangulations for concrete instances of point configurations. This supports effective localization calculations in both range-based and range-free methods, and also in distributed and centralized localization methods that follow triangle-based approaches.













REFERENCES:
1) Amitava Mukherjee, Somprakash Bandyopadhyay, Debashis Saha. Location Management and Routing in Mobile Wireless Networks.
2) Jorge Luis Ariza Alvarez, Jairo Enrique Durango Carrasquilla. Algorithms for range-based localization.
3) http://en.wikipedia.org/wiki/Delaunay_triangulation
4) Jesús A. De Loera, Jörg Rambau, Francisco Santos. Triangulations: Structures for Algorithms and Applications. Algorithms and Computation in Mathematics, Springer.
5) F. Santos and R. Seidel. A better upper bound on the number of triangulations of a planar point set.
6) F. Santos. Geometric bistellar flips: the setting, the context and a construction. In International Congress of Mathematicians.
7) Lance Doherty, Kristofer S. J. Pister, Laurent El Ghaoui. Convex Position Estimation in Wireless Sensor Networks.
8) Lance Doherty, Kristofer S. J. Pister, Laurent El Ghaoui. Convex Position Estimation in Wireless Sensor Networks. Dept. of Electrical Engineering and Computer Sciences, University of California, Berkeley.
9) Jeffrey Hightower, Gaetano Borriello. Location Systems for Ubiquitous Computing. University of Washington.









































A COMPARATIVE STUDY ON WAVELET BASED IMAGE COMPRESSION
TECHNIQUES AND ITS APPLICATIONS
D. Bhan Anushya¹, G. Gomathi Priya²
¹·² UG Student, Dr. Sivanthi Aditanar College of Engineering, Tiruchendur, Tamilnadu, India
bhananushya@gmail.com
_________________________________________________________________________________________________
ABSTRACT

The objective of this paper is to discuss the different types of wavelet-based image compression techniques. The techniques covered are SPIHT, EZW, SPECK, WDR, ASWDR, SFQ, CREW, EPWIC, EBCOT and SR. This paper focuses on the important features of wavelet-based image compression techniques and their applications.

KEYWORDS
Wavelet based Image Compression.
I. INTRODUCTION

Data compression techniques help in efficient
data transmission, storage and utilization of
hardware resources. Uncompressed multimedia
requires considerable storage capacity and
transmission bandwidth. Despite rapid progress in
mass-storage density, processor speeds and digital
communication system performance, demand for
data storage capacity and data transmission
bandwidth continues to outstrip the capabilities of
available technologies. The recent growth of data-intensive digital audio, image and video (multimedia) applications has made the compression of such signals central to signal storage and digital communication technology.
Image compression reduces the amount of data
required to represent an image by removing
redundant information. Three types of redundancy typically exist in digital images and can be exploited by compression: coding redundancy, which arises from the representation of the image gray levels; interpixel redundancy, which exists due to the high correlation between neighboring pixels; and psychovisual redundancy, which is based on human perception of the image information. An image compression system consists of an encoder that exploits one or more of the above redundancies to represent the image data in a compressed manner, and a decoder that is able to reconstruct the image from the compressed data. The
compression that is performed on images can either
be lossless or lossy. Images compressed in a
lossless manner can be reconstructed exactly
without any change in the intensity values. This
limits the amount of compression that can be
achieved in images encoded using lossless
techniques. However, many applications such as
satellite image processing and certain medical and
document imaging, do not tolerate any losses in
their data and are frequently compressed using
lossless compression methods. Lossy encoding is
based on trading off the achieved compression or
bit rate with the distortion of the reconstructed
image. Lossy encoding for images is usually
obtained using transform encoding methods.
Transform domain coding is used in images to
remove the redundancies by mapping the pixels
into a transform domain prior to encoding. The mapping concentrates most of the image energy into a small region of the transform domain, so that only a few transform coefficients are needed to represent it. For
compression, only the few significant coefficients
must be encoded, while a majority of the
insignificant transform coefficients can be
discarded without significantly affecting the quality
of the reconstructed image. An ideal transform mapping should be reversible and able to completely decorrelate the transform coefficients.


Fig. 1. Wavelet based Image Coding System

Fig. 1 shows that the wavelet-based coder has three basic components: a transformation, a quantizer and an encoder. Most existing high-performance image coders used in applications are transform-based coders. In a transform coder, the image pixels are
converted from the spatial domain to the transform
domain through a linear orthogonal or bi-
orthogonal transform. A good choice of transform
accomplishes a decorrelation of the pixels, while
simultaneously providing a representation in which

most of the energy is usually restricted to a few
(relatively large) coefficients. This is the key to
achieve an efficient coding (i.e., high compression
ratio). Indeed, since most of the energy rests in a few large transform coefficients, we may adopt entropy coding schemes, e.g., run-level coding or bit-plane coding, that easily locate those coefficients and encode them. Because the
transform coefficients are highly decorrelated, the
subsequent quantizer and entropy coder can ignore
the correlation among the transform coefficients,
and model them as independent random variables. The zerotree concept provided an efficient and embedded representation of quantized wavelet coefficients and led to an image compression method, embedded zerotree wavelet (EZW) coding. A zero tree is used to
represent a particular group of wavelet coefficients
across different wavelet sub bands that have
insignificant values. The zero tree approach exploits the multiresolution nature of the wavelet decomposition and has led to several other low-complexity and extremely efficient image compression schemes. One of the more popular
methods based on similar principles of the zerotree
is the SPIHT which improves upon the EZW with
better management of the zerotrees. These
encoding schemes described above are fast and efficient, have low complexity, and provide high
quality images at extremely low bit rates, making
them suitable for image transmission across
channels with bandwidth constraints. The
progressive nature of these schemes, however,
results in the encoded data being highly susceptible
to bit errors, causing severe distortions to the
resulting image. Steps must therefore be taken to protect the encoded bit stream against bit errors and losses.
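As a toy illustration of the energy compaction provided by the transform stage, the following sketch uses a plain one-level 2-D Haar transform written directly in NumPy; it is not the filter bank of any of the coders surveyed here, and the test image and threshold are arbitrary assumptions. Most of the energy ends up in the low-pass quadrant, so the detail coefficients can be coarsely quantized or discarded.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform; returns the quadrants LL, LH, HL, HH."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0        # row averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0        # row differences
    ll, lh = (lo[0::2] + lo[1::2]) / 2.0, (lo[0::2] - lo[1::2]) / 2.0
    hl, hh = (hi[0::2] + hi[1::2]) / 2.0, (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    img = np.empty((lo.shape[0], lo.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = lo + hi, lo - hi
    return img

img = np.add.outer(np.arange(8), np.arange(8)).astype(float)   # smooth test image
ll, lh, hl, hh = haar2d(img)
# Lossy step: zero the small detail coefficients before encoding.
lh, hl, hh = [np.where(np.abs(q) > 0.6, q, 0.0) for q in (lh, hl, hh)]
recon = ihaar2d(ll, lh, hl, hh)
print(np.max(np.abs(img - recon)))   # error bounded by the discarded coefficient magnitudes
```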
According to the literature, wavelet-based coding provides substantial improvements in quality at higher compression ratios. Wavelet compression allows
the integration of various compression techniques
into one algorithm. Over the past few years, a
variety of sophisticated wavelet-based image
coding schemes have been developed. These
include Embedded Zero tree Wavelet (EZW), Set-
Partitioning in Hierarchical Trees (SPIHT), Set
Partitioned Embedded block coder (SPECK),
Wavelet Difference Reduction (WDR), Adaptively
Scanned Wavelet Difference Reduction (ASWDR),
Space-Frequency Quantization (SFQ), Compression
with Reversible Embedded Wavelet (CREW),
Embedded Predictive Wavelet Image Coder
(EPWIC), Embedded Block Coding with Optimized
Truncation (EBCOT), and Stack Run (SR).
II. SET PARTITIONING IN HIERARCHICAL
TREES (SPIHT)

The SPIHT algorithm was introduced by Said and Pearlman [6], [7]. It is a powerful, efficient and yet computationally simple image compression algorithm. By using this algorithm, the highest PSNR values for given compression ratios can be obtained for a variety of images, and it provides a comparison standard for subsequent algorithms. SPIHT stands for Set Partitioning in
Hierarchical Trees. SPIHT was designed for optimal
progressive transmission, as well as for
compression. One of the important features of
SPIHT is that at any point during the decoding of an
image, the quality of the displayed image is the best
that can be achieved for the number of bits input by
the decoder up to that moment. Another important
SPIHT feature is its use of embedded coding. The
pixels of the original image can be transformed to
wavelet coefficients by using wavelet filters.
The wavelet coefficients are referred to as c_{i,j}. In a progressive transmission method, the decoder
starts by setting the reconstruction image to zero. It
then inputs (encoded) transform coefficients,
decodes them, and uses them to generate an
improved reconstruction image. The main aim in
progressive transmission is to transmit the most
important image information first. This is the
information that results in the largest reduction of
the distortion.
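The progressive, bit-plane-by-bit-plane idea behind this kind of embedded coding can be sketched as follows. This is only an illustration of the sorting/refinement structure, without the set-partitioning trees that give SPIHT its efficiency; the coefficient array, the linear scan and the output format are assumptions made for the example (and at least one nonzero coefficient is assumed).

```python
import numpy as np

def progressive_passes(coeffs):
    """Encoder: from the most significant bit-plane downwards, emit the newly
    significant positions (with signs) plus one refinement bit for every
    coefficient that became significant in an earlier pass."""
    c = np.asarray(coeffs, dtype=float).ravel()
    k = int(np.floor(np.log2(np.abs(c).max())))
    sig = []                                   # positions, in order of becoming significant
    while k >= 0:
        T = 2.0 ** k
        refine = [int(abs(c[p]) // T) % 2 for p in sig]                       # refinement pass
        new = [p for p in range(c.size) if p not in sig and abs(c[p]) >= T]   # sorting pass
        signs = [1.0 if c[p] >= 0 else -1.0 for p in new]
        sig.extend(new)
        yield {"T": T, "refine": refine, "new": new, "signs": signs}
        k -= 1

def decode(n, passes):
    """Decoder: stopping after any prefix of the passes still gives the best
    reconstruction possible for the bits received so far."""
    est = np.zeros(n)
    sig = []
    for p in passes:
        T = p["T"]
        for pos, bit in zip(sig, p["refine"]):
            est[pos] += np.sign(est[pos]) * (T / 2.0 if bit else -T / 2.0)
        for pos, s in zip(p["new"], p["signs"]):
            est[pos] = s * 1.5 * T             # midpoint of the interval [T, 2T)
            sig.append(pos)
    return est

c = np.array([33.0, -21.0, 8.0, -5.0, 2.0, 0.5])
passes = list(progressive_passes(c))
coarse = decode(c.size, passes[:3])            # early truncation: coarse approximation
full = decode(c.size, passes)                  # all passes: within half a unit of c
```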
III. EMBEDDED ZERO TREE WAVELET
(EZW)

The EZW algorithm was one of the first powerful wavelet-based image compression algorithms, and later algorithms build on its fundamental concepts. The EZW algorithm was introduced in the paper of Shapiro [8]. EZW stands for Embedded
Zerotree Wavelet. The core of the EZW
compression is the exploitation of self-similarity
across different scales of an image wavelet
transform. In other words EZW approximates
higher frequency coefficients of a wavelet
transformed image. Because the wavelet transform
coefficients contain information about both spatial
and frequency content of an image, discarding a
high-frequency coefficient leads to some image
degradation in a particular location of the restored
image rather than across the whole image.
IV. WAVELET DIFFERENCE REDUCTION
(WDR)


One of the defects of SPIHT is that it only
implicitly locates the position of significant
coefficients. This makes it difficult to perform
operations, such as region selection on compressed
data, which depend on the exact position of
significant transform values. By region selection,
also known as region of interest (ROI), we mean
selecting a portion of a compressed image which
requires increased resolution. This can occur, for
example, with a portion of a low resolution medical
image that has been sent at a low bpp rate in order
to arrive quickly. Such compressed data operations
are possible with the Wavelet Difference Reduction
(WDR) algorithm of Tian and Wells [11]. The term
difference reduction refers to the way in which
WDR encodes the locations of significant wavelet
transform values, which we shall describe below.
WDR can produce perceptually superior images, especially at high compression ratios. The WDR compression and decompression systems are shown in Fig. 2(a) and Fig. 2(b).


Fig. 2(a): WDR Compression System


Fig. 2(b): WDR Decompression System

The only difference between WDR and the Bit-
plane encoding is in the significance pass. In WDR,
the output from the significance pass consists of the
signs of significant values along with sequences of
bits which concisely describe the precise locations
of significant values.
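The "difference reduction" of significant positions can be sketched like this; the index stream and the convention of dropping the leading 1 of each binary expansion are simplified stand-ins for the actual WDR symbol set (scan positions are assumed 1-based so every gap is at least 1).

```python
def encode_positions(indices):
    """WDR-style index coding: send the gaps between successive significant
    positions; each gap is sent as its binary expansion with the leading 1
    dropped (the decoder knows every expansion starts with 1)."""
    out, prev = [], 0
    for i in sorted(indices):
        gap = i - prev
        out.append(bin(gap)[3:])          # bin(gap) == '0b1...', so drop '0b1'
        prev = i
    return out

def decode_positions(symbols):
    """Invert encode_positions by restoring the leading 1 of each gap."""
    indices, prev = [], 0
    for s in symbols:
        prev += int('1' + s, 2)
        indices.append(prev)
    return indices

# Example: significant coefficients found at scan positions 2, 3, 7, 20
assert decode_positions(encode_positions([2, 3, 7, 20])) == [2, 3, 7, 20]
```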
V. SET PARTITIONED EMBEDDED BLOCK
(SPECK)

The SPECK coding algorithm [1] belongs to a
class of embedded tree structured significance
mapping schemes. They are all based on a
hierarchical structure of pyramid subband
transformation, such as wavelet transform. The
transform coefficients are grouped into subband
subsets related through a quadtree structure. In the
SPECK algorithm, the quadtree is formed by
successive recursive splitting of a subband block
(parent) into four quadrants (children). The coding
process consists of a sorting pass and a refinement
pass. The sorting pass includes two steps. First, if a set S is significant, it is partitioned into four small child sets; each of these four child sets is further tested and partitioned until all the significant coefficients are found, as shown in Fig. 3. Second, if the set I is significant, it is divided into four sets (three S sets and one new I set); the new I is likewise tested and divided until I is empty.



Fig. 3. Quadtree partition and I partition

In the refinement pass, the significant
coefficients found in the sorting pass are
transmitted to decoder according to the bit-plane
transmission. On the whole, the SPECK algorithm makes full use of the characteristics of wavelet coefficients, namely the energy clustering and the energy attenuation across scales. Furthermore, by combining the quadtree partition with bit-plane encoding, this method can achieve nearly the same compression performance as SPIHT.
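A minimal sketch of the recursive quadrant splitting in the sorting pass follows; it operates on a plain NumPy block with a magnitude threshold, and the actual SPECK S/I set bookkeeping and output symbols are omitted (a square power-of-two block is assumed).

```python
import numpy as np

def speck_sort(block, threshold, origin=(0, 0), found=None):
    """Recursive sorting pass on a (2^n x 2^n) coefficient block: a set that is
    significant against the threshold is split into its four quadrants until
    the individual significant coefficients are isolated."""
    if found is None:
        found = []
    if np.max(np.abs(block)) < threshold:
        return found                       # whole set insignificant: stop here
    if block.size == 1:
        found.append(origin)               # a significant coefficient has been located
        return found
    h, w = block.shape[0] // 2, block.shape[1] // 2
    for dy, dx in ((0, 0), (0, w), (h, 0), (h, w)):
        speck_sort(block[dy:dy + h, dx:dx + w], threshold,
                   (origin[0] + dy, origin[1] + dx), found)
    return found

coeffs = np.array([[40, 2, 1, 0],
                   [ 3, 1, 0, 0],
                   [ 0, 0, 0, 9],
                   [ 0, 0, 0, 0]])
print(speck_sort(coeffs, threshold=8))     # [(0, 0), (2, 3)]
```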
VI. ADAPTIVELY SCANNED WAVELET
DIFFERENCE REDUCTION (ASWDR)

One of the most recent image compression
algorithms is the Adaptively Scanned Wavelet
Difference Reduction (ASWDR) algorithm of Walker
[11]. The adjective adaptively scanned refers to the
fact that this algorithm modifies the scanning order
used by WDR in order to achieve better
performance. ASWDR adapts the scanning order so
as to predict the locations of new significant values. If a prediction is correct, then the output specifying that location will just be the sign of the new significant value, and the reduced binary expansion of the number of steps will be empty. Therefore a good prediction scheme will significantly reduce

the coding output of WDR. The scanning order of
ASWDR dynamically adapts to the locations of edge
details in an image, and this enhances the
resolution of these edges in ASWDR compressed
images. Here, arithmetic encoding is added to the ASWDR compression technique.
VII. SPACE FREQUENCY QUANTIZATION
(SFQ)

The SFQ coder [13] undertakes the joint optimisation of spatial zerotree quantization and scalar frequency quantization. It solves the joint transform and quantizer design problem by means of a dynamic-programming-based fast single-tree algorithm, thus significantly lowering the computational complexity. The single-tree algorithm and the SFQ scheme both rely on the rate-distortion optimisation framework, so the combination of the two is a natural choice for joint transform and quantizer design, with a Lagrangian multiplier controlling the rate-distortion trade-off. The minimum SFQ cost associated with the full wavelet packet tree is then compared with those associated with its pruned versions, where the leaf nodes are merged into their respective parent nodes. At the end of the single-tree pruning process, when the root of the full wavelet packet tree is reached, the optimal wavelet basis from the entire family and its best SFQ are found for fixed values of the quantization step size q and the trade-off parameter. The best scalar quantization step size q is searched over a set of admissible choices to minimize the Lagrangian cost; finally, the choice with the smaller cost is taken as the winner.
There are two quantization modes in SFQ:
zerotree quantization with respect to spatial
groupings of coefficients in tree structures and
scalar quantization with respect to frequency
groupings of coefficients in subbands. Zerotree
quantization assigns all descendants of a parent
node in the spatial coefficient tree either their
original values or all zeros. SFQ uses zerotree
quantization to identify a pruned subset of
significant wavelet coefficients to be scalar
quantized and discards the rest. The goal of SFQ is
to jointly optimize its two quantization modes, i.e.,
to search for the optimal balance between choosing
a large subset of coefficients to be scalar quantized
with low precision and a small subset of coefficients
to be scalar quantized with high precision.
VIII. COMPRESSION WITH REVERSIBLE
EMBEDDED WAVELET (CREW)

Compression with Reversible Embedded
Wavelets (CREW) [2] is a unified lossless and lossy
continuous-tone still image compression system. It
is wavelet-based using a reversible
approximation of one of the best wavelet filters.
Reversible wavelets are linear filters with non-
linear rounding which implement exact-
reconstruction systems with minimal precision
integer arithmetic. Wavelet coefficients are
encoded in a bit-significance embedded order,
allowing lossy compression by simply truncating
the compressed data. For coding of coefficients, CREW uses a method similar to Shapiro's zerotree, and a novel method called Horizon. Horizon coding is a context-based coding that takes advantage of the spatial and spectral information available in the wavelet domain. CREW provides state-of-the-art lossless compression of medical images (greater than 8 bits deep), and lossy and lossless compression of 8-bit-deep images with a single system. CREW has reasonable software and hardware implementations.


Fig. 4. CREW System Model
Figure 4 shows a block diagram of the CREW
system. The input image (in the correct color space)
is either encoded with the transform mode or the
binary mode. The decision as to which mode is used
is data dependent. In either case, the data is
encoded by bit-plane using a context model and a
binary coder. In the case of the transform mode, the
bit-planes are importance level planes of the
transform coefficients and the Horizon context
model, that takes advantage of the spatial and
spectral information, is used. With the binary mode,
a JBIG-like context model is used on Gray-coded
pixels. In both cases the same binary entropy coder
is used. It is important to note that CREW can be
performed on the entire image, or, more commonly,
on tiled segments of the image. If tiles are used then
random access on a tile basis is possible. Also,
regions of interest can be decoded separately to a
higher fidelity. Finally, the choice of whether to use
the transform or binary mode can be decided on a
tile by tile basis.
IX. EMBEDDED BLOCK CODING WITH
OPTIMIZED TRUNCATION (EBCOT)

A new image compression algorithm is
proposed, based on independent Embedded Block
Coding with Optimized Truncation [10] of the
embedded bit-streams (EBCOT). The algorithm
exhibits state-of-the-art compression performance
while producing a bit-stream with a rich set of
features, including resolution and SNR scalability
together with a random access property. The
algorithm has modest complexity and is suitable for
applications involving remote browsing of large
compressed images. The algorithm lends itself to
explicit optimization with respect to MSE as well as
more realistic psychovisual metrics, capable of
modeling the spatially varying visual masking
phenomenon.
It is therefore more error resilient than many
other wavelet-based schemes. However, the loss of
data of a block in any lower frequency subband in
EBCOT can still degrade the perceptual image
quality considerably. This paper investigates the
use of reversible variable length codes (RVLC) and
data partitioning for the coding of coefficients of
low frequency subbands in EBCOT (instead of
arithmetic codes). RVLCs are known to have a
superior error recovery property due to their two-
way decoding capability.
X. EMBEDDED PREDICTIVE WAVELET IMAGE CODER (EPWIC)

EPWIC [3] is an embedded image coder based
on a statistical characterization of natural images in
the wavelet transform domain. The joint distributions between pairs of coefficients at adjacent spatial locations, orientations, and scales are defined. Although the raw coefficients are nearly uncorrelated, their magnitudes are highly correlated.
with both multiplicative and additive uncertainties,
provides a reasonable description of the conditional
probability densities. In EPWIC, subband
coefficients are encoded one bit-plane at a time
using a non-adaptive arithmetic encoder. Bit-planes
are ordered using an algorithm that considers the
MSE reduction per encoded bit. The overall
ordering of bitplanes is determined by the ratio of
their encoded variance to compressed size. The
coder is inherently embedded, and should prove
useful in applications requiring progressive
transmission.
XI. STACK RUN (SR)
SR [9] image coding is a new approach in which
a 4-ary arithmetic coder is used to represent
significant coefficient values and the length of zero
runs between coefficients. This algorithm works by
raster scanning within subbands and therefore
involves much lower addressing complexity than
other algorithms such as zero tree coding which
requires creation and maintenance of lists of
dependencies across different decomposition
levels. Despite its simplicity and the fact that these
dependencies are not explicitly used, this algorithm
is competitive with the best enhancements of zerotree coding.
XII. CONCLUSIONS
In this paper, the different types of wavelet based
image compression techniques are presented. The
various wavelet based image coding schemes are
discussed in this paper. Each of these schemes finds
use in different applications owing to its unique characteristics. Though a number of coding schemes are available, the need for improved performance and wide commercial usage demands that newer and better techniques be developed. The effects of different wavelet functions, filter orders, numbers of decompositions, image contents and compression ratios will be examined in the future. These compression algorithms provide better picture quality at low bit rates, and the techniques will be tested on many more images in future work.
REFERENCES

[1] Asad Islam & Pearlman, An embedded and
efficient low-complexity, hierarchical image
coder,Visual Communication and Image
processing99proceedings of SPIE.,Vol 3653,pp294-
305, Jan., 1999.
[2] Boliek, M., Gormish, M. J., Schwartz, E. L., and
Keith, A. (1997) Next Generation Image
Compression and Manipulation Using CREW, Proc.
IEEE ICIP, http://www.crc.ricoh.com/CREW.
[3] Buccigrossi, R., and Simoncelli, E. P. EPWIC:
Embedded Predictive Wavelet Image Coder,
GRASPLaboratory,TR#414, http://www.cis.upenn.
edu/~butch/EPWIC/index.html
[4] Islam, A. and Pearlman, A. (1999) An embedded
and efficient low-complexity, hierarchical image
coder, Visual Communication and Image
Processing99 Proceedings of SPIE., Vol 3653,
Pp.294-305.
[5] Sudhakar, R., Karthiga, R and Jayaraman, S.
(2005) Image Compression using Coding of Wavelet
NICE-2010

Acharya Institute of Technology, Bangalore-560090 14

Coefficients A Survey, ICGST-GVIP Journal, Vol. 5,
Issue 6, Pp.25-38.
[6] A. Said, W.A. Pearlman. Image compression
using the spatial-orientation tree. IEEE Int. Symp. on
Circuits and Systems, Chicago, IL, pp. 279-282, 1993.
[7] A. Said, W.A. Pearlman. A new, fast, and efficient
image codec based on set partitioning in hierarchical
trees. IEEE Trans. on Circuits and Systems for Video
Technology, Vol. 6, No. 3, pp. 243-250, 1996.
[8] Shapiro J.M. Embedded image coding using
zerotrees of wavelet coefficients. IEEE Trans. Signal
Proc., Vol. 41, No. 12, pp. 3445{3462, 1993.
[9] Tsai, M. J., Villasenor, J. D., and Chen, F. (1996)
Stack-Run Image Coding, IEEE Trans. Circuit and
systems for video technology, Vol. 6, No. 5, Pp.519-
521.
[10] Taubman, D.High Performance Scalable Image
Compression with EBCOT, submitted to IEEE
Transactions on. Image Processing, Mar.1999,
http://maestro.ee.unsw.edu.au/~taubman/activiti
es/pre prints/ebcot.zip.
[11] Walker, J.S. (2000) A lossy image codec based
on adaptively scanned wavelet difference reduction,
Optical Engineering, Vol.39, No.7, Pp.18911897.
[12] Wallace, G.K. (1991) The JPEG still picture
compression standard, Comm. of the ACM, Vol.34,
No.4, 3044.
[13] Xiong, Z., Ramachandran, K. and Orchard, M. T.
Space-Frequency Quantization for Wavelet Image
Coding, IEEE Trans.on Image processing, vol. 6, no.
5, pp.677-693, May 1997.




























A FUTURISTIC MATURITY MODEL FOR CLOUD COMPUTING
¹A. Viswanatha Reddy, ²A. Aswini
¹Lecturer, Department of I.S.E., The Oxford College of Engineering, Bangalore-560068
²B.Tech 3rd year, I.T., Moula Ali College of Engg & Tech, Anantapur
¹vissutcs@gmail.com, ²itaswini@gmail.com
_____________________________________________________________________________________________________________________________
ABSTRACT

The need for a specification for Cloud Computing
has emerged. The commoditization of the term
Cloud Computing for various marketing purposes
has diluted, and in some cases compromised the
underlying intent of Cloud Computing. However,
the recognition of the foundational computing
model inferred by the generic term as a significant
technique for computing in the 21st century is
clear. Cloud Computing represents a disruptive,
watershed event no less significant than the compute model revolutions that preceded it. This
paper presents a futuristic maturity model for
Cloud Computing with the goal of providing an
engineered description of the concept. A by-product
of creating this specification is an applied
analytical method for understanding Cloud
Computing within any business setting. This
method can be used by technical and business
professionals to assist in the design of an
appropriate Cloud Computing strategy for their
particular enterprise uninfluenced by marketing
hype, misinterpretation of the term, or
inappropriate architectural decisions based on
incomplete knowledge. This neutralizing approach
drives the organization to a set of features that can
then be aligned to products that meet requirements
and solve mission issues or provide competitive
value. Only through an engineered perspective can a holistic, end-to-end architectural specification for Cloud Computing be realized. This
will allow for the compartmentalization of the
various vendor and product types into a common
framework structure that can be used to
understand and apply the concepts as needed. The
audiences for this specification are the architects
and engineers building and integrating these
components as well as the owners and managers
who will be seeking the benefits and promise of
Cloud Computing. The trend of large vendors entering Cloud Computing will accelerate: Amazon, Google, CA, Microsoft and IBM have all announced various initiatives in cloud computing. In 2009 this trend will accelerate, with more coming from these vendors as well as from VMware, Citrix, Sun, HP, Cisco, Intuit, Symantec, Yahoo (if it remains independent) and others.
Keywords: CCMM, Architecture, Maturity Model.
1. INTRODUCTION

Cloud Computing has recently seen enormous
attention in the trade press around the world, and
has become the latest next-thing in computing
today. Large public and private enterprises have
begun to recognize the significance of the Cloud Computing concept through the documented successes of Amazon and others in the hardware infrastructure arena. The success in the software area of newcomers like Google Apps and Amazon EC2, as well as of providers like Salesforce.com that have been operating for more than ten years, indicates that the evolving cloud trend has arrived as a significant market force.
Cloud computing, with the revolutionary
promise of computing as a utility, has the
potential to transform how IT services are
delivered and managed. Yet, despite its great
promise, even the most seasoned professionals
know little about cloud computing or how to
define it. A recent study revealed that 41% of
senior IT professionals admit that they don't know what cloud computing is. This research
follows a similar survey highlighting that two-
thirds of senior finance professionals are confused
about cloud computing (Version One, 2009).
The reasons for the increasing interest among
government agencies are myriad. To begin, cloud
computing offers an entirely new way of looking
at IT infrastructure. From a hardware point of
view, cloud computing offers seemingly
never-ending computing resources available on
demand, thereby eliminating the need to budget
for hardware that may only be used in high peak
timeframes. Cloud computing eliminates an up-
front commitment by users, thereby allowing
agencies to start small and increase hardware
resources only when there is an increase in their
needs. Moreover, cloud computing provides the
ability to pay for use of computing resources on a
short-term basis as needed (e.g., processors by
the hour and storage by the day) and release them
as needed (Berkeley, 2009). As for the bottom line,
cloud computing enables governments to lower
the expense of existing IT services and to cost-
effectively introduce enhanced services.
Moreover, government agencies not only benefit
from increased productivity engendered by cloud
computing, but citizens as well benefit from the
more efficient use of tax dollars (INPUT, 2009).
Costs associated with IT operations in many cases
decrease significantly, because services can be
purchased on-demand. Finally, administrative
time spent attending to the needs of the IT
infrastructure can be reduced, with personnel
freed to devote more time to an agency's core
mission objectives.

1.1 Understanding The Cloud Computing
Specification

Complete understanding of a contemporary
Cloud Computing specification requires knowing
the evolution and roots of the concept. Historical
understanding helps frame a vendor neutral
scoping of a cloud endeavor for any particular
entity today. Cloud Computing is far easier to
declare once the fundamental components are
recognized and understood. The following are key
milestones shaping today's Cloud Computing
components.
Technological advancements in the areas of hardware, software and networking have today enabled the new compute model called the cloud.
On this new model hardware, software, and their
related aspects of orchestration, infrastructure
and platform reside. They are provided as
services, meaning from a business perspective,
cost is based on a lease model as opposed to an
owned model. This distinction is proving
revolutionary in the marketplace as cost drivers
and efficiencies from this new technical style
provide significant motivation towards a cloud
computing model. From a technical perspective
services imply decoupled highly distributed
components reminiscent of Service Oriented
Architectures (SOA), to which Cloud Computing may actually be compared. This confluence of
technology has enormous implications for all
firms in the future.
Because these advances have allowed the
ability to leverage heretofore owned elements as
a service, significant cost options now enter into
the cost/benefit trades of technology portfolios.
Having an entire development environment resident in a secure cloud, accessed through a browser and without the encumbrance of hardware elements, engenders a far different software construction environment than traditional coding tethered to a developer's machine. Significant cost savings from massive economies of scale disrupt traditional software license models, and the flexibility of extending and contracting compute power as needed presents an enormously compelling driver.


1.2. Contemporary Technology Forecasts

As shown in the historical tracing, maturation of
various technologies has given rise to a new
product types and styles. The general idea of
remote computing power leveraging the Internet,
Internet technologies or similar technology is the
common complexion of this style called Cloud
Computing. Generally speaking the ability to
lower costs and increase utility is the driving
factor that will be very disruptive to traditional
client server, and fat1 client based software.

1.3. Engineering a Cloud Computing
Specification

There are historical thresholds marked by
advances in technology and computing styles that led to today's Cloud Computing furor. Because of the resulting economies engendered by many aspects of Cloud Computing, enterprises are attempting to adopt a Cloud Computing strategy for their organizations. However, there is no roadmap to successfully engineering a viable
solution. Thus it becomes vitally important to
recognize and orient to the proper perspective
when engineering any particular Cloud
Computing initiative.
Three core elements of a Cloud Computing system can generally be identified that mirror traditional computing components. The
hardware, the software and the transport
mechanism characterize the fundamentals of any
compute system. Data of course, is the common
component being transported, stored or
manipulated into knowledge. There is nothing
new within these fundamental components.
However, the Cloud Computing style provides new
options in transporting, storing and manipulating
data. Recognizing these foundational elements,
understanding the elemental core precepts of
Cloud Computing, and aligning them to
contemporary cloud computing styles is required
when considering the engineered decomposition
of a particular cloud solution.
But what is cloud architecture, and more
importantly what is the right answer for any
particular organization relative to the Cloud
Computing style? Determining this requires
applied analytics within a framework that can
assist in determining the correct solution for any
particular firm.

2. CLOUD COMPUTING MATURITY MODEL

The establishment of a cloud computing
maturity model (CCMM) provides a framework
for successful implementation. This paper
proposes a phased approach to the CCMM,
encompassing five key components:
Consolidation
Virtualization
Automation
Utility
Cloud
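As a rough, purely illustrative sketch of the applied analytical method this maturity model implies, the five phases can be treated as an ordered scale and an organization's next target phase computed from the phases it has already achieved. The Python sketch below is an assumption layered on this paper's framework, not part of it; only the phase names come from the list above.

from enum import IntEnum

class CCMMPhase(IntEnum):
    # The five phases of the cloud computing maturity model, in order.
    CONSOLIDATION = 1
    VIRTUALIZATION = 2
    AUTOMATION = 3
    UTILITY = 4
    CLOUD = 5

def next_phase(achieved):
    # Return the lowest phase not yet achieved, or None when fully mature.
    for phase in CCMMPhase:
        if phase not in achieved:
            return phase
    return None

# An agency that has consolidated and virtualized should target automation next.
print(next_phase({CCMMPhase.CONSOLIDATION, CCMMPhase.VIRTUALIZATION}))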
Step 1: Consolidation

An agency's migration towards cloud
computing begins with the consolidation of
server, storage, and network resources, which
works to reduce redundancy, decrease wasted
space, and increase equipment usage, all through
the measured planning of both architecture and
process.
Consolidation is achieved primarily through
virtualization but can also be approached by the
use of denser computing hardware or even high
performance computing. By boosting the speed of
critical processes and enabling greater flexibility,
the consolidation of data centers and desktops
allows agencies to do more with fewer resources, a significant concern in today's economic environment. Moreover, the shift to a unified
fabric provides both physical and virtual access to
the storage area network (SAN), creating greater
efficiency and cost savings by allowing more
storage to be consolidated in the SAN. Network and application modernization is also an
important initial step in enabling the transition to
a cloud computing environment. A viable
alternative to replacing infrastructure
components or rewriting critical applications,
modernization promotes communication between
older systems and newer solutions, all while
preserving the value in existing IT systems. Freed
from the bonds of a mainframe environment,
critical applications modernized through a
service-oriented architecture provide agencies
with the increased ability to leverage newer
technologies. As for security concerns
surrounding cloud computing, modernization
actually works to enhance the security of sensitive
information stored on critical applications. When
established properly, the cloud platform provides
security of all data in motion, traveling between
the cloud and the desktop, and all data at rest in
cloud storage.
Step 2: Virtualization
Virtualization forms a solid foundation for all
cloud architectures. It enables the abstraction and
aggregation of all data center resources, thereby
creating a unified resource that can be shared by
all application loads. Hardware such as servers,
storage devices, and other components are
treated as a pool of resources rather than a
discrete system, thereby allowing the allocation of
resources on demand. By decoupling the physical
IT infrastructure from the applications and
services being hosted, virtualization allows
greater efficiency and flexibility, without any
effect on system administration productivity or
tools and processes.
By separating the workload from the
underlying OS and hardware, virtualization allows
extreme portability. When extended to every system component (desktop, network, storage, and servers), it enables the mobility of
applications and data, not only across servers and
storage arrays, but also across data centers and
networks. Moreover, through consolidation, one of the critical applications of virtualization, agencies can regain control of their distributed
resources by creating shared pools of
standardized resources that enable centralized
management, speeding up service provisioning
and reducing unplanned down time. Ultimately,
the result is increased use of assets and simplified
lifecycle management through the mobility of
applications and data.
Although many agencies turn to virtualization
to improve resource usage and decrease both
capital and operating costs, the ultimate goal in
cloud computing is the use of the abstraction
between applications and infrastructure to
manage IT as a Service (IaaS) in a true cloud
environment.
Step 3: Automation
In this stage, automation optimizes an
agency's virtualized IT resources. Through a
transformative procedure, the infrastructure is
automated, and critical IT processes become more
dynamic -- and greater control is achieved by
trusted policies. With automation, data centers
can systematically remove manual labor
requirements for run-time operations. Among the
various forms of automation in practice today,
provisioning automation is perhaps the best
known and most often implemented. Rather than
managing underlying infrastructure, agencies in
pursuit of cloud computing need to move toward
managing service levels based on what is
appropriate for the application users, whether it's minimum tolerable application latency, the availability level of an application, or whatever factors are deemed critical. In this regard, automation
becomes an essential element. With centralized IT
and self-service for end users, automation helps
agencies to disentangle themselves from the
burden of repetitive management procedures, all
while enabling end users to quickly access what
they require. Ultimately, automation can help
agencies to reduce their operating expenses by:
Reallocating computing resources on-demand
Establishing run-time responses to capacity
demands
Automating trouble-ticket responses (or
eliminating trouble tickets for most automated
response scenarios)
Integrating system management and
measurement
Step 4: Utility
In addition to automation, both self-service and metering (feedback about the cost of the resources allocated) are necessary requirements in creating a cloud service. With breakthrough capabilities for end users and agencies, self-service and metering facilitate not only better IT management but also a further extension of the user experience.
In the cloud, there is no intermediary between
the user of a resource and the processes for
acquiring and allocating resources for critical
mission needs and initiatives. Since the user
initiates the service requests, IT becomes an
on-demand service and the costs of operation drop
significantly, because costs are incurred only
when the service is used and fewer dollars are
spent attending to the needs of the infrastructure.
Essential to IT administration is the question
of how to maintain service delivery in a fully
virtualized, multi-tenancy environment while at
the same time providing the highest levels of
security, especially for information and services
that might leave the data center. A private cloud
utility model answers the question, by enabling
agencies to retain the data within their network
security while scaling and expanding as user
demands change, pooling IT resources in a single
operating system or management platform. As a
result, anywhere from tens to thousands of
applications and services can be supported and
new architectures that target large-scale
computing activities easily installed.
Step 5: Cloud
Through cloud internetworking federation,
disparate cloud systems can be linked in such a
way as to accommodate both the particular nature
of cloud computing and the running of IT
workloads. This federation allows the sharing of a
range of IT resources and capabilities including
capacity, monitoring, and management and the
movement of application loads between clouds.
Moreover, since federation can occur across data
center and agency boundaries, it enables such
processes as unified metering and billing and one-
stop self-service provisioning.
With cloud computing, communication
increases significantly, as data sharing between
previously separate systems is fully enabled and
collaboration within and between government
agencies grows exponentially. Ultimately, rather
than each agency operating in isolation,
constricted by the boundaries of its own data
center, not only can services be shared among groups, but costs can also be shared and lessened.
The future of cloud computing
In today's economy, with limited budgets and a highly dynamic market, it is critical to be able to refocus an organization's resources and check which viable cloud computing options can provide the expected benefits. Without getting trapped in the cloud hype, customers can start in experimentation and pilot mode in a public cloud with non-critical applications and, once the security and service-level assurance issues improve, migrate their hosting environments and data centres to dedicated private cloud services. The market right now is really a subset of the managed hosting business, a $9 to $10 billion a year business that just continues to grow. The computing environment of the future will always be cloudy, with a variety of sizes and shapes touching each other; one can choose the right environment, tools and cloud vendor so that the organization is always on the seventh cloud in its computing. Widespread
acceptance of virtualization environments and clouds will happen in stages: a couple of issues associated with data security, interoperability and separation of the service layer still need to be addressed. The new wave of innovation will bring a lot of maturity and reliability to cloud computing, along with better governance and security models, once technologies start functioning seamlessly and reliably. With drastic improvements in WAN speed, the future trend will increasingly see the front end of applications separated from the back end, which holds the scalable databases. This may happen more easily and effectively by allowing companies to host the data inside their own private data centers and simply allow the front-end apps running in the cloud to tunnel in and connect to the data. There could also be a move towards development of an offline component of web applications, in addition to the standard online component, which stores the application locally and caches user data so that users can continue to work uninterrupted through any hiccups to a Web session or connectivity outages; when Internet connectivity is restored, any work and changes made offline are simply synced up with the online version of the application.
3. CONCLUSION
With its convenient, on-demand model for
network access to a shared pool of configurable
computing resources, cloud computing is rapidly
emerging as a viable alternative to traditional
approaches and is carrying a host of proven
benefits to government agencies.
Costs are being significantly reduced, along
with personnel time spent on computing issues.
Storage availability increases, high automation
eliminates worries about keeping applications up
to date, and flexibility and mobility are
heightened, allowing workers to access
information anytime, anywhere. Cloud computing
can be rapidly provisioned and released with
minimal management effort or service provider
interaction. Ultimately, with its offering of
scalable, real-time, internet-based information
technology services and resources, the cloud can
satisfy the computing needs of a universe of users,
without the users incurring the costs of
maintaining the underlying infrastructure.
4. THE FUTURE
Anything-as-a-service offerings need time to
develop.
The hype is strong around anything-as-a-
service, but given the fact that your peers are
adopting it very slowly, it makes sense to wait on
this. It's likely to be several years before offerings are mature, so don't rush into anything here. That said, the pain of keeping up with storage growth is real, so momentum could gather around a successful offering quickly. When thinking about anything-as-a-service, consider the following:
Fit with business processes is critical.
Storage capacity alone is not enough to be
compelling in this space. Figuring out how
storage-as-a-service offerings will integrate with
your existing applications and processes is a key
consideration. For storage-as-a-service offerings to be successful, they need to do a better and cheaper
job of solving real business problems. So far, the
case for offerings meeting these criteria is
questionable.
Backup-as-a-service does make sense.
Improving backup of critical data is a key
priority for many IT organizations, and a backup-
as-a-service offering can offer a less capital-intensive path to improving capabilities than going it alone. If you don't have an effective plan to get data to a second site, a cloud offering might make sense for corporate-wide PC backup or
server backup for smaller systems or systems in
remote offices.
Measure the cost over time.
The point of leveraging a service instead of
building it yourself is to get better capabilities at a
lower cost. Compare the total cost of running
storage internally with the total cost of getting
capacity as a service over several years, at least
three.
If the numbers don't add up, then you might
want to wait until the offerings are more mature
and the cost has declined even further.
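As a rough illustration of this comparison, the sketch below totals the cost of internal storage against a per-gigabyte service fee over three years. All cost figures are hypothetical placeholders, not data from this paper.

def internal_storage_tco(hardware_cost, annual_ops_cost, years):
    # Total cost of running storage in-house: up-front hardware plus yearly operations.
    return hardware_cost + annual_ops_cost * years

def service_storage_tco(monthly_fee_per_gb, capacity_gb, years):
    # Total cost of buying the same capacity as a service, billed monthly.
    return monthly_fee_per_gb * capacity_gb * 12 * years

# Hypothetical figures for illustration only; they are not taken from this paper.
years = 3
internal = internal_storage_tco(hardware_cost=50_000, annual_ops_cost=12_000, years=years)
service = service_storage_tco(monthly_fee_per_gb=0.15, capacity_gb=20_000, years=years)
print(f"Internal over {years} years: ${internal:,.0f}; as-a-service: ${service:,.0f}")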
REFERENCES
[1] Hewlett Packard Laboratories, Taking
Account of Privacy when Designing Cloud
Computing Services, 2009.
[2] INPUT, Federal Industry Insights, Evolution
of the Cloud: The Future of Cloud Computing
in Government, March 2009.
[3] Sun Microsystems, Introduction to Cloud
Computing Architecture, 2009.
[4] VMware: Clearing the Fog for a Look into the
Clouds, Mark Bowker, Enterprise Strategy
Group.
ADVANCED HYBRID, ENERGY-EFFICIENT, DISTRIBUTED CLUSTERING
APPROACH FOR AD HOC SENSOR NETWORKS (AHEED)
Prathibhavani P M, Lecturer, Acharya Institute of Technology, prathibhavani@acharya.ac.in
Harish G., Asst. Prof., Dr. Ambedkar Institute of Technology, c_harishg@yahoo.com

_______________________________________________________________________________________________________________________
ABSTRACT

In clustering-based wireless sensor
networks (WSNs), a certain sensing area is divided
into many sub-areas. Cluster formation and cluster
head selection are performed in the setup phase. With a predetermined probability and random selection, every round in the WSN has different cluster numbers and cluster heads. In this paper, we propose a novel distributed clustering approach for long-lived ad hoc sensor networks. We present a protocol, AHEED (Advanced Hybrid Energy-Efficient Distributed clustering), that periodically selects cluster heads according to a hybrid of the node residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. In order to consume the nodes' energy evenly, this paper also proposes a fixed optimal cluster (FOC) number: the entire network is analyzed first to obtain the optimal cluster number, which is then applied when forming clusters. Moreover, there are two different optimal cluster numbers depending on the location of the base station: one for a base station set up at the center of the sensing area, and the other for a base station set up far away from the sensing area. The simulation results show that the entire network lifetime can be extended very well when the base station is set up at the center of the sensing area.

I. INTRODUCTION
As technology has improved rapidly, sensor nodes are becoming smaller and smaller. A tiny sensor node contains a power supply unit, processing unit, receiver unit, transceiver with amplifier unit and antenna unit, as shown in Fig. 1. In recent years, not only have sensor nodes become smaller, but advancing technology has also made their chips smaller and faster, reduced the power they need and lengthened their transmission distance.

Fig.1. Architecture of Sensor Node


With these improved characteristics of sensor nodes, the lifetime of wireless sensor networks (WSNs) can be extended considerably. The main purpose of wireless sensor nodes is to collect useful data and transmit these data back to the base station for further use. These sensor nodes are normally deployed into hardly reachable areas to monitor specific events. Hence, the energy of sensor nodes needs to be considered seriously to allow longer surveillance. Traditionally, sensor nodes transmit data directly to the base station, and their energy is drained very quickly because of the distance constraint. Instead of transmitting the data directly back to the base station, the multi-path technique, in which every sensor node forwards its data to a closer sensor node, achieves better energy performance than direct transmission. However, the sensor nodes in both of these techniques consume energy unevenly, and as a result some sensor nodes run out of energy quickly. Therefore, clustering-based WSNs were proposed to address the uneven energy consumption of WSNs. In centralized WSNs the base station takes over all processing and calculation, so the entire network performance is much better; this is because the base station is powered by the city supply and most calculations are done there. In this paper, WSNs supplied by city power are not considered. What does need to be considered is the power consumption when sensor nodes are deployed to places where their batteries are hard to recharge and the sensor nodes
organize themselves into clusters. Therefore, extending the network lifetime by saving the sensor nodes' energy is a major concern of this paper.
Moreover, this paper first analyzes the number of clusters for the entire WSN. By applying the optimal cluster number to the WSN, the lifetime of the WSN can then be extended very well.
The essential operation in sensor node clustering
is to select a set of cluster heads from the set of
nodes in the network, and then cluster the
remaining nodes with these heads. Cluster heads
are responsible for coordination among the nodes
within their clusters and aggregation of their data
(intracluster coordination), and communication
with each other and/or with external observers
on behalf of their clusters (intercluster
communication). Fig. 1 depicts an application
where sensors periodically transmit information
to a remote observer (e.g., a base station). The
figure illustrates that clustering can reduce the
communication overhead for both single-hop and
multihop networks. Periodic reclustering can
select nodes with higher residual energy to act as
cluster heads. Network lifetime is prolonged
through
1. Reducing the number of nodes contending for
channel access,
2. Summarizing information and updates at the
cluster heads, and
3. Routing through an overlay among cluster
heads, which has a relatively small network
diameter.

In this work, we present a general distributed
clustering approach that considers a hybrid of
energy and communication cost, and places the base station at the centre to minimize the consumption of energy during the steady-state phase.
Based on this approach, we present the AHEED
(Advanced Hybrid, Energy-Efficient, Distributed)
clustering protocol. AHEED has five primary objectives:

1. prolonging network lifetime by
distributing energy consumption,
2. terminating the clustering process within
a constant number of iterations,
3. minimizing control overhead (to be linear
in the number of nodes), and
4. Producing well-distributed cluster heads.
5. Placing base station at the centre
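The paper does not give pseudocode for this selection step; as background, the sketch below illustrates the HEED-style probabilistic cluster-head selection that AHEED builds on, in which a node's probability of announcing itself as a tentative cluster head scales with its residual energy and doubles each iteration so that the process terminates in a bounded number of iterations. The parameter values here are illustrative assumptions.

import random

def initial_ch_probability(e_residual, e_max, c_prob=0.05, p_min=1e-4):
    # Initial cluster-head probability grows with residual energy, with a floor
    # so every node finishes the protocol in a bounded number of iterations.
    return max(c_prob * e_residual / e_max, p_min)

def clustering_iteration(nodes, e_max):
    # One iteration: each node announces itself as a tentative cluster head with
    # its current probability, then doubles that probability for the next iteration.
    tentative_heads = []
    for node in nodes:
        if node["ch_prob"] == 0.0:
            node["ch_prob"] = initial_ch_probability(node["energy"], e_max)
        if random.random() < node["ch_prob"]:
            tentative_heads.append(node["id"])
        node["ch_prob"] = min(node["ch_prob"] * 2.0, 1.0)
    return tentative_heads

nodes = [{"id": i, "energy": random.uniform(0.2, 1.0), "ch_prob": 0.0} for i in range(20)]
heads = clustering_iteration(nodes, e_max=1.0)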

II. RELATED WORK
Many protocols have been proposed for ad hoc
and sensor networks in the last few years.
Reducing energy consumption due to wasteful
sources has been primarily addressed in the
context of adaptive MAC protocols, such as
PAMAS, DBTMA, EAR, and S-MAC. For example, S-
MAC periodically puts nodes to sleep to avoid idle
listening and overhearing. TinyOS introduces
random delays to break synchronization. Blue
Noise Sampling selects well-distributed nodes to
awaken in order to achieve optimal field coverage.
Data dissemination protocols proposed for sensor
networks consider energy efficiency a primary
goal. SPIN attempts to reduce the cost of flooding
data, assuming that the network is source-centric
(i.e., sensors announce any observed event to
interested observers). Directed diffusion on the
other hand, selects the most efficient paths to
forward requests and replies on, assuming that
the network is data-centric (i.e., queries and data
are forwarded according to interested observers).
Rumor routing provides a compromise between
the two approaches (source-centric versus data-
centric). The dissemination problem is formulated
as a linear programming problem with energy
constraints. This approach assumes global
knowledge of node residual energy, and requires
sensors with specific processing capabilities. A
disjoint path routing scheme is proposed in which
energy efficiency is the main parameter.
Clustering can be a side effect of other protocol
operations. For example, in topology management
protocols, such as GAF, SPAN and ASCENT nodes
are classified according to their geographic
location into equivalence classes. A fraction of
nodes in each class (representatives) participate
in the routing process, while other nodes are
turned off to save energy. In GAF, geographic
information is assumed to be available based on a
positioning system such as GPS. SPAN infers
geographic proximity through broadcast
messages and routing updates. GAF, SPAN, and
ASCENT share the same objective of using
redundancy in sensor networks to turn radios on
and off and prolong network lifetime. In
CLUSTERPOW, nodes are assumed to be non-homogeneously dispersed in the network. A node
uses the minimum possible power level to
forward data packets, in order to maintain
connectivity while increasing the network
capacity and saving energy. The Zone Routing
Protocol (ZRP) for MANETs divides the network
into overlapping, variable-sized zones. Several
distributed clustering approaches have been
proposed for mobile ad hoc networks and sensor
networks. The Distributed Clustering Algorithm
(DCA) assumes quasi-stationary nodes with real-
valued weights. The Weighted Clustering
Algorithm (WCA) combines several properties in
one parameter (weight) that is used for clustering.
The authors propose using a spanning tree (or
BFS tree) to produce clusters with some desirable
properties. Energy efficiency, however, is not the
primary focus of this work. The authors propose
passive clustering for use with on-demand routing
in ad hoc networks. Earlier work also proposed clustering based on degree (connectivity) or lowest-identifier heuristics; these depend on the network diameter, unlike HEED, which terminates in a constant number of iterations. LEACH
clustering [8] terminates in a constant number of
iterations (like HEED), but it does not guarantee
good cluster head distribution and assumes
uniform energy consumption for cluster heads.
III. NETWORK MODEL
This paper proposes a fixed optimal cluster number for the entire network. A 2-dimensional sensing area is assumed. Based on the location of the base station, the optimal cluster number is derived for two cases: the base station at the centre of the sensing area and the base station outside the sensing area. The optimal cluster number for the centre of the sensing area is given by

Kopt = …    (1)

where n represents the number of sensor nodes randomly deployed. The optimal cluster number for the outside of the sensing area is given by

Kopt = …    (2)

where n represents the number of sensor nodes randomly deployed, d is the distance from a
node to the cluster head, M is the sensing area of x
or y axis, and B is the distance from the centre of
sensing area to the outside location of the base
station. Moreover, this paper also defines two phases for the entire network: a setup phase and a transmission phase. In the setup phase, sensor nodes are organized into clusters using the proposed fixed optimal cluster number, and every round uses this number to form clusters. During the transmission phase, the data are transmitted to the cluster head, and the cluster head then forwards them to the base station.

IV. A FIXED OPTIMAL CLUSTER
NUMBER PROTOCOL
As we can see, the LEACH and HEED architectures cannot dissipate the energy of all nodes evenly because of the uneven cluster selection in each round of the network. Therefore, we first find the optimal cluster number for the LEACH architecture and then use it with random deployment. There are two phases in the network. The first is the setup phase, which includes random sensor node deployment and finding the k optimal cluster number for each round. The second is the transmission phase, which includes TDMA and data aggregation.




Fig. 2: Fixed optimal cluster number protocol.
The following steps list the two phases in the network (a sketch of the setup phase follows the list).
1. Nodes firstly deployed into M x M region.
2. Given the B.S. location.
3. B.S. is set to the centre of the sensing area; uses
the fixed optimal cluster numbers in (1) for the
cluster formation phase.
4. B.S. is set to the outside of the sensing area;
uses the fixed optimal cluster numbers in
(2) for the cluster formation phase.
5. Nodes organize themselves into clusters with
the given B.S. location address.
6. Every round will have the same fixed optimal
cluster numbers.
7. Data can be sent back to the cluster head and
forward to base station with TDMA and data
aggregation.
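A minimal sketch of the setup phase listed above, assuming the fixed optimal cluster number k_opt has already been computed from (1) or (2) (it is passed in as a parameter here). The deployment region, node energies and the choice of the highest-energy nodes as cluster heads are illustrative assumptions, not the authors' implementation.

import math
import random

def deploy_nodes(n, m):
    # Step 1: randomly deploy n sensor nodes in an M x M region.
    return [{"id": i, "x": random.uniform(0, m), "y": random.uniform(0, m), "energy": 1.0}
            for i in range(n)]

def setup_phase(nodes, k_opt):
    # Steps 3-6: pick k_opt cluster heads (here simply the k_opt highest-energy alive
    # nodes, as a stand-in for the probabilistic selection) and attach every other
    # node to its nearest cluster head.
    alive = [nd for nd in nodes if nd["energy"] > 0]
    heads = sorted(alive, key=lambda nd: nd["energy"], reverse=True)[:k_opt]
    head_ids = {h["id"] for h in heads}
    clusters = {h["id"]: [] for h in heads}
    for nd in alive:
        if nd["id"] in head_ids:
            continue
        nearest = min(heads, key=lambda h: math.hypot(nd["x"] - h["x"], nd["y"] - h["y"]))
        clusters[nearest["id"]].append(nd["id"])
    return clusters

# Example: 100 nodes in a 100 x 100 area with a fixed optimal cluster number of 5.
clusters = setup_phase(deploy_nodes(n=100, m=100), k_opt=5)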

V. SIMULATION RESULTS
In order to obtain better simulation results, MATLAB is used as the simulator. In the simulation environment, 100 sensor nodes are randomly deployed, as shown in Fig. 3.

Fig. 3: 100 sensor nodes are randomly deployed
Fig. 4 shows the formation of clusters and how many nodes are still alive. 'o' denotes a sensor node that is still alive, '.' denotes a node that is no longer alive, and '*' denotes a cluster head. Each sensor node's colour also indicates the cluster to which it belongs.

Fig.4: Cluster formation and cluster head selection.
By evaluating 310 rounds in Fig. 5 and 1200 rounds in Fig. 6, the numbers of cluster heads in the two figures are different, and the number of nodes belonging to each cluster also differs between the two figures. These figures show that sensor nodes closer to the base station run out of energy quickly.

Fig. 5: Sensor nodes are still alive after 310 rounds.


Fig. 6: Sensor nodes still alive after 1200 rounds.
With the base station set at the centre of the sensing area, Fig. 6 shows how many nodes are still alive and how many are no longer alive. It is obvious that the energy usage of nodes far away from the base station is lower than that of nodes closer to the base station. Therefore, the probability and cluster formation need to be adjusted in order to dissipate the energy evenly.
VI. CONCLUSION
In this paper, we have presented a distributed, energy-efficient clustering approach for ad hoc sensor networks.
Our approach is hybrid: Cluster heads are
probabilistically selected based on their residual
energy, and nodes join clusters such that
communication cost is minimized. We assume
quasi-stationary networks where nodes are
location-unaware and have equal significance. A
key feature of our approach is that it exploits the
availability of multiple transmission power levels
at sensor nodes and placing the base station at the
centre. Based on this approach, we have
introduced the AHEED protocol, which terminates
in a constant number of iterations, independent of
network diameter. Simulation results
demonstrate that AHEED prolongs network
lifetime, and the clusters it produces exhibit
several appealing characteristics. AHEED achieves
a connected multihop intercluster network when
a specified density model and a specified relation
between cluster range and transmission range
hold. Our approach can be applied to the design of
several types of sensor network protocols that
require scalability, prolonged network lifetime,
fault tolerance, and load balancing. Although we
have only provided algorithms for building a two-
level hierarchy, we can extend the protocols to
multilevel hierarchies. This can be achieved by
recursive application at upper tiers using bottom-
up cluster formation. The sensors which become cluster heads in the LEACH architecture spend relatively more energy than other sensors because they have to receive information from all the sensors within their cluster, aggregate this information and then communicate with the base station. Hence, they run out of their energy faster
than other sensors. We have found the optimal
number of cluster-heads for the proposed
algorithm that minimizes the energy spent in the network when sensors are uniformly distributed in a bounded region. We know that the number of cluster-heads depends on the distance between the base station and the sensor network.
VII. BIBLIOGRAPHY
[1] D. Estrin, L. Girod, G. Pottie, and M. Srivastava,
Instrumenting the World with Wireless Sensor
Networks, Proc. Intl Conf. Acoustics, Speech, and
Signal Processing (ICASSP 2001), May
2001,http://citeseer.nj.nec.com/estrin01instrum
enting.html.
[2] G.J. Pottie and W.J. Kaiser, Wireless Integrated
Network Sensors, Comm. ACM, vol. 43, no. 5, pp.
51-58, May 2000.
[3] V. Kawadia and P.R. Kumar, Power Control
and Clustering in Ad Hoc Networks, Proc. IEEE
INFOCOM, Apr. 2003.
[4] S. Narayanaswamy, V. Kawadia, R.S. Sreenivas,
and P.R. Kumar, Power Control in Ad-Hoc
Networks: Theory, Architecture, Algorithm and
Implementation of the COMPOW protocol, Proc.
European Wireless 2002. Next Generation
Wireless Networks: Technologies, Protocols,
Services and Applications, pp. 156-162, Feb. 2002.
[5] C. Intanagonwiwat, R. Govindan, and D. Estrin,
Directed Diffusion: A Scalable and Robust
Communication Paradigm for Sensor Networks,
Proc. ACM/IEEE Intl Conf. Mobile Computing and
Networking (MOBICOM), 2000.

[6] J. Kulik, W. Heinzelman, and H. Balakrishnan,
Negotiation- Based Protocols for Disseminating
Information in Wireless Sensor Networks, ACM
Wireless Networks, vol. 8, nos. 2-3, pp. 169-185,
2002,
citeseer.nj.nec.com/kulik99negotiationbased.
html.
[7] J.-H. Chang and L. Tassiulas, Energy
Conserving Routing in Wireless Ad-Hoc
Networks, Proc. IEEE INFOCOM, Mar. 2000,
http://www.ieee-
infocom.org/2000/papers/417.ps.
[8] W. Heinzelman, A. Chandrakasan, and H.
Balakrishnan, An Application-Specific Protocol
Architecture for Wireless Microsensor Networks,
IEEE Trans. Wireless Comm., vol. 1, no. 4, pp. 660-
670, Oct. 2002.
[9] A. Cerpa and D. Estrin, ASCENT: Adaptive
Self-Configuring Sensor Networks Topologies,
Proc. IEEE INFOCOM, June 2002.
[10] Y. Xu, J. Heidemann, and D. Estrin,
Geography-Informed Energy Conservation for Ad
Hoc Routing, Proc. ACM/IEEE Intl Conf. Mobile
Computing and Networking (MOBICOM), pp. 70-84,
July 2001.
[11] B. Chen, K. Jamieson, H. Balakrishnan, and R.
Morris, Span: An Energy-Efficient Coordination
Algorithm for Topology Maintenance in Ad Hoc
Wireless Networks, ACM Wireless Networks, vol.
8, no. 5, Sept. 2002.
[12] C.R. Lin and M. Gerla, Adaptive Clustering for
Mobile Wireless Networks, IEEE J. Selected Areas
Comm., Sept. 1997.
[13] S. Banerjee and S. Khuller, A Clustering
Scheme for Hierarchical Control in Multi-Hop
Wireless Networks, Proc. IEEE INFOCOM, Apr.
2001.
[14] B. McDonald and T. Znati, Design and
Performance of a Distributed Dynamic Clustering
Algorithm for Ad-Hoc Networks, Proc. Ann.
Simulation Symp., 2001.

BUILDING EXTRACTION IN AERIAL IMAGES USING SCALING AND
MORPHOLOGICAL TECHNIQUES
Anjum Mujawar, Prof. J. N. Ingole
Assist. Prof., Electronics Dept., Dr. Meghe Institute of Technology & Research, Amravati University
pro.anjum@gmail.com
_____________________________________________________________________________________________________________________________
ABSTRACT
This paper presents the applicability of image scaling and morphological operators to the detection of buildings in grey-scale aerial photography. The basic idea of scaling is to generate a multiscale representation from a one-parameter family of derived signals. The second method involves several morphological operators, among which is an adaptive hit-or-miss transform with structuring elements of varying size and shape, intended to determine the optimal filtering parameters automatically. We have implemented and tested both operations on various images.
KEYWORDS
Scaling, Homogeneity operator, Mathematical
Morphology, Hit Or Miss Transform,
Bidimensional Granulometry.
I. INTRODUCTION

A significant amount of work that has been
done in the field of aerial image understanding
has concentrated on the detection of buildings
and roads.
We examine the use of image scaling and morphological operators. The basic idea of scale-space theory is to generate a multi-scale representation from a one-parameter family of derived signals. The scale parameter is intended to describe the current level of scale. The space is constructed by smoothing the original image with Gaussian kernels of successively increasing width.
Morphology has proved to be effective for many applications in remote sensing. Our method is based on a sequence of different morphological operators applied to binary images, among which is the Hit-or-Miss transform. We compute a granulometry [13] to determine the parameters of the operators in an automatic way.
The paper is organized as follows. In
section II, we present the building extraction
method based on scaling. In section III, the morphological operator method is explained. In section IV, the conclusion is given, and references are given in section V.
II. SCALING
A. Theory
Scaling an image involves the generation of a
family of images at different scales from a single source image []. The image family is created by convolving Gaussian kernels of successively increasing widths with the original image. This results in the source image becoming increasingly smoothed and successive suppression of fine-scale information [6]. The discrete Gaussian kernel h(n1, n2) in two dimensions is given by the following two equations.
h_g(n_1, n_2) = exp( -(n_1^2 + n_2^2) / (2σ^2) )    (0.1)

h(n_1, n_2) = h_g(n_1, n_2) / Σ_{n_1} Σ_{n_2} h_g(n_1, n_2)    (0.2)

where σ is the standard deviation and (n_1, n_2) indexes a specific pixel in the kernel.
Scaling an image cannot be carried out with
any kind of smoothing or averaging operation. It
is essential that the smoothing operation does not
introduce any artifacts as the transformation from
coarse to fine scales occurs. Only Gaussian
smoothing obeys this criterion.
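A small NumPy sketch of equations (0.1) and (0.2) and of building the scale-space family by convolution; the 15 x 15 kernel size follows the experiment described below, while the input image here is a random stand-in for the aerial photograph.

import numpy as np
from scipy import ndimage

def gaussian_kernel(size, sigma):
    # Normalized discrete Gaussian kernel h(n1, n2) from equations (0.1)-(0.2).
    half = size // 2
    n1, n2 = np.mgrid[-half:half + 1, -half:half + 1]
    h_g = np.exp(-(n1 ** 2 + n2 ** 2) / (2.0 * sigma ** 2))  # equation (0.1)
    return h_g / h_g.sum()                                   # equation (0.2)

def scale_space(image, sigmas, size=15):
    # One increasingly smoothed image per sigma, using a 15 x 15 kernel as in the text.
    return [ndimage.convolve(image.astype(float), gaussian_kernel(size, s), mode="nearest")
            for s in sigmas]

image = np.random.rand(128, 128)        # stand-in for the aerial photograph
family = scale_space(image, sigmas=[1.0, 2.0, 3.0])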
Scaling or scale-space events are linked to
abstraction. Abstraction according to Mayer [6] is
defined as the increase of the degree of
simplification and emphasis. As an aerial image is
progressively smoothed, the substructures of houses and other objects are suppressed. At high levels of smoothing, small objects are completely annihilated and large objects appear as blobs, having lost all substructure. According to Mayer's definition, abstraction is occurring: unnecessary information in the image has been eliminated, resulting in an emphasis of what remains.



Figure (1): Source Image
B. Investigation

An investigation into the effects of image scaling has been carried out on the aerial image in Figure 1, which depicts a sparsely built-up area. There are a number of houses, a few road segments, some natural vegetation, and large barren sand patches. This variety of scenery is useful as it allows the effects of smoothing and abstraction to be more accurately evaluated. Figure 2 is a Gaussian-smoothed version of Figure 1. A Gaussian kernel of size 15 by 15 pixels was used with a standard deviation of 3. Notice how the fine image detail is suppressed; in particular, the road markings have begun to blur into the road itself and various cars in the image are almost unrecognizable. Figure 3 represents the source image after it has been convolved with a Gaussian kernel of the same size but with a larger standard deviation. As expected, the image is more smoothed than that in Figure 2.

Figure (2): Smoothed Image 1

The houses now appear as
blobs with their actual shapes becoming
unrecognizable. Smoothing is advantageous
because non-building objects smaller than
buildings are being
removed from the scene - simplifying the
recognition task.
The advantages of image scaling can be
practically demonstrated by applying the
homogeneous operator [7] to the source image
(Figure 1), as well as to a smoothed version
(Figure 2). The homogeneous operator detects
regions of
uniform intensity in the image. This is useful if the
objects being extracted have a uniform
appearance.
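Reference [7] defines this operator; since its exact formulation is not reproduced here, the sketch below uses one common form of a homogeneity operator, marking pixels whose local neighbourhood has low intensity variation. The window size and threshold are assumptions.

import numpy as np
from scipy import ndimage

def homogeneity_regions(image, window=5, max_std=5.0):
    # One common form of a homogeneity operator: mark pixels whose local
    # neighbourhood has low intensity variation (the operator of [7] may differ).
    img = image.astype(float)
    local_mean = ndimage.uniform_filter(img, size=window)
    local_mean_sq = ndimage.uniform_filter(img ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_mean_sq - local_mean ** 2, 0.0))
    return local_std < max_std  # True where the image is locally uniform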
In both figures the roads, house roof panels and
barren sand patches are all identified by the
operator because of their uniform appearance.
However, notice that the regions in Figure 5 are
far less fragmented than those in Figure 4.
The house roof panels have been detected in both images. However, in Figure 12 some of the roof panels have begun to merge with surrounding regions. This indicates that there is a limit to the amount of scaling that can be performed in order to improve object extraction. Overall, it is felt that the image scaling approach has much potential.
III. MORPHOLOGIAL OPERATORS
A. Theory
The method we propose to extract building
objects from VHR-images relies on the use of
binary mathematical morphology operators
which are based on set theory [13]. The two

Figure (3): Smoothed Image 2

Figure(3): Source Image Regions

operators in MM are the erosion (I ⊖ S) and the dilation (I ⊕ S), respectively defined as:

I ⊖ S = { x ∈ Z^2 : S_x ⊆ I }
I ⊕ S = { x ∈ Z^2 : (S')_x ∩ I ≠ ∅ }

with S' and S_x respectively denoting the reflection and the translation by x of the set S. From these basic
operators it is possible to define more complex
operators as we will see throughout this paper. The proposed method is composed of three main steps. The first one consists of binarizing the input grey-level image. The second step is an automatic morphological filtering intended to eliminate some objects in the image and to determine the size of the structuring elements. The third step is the building extraction step itself, based on the use of an adaptive Hit-or-Miss transform. Each of these steps is described below.

Figure (4): Smoothed Image Regions

Fig (5). VHR Images containing buildings
B. Generation of binary images
To generate the binary image B, the histogram of the input image is computed, and a threshold value T corresponding to the minimum between its two maxima is found:

B(x) = 1 if I(x) ≥ T, 0 otherwise    (0.3)
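A small sketch of this binarization step: locate the histogram minimum between its two largest peaks and threshold as in (0.3). The histogram smoothing and bin count are assumptions added for robustness, not details from the paper.

import numpy as np

def bimodal_threshold(image, bins=256):
    # Threshold T at the histogram minimum between the two largest peaks,
    # then binarize as in equation (0.3).
    hist, edges = np.histogram(image, bins=bins)
    hist = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # light smoothing (an added assumption)
    p1, p2 = sorted(np.argsort(hist)[-2:])
    valley = p1 + int(np.argmin(hist[p1:p2 + 1]))            # minimum between the two maxima
    T = edges[valley]
    return (image >= T).astype(np.uint8), T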

C. Morphological filtering
Before extracting the buildings from the generated images, an automatic morphological filtering is also performed. The aim of this filtering is to remove objects whose size is lower than the minimum size of a building in the raw image. These objects may be seen as noisy data capable of disturbing the extraction process. The filtering used is a morphological opening, defined as a combination of erosion and dilation:

I ∘ S = (I ⊖ S) ⊕ S    (0.4)

Fig. 2. Output of hit-and-miss image

D. Building detection
For building detection we propose to use the Hit
Or Miss Transform (HMT) which consists in a
double erosion of the image I and its complement I^C (i.e. the background) with two disjoint structuring elements E and F. This transform is particularly useful for template matching and is defined as:

HMT(I)(E, F) = (I ⊖ E) ∩ (I^C ⊖ F) = { x : E_x ⊆ I, F_x ⊆ I^C }    (0.5)
Since we try to detect square or rectangular
buildings of various sizes, we adapt the HMT to be
able to take into account some structuring
elements E and F with varying sizes and shapes.
Our adaptive HMT is defined as:

HMT(I)(E, F) = ⋃_{a,b,c,d} [ (I ⊖ E_{a,b}) ∩ (I^C ⊖ F_{c,d}) ]    (1.6)
Thus, the result of this HMT is defined as the
union of all the results of the transform applied
with a given pair of structuring elements. The two
variable structuring elements Ea,b and Fc,d are
respectively defined as a rectangle of size a x b
and a frame of size c x d, with the constraints c > a and d > b. The corresponding sets contain respectively all the possible heights and widths of the SE, and a coefficient is used to determine the uncertain area between E and F. In other words, it helps to mark the area between pixels which surely belong to buildings and pixels which surely belong to the background. At the end of this operation, if the parameters of the HMT have been correctly defined, only the buildings are retained with their respective positions. However, the shape of these buildings no longer corresponds to the initial shape. Indeed, the HMT is based on erosions, which reduce the size of the objects.

Fig. 3. Output of extracted image
Thus, a post processing is necessary to rebuild the
shape of the detected buildings. An additional
morphological operator is used for this task, which corresponds to a reconstruction: using two images, an input image I and a marker image M, a conditional dilation with SE B is applied until convergence, defined as:

M ⊕_I B = (M ⊕ B) ∩ I    (1.7)
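A minimal sketch of the adaptive hit-or-miss transform of (1.6) with rectangle and frame structuring elements, implemented with binary erosions; the candidate size list, frame thickness and the width of the uncertain area are illustrative assumptions.

import numpy as np
from scipy.ndimage import binary_erosion

def rectangle(h, w):
    # Solid rectangular structuring element E of size h x w.
    return np.ones((h, w), dtype=bool)

def frame(h, w, gap=1, thickness=1):
    # Frame-shaped structuring element F surrounding an h x w rectangle, leaving a
    # 'gap' of unconstrained pixels (the uncertain area) between E and F.
    oh, ow = h + 2 * (gap + thickness), w + 2 * (gap + thickness)
    f = np.ones((oh, ow), dtype=bool)
    f[thickness:oh - thickness, thickness:ow - thickness] = False
    return f

def adaptive_hmt(binary_img, sizes, gap=1):
    # Union over candidate building sizes of (I eroded by E) AND (I^C eroded by F),
    # following equation (1.6).
    binary_img = np.asarray(binary_img, dtype=bool)
    background = ~binary_img
    result = np.zeros_like(binary_img, dtype=bool)
    for h, w in sizes:
        hit = binary_erosion(binary_img, structure=rectangle(h, w))
        miss = binary_erosion(background, structure=frame(h, w, gap=gap))
        result |= hit & miss
    return result

# Example: look for roughly square buildings between 10 and 20 pixels across.
# detected = adaptive_hmt(binary_image, sizes=[(s, s) for s in range(10, 21, 2)])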

Figure (2) shows the output of the hit-and-miss transform, in which some information is missing. Figure (3) shows the final extracted output image, which recovers the information lost in the hit-and-miss transform.

IV CONCLUSION

The feasibility of applying scaling and morphological operators for the extraction of buildings from monocular grayscale aerial photography has been examined. Scaling an image reduces its information content. Both noise and meaningful information are removed due to scale-space events. The suppression of meaningful information has been shown to actually improve object detection in some instances. This is because redundant substructure is removed, emphasizing the objects themselves. The abstraction achieved by scaling offers the potential to develop object models that more closely match the actual objects. This is because models based on abstracted objects can be less rigid: there is no need to model object substructure and object details. For instance, if an image is smoothed enough, a square house and a round hut will appear almost identical. A single model can be developed to find the blobs; specific object shape has ceased to be a criterion.
We extended our previous works [12] by introducing a bidimensional granulometry in the filtering step. This morphological profile helps to define automatically the structuring elements used in the hit-or-miss transform. The morphological operator produces a far better segmentation than the homogeneous operator in certain classes of aerial photographs.
REFERENCES
[1] Hodgson, M.E., Window Size and Visual
Image Classification Accuracy: An
Experimental Approach, ASPRS/ACSM
Technical Paper, 1994.
[2] Burns, J.B., Hanson, A.R. and Riseman, E.M., Extracting Straight Lines, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 4, July 1986, pp. 425-455.
[3] Venkateswar, V. and Chellappa, R., Extraction of Straight Lines in Aerial Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 11, Nov. 1992, pp. 1111-1114.
[4] Huertas, A. and Nevatia, R., Detecting
Buildings in Aerial Images, Computer Vision,
Graphics and Image Processing, vol. 41, 1988,
pp 131-152.
[5] Mohan, R. and Nevatia, R., Using Perceptual
Organization to Extract 3-D Structures,
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 11, Nov. 1989, pp. 1121-1139.
[6] Mayer, H., Abstraction and Scale-Space Events in Image Understanding, International Archives of Photogrammetry and Remote Sensing, vol. 31, part 3, Vienna, 1996, pp. 523-528.
[7] Levitt, S. and Aghdasi, F., Texture Measures
for Building Recognition in Aerial
Photographs, Proceedings of COMSIG 1997,
Grahamstown South Africa, Sept, 1997, pp 75-
80.
[8] S. Lhomme, C. Weber, D. He, and D. Morin,
"Building extraction from vhrs images," in
ISPRS Congress, Istanbul, Turkey, 2004.
[9] I. Destival, "Mathematical morphology applied
to remote sensing," Acta Astronautica, vol. 13,
no. 6/7, pp. 371-385, 1986.
[10] F. Laporterie, G. Flouzat, and O. Amram, "Mathematical morphology multi-level analysis of trees patterns in savannas," in IEEE International Geosciences and Remote Sensing Symposium, 2001, pp. 1496-1498.
[11] P. Soille and M. Pesaresi, "Advances in mathematical morphology applied to geoscience and remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 40, no. 9, pp. 2042-2055, September 2002.
[12] J. Weber, S. Lefevre, and D. Sheeren, "Building extraction in vhrs images with mathematical morphology," in International Conference on Spatial Analysis and Geomatics, Strasbourg, France, September 2006.
[13] J. Serra, Image Analysis and Mathematical Morphology. Academic Press, 1982.





AUTOMATIC DETECTION OF HUMAN SKIN ALLERGIES USING IMAGE PROCESSING TECHNIQUE
Mr. Shamshekhar Patil, Asst. Prof., Dept of CSE, Dr. Ambedkar Institute of Technology, Bangalore
Mrs. Shiva leela S.C., Lecturer, Dept of MCA, Dr. Ambedkar Institute of Technology, Bangalore
Mr. Arvind C.S., M.Tech, Dr. Ambedkar Institute of Technology, Bangalore
csarvind2000@gmail.com
ABSTRACT
The aim of this paper is to develop a system that will automatically perform and evaluate common skin allergy tests on the human arm. The system has two main branches of development: (1) the image processing and vision system for guidance and result evaluation, and (2) the expert system for classification of results. This paper is a progress
report for the vision system and the image
processing algorithms. It presents a method for
preliminary image processing for image
enhancement followed by the main digital
processing section which includes items such as
corrections for non-uniform illumination, hair
removal, adaptive thresholding and morphological
issues.
Index Terms: correlation method, adaptive
thresholding, allergy.
1. INTRODUCTION

The design of the image processing and vision system branch of the diagnosis system is not only
demanding and crucial for the performance of the
system as a whole, but also critical by medical
terms, because the test must be performed on
specific skin areas of the arm that are designated
with the help of anatomic criteria. For example,
the avoidance of veins is critical because a strong
reaction to a stimulant may cause a potentially
lethal allergic shock to the patient. Therefore,
careful image processing and test planning is
essential for the design of the whole system.
The first step in determining candidate
locations for the allergy agent placement is to
scan the skin and exclude the areas that have
lesions, wounds and veins. Previous research [1]-
[7] has shown the effectiveness of vein imaging
via the use of infrared (IR) illumination. The same
approach has been used in our system, but the IR
image is supplemented with normal visual images
and augmenting visible, infrared and laser
illumination.
For the complete automation of the
allergy test, the help of a machine vision system is
a strong prerequisite. The machine vision system
needs to address the following issues: i) Detection
of areas not suitable for performing the test. ii)
Monitoring of the subject position and location for
the safe and accurate guidance of the arm. iii)
Selection of the areas for stimulant dispensing. iv)
Evaluation of reactions with respect to the blood
concentration (erythema). v) Evaluation of
reactions in case of rash development.
2. IMAGING SYSTEM

The imaging system will consist of two
cameras, one for the IR imaging and one for the
normal light imaging. The IR camera is used
primarily at the first stage of the test for the
determination of vein location and any unusual
concentration of blood on the skin. The secondary
use of the infrared camera is for the detection of
the reaction results after the placement of the
allergy reagents. Any concentration of blood that shows up as an erythema will also be visible in IR as a dark blob, because of the absorption of IR light by the blood's haemoglobin. The visible
light camera serves multiple purposes: For
measuring the distance of the arm to the camera
via calibrated laser beam marks projected on the
object; For measurement of real-life dimensions
of the reactions on the skin; And for scanning
three dimensional anaglyph of reactions in cases
of blisters appearing via help of a projected laser
line on the skin.
The illumination of the work area is
performed via both IR and visible (white) light
LED sources that are sufficiently diffused via
air/polymer diffusers. Reflections are minimised
by polarizing filters in the sources and cameras,
oriented in perpendicular directions. The infrared optical system setup is shown in Fig. 2.1. The sensitivity of the IR camera is selectively limited to the IR region via proper rejection filters that do not allow visible light into the camera. In Fig. 2.2, a sample image from a working prototype setup is shown, clearly displaying the benefits of IR imaging for subcutaneous vein detection.

Figure 2.1: Infrared optical system.




Figure 2.2: Capture with IR source
illumination.

3. PROPOSED SYSTEM
3.1 Camera Calibration

In order to be able to perform measurements
on the allergy reactions, the cameras have to be
calibrated so that all real-lens projection
distortion on the image can be compensated in
the image processing software. Camera calibration is the process used to estimate the intrinsic and extrinsic parameters of the camera. In most cases, the reliability and the output of a machine vision system depend on the accurate definition of these parameters. The
calibration can be performed once and after the
camera parameters are estimated they can be
reused in the calculations, since the optical system
remains invariant. The estimation of the intrinsic
camera parameters is achieved by presenting
printed checkerboard images of known
dimensions to the camera. From these known
images the camera model can be iterated until the
parameters are estimated to sufficient precision.
Extrinsic camera parameters can be estimated by
varying the distance and angle of the presented
patterns to the camera. After the camera
calibration is performed, images acquired with
the cameras can be compensated for the lens
distortions and the measured image distances can
be correlated to the real world distances on the
patient skin.
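Since the paper prototypes its algorithms in MATLAB but gives no code, the following is only a rough Python/OpenCV sketch of the checkerboard-based intrinsic calibration described above; the board size, square size and image folder are illustrative assumptions, not details from the paper.

```python
# Hedged sketch: intrinsic calibration from printed checkerboard images (assumed 9x6 inner corners).
import glob
import cv2
import numpy as np

pattern = (9, 6)            # assumed inner-corner count of the printed checkerboard
square = 25.0               # assumed square size in millimetres
# 3-D corner coordinates of the flat board (Z = 0), reused for every view
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.png"):        # assumed location of the calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Iteratively fits the camera model (intrinsics + distortion) to the detected corners
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("re-projection error:", rms)
# Later captures can then be undistorted before measuring real-world distances on the skin:
# undistorted = cv2.undistort(image, K, dist)
```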
3.2. Image Acquisition

At the current stage of the project the
image processing algorithms are prototyped and
developed in MATLAB. We have assumed that the arm can be held still for about one second, during which several frames are captured; of course, the capture rate depends on camera characteristics and specifications. The multiple
images are then averaged as a first step for noise
removal. Averaging a number of images creates a
clearer picture, with less noisy pixel values [8].
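As a simple illustration of this averaging step (not the authors' MATLAB code), one possible sketch:

```python
# Hedged sketch: average several captures of the (still) arm to suppress sensor noise.
import numpy as np

def average_frames(frames):
    """frames: list of equally sized greyscale images captured within ~1 s."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)   # noise variance drops roughly as 1/N for independent noise
```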
3.3 Background Detection

For the optimal extraction of the test
points, the calculations need to be constrained to
the actual image area that contains the subject's skin. Therefore, a special setup with a matte black
backdrop is used to ease the exclusion of the
background pixels from the calculations. The
background pixels can therefore be extracted with
the use of simple global thresholding, and the
threshold Tb that was used was set at the level of
the mean value of the image minus the standard
deviation, Equation (1); i.e., for the initial greyscale image I(x,y), where (x,y) are the spatial coordinates, with dimensions M x N pixels, the threshold Tb equals:

T_b = \mu_I - \sigma_I \qquad \text{(Eq. 1)}

where \mu_I = \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N} I(x,y) is the mean intensity of the image and \sigma_I is its standard deviation.
Afterwards, the greyscale image is
converted to binary (bw) using the calculated
threshold Tb

bw(i,j) = \begin{cases} 1, & I(i,j) > T_b \\ 0, & I(i,j) \le T_b \end{cases} \qquad \text{(Eq. 2)}
where (i, j) are the coordinates of each
image pixel. The result of the above process is
shown in Fig.3.1. Knowing the background pixel coordinates, the algorithm processes only the forearm pixels, which reduces the execution time and improves the vein detection procedure.
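A minimal sketch of Eqs. 1-2 as reconstructed above, assuming the image is available as a 2-D NumPy array and that foreground (forearm) pixels are the ones brighter than Tb:

```python
# Hedged sketch of Eqs. 1-2: global threshold at mean minus standard deviation.
import numpy as np

def background_mask(I):
    """I: greyscale image as a 2-D array. Returns bw with 1 for forearm, 0 for backdrop."""
    Tb = I.mean() - I.std()            # Eq. 1: image mean minus its standard deviation
    return (I > Tb).astype(np.uint8)   # Eq. 2: pixels brighter than Tb kept as foreground
```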


Figure 3.1: Background Detection
3.4 Hair Removal

Before we can proceed with the vein detection we must first remove all the elements that may interfere with the final result, such as any hairs that may be present. We remove the traces of hair via morphological closing. The result of this process is shown in Fig.3.2. The selection of the proper structuring element (SE) to be used in this operation is very important [9]-[10]. Experiments have shown that an SE combining a vertical and a horizontal line in the shape of a cross should be used, because of the irregular orientation of the hair. The size of the SE
is estimated on a per-image basis from the
average hair thickness measured on the images.
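A possible OpenCV rendering of this hair-removal step; the cross-shaped structuring element follows the text, while the default size of 5 pixels is only an assumed placeholder for the per-image estimate:

```python
# Hedged sketch: suppress dark hair strands by morphological closing with a cross-shaped SE.
import cv2

def remove_hair(gray, hair_thickness=5):
    """hair_thickness: SE size in pixels, assumed to be estimated per image from the data."""
    se = cv2.getStructuringElement(cv2.MORPH_CROSS, (hair_thickness, hair_thickness))
    # Closing (dilation followed by erosion) fills thin dark structures such as hairs
    return cv2.morphologyEx(gray, cv2.MORPH_CLOSE, se)
```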


Figure 3.2: Hair Removal.
3.5 Uneven Illumination Correction

One of the most frequent problems in
image processing is uneven illumination. The result of undesirable non-uniform illumination is that regions of the image are dark and their contrast is too low. The human eye has no trouble perceiving such images, because the brain compensates for the differences, taking into account the ambient illumination. Unfortunately, the computer cannot automatically compensate for this effect, so the next goal of the processing is to flatten the illumination component in the image. An image is a function consisting of two components, the reflectance and the illumination, which are combined in a non-linear manner [11]. The desired goal is to remove the illumination gradient from the image and leave only the object's reflectance.
The illumination component can be found
at the low frequencies of the image. In order to
calculate this component the image is filtered
with a very low pass filter in frequency domain.
This way, an estimation of the background of the
image is created, which is dominated by the
illumination coefficient. To eliminate any border
effects, the images are transformed from- and to-
the frequency domain via symmetric extension of
the image boundaries. The result of the low pass
filtering is the original image subjected to heavy
smoothing. Finally, the reflectance component of
the image can be calculated by dividing the
original (Fig.3.2) image with the smoothed image.
The division is the only way to extract the
reflectance coefficient because it is combined with
the illumination signal by multiplication (Fig.3.3).
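The sketch below illustrates this reflectance/illumination separation in a simplified way: a large spatial Gaussian blur stands in for the paper's frequency-domain low-pass filter, and reflected borders stand in for the symmetric extension; the parameter values are assumptions.

```python
# Hedged sketch: flatten uneven illumination by dividing the image by a heavily smoothed copy.
import cv2
import numpy as np

def correct_illumination(gray, sigma=50):
    I = gray.astype(np.float64) + 1e-6              # avoid division by zero
    # BORDER_REFLECT mimics the symmetric extension used to suppress border effects
    background = cv2.GaussianBlur(I, (0, 0), sigma, borderType=cv2.BORDER_REFLECT)
    reflectance = I / background                    # illumination combines multiplicatively
    return cv2.normalize(reflectance, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```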

Figure 3.3: Uneven illumination correction.


3.6 Contrast Enhancement

The image produced after the illumination correction has very low contrast. In order to enhance the veins, an intensity transformation called contrast stretching is used. This transformation function compresses the input levels lower than m into a narrow range of dark levels in the output image; similarly, it compresses the values above m into a narrow band of light levels in the output. The value of m can be chosen experimentally to optimize the results. In this case the m coefficient is calculated using statistical elements of the image I(x,y).

m = \mu_I + b\,\sigma_I \qquad \text{(Eq. 3)}

where \mu_I and \sigma_I are the mean and standard deviation of I(x,y), and b is a weighting coefficient chosen experimentally.

The contrast stretching transformation is
described by the limiting function

T(r) = \frac{1}{1 + (m/r)^{E}} \qquad \text{(Eq. 4)}
where r represents the intensities of the input image, T(r) the corresponding intensity values in the output image, and E controls the slope of the function. Fig.3.4 depicts the contrast stretching effect. It must be noted that the algorithm does not take into account the background regions that were identified in the background detection step [11].
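A small sketch of this contrast-stretching step (Eqs. 3-4 as reconstructed above), restricted to non-background pixels; the values of b and E are placeholders, since the paper tunes them experimentally.

```python
# Hedged sketch of Eqs. 3-4: contrast stretching around a statistically chosen midpoint m.
import numpy as np

def contrast_stretch(I, mask, b=0.5, E=4.0):
    """I: illumination-corrected image scaled to [0,1]; mask: 1 on skin, 0 on background.
    b and E are assumed example values; the paper chooses them experimentally."""
    skin = I[mask > 0]
    m = skin.mean() + b * skin.std()                       # Eq. 3, over skin pixels only
    out = 1.0 / (1.0 + (m / np.maximum(I, 1e-6)) ** E)     # Eq. 4: T(r) = 1 / (1 + (m/r)^E)
    return np.where(mask > 0, out, 0.0)                    # background left out, as in the paper
```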






Figure 3.4: Contrast Enhancement.
3.7 Veins Extraction

After the contrast enhancement procedure, the pixels that belong to the veins have darker intensity levels than the pixels that belong to the rest of the skin area of the forearm. Although this enhancement contributes to the extraction of the skin areas that are suitable for performing the allergy test, some image segmentation problems may still remain. This situation is caused by the non-uniform intensity levels of the pixels that form the veins, so the use of a global thresholding method does not produce satisfactory results. Many methods of global, adaptive and dynamic threshold calculation have been tested in order to extract the optimal shape of the veins. Finally, a multilevel thresholding method is used, which produces satisfactory results.

Firstly, the profile of the pixel intensity levels of each row of the image is computed. After intensity profiling, the maximum and the minimum value of each distribution are calculated (Fig. 8). These values are the transition limits of the vein pixel values. The maximum and minimum values from the previous stage of the process are averaged to estimate the lower and the upper limiting levels of the multilevel thresholding, equations (5) and (6):

\theta_1 = \mathrm{mean}(\mathrm{min}) - b\,[\mathrm{mean}(\mathrm{max}) - \mathrm{mean}(\mathrm{min})] \qquad \text{(Eq. 5)}
\theta_2 = \mathrm{mean}(\mathrm{max}) - b\,[\mathrm{mean}(\mathrm{max}) - \mathrm{mean}(\mathrm{min})] \qquad \text{(Eq. 6)}
Afterwards, the greyscale image is
converted to binary (bw) using the
calculated threshold levels


bw(i,j) = \begin{cases} 1, & \theta_1 \le I(x,y) \le \theta_2 \\ 0, & \text{otherwise} \end{cases} \qquad \text{(Eq. 7)}
where (i, j) are the coordinates of each
image pixel. The result of the above process
is shown in Fig.3.5. The value of the coefficient b was determined experimentally and set to 0.38 for best results with the data set used.
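A compact sketch of the row-profile multilevel thresholding (Eqs. 5-7 as reconstructed above), using the reported b = 0.38; the handling of empty rows is an implementation assumption.

```python
# Hedged sketch of Eqs. 5-7: multilevel thresholding from row-wise intensity profiles.
import numpy as np

def vein_mask(I, mask, b=0.38):
    """I: contrast-stretched image; mask: forearm pixels; b = 0.38 as reported in the paper."""
    rows = [I[r][mask[r] > 0] for r in range(I.shape[0])]
    rows = [r for r in rows if r.size]                   # keep rows containing forearm pixels
    mean_min = np.mean([r.min() for r in rows])
    mean_max = np.mean([r.max() for r in rows])
    t1 = mean_min - b * (mean_max - mean_min)            # Eq. 5: lower limit
    t2 = mean_max - b * (mean_max - mean_min)            # Eq. 6: upper limit
    return ((I >= t1) & (I <= t2) & (mask > 0)).astype(np.uint8)   # Eq. 7
```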

Figure 3.5: Veins extraction
3.8 Test Point Dispersion

The most crucial stage of the application
is the selection of points suitable for performing
the allergy test. These points must satisfy rules
that are set according to the standard medical
procedure for allergic tests. Let d denote the
maximum diameter that a test result can reach
(we assume that the test results will be circularly
shaped). The minimal distance of two adjacent
test points must be greater than d to avoid
overlapping of the test results. Another factor to
consider in the algorithm is that certain areas of the forearm, such as veins, glands, scratches etc., must be avoided. In order to build a fast
dispersion algorithm and to ensure that the test
points will not overlap undesirable areas, a
morphological operation of dilation is used. A
structuring element with a disk shape is applied
because of the roughly circular shape of the
allergic test results. Its size must be slightly larger than the estimated maximum diameter d of an
allergy reaction to secure a safe distance from the
prohibited areas. The examination of the test
points is accomplished through the distance
calculation between any two possible points. For
every new candidate test point the distance
between it and all of the previous points that have
already been set and registered is calculated. The
result of this process is shown in Figure 3.6.
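One possible sketch of this dispersion step: the forbidden-area mask is dilated with a disk slightly larger than the maximum reaction diameter d, and candidate points are accepted greedily only if they are farther than d from every point already registered. The 3-pixel margin and the pixel-by-pixel scan are assumptions, not the authors' implementation.

```python
# Hedged sketch: dilate forbidden regions with a disk SE, then keep points at least d apart.
import cv2
import numpy as np

def disperse_test_points(forbidden, d=20):
    """forbidden: binary mask of veins/lesions/background; d: assumed max reaction diameter (px)."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (d + 3, d + 3))
    unsafe = cv2.dilate(forbidden, se)        # keeps a safety margin around prohibited areas
    points = []
    ys, xs = np.where(unsafe == 0)
    for y, x in zip(ys, xs):
        # accept only if farther than d from every point already registered
        if all((y - py) ** 2 + (x - px) ** 2 > d * d for py, px in points):
            points.append((y, x))
    return points
```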


Figure 3.6: Test point dispersion.

4. READING SKIN TEST REACTIONS

After the allergens have reacted on the skin, the stage of result recognition follows. Ten to twenty
minutes after the placement of the allergens at the
designated positions of the skin, the results of the
reactions appear. An allergen reagent causes a
small blister that is called a wheal, and a red
region, which is called an erythema, appears
surrounding the wheal. To obtain best results the
dimensions of the allergen reactions are
measured under high illumination.
The mean values of the maximum vertical and
horizontal diameter of the wheal and erythema
are calculated. These values are used to grade the
reactivity of the allergic response. The grading is implemented by a scoring system [12]. The exact
shape of the wheal can be recovered in 3D via the
scanning of the area with a projected laser line.
This helps pinpoint the exact boundary of the
wheal without being distracted by any color
variations of the skin or the erythema.

5. CONCLUSION

In this paper we have presented a computer aided
testing and diagnosis system for allergy testing.
We have focused mainly on the design of the
vision and image processing system, which is based on the principle that the blood's haemoglobin absorbs IR light, therefore making the detection of regions with high blood concentrations, like veins, easier. The
development of this system is still in its early
stages and the work is still in progress; therefore,
some described features may change for the
benefit of speed and accuracy. In the current
stage of development, several methods are being
evaluated on a laboratory prototype setup to
determine the optimal procedure for allergy
detection (results) and classification.



6. REFERENCES

[1] H. D. Zeman, G. Lovhoiden and H. Desmhukh,
Optimization of subcutaneous vein contrast
enhancement, Proc. SPIE 2000 Biomedical
Diagnostic, Guidance and Surgical-Assist Systems
II, vol.3911, pp.50-57, May 2000.
[2] H. D. Zeman, G. Lovhoiden and H. Desmhukh,
Design of a Clinical Vein Contrast Enhancing
Projector, Proc. SPIE 2001 Biomedical
Diagnostic, Guidance and Surgical-Assist Systems
III, vol.4254, pp.204-215, June 2001.
[3] G. Lovhoiden, H. Desmhukh and H. D. Zeman,
Clinical Evaluation of Vein Contrast
Enhancement, Proc. SPIE 2002 Biomedical
Diagnostic, Guidance and Surgical-Assist Systems
IV, vol.4615, pp.61-70, May 2002.
[4] G. Lovhoiden, H. Desmhukh, C. Vrancken, Y.
Zhang, H. D. Zeman and D. Weinberg,
Commercialization of Vein Contrast
Enhancement, Proc. of SPIE 2003 Advanced
Biomedical and Clinical Diagnostic Systems,
vol.4958, pp.189-200, July 2003.
[5] G. Lovhoiden, H. Desmhukh and H. D. Zeman,
Prototype vein contrast enhancer, Proc. of SPIE
2004 Advanced Biomedical and Clinical
Diagnostic Systems II, vol.5318, pp.39-49, July
2004.
[6] Ph. Schmid and S. Fischer, Colour Segmentation
for the Analysis of Pigmented Skin Lesions,
Proc. of the Sixth International Conference on
Image Processing and its Applications 1997, vol.2,
pp.688-692, July 1997.
[7] Ph. Schmid-Saugeon, J. Guillod and J. P. Thiran,
Towards a computer-aided diagnosis system
for pigment skin lesions, in Computerized
Medical Imaging and Graphics, vol.27, No.1,
pp.65-78, 2003.
[8] R. C. Gonzalez, R. E. Woods, Digital Image
Processing, Prentice Hall, New Jersey, United
States of America, pp.10-18, 2002.
[9] T. Lee, V. Ng, R. Gallagher, A. Coldman, D. McLean, Dullrazor: A software approach to hair removal from images, Computers in Biology and Medicine, vol.27, no.6, pp.533-543, November 1997.
[10] P. Soille, Morphological Image Analysis:
Principles and Applications, Springer, Berlin,
Germany, pp. 105-133, 2003.
[11] R. C. Gonzalez, R. E. Woods, Digital Image
Processing, Prentice Hall, New Jersey, United
States of America, pp.28-31, 2002.
[12] R. G. Slavin, R. E. Reisman, Expert Guide to
Allergy & Immunology, American College of
Physicians, pp.44, 1999.

ENERGY-EFFICIENT BIOTELEMETRY SYSTEM WITH NANO IP
Mr. R. Mohan, M.E.¹, P. Baranidaran², K. Sudha³
Sr. Lecturer/ECE¹, Final year B.E-ECE²,³
M.P.N.M.J. Engineering College, Chennimalai, Erode.
baranidaranp@yahoo.in², sudha88@gmail.com³
_____________________________________________________________________________________________________________________________

ABSTRACT

Technical advancements in embedded
systems, wireless communications and
physiological sensing allow small size, light weight,
ultra low power, and intelligent monitoring devices.
A number of these devices can be integrated into
energy efficient biotelemetry system with Nano IP,
a new enabling technology for health monitoring,
sports and military applications. The sensing unit, which is a wearable device, consists of sensors and integrated circuits. These devices are capable of monitoring health parameters such as heart rate, breathing rate, blood glucose, body temperature and nerve stimulus together with a personal computer, and the monitored parameters can then be sent to the required location through the Nano IP communication protocol suite. An energy-efficient implementation of Wireless Body Sensor Networks (WBSNs) with embedded technology is designed, featuring a Work-on-Demand protocol. Dedicated to ultra-low-power wireless sensor nodes, the system consists of a low-power Microcontroller Unit (MCU), a Power Management Unit (PMU), reconfigurable sensor interfaces and communication ports controlling a wireless transceiver with GSM technology. The MCU, together with the PMU, provides flexible communication and power-control modes for energy-efficient operation. This system consumes less power than existing models, operating below 3.3 volts. The measured parameters are interfaced with mobile devices through Nano IP technology.

1. INTRODUCTION

The greatest problem faced by most
wireless sensor networks is energy. When a
sensor is depleted of energy, it can no longer fulfill
its role unless the source of energy is replenished.
Therefore, it is generally accepted that the useful life of a wireless sensor expires when its battery runs out.
Here, we provide a solution to extend the
battery lifetime and to reduce the power
consumption through a technology named energy
harvesting. This can be achieved from various
sources such as vibrational energy, kinetic energy
that is produced from the body movements, body
temperature and solar energy.
One method of harvesting vibrational
energy is through the use of piezoelectric crystal.
Here, we are going to implement this efficient
energy harvesting technology in our proposed
system. This energy harvesting is used to reduce
the power consumption and to reduce the cost of
implementation.

2. NANO IP

In our system, the communication will be
made through Nano IP. Nano IP stands for the Nano Internet Protocol, a concept that was created to bring internet-like networking services to embedded and sensor devices, without the overhead of TCP/IP. Nano IP was designed
with minimal overheads, wireless networking and
local addressing in mind.


Fig 1: Nano IP architecture

The protocol actually consists of two transport
techniques, nanoUDP, which is an unreliable
simple transport, and nanoTCP, which provides
retransmissions and flow control. A socket-
compatible API is provided which makes the use
of the protocols very similar to that of IP
protocols. The only difference is in addressing and
the port range. NanoIP makes use of the MAC
address of underlying network technology rather
than IP addresses, which are not needed for local
networks. The port range is 8-bits, 256 ports each
for source and destination.
With the Nano Socket family, only a few
hours are required to add full-featured Internet
access functionality over LAN or WiFi to an
embedded device, including TCP sockets, SSL
encryption, routing, e-mail, and file transfers. The
logical interface between the host application and
the modules is AT+i Protocol, a simple text-
based API that enables fast and easy
implementation of Internet networking and
security protocols. The Nano Socket family
includes a plethora of security features. The
modules serve as a communications offload
engine and inherent firewall, protecting the
embedded device from the Internet attacks.
This module provides a high level of Internet security and encryption algorithms (AES-
128/256, SHA-128/192/256, 3DES, SSL3/TLS1
protocol for a secure client socket session) for
complete, end-to-end encryption support. The
Nano Socket iWiFi also includes the latest WLAN
encryption schemes (64/128-bit WEP,
WPA/WPA2 enterprise).
Nano Socket offers much more than many
other device servers on the market. It acts as a
security gap between the host application and the
network in Full Internet Controller mode. It
supports up to 10 simultaneous TCP/UDP sockets,
two listening sockets, a web server with two
websites, SMTP and POP3 clients, MIME
attachments, FTP and TELNET clients, and
SerialNET mode for serial to IP bridging. It
supports multiple Certificate Authorities and both
client-side and server-side authentication.
Nano Socket reduces the need to redesign
the host device hardware. Minimal or no software
configuration is needed for Nano Socket to access
the Wireless LAN. The serial operating mode
offers a true plug and play mode that eliminates
any changes to the host application.
The Nano Socket family eases
development by including USB, SPI, USART and
RMII interfaces. Both modules share the same
simple header-based pin out, allowing reduced
assembly costs and increased flexibility when
designing solutions. A single product design can
easily accommodate both LAN and WiFi.
The modules support several modes of
operation:
LAN-to-WiFi Bridge allowing transparent
bridging of LAN over WiFi, using direct RMII
connection to existing MAC hardware or PHY-
to-PHY connection.
Serial-to-LAN/WiFi Bridge allowing
transparent bridging of serial data over LAN
or WiFi.
Full Internet Controller mode allowing
simple microcontrollers to perform complex
Internet operations such as e-mail, FTP, SSL,
and others.
PPP emulation allowing existing (e.g.,
cellular modem) designs currently using PPP
to interface to the cellular modem by
connecting transparently over LAN or WiFi
with no changes to application or drivers.
Following are the additional features of
Nano Socket:
Non-volatile, on-chip operational parameter
database
Supports infrastructure and ad-hoc Wireless
LAN networks
SerialNET mode for Serial-to-IP bridging(port
server mode)
Local Firmware update
Remote configuration and firmware update
over the Internet
Retrieval of time data from a Network Time
Server
The typical applications of Nano IP Socket
are adding IP communications over WiFi or LAN
to serial embedded devices and adding SSL
security to M2M solutions. Nano socket firmware
is remotely updateable and thus new security or
connectivity protocols do not require application
redesign, increased memory, or faster processor
speeds.
3. ENERGY HARVESTING

A wireless sensor network that is not
dependent on a limited power source (like a
battery) essentially has infinite lifetime. Much of
the research on wireless sensor networks has
assumed the use of a portable and limited energy
source, namely batteries, to power sensors and
focused on extending the lifetime of the network
by minimizing energy usage. Failure due to other
causes (like structural hardware damage) can be
overcome by self-organization and network re-
configuration. This has motivated the search for
an alternative source of energy to power wireless
sensor networks especially for applications that
require sensors to be installed for long durations
(up to decades) or embedded in structures where
battery replacement is impractical.

The harnessing and collection of ambient
energy into a useful form is called Energy
Harvesting. The use of ambient energy to generate
electricity is not a new concept. The ambient
energy that is being harvested to generate
electricity nowadays includes solar, wind,
mechanical and thermal energy. Harvesting energy for low-power devices like wireless sensors is a big challenge, as the energy harvesting device has to be compatible in size (i.e. small enough) with the sensors. There are
complex tradeoffs to be considered when
designing energy harvesting circuits for wireless
sensor networks arising from the interaction of
various factors.

Following are the some benefits of Energy
Harvesting:
Reduce dependency on battery power as
harvested energy may be sufficient to
eliminate battery completely.
Reduce maintenance costs as Energy
harvesting allows for devices to function
unattended and eliminates service visits to
replace batteries.
Providing sensing and actuation capabilities
in hard-to access hazardous environments on
a continuous basis.
Reduce environmental impact of hazardous
chemicals.

The most efficient and safest energy harvesting technology here is harvesting vibrational energy through the use of a piezoelectric crystal. This method is the safest and most efficient in the sense that the power generated from vibration is not harmful, whereas even solar energy is not safe for the human body in all instances. In vibration-based harvesting, a micro power generator is used to scavenge body vibrations for use in the sensor node. Experimental results have shown that when a piezoelectric crystal is depressed, sufficient energy is harvested to transmit two complete 12-bit digital words wirelessly.

Similarly, a system that harvests energy
from the forces exerted on a shoe during walking
has been demonstrated and indoor locations, like
staircases, are potential locations to harvest
vibrational energy for powering body sensors.

4. SYSTEM ARCHITECTURE

Fig 2: Functional block diagram of energy efficient
biotelemetry system.
The Bio-Telemetry system consists of body
sensor networks in which slave sensor nodes can
be used for bio-medical information acquisition;
signal preprocessing, data storage, and wireless
transmission (sometimes direct transmission
without any preprocessing). These body sensors
are used to sense the various bio-medical
parameters such as breath rate, temperature of
the body, motion of body parts, glucose level in
the blood and also heart beat rate. The sensed
analog signals are converted to digital signals
appropriate for transmission. This type of slave sensor node is called the sensing node. In addition, the function of the sensor nodes can be expanded to monitoring, diagnosing and treatment purposes, and this type of slave sensor node is called the stimulating node. The sensed signals are driven to the master node through the slave nodes. The master node then transmits the sensed signals to the desired computer system, where monitoring of the patient is done. This transmission is performed through the Nano IP module.

Energy harvesting is achieved by attaching a piezoelectric crystal to the moving body parts. The piezoelectric crystal works on the principle of the piezoelectric effect, through which vibrational energy produced by the body parts is converted to
electrical energy that gives sufficient additional
energy for the sensor unit. Energy harvesting
technology increases the life time of all the
sensors, and also reduces the power consumption.


The sensed signals are sent to the microcontroller (PIC 16F877A), which transmits the signals to the Nano IP module. The Nano IP module acts as a transceiver system: it transmits data to the desired location and receives diagnosis signals from the doctor or physician if the patient is in an abnormal condition. The abnormal condition of the patient can be easily detected through highly reliable patient monitoring over the Nano Internet Protocol transmission. The buffer is used
to store the information temporarily and driver is
used to direct the necessary devices to perform
first-aid treatment by stimulating needles, syringe
dispenser, and by controlling valves and relays.

In addition, this system can perform the first-
aid treatment when the physiological parameters
values exceed the safety level. For example, when the breathing rate of a patient falls below the normal value, the valve attached to
the oxygen cylinder will automatically supply the
air to the patient and the amount of oxygen that to
be supplied to the patient will be controlled by the
relay unit present in our system.

5. CIRCUIT IMPLEMENTATION

a. Power Management Unit:
The Power management Unit consists of
piezoelectric crystal, bridge rectifier and storage
battery. The piezoelectric crystal accompanied
with bridge rectifier to harvest energy. The
vibrational energy is converted here as AC voltage
by the crystal and then the bridge rectifier
produces the DC voltage. The harvested energy
supplied to the battery which can store energy
and also the battery receives energy from main
power supply. Thus the lifetime of the battery
extends.


Fig 3: Power Management Unit
b. Sensor Unit:
The body parameters to be measured are acquired through their respective sensors. The heart rate is sensed in an indirect manner, using an IR transmitter and an IR receiver with a reference voltage. The heart rate is simply the number of times the heart pumps blood per unit time. The receiver, associated with the microcontroller, senses the blood flow per unit time. Similarly, blood glucose can be measured using an IR LED and an IR LDR.
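The paper does not give the firmware, but as a rough illustration of turning the IR receiver output into a heart rate, one might count rising threshold crossings over a known sampling window; the sampling rate and threshold below are assumptions.

```python
# Hedged sketch: estimate beats per minute from a digitised IR photodetector signal
# by counting threshold crossings; sampling rate and threshold are illustrative.
import numpy as np

def heart_rate_bpm(signal, fs=100.0):
    """signal: IR receiver samples as a 1-D array; fs: sampling frequency in Hz."""
    thresh = signal.mean() + 0.5 * signal.std()
    above = signal > thresh
    beats = np.count_nonzero(above[1:] & ~above[:-1])   # rising edges = detected pulses
    return beats * 60.0 * fs / len(signal)               # pulses per second scaled to per minute
```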

c. Stimulus Unit:
The stimulus unit performs the first-aid process whenever the physiological parameters exceed their safety levels.



Fig 4: Stimulus Unit

d. Communication Unit With Nano IP:
This protocol suite is mainly designed to bridge the gap between networking services and embedded devices. Nano IP provides a highly reliable and highly secure network.

Performance specifications of Nano IP:
Host data rate: up to 3 Mbps in serial mode, 12 Mbps in LAN/WiFi mode.
Serial data format: asynchronous character; binary; 8 data bits; no parity; 1, 1.5 or 2 stop bits.
SerialNET mode: asynchronous character; binary; 7 or 8 data bits; odd, even or no parity; 1, 1.5 or 2 stop bits.
Flow control: hardware (RTS/CTS) and software flow control.

Typical applications of nano IP:
Adding IP communication over WiFi to serial
embedded devices

Replacing a LAN cable with a WiFi connection
Adding SSL security to M2M solutions.



6. CONCLUSION
Hence, the implementation of a biotelemetry system using Nano IP has been proposed. Compared to the conventional protocols used in biotelemetry systems, the proposed Nano Internet Protocol suite gives efficient and reliable transmission of bio-medical parameters to the monitoring location.

The power management unit in the system includes an energy harvesting unit that harvests energy using a piezoelectric crystal, in order to reduce the power consumption (the system operates below 4 V), to keep the implementation inexpensive and to make it available to all categories of people.

7. REFERENCE

1. Fiocchi and C.Gatti, A very flexible BiCMOS
low-voltage high-performance source
follower, in Proc. ISCAS, 1999, vol.2, pp. 212-
215.
2. G.Z.Yang, Body sensor networks, London,
U.K:springer,2006.
3. J.F.Dickson, On-chip high-voltage generation
in MNOS integrated circuits using an
improved voltage multiplier technique, IEEE
Solid-State Circuits, vol. 11, no. 3,pp. 374-378,
Jun.1976.
4. R.Dudde and T.Vering, Advanced insulin
infusion using a control loop (ADICOL)
concept and realization of a control-loop
application for the automated delivery of
insulin, presented at the 4
th
Annu. IEEE Conf.
Information Technology Applications in
Biomedicine, Birmingham, U.K., 2003.
5. R.Puers and P.Wouters, Adaptive interface
circuits for flexible circuits for flexible
monitoring of temperature and movement,
Analog Integr. Circuits Signal Process., vol. 14,
pp. 193-206,1976.
6. Xiaoyu Zhang, Hanjun Jiang, Lingwei Zhang,
Chun Zhang, Zhihua Wang, and Xinkai Chen,
An energy efficient ASIC for wireless body
sensor networks in medical applications,
IEEE transactions on biomedical circuits and
systems, vol.4, No.1, February 2010.






BRAIN WAVE CONTROLLER FOR STRESS REMOVAL AND AUTOMATION OF
AUTOMOBILE IGNITION TO PREVENT DRIVING UNDER INFLUENCE
CONTROLLING BRAINWAVES USING EMBEDDED SYSTEMS
L. Sabarish

sabarish.lakshmanan@yahoo.com
Department Of Electronics and Electronics Engineering
Rajalakshmi Engineering College,
Rajalakshmi Nagar, Thandalam, Chennai 602 105, Ph : 9710350536
___________________________________________________________________________________________________________________________________

ABSTRACT

Stress is a prevalent and costly problem in
today's workplace. It is the harmful physical
and emotional response that occurs when there is a
poor match between job demands and the
capabilities, resources, or needs of the worker.
Persistence of stress results in cardiovascular disease, depression, and loss of concentration and memory.
Addiction is one of the chronic disorders
that are characterized by the repeated use of
substances or behaviors despite clear evidence of
morbidity secondary to such use. It is a
combination of genetic, biological /
pharmacological and social factors. Example:
gambling, alcohol drinking, taking narcotic drugs
and certain mannerisms. The therapies at present
consume time.
About 24% of the accidents that take place are due to drunken driving. A driver on a long drive may become sleepy and end up in an accident.
In this paper we briefly discuss brainwaves and the brain's reactions during stress, addiction and intoxication. The paper also explains the basic task of the Brainwave Controller: how stress and addiction are identified with the help of brainwaves, and how they are controlled using the principle of binaural beats. We have also designed a device to detect brainwaves and process them to determine whether the state is addiction or stress. In addition to controlling brainwaves, it has a feature to prevent an individual who has consumed alcohol from driving a vehicle. This paper promises an economical solution for people who suffer from stress or addiction, and a way to prevent accidents.

Keywords :
Stress, brainwave, cardiovascular disease, binaural beats, addiction, driving under the influence (DUI), ignition interlock device (IID).
I. INTRODUCTION

A. Brain:
Brain is an electro-chemical organ. The
Brainwaves are produced by the frontal lobe of
the brain. It processes auditory information from
the ears and relates it to Wernicke's area of the
parietal lobe and the motor cortex of the frontal
lobe. The amygdala is located within the temporal
lobe and controls social and sexual behavior
and other emotions. The limbic system is
important in emotional behavior and controlling
movements.

Fig.1 Side view of the human brain with
parts

Researchers have speculated that a fully
functional brain can generate as much as 10 watts
of electrical power. Even though this electrical
power is very limited, it does occur in very specific ways that are characteristic of the human brain.

B. Brainwaves:
Electrical activity emanating from the
brain is displayed in the form of brainwaves.
There are four categories of these brainwaves,



ranging from most activity to least activity. These
are delta waves, theta waves, alpha waves and
beta waves. The waveforms corresponding to these categories are shown in Figure 2.

1) Delta waves: These are waves with high amplitude and a frequency of 0.5 - 4 Hertz. They never go down to zero, because that would mean that you were brain dead; deep dreamless sleep would take you down to the lowest frequency, typically 2 to 3 Hertz.

2) Theta waves: These are waves with amplitude lower than that of delta waves and a greater frequency of 5 - 8 Hertz. A person who has taken time off from a task and begins to daydream is often in a theta brainwave state.

3) Alpha waves: These are waves with amplitude lower than that of theta waves and a greater frequency of 9 - 14 Hertz. A person who takes time out to reflect or meditate is usually in an alpha state.

4) Beta waves: These are the waves with the lowest amplitude and the highest frequency, 15 - 40 Hertz. They are further classified into low beta and high beta waves according to their frequency range. The low beta waves have a frequency of 15 - 32 Hertz; a person making an active conversation would be in the low beta state. The high beta waves have a frequency of 33 - 40 Hertz; a person in stress, pain or addiction would be in the high beta state. Figure 3 shows the representation of high and low beta waves.



Table 1: Different brainwaves and their frequency ranges

S.No.   Brainwave    Frequency range (Hertz)
1)      Delta        0.5 - 4
2)      Theta        5 - 8
3)      Alpha        9 - 14
4)      Low Beta     15 - 32
5)      High Beta    32 - 40

Fig. 2 Different brainwaves with their names and the situations in which they occur.

Fig. 3 High beta waves and low beta waves, respectively.

C. Addiction:

There are two types of addiction: physical dependency and psychological dependency.

1. Physical dependency:

Physical dependence on a substance is defined by the appearance of characteristic

withdrawal symptoms when the drug is suddenly
discontinued. Some drugs such as cortisone, beta
blockers etc are better known as
Antidepressants. Some drugs induce physical
dependence or physiological tolerance - but not
addiction - for example many laxatives, which are
not psychoactive; nasal decongestants, which can
cause rebound congestion if used for more than a
few days in a row; and some antidepressants,
most notably Effexor, Paxil and Zoloft, as they
have quite short half-lives, so stopping them
abruptly causes a more rapid change in the
neurotransmitter balance in the brain than many
other antidepressants. So a doctor should be
consulted before abruptly discontinuing them.

2. Psychological dependency:

Psychological addictions are a dependency
of the mind, and lead to psychological withdrawal
symptoms. Addictions can theoretically form for
any rewarding behavior, or as a habitual means to
avoid undesired activity, but typically they only
do so to a clinical level in individuals who have
emotional, social, or psychological dysfunctions,
taking the place of normal positive stimuli not
otherwise attained. Psychological addiction, as
opposed to physiological addiction, is a person's
need to use a drug or engage in a behavior despite
the harm caused out of desire for the effects it
produces, rather than to relieve withdrawal
symptoms. As the drug is indulged, it becomes
associated with the release of pleasure-inducing
endorphins, and a cycle is started that is similar to
physiological addiction. This cycle is often very
difficult to break.
We are going to solely consider the
psychological addictions in designing the
addiction avoider device.

D. Recovery Therapy from Addiction:

Some medical systems, including those of at least
15 states of the United States, refer to an
Addiction Severity Index to assess the severity of
problems related to substance use. The index
assesses problems in six areas: medical,
employment/support, alcohol and other drug use,
legal, family/social, and psychiatric. While
addiction or dependency is related to seemingly
uncontrollable urges, and has roots in genetic
predisposition, treatment of dependency is
conducted by a wide range of medical and allied
professionals. Early treatment of acute
withdrawal often includes medical detoxification,
which can include doses of anxiolytics or
narcotics to reduce symptoms of withdrawal. An
experimental drug, ibogaine, is also proposed to
treat withdrawal and craving. Alternatives to
medical detoxification include acupuncture
detoxification. In chronic opiate addiction, a
surrogate drug such as methadone is sometimes
offered as a form of opiate replacement therapy.
But treatment approaches universally focus on the individual's ultimate choice to pursue an alternate
course of action. Anti-anxiety and anti-depressant
SSRI drugs such as Lexapro are also often
prescribed to help cut cravings, while addicts are
often encouraged by therapists to pursue
practices like yoga or exercise to decrease
reliance on the addictive substance or behavior as
the only way to feel good. Treatments usually
involve planning for specific ways to avoid the
addictive stimulus, and therapeutic interventions
intended to help a client learn healthier ways to
find satisfaction. Clinical leaders in recent years
have attempted to tailor intervention approaches
to specific influences that affect addictive
behavior, using therapeutic interviews in an effort
to discover factors that led a person to embrace
unhealthy, addictive sources of pleasure or relief
from pain.

E. Driving Under Influence(DUI):

Driving under the influence of alcohol (operating
under the influence, drinking and driving,
impaired driving) or other drugs, is the act of
operating a vehicle (including boat, airplane, or
tractor) after consuming alcohol or other drugs.
DUI or DWI are synonymous terms that represent
the criminal offense of operating (or in some
jurisdictions merely being in physical control of) a
motor vehicle while being under the influence of
alcohol or drugs or a combination of both.

It is a
criminal offense in most countries as it
contributes to some of the major accidents. Now
let us consider few basic concepts upon which this
project is largely based upon.
Blood alcohol content or blood alcohol
concentration (abbreviated BAC) is the
concentration of alcohol in a person's blood. BAC
is most commonly used as a metric of intoxication
for legal or medical purposes. It is usually
expressed in terms of volume of alcohol per
volume of blood in the body. That is a unit-less
ratio commonly expressed as parts per million

(PPM) or as a fractional percentage, i.e., a decimal with 2-3 significant digits followed by a percentage sign, which means 1/100 of the preceding number (e.g., 0.0008 expressed as a percentage is 0.08%). Since measurement must
be accurate and inexpensive, several
measurement techniques are used as proxies to
approximate the true parts per million measure.
Some of the most common are listed here: (1)
Volume of alcohol per volume of exhaled breath
(E.g. 0.08 mL/L), (2) Mass per volume of blood in
the body (E.g.: 0.08 g/L), and (3) Mass of alcohol
per mass of the body (E.g.: 0.08 g/Kg). After one
drink you reach your peak after 30 minutes and
you should wait a few hours before you drive. The
number of drinks consumed is often a poor
measure of BAC, largely because of variations in
weight, sex, and body fat.

F. Ignition Interlock System:

An ignition interlock device or breath alcohol
ignition interlock device (IID and BIID) is a
mechanism, like a breathalyzer, installed in a motor vehicle's dashboard. Before the vehicle's
motor can be started, the driver first must exhale
into the device; if the analyzed breath-alcohol concentration is greater than the
programmed blood alcohol concentration, usually
0.02% or 0.04%, the device prevents the engine
from being started. At random times after the
engine has been started, the IID will require
another breath sample. The purpose of this is to
prevent a friend from breathing into the device,
enabling the intoxicated person to get behind the
wheel and drive away. If the breath sample isn't
provided, or the sample exceeds the ignition
interlock's preset blood alcohol level, the device
will log the event, warn the driver and then start
up an alarm (e.g., lights flashing, horn honking,
etc.) until the ignition is turned off, or a clean
breath sample has been provided. A common
misconception is that interlock devices will
simply turn off the engine if alcohol is detected;
this would, however, create an unsafe driving
situation and expose interlock manufacturers to
considerable liability. An interlock device cannot turn off a running vehicle; all it can do is interrupt the starter circuit and prevent the engine from starting.


II. PRINCIPLE

The principle behind this device is
Binaural Beats. Binaural beats or binaural tones
are auditory processing artifacts, which are
apparent sounds, the perception of which arises
in the brain independent of physical stimuli. The
brain produces a similar phenomenon internally,
resulting in low-frequency pulsations in the
loudness of a perceived sound when two tones at
slightly different frequencies are presented
separately, one to each of a subject's ears, using
stereo headphones. A beating tone will be
perceived, as if the two tones mixed naturally, out
of the brain. The frequency of the tones must be
below about 1,000 to 1,500 hertz. The difference
between the two frequencies must be small
(below about 30 Hz) for the effect to occur;
otherwise the two tones will be distinguishable
and no beat will be perceived.
Binaural beats can influence functions of
the brain besides those related to hearing. This
phenomenon is called frequency following
response (FFR). The concept is that if one receives a
stimulus with a frequency in the range of brain
waves, the predominant brain wave frequency is
said to be likely to move towards the frequency of
the stimulus (a process called entrainment).
Human hearing is limited to the range of
frequencies from 20 Hz to 20,000 Hz, while the
frequencies of human brain waves are below
about 40 Hz. To account for this, binaural beat
frequencies must be used.
According to this view, when the
perceived beat frequency corresponds to any of
the brainwave frequencies, the brain entrains to
or moves towards the beat frequency. For
example, if a 315 Hz sine wave is played into the
right ear and a 325 Hz one into the left ear, the
brain is supposed to be entrained towards the
beat frequency of 10 Hz. Since the alpha range is usually associated with relaxation, this is supposed to have a relaxing effect. Some people find pure sine
waves or pink noise unpleasant, so background
music (e.g. natural sounds such as river noises)
can also be mixed with them.
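As an illustration of the 315 Hz / 325 Hz example above (not part of the original paper), a stereo binaural-beat stimulus could be synthesised as follows; the sample rate and duration are assumptions.

```python
# Hedged sketch: synthesise the 315 Hz / 325 Hz stereo pair described in the text;
# the 10 Hz frequency difference is the perceived binaural beat.
import numpy as np

def binaural_beat(f_right=315.0, f_left=325.0, duration=60.0, fs=44100):
    t = np.arange(int(duration * fs)) / fs
    left = np.sin(2 * np.pi * f_left * t)
    right = np.sin(2 * np.pi * f_right * t)
    return np.stack([left, right], axis=1)   # columns = left/right headphone channels

# e.g. scipy.io.wavfile.write("beat.wav", 44100, (binaural_beat() * 32767).astype(np.int16))
```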
III.CONSTRUCTION AND WORKING

A. Block diagram

The General block diagram of controlling
addiction/stress is shown in the figure 4.



Fig. 4 General block diagram to control addiction / stress

Fig. 5 General block diagram to avoid DUI

Fig. 6 General block diagram to avoid accidents

The block diagram used in the implementation of the brainwave controller, with all its modes, is given in Figure 7.

Fig. 7 General block diagram of the Brainwave Controller. (From the figure labels: three EEG channels feed three amplifiers into the DSP processor, whose mode programs drive DAC outputs to the left/right headphone channels and to the wake-up/motor-driving unit; switch positions: 1 - Stress, 2 - Drunken drive, 3 - Night drive, R - Reset.)

B. EEG Sensors

EEG sensors are used to measure the electrical signal equivalent of the brainwaves. Each sensor consists of a 0.7 inch diameter hard plastic outer disc housing with a pre-gelled silver chloride (Ag/AgCl) snap-style post pellet insert. These sensors do not contain any latex. Figure 8 shows the representation of the Ag/AgCl EEG sensor.

Fig. 8 Electroencephalography (EEG) sensors

Fig. 9 EEG fabricated headset

The sensor sends the analog brainwave signal into
the instrumentation amplifier circuit.

C. Instrumentation Amplifiers

The amplitude of analog brainwaves is in the range of 150 - 250 microvolts (µV). This is very low; for processing, an amplitude of at least 2 volts is needed. For this, a high-gain, low-noise amplifier is required, so an instrumentation amplifier with high gain and a high CMRR is employed.

Here one operational amplifier is not enough to produce such a high gain, so a series of amplifiers is cascaded to give the required gain. The gain of an individual inverting operational amplifier is given by:

Gain (A) = -R2/R1


Fig.9 Circuit diagram of cascaded inverting
amplifier with a gain of 20,000. (Designed in
Pspice)

Here we are using four inverting amplifier
cascaded as shown in figure 9. Let the gain of each
inverting amplifier from left to right be A1, A2, A3
and A4. And let Vi and Vo be the input and output
voltages of the amplifier.
Now,
A1 = (-R2/R1)
= (-2/1)
= -2
A2 = (-R4/R3)
= (-10/1)
= -10
A3 = (-R6/R5)
= (-10/1)
= -10
A4 = (-R8/R7)
= (-100/1)
= -100
*all resistors are in kilo ohm

Now Total Gain of the amplifier (Aeff),

A eff = A1 * A2 * A3 * A4
A eff = (-2)*(-10)*(-10)*(-100)
A eff = 20, 000

Therefore,

Vo = Aeff * Vi
   = 20,000 * 15 * 10^-5 V
Vo = 3 V

Hence an amplifier with gain 20,000 is
designed using basic operational amplifier.

D. DSP Processor

The amplified EEG signal is given to a TMS320C6713 DSP processor, which has a pre-defined program to select a frequency range from 1 - 50 Hertz. The selected range is then converted from analog samples to digital samples. The DSP processor runs a different program according to the mode selected from the switch. There
are 3 modes with which we can use this device.



E. Simulation:

The following simulation images indicate
the recorded brainwave using 2 EEG electrodes:



1. Mode - 1: Controlling addiction / stress:

In mode 1 the DSP processor checks for the frequency range between 32 and 40 Hertz. If the dominant frequency lies between 32 and 40 Hertz, it is considered that the person is under addiction/stress, and the processor runs a look-up table which contains the digital samples of the binaural beats. The samples produce two sine waves with a frequency difference of 10 Hertz, which are sent to the two sides of the headphone.

Fig.10 Block Diagram Sending two similar tones
with difference in frequency.

This generates the binaural beats, which are given to the ears. As the difference between these two waves is 10 Hz, which is below 20 Hz, it cannot be detected by the human ear directly. However, the afferent neurons inside the ear sense this 10 Hz difference and send it to the brain as a stimulus. This stimulus entrains the brain to generate brainwaves similar to the supplied stimulus, thereby reducing the brainwaves from 32 - 40 Hz to 9 - 14 Hz and making the mind relaxed.
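The paper does not list the DSP code; the sketch below only illustrates, in Python rather than on the TMS320C6713, how the dominant-frequency check for the 32-40 Hz high-beta band could look. The sampling rate, windowing and 1-50 Hz restriction follow the description above, but the details are assumptions.

```python
# Hedged sketch: decide whether the dominant EEG power sits in the 32-40 Hz (high beta)
# band, which the paper associates with stress/addiction.
import numpy as np

def in_high_beta(eeg, fs=256.0):
    """eeg: one channel of amplified, digitised EEG samples; fs: sampling frequency in Hz."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    band = (freqs >= 1.0) & (freqs <= 50.0)   # the processor first restricts itself to 1-50 Hz
    dominant = freqs[band][np.argmax(spectrum[band])]
    return 32.0 <= dominant <= 40.0
```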

2. Mode 2: Avoiding DUI :

When the switch is turned to mode 2, the device is connected to the internal circuitry of a vehicle. Firstly, an Ignition Interlock Device (IID) is placed in the automobile and it is made mandatory for the driver to exhale into the breathalyzer to switch on the automobile's engine. Here, the Blood Alcohol Content (BAC) of the driver is analyzed and, in case of high BAC, the engine does not start. If the BAC is low, then a specially fabricated electroencephalograph (EEG) headset (which contains EEG sensors placed according to the international 10-20 system) is placed on the driver's head to analyze the driver's brainwaves. By analyzing the driver's brainwaves, the risk of driving under the influence of drugs is reduced. Here the BAC is measured as per the international threshold value of 0.04 ml/L. Once the BAC is declared low, the driver has to take up the drug test, which involves the usage of the EEG headset in order to detect the brainwave activity of the driver. Unless the driver's brainwave levels are in the alpha or beta mode, the engine will not start.

3. Mode 3: Night drive:

When the switch is turned to mode 3, the device is used to avoid sleep during a long night drive. Here the individual must wear a cap at all times. This cap is embedded with EEG sensors as explained in mode 2. Here too the DSP processor checks whether the signal has a frequency below 7 Hertz, which means the individual is nearly asleep. If that is the case, the processor triggers an alarm so that accidents can be avoided. This alarm could take any form; for example, it could be the horn honking, the audio system playing loud music, or a specific alarm device performing the waking-up operation.
IV. CONCLUSION
We have designed a device that has three
purposes. Firstly, the brainwaves are controlled
by the principle of binaural beats and frequency
following response thereby controlling addiction
or stress by making the mind relaxed temporarily.
Secondly, the brainwaves are continuously
monitored to avoid drunken drive in a vehicle.
Thirdly, the brainwaves are continuously monitored to avoid falling asleep while driving long distances in a vehicle. Though all the above applications are distinct from each other, it is very useful to have the same device serve all three purposes.
V. MERITS
1. The whole device is light weight and can
be carried anywhere.
2. The whole device including sensors and
headphone is cheap and costs only about
Rs. 1500 and slightly above.

VI. DEMERITS
Those meeting any of the following criteria should not use binaural beats:
1. Epileptics
2. Pregnant women
3. People susceptible to seizures
4. Pacemaker users
5. Photosensitive people.

VII. FUTURE ENHANCEMENTS

1. The concept of frequency following
response can be further researched to
ease communication with deaf and
dumb individuals.
2. The concept of binaural beats can be
further used to study the resonance of
brain during brain diseases.

VIII. REFERENCES

[1] Detection of seizures in epileptic and
non-epileptic patients using GPS and
Embedded Systems by Abhiram
Chakraborty, Ukrainian Journal of Telemedicine and Medical Telematics (Vol. 3, No. 2, p. 211).
[2] Automated realtime interpretation of
brainwaves by Kenneth George Jordan
US patent
[3] Method and apparatus for changing brain wave frequency by John L. Carter, US patent 5036858.
[4] en.wikipedia.org/wiki/ binaural beats
[5] en.wikipedia.org/wiki/Addiction
[6] en.wikipedia.org/wiki/Drug_Addiction
[7] www.trdrp.org/research/ PageGrant.asp?
grant_id=383


CLUSTER THE UNLABELED DATASETS USING EXTENDED DARK BLOCK
EXTRACTION
Srinivasulu Asadi¹, Ooruchintala Obulesu², P. Sunilkumar Reddy³
Dept. of IT¹,², Dept. of MCA³
S.V.E.C, A.Rangampet, Tirupati - 517 502, India.
srinu_asadi@yahoo.com¹, oobulesu681@gmail.com², pg.sunilkumar@gmail.com³


ABSTRACT
Clustering analysis is the problem of partitioning a set of objects O = {o1, ..., on} into c self-similar subsets based on available data. In general, clustering of unlabeled data poses three major problems: 1) assessing cluster tendency, i.e., how many clusters to seek; 2) partitioning the data into c meaningful groups; and 3) validating the c clusters that are discovered. We address the first problem, i.e., determining the number of clusters c prior to clustering. Many clustering algorithms require the number of clusters as an input parameter, so the quality of the clusters depends heavily on this value. Most methods are post-clustering measures of cluster validity, i.e., they attempt to choose the best partition from a set of alternative partitions. In contrast, tendency assessment attempts to estimate c before clustering occurs. Here, we represent the structure of the unlabeled data sets as a Reordered Dissimilarity Image (RDI), where pairwise dissimilarity information about a data set of n objects is represented as an n x n image. The RDI is generated using VAT (Visual Assessment of Cluster Tendency) and highlights potential clusters as a set of dark blocks along the diagonal of the image, so the number of clusters can be easily estimated from the number of dark blocks along the diagonal. We develop a new method called Extended Dark Block Extraction (EDBE) for counting the number of clusters formed along the diagonal of the RDI. The EDBE method combines several image and signal processing techniques.
General Terms: Data Mining, Image Processing,
Artificial Intelligence.
KEYWORDS
Clustering, Cluster Tendency, Reordered
Dissimilarity Image, VAT, C-Means Clustering.

1. INTRODUCTION

The main objective of our work, estimating the
number of clusters in unlabeled data sets, is to
determine the number of clusters c prior to
clustering. Many clustering algorithms require the
number of clusters c as an input parameter, so the
quality of the clusters depends largely on the
estimate of c. Most methods are post-clustering
measures of cluster validity, i.e., they attempt to
choose the best partition from a set of alternative
partitions. In contrast, tendency assessment
attempts to estimate c before clustering occurs. Our
focus is on preclustering tendency assessment.

The existing technique for preclustering
assessment of cluster tendency is Cluster Count
Extraction (CCE). The results obtained from it are
less accurate and less reliable, it does not address
the perplexing overlap issues, and its efficiency is
also in doubt. Hence, we introduce a new technique
in our work. Our work mainly involves two
algorithms, Visual Assessment of Cluster Tendency
(VAT) and Extended Dark Block Extraction (EDBE).
We initially concentrate on representing the
structure of the unlabeled data in an image format.
The VAT algorithm is applied to obtain that image,
and the EDBE algorithm is then applied to the
output of VAT, thereby generating the valid number
of peaks (i.e., the number of clusters). Pairwise
dissimilarity information about a data set of n
objects is depicted as an n×n image, where the
objects are reordered so that the resultant image is
better able to highlight the potential cluster
structure of the data. The intensity of each pixel in
the RDI corresponds to the dissimilarity between
the pair of objects addressed by the row and column
of the pixel. A useful RDI highlights potential
clusters as a set of dark blocks along the diagonal of
the image, corresponding to sets of objects with low
dissimilarity.

The generated dissimilarity matrix is provided as
input to the VAT algorithm. An RDI

(Reordered Dissimilarity Image) portraying the
potential cluster structure of the data is created
from its pairwise dissimilarity matrix using VAT.
Then, sequential image processing operations
(region segmentation, directional morphological
filtering, and distance transformation) are used to
segment the regions of interest in the RDI and to
convert the filtered image into a
distance-transformed image. Finally, we project the
transformed image onto the diagonal axis of the
RDI, which yields a one-dimensional signal from
which we can extract the (potential) number of
clusters in the data set using sequential signal
processing operations such as average smoothing
and peak detection. The peaks and valleys of the
projected signal are found using peak detection
techniques and are required to satisfy certain
conditions; only the peaks that satisfy these
conditions are considered valid. The number of
valid peaks gives the number of clusters that can be
formed from the unlabeled data set. The proposed
method is easy to understand and implement, and
encouraging results are achieved.

2. RELATED WORK

Visual methods for cluster tendency assessment
for various data analysis problems have been
widely studied [10], [5], [9]. For data that can be
projected onto a 2D Euclidean space (which is
commonly depicted with a scatter plot), direct
observation can provide good insight into the value
of c. Apparently, Ling [1] first automated the
creation of the RDI in 1973 with an algorithm called
SHADE, which was used after the application of the
complete-linkage hierarchical clustering scheme
and served as an alternative to visual displays of
hierarchically nested clusters via the standard
dendrogram. Since then, there have been
many studies of the best method for reordering
and for the use of RDIs in clustering. Two general
approaches have emerged, depending on whether
the RDI is viewed before or after clustering. Most
RDIs built for viewing prior to clustering use
algorithms very similar in flavor to single-linkage
to reorder the input dissimilarities, and the RDI is
viewed as a visual aid to tendency assessment.
This is the problem addressed by our new EDBE
algorithm, which uses the VAT algorithm of
Bezdek and Hathaway [2] to generate RDIs. VAT is
related but not identical to single-linkage
clustering; see [11] for a detailed analysis of this
aspect of VAT. Several algorithms extend VAT for
related assessment problems. The bigVAT [3] and
sVAT [4] offered different ways to approximate
the VAT RDI for very large data sets. The coVAT
[6] extended the idea of RDIs to rectangular
dissimilarity data to enable tendency assessment
for each of the four co-clustering problems
associated with such data.

2.1. Review of VAT
The visual approach for assessing cluster
tendency introduced here can be used in all cases
involving numerical data. It is both convenient and
expected that new methods in clustering have a
catchy acronym; consequently, this tool is called
VAT (Visual Assessment of Tendency). The VAT
approach presents pairwise dissimilarity
information about the set of objects O = {o1, ..., on}
as a square digital image with n² pixels, after the
objects are suitably reordered so that the image is
better able to highlight potential cluster structure.
Going further into the VAT approach requires some
additional background on the types of data
typically available to describe the set
O = {o1, ..., on}.

There are two common data representations of O
upon which clustering can be based. When each
object in O is represented by a (column) vector x in
R^s, the set X = {x1, ..., xn} ⊂ R^s is called an object
data representation of O. The VAT tool is widely
applicable because it displays a reordered form of
dissimilarity data, which can always be obtained
from the original data for O. If the original data
consist of a matrix of pairwise (symmetric)
similarities S = [S_ij], then dissimilarities can be
obtained through several simple transformations;
for example, we can take R_ij = S_max − S_ij, where
S_max denotes the largest similarity value. If the
original data set consists of object data
X = {x1, ..., xn} ⊂ R^s, then R_ij can be computed as
R_ij = ||x_i − x_j||, using any convenient norm on
R^s. Thus, the VAT approach is applicable to
virtually all numerical data sets.

Fig. 1a is a scatter plot of n = 3,000 data points in
R². These data points were converted to a
3,000 × 3,000 dissimilarity matrix D by computing
the Euclidean distance between each pair of points.
The five visually apparent clusters in Fig. 1a are
reflected by the five distinct dark blocks along the
main diagonal in Fig. 1c, which is the VAT image of
the data after reordering. Compared with Fig. 1b,
which is the image of the dissimilarities D in their
original input order, we can see that reordering is
necessary to reveal the underlying cluster structure
of the data. The reordering method of VAT is
summarized in Table 1.
3. VAT ALGORITHM
Step 1) A dissimilarity matrix m of size n×n is
generated from the input dataset S, where n is
the size of S.
Step 2) (Initialization) Set K = {1, 2, ..., n},
I = J = {}, and P = [0, 0, ..., 0].
Step 3) Select (i, j) = argmax over (p, q) in K×K of
m_pq; set P[1] = i, I = {i}, and J = K − {i}.
Step 4) For r = 2, 3, ..., n: select (i, j) = argmin over
(p, q) in I×J of m_pq; set P[r] = j, I = I ∪ {j},
and J = J − {j}.
Step 5) Obtain the ordered dissimilarity matrix R
using the ordering array P as
R_ij = m_P(i)P(j) for 1 <= i, j <= n.
Step 6) Display the Reordered Dissimilarity
Image. (A Python sketch of this reordering
follows.)
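A minimal Python sketch of the VAT reordering, assuming a symmetric dissimilarity matrix m; this is an illustration only, not the authors' code:

import numpy as np

def vat(m):
    """Reorder a symmetric dissimilarity matrix m so clusters appear as dark diagonal blocks."""
    n = m.shape[0]
    # Step 3: start from one endpoint of the largest dissimilarity.
    i, _ = np.unravel_index(np.argmax(m), m.shape)
    P = [i]
    J = set(range(n)) - {i}
    # Step 4: repeatedly append the unselected object closest to the selected set.
    for _ in range(n - 1):
        j = min(J, key=lambda q: min(m[p, q] for p in P))
        P.append(j)
        J.remove(j)
    P = np.array(P)
    # Step 5: reorder rows and columns by P to obtain the RDI matrix.
    return m[np.ix_(P, P)], P

# Example: two well-separated 2D blobs give two dark blocks on the diagonal.
pts = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 8])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
R, order = vat(D)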
4. EDBE ALGORITHM
The existing system for automatically determining
the number of clusters in unlabeled data sets is
Cluster Count Extraction. Because of its limitations,
such as its perplexing behavior and its inability to
handle histogram overlap, we move to a new
technique. The proposed system is Extended Dark
Block Extraction, a nearly parameter-free method
developed to automatically determine the number
of clusters in unlabeled data sets. In short, EDBE is
an algorithm that counts the dark blocks along the
diagonal of an RDI.
The EDBE algorithm mainly consists of four major
steps:
1. Dissimilarity transformation and image
segmentation.
2. Directional morphological filtering of the binary
image.
3. Distance transform and diagonal projection of
the filtered image.
4. Detection of major peaks and valleys in the
projected signal.

4.1 Dissimilarity transformation and Image
Segmentation (Steps 1-3):
Because information about possible cluster
structure in the data is embodied in the dark
blocks of the RDI, an important preprocessing
step is image thresholding to extract the regions
of interest. Choosing a threshold around the
first mode of the histogram is thus ideal for image
segmentation. Otsu's algorithm [7], which
maximizes the between-class variance, is widely
used in image processing for automatically
choosing a global threshold. The dissimilarities are
first transformed by
f(t) = 1 − exp(−t/γ),
where γ is the threshold obtained from Otsu's
algorithm.
EDBE ALGORITHM
Step 1) Find the threshold value γ from m using
Otsu's algorithm.
Step 2) Transform m into a new dissimilarity
matrix m1 with m1_ij = 1 − exp(−m_ij/γ).
Step 3) Form an RDI image I1 using the previous
module (VAT).
Step 4) Threshold I1 to obtain a binary image I2
using Otsu's algorithm.
Step 5) Filter I2 using morphological operations
to obtain a filtered binary image I3.
Step 6) Perform a distance transform on I3 to
obtain a gray-scale image I4 and scale the
pixel values to [0, 1].
Step 7) Project the pixel values of I4 onto its main
diagonal axis to form a projection signal H1.
Step 8) Smooth the signal H1 with an average
filter to obtain the filtered signal H2.

Step 9) Compute the first-order derivative of H2
to obtain H3.
Step 10) Find the peak positions p_i and the valley
positions v_j in H3.
Step 11) Select valid peaks by checking certain
conditions; the number of valid peaks gives
the number of clusters.
Step 12) Feed the resulting number of clusters into
the C-Means clustering algorithm, which then
gives very good accuracy.
A rough Python sketch of this pipeline is given
below.
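The sketch below covers Steps 4-11, assuming the VAT-reordered RDI R has already been produced by Steps 1-3 (Otsu threshold, the 1 − exp(−m/γ) transform, and the vat() reordering sketched earlier). The choice of SciPy/scikit-image routines, the filter size, and the simple valley-width test are assumptions rather than the authors' implementation:

import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def edbe_count(R, alpha=0.03):
    """Count dark blocks along the diagonal of a VAT-reordered RDI R (Steps 4-11 sketch)."""
    n = R.shape[0]
    binary = R <= threshold_otsu(R)                                    # Step 4: dark blocks -> 1
    filt = ndimage.binary_opening(binary, structure=np.ones((3, 3)))   # Step 5: morphological filtering
    dt = ndimage.distance_transform_edt(filt)                          # Step 6: distance transform
    dt = dt / (dt.max() + 1e-12)
    # Step 7: project onto the main-diagonal axis (sum over pixels with constant i + j).
    H1 = np.array([np.fliplr(dt).diagonal(k).sum() for k in range(n - 1, -n, -1)])
    H2 = ndimage.uniform_filter1d(H1, size=max(3, int(alpha * n)))     # Step 8: average smoothing
    H3 = np.diff(H2)                                                   # Step 9: first derivative
    # Step 10: peaks/valleys are +/- and -/+ zero crossings of H3.
    peaks = np.where((H3[:-1] > 0) & (H3[1:] <= 0))[0]
    valleys = np.concatenate(([0], np.where((H3[:-1] < 0) & (H3[1:] >= 0))[0], [len(H2) - 1]))
    # Step 11: keep a peak only if its surrounding valleys are far enough apart.
    wide = np.diff(valleys) > 2 * alpha * n
    valid = [p for p in peaks
             if np.any((valleys[:-1] < p) & (p < valleys[1:]) & wide)]
    return len(valid)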

This transformation does not affect the reordering
by VAT but changes the histogram of dissimilarities.
From the histogram of the transformed
dissimilarities, we use Otsu's algorithm again to
obtain a new threshold θ and convert the VAT image
shown in Fig. 2a into the binary image shown in
Fig. 2b by setting
I2_ij = 1 if I1_ij > θ, and I2_ij = 0 otherwise.
It can be seen that the segmentation result after the
transformation is far better than the one before the
transformation.

Directional morphological filtering of the binary
image (Step 4):
To make the segmented image clearer, especially
for the cases in which the degree of overlap
between clusters is large, we use morphological
operations [8] to perform binary image filtering.
Morphological filtering is one type of processing
in which the spatial form or structure of objects
within an image is modified. Dilation and erosion
are two fundamental morphological operations.







The former usually causes objects to grow in size,
while the latter causes objects to shrink. The
morphologically filtered image is shown in
Fig. 3a.


Fig 3a: Morphologically filtered Image

4.2. Distance Transform and diagonal
projection of image (Steps 5-6):

In order to convert the morphologically filtered
image into an informative one that clearly shows
the dark-block structure, we need to consider the
values of pixels that lie on or off the main diagonal
axis of the image. First, we perform a distance
transform (DT) of the binary image to obtain a new
gray-scale image, as shown in Fig. 3b. A distance
transform is a representation of a digital image that
converts a binary image into a gray-scale image in
which the value of each pixel is the distance from
that pixel to the nearest nonzero pixel of the binary
image. There are several different DTs, depending
on which distance metric is used to determine the
distance between pixels; we use the Euclidean
distance. After the DT,

we project all pixel values of the DT image onto
the main diagonal axis to obtain a projection
signal, as shown in Fig. 3c.


Fig. 3b: Distance-transformed image (I4)

Fig. 3c: Diagonal projection signal from I4
4.3. Detection of major peaks and valleys in the
projected signal (Steps 7-10):
The number of dark blocks in any RDI is
equivalent to the number of major peaks in the
projection signal H1. We perform the detection of
peaks and valleys to estimate the (cluster) number
c, based on the first-order derivative of the
projection signal. Although the projection signal H1
seems to be very smooth, further smoothing is
required to reduce possible false detections due to
noise in the signal. Here, we use a simple average
filter h to filter the projection signal, i.e.,
H2 = h * H1, where * denotes linear convolution
(see Fig. 3c) and the average filter h has length
l2 = 2αn.


Fig.3d First Order derivative signal

After that, the process of peak and valley
detection is performed in a rough-to-fine manner.
It is well known that the peaks and valleys of a
signal usually correspond to zero-crossing points in
its first-order derivative, as shown in Fig. 3d.
Accordingly, we can find the initial sets of peaks p_i
and valleys v_j by finding the corresponding
positive-to-negative and negative-to-positive
zero-crossing points. To further remove minor false
peaks, we use a size filter that removes relatively
small valleys by validating the width between each
pair of neighboring valleys.

That is, the peak p_i lying between two neighboring
valleys will be kept as a meaningful major peak
if V(k+1) − V(k) > l3 and V(k) < P(i) < V(k+1),
where l3 = 2αn.
Finally, we determine the number of dark blocks
in the RDI (and, hopefully, the number of clusters
c in the unlabeled data) as the number of resulting
major peaks.

5. CONCLUSION

This paper investigates a nearly parameter-free
method for automatically estimating the number
of clusters in unlabeled data sets. The only
user-defined parameter that must be chosen, α,
controls the filter size. It is relatively easy to make a
realistic (and useful) choice for α, since it
essentially specifies the smallest cardinality of a
cluster relative to the number of objects in the
data. Estimation of the cluster number by EDBE will

probably reach its useful limit when the RDI
formed by any reordering of D does not come from
a well-structured dissimilarity matrix. In our
experiments, we used the simple Euclidean
distance to compute pairwise dissimilarities
when the input data are feature vectors. The
Euclidean distance may not be suitable for
high-dimensional or complex data, for which other
measures (such as wavelet-based multi-resolution
analysis) may be more appropriate. EDBE provides
an initial estimate of the cluster number, thus
avoiding the need to repeatedly run a clustering
algorithm over a wide range of c in an attempt to
find useful clusters. In this way, EDBE compares
favorably to post-clustering validation methods in
computational efficiency. Note that EDBE does not
eliminate the need for cluster validity; it simply
improves the probability of success. A possible
extension of this work concerns the initialization
of a fuzzy post-clustering algorithm for object data
clustering. It should not be too hard to find an
approximate center sample for each meaningful
cluster from any well-structured RDI.

6. REFERENCES

[1] R.F. Ling, "A Computer Generated Aid for
Cluster Analysis," Comm. ACM, vol. 16,
pp. 355-361, 1973.
[2] J. Huband, J.C. Bezdek, and R. Hathaway,
"bigVAT: Visual Assessment of Cluster Tendency
for Large Data Sets," Pattern Recognition, vol. 38,
no. 11, pp. 1875-1886, 2005.
[3] R. Hathaway, J.C. Bezdek, and J. Huband,
"Scalable Visual Assessment of Cluster Tendency,"
Pattern Recognition, vol. 39, pp. 1315-1324, 2006.
[4] W.S. Cleveland, Visualizing Data. Hobart Press,
1993.
[6] J.C. Bezdek, R.J. Hathaway, and J. Huband,
"Visual Assessment of Clustering Tendency for
Rectangular Dissimilarity Matrices," IEEE Trans.
Fuzzy Systems, vol. 15, no. 5, pp. 890-903, 2007.
[5] R.C. Gonzalez and R.E. Woods, Digital Image
Processing. Prentice Hall, 2002.
[6] I. Dhillon, D. Modha, and W. Spangler,
"Visualizing Class Structure of Multidimensional
Data," Proc. 30th Symp. Interface: Computing
Science and Statistics, 1998.
[7] T. Tran-Luu, "Mathematical Concepts and Novel
Heuristic Methods for Data Clustering and
Visualization," PhD dissertation, Univ. of Maryland,
College Park, 1996.
[8] J.C. Bezdek and R. Hathaway, "VAT: A Tool for
Visual Assessment of (Cluster) Tendency," Proc.
Int'l Joint Conf. Neural Networks (IJCNN '02),
pp. 2225-2230, 2002.
[9] L. Wang, C. Leckie, K. Ramamohanarao, and
J. Bezdek, "Automatically Determining the Number
of Clusters in Unlabeled Data Sets," IEEE,
March 2009.






















PREDICTION BASED LOSSLESS COMPRESSION SCHEME FOR BAYER COLOR
FILTER ARRAY IMAGE
Khajavali Shaik¹, Mr. M.N.A. Siddiqui², Mrs. Ch. Hima Bindu³
P.G. Student, ECE Department¹, P.G. Student, ECE Department², Sr. Associate Professor, Dept of ECE³
QIS College of Engg & Technology, Ongole, Prakasam (Dt), A.P.
Khajavali.shaik440@gmail.com¹, Siddiqui.nadeem07@gmail.com², hb.muvvala@gmail.com³
_____________________________________________________________________________________________________________________________
ABSTRACT
In most digital cameras, Bayer color filter
array (CFA) images are captured and demosaicing
is generally carried out before compression.
Recently it was shown that a compression-first
scheme can outperform the conventional
demosaicing-first schemes in terms of output image
quality. An efficient prediction-based lossless
compression scheme for Bayer CFA images is
proposed.

INTRODUCTION

BAYER COLOR FILTER ARRAY

A Bayer color filter array is usually coated over the
sensor in these cameras to record only one of
the three color components at each pixel
location. The resultant image is referred to as a
CFA image.

Fig: Bayer pattern with a red sample in the center

The figure shows the Bayer pattern with a red
sample in the center. Conventionally, the image is
demosaiced and then compressed for storage,
which is inefficient in that the demosaicing process
always introduces some redundancy that must
eventually be removed in the following
compression step. If compression is carried out
before demosaicing, digital cameras can have a
simpler design and lower power consumption,
since a computationally heavy process like
demosaicing can be carried out offline on a
powerful personal computer. This motivates the
demand for CFA image compression schemes.

Fig: single sensor camera imaging chain (a)
demosaicing and (b) Compression

PRESENT SCHEMES USED

There are different schemes available, such as
lossy compression schemes and JPEG 2000, so we
now look at the drawbacks of the present methods.
Lossy schemes compress a CFA image by
discarding its visually redundant information;
such a scheme yields a higher compression ratio
compared with the lossless schemes.
JPEG 2000 can be used to encode a CFA image, but
only fair performance is attained, and JPEG 2000
is a very expensive method for compressing the
images.

PROPOSED SCHEME

A prediction-based lossless CFA compression
scheme is proposed. It divides a CFA image into
two sub-images:
(a) a green sub-image, which contains all the green
samples of the CFA image, and
(b) a non-green sub-image, which contains the red
and blue samples of the CFA image.
The system mainly consists of two parts:

(a) an encoder

(b) a decoder


ENCODER:

Fig: Structure of proposed scheme

The green sub-image is coded first, and the
non-green sub-image follows, using the green
sub-image as a reference. To reduce the spectral

redundancy, the non-green sub-image is
processed in the color difference domain,
whereas the green sub-image is processed in the
intensity domain and serves as a reference for the
color difference content of the non-green
sub-image. Both sub-images are processed in
raster-scan sequence with a context-matching-based
prediction technique to remove the spatial
dependency. The prediction residue planes of
the two sub-images are then entropy encoded
sequentially with the proposed realization
scheme of adaptive Rice code.

WORKING OF THE SCHEME

The proposed scheme is mainly based on
prediction on the green plane and prediction on
the non-green plane.

PREDICTION ON THE GREEN PLANE:

The green plane is raster scanned during
prediction, and all prediction errors are recorded.
When processing a particular green sample g(i, j),
its four nearest already-processed neighboring
green samples form a candidate set S_g(i, j).

A direction is then associated with each green
pixel, as shown below.


Fig: Four possible directions associated with
green pixel

Let g(m_k, n_k) ∈ S_g(i, j), for k = 1, 2, 3, 4, be the
four ranked candidates of sample g(i, j), ordered
such that D(S_g(i, j), S_g(m_u, n_u)) <=
D(S_g(i, j), S_g(m_v, n_v)) for 1 <= u < v <= 4. The
predicted value of g(i, j) is a weighted sum of the
ranked candidates, with weights {w1, w2, w3, w4}.

If the direction of g(i, j) is identical to the
directions of all green samples in S_g(i, j), pixel
(i, j) is considered to lie in a homogeneous region,
and the prediction of g(i, j) uses
{w1, w2, w3, w4} = {1, 0, 0, 0}.

Otherwise, g(i, j) lies in a heterogeneous region,
and the predicted value of g(i, j) uses
{w1, w2, w3, w4} = {5/8, 2/8, 1/8, 0}.
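A minimal sketch of this weighting rule follows; the context-matching distance D and the candidate ranking are not reproduced, and the homogeneity test is passed in as a flag, so this is an illustration under assumptions rather than the paper's procedure:

def predict_green(candidates, homogeneous):
    """candidates: the four ranked neighboring green samples g(m_k, n_k), best first.
    homogeneous: True if pixel (i, j) shares the direction of all candidates."""
    weights = (1.0, 0.0, 0.0, 0.0) if homogeneous else (5/8, 2/8, 1/8, 0.0)
    return sum(w * g for w, g in zip(weights, candidates))

# Example: heterogeneous region with ranked candidates 100, 104, 96, 90.
print(predict_green([100, 104, 96, 90], homogeneous=False))  # 100.5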

FLOW CHART FOR PREDICTION ON THE
GREEN PLANE

ADAPTIVE COLOR DIFFERENCE
ESTIMATION FOR NON GREEN PLANE

When compressing the non-green color plane,
color difference information is exploited to
remove the color spectral dependency.

Let c(m, n) be the intensity value at a non-green
sampling position (m, n). The green-red
(green-blue) color difference of pixel (m, n) is
d(m, n) = g(m, n) − c(m, n),
where g(m, n) is the estimated green component
intensity value,
g(m, n) = round((ΔH·GV + ΔV·GH)/(ΔH + ΔV)),
with GH = (g(m, n−1) + g(m, n+1))/2,
GV = (g(m−1, n) + g(m+1, n))/2, and ΔH, ΔV the
horizontal and vertical gradients at (m, n).
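A minimal sketch of the color-difference computation follows; the gradient definitions for ΔH and ΔV are assumptions of this sketch, since the excerpt does not spell them out:

import numpy as np

def color_difference(G, C, m, n):
    """G: green plane, C: red/blue plane (same size). Returns d(m, n) = g_hat(m, n) - c(m, n)
    at a non-green position (m, n) of a Bayer CFA image."""
    gh = (G[m, n - 1] + G[m, n + 1]) / 2.0           # horizontal green average
    gv = (G[m - 1, n] + G[m + 1, n]) / 2.0           # vertical green average
    delta_h = abs(G[m, n - 1] - G[m, n + 1]) + 1e-6  # horizontal gradient (assumed form)
    delta_v = abs(G[m - 1, n] - G[m + 1, n]) + 1e-6  # vertical gradient (assumed form)
    g_hat = round((delta_h * gv + delta_v * gh) / (delta_h + delta_v))
    return g_hat - C[m, n]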





PREDICTION ON THE NON GREEN PLANE



The color difference prediction for a non-green
sample c(i, j) with color difference value d(i, j) is
formed, as on the green plane, as a weighted sum of
the ranked candidate color differences,
where w_k is the kth predictor coefficient and
d(m_k, n_k) is the kth ranked candidate in the
candidate set of c(i, j).

COMPRESSION SCHEME

The prediction error of pixel (i, j) in the CFA
image, e(i, j), is the difference between the actual
value and its predicted value: it is computed from
the real green sample value g(i, j) on the green
plane and from the color difference value d(i, j) of
pixel (i, j) on the non-green plane.

The error residue e(i, j) is then mapped to a
nonnegative integer E(i, j) so as to reshape its
value distribution from a Laplacian one to an
exponential one.


The E(i, j) values from the green sub-image are
raster scanned and coded with the Rice code first.
The Rice code is employed to code E(i, j) because of
its simplicity and high efficiency in handling
exponentially distributed sources. When the Rice
code is used, each mapped residue E(i, j) is split
into a quotient and a remainder, which are then
saved for storage and transmission.
The length of the code word used to represent
E(i, j) depends on the coding parameter k.
Parameter k is critical to the compression
performance, as it determines the code length of
E(i, j), and an optimal value of k is therefore
sought.

For a geometric source with distribution
parameter ρ, as long as ρ is known, the optimal
coding parameter k for the whole source can be
determined easily. In the proposed scheme, ρ is
estimated adaptively in the course of encoding.
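The sketch below illustrates, under stated assumptions, how a signed residue can be folded to a nonnegative integer and Rice-coded. The folding map and the quotient/remainder split are standard Golomb-Rice practice assumed for illustration, not taken from the paper, and the paper's adaptive estimation of the divisor is not reproduced:

def fold(e):
    """Map a signed residue to a nonnegative integer (Laplacian -> exponential); assumed mapping."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(E, k):
    """Return the Rice code of a nonnegative integer E with parameter k:
    unary-coded quotient, a '0' separator, then k remainder bits."""
    q, r = E >> k, E & ((1 << k) - 1)
    rem = format(r, "0" + str(k) + "b") if k > 0 else ""
    return "1" * q + "0" + rem

print(rice_encode(fold(-3), k=2))  # residue -3 -> E = 5 -> '1' + '0' + '01' = '1001'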



DECODING PROCESS:


The decoding process is simply the reverse of the
encoding process. The green sub-image is decoded
first, and then the non-green sub-image is decoded
with the decoded green sub-image as a reference.
The original CFA image is then reconstructed by
combining the two sub-images.








EXPERIMENT RESULTS


The figure above shows that a parameter value of
1 provides good compression performance. We
treat the prediction residue as a local variable and
estimate the mean of its value distribution
adaptively. The divisor used to generate the Rice
code is then adjusted accordingly so as to improve
the efficiency of the Rice code.

COMPRESSION PERFORMANCE
Simulations were carried out to evaluate the
performance of the proposed compression
scheme. 24-bit color images of size 512 × 768
were sub-sampled according to the Bayer
pattern to form 8-bit testing CFA images. These
images were coded directly by the proposed
compression scheme for evaluation. Some
representative lossless compression schemes,
such as JPEG-LS, JPEG 2000 (lossless mode), and
LCMI, were used for comparison of the results.

ADVANTAGES OF PROPOSED METHOD
The spectral redundancy is reduced while a
high-quality image is still obtained. The number
of sensors in the digital camera is reduced from
three to one. The design has low complexity.
Compared with JPEG 2000, it gives better
performance.

SIMULATION RESULTS



EXPERIMENTAL RESULTS



CONCLUSION

The proposed scheme encodes the two sub-images
of the CFA image separately with predictive coding.
Lossless prediction is carried out in the intensity
domain for the green sub-image, while it is carried
out in the color difference domain for the
non-green sub-image.















VIDEO COMPRESSION USING ACC-JPEG
Revathi Amisetty, M.Doss
Asst. professor, JNTU, Anantapur.
_____________________________________________________________________________________________________________________________

ABSTRACT
The high correlation between successive
frames is not fully exploited by present
compression techniques. In this paper, we present a
new video compression approach that aims to
strongly exploit the temporal redundancy in the
video frames to improve compression efficiency
with minimum processing complexity. The
Accordion representation of video consists of a
3D-to-2D transformation of the video frames that
allows the temporal redundancy of the video to be
explored using 2D transforms, avoiding the
computationally demanding motion compensation
step. This transformation turns the spatio-temporal
correlation of the video into high spatial
correlation: each group of pictures is transformed
into one picture with high spatial correlation,
which is compressed as a still image by JPEG. The
de-correlation of the resulting pictures by the DCT
thus yields efficient energy compaction and
therefore produces a high video compression ratio.
The method proves to be more efficient especially
at high bit rates and with slow-motion video.

INTRODUCTION

The objective of video coding in most video
applications is to reduce the amount of video data
for storage or transmission purposes without
affecting the visual quality. The desired video
performance depends on application requirements
in terms of quality, disk capacity and bandwidth.
For portable digital video applications, highly
integrated real-time video compression and
decompression solutions are increasingly required.
Currently, motion-estimation-based encoders are
the most widely used in video compression. Such
encoders exploit inter-frame correlation to provide
more efficient compression. However, the motion
estimation process is computationally intensive;
its real-time implementation is difficult and costly.
This is why the motion-based video coding standard
MPEG was primarily developed for stored video
applications, where the encoding process is
typically carried out off-line on powerful
computers. It is therefore less appropriate for
implementation as a real-time compression process


for portable recording or communication devices
(video surveillance cameras and fully digital video
cameras). In these applications, an efficient
low-cost, low-complexity implementation is the
most critical issue. Thus, research has turned
towards the design of new coders better adapted to
the requirements of new video applications. This
has led some researchers to exploit 3D transforms
in order to capture temporal redundancy. A coder
based on a 3D transform produces a video
compression ratio close to that of
motion-estimation-based coding, with less complex
processing. However, 3D-transform-based video
compression methods treat all redundancies in the
3D video signal in the same way, which can reduce
their efficiency, since the variation of pixel values
is not uniform across the spatial and temporal
dimensions and the redundancy therefore does not
have the same relevance everywhere. Often the
temporal redundancies are more significant than
the spatial ones. More efficient compression can be
achieved by exploiting the redundancies in the
temporal domain more heavily; this is the basic
purpose of the proposed method.
The proposed method consists of projecting the
temporal redundancy of each group of pictures
into the spatial domain so that it is combined with
the spatial redundancy in a single representation
with high spatial correlation. The obtained
representation is then compressed as a still image
with a JPEG coder.
PROPOSED APPROACH
The basic idea is to represent the video data in a
highly correlated form. Thus, we have to exploit
both the temporal and the spatial redundancies in
the video signal. The input of our encoder is the
so-called video cube, which is made up of a number
of frames. This cube is decomposed into temporal
frames, which are then gathered into one
two-dimensional frame. The final step consists of
coding the obtained frame. In the following, we
detail the design steps of the method.
A. Hypothesis
Many experiments have shown that the variation of
the 3D video signal is much smaller in the temporal
dimension than in the spatial one. Thus, pixels in a
3D video signal are more correlated in the temporal
domain than in the spatial one [3]; this can be
translated into the following expression for one
reference pixel I(x, y, t), where:
1. I: pixel intensity value
2. x, y: spatial coordinates of the pixel

3. t: time (video instant)
In general we have
|I(x, y, t) − I(x, y, t+1)| < |I(x, y, t) − I(x+1, y, t)| (1)
This assumption is the basis of the proposed
method, in which we try to place pixels that have
very high temporal correlation in spatial
adjacency.
Accordion based representation
To exploit the preceding assumption, we start by
carrying out a temporal decomposition of the 3D
video signal; the figure shows the temporal and
spatial decomposition of one 8×8×8 video cube:


The frames obtained from the temporal
decomposition are called temporal frames. They
are formed by gathering the video cube pixels that
have the same column rank. According to the above
assumption, these frames have a stronger
correlation than the spatial frames. To increase the
correlation in the Accordion representation, we
reverse the direction of the even frames. Figure 4
illustrates the principle of this representation.

Thus, the Accordion representation is obtained
as follows: first, we carry out a temporal
decomposition of the 3D video. Then, the even
temporal frames are flipped horizontally (mirror
effect). The last step consists of successively
projecting the frames onto a 2D plane, further
called the "IACC" frame. The Accordion
representation tends to put in spatial adjacency
the pixels having the same coordinates in the
different frames of the video cube. This
representation transforms the temporal correlation
of the original 3D video source into high spatial
correlation in the 2D representation ("IACC"). The
goal of flipping the even temporal frames
horizontally is to better exploit the spatial
correlation at the extremities of the video cube
frames. In this way, the Accordion representation
also minimizes the distances between correlated
pixels of the source. This is made clearer in
figure 3:

Figure 4 shows the strong correlation obtained in
the Accordion representation made of four frames
showing the motion of a rhinoceros.

Accordion analytic representation

The Accordion representation is obtained by a
process that takes the GOP frames (I_1..I_N) as
input and produces the resulting frame IACC as
output. The inverse process takes the IACC frame
as input and outputs the decoded frames
(I_1..I_N). The analysis of these two processes
leads to the following algorithms:

Algorithm 1 describes how to build the Accordion
representation (labeled ACC), and Algorithm 2
describes the inverse process (labeled ACC⁻¹).
Note that:
1) L and H are, respectively, the length and the
height of the video source frames.
2) NR (written N in the algorithms) is the number
of frames in a GOP.

Algorithm 1: Algorithm of ACC:
1: for x from 0 to (L * N) - 1 do
2:   for y from 0 to H - 1 do
3:     if ((x div N) mod 2) != 0 then
4:       n = (N - 1) - (x mod N)
5:     else
6:       n = x mod N
7:     end if
8:     IACC(x, y) = In(x div N, y)
9:   end for
10: end for

Algorithm 2: Algorithm of ACC⁻¹:
1: for n from 0 to N - 1 do
2:   for x from 0 to L - 1 do
3:     for y from 0 to H - 1 do
4:       if (x mod 2) != 0 then
5:         XACC = (N - 1) - n + (x * N)
6:       else
7:         XACC = n + (x * N)
8:       end if
9:       In(x, y) = IACC(XACC, y)
10:     end for
11:   end for
12: end for
3) IACC(x, y) is the intensity of the pixel situated in
the "IACC" frame at coordinates (x, y) in the
Accordion representation's frame of reference.
4) In(x, y) is the intensity of the pixel situated in
the nth frame of the original video source. We can
also express the Accordion representation with the
following formulas:
ACC formula:
IACC(x, y) = In(x div N, y)  (2)
with n = ((x div N) mod 2)(N − 1)
+ (1 − 2((x div N) mod 2))(x mod N).
ACC inverse formula:
In(x, y) = IACC(XACC, y)  (3)
with XACC = (x mod 2)(N − 1)
+ (1 − 2(x mod 2)) n + xN.
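The following Python sketch implements the ACC and ACC⁻¹ mappings of equations (2) and (3) and checks that they invert each other; the NumPy array layout (N, H, L) and the function names are assumptions made for illustration, not the authors' code:

import numpy as np

def acc(gop):
    """Fold a GOP of shape (N, H, L) into a single IACC frame of shape (H, L * N)."""
    N, H, L = gop.shape
    iacc = np.empty((H, L * N), dtype=gop.dtype)
    for x in range(L * N):
        c = x // N                                        # source column (x div N)
        n = x % N if c % 2 == 0 else (N - 1) - (x % N)    # mirror for odd source columns
        iacc[:, x] = gop[n, :, c]
    return iacc

def acc_inverse(iacc, N):
    """Recover the GOP from an IACC frame."""
    H, LN = iacc.shape
    L = LN // N
    gop = np.empty((N, H, L), dtype=iacc.dtype)
    for n in range(N):
        for x in range(L):
            xacc = x * N + (n if x % 2 == 0 else (N - 1) - n)
            gop[n, :, x] = iacc[:, xacc]
    return gop

gop = np.random.randint(0, 256, (8, 8, 8), dtype=np.uint8)   # an 8x8x8 video cube
assert np.array_equal(acc_inverse(acc(gop), 8), gop)          # round trip holds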
In the following, we present the coding diagram
based on the Accordion representation.
Diagram of coding ACC-JPEG
ACC-JPEG coding proceeds as follows:
1) Decomposition of the video into groups of frames
(GOPs).
2) Accordion representation of the GOP.
3) Decomposition of the resulting "IACC" frame
into 8×8 blocks.
4) For each 8×8 block:
Discrete Cosine Transform (DCT).
Quantization of the obtained coefficients.
Zigzag scan of the quantized coefficients.
Entropy coding of the coefficients (RLE, Huffman).
A sketch of the per-block processing of step 4 is
given below.
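This hedged sketch shows the 8×8 DCT, a uniform quantization, and a zigzag scan for one block of the IACC frame; the flat quantization step q and the zigzag index construction are simplifying assumptions (not the standard JPEG tables), and SciPy's dctn is used for the 2D DCT:

import numpy as np
from scipy.fft import dctn

def zigzag_indices(n=8):
    """Return (row, col) pairs of an n x n block in zigzag order."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else p[1]))

def code_block(block, q=16):
    """DCT, uniform quantization and zigzag scan of one block (symbols for RLE/Huffman)."""
    coeffs = dctn(block.astype(float) - 128.0, norm="ortho")        # 2D DCT, level-shifted
    quant = np.round(coeffs / q).astype(int)                        # uniform quantization
    return [quant[i, j] for i, j in zigzag_indices(len(block))]     # zigzag scan

block = np.random.randint(0, 256, (8, 8))
symbols = code_block(block)   # these symbols would then be entropy coded (RLE, Huffman)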


ACC - JPEG FEATURE ANALYSIS

The proposed method presents several
advantages:
Random access: 3D-transform and
motion-estimation-based video compression
methods require all the frames of the GOP to allow
random access to the different frames. The
proposed method, however, allows random frame
access: the ACC formula makes it possible to code
and/or decode a well-defined zone of the GOP
(partial coding). In conclusion, we can state that
ACC-JPEG is very efficient for scenes with
translational motion [9] or with slow motion,
especially without scene changes. Furthermore,
ACC-JPEG seems well adapted to embedded or
portable video devices such as IP cameras, thanks
to its flexibility and operational simplicity.
Symmetry: Unlike coding schemes based on motion
estimation and compensation, in which encoding is
more complex than decoding, the proposed encoder
and decoder are symmetric, with almost identical
structure and complexity, which facilitates their
joint implementation.
Simplicity: The proposed method transforms the
3D features into 2D ones, which greatly reduces
the processing complexity. Moreover, the
complexity is independent of the compression
ratio and of the motion.
Objectivity: Unlike 3D methods that treat
temporal and spatial redundancies in the same
way, the proposed method is selective: it exploits
the temporal redundancies more than the spatial
ones, which is more objective and more efficient.
Flexibility: The parameters of ACC-JPEG offer a
flexibility that allows it to be adapted to the
different requirements of video applications. The
latency, the compression ratio and the size of the
required memory depend on the value of the NR
parameter: by increasing NR, the compression
ratio, the latency and the reserved memory all
increase. This parameter makes it possible to
optimize the compression/quality trade-off while
taking memory and latency constraints into
consideration.

CONCLUSION

The proposed method could open new horizons in
the video compression domain: it strongly exploits
temporal redundancy with minimal processing
complexity, which facilitates its implementation in
embedded video systems. This new video
compression method exploits the temporal
redundancy objectively. It presents some useful
functions and features that can be exploited in
domains such as video surveillance. At high bit
rates, it gives the best compromise between quality
and complexity. It provides better performance
than MJPEG and MJPEG2000 at almost all bit-rate
values. Above bit rates of 2000 kb/s, the
performance of our compression method becomes
comparable to that of

MPEG-4, especially for low-motion sequences.
There are various directions for future
investigation. First of all, we would like to
explore other possibilities of video
representation. Another direction could be to
combine the Accordion representation with other
transforms such as the wavelet transform,
which allows global processing of the whole
Accordion representation, in contrast to the DCT,
which generally acts on blocks.

REFERENCES
[1] R. A. Burg, "A 3D-DCT real-time video
compression system for low complexity
single-chip VLSI implementation," in Mobile
Multimedia Conf. (MoMuC), 2000.
[2] T. Fryza, Compression of Video Signals by
3D-DCT Transform, diploma thesis, Institute of
Radio Electronics, FEKT Brno University of
Technology, Czech Republic, 2002.
[3] X. Zhou, E. Q. Li, and Y. Chen, "Implementation
of H.264 decoder on general purpose processors
with media instructions," in SPIE Conf. on Image
and Video Communications and Processing, Santa
Clara, CA, pp. 224-235, Jan. 2003.
[4] M. B. T. Q. N. A. Molino, F. Vacca, "Low
complexity video codec for mobile video
conferencing," in Eur. Signal Processing Conf.
(EUSIPCO), Vienna, Austria, pp. 665-668,
Sept. 2004.
[5] S. B. Gokturk and A. M. Aaron, "Applying 3D
methods to video for compression," in Digital
Video Processing (EE392J) Projects, Winter
Quarter, 2002.
[6] T. Fryza, Compression of Video Signals by
3D-DCT Transform, diploma thesis, Institute of
Radio Electronics, FEKT Brno University of
Technology, Czech Republic, 2002.
[7] M. P. Servais and G. de Jager, "Video
compression using the three dimensional discrete
cosine transform," in Proc. COMSIG, pp. 27-32,
1997.
[6] R. A. Burg, "A 3D-DCT real-time video
compression system for low complexity
single-chip VLSI implementation," in Mobile
Multimedia Conf. (MoMuC), 2000.
[7] N. Ahmed, T. Natarajan, and K. R. Rao,
"Discrete cosine transform," IEEE Trans. on
Computers, pp. 90-93, 1974.
[8] T. Fryza and S. Hanus, "Video signals
transparency in consequence of 3D-DCT
transform," in Radioelektronika 2003 Conference
Proceedings, Brno, Czech Republic, pp. 127-130,
2003.
[9] N. Bozinovic and J. Konrad, "Motion analysis in
3D DCT domain and its application to video
coding," vol. 20, pp. 510-528, 2005.
[10] E. Y. Lam and J. W. Goodman, "A mathematical
analysis of the DCT coefficient distributions for
images," vol. 9, pp. 1661-1666, 2000.









































AN EFFICIENT IMAGE RETRIEVAL SYSTEM
Mrs. Seema Patil¹, Mrs. Ashwini N²
Lecturer¹, Sr. Lecturer², CSE Dept, TOCE
9900379841, theseema@gmail.com¹; 9900525858, ashwinilaxman@gmail.com²

_____________________________________________________________________________________________________________________________
ABSTRACT
An image retrieval system is a computer
system for browsing, searching and retrieving
images from a large database of digital images.
Most traditional and common methods of image
retrieval utilize some method of adding metadata
such as captioning, keywords, or descriptions to the
images so that retrieval can be performed over the
annotation words. Manual image annotation is
time-consuming, laborious and expensive. Another
method of image retrieval is content-based image
retrieval, which aims at avoiding the use of textual
descriptions and instead retrieves images based on
their visual similarity to a user-supplied query
image or user-specified image features. Content-
based image retrieval is the application of
computer vision to the image retrieval problem.
Content-based means that the search will analyze
the actual contents of the image. The term 'content'
in this context might refer to colors, textures, or
any other information that can be derived from the
image itself.
Keywords
Feature extraction, Feature matching, Histogram,
Texture

I. INTRODUCTION

Image retrieval can be based on
automatically extracted primitive features such as
color. The need in image retrieval is to retrieve
images that are more appropriate, using multiple
features for better retrieval accuracy. The usual
search process with any search engine is text
retrieval, which is not very accurate, so we opt for
an image retrieval system using the color feature.
An image retrieval system is also known as
content-based image retrieval, and we consider
several features of an image, for example color,
texture and shape. CBIR is the process of retrieving
desired images from huge databases based on
features extracted from the images themselves,
without resorting to a keyword [1]. Features are
derived directly from the images and are extracted
and analyzed automatically by means of computer
processing [2]. Content-based image retrieval is
also known as query by image content (QBIC) [3]
and content-based visual information retrieval
(CBVIR). Content-based means that the search
makes use of the contents of the images
themselves, rather than relying on human-entered
metadata such as captions or keywords. The
similarity measurements and the representation of
the visual features are two important issues in an
image retrieval system.
Given a query image with a single or
multiple objects present in it, the mission of this
work is to retrieve similar kinds of images from the
database based on the features extracted from the
query image. For a content-based search with high
accuracy, multiple features such as color are
incorporated. Color feature extraction is done
through the Global Color Histogram and the Local
Color Histogram, and texture through
co-occurrence and edge frequency.
The visual features are classified into low-
and high-level features according to their
complexity and the use of semantics [1]. The use of
simple features like color or shape alone is not
efficient [4]. When retrieving images using
combinations of these features, there is a need to
test the accuracy of these combinations and
compare them with single-feature-based retrieval
in order to find the combinations that give the best
matches and enhance the performance of image
retrieval systems [5].
In fact, some image retrieval systems give
good results only for special cases of database
images, so one of the most important challenges
facing the evaluation of CBIR systems is creating a
common image collection and obtaining relevance
judgments [6]. On the other hand, the use of
similarity measurements is very effective in image
retrieval systems. After extracting the required
features from the images, the retrieval process
becomes the measurement of similarity between
the feature vectors, one for the query image and
one for each database image.

Section II presents some of the related
work on image retrieval systems. Section III
presents the proposed solution for the image retrieval

system. Section IV describes the feature
representations of an image. Section V describes
the extraction of the color and texture features.
Section VI presents the results of the image
retrieval system, followed by concluding remarks.

II.RELATED WORK

Mohamed A. Tahoun [7] proposes a robust
content-based image retrieval system using
multiple feature representations. The similarity
measurements and the representation of the visual
features are two important issues in Content-Based
Image Retrieval (CBIR). In that paper, he compares
the combination of wavelet-based representations
of the texture feature and the color feature, with
and without the color layout feature. To represent
the color information, he uses the Global Color
Histogram (GCH) beside the color layout feature,
and for the texture information, Haar and
Daubechies wavelets. Based on some commonly
used Euclidean and non-Euclidean similarity
measures, different categories of images were
tested and the retrieval accuracy was measured
when combining such techniques.

Mari Partio [8] notes that most existing
image retrieval systems perform reasonably when
using color features, but retrieval accuracy using
shape or texture features is not as good. Therefore,
that thesis investigates different methods of
representing shape and texture in content-based
image retrieval. Later, when appropriate
segmentation algorithms are available, some of
these methods could also be applied to video
object retrieval. The work presents two
contributions: a shape-based and a texture-based
retrieval method.

III. PROPOSED WORK FOR IMAGE
RETRIEVAL SYSTEM
A. Definitions
Image databases and collections can be
enormous in size, containing hundreds, thousands
or even millions of images. The conventional
method of image retrieval is to search for a
keyword that matches the descriptive keyword
assigned to the image by a human categorizer.
Currently under development, even though several
systems already exist, is the retrieval of images
based on their color feature content, here called an
image retrieval system using the color feature.
While computationally expensive, the results are
far more accurate than conventional image
indexing. Hence, there exists a tradeoff between
accuracy and computational cost. This tradeoff
decreases as more efficient algorithms are utilized
and increased computational power becomes
inexpensive.
B. Problem Specification
The problem involves entering an image
as a query into a software application that is
designed to employ image retrieval techniques in
extracting visual properties, and matching them.
This is done to retrieve images in the database
that are visually similar to the query image.
C. Proposed Solution
The solution initially proposed was to
extract the primitive features of a query image
and compare them to those of the database images.
The image feature under consideration was color.
Thus, using matching and comparison algorithms,
the color features of one image are compared and
matched to the corresponding features of another
image. This comparison is performed using color
distance metrics. In the end, these metrics are
applied one after another so as to retrieve the
database images that are similar to the query. The
similarity between
features was to be calculated using algorithms
used by well known CBIR systems such as QBIC.
For each specific feature there was a specific
algorithm for extraction and another for
matching.

D. Block Diagram
Different types of user queries can be input to the
image retrieval system. With query by example,
the user searches with a query image and the
software finds images similar to it [9].

In the image retrieval software, each image stored
in the database has its features extracted and
compared to the features of the query image.
This involves two steps.


Feature extraction: The first step in image
retrieval is to extract a color feature of the query
image. This extraction can be the average RGB, the
color histogram, or texture.

Feature matching:
Once the feature vectors are created, the
matching process measures a metric distance
between the feature vectors and returns the
best-matching images.

Block diagram: When building an image database,
feature vectors are extracted from the images and
then stored in the database. Given a query image,
its feature vectors are computed. If the distance
between the feature vectors of the query image and
those of an image in the database is small enough,
the corresponding database image is considered a
match to the query; a small sketch of this flow
follows the block diagram.



Fig: Block Diagram of image retrieval systems
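The following minimal Python sketch illustrates the database-building and query-matching flow of the block diagram; the function names, the use of NumPy, and the Euclidean distance are illustration-only assumptions, and the feature function stands in for any of the descriptors described in Sections IV-V:

import numpy as np

def build_database(images, feature_fn):
    """images: dict name -> image array. Store one feature vector per image."""
    return {name: feature_fn(img) for name, img in images.items()}

def retrieve(query_image, database, feature_fn, top_k=5):
    """Return the top_k database images whose feature vectors are closest to the query's."""
    q = feature_fn(query_image)
    ranked = sorted(database.items(), key=lambda kv: np.linalg.norm(kv[1] - q))
    return [name for name, _ in ranked[:top_k]]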

IV VISUAL FEATURES REPRESENTATIONS
One of the most important challenges
when building image-based retrieval systems is
the choice and the representation of the visual
features [10]. Color is the most intuitive and
straightforward feature for the user, while shape
and texture are also important visual attributes,
but there is no standard way to use them as
efficiently as color for image retrieval. Many
content-based image retrieval systems use color
and texture features [11].

In order to extract the selected features and index
the database images based on them, we use the
Global Color Histogram (GCH) to extract the color
feature. In this section, the color layout feature is
also extracted and the database images are indexed
based on it.

A. Color
The Global Color Histogram (GCH) is the most
traditional way of describing the color attribute of
an image. It is constructed by computing the
normalized percentage of color pixels in the range
corresponding to each color element. An example
of a true-color (RGB) image and the corresponding
histograms of each component is displayed in
Fig. 1.

To construct the color feature vector (of length
256×3) for both the query image and all images in
the database, we identify the three color
components (R, G, and B) and compute the
corresponding histograms of these components.

Fig: A colored image at the top, the three
components Red, Green and Blue in the middle,
and finally from left to right: the corresponding
histograms for Red, Green and Blue components

[Block diagram labels: user interface, query subsystem, feature extraction, feature matching subsystem, multi-modal feature extraction, clustering, query by example, database generation, query and retrieval.]

The main method of representing the color
information of images in CBIR systems is through
color histograms. A color histogram is a type of
bar graph, where each bar represents a particular
color of the color space being used. The color
space can be RGB or HSV. The bars in a color
histogram are referred to as bins and they
represent the x-axis. The number of bins depends
on the number of colors there are in an image. The
y-axis denotes the number of pixels in each bin, in
other words how many pixels in an image are of a
particular color. Quantization, in terms of color
histograms, refers to the process of reducing the
number of bins by taking colors that are very
similar to each other and putting them in the
same bin.

There are two types of color histograms,
Global color histograms (GCHs) and Local color
histograms (LCHs). A GCH represents one whole
image with a single color histogram. An LCH
divides an image into fixed blocks and takes the
color histogram of each of those blocks. LCHs
contain more information about an image but are
computationally expensive when comparing
images.
The GCH is the traditional method for
color based image retrieval. However, it does not
include information concerning the color
distribution of the regions of an image. Thus when
comparing GCHs one might not always get a
proper result in terms of similarity of images.

B. Global Color Histogram

The color histogram depicts the color
distribution using a set of bins. Using the Global
Color Histogram (GCH), an image is encoded
with its color histogram, and the distance
between two images is determined by the
distance between their color histograms.

C. Local Color Histogram
This approach (referred to as LCH)
includes information concerning the color
distribution of regions. The first step is to
segment the image into blocks and then to obtain
a color histogram for each block. An image will
then be represented by these histograms. When
comparing two images, we calculate the distance,
using their histograms, between a region in one
image and the region in the same location in the
other image. The distance between the two images
is then determined by the sum of all these
distances.
D. Texture
Texture measures [14] look for visual patterns in
images and how they are spatially defined.
Textures are represented by texels (a texel, or
texture element, also called a texture pixel, is the
fundamental unit of texture space used in
computer graphics), which are then placed into a
number of sets, depending on how many textures
are detected in the image. These sets not only
define the texture but also where in the image the
texture is located. Texture is a difficult concept to
represent. The identification of specific textures in
an image is achieved primarily by modeling
texture as a two-dimensional gray-level variation.
The relative brightness of pairs of pixels is
computed such that the degree of contrast,
regularity, coarseness and directionality may be
estimated. Textures are represented by arrays of
texels, just as pictures are represented by arrays
of pixels. Texture is analyzed here through
co-occurrence: the co-occurrence matrix is a
statistical method that uses second-order statistics
to model the relationships between pixels within a
region by constructing Spatial Gray Level
Dependency (SGLD) matrices.
V. COLOR FEATURE EXTRACTION
A. Average RGB

Average RGB computes the average values of the R, G and B channels over all pixels in an image and uses them as a descriptor of the image for comparison purposes.

Notation
I - an image
w - width of image I
h - height of image I
I(x,y) - the pixel of image I at row y,
column x
R(p), G(p), B(p) - the red, green and blue
color component of pixel p
ra, ga, ba - the average red, green and blue
component of image Ia

d(Ia,Ib) - the distance measure between
image Ia and Ib
Description Features
The following are the three equations for computing the average R, G and B components of an image I:

ra = (1 / (w·h)) Σx Σy R(I(x, y))
ga = (1 / (w·h)) Σx Σy G(I(x, y))
ba = (1 / (w·h)) Σx Σy B(I(x, y))
The distance measure d(Ia, Ib) between images Ia and Ib is the weighted Euclidean distance between their average components (ra, ga, ba) and (rb, gb, bb). The distance between two identical images is 0, and the distance between the two most dissimilar images (black and white) is 1 or 255, depending on whether the RGB range is 0-1 or 0-255.
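As a minimal sketch of this descriptor (assuming images are held as h x w x 3 NumPy arrays and equal channel weights, since the weights are not specified above), the computation might look like:

```python
import numpy as np

def average_rgb(image):
    """Average-RGB descriptor (ra, ga, ba) of an h x w x 3 image array."""
    return image.reshape(-1, 3).mean(axis=0)

def avg_rgb_distance(img_a, img_b, weights=(1.0, 1.0, 1.0)):
    """Weighted Euclidean distance between average-RGB descriptors."""
    da, db = average_rgb(img_a), average_rgb(img_b)
    w = np.asarray(weights, dtype=float)
    return float(np.sqrt(np.sum(w * (da - db) ** 2)))
```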

B. Color Histogram

Color histograms are frequently used to
compare images. We discretize the colorspace of
the image such that there are n distinct
(discretized) colors. A color histogram H is a
vector (h1, h2, ..., hn), in which each bucket hj
contains the number of pixels of color j in the
image. Typically images are represented in the
RGB colorspace, and a few of the most significant
bits are used from each color channel. We use the
2 most significant bits of each color channel, for a
total of n = 64 buckets in the histogram.
For a given image I, the color histogram HI is a
compact summary of the image. A database of
images can be queried to find the most similar
image to I, and can return the image I' with the
most similar color histogram HI'. Typically color
histograms are compared using the sum of
squared differences (L2-distance) or the sum of
absolute value of differences (L1-distance). So the
most similar image to I would be the image I'
minimizing the L2-distance or L1-distance. Note that, for simplicity, we assume the distance is weighted evenly across the different color buckets.
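A small sketch of the 64-bucket histogram described above (an assumption of this sketch: 8-bit RGB images stored as NumPy arrays), using the 2 most significant bits of each channel:

```python
import numpy as np

def color_histogram(image, bits=2):
    """Global color histogram using the top `bits` bits of each 8-bit R, G, B
    channel; bits=2 gives 4*4*4 = 64 buckets."""
    q = image.astype(np.int32) >> (8 - bits)           # quantize each channel
    index = (q[..., 0] << (2 * bits)) | (q[..., 1] << bits) | q[..., 2]
    hist = np.bincount(index.ravel(), minlength=1 << (3 * bits))
    return hist / hist.sum()                            # fraction of pixels per bin

def l1_distance(h1, h2):
    return float(np.abs(h1 - h2).sum())

def l2_distance(h1, h2):
    return float(np.sqrt(((h1 - h2) ** 2).sum()))
```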
Notation
M - number of pixels in an image (assumed fixed for ease of explanation)
H(h1, h2, ..., hn) - a vector, in which each component hj is the number of pixels of color j in the image
n - number of distinct (discretized) colors
I - an image
HI - the color histogram of image I

L2-distance: d2(HI, HI') = sqrt( Σj (hj − h'j)² )

L1-distance: d1(HI, HI') = Σj |hj − h'j|
Description Features
Global color histogram

Uses one vector, H(h1, h2, ..., hn), to describe an image, where

hj = (the number of pixels of color j in the image) / (total number of pixels in the image)

Local color histogram

Divides an image into 16 equal sections. For each section of the image, one vector, Hk(h1, h2, ..., hn), represents that section, so there are 16 vectors, Hk, describing an image.

Default vector comparing method

In DISCOVIR, by default, the distance between two images is measured by combining the distances between their corresponding histogram vectors.

C: Texture Extraction

1. Co-occurrence

Co-occurrence matrix is a statistical method using
second order statistics to model the relationships
between pixels within the region by constructing
Spatial Gray Level Dependency (SGLD) matrices.
The gray-level co-occurrence matrix is the two-dimensional matrix of joint probabilities Pd,r(i, j) between pairs of pixels separated by a distance d in a given direction r.



Notation
Pd,r(i,j) - joint probabilities between pairs
of pixels in a given direction
d - distance between pairs of pixels in a
given direction
r - a given direction

Description Features
Texture features derived from the gray-level co-occurrence matrix for texture classification are based on these criteria:

Energy: Σi Σj Pd,r(i, j)²

Homogeneity: Σi Σj Pd,r(i, j) / (1 + |i − j|)

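A minimal sketch of the co-occurrence (SGLD) computation and the two measures above, assuming a 2-D array of quantized gray levels and a non-negative (dx, dy) offset (loop-based for clarity rather than speed):

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Spatial Gray Level Dependency (co-occurrence) matrix for one offset."""
    h, w = gray.shape
    p = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            p[gray[y, x], gray[y + dy, x + dx]] += 1   # count pixel pairs
    return p / p.sum()                                  # joint probabilities P(i, j)

def energy(p):
    return float((p ** 2).sum())

def homogeneity(p):
    i, j = np.indices(p.shape)
    return float((p / (1.0 + np.abs(i - j))).sum())
```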
VI RESULTS
Database




A. Average RGB
Input Image


Output Images



B. Local Color Histogram
Input Image



Output Images



C. Global Color Histogram
Input Image


Output Images
+

D. Co-Occurrence
Input Image



Output Image





CONCLUSION

Image retrieval using color features retrieves images based on the average RGB values of an image. It also retrieves images based on the color histogram of an image. There are two types of color histograms, global color histograms (GCHs) and local color histograms (LCHs). A GCH represents one whole image with a single color histogram. An LCH divides an image into fixed blocks and takes the color histogram of each of those blocks. The texture of an image can also be extracted and matched using co-occurrence and edge detection methods.
Image retrieval can be extended with other features of an image, such as shape. Images can be indexed using shape algorithms and matched using suitable distance metrics.

REFERENCES
[1] John Eakins and Margaret Graham, Content-based Image Retrieval, JISC Technology Applications Programme, University of Northumbria at Newcastle, January 1999.
[2] Christopher C. Yang, Content-based image retrieval: a comparison between query by example and image browsing map approaches, Journal of Information Science, 30 (3), pp. 254-267, 2004.
[3] Rui Y., Huang T. S. and Chang S. F., Image retrieval: current techniques, promising directions, and open issues, Journal of Visual Communication and Image Representation, 10, pp. 39-62, 1999.
[4] Karin Kailing, Hans-Peter Kriegel and Stefan Schonauer, Content-Based Image Retrieval Using Multiple Representations, Proc. 8th Int. Conf. on Knowledge-Based Intelligent Information and Engineering Systems (KES 2004), Wellington, New Zealand, 2004, pp. 982-988.
[5] Ahmed M. Ghanem, Emad M. Rasmy and Yasser M. Kadah, Content-Based Image Retrieval Strategies for Medical Image Libraries, Proc. SPIE Medical Imaging, San Diego, Feb. 2001.
[6] Henning Muller, Wolfgang Muller, David McG. Squire and Thierry Pun, Performance Evaluation in Content-Based Image Retrieval: Overview and Proposals, Computing Science Center, University of Geneva, Switzerland, 2000.
[7] Mohamed A. Tahoun, Khaled A. Nagag, Taha I. El-Arie and Mohammed A-Megeed, A Robust Content-Based Image Retrieval System Using Multiple Features Representations.
[8] Mari Partio, Content-based Image Retrieval using Shape and Texture Attributes, Tampere University of Technology, Department of Electrical Engineering, Institute of Signal Processing, Nov 2002.
[9] P. S. Suhasini, K. Sri Rama Krishna and I. V. Murali Krishna, Graph Based Segmentation in Content Based Image Retrieval.
[10] Vishal Chitkara, Color-Based Image Retrieval Using Binary Signatures, Technical Report TR 01-08, University of Alberta, Canada, May 2001.
[11] Qasim Iqbal and J. K. Aggarwal, Combining Structure, Color, and Texture for Image Retrieval: A Performance Evaluation, 16th International Conference on Pattern Recognition (ICPR), Quebec City, QC, Canada, August 11-15, 2002, vol. 2, pp. 438-443.
[12] Jiri Walder, Using 2-D wavelet analysis for matching two images, Technical University of Ostrava, 2000. http://www.cg.tuwien.ac.at/studentwork/CESCG-2000/JWalderl
[13] Shengjiu Wang, A Robust CBIR Approach Using Local Color Histograms.
[14] http://en.wikipedia.org/wiki/texture





















CRYPTOGRAPHY USING COLORS AND ARMSTRONG NUMBERS
S. Pavithra Deepa, V. Keerthika, III B. Tech (IT)
Sri Krishna College Of Engineering and Technology,
pavithradeepa@gmail.com
_____________________________________________________________________________________________________________________________

ABSTRACT
In real world, data security plays an important
role where confidentiality, authentication,
integrity, non repudiation are given importance.
The universal technique for providing
confidentiality of transmitted data is cryptography.
This paper provides a technique to encrypt the data
using a key involving Armstrong numbers and
colors as the password. Three sets of keys are used to provide secure data transmission, with the colors acting as a vital security element, thereby providing authentication.

I. INTRODUCTION:
In the present world scenario it is difficult to
transmit data from one place to another with
security. This is because hackers are becoming
more powerful nowadays. To ensure secured
data transmission there are several techniques
being followed. One among them is cryptography
which is the practice and study of hiding
information.

II. CRYPTOGRAPHY:
Cryptography, to most people, is concerned
with keeping communications private.
Encryption is the transformation of data into
some unreadable form. Its purpose is to ensure
privacy by keeping the information hidden from
anyone for whom it is not intended. Decryption is
the reverse of encryption; it is the transformation
of encrypted data back into some intelligible form.
Encryption and decryption require the use of
some secret information, usually referred to as a
key. The data to be encrypted is called plain text. The encrypted data obtained as a result of the encryption process is called cipher text.
Depending on the encryption mechanism used,
the same key might be used for both encryption
and decryption, while for other mechanisms, the
keys used for encryption and decryption might be
different.
A. Types of Cryptographic Algorithms



There are several ways of classifying
cryptographic algorithms. In general they are
categorized based on the number of keys that are
employed for encryption and decryption, and
further defined by their application and use as in
[1]. The three types of algorithms are depicted as
follows
1) Secret Key Cryptography (SKC):
Uses a single key for both encryption and
decryption. The most common algorithms in use
include Data Encryption Standard (DES),
Advanced Encryption Standard (AES).

2) Public Key Cryptography (PKC):
Uses one key for encryption and another for
decryption. RSA (Rivest, Shamir, Adleman)
algorithm is an example.

3) Hash Functions:
Uses a mathematical transformation to
irreversibly "encrypt" information. MD (Message
Digest) algorithm is an example.

III. RGB COLOR FORMAT

A. RGB Color Model:
Any color is the combination of three primary
colors Red, Green and Blue in fixed quantities. A
color is stored in a computer in form of three
numbers representing the quantities of Red,
Green and Blue respectively. This representation
is called RGB representation which is used in
computers to store images in BMP, JPEG and PDF
formats. Here each pixel is represented as values
for Red, Green and Blue. Thus any color can be
uniquely represented in the three dimensional
RGB cube as values of Red, Green and Blue.
The RGB color model is an additive model in
which Red, Green and Blue are combined in
various ways to produce other colors. By using
appropriate combination of Red, Green and Blue
intensities, many colors can be represented.
Typically, 24 bits are used to store a color pixel. This is usually apportioned as 8 bits each for red, green and blue, giving a range of 256 possible values, or intensities, for each hue. With this system, 16,777,216 (256^3 or 2^24) discrete combinations of hue and intensity can be specified.
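A small illustration of this 24-bit packing (a sketch, not part of the proposed scheme), using the violet red color mentioned later:

```python
def pack_rgb(r, g, b):
    """Pack 8-bit R, G, B intensities into one 24-bit integer."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(value):
    """Recover the (R, G, B) triple from a 24-bit integer."""
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

# Example: violet red (238, 58, 140) -> 0xEE3A8C
assert unpack_rgb(pack_rgb(238, 58, 140)) == (238, 58, 140)
```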
IV. PROPOSED APPROACH
A. Introduction
The existing techniques involve the use of keys
involving prime numbers and the like. As a step further ahead, let us consider a technique in which we use Armstrong numbers and colors.
Further we also use a combination of substitution
and permutation methods to ensure data security.
We perform the substitution process by
assigning the ASCII equivalent to the characters.
The permutation process is performed by using matrices as in [2] and the Armstrong number. In this
technique the first step is to assign a unique color
for each receiver. Each color is represented with a
set of three values. For example violet red color is
represented in RGB format as (238, 58,140). The
next step is to assign a set of three key values to
each receiver.


The sender is aware of the required receiver to whom the data has to be sent, so the receiver's unique color is used as the password. The set of three key values is added to the original color values and encrypted at the sender's side. This encrypted color actually acts as a password. The actual data is encrypted using Armstrong numbers.

At the receiver's side, the receiver is aware of his own color and the other key values. The encrypted color from the sender is decrypted by subtracting the key values from the received set of color values. It is then tested for a match with the color stored in the sender's database. Only when the colors match can the actual data be decrypted using Armstrong numbers. Using colors as a password in this way provides more security for the data as well as authentication, because the actual data can be accessed only when the colors at the sender's and receiver's sides match.

Layout of the proposed technique
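As a minimal sketch of the color-password step just described (using the color and key values from the illustration that follows; the function names are illustrative only):

```python
def encrypt_color(color, key):
    """Sender side: add the secret key values to the receiver's RGB color."""
    return tuple(c + k for c, k in zip(color, key))

def authenticate(received_color, key, stored_color):
    """Subtract the key values and compare with the color stored at the sender."""
    return tuple(c - k for c, k in zip(received_color, key)) == stored_color

# Values from the illustration below: raspberry (135, 38, 87), key (-10, +5, +5)
password = encrypt_color((135, 38, 87), (-10, 5, 5))   # -> (125, 43, 92)
assert authenticate(password, (-10, 5, 5), (135, 38, 87))
```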
B. Illustration
1) Encryption
As an illustration let us assume that the data has
to be sent to a receiver (say A) who is assigned the
color raspberry (135, 38, 87). Let the key values to
be added with this color value be (-10, +5, +5). Let
the Armstrong number used for data encryption
be 153.

Step 1: (Creating password)

Password design is part of the first phase of the entire process.

Initially the sender knows that the required
receiver is A. So the key values are added with the
color values assigned for receiver A.


135 38 87
-10 5 5
-------------------------
125 43 92

Now a newly encrypted color is designed for
security check.

Step 2: (Encryption of the actual data begins
here)
Let the message to be transmitted be
CRYPTOGRAPHY. First find the ASCII equivalent
of the above characters.

C  R  Y  P  T  O  G  R  A  P  H  Y
67 82 89 80 84 79 71 82 65 80 72 89

Step 3: Now add these numbers with the digits of
the Armstrong number as follows

    67  82  89  80  84  79  71  82  65  80  72  89
(+)  1   5   3   1  25   9   1 125  27   1   5   3
---------------------------------------------------
    68  87  92  81 109  88  72 207  92  81  77  92

Step 4: Convert the above data into a matrix as
follows

A =
| 68  81  72  81 |
| 87 109 207  77 |
| 92  88  92  92 |

Step 5: Consider an encoding matrix built from the digits of the Armstrong number 153 (the digits, their squares and their cubes):

B =
| 1   5   3 |
| 1  25   9 |
| 1 125  27 |

Step 6: After multiplying the two matrices (B X A) we get

C =
|   779   890  1383   742 |
|  3071  3598  6075  2834 |
| 13427 16082 28431 12190 |

The encrypted data (reading C column by column) is:

779, 3071, 13427, 890, 3598, 16082, 1383, 6075, 28431, 742, 2834, 12190

The above values represent the encrypted form of
the given message.
2) Decryption:
Decryption involves the process of getting back
the original data using decryption key. The data
given by the receiver (the color) is matched with
the data stored at the sender's end. For this process the receiver must be aware of his own assigned color and the key values.

Step 1: (Authenticating the receiver)
For the receiver A (as assumed), the actual color assigned is raspberry (135, 38, 87). The key values (the set of three values) are subtracted from the received color to get back the original color.
The decryption is as follows.
125 43 92 (Received data)
(-) -10 5 5 (Key values)
----------------------------------
135 38 87

The above set of values (135, 38, 87) is compared with the data stored at the sender's side. Only when they both match can the following steps be performed to decrypt the original data.

Step 2:
(Decryption of the original data begins here)

The inverse of the encoding matrix is

D = (1/240) x
|  450 -240   30 |
|   18  -24    6 |
| -100  120  -20 |

Step 3: Multiplying the decoding matrix with the encrypted data (D X C), we get back the data matrix

| 68  81  72  81 |
| 87 109 207  77 |
| 92  88  92  92 |

Step 4: Now transform the above result (reading column by column) as given below

68 87 92 81 109 88 72 207 92 81 77 92

Step 5: Subtract with the digits of the Armstrong
numbers as follows

    68  87  92  81 109  88  72 207  92  81  77  92
(-)  1   5   3   1  25   9   1 125  27   1   5   3
------------------------------------------------------
    67  82  89  80  84  79  71  82  65  80  72  89

Step 6: Obtain the characters from the above
ASCII equivalent
67 82 89 80 84 79 71 82 65 80 72 89
C  R  Y  P  T  O  G  R  A  P  H  Y
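The whole illustrated process can be sketched in a few lines (a minimal sketch of the steps above, assuming the message length is a multiple of 3; np.linalg.inv is used for brevity where the paper writes out the inverse matrix D explicitly):

```python
import numpy as np

DIGITS = [1, 5, 3]                                  # digits of the Armstrong number 153
B = np.array([[d ** p for d in DIGITS] for p in (1, 2, 3)])   # encoding matrix
D = np.linalg.inv(B)                                # decoding matrix

def armstrong_keystream(n):
    """1,5,3, 1,25,9, 1,125,27, 1,5,3, ... repeated to length n."""
    return [d ** (1 + (i // 3) % 3) for i, d in
            enumerate(DIGITS * ((n + 2) // 3))][:n]

def encrypt(message):
    codes = [ord(c) for c in message]               # ASCII equivalents (Step 2)
    added = [c + k for c, k in zip(codes, armstrong_keystream(len(codes)))]
    A = np.array(added).reshape(-1, 3).T            # 3 x (n/3) matrix, column-wise (Step 4)
    return (B @ A).T.ravel().tolist()               # cipher values, column by column

def decrypt(cipher):
    C = np.array(cipher).reshape(-1, 3).T
    added = np.rint(D @ C).astype(int).T.ravel().tolist()
    codes = [a - k for a, k in zip(added, armstrong_keystream(len(added)))]
    return "".join(chr(c) for c in codes)

cipher = encrypt("CRYPTOGRAPHY")
# -> [779, 3071, 13427, 890, 3598, 16082, 1383, 6075, 28431, 742, 2834, 12190]
assert decrypt(cipher) == "CRYPTOGRAPHY"
```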

C. Advantages
The above technique involves keys with a
minimum length of 8 bits for Armstrong numbers.
This minimum key length reduces the efforts
taken to encrypt the data. The key length can be
increased if needed, with increase in character
length. This increases the complexity thereby
providing increased security.
This technique ensures that the data transfer
can be performed with protection since it involves
two main steps. First step is to convert the
characters into another form, by adding with the
digits of the Armstrong numbers. Second step is to
encode using a matrix to form the required
encrypted data.
Tracing process becomes difficult with this
technique. This is because the Armstrong number
is used differently in each step. The key can be hacked only if all the steps involved in the encoding process are known in advance.
This technique could be considered as a kind of
triple DES algorithm since we use three different
keys namely the colors, key values added with the
colors and Armstrong numbers.
Unless all three key values, along with the entire encryption and decryption technique, are known, the data cannot be obtained. So hacking
becomes difficult mainly because of the usage of
colors.
Simple encryption and decryption techniques
may just involve encoding and decoding the actual
data. But in this proposed technique the password
itself is encoded for providing more security to
the access of original data.
V. CONCLUSION
The above combination of secret key and public
key cryptography can be applied mainly in
military where data security is given more
importance. This technique provides more
security with increase in the key length of the Armstrong numbers. Thus the usage of three sets of keys, namely colors, an additional set of key values

and Armstrong numbers in this technique ensures
that the data is transmitted securely and accessed
only by authorized people.
ACKNOWLEDGMENT
We would like to thank Mr.S.Kannimuthu,
Lecturer, Department of IT, Sri Krishna College of
Engineering and Technology for the support
provided.
REFERENCES
[1] Atul Kahate, Cryptography and Network Security, Tata McGraw-Hill Publications.
[2] http://aix1.uottawa.ca/~jkhoury/cryptography.htm
[3] http://www.scribd.com/doc/29422982/Data-Compression-and-Encoding-Using-Colors






































































DESIGN OF UHF RFID PASSIVE TAG ANTENNA FOR COMMERCIAL
APPLICATIONS USING IE3D SOFTWARE
Deshraj
Institute of technology and management, Gurgaon, MDUniversity, Rohtak
Deshrajshk580@gmail.com
_____________________________________________________________________________________________________________________________
Abstract

At the very simplest level, Radio
Frequency Identification (RFID) technologies
allow the transmission of a unique serial number
wirelessly, using radio waves. The two key parts
of the system that are needed to do this are the
RFID tag and the reader; attaching an RFID tag
to a physical object allows the object to be seen
and monitored by existing computer networks
and back-office administration systems. This paper covers the design and optimization of antennas for RFID tags at UHF and microwave frequencies. The design focuses on the specific characteristics of RFID applications and on current developments in RFID antennas that meet the objective of size reduction; advanced design techniques for size reduction, such as the meander line structure, are discussed with examples. The basic need in RFID is to miniaturize the size of the tag, which contains the antenna and an IC.

Keywords
Microstrip Patch Antenna, RFID, Meander Line Structure.

I. INTRODUCTION

In the recent age of advanced communication applications, greater interest is shown in printed electronics for low-cost wireless sensors, RFID and ambient intelligence [1]. Radio frequency identification has become very popular in many service industries, distribution logistics, manufacturing companies and goods flow systems. In these systems the antenna is the main device for RFID readers, and it is required to develop antennas that are capable of maintaining high performance over a large spectrum of frequencies [2]. The printing process is a standard process in the packaging industry, and electronic labels in the form of printed RFID tags are very much desired; much effort is undertaken to reach the expected standards for patch antennas and tags. This is basically due to multiple services carrying large amounts of information, which need very high bandwidth. This technological trend has brought changes in antenna design, and one such design is the microstrip (patch) antenna [2].
Presently micro strip patch antennas are popular
in many wireless applications due to their
advantages when compared with conventional
microwave antennas. The patch antenna geometry is lightweight, low volume and has a thin-profile configuration. This offers low cost and is amenable to mass production. Patch antennas
can easily be fabricated for circular or dual
polarization and very easily fabricated on
microwave IC with feed and matching network.
These can be fabricated using modern day
printing circuit board technology, compatible with
MMICs, and have the ability to conform to planar
and non-planar surfaces [3]. Owing to all of the above characteristics, microstrip antennas are very useful in radio frequency identification (RFID) applications.
One of the main characteristics of patch antennas
is their inherently narrowband performance due
to its resonant nature, which helps to get a sharp
resonant frequency [4].

II. PROPOSED GEOMETRIES

For radio communication system applications, omni-directional antenna radiation is preferred when the coverage area is direction free and a signal is needed in every direction, as is the case for RFID. Antenna performance at UHF and microwave frequencies is dependent on the substrate, i.e., its electromagnetic properties (conductivity, permittivity and permeability). It also depends on the conductive surface geometry. In this paper our main concern is to change the conductive surface geometry dimensions and then visualize the effect on frequency. In the proposed design we considered a meander line structure to obtain better performance with a compact design having greater efficiency. An omni-directional pattern can
be accomplished by a small loop antenna whose
circumference may be less than one-tenth of
wavelength due to its inherent property but due
to small input resistance it is very difficult to
realize. This has led to array antennas with
complex feed systems.


Fig.1(a). Proposed Geometry (3.5 steps)



Fig. 1(b). Full Step Dimension


Fig. 1(c). half Step Dimension

In figure 1 the probe-fed meander line RFID antenna with a meandered structure of a finite-width strip is shown. With this design we get resonance with a much more compact structure, because a large electrical length is folded into a small planar structure along which the waves travel, producing the radiated field of the antenna [5]. For the proposed designs with variations, we consider the following parameters:

Height of substrate: 0.05 mm (2 mil)
Dielectric constant of substrate (PET): 3.50
Loss tangent: 0.017

In the proposed designs of the meander line antenna all dimensions are indicated in mm. Figure 1(a) shows the proposed compact geometry of the meander line structure with 3.5 steps in the centre, fig. 1(b) shows the full step dimension for the proposed design, and figure 1(c) shows the half step dimension.
design with meander structure has been
fabricated and tested, which has given response in
accordance to available theoretical standards and
conventions. The meander line design has further
reduced the size of antenna for readers.

III. RESULTS

For proposed design given in previous
figures the frequency versus return loss graphs
are shown in this section with various changes
and their outcome. In figure 2, graph has been
plotted between frequency and return loss for 3.5
step dimension. It is clear from the graph that it
has resonated 700 MHz. After achieving the graph
for 3.5 step dimension the variation in steps has
been done and analysed for changes in resonance.



Fig 2 Frequency vs. Return loss [S11]

To analyse the effect of step changes in
proposed meander line design the step changes in
geometry have been shown in figures 3(a), 4(a),
5(a) 6(a) and 7(a) for geometry steps 3.0, 2.5, 2.0,
1.5 and 1.0 respectively. While the effect of
geometry step changes have been shown in
frequency graphs in figures 3(b), 4(b), 5(b), 6(b)
and 7(b). The specific meander line designs with their return loss graphs are also shown in figs. 8 and 9. The return loss graphs are clearly shown for 865 and 915 MHz, which are the desired frequencies of the RFID tag. The effect of changes in geometry steps and their corresponding resonance frequencies is also shown clearly in table 1. This will help in proposing new designs with a shift in frequency for some specific features.

Table No. 1: Meander line step dependency

Number of steps    Frequency (MHz)
3.5                700
3.0                800
2.5                900
2.0                1000
1.5                1150



Fig. No. 3(a) 3.0 step geometry

Fig. No. 3(b) 3.0 Freq vs Return loss [S11]


Fig. No. 4(a) 2.5 step geometry

Fig. No. 4(b) 2.5 Freq vs Return loss [S11]



Fig. 5(a) 2.0 step geometry

Fig. No. 5(b) 2.0 Freq vs Return loss [S11]


Fig. No. 6(a) 1.5 step geometry

Fig. No. 6(b) 1.5 Freq vs Return loss [S11]




Fig. 7(a) Geometry for Frequency 865 MHz

Fig. No. 7(b) Freq vs Return loss [S11]



Fig. 8(a). Geometry for Frequency 915 MHz


Fig. 8(b). Frequency vs. Return loss [S11]




IV. CONCLUSION

Radio frequency identification is a rapidly developing technology for automatic identification of objects. In this paper, we presented an overview of antenna design for passive UHF RFID tags and showed the effect of geometry step changes on the resonance frequency of the antenna. The proposed design has a simple structure and can be constructed at a very low cost. In addition, good
antenna gain and radiation patterns have also
been obtained. We presented a meander line
antenna design for standard frequencies of RFID
with analysis results after fabrication. The design
was simulated in IE3D v.14.1 and fabricated. The
results have justified meander line applications
with optimal simulation results.

V. REFERENCES

[1] H-E. Nilsson, J. Siden, T. Olsson, P. Jonsson and A. Koptioug, Evaluation of Printed Patch Antennas for Robust Microwave RFID Tags, IET Microw. Antennas Propag., 2007, 1, (3), pp. 776-781.

[2] C. Balanis, Antenna Theory: Analysis and Design, 2nd ed., John Wiley & Sons, Inc., 1997.

[3] J. R. James and P. S. Hall, Handbook of Microstrip Antennas, Peter Peregrinus Ltd., IEE, 1989.

[4] Smail Tedjini, Tan-Phu Vuong and Vincent Beroulle, Antennas for RFID Tags, Joint sOc-EUSAI Conference, pp. 19-22, Oct. 2005.

[5] K. V. Seshagiri Rao, Pavel V. Nikitin and Sander F. Lam, Antenna Design for UHF RFID Tags: A Review and a Practical Application, IEEE Trans., Vol. 53, No. 12, pp. 3870-3876, Dec. 2005.













EXACT SENSITIVE KNOWLEDGE HIDING THROUGH DATABASE EXTENSION
Muthukarthick B, Kesavan R (III Year)
Department of Information Technology
Velammal College of Engineering & Technology, Madurai
bmuthukarthik@gmail.com, Mob - +919994276153
________________________________________________________________________________________________________________________
ABSTRACT
Sharing data among organizations often
leads to mutual benefit. Recent technology in data
mining has enabled efficient extraction of
knowledge from large databases. This, however,
increases risks of disclosing the sensitive knowledge
when the database is released to other parties. To
address this privacy issue, one may sanitize the
original database so that the sensitive knowledge is
hidden. Sensitive knowledge hiding in large
transactional databases is one of the major goals of
privacy preserving data mining. A novel approach
that performs sensitive frequent item set hiding by
applying an extension to the original database was
proposed. The extended portion of data set
contains transactions that lower the importance of
the sensitive patterns, while minimally affecting the
importance of the non-sensitive ones. We present
the border revision to identify the revised border
for the sanitized database and then we compute the
minimal size for the extension. The hiding process
involves the construction of a Constraints
Satisfaction Problem, by using item sets of revised
borders and its solution through Binary Integer
Programming.

KEYWORDS
Privacy preserving data mining, knowledge
hiding, and association rule mining, binary integer
programming.
I. INTRODUCTION
A subfield of privacy preserving data
mining is knowledge hiding. The paper presents
a novel approach that strategically performs
sensitive frequent itemset hiding based on a new
notion of hybrid database generation. This
approach broadens the regular process of data
sanitization by applying an extension to the
original database instead of either modifying
existing transactions, or rebuilding the dataset
from scratch. The extended portion of the dataset
contains a set of carefully crafted transactions
that achieve to lower the importance of the
sensitive patterns to a degree that they become
uninteresting from the perspective of the data
mining algorithm, while minimally affecting the
importance of the nonsensitive ones. The hiding
process is guided by the need to maximize the
data utility of the sanitized database by
introducing the least possible amount of side-
effects, such as (i) the hiding of non-sensitive
patterns, or (ii) the production of frequent
patterns that were not existent in the initial
dataset. The released database, which consists of
the initial part (original database) and the
extended part (database extension), can
guarantee the protection of the sensitive
knowledge, when mined at the same or higher
support as the one used in the original database.
The approach introduced in this paper is exact in nature. However, when an exact solution is impossible, the algorithm identifies an
approximate solution that is close to the optimal
one. To accomplish the hiding task, the proposed
approach administers the sanitization part by
formulating a Constraint Satisfaction Problem
(CSP) and by solving it through Binary Integer
Programming (BIP).
II. KNOWLEDGE HIDING FORMULATION
This section provides the necessary background
regarding sensitive itemset hiding and sets out
the problem at hand, as well as the proposed
hiding methodology.

A. Frequent Itemset:
Let I = {i1, i2, ..., iM} be a finite set of literals, called items, where M denotes the cardinality of the set. Any subset I ⊆ I is an item set over I. A transaction T over I is a pair T = (tid, I), where I is the item set and tid is a unique identifier, used to distinguish among transactions that correspond to the same item set. A transaction database D = {T1, T2, ..., TN} over I is an N x M table consisting of N transactions over I carrying different identifiers, where entry Tnm = 1 if and only if the mth item (m ∈ [1, M]) appears in the nth transaction (n ∈ [1, N]). Otherwise, Tnm = 0. A transaction T = (tid, J) supports an item set I over I if I ⊆ J. Let S be a set of items; the notation p(S) denotes the power set of S, which is the set of all subsets of S.
Given an item set I over I in D, sup(I, D) denotes the number of transactions T ∈ D that

support I and freq (I, D) denotes the fraction of
transactions in D that support I. An item set I is
called large or frequent in database D if and only if
its frequency in D is at least equal to a minimum
threshold mfreq. A hybrid of the Apriori and FP-tree algorithms is proposed to be used to find an optimized set of frequent item sets.
1) The Apriori Algorithm - Finding Frequent
Itemset Using Candidate Generation:
Apriori is an influential algorithm for mining frequent itemsets for Boolean association rules. Apriori employs an iterative approach known as a level-wise search, where k-itemsets are used to explore (k+1)-itemsets. First, the set of frequent 1-itemsets is found. This set is denoted L1. L1 is used to find L2, the set of frequent 2-itemsets, which is used to find L3, and so on, until no more frequent k-itemsets can be found. The finding of each Lk requires one full scan of the database.

To improve the efficiency of the level-wise
generation of frequent itemsets, an important
property called the Apriori property, i.e., all
nonempty subsets of a frequent itemset must also
be frequent, is used to reduce the search space. A
two-step process of join and prune actions is used to find Lk from Lk-1, as sketched below.
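A minimal sketch of the level-wise search (not the authors' implementation; transactions are assumed to be plain Python sets):

```python
from itertools import combinations

def apriori(transactions, min_sup):
    """Level-wise Apriori: returns {frozenset(itemset): support}."""
    items = {i for t in transactions for i in t}
    Lk = {frozenset([i]): s for i in items
          if (s := sum(1 for t in transactions if i in t)) >= min_sup}
    frequent = dict(Lk)
    k = 2
    while Lk:
        # join step: candidates of size k from frequent (k-1)-itemsets
        candidates = {a | b for a in Lk for b in Lk if len(a | b) == k}
        # prune step: every (k-1)-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in Lk for s in combinations(c, k - 1))}
        Lk = {c: s for c in candidates
              if (s := sum(1 for t in transactions if c <= t)) >= min_sup}
        frequent.update(Lk)
        k += 1
    return frequent

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(apriori(db, min_sup=3))   # singletons and pairs with their supports
```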

2) FP-Growth Algorithm-Mining Frequent
patterns without candidate generation:

Frequent pattern growth, or simply FP-growth, adopts a divide-and-conquer strategy as follows: compress the database of frequent items into a frequent pattern tree, or FP-tree, while retaining the itemset association information; then divide this compressed database into a set of conditional databases, each associated with one frequent item, and mine each such database separately.
Major steps to mine FP-tree

1) Construct conditional pattern base for
each node in the FP-tree

2) Construct conditional FP-tree from each
conditional pattern-base

3) Recursively mine conditional FP-trees and
grow frequent patterns obtained so far

If the conditional FP-tree contains a single path, simply enumerate all the patterns. A simplified pattern-growth sketch is given below.
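The following minimal sketch mines conditional (projected) databases recursively in the pattern-growth style; for clarity it works directly on projected transaction lists and omits the FP-tree compression itself (an assumption of this sketch, not the paper's implementation):

```python
from collections import Counter

def pattern_growth(transactions, min_sup, prefix=frozenset()):
    """Recursively mine frequent itemsets by projecting the database on each
    frequent item (conditional databases), growing patterns from `prefix`."""
    counts = Counter(i for t in transactions for i in set(t))
    frequent = {}
    for item, sup in counts.items():
        if sup < min_sup:
            continue
        pattern = prefix | {item}
        frequent[pattern] = sup
        # conditional database: transactions containing `item`, restricted to
        # items that come after it in a fixed order (avoids duplicate patterns)
        conditional = [{j for j in t if j > item} for t in transactions if item in t]
        frequent.update(pattern_growth(conditional, min_sup, pattern))
    return frequent

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(pattern_growth(db, min_sup=3))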
B .Hiding methodology
To properly introduce the hiding methodology,
one needs to consider the existence of three
databases, all depicted in binary format. They are
defined as follows:
Database DO is the original transaction database that, when mined at a certain support threshold msup, leads to the disclosure of some sensitive knowledge in the form of sensitive frequent patterns. This sensitive knowledge needs to be protected.
Database DX is a minimal extension of DO that is created by the hiding algorithm during the sanitization process, in order to facilitate knowledge hiding.
Database D is the union of database DO and the applied extension DX and corresponds to the sanitized outcome that can be safely released.

TABLE 1
Sample: Sanitized Database D as a Mixture of the
Original Database DO and the Applied Extension
DX

Table 2
Sample: Frequent Item Sets for DO and DX at msup
= 3 (for table 1)


III. HYBRID SOLUTION METHODOLOGY
A. Computation of size of database extension

Database DO is extended by DX to
construct database D. An initial and very
important step in the hiding process is the
computation of the size of DX. A lower bound on
this value can be established based on the
sensitive item set in S, which has the highest
support. The rationale here is given as follows: by
identifying the sensitive item set with the highest
support, one can safely decide upon the minimum
number of transactions that must not support this
item set in DX, so that it becomes infrequent in D.
The lower bound Q is

Q = ⌊ sup(I, DO) / mfreq ⌋ − N + 1        ... (1)

where I is the sensitive item set in S with the highest support.

Equation (1) provides the absolute minimum
number of transactions that need to be added in
DX, to allow for the proper hiding of the sensitive
item sets of DO. However, this
lower bound can, under certain circumstances, be
insufficient to allow for the identification of an
exact solution, even if one exists. To circumvent
this problem, one needs to expand the size Q of DX
as determined by (1), by a certain number of
transactions. A threshold, called safety margin
(denoted hereon as SM), is incorporated for this
purpose. Safety margins can be either predefined
or be computed dynamically, based on particular
properties of database DO and / or other
parameters regarding the hiding process.
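A small helper for this step (a sketch based on the reconstruction of equation (1) above and the support definition of Section II; the safety margin value passed in is an arbitrary assumption):

```python
import math

def support(itemset, transactions):
    """sup(I, D): number of transactions that contain every item of `itemset`."""
    return sum(1 for t in transactions if set(itemset) <= set(t))

def extension_size(sensitive_itemsets, transactions, mfreq, safety_margin=0):
    """Lower bound Q of equation (1), based on the sensitive item set with the
    highest support, optionally enlarged by a safety margin SM."""
    N = len(transactions)
    max_sup = max(support(s, transactions) for s in sensitive_itemsets)
    Q = math.floor(max_sup / mfreq) - N + 1
    return max(Q, 0) + safety_margin
```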
B. Exact and Ideal Solutions

Definition 1 (feasible/exact/approximate
solution).
A solution to the hiding of the sensitive
knowledge in DO is considered as feasible if it
achieves to hide the sensitive patterns. Any
feasible solution, introducing no side effects in the
hiding process, is called exact. Finally, any non
exact feasible solution is called approximate.

Definition 2 (database quality).
Given the sanitized database D, its original
version DO, and the produced extension DX , the
quality of database D is measured both in the size
of DX and in the number of binary variables set to
1 in the transactions of DX (i.e., the distance
metric). In both cases, lower values correspond to
better solutions.

Definition 3 (ideal solution).
A solution to the hiding of the sensitive
item sets is considered as ideal if it has the
minimum distance among all the existing exact
solutions and is obtained through the minimum
expansion of DX. In that sense, ideal is a solution
that is both minimal (with respect to distance and
size of extension) and exact.
C. Border Revision
The rationale behind this process is that
hiding of a set of item sets corresponds to a
movement of the original borderline in the lattice
that separates the frequent item sets from their
infrequent counterparts , such that the sensitive
item sets lie below the revised borderline. There
are four possible scenarios involving the status of
each item set I prior and after the application of
border revision:
C1 :
Item set I was frequent in DO and remains
frequent in D.
C2 :
Item set I was infrequent in DO and is infrequent
in D.
C3 :
Item set I was frequent in DO and became
infrequent in D.
C4 :
Item set I was infrequent in DO and became
frequent in D.
Since the borders are revised to
accommodate for an exact solution, the revised
hyper plane is designed to be ideal in the sense
that it excludes only the sensitive item sets and
their supersets from the set of frequent patterns
in D, leaving the rest of the item sets in their
previous status as in database DO.

The first step in the hiding methodology rests on the identification of the revised borders for D. The hiding algorithm relies on both the revised positive and the revised negative border, denoted as Bd+(F'D) and Bd-(F'D), respectively. After identifying the new (ideal) borders, the hiding process has to perform all the required minimal adjustments of the transactions in DX to enforce the existence of the new borderline in the resulting database.


Fig. 1 A sample item set lattice demonstrating (a) the original border and the sensitive item sets, and (b) the revised border, for Table 1

D. Problem Size Reduction
To enforce the computed revised border and
identify the exact hiding solution, a mechanism is
needed to regulate the status (frequent versus
infrequent) of all the item sets in D. Let C be the
minimal set of border item sets used to regulate
the values of the various uqm variables in DX.
Moreover, suppose that I ∈ C is an item set whose behavior we want to regulate in D. Then, item set I will be frequent in D if and only if sup(I, DO) + sup(I, DX) ≥ mfreq x (N + Q), or equivalently if

sup(I, DO) + Σ (q = 1 to Q) Π (im ∈ I) uqm ≥ mfreq x (N + Q)        ... (3)

and infrequent in D if and only if

sup(I, DO) + Σ (q = 1 to Q) Π (im ∈ I) uqm < mfreq x (N + Q)        ... (4)

Inequality (3) corresponds to the minimum
number of times that an item set I has to appear in
the extension DX to remain frequent in D. On the
other hand, (4) provides the maximum number of
times that an item set I has to appear in DX to be
infrequent in database D. To identify an exact
solution to the hiding problem, every possible item set in P, according to its position in the lattice with respect to the revised border, must satisfy either (3) or (4). However, the complexity of solving the entire system of the 2^M − 1 inequalities is well known to be NP-hard.
Therefore, one should restrict the problem to
capture only a small subset of these inequalities,
thus leading to a problem size that is
computationally manageable. The proposed
problem formulation achieves this by reducing
the number of the participating inequalities that
need to be satisfied. Even more, by carefully
selecting the item sets of set C, the hiding
algorithm ensures that the exact same solution to
the one of solving the entire system of inequalities
is attained. This is accomplished by exploiting
cover relations existing among the item sets in the
lattice.

Set C is chosen appropriately to consist of all the item sets of the revised border. The proposed hiding algorithm is capable of ensuring that if (3) and (4) are satisfied for all the item sets in C, then the produced solution is exact and is identical to the solution involving the whole system of the 2^M − 1 inequalities. Cover relations governing the various item sets in the lattice of DO ensure that the formulated set of item sets C has an identical solution to the one obtained by solving the system of all 2^M − 1 inequalities for D.
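To make the constraint formulation concrete, the toy sketch below exhaustively searches over the binary variables uqm of a very small extension DX, keeping the assignment that satisfies (4) for the sensitive border item sets and (3) for the non-sensitive ones while minimizing the number of items set to 1. It is a brute-force stand-in for the BIP solver, for illustration only; all inputs and names are assumptions of the sketch:

```python
from itertools import product

def solve_extension(items, Q, N, mfreq, sup, sensitive, keep_frequent):
    """Brute-force search for the Q x M binary matrix DX (variables uqm).

    sup: dict itemset -> support in DO
    sensitive: border item sets that must satisfy (4) (become infrequent in D)
    keep_frequent: border item sets that must satisfy (3) (stay frequent in D)"""
    threshold = mfreq * (N + Q)
    best = None
    for bits in product((0, 1), repeat=Q * len(items)):
        dx = [set(i for i, b in zip(items, bits[q * len(items):(q + 1) * len(items)]) if b)
              for q in range(Q)]
        sup_dx = lambda s: sum(1 for t in dx if set(s) <= t)
        if any(sup[s] + sup_dx(s) >= threshold for s in sensitive):
            continue                      # (4) violated for a sensitive item set
        if any(sup[s] + sup_dx(s) < threshold for s in keep_frequent):
            continue                      # (3) violated for a non-sensitive item set
        if best is None or sum(bits) < sum(best):
            best = bits                   # prefer fewer 1s (smaller distance)
    return best
```

In practice the same constraints are encoded as a Binary Integer Program and handed to an optimization solver rather than enumerated exhaustively.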


Cover relations exist between the item sets of Bd+(F'D) and those of F'D. In the same manner, the item sets of Bd-(F'D) are generalized covers for all the item sets of P \ (Bd+(F'D) ∪ {}). Therefore, the item sets of the positive and the negative borders cover all item sets in P.

Optimal solution set C: The exact hiding solution, which is identical to the solution of the entire system of the 2^M − 1 inequalities, can be attained based on the item sets of the set

C = Bd+(F'D) ∪ Bd-(F'D)

Based on (7), the item sets of the revised borders Bd+(F'D) and Bd-(F'D) can be used to produce the corresponding inequalities, which will allow for an exact hiding solution for DO.
E. Handling of Suboptimality
Since an exact solution may not always be
feasible, the hiding algorithm should be capable of
identifying good approximate solutions. There are
two possible scenarios that may lead to
nonexistence of an exact solution. Under the first
scenario, DO itself does not allow for an optimal
solution due to the various supports of the
participating item sets. Under the second
scenario, database DO is capable of providing an
exact solution, but the size of the database
extension is insufficient to satisfy all the required
Inequalities of this solution. To tackle the first
case, the hiding algorithm assigns different
degrees of importance to different inequalities. To
be more precise, while it is crucial to ensure that (4) holds for all sensitive item sets so that they are properly protected from disclosure in D, satisfaction of (3) for an item set is desirable only insofar as it ensures the minimal possible impact of the sanitization process on DO. This
inherent difference in the significance of the two
inequalities, along with the fact that solving the
system of all inequalities of the form (4) always
leads to a feasible solution (i.e., for any database
DO), allows the relaxation of the problem, when
needed, and the identification of a good
approximate solution.

To overcome the second issue, the hiding
algorithm incorporates the use of a safety margin
threshold, which produces a further expansion of
DX by a certain number of transactions. These
transactions must be added to the ones computed
by using (1). The introduction of a safety margin
can be justified as follows: Since (1) provides the
lower bound on the size of database DX , it is
possible that the artificially created transactions
are too few to accommodate for the proper hiding
of knowledge. This situation may occur due to
conflicting constraints imposed by the various
item sets regarding their status in D. These
constraints require more transactions (or to be
more precise, more item modifications) in order
to be met. Thus, a proper safety margin will allow
the algorithm to identify an exact solution if such
a solution exists. The additional extension of DX,
due to the incorporation of the safety margin, can
be restricted to the necessary degree. A portion of
transactions in DX is selected and removed at a
later point, thus reducing its size and allowing an
exact solution. Therefore, the only side effect of
the use of the safety margin in the hiding process
is inflation in the number of constraints and
associated binary variables in the problem
formulation, leading to a minuscule overhead in
the runtime of the hiding algorithm.
F. Formulation and Solution of the CSP
A CSP is defined by a set of variables and a
set of constraints, where each variable has a
nonempty domain of potential values. The
constraints involve a subset of the variables and
specify the allowable combinations of values that
these variables can attain. Since in this work all
variables involved are binary in nature, the
produced CSP is solved by using a technique
called BIP that transforms it to an optimization
problem. To avoid the high degree of constraints,
the application of a Constraints Degree Reduction
(CDR) approach is essential. On the other hand,
the resulting inequalities are simple in nature and allow for fast solutions, thus leading to an efficient solution of the entire CSP. The proposed CSP formulation is shown in Fig. 2.



G. Validity of Transactions
The incorporation of the safety margin threshold in the hiding process may lead to an unnecessary extension of DX. It is possible to identify and remove the extra portion of DX that is not needed, thus minimizing the size of database D to the necessary limit. To achieve that, one needs to rely on the notion of null transactions appearing in database DX. A transaction Tq is defined as null or empty if it does not support any valid item set in the lattice; null transactions do not support any pattern from P \ {}.

Apart from the lack of providing any
useful information, null transactions are easily
identifiable, thus produce a privacy breach in the
hiding methodology. They may exist due to two
reasons: 1) an unnecessarily large safety margin
or 2) a large value of Q essential for proper hiding.
In the first case, these transactions need to be
removed from DX, while in the second case the
null transactions need to be validated, since Q
denotes the lower bound in the number of
transactions to ensure proper hiding.

After solving the CSP, all the null transactions appearing in DX are identified. Suppose that Qinv such transactions exist. The size of database DX will then equal the value of Q plus the safety margin SM. This means that the number of valid transactions in DX will be Qval = Q + SM − Qinv. To ensure the minimum size of DX, the hiding algorithm keeps only k null transactions, such that

k = max(Q − Qval, 0) = max(Qinv − SM, 0).

As a second step, the hiding algorithm
needs to ensure that the k empty transactions that
remain in DX become valid prior to releasing database D to the public. A heuristic is applied for this purpose, which effectively replaces null transactions of DX with transactions supporting item sets of the revised positive border. After solving the CSP in Fig. 2, the outcome is examined to identify null transactions. Then, the null transactions are replaced with valid ones, supporting item sets of Bd+(F'D).
IV. EXPERIMENTAL EVALUATION
The algorithm was tested on different
datasets using different parameters such as
minimum support threshold and number/size of
sensitive itemsets to hide. The thresholds of
minimum support were properly set to ensure an
adequate amount of frequent itemsets, among
which a set of sensitive itemsets were randomly
selected. We compare the solutions of the hybrid algorithm against three state-of-the-art approaches: the BBA, the Max-Min 2 algorithm, and the inline algorithm, in terms of the side effects introduced by the hiding process. The hybrid algorithm
consistently outperforms the three other
schemes, with the inline approach being the
second best. An interesting insight from the
conducted experiments is the fact that the hybrid
approach, when compared to the inline algorithm
and the heuristic approaches can better preserve
the quality of the border and produce superior
solutions. Indeed, the hybrid approach introduces
the least amount of side effects among the four
tested algorithms.
V. CONCLUSION

A novel, exact border-based hybrid
approach to sensitive frequent item set hiding,
through the introduction of a minimal extension
to the original database was presented. A hybrid approach combining Apriori and FP-tree was used to find an optimized set of frequent item sets. This methodology is capable of identifying an ideal solution whenever one exists, or of approximating the exact solution otherwise, and the solutions it produces are of higher quality than those of previous approaches.
REFERENCES
[1] Aris Gkoulalas-Divanis and Vassilios S. Verykios, Exact Knowledge Hiding through Database Extension, IEEE Transactions on Knowledge and Data Engineering, Vol. 21, No. 5, May 2009.
[2] Gkoulalas-Divanis, A. and Verykios, V.S., An Integer Programming Approach for Frequent Itemset Hiding, Proc. ACM Conf. Information and Knowledge Management (CIKM 06), pp. 748-757, Nov. 2006.
[3] Oliveira, S.R.M. and Zaiane, O.R., Protecting Sensitive Knowledge by Data Sanitization, Proc. Third IEEE Intl Conf. Data Mining (ICDM 03), pp. 211-218, 2003.
[4] Sun, X. and Yu, P.S., A Border-Based Approach for Hiding Sensitive Frequent Itemsets, Proc. Fifth IEEE Intl Conf. Data Mining (ICDM 05), pp. 426-433, 2005.
[5] Verykios, V.S., Elmagarmid, A.K., Bertino, E., Saygin, Y. and Dasseni, E., Association Rule Hiding, IEEE Trans. Knowledge and Data Eng., vol. 16, no. 4, pp. 434-447, Apr. 2004.











































































BDI AGENTS FOR INFORMATION FUSION IN WIRELESS SENSOR NETWORKS
Prashant Sangulagi A. V. Sutagundar
Department of Electronics and Communication Engineering
Basaveshwar Engineering College, Bagalkot-587102, INDIA
psangulgi@gmail.com, ashok_ec@yahoo.com
_____________________________________________________________________________________________________________________________
ABSTRACT
Information fusion is one of the major problems in wireless sensor networks, where data, images, audio and video can be passed through the WSN. Conventional agents do not achieve good throughput because they are not intelligent enough to act upon critical conditions such as sudden environmental changes. To overcome this problem, BDI (Belief-Desire-Intention) agents are used instead of conventional agents; BDI agents use their beliefs, desires and intentions to gather information from sensor nodes and send it to the sink node in time. BDI agents exhibit autonomy, social ability, reactiveness, proactivity and, above all, intelligence, and they use these characteristics to fuse information accurately and deliver the fused information to the sink node in time.
KEYWORDS

WSN, information fusion, intelligent agents, BDI
agents.

I. INTRODUCTION
Wireless sensor networks (WSN) have
gained much attention recently [1]. The sensor
networks can be used for various application
areas such as health, military, environmental
monitoring, home, etc. Usually a wireless sensor
network (WSN) is composed of a large number of
sensor nodes, which are densely deployed either
inside the phenomenon or very close to it. The
position of sensor nodes need not be engineered
or pre-determined. Another unique feature of
sensor networks is the cooperative effort of
sensor nodes. Sensor nodes use their processing
abilities to locally carry out simple computations
and transmit only the required and partially
processed data [2].
A WSN may be designed with different
objectives. It may also be designed to monitor an
environment for the occurrence of a set of
possible events, so that the proper action may be
taken whenever necessary. Information fusion
arises as a discipline that is concerned with how
data gathered by sensors can be processed to
increase the relevance of such a mass of data.
Information fusion can be defined as the
combination of multiple sources to obtain
improved information (cheaper, greater quality,
or greater relevance). Information fusion is
commonly used in detection and classification
tasks in different application domains, such as
robotics and military applications. Simple
aggregation techniques (e.g., maximum, minimum,
and average) have been used to reduce the overall
data traffic to save energy [3].
Information fusion can be used to compose the
complete view from the pieces provided by each
node. Redundancy makes the WSN less vulnerable
to failure of a single node, and overlapping
measurements can be fused to obtain more
accurate data. Information fusion can be used to
combine complementary data so the resultant
data allows inferences that might be not possible
to be obtained from the individual measurements
(e.g., angle and distance of an imminent threat can
be fused to obtain its position).
Agent technologies span a range of specific
techniques and algorithms for dealing with
interactions with others in dynamic and open
environments. These include issues such as
balancing reaction and deliberation in individual
agent architectures, learning from and about
other agents in the environment and user
preferences, finding ways to negotiate and
cooperate with agents and developing
appropriate means of forming and managing
coalitions.
BDI stands for (B)eliefs, (D)esires and
(I)ntentions, which are mental components
present in many agent architectures[15]. In short,
belief represents the agents knowledge, desire
represents the agents goals and intention lends
deliberation to the agent. The exact definition of
these components will vary from author to
author. One can expect to see different
interpretations of these mental components in
different applications. Further sections are II)
Information fusion WSN, III) Agent technology,

IV) BDI agent, V) related work and last
Conclusion.
II. INFORMATION FUSION IN WIRELESS
SENSOR NETWORK
WSNs are deployed in environments
where sensors can be exposed to conditions that
might interfere with the sensor readings or even
destroy the sensor nodes. As a result, sensor
measurements may be more imprecise than
expected, and the sensing coverage may be
reduced. A natural solution to overcome failures
and imprecise measurements is to use redundant
nodes that cooperate with each other to monitor
the environment. However, redundancy poses
scalability problems caused by potential packet
collisions and transmissions of redundant data. To
overcome such a problem, information fusion is
frequently used. Briefly, information fusion
comprises theories, algorithms, and tools used to
process several sources of information generating
an output that is, in some sense, better than the
individual sources. The proper meaning of "better" depends on the application. For WSNs, "better" has at least two meanings: cheaper and more accurate [4].
Information fusion can be categorized
based on several aspects. Relationships among the
input data may be used to segregate information
fusion into classes (e.g. cooperative, redundant,
and complementary data). Also, the abstraction
level of the manipulated data during the fusion
process (measurement, signal, feature, decision)
can be used to distinguish among fusion
processes. Another common classification
consists in making explicit the abstraction level of
the input and output of a fusion process.
Common classifications of information fusion are
explored in this section.

Figure1: Information fusion in WSN
Figure 1, adapted from [10], shows the types of
information fusion based on the relationship
among the sources. Complementary: when the
information provided by the sources represents
different portions of a broader scene, information
fusion can be applied to obtain a piece of
information that is more complete (broader).
Redundant: if two or more independent sources
provide the same piece of information, these
pieces can be fused to increase the associated
confidence. Cooperative: two independent sources
are cooperative when the information provided by
them is fused into new information (usually more
complex than the original data) that, from the
application perspective, better represents the
reality [4].
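The redundant case can be illustrated with a standard inverse-variance weighted average (named here explicitly because the text does not prescribe any particular fusion rule; the readings and variances below are made-up):

# Fuse overlapping readings of the same quantity from independent sensors;
# the fused estimate has lower variance than any single reading.
def fuse_redundant(readings):
    """readings: list of (value, variance) pairs."""
    weights = [1.0 / var for _, var in readings]
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / sum(weights)
    variance = 1.0 / sum(weights)
    return value, variance

print(fuse_redundant([(21.3, 0.4), (21.9, 0.9), (21.5, 0.5)]))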
III. AGENT TECHNOLOGY
Agents can be defined to be autonomous,
problem-solving computational entities capable of
effective operation in dynamic and open
environments. Agents are often deployed in
environments in which they interact, and may
cooperate, with other agents (including both
people and software) that have possibly
multiagent systems. The use of agent systems to
simulate real-world domains may provide
answers to complex physical or social problems
which would be otherwise unobtainable, as in the
modeling of the impacts of climate change on
various biological populations, or modeling the
impact of public policy options on social or
economic behavior.
An intelligent software agent is a computational
process with several characteristics: reactivity
(allowing agents to perceive and respond to a
changing environment) [17], social ability (by
which agents interact with other agents) [18], and
proactiveness (through which agents behave in a
goal-directed way) [19].
Agents can be distinguished from objects
(in the sense of object oriented software) in that
they are autonomous entities capable of exercising
choice over their actions and interactions. Agents
cannot, therefore, be directly invoked like objects.
However, they may be constructed using object
technology. Agents can be written in Java, Tcl, Perl
and XML languages [1]. An agent interpreter
depends on the type of agent script/language
used. An agent platform offers the following
services: creation of static and mobile agents,
transport for mobile agents, security,
communication messaging and persistence.

(a)Types of Agents
(1)Mobile agents
(2)Static agents
Mobile agents:
A mobile agent is a composition of computer
software and data which is able to migrate (move)
from one computer to another autonomously and
continue its execution on the destination
computer. A mobile agent is a type of software
agent with the features of autonomy, social ability,
learning and, most importantly, mobility. More
specifically, a mobile agent is a process that can
transport its state from one environment to
another, with its data intact, and is capable of
performing appropriately in the new environment.
Mobile agents decide when and where to move;
such movement has often evolved from RPC
methods. Just as a user directs an Internet
browser to "visit" a website (the browser merely
browser to "visit" a website (the browser merely
downloads a copy of the site or one version of it in
the case of dynamic web sites), similarly, a mobile
agent accomplishes a move through data
duplication. When a mobile agent decides to
move, it saves its own state, transports this saved
state to the new host, and resumes execution from
the saved state.
Static agents:
A Static (stationary) agent executes only on the
system where it begins execution. If it needs
information that is not on that system, or needs to
interact with an agent on a different system, it
typically uses a communication mechanism such
as remote procedure calling (RPC).
Three different types of agents are used in the
fusion process, as follows:
1) Surveillance-sensor agent: This type of agent
tracks all the objects and sends the data to a
fusion agent. It acquires the environment
information through a camera and performs the
local signal processing. The tasks carried out
are: detection of objects, data association, state
estimation, projection onto fusion coordinates and
communication with the fusion agent.
2) Fusion agent: It fuses the data received from
the surveillance-sensor agents, which arrives in
time-stamped FIPA ACL messages.
3) Interface agent: This agent receives the fused
data and shows it to the final user. It is also the
user interface of the surveillance application.
(b) Agent-based information fusion:
In this section, we describe information fusion
using agents. Agents are employed [6] in
peer-to-peer sensor networks to perform data
fusion for supporting situation awareness on the
digital battlefield. The work given in [7] applies
mobile agents to wireless sensor networks that
collect data and send them to the sink. Here the
fusion agent receives track information from the
sensor agents through a TCP/IP network using
FIPA ACL [5] messages and performs the fusion of
the data received.
IV. BDI AGENTS
The Belief-Desire-Intention (BDI)
software model is a software model developed for
programming intelligent agents. Although
superficially characterized by the implementation
of an agent's beliefs, desires and intentions, it
actually uses these concepts to solve a particular
problem in agent programming. In essence, it
provides a mechanism for separating the activity
of selecting a plan (from a plan library) from the
execution of currently active plans. Consequently,
BDI agents are able to balance the time spent on
deliberating about plans (choosing what to do)
and executing those plans (doing it). A third
activity, creating the plans in the first place
(planning), is not within the scope of the model
and is left to the system designer and
programmer. The Belief-Desire-Intention (BDI)
model has proven to be a dominant view in
contemporary philosophy of human mind and
action. We use BDI as a tool to analyze agents'
environments, goals, and behaviors. In BDI agents,
these components have the following meaning.
Beliefs: Beliefs represent the
informational state of the agent, in other words its
beliefs about the world (including itself and other
agents). Beliefs can also include inference rules,
allowing forward chaining to lead to new beliefs.
Using the term "belief" rather than "knowledge"
recognizes that what an agent believes may not
necessarily be true (and in fact may change in the
future).
Desires: Desires represent the
motivational state of the agent. They represent
objectives or situations that the agent would like
to accomplish or bring about. Examples of desires
might be: find the best price, go to the party, or
become rich.
Goals: A goal is a desire that has been
adopted for active pursuit by the agent. Usage of
the term "goal" adds the further restriction that the
set of active desires must be consistent.
Intentions: Intentions represent the
deliberative state of the agent - what the agent has
chosen to do. Intentions are desires to which the
agent has, to some extent, committed. In
implemented systems, this means the agent has
begun executing a plan.
Plans: Plans are sequences of actions that an
agent can perform to achieve one or more of its
intentions. Plans may include other plans: my plan
to play cricket and score good runs may also
include a plan to find a good wicket to bat on.
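As a rough illustration of how these components interact, the following sketch shows a heavily simplified BDI deliberation loop in Python; the plan library, the belief keys and the fusion action are hypothetical placeholders, not any particular agent platform or the architecture of Figure 2.

# Simplified BDI loop: plan *selection* (deliberate) is kept separate from
# plan *execution* (execute_one_step), as described above.
class BDIAgent:
    def __init__(self, plan_library):
        self.beliefs = {}        # informational state: what the agent holds true
        self.desires = set()     # motivational state: goals it would like to achieve
        self.intentions = []     # deliberative state: plans it has committed to
        self.plans = plan_library

    def perceive(self, percept):
        self.beliefs.update(percept)              # belief revision from new data

    def deliberate(self):
        for goal in self.desires:                 # adopt applicable plans as intentions
            plan = self.plans.get(goal)
            if plan and plan["context"](self.beliefs):
                self.intentions.append(list(plan["body"]))
        self.desires.clear()

    def execute_one_step(self):
        if self.intentions:
            action = self.intentions[0].pop(0)    # run the next action of one intention
            action(self.beliefs)
            if not self.intentions[0]:
                self.intentions.pop(0)

# Hypothetical plan: when an event is believed to have occurred, fuse the readings.
plans = {"report_event": {
    "context": lambda b: b.get("event_detected", False),
    "body": [lambda b: b.update(fused=sum(b["readings"]) / len(b["readings"]))]}}
agent = BDIAgent(plans)
agent.perceive({"event_detected": True, "readings": [2.0, 2.4, 2.2]})
agent.desires.add("report_event")
agent.deliberate()
agent.execute_one_step()
print(agent.beliefs["fused"])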
Figure 2 shows the general BDI architecture and
how a BDI agent works when it gets information
from a sensor node.
The knowledge box is needed and is used by the
mobile agent to decide which route to select next.
The static agent constructs and maintains the
interest cache, the energy models of other static
agents, the self model and the forwarding table
(routing table).

General BDI architecture:

Figure2: General BDI agent Architecture

V. BDI AGENTS FOR INFORMATION FUSION
IN WSN
SYSTEM ENVIRONMENT:
BDI Agent based information fusion is
represented in Figure 3, where all the nodes are
randomly deployed. If an event occurs in the
environment, more than one node holds
information about it, so a BDI agent is generated
at a node to perform information fusion; every
node has the capability to generate a BDI agent.
The BDI agent then migrates from one node to
another, fuses all the information about the event
that occurred, gathers information about the
neighbour nodes, and updates its knowledge box;
it then moves to the next node where the event
occurred and continues its fusing process. While
routing the information from the event nodes to
the Sink Node (SN), it uses intermediate nodes,
which consumes some energy. If an intermediate
node is dead or the environmental conditions are
not suitable for routing, the BDI agent changes its
path and selects a suitable path to route the
information; this demonstrates the intelligence of
the BDI agent, as shown in Figure 3.


Figure3: Information fusion in WSN using BDI
agent
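A small sketch of this routing intelligence is given below; the node model and energy values are hypothetical, and the rule (skip dead nodes, prefer the highest residual energy) is only one plausible reading of the behaviour illustrated in Figure 3.

# Hypothetical knowledge box consulted by the migrating BDI agent to pick the
# next hop, skipping dead or unsuitable nodes.
def select_next_hop(knowledge_box, candidates):
    """knowledge_box: {node_id: {"alive": bool, "energy": float}}"""
    usable = [n for n in candidates
              if knowledge_box.get(n, {}).get("alive", False)]
    return max(usable, key=lambda n: knowledge_box[n]["energy"], default=None)

kb = {"n1": {"alive": False, "energy": 0.0},
      "n2": {"alive": True,  "energy": 0.35},
      "n3": {"alive": True,  "energy": 0.80}}
print(select_next_hop(kb, ["n1", "n2", "n3"]))   # -> "n3"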

RELATED WORKS:
(a) Fuzzy Decision Making through Energy-
aware and Utility Agents within Wireless
Sensor Networks:
This paper proposes that Multi-agent
Systems (MAS), through their intrinsically
distributed nature, offer a promising software
modeling and implementation framework for
Wireless Sensor Network (WSN) applications.
WSNs are characterized by limited resources from
a computational and energy perspective; in
addition, the integrity of the WSN coverage area
may be compromised over the duration of the
network's operational lifetime, as environmental
effects, amongst others, take their toll. Thus a
significant problem arises: the agent cannot easily
construct an accurate model of the prevailing
situation in order to make effective decisions
about future courses of action within these
constraints, and so the BDI architecture is used.
The BDI model represents an abstraction of
human deliberation and is based on a theory of
rational activity in the human cognition process.
In particular, the fundamental issue of belief
generation within WSN constraints using classical
reasoning augmented with a fuzzy component in a
hybrid fashion is explored in terms of energy-
awareness and utility.
Energy-aware and utility-based agents [8] have
been proposed to provide solutions to the
modeling of distributed intelligence for
resource-bounded sensors. The authors view the
matured BDI paradigm as an effective solution to
such pervasive issues as load balancing, routing
and distributed data processing.
In this paper a two-level BDI multi-agent
system is proposed; the higher-level MAS consists
of a number of WSN subsets. This MAS is
comprised of resource-rich members and may be
regarded as constituting a classic BDI MAS. Within
these MAS, information may be freely exchanged
without regard to cost constraints. Indeed,
decisions that affect the entire network may be
made at this level. The beliefs that inform these
decisions are those propagated from the lower-
level MAS, that is, the individual sensors, via their
BSs.
Belief Generation in a Fuzzy Context:
While data gathering is a general term for
collecting data from multiple sensors, the terms
data aggregation and data fusion both refer to the
analysis and interpretation of the data. Though
these terms are commonly used, there is not a
complete consensus as to their meaning [9]. Thus,
as far as this discussion is concerned, data
aggregation, data gathering and data fusion all
refer to the combination of multiple sensor data
into one representation or control action. When
BDI agents are considered in a WSN context, it can
be seen that in generating their beliefs, a BDI
agent is essentially engaging in an exercise of data
fusion or data aggregation.
A prerequisite to the successful deployment of
agents within WSNs is a capability to reason in a
distributed manner and with partial, noisy and
incomplete knowledge. In the case of BDI agents,
this has implications for the belief generation
process.
In brief, the authors describe the design and
operation of energy-aware and utility-based
agents. They focus in particular on the core issue
of belief generation within computationally
challenged environments, and outline
experimental work and initial results that support
the efficacy of the proposed approach.
Energy-aware, utility-based agents offer a hybrid
approach to deliberative reasoning by combining
fuzzy reasoning with classical BDI approaches.
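As a minimal sketch of the general idea only (the membership breakpoints below are made-up placeholders, not the authors' fuzzy component), a fuzzy membership function can turn a noisy sensor reading into a graded belief for a BDI agent to hold:

# Degree (0..1) to which a reading supports the belief "temperature is high";
# the 30/45 breakpoints are illustrative assumptions.
def membership_high_temperature(reading_c, low=30.0, high=45.0):
    if reading_c <= low:
        return 0.0
    if reading_c >= high:
        return 1.0
    return (reading_c - low) / (high - low)

beliefs = {"temperature_is_high": membership_high_temperature(39.0)}
print(beliefs)   # {'temperature_is_high': 0.6}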

(b)Information Fusion for Visual Reference
Resolution in Dynamic Situated Dialogue
Human-Robot Interaction (HRI) invariably
involves dialogue about objects in the
environment in which the agents are situated. The
paper focuses on the issue of resolving discourse
references to such visual objects. It addresses the
problem using strategies for intra-modal fusion
and inter-modal fusion, where inter-modal fusion
relates the use of object references across
different modalities, e.g. the resolution of an
exophoric linguistic reference against an object in
the robot's perceptual field. Within the framework,
inter-modal fusion results in the binding of
equivalence classes from different modalities. A
key element of the inter-modal fusion process is
the use of ontology-based mediation to provide a
mapping between conceptual systems, in order to
establish whether percepts from different
modalities can be related. Core to
these strategies are sensorimotoric coordination,
and ontology-based mediation between content in
different modalities. One of the main advantages
of this framework is that it provides a mechanism
for dealing with the temporal dimension of
situated reference. The approach has been fully
implemented, and is illustrated with several
working examples [11].
The BDI-based process is used as a mediator
between different subsystems. Here belief
provides a common ground between different
modalities, rather than being a layer on top of the
different modalities. Beliefs thus provide a means
for cross-modal information fusion, in its minimal
form by co-indexing references to information in
individual modalities [12]. The authors use a
bounding-box method to determine the region of
interest in the image for which a SIFT-based
model should be learned. A visual referent id is
created for the resulting model and stored in a
new visual equivalence class (EC) for the object.
The EC is provided with a structural description of
the object (box) based on what was said [13]. The
identifiers of the sighting and its visual EC are
then returned to BDI mediation. BDI mediation
creates a belief in which the dialogue and visual
ECs are connected, and informs the
communication subsystem that a visual model has
been successfully acquired so that the robot can
provide feedback.

(c) Extending BDI Multi-Agent Systems with
Situation Management:
In [14] the authors describe an extension of
the BDI (Belief, Desire, and Intention) agent model
that enables agent beliefs to be based on real-time
situations generated by a Situation Management
(SM) system. Situation management is intended
for application domains characterized by large
volumes of real-time events and complex domain
models, which require a combination of data
fusion, event correlation and semantic reasoning
in order to identify and assess the current context
and recommend actions. An SM system has several
advantages for multi-agent systems using BDI
agents: 1) because of the use of event correlation
and data fusion techniques in situation
management, agent platforms can support highly
reactive distributed applications; 2) the situation
manager provides a semantically rich
representation of the world and can dynamically
adapt its representation of situations over time.
The BDI agent model is a well-established
approach to designing deliberative agent systems.
The authors extend it by including agent beliefs
that are real-time complex situations generated
by a situation management system. This
integration produces SBBDI (Situation-Based BDI)
agents which can support highly reactive
applications and an enhanced representation of
agent beliefs.
There are several future research issues of SBBDI
agents that need to be addressed:
a. Effective situation specification languages and
methods that preserve the completeness and
correctness of situations.
b. Synergistic two-way communication between
the basic BDI agent model functions and the
situation management functions to increase the
overall effectiveness of the SBBDI agent.
c. Learning situations by the SBBDI agent system.

(d) Multi-Agents Supporting Reflection in a
Middleware for Mission-Driven Heterogeneous
Sensor Networks
This paper presents the concepts of a
middleware needed to address mission-driven
heterogeneous sensor networks deployed in
highly dynamic scenarios [16]. The emerging
applications using sensor network technologies
constitute a new trend requiring several different
devices to work together, partly autonomously.
However, the integration and coordination of
heterogeneous sensors in these emerging systems
is still a challenge, especially when the target
application scenario is susceptible to constant
change. Such systems must adapt themselves in
order to fulfill requirements that can also change
at runtime, so reflective behavior must be
provided for quick decisions. The paper presents a
reflective middleware that supports reflective
behaviors to address the adaptation needs of
heterogeneous sensor networks deployed in
dynamic scenarios. The middleware handles
users' requirements by representing them as
missions that the network must accomplish.
These missions are then translated into network
parameters and distributed over the network by
reasoning about the capabilities of the network
nodes and the environmental conditions. A
multiagent approach is proposed to perform this
initial reasoning as well as the adaptations needed
at runtime; here the BDI approach is used for
network-wide reasoning. The main additions
reported in this paper are the inclusion of agent
concepts in the simulator framework and an
interface with the Mission Specification Console.
Planning-Agent model:
The authors use different types of agents, such as
cognitive and reactive agents, to perform different
activities in the middleware, from the provisioning
of simple services to complex reasoning about the
network setup, with greater emphasis on cognitive
agents. The model used for the cognitive agents is
based on the model of mental attitudes known as
the BDI model (Beliefs-Desires-Intentions). The
BDI approach appears to suit the problem
addressed in this work well, as some decisions
that must be taken by the agents require cognitive
skills to judge whether certain actions are
adequate to achieve a desired result, based on
knowledge about conditions that may interfere
with the performance of those actions. The BDI
model presented in the paper focuses on sensor
network activities in which the network nodes do
not perform any action that changes the world
around them, which simplifies the model by
eliminating assumptions about this aspect; the
resulting proposal is simpler.
Architectural Structure:
The cognitive planning-agent architecture
described in the paper is based on the BDI
architecture and is shown in Figure 4.

Figure4: Planning-agent Internal Architectural
Structure
The authors also give a stepwise description of
the architecture: how the BDI agent updates its
beliefs according to its current intentions and how
it considers new plans to achieve its goals; the
architecture also includes an option generator, a
planning agent and filters. The authors then move
to multi-agent reasoning, in which the mapping
function is explained step by step.
(e) Analysis of Distributed Fusion Alternatives
in Coordinated Vision Agents:
This paper [20] details some technical
alternatives when building a coherent distributed
visual sensor network using the Multi-Agent
paradigm. The Multi-Agent paradigm fits well
within the visual sensor network architecture, and
one of the main advantages of using a visual
sensor network is the increase in spatial coverage.
In order to have a global view of the environment
under surveillance, the visual sensors must be
correctly deployed. The paper focuses in
particular on the problem of distributed data
fusion.
Three different data fusion coordination schemes
are proposed and experimental results of Passive
Fusion are presented and discussed. The main
contributions of this paper are twofold: the first is
to propose the use of the Multi-Agent paradigm as
the visual sensor architecture and to present
results from a real system; the second is to
propose the use of feedback information in the
visual sensors, called Active Fusion. The
experimental results show that the Multi-Agent
paradigm fits well within the visual sensor
network and provides a novel mechanism to
develop a real visual sensor network system.
Many Multi-Agent languages and
frameworks have been developed. The proposed
architecture is based on the open source
framework Jadex. Jadex is a Belief-Desire-
Intention (BDI) Multi- Agent model. The BDI
model provides a way to conceptualize the system
and structure its design.
The types of fusion considered are Passive Fusion
(each surveillance-sensor agent sends its tracks
to the fusion agent), Active Fusion (this fusion
process deals with data incest) and Peer-to-Peer
Fusion (the fusion is performed inside each
surveillance-sensor agent and then sent to the
fusion agent).
The authors give a brief explanation of passive
fusion through a simple experiment with two
conditions. In the normal condition, three
surveillance-sensor agents and one fusion agent
are used; each surveillance-sensor agent captures
the events happening in the room, collects the
information continuously, and sends it to the
fusion agent. In the abnormal condition, an error
occurs in one of the sensor agents; the fusion
agent cannot accept information from that sensor
agent, concludes that some change has occurred
in that particular area, and sends that information
to the sink node.

CONCLUSION

BDI agents use the belief-desire-intention
technique to overcome problems that arise when
simple agent technology is used in wireless sensor
networks composed of many distributed systems.
Such problems include agents that cannot react to
changing circumstances in time and, in certain
conditions, agents that need more execution time
than a time slice and therefore never get executed.
BDI agents also carry a knowledge base in which
they can store data and use it whenever required.
A strength of the BDI software model (in terms of
its research relevance) is the existence of logical
models through which it is possible to define and
reason about BDI agents. BDI agents are well
suited to fusion because they have a natural ability
to do several things at the same time: a BDI agent
is able to deliberate on new beliefs and plan future
actions while executing an existing intention, and
it is able to pursue several intentions at once.
All these characteristics of BDI agents help to
improve the fusion process in wireless sensor
networks, where information fusion is a major
challenge. We have seen how BDI agents work in a
WSN for fusing related information, by surveying
related works and examples.

REFERENCES

[1] A.V. Sutagundar, S. S. Manvi Agent Based
Approach to Information Fusion in Wireless
Sensor Networks IEEE Trans pp.1-7, 2008.
[2] Ian F. Akyildiz, Weilian Su, Yogesh
Sankarasubramaniam, and Erdal Cayirci A
Survey on Sensor Networks Georgia Institute of
Technology, IEEE Magazine, pp.102-114 July
2002.
[3] Pattem, S., Krishnamachari, B., and Govindan,
R. 2004. The impact of spatial correlation on
routing with compression in wireless sensor
networks. In Proceedings of IPSN'04, pp. 28-35.
[4] Manju k Information Fusion for Wireless
Sensor Networks department of computer
science Cochin University of science and
technology 2008.
[5]Federico Castanedo, Jesus Garca, Miguel A.
Patricio and Jos M. Molina A Multi-Agent
architecture to support active fusion in a VISUAL
Sensor network IEEE Trans 2008.
[6] Sajid Hussain, Abdul W. Martin, Hierarchical
cluster based routing in wireless sensor networks,
available at www.cs.virginia.edu/ipsn06/wip/
Hussain-1568986753.pdf.
[7] Available at http://www.ece.wisc.edu/ sensit.
[8] Shen S, O'Hare GMP (2007). Wireless Sensor
Networks, an Energy-Aware and Utility-Based BDI
Agent Approach. Int. J. Sensor Networks
2(3-4): 235-245.
[9] Kalpakis K, Dasgupta K, Namjoshi P (2003)
Maximum Lifetime Data Gathering and
Aggregation in Wireless Sensor Networks.
Computer Networks 42: 697-716.
[10] Elmenreich, W. 2002. Sensor fusion in
time-triggered systems. Ph.D. thesis, Institut für
Technische Informatik, Vienna University of
Technology, Vienna, Austria.
[11] Geert-Jan M. Kruijff, John D. Kelleher, and
Nick Hawes Information Fusion For Visual
Reference Resolution In Dynamic Situated
Dialogue.
[12] Gurevych, I., Porzel, R., Slinko, E., Pfleger, N.,
Alexandersson, J., and Merten, S. (2003). Less is
more: Using a single knowledge representation in
dialogue systems. In Proceedings of the HLT-
NAACL Workshop on Text Meaning, pp. 14-21,
Edmonton, Canada.
[13] Kruijff, G.-J. M., Kelleher, J., Berginc, G., and
Leonardis, A. (2006b).Structural descriptions in
human-assisted robot visual learning. In
Proceedings of the 1st Annual Conference on
Human-Robot Interaction (HRI06), Salt Lake City,
UT. pp 1-7.
[14] Buford, J.; Jakobson, G.; Lewis, Extending BDI
Multi-Agent Systems with Situation Management,
10-13 July 2006, pp. 1-7, Digital Object
Identifier 10.1109/ICIF.2006.301781.
[15] Bratman, M. E. Intention, Plans, and Practical
Reason.Cambridge, MA, 1987.
[16] Multi agent supporting reflection in a
middleware for mission-driven heterogeneous
sensor network
[17] E.Waltz and J. Llinas. Multisensor Data
Fusion. Artech House Inc, Norwood,
Massachussets, U.S, 1990.
[18] M. Wooldridge and N. Jennings, Intelligent
agents: Theory and practice, The knowledge
Engineering Review, 1995.
[19] F.Castanedo, M.A. Patricio, J. Garcia and J.M.
Molina. Extending surveillance systems
capabilities using BDI cooperative sensor agents,
Proceedings of the 4th ACM international
workshop on Video surveillance and sensor
networks, pp. 131-138, 2006.
[20]Federico Castanedo, Jesus Garca, Miguel A.
Patricio and Jose M. Molina Analysis of
Distributed Fusion Alternatives in
Coordinated Vision Agents.

H.264/AVC CABAC DECODER DESIGN USING VHDL
J. Vinodh¹, N. Vishnu Priya² (IV Year)
RMK Engineering College
vinodhjr@gmail.com¹, videnash@yahoo.in²

_____________________________________________________________________________________________________________________________
ABSTRACT
The H.264/AVC is the most recent standard of
video compression/decompression for future
broadband network. This standard was developed
through the Joint Video Team (JVT) from the ITU-T
Video Coding Experts Group and the ISO/IEC MPEG
standardization committee. In this project H.264
decoder functional block such as Context based
Binary arithmetic coding (CABAC) is designed using
VHDL. CABAC includes three basic building blocks
of context modeling, binary arithmetic coding and
inverse binarization. The compressed bit-stream
from the NAL unit is expanded by the CABAC
module to generate the various syntax elements,
and the basic arithmetic decoding circuit units are
designed to be shared efficiently by all syntax
elements.
KEYWORDS

MPEG-2, H.264, MPEG-4 Part 10, AVC, digital video
codec standard, Lossy compression, lossy transform
codecs, lossy predictive codecs

I. INTRODUCTION

H.264/AVC is the latest video
compression standard developed by ISO/IEC
Moving Picture Experts Group (MPEG) and ITU-T
Video Coding Experts Group (VCEG) for next-
generation multimedia coding applications. H.264
adopts many brand new technologies such as
variable block size motion estimation with
multiple reference frames, de-blocking filtering,
B-frame coding, context-adaptive entropy coding,
picture adaptive frame field (PAFF) coding,
macroblock (MB) adaptive frame field coding, and
8×8 transform in the newly developed H.264
high-profile (HP) specification for high definition
television (HDTV) applications.
Compared to the previous MPEG standards,
H.264 provides over two times higher
compression ratio under the same video coding
quality. However, the computational complexity
of H.264 video coding is much higher than those
of the previous MPEG standards.


Fig1: Baseline, Main and Extended profiles

There are two techniques adopted in H.264
entropy coding. One is context-based adaptive
variable length coding (CAVLC) for baseline
profile (BP). The other is context-based adaptive
binary arithmetic coding (CABAC) for main profile
(MP) and HP. Compared to the CAVLC, adopting
CABAC can save about 9-14% bit rate at the cost
of much higher computational complexity.
The rest of the paper is organized as follows: the
next section describes the problem statement;
Section III explains the CABAC decoder and its
different parts; Section IV provides simulation
and synthesis results for the CABAC decoder;
Section V concludes.
II. PROBLEM STATEMENT

Nowadays, a large number of consumer
products such as digital cameras, Personal Digital
Assistants, video telephony, portable DVD player
as well as storage, broadcast and streaming of
standard definition TV are common practice. All
those applications demand efficient management
of the large amount of video data. This motivated
a large body of research in industry as well as in
academia to develop advanced video coding
technology. H.264/AVC video coding standard
developed by the ITU-T/ISO/IEC is the latest
international standard developed by ITU-T Video
Coding Experts Group and the ISO/IEC Moving
Picture Experts Group.
The new standard provides gains in
compression efficiency of up to 50% over a wide
range of bit rates and video resolutions compared
with the former standards. On average, 30-40
cycles are needed to decode a single bin on a DSP.
That means that, for such a typical 4M bit stream,
on average about 1.5 × 100 × 30 = 4500 cycles are
needed simply to carry out the arithmetic
decoding task for one MB, while the cycles for
other controls are not counted. This speed is
unacceptable for real time applications, where 30
frames of D1 resolution are required to be
decoded within 1s with 100MHz clock, i.e. a MB
has to be decoded within at most 2000 cycles. So
hardware acceleration is necessary for a
commercially viable H.264/AVC based video
application, especially with increase in image size
and quality settings in the future. To speed up the
decoding process, multiplication-free logic is used
to calculate subinterval range.

III OVERVIEW OF CABAC DECODER

At first, it needs to re-initialize both the context
tables in the beginning of each slice and
probability model from bit-streams. Then, the
syntax elements (SEs) of each MB could be
decoded one by one. The Decode Core in Fig. 3
consists of three arithmetic decoding processes
defined in H.264 video coding standard.
According to our observation, the behaviors of
DecodeDecision and DecodeTerminate are almost
the same, except that DecodeTerminate applies
the fixed context index 276, which is used without
being updated. Therefore, we refer to the
combination of DecodeDecision and
DecodeTerminate as the decision decoding engine,
and use a bypass decoding engine to stand for
DecodeBypass in the rest of this paper. Only one of
them works at a time to generate a bin value. The
bin is the basic decoding unit of the SEs in CABAC.
After a bin is decoded, the binarization stage
checks whether the bins decoded so far form a
complete bin string. If not, the decoder keeps
decoding the next bin; otherwise, it prepares to
decode the first bin of the next SE.
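For illustration, the following is a simplified software sketch of the bypass decoding engine only; the variable names, initial values and the bit source are our own assumptions, the decision engine additionally needs the context-state update and renormalization that are omitted here, and the project itself implements these engines in VHDL rather than software.

class BypassDecoder:
    def __init__(self, bits, cod_i_range=510, cod_i_offset=0):
        self.bits = iter(bits)            # pre-extracted bitstream bits (0/1)
        self.cod_i_range = cod_i_range    # current interval range
        self.cod_i_offset = cod_i_offset  # current offset within the interval

    def decode_bypass(self):
        # Append one more bit of precision to the offset, then compare it
        # against the range to decide the bin value.
        self.cod_i_offset = (self.cod_i_offset << 1) | next(self.bits)
        if self.cod_i_offset >= self.cod_i_range:
            self.cod_i_offset -= self.cod_i_range
            return 1                      # bin decoded as 1
        return 0                          # bin decoded as 0

dec = BypassDecoder(bits=[1, 0, 1, 1], cod_i_range=400, cod_i_offset=300)
print([dec.decode_bypass() for _ in range(4)])   # -> [1, 1, 0, 0]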
The MBAFF coding tool, which is provided in
H.264 MP/HP, provides good coding efficiency
when a scene consists of both stationary and
significant-motion regions. The MBAFF coding
tool was reported to reduce the bit rate by 28-33%
compared with frame-based coding. In the
decision decoding engine, syntax information
from the left MB and top MB must be referenced
to select the context index. To support the MBAFF
coding tool, it becomes more complicated to
obtain the essential syntax information from the
neighboring MBs: the decoder has to refer to the
top or the bottom MB in the neighboring MB pairs
according to whether the related MBs are coded in
frame or field mode, which greatly increases the
complexity of the hardware realization.
For better compression efficiency on HD videos,
an 8×8 transform is adopted in H.264 HP. That is,
the CABAC decoding architecture should be able
to support both the 4×4 and 8×8 transform blocks,
which increases both the hardware cost for 8×8
blocks and the decoding latency. Therefore, we
have to consider hardware sharing in the CABAC
decoder when operating on 4×4 and 8×8 blocks to
reduce the hardware cost.

Motion Estimation

Motion estimation of a macroblock involves
finding a 16×16-sample region in a reference
frame that closely matches the current
macroblock. The reference frame is a previously
encoded frame from the sequence and may be
before or after the current frame in display order.
An area in the reference frame centred on the
current macroblock position (the search area) is
searched, and the 16×16 region within the search
area that minimizes a matching criterion is chosen
as the best match.
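A software sketch of this full search is shown below; the 16x16 block size and the sum-of-absolute-differences (SAD) criterion are common illustrative choices, not necessarily the exact settings of any particular encoder.

import numpy as np

# Exhaustively search a window of the reference frame for the 16x16 region
# with the smallest SAD against the current macroblock.
def full_search(cur_mb, ref_frame, top_left, search_range=8, n=16):
    y0, x0 = top_left
    best = (None, float("inf"))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue
            cand = ref_frame[y:y + n, x:x + n]
            sad = int(np.abs(cur_mb.astype(int) - cand.astype(int)).sum())
            if sad < best[1]:
                best = ((dy, dx), sad)    # candidate motion vector and its SAD
    return best

np.random.seed(0)
ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = ref[20:36, 24:40].copy()            # block that truly sits at offset (+4, +8)
print(full_search(cur, ref, top_left=(16, 16)))   # -> ((4, 8), 0)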

Motion Compensation

The selected best matching region in the
reference frame is subtracted from the current
macroblock to produce a residual macroblock
(luminance and chrominance) that is encoded and
transmitted together with a motion vector
describing the position of the best matching
region (relative to the current macroblock
position). Within the encoder, the residual is
encoded and decoded and added to the matching
region to form a reconstructed macroblock which
is stored as a reference for further motion-
compensated prediction. It is necessary to use a
decoded residual to reconstruct the macroblock in
order to ensure that encoder and decoder use an
identical reference frame for motion
compensation.

Transform Coding

The purpose of the transform stage in an image
or video CODEC is to convert image or motion-
compensated residual data into another domain
(the transform domain). The choice of transform
depends on a number of criteria:
1. Data in the transform domain should be
decorrelated (separated into components with
minimal inter-dependence) and compact (most of
the energy in the transformed data should be
concentrated into a small number of values).
2. The transform should be reversible.
3. The transform should be computationally
tractable (low memory requirement, achievable
using limited-precision arithmetic, low number of
arithmetic operations, etc.).

Quantization

A quantiser maps a signal with a range of
values X to a quantised signal with a reduced
range of values Y. It should be possible to
represent the quantised signal with fewer bits
than the original since the range of possible
values is smaller. A scalar quantiser maps one
sample of the input signal to one quantised output
value and a vector quantiser maps a group of input
samples (a vector) to a group of quantised
values.
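A minimal sketch of a uniform scalar quantiser follows; the step size is illustrative.

# Forward quantisation maps a sample to an integer level; inverse quantisation
# reconstructs an approximation, so fewer bits are needed to represent Y than X.
def quantise(x, step):
    return round(x / step)

def dequantise(level, step):
    return level * step

step = 8
for x in (3.2, 12.7, -20.0):
    q = quantise(x, step)
    print(x, "->", q, "->", dequantise(q, step))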

Arithmetic Coding

The variable length coding schemes share the
fundamental disadvantage that assigning a
codeword containing an integral number of bits to
each symbol is sub-optimal, since the optimal
number of bits for a symbol depends on the
information content and is usually a fractional
number. Compression efficiency of variable length
codes is particularly poor for symbols with
probabilities greater than 0.5, as the best that can
be achieved is to represent these symbols with a
single-bit code. Arithmetic coding provides a
practical alternative to Huffman coding that can
more closely approach theoretical maximum
compression ratios. An arithmetic encoder
converts a sequence of data symbols into a single
fractional number and can approach the optimal
fractional number of bits required to represent
each symbol.

Fig2. Block diagram of CABAC Decoder

Flow chart for CABAC decoder


Fig3: Decoding flow of CABAC decoder

Fig4. Decode bypass process

Fig5. Normal decode process

Fig6. Terminal decoding process
IV SIMULATION AND SYNTHESIS RESULTS

Simulation Results:




Fig7.Bypass decode





Fig8. Getcabac decoding



Fig9. Terminate decode



Fig10. Cabac decoder

Synthesis Results:

Fig11 Bypass Decode

Fig12. Terminate decode

Fig13. Getcabac

Fig14.Cabac decoder


V.CONCLUSION

In this project, H.264 decoder functional blocks
such as Context-based Adaptive Binary Arithmetic
Coding (CABAC), Inverse Quantization and
Inverse Discrete Cosine Transform are designed
using VHDL to increase the speed of the decoding
operation. Since CABAC decoding is a highly time
consuming process, a CPU or DSP is not an
appropriate choice for real-time CABAC decoding
applications. This project work shows that a
hardware design of the CABAC decoder is feasible
for a commercially viable H.264/AVC based video
application, especially with the increase in image
size and quality settings expected in the future.

Future Developments
In this project work, the CABAC decoder is
designed using VHDL to increase the speed of the
decoding operation. CABAC is a key technology
adopted in the H.264/AVC standard; it offers
about a 16% bit-rate reduction compared to the
baseline entropy coder while increasing access
frequency from 25% to 30%, so CABAC decoding
is a highly time consuming process. Multiple
decoding engines and shared memory between
the modules can be implemented in the future to
increase the decoding speed, especially to suit
high bit rate applications such as HDTV, High
Definition DVD, broadcast and streaming, and
digital television. Much room therefore remains
for real-time applications with higher video
quality and larger image resolutions in the future.

REFERENCES:
[1] Joint Video Team (JVT) of ISO/IEC MPEG&ITU-T
VCEG, ISO/IEC 14496-10, 2003.
[2] ITU-T Recommendations for H.264
Conformance
[3] Bitstream Files, Available:
http://ftp3.itu.ch/avarch/jvtsite/
draft_conformance/ Joint Video Team (JVT)
Reference Software JM10.2.
[4] I. Richardson, Video CODEC Design, John
Wiley & Sons, 2002.
[5] D. Marpe, G. Blättermann and T. Wiegand,
Adaptive Codes for H.26L, ITU-T SG16/6
document VCEG-L13, Eibsee, Germany, January
2001.
[6] vcodex.com/h.264mal
[7]
http://www.ittiam.com/pages/products/h264-
dec.htm

AN OCR SYSTEM FOR PRINTED KANNADA CHARACTERS USING
CORRELATION METHOD
Dr. Ramesh Babu, Prof. & HOD CSE, Dayanand Sagar Engineering College, Bangalore
Mr. Nitya E., Asst prof., Dept of CSE, Dr. Ambedkar Institute of Technology, Bangalore
Mr. Harinarayana Bhat G., 4th sem M.Tech, Dr. Ambedkar Institute of Technology, Bangalore
E-mail: nityae@gmail.com, hariannadka@gmail.com
______________________________________________________________________________________________________________________
ABSTRACT
Optical Character Recognition (OCR) is
the process of converting the textual image into
the machine editable format. This paper proposes
an OCR system for printed Kannada Characters.
The input to the system is the scanned image of a
page of text containing Kannada characters, and
the output is a machine editable file. The system
first pre-processes the input document containing
the Kannada characters and converts it into binary
form. Then the system extracts the lines from the
document image and segments the lines into
character and sub-character level pieces. A
histogram technique and the connected
component method are used for character
segmentation, and the correlation method is used
to recognize the characters. First, different sample
characters are collected, pre-processed and
stored in a file. The input image is segmented into
character level pieces and compared with the
sample characters stored in the file; the
comparison returns a corresponding target ID,
and each target ID has a corresponding character
class name. The class name is then displayed in
the editor, in machine editable format.
Index Terms: Correlation method, Connected
Component method, Histogram technique.
1. Introduction

Optical Character Recognition is one
of the oldest sub fields of pattern recognition
with a rich contribution for the recognition of
printed documents. OCR is a field of research
in pattern recognition, artificial intelligence
and computer vision. Though academic
research in the field continues, the focus on
OCR has shifted to implementation of proven
techniques.
When one scans a paper page into a
computer, it produces just an image file, a photo
of the page. The computer cannot understand the
letters on the page, so you cannot search for
words or edit it or change the font, as in a word
processor. You would use OCR software to
convert it into a text or word processor file so that
you could do those things. The result is much
more flexible and compact than the original page
photo. The need for OCR arises in the context of
digitizing the documents from the library, which
helps in sharing the data through the Internet.
Currently there are many OCR systems
available for handling printed English documents
with reasonable levels of accuracy. Such systems
are also available for many European languages as
well as some of the Asian languages such as
Japanese, Chinese, etc. However, there are not
many reported efforts at developing OCR systems
for Indian languages especially for a South Indian
language like Kannada. Kannada is one of the
South Indian languages, which has 16 vowels and
34 consonants. It includes the possible consonant-
vowel combinations are 16*34 =544. The number
of possible consonant-consonant-vowel (Complex
characters) combinations is 16*34*34 = 18496.
Some of the Kannada characters are shown in
Figure 1.1

Figure 1.1: Printed Kannada Characters
2. Proposed methodology


Figure 2.1 Proposed method
The main steps involved in achieving OCR are as
follows:
Preprocessing
Segmentation
Character recognition

3. Preprocessing

The input to the system is a digital image of
the document containing printed Kannada text,
captured by scanning the document using a
flatbed scanner or digital camera. The input
document is in RGB format. It is first converted to
a grayscale image; we then calculate a threshold
value for the grayscale image and, using that
value, convert the image to black-and-white
format. Finally, the image is stored in a matrix,
i.e., in binary form.
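A minimal sketch of this preprocessing step is shown below; the paper computes "the threshold value" without naming a method, so the global mean used here is only an illustrative stand-in.

import numpy as np

# Grayscale page -> binary matrix, with 1 marking dark (ink) pixels.
def binarize(gray):
    threshold = gray.mean()               # illustrative global threshold
    return (gray < threshold).astype(np.uint8)

page = np.array([[250, 252, 40],
                 [245, 30, 35],
                 [251, 248, 250]], dtype=np.uint8)
print(binarize(page))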

4. Segmentation
4.1 Line segmentation:
To separate the text lines, from the document
image, the horizontal projection profile [1] of
the document image is found. The horizontal
projection profile is the histogram of the
number of ON pixels along every row of the
image. White space between text lines is used
to segment the text lines. Figure 4.1 shows a
sample Kannada document along with its
horizontal projection. The projection profile
will have valleys of zero height between the
text lines. Line segmentation is done at these
points.

Figure 4.1 Line segmentation
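A short sketch of this projection-profile segmentation follows (toy image; rows whose ON-pixel count is zero are treated as the white space between text lines):

import numpy as np

# Each maximal run of rows with a non-zero horizontal projection becomes one text line.
def segment_lines(binary):
    profile = binary.sum(axis=1)            # ON pixels along every row
    lines, start = [], None
    for r, count in enumerate(profile):
        if count > 0 and start is None:
            start = r                       # a text line begins
        elif count == 0 and start is not None:
            lines.append((start, r - 1))    # the line ended at the previous row
            start = None
    if start is not None:
        lines.append((start, len(profile) - 1))
    return lines                            # list of (top_row, bottom_row) pairs

img = np.array([[0, 0, 0],
                [1, 1, 0],
                [0, 1, 1],
                [0, 0, 0],
                [1, 0, 1]], dtype=np.uint8)
print(segment_lines(img))                   # -> [(1, 2), (4, 4)]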
4.2 Character segmentation:
The letters in Kannada are composed by
attaching to the glyph of a consonant the glyphs
of the vowel modifiers and the glyphs of the
consonant conjuncts. If we considered all the
combinations, building a classifier for this number
of characters would be very difficult. Our strategy
is therefore to segment the word into its
constituents, i.e. the base consonant, the vowel
modifier and the consonant conjunct. This is
difficult to achieve directly; however, a close look
at Kannada words shows that, to extract the glyph
of a consonant, the glyphs of the vowel modifiers
and the glyphs of the consonant conjuncts, we can
divide the character into two zones [2].
Top zone: The top zone mainly consists of
the main portion of the character. It includes
the base consonant or vowels or some vowel
modifiers.
Bottom zone: The bottom zone consists of
glyphs for the consonant conjuncts.
Using the connected component method [3], we
first count the number of consonants, vowels,
vathu or vowel modifiers present in the text line.
Connected component labeling is an algorithmic
application of graph theory, where subsets of
connected components are uniquely labeled based
on a given heuristic. It is used in computer vision
to detect connected regions in binary digital
images, although color images and data with
higher dimensionality can also be processed. In
the connected component method, connectivity
checks are carried out by checking the labels of
pixels that are North-East, North, North-West and
West of the current pixel. We then extract those
characters separately and send them for character
recognition.
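The sketch below uses SciPy's labeling routine as a stand-in for the connected-component pass described above; the 3x3 structuring element gives 8-connectivity, so diagonal neighbours (e.g. North-East) are joined.

import numpy as np
from scipy import ndimage

# Label connected glyph components in a binary image and count them.
def count_components(binary):
    labels, num = ndimage.label(binary, structure=np.ones((3, 3)))
    return labels, num

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 0]], dtype=np.uint8)
labels, num = count_components(img)
print(num)      # 3 separate components in this toy image
print(labels)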

5. Character Recognition

The character recognition phase takes each
individual character as input. We use the
correlation method [4] to compare the input
character image with the stored sample
characters. Before applying the correlation
method, the input character and the stored
character must be of the same size, so the input
character image is resized using nearest neighbor
interpolation. The input image is then compared
with each stored character, which yields a
correlation coefficient; the target ID of the
maximum correlation coefficient is returned to
the main application, and each target ID has a
corresponding class name.

5.1 Correlation Method

The correlation is one of the most
common and most useful statistics. A correlation
is a single number that describes the degree of
relationship between two variables. Correlation
functions used in image processing, astronomy,
financial analysis, and statistical mechanics differ
only in the particular stochastic processes they
are applied to. In correlation area based method, a
statistical comparison is computed from digital
numbers taken from same-size sub-arrays in the
two images. A correlation coefficient is computed
by the following equation, using digital numbers
from sub-arrays A and B.
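The equation image is not reproduced in this copy of the paper; a standard form of the normalized correlation coefficient over same-size sub-arrays A and B (with means \bar{A} and \bar{B}), consistent with the properties described next, is

c = \frac{\sum_{i}\sum_{j} (A_{ij}-\bar{A})(B_{ij}-\bar{B})}{\sqrt{\sum_{i}\sum_{j} (A_{ij}-\bar{A})^{2} \, \sum_{i}\sum_{j} (B_{ij}-\bar{B})^{2}}}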

The normalized correlation coefficient c
assumes values in the range from -1 to +1, with +1
indicating perfect correlation (exact match). A
coefficient of -1 indicates an inverse correlation,
such as in the case of a positive and a negative of
the same image. Coefficient values near zero
indicate a non-match and could result from the
comparison of any two sets of random numbers.
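A compact sketch of the matching step follows; the nearest-neighbour resize, the sample dictionary and the use of np.corrcoef for the normalized correlation coefficient are illustrative choices, not the project's exact code.

import numpy as np

def resize_nearest(img, out_h, out_w):
    # Nearest-neighbour interpolation by integer index mapping.
    rows = np.arange(out_h) * img.shape[0] // out_h
    cols = np.arange(out_w) * img.shape[1] // out_w
    return img[rows][:, cols]

def recognize(glyph, samples):
    """samples: {target_id: 2-D template array}; returns (best id, coefficient)."""
    best_id, best_c = None, -2.0
    for target_id, template in samples.items():
        g = resize_nearest(glyph, *template.shape).astype(float).ravel()
        t = template.astype(float).ravel()
        c = np.corrcoef(g, t)[0, 1]         # normalized correlation coefficient
        if c > best_c:
            best_id, best_c = target_id, c
    return best_id, best_c

template = np.array([[1, 1], [0, 1]], dtype=float)
glyph = np.array([[1, 1, 1, 1], [1, 1, 1, 1],
                  [0, 0, 1, 1], [0, 0, 1, 1]], dtype=float)
print(recognize(glyph, {"ka": template}))    # -> ('ka', 1.0) approximately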
The output after classification has to be
transformed into a format, which can be loaded
into a Kannada editing package. The method of
composition of aksharas in all Kannada
typesetting packages is similar. The string
representing an akshara is composed from
different character class names corresponding to
the different components of the akshara as
follows: the codes for the base consonant appear
first, followed by the codes for the consonant
conjuncts; the codes for the vowel modifier
appear last and signify the end of an akshara.
The character class name is displayed in the open
source Baraha editor. If it is a complex character,
letter concatenation takes place before it is
displayed in the editor. From the Baraha editor we
export the character, which is then displayed in
Notepad.
6. Results and discussion

We measured the performance of our
system by scanning documents that contain
different complex Kannada characters. We
collected nearly fifty different samples that
include vowels, consonants, vathu and vowel
modifiers. A complex Kannada character is a
combination of vowels, consonants, vathu or
vowel modifiers. The system first segments the
document into character level pieces, which are
compared with the sample characters. It
recognized more than a hundred Kannada
characters and more than a hundred complex
Kannada words, with an accuracy above 90%.
Screen shots of the system are shown in the
following figures.


Input printed Kannada Character


Preprocessed Character



Segmented characters

Character recognition and displaying
7. Conclusion

This paper describes a simple and efficient OCR
system for printed text documents in Kannada, a
South Indian language. It takes printed Kannada
character as input image and converts it into
machine editable format. The system is designed to
be independent of the font and size of text. At the
end, the paper shows some results with the system,
which delivers reasonable character recognition
accuracy. By using this system we can restore
available ancient Kannada documents or image into
machine editable format, so that we can easily
analyze and understand the ancient document. We
can easily manage the Kannada documents once it is
converted into machine editable format.
8. References

[1] R SANJEEV KUNTE and R D SUDHAKER SAMUEL,
A simple and efficient OCR, for basic symbols in
printed Kannada text, Sadhana Vol. 32, Part 5,
October 2007, pp. 521-533. Printed in India.
[2] T V ASHWIN and P S SASTRY, A font and size-
independent OCR system for printed Kannada
documents using support vector machines, Sadhana
Vol. 27, Part 1, February 2002, pp. 35-58. Printed in
India.
[3] Gonzalez R C, Woods R E 1993, Digital image
processing, (Boston, MA, USA: Addison Wesley
Longman Publishing Co. Inc.)

[4] C. Balletti F. Guerra, Image matching for historical
maps comparison, e-perimetron, Vol. 4, No. 3,
2009[180-186].

HEART ATTACK DETECTION USING MOBILE PHONE AND WIRELESS
SENSORS
Rajeev R Thobbi, Athar Shaikh
Department of Information Science and Engineering,
S D M College of Engineering & Technology, Dharwad, Karnataka, India
rajeevthobbi@gmail.com,atharshaikh1@gmail.com
_________________________________________________________________________________
ABSTRACT
In the next generation of info-
communications, where mobile Internet-enabled
devices and third generation mobile
communication networks have become a reality,
Location Based Services (LBS) are expected to be
a major area of growth. Providing information,
content and services through positioning
technologies forms the platform for new services
for users and developers, as well as creating new
revenue channels for service providers. These
crucial advances in location based services have
opened up new opportunities in real time heart
attack detection and help eliminate patient error.
In this paper a mobile-based location technique
using the Global Positioning System (GPS) and
cellular mobile network infrastructure is
employed to provide the location tracking
capability. The patient will have to carry a
Bluetooth enabled, pulse sensing wrist band and a
cell phone equipped with Bluetooth and
GPS technology. When the Bluetooth enabled wrist
band detects a heart attack, it will alert the cell
phone which in turn will automatically call for help
and provide the patient's location. The goal is to
provide early heart attack detection so that the
patient will be given medical attention within the
first few critical hours, thus greatly improving his
or her chances of survival.

Keywords:
Info-communication, LBS (Location Based
Services), GPS (Global Positioning System),
tracking, Bluetooth.

I. INTRODUCTION

Cardiovascular disease is the leading cause of
death in the developed world. It refers to various
medical conditions that affect the heart and the
blood vessels. These conditions include coronary
artery disease, myocardial infarction (heart
attack), angina, congestive heart failure,
hardening of the arteries, stroke and peripheral
vascular disease. Studies in Australia show that
more than two thirds of Australians would not call
an ambulance if they thought they were having a
heart attack [1]. This is backed up by international
studies [2] that indicate that many people
hesitate calling the emergency services or going to
emergency centers with symptoms of a heart
attack. However, after a heart attack it is
extremely important to get treatment as quickly
as possible, since there is a direct relationship
between time-to-treatment and the success of
reperfusion (restoration of blood flow to the
heart). A heart attack comes with warning signs
that are not always recognized by the victim.
People often confuse a heart attack with
indigestion or heart burn. A study in Germany [3]
has shown that sudden cardiac death does not
come out of the blue and people often have typical
symptoms as long as 2 hours before cardiac death
occurs.

By just wearing the Bluetooth enabled pulse
sensing wrist band, the patient need not worry
about device operation. The patient will only be
required to carry a cell phone equipped with
Bluetooth and GPS technology. When the
Bluetooth enabled wrist band detects a heart
attack, it will alert the cell phone which in turn
will automatically call for help and provide the
patient's location. The goal is to provide early
heart attack detection so that the patient will be
given medical attention within the first few
critical hours, thus greatly improving his or her
chances of survival.

Much money and research is spent on making
people aware of the warning signs (e.g. [4], [5]).
Getting patients to recognize the warning signs is
not an easy task. Several web sites offer a set of
questions to assess whether a person has heart
attack symptoms, but the questions are not
integrated in devices that a user carries all the
time.

The challenge is to reduce the delay time
between the onset of a heart attack and the call to
the emergency services ([2], [6]), since early
detection and prompt treatment is the key to the
success of the clinical outcomes.
II. METHODOLOGY

The sensors have to be simple to operate
allowing a person to take a measurement. It is
important to optimize the accuracy and ease-of-
use ratio for sensors.




A. Wrist Band Block Descriptions

Fig. 1.Overview of wrist band block diagram
Biosensors - Disposable Ag-AgCl ECG round pad
electrodes are to be placed on each wrist of the
patient. The electrodes are embedded in pre-
soaked electrolyte foam with double-sided peel-
off adhesive tape for attachment. The foam
provides good electrical contact with the skin and
reduces motion artifacts. The electrodes read
the heart's electrical activity and output it to the
circuitry.
Analog Circuitry The circuitry will consist of
two buffers, a differential amplifier, and a band-
pass filter. Each electrode will connect to a buffer
which is needed to match the high impedance of
skin to the low impedance differential amplifier.
The differential amplifier then takes the
difference between the data collected by the
electrodes and provides a gain before outputting
to the band-pass filter. The band-pass filter is
needed to eliminate noise (other biological
signals, environmental, motion, etc.) and provides
additional gain. Finally, the ECG waveform is fed
into the A/D Converter.
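For illustration, a software equivalent of such a band-pass stage is sketched below; the cut-off frequencies, filter order and sampling rate are illustrative assumptions, not the circuit's actual component values.

import numpy as np
from scipy.signal import butter, lfilter

# Keep the ECG band and suppress baseline wander and high-frequency noise.
def bandpass(ecg, fs, low_hz=0.5, high_hz=40.0, order=2):
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return lfilter(b, a, ecg)

fs = 250.0                                   # samples per second (illustrative)
t = np.arange(0, 2.0, 1.0 / fs)
raw = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 60.0 * t)  # signal + mains noise
print(bandpass(raw, fs)[:5])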
A/D Converter The analog to digital converter
will convert the analog data from the biosensors
to digitally sampled data points, while allowing
enough resolution and sampling rate for purpose
of detecting a heart attack. The data points will be
sent to the microcontroller and be sampled at
regular intervals.
Microcontroller - The microcontroller is a
BASIC Stamp 2 which will run a real-time program
to constantly monitor the output of the A/D
converter, comparing current data samples
against stored samples. It will include an
algorithm that processes both the amplitude and
frequency of the heart beat, to cover as many
possible cases of a heart attack as reliably as
possible. Once a heart attack is detected and
confirmed, relevant data such as the time of
occurrence will be collected, and a signal will be
sent to the Bluetooth module to initiate the
emergency dial-up sequence via the cell phone.
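To make the comparison step concrete, the following is a minimal sketch, in Python rather than BASIC Stamp firmware, of how such an amplitude/frequency check against a stored baseline could be organized. The sampling rate, thresholds and the baseline profile fields are illustrative assumptions, not values from this paper.

from collections import deque

SAMPLE_RATE_HZ = 100           # assumed A/D sampling rate
WINDOW_SECONDS = 10            # length of the analysis window

def beats_per_minute(samples, threshold):
    """Count rising threshold crossings in the window and scale to BPM."""
    beats, above = 0, False
    for s in samples:
        if s >= threshold and not above:
            beats, above = beats + 1, True
        elif s < threshold:
            above = False
    return beats * 60 / WINDOW_SECONDS

def abnormal(samples, baseline):
    """Compare current amplitude and heart rate against the stored baseline."""
    peak = max(samples)
    rate = beats_per_minute(samples, baseline["beat_threshold"])
    return (peak < baseline["min_peak"] or peak > baseline["max_peak"]
            or rate < baseline["min_bpm"] or rate > baseline["max_bpm"])

def monitor(read_adc, baseline, trigger_bluetooth_alert):
    """Real-time loop: fill a sliding window of ECG samples and raise an alert."""
    window = deque(maxlen=SAMPLE_RATE_HZ * WINDOW_SECONDS)
    while True:
        window.append(read_adc())                 # next digitized ECG sample
        if len(window) == window.maxlen and abnormal(window, baseline):
            trigger_bluetooth_alert()             # start the emergency dial-up sequence
            window.clear()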
Bluetooth Module - The Bluetooth module is an
Embedded Blue Transceiver AppMod which
conforms to v1.1 of the Bluetooth standard and
provides connectivity for the BASIC Stamp.
Bluetooth is chosen for wireless connectivity
between the sensor package and the cell phone
because the connection can be treated as a
low-power wireless serial link, and Bluetooth is an
emerging standard for personal area networks.
B. Working of Mobile phone and GPS Module
Cell Phone - The phone should meet the
requirement of having both Bluetooth and GPS
built in. An application establishes the link
between the sensor package and the phone and
passes GPS and subscriber information to
emergency personnel.
GPS Unit - GPS is becoming a standard feature
of newer production-model cell phones in order to
comply with FCC regulations. Work may include
enabling a GPS unit to feed data to the cell phone
if time permits.
The target group for our proposal is users that
have had a heart attack and are concerned that
they will be struck by another one. Their concerns
are backed up by the American Heart Association
indicating that people who have had a heart
attack have a sudden death rate that is 4 to 6
times that of the general population [9]. We also
target users that have a known heart condition
(e.g. irregular heart beat, angina).
The system is capable of monitoring the
personal health of its user using a mobile phone
and various wireless sensors (see Figure 1). The
mobile phone application analyses, in real-time,
data wirelessly received from the sensors, such as
an electrocardiogram (ECG), blood pressure
measurements or accelerometer data. The mobile
phone can send this data, in real time, to heart
specialists. If a person is in danger (cardiac arrest,
fall) and is unable to call an ambulance, the
mobile phone will automatically determine the
current location of the person using GSM Cell-ID
or GPS and send automated voice and text
messages to their cardiologist and other
emergency numbers programmed into it [8].


Fig. 2. Wireless health monitoring system
III. REQUIREMENTS
Time is a crucial factor for heart attack patients.
If proper medical treatment is performed within
60 minutes of the event, the chances of surviving
improve dramatically and the likelihood of
serious damage to heart tissue decreases.

If a person decides to do the self-test, it usually
means she/he feels something. The first priority is
to ask whether the person has heart attack related
symptoms and analyze the answers. If the
symptoms clearly indicate that the person is
having a heart attack there is no reason to delay
the call to the emergency services. For example, if
the user feels pressure in the chest, pain
spreading to their left arm, is sweaty and looks
extremely pale, the application will immediately
urge the person to call the emergency services.
Making an additional ECG recording would simply
delay the process by several minutes. Automating
the call to the emergency services avoids the
possibility of dialing incorrect numbers but, more
importantly, can reduce the hesitation many
people feel about calling an ambulance while
hoping that the pain will pass. Talking to an
emergency operator could convince the user that
urgent treatment is needed. Ease of use is
important since most people will be stressed or
feeling very uncomfortable, so a quick assessment
with minimal interaction is crucial.

A heart attack can happen anytime, anywhere.
Therefore the user must be able to do the self-test
wherever and whenever symptoms occur. This
has an impact on the technology that can be used.
The sensors should be small and non-intrusive so
that people are willing to carry them all the time.
A mobile phone is a logical choice of device for the
self-test application since most people carry one.

It is important to know some personal details
about the user, such as their age, gender or
preferred language, in order to adjust the way the
application interacts with the person. Medical
conditions, such as prior heart attacks, angina or
allergy to certain medicines are important in
assessing the probability of a heart attack and to
provide the correct feedback to the user (e.g. do
not prescribe an aspirin to a person if s/he is
allergic to it).

The American Heart Association [7]
recommends having easy access to important
phone numbers in case of mild to moderate
symptoms that do not require the emergency
services. The correct phone number to dial varies
depending on the time and day. It is therefore
important to add contextual information such as
date and time. This allows the application to
automatically dial the correct number at the day/time
the self-test is conducted. Knowing the location is
useful if the user has a cardiac arrest, as this will
guide emergency services to the right location.
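As an illustration of that date/time-aware dialing logic, a small Python sketch follows; the contact table, hours and numbers are invented examples, not values from the paper.

from datetime import datetime

# (days of week, start hour, end hour, number) - example entries only
CONTACTS = [
    ({"Mon", "Tue", "Wed", "Thu", "Fri"}, 8, 18, "+61-2-5550-0101"),  # cardiologist's office
    ({"Sat", "Sun"}, 9, 17, "+61-2-5550-0102"),                        # weekend clinic
]
EMERGENCY = "000"   # fallback when no routine contact applies

def number_to_dial(now=None):
    """Pick the contact whose day/time window covers 'now', else emergency services."""
    now = now or datetime.now()
    day = now.strftime("%a")
    for days, start, end, number in CONTACTS:
        if day in days and start <= now.hour < end:
            return number
    return EMERGENCY

print(number_to_dial())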

IV. LOCATION TRACKING SYSTEM

The location technique is isolated from the
various applications in order to provide flexibility
to the overall system architecture. This feature
increases the interoperability of the system by
enabling it to be interfaced with different
applications, which also allows for reusability of
software when location technology advances and
further development is required [10]. The
Location Tracking System focuses on the
integration of the hybrid location technique based
on GPS + Cell-ID technique as illustrated in
Figure 3.


A. Location Determination Unit

The location determination unit (LDU) is
designed to be implemented on the mobile station
for acquiring raw location information. This
information will then be processed into an
appropriate form, and sent out on a periodic basis
to the MIS using short messaging service (SMS) or
via the internet using general packet radio service
(GPRS) for further processing [10].
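A hedged sketch of that periodic reporting loop is shown below. The fix format, reporting interval and the get_fix/send_gprs/send_sms callables are stand-ins for platform-specific functionality, not APIs from the referenced system [10].

import time

REPORT_INTERVAL_S = 60          # assumed reporting period

def encode_fix(lat, lon, timestamp):
    """Pack a raw GPS/Cell-ID fix into a compact text payload."""
    return f"LOC,{lat:.5f},{lon:.5f},{int(timestamp)}"

def report_loop(get_fix, send_gprs, send_sms):
    """Acquire a fix, then push it to the monitoring server over GPRS or SMS."""
    while True:
        lat, lon, ts = get_fix()
        payload = encode_fix(lat, lon, ts)
        try:
            send_gprs(payload)   # prefer the packet-data channel when available
        except ConnectionError:
            send_sms(payload)    # fall back to SMS, as the text describes
        time.sleep(REPORT_INTERVAL_S)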

B. Location Management Unit

Raw location information sent by location
determination unit is in a coded format. There
must be a location management function to
interpret the positioning data so that it is relevant
to location-based service (LBS) applications. This is
achieved through the location management unit
(LMU), which serves as an interface between
positioning equipment and LBS infrastructure.
The LMU processes the acquired raw location
information or alert signal from the LDU into a
compatible form to be used by the application
[10].

C. Application Unit

The application unit makes use of the acquired
location descriptors in applications that provide
the relevant location information to the intended
recipients. In the PLTS, the applications include
providing the real-time monitoring capability
anytime and anywhere for the intended
recipients. Furthermore, in the event of an
emergency situation, the intended recipients will
be notified by an SMS alert message [10].

V. FEATURES AND BENEFITS

Features

- Continuous monitoring of the heart's
electrical activity
- Rapid detection of heart attacks
- Automatic call for medical assistance
- Identifies patient location to emergency
personnel
Benefits
- Provides early detection of heart attacks
- Eliminates delays in receiving medical
treatment
- Improves healthcare services to the at-risk
population
- Saves lives and improves quality of life
VI. CONCLUSION

This paper presents a real-time wireless sensor
network system for monitoring and detecting an
impending cardiovascular event. The system has
the capability to monitor multiple patients at a
time, to deliver remote diagnosis and
prescriptions, and to provide fast and effective
warnings to doctors, relatives, and the hospital.
The system design consists of a wearable wireless
sensor node, a mobile control unit, a heterogeneous
wireless network system, a two-phase data
analysis and visualization system, and the warning
system. The system will contribute to the
reduction of deaths due to heart attack and other
cardiovascular diseases; it can also be used to
provide health services from specialized doctors
to rural areas. The system can be produced at
comparatively low cost, since it only requires the
development of a wearable wireless sensor system,
the software platforms, and data storage capability.
It uses the available wireless network for data
transmission, which further contributes to cost
reduction.


REFERENCES

[1] Australian Heart Foundation, Newspoll
research, www.heartfoundation.org.au (2007).

[2] Schull, M. J.: What are we waiting for?
Understanding, measuring and reducing
treatment delays for cardiac patients. In:
Emergency Medicine Australasia 17:3, 191-192
(2005).

[3] Agrawal, R., Arntz, H.R.: Sudden cardiac death
does not always happen without warning. In:
American Heart Association journal report
(2006).

[4] American Heart Association,
www.americanheart.org (2007).

[5] British Heart foundation, www.bhf.org.uk
(2007).

[6] Moser, D.K., McKinley, S., Dracup, K., Chung,
M.L.: Gender differences in reasons patients delay
in seeking treatment for acute myocardial
infarction symptoms. In: Patient Education and
Counseling Vol 56 pp 45-54, Elsevier (2005).

[7] American Heart Association - Cardiac
rehabilitation section, Medical Contact List,
www.americanheart.org/presenter.jhtml?identifier=304813.
[8] Leijdekkers, P., Gay, V.: A Self-test to Detect a
Heart Attack Using a Mobile Phone and Wearable
Sensors. In: 21st IEEE International Symposium on
Computer-Based Medical Systems.

[9] American Heart Association. Heart Disease
and Stroke Statistics 2005 Update. Dallas, Texas:
American Heart Association (2005).

[10] Chew, S.H., Chong, P.A., Gunawan, E., Goh, K.W.,
Kim, Y., Soh, C.B.: A Hybrid Mobile-based Patient
Location Tracking System for Personal Healthcare
Applications.

DATA AND NETWORK SECURITY IN WINDOWS - THREATS &
COUNTERMEASURES
Dr. Harsh Vikram Singh
Department of Electronics Engineering
Kamla Nehru Institute of Technology, Sultanpur -228118 (India)
harshvikram@gmail.com, harsh@knit.ac.in
________________________________________________________________________________________

ABSTRACT

The Information and Communication
Technology (ICT) revolution over the past two
decades has facilitated multimedia data transfer
over commonly available public domain open
channel networks. Such a public domain network
environment is open to all across the world and
thus necessitates security measures to ensure
multimedia data confidentiality, authenticity, and
integrity for the intended recipient. Organizations
of all sizes want secure network connectivity to
their business data and applications. The need to
connect and collaborate with partners, customers,
and remote/mobile employees anytime and
anywhere has expanded network connectivity
requirements beyond traditional wired LANs to
include dial-up remote access, virtual private
networks (VPNs), as well as Wi-Fi, WiMAX and
other wireless networks. This paper discusses the
commonly used Windows operating systems for
enabling superior access to open networks, and
related issues such as security, management
complexity, and cost.

KEYWORDS
Information Security, Windows Operating
Systems, Rights Management.
I. INTRODUCTION
Enterprises are competing globally to provide
access to information, to enhance productivity,
and to deliver services promptly with the lowest
possible expenditure. The capability to
communicate and collaborate with partners,
suppliers, customers, and employees anytime and
anywhere is now a requirement. Open channel
public domain networks (i.e. Internet) offer public
channels to deliver and exchange information for
cost effective and fast data transfer. Such open
public networks have long been known for being
insecure while at the same time technological
developments for the perfect reproduction, ease
of editing, access and sharing of multimedia data
have resulted in greater concerns of copyright
infringement, illegal distribution, and
unauthorized tampering. Therefore, data security
to ensure authorized access of open channel
digital information and fast delivery to a variety of
end users with guaranteed Quality of Services
(QoS) are important topics of current relevance
[1].
The advent and acceptance of new computing
technologies and the Internet have changed the
way information is stored, accessed, and shared.
Companies have implemented a more open and
distributed information model resulting in
benefits that include [2]:
(i) Increased Employee Productivity:
Enables employees to be flexible, make better
decisions, and respond quickly to the changing
demands of the marketplace by providing secure
access to the information they need anywhere, at
any time.
(ii) Lower Cost: Decreases costs and
increases efficiency by safely leveraging the
power of collaboration and network connectivity.
(iii) Integrated Business Processes:
Increases sales by enabling closer relations with
customers and partners through secure
communications and collaboration.
If you have an operating system (OS) running
on a locked-down box, isolated in a secure room
with no network connections, and it is running a
single application, then most of today's OSes can
be considered secure. But most OSes don't
operate in that environment. Security protection
in Windows perhaps isn't as comprehensive as
was first thought, and is unlikely to ever be
unbreakable, but the layers of protection used in
Vista are still effective at mitigating many attacks
and preventing the exploitation of vulnerabilities
in server processes [3].
Today, many Windows users run with
administrative privileges in both the enterprise
and the home. Running as an administrator
results in a desktop that is hard to manage and
has the potential for high support costs. Deploying
desktops with standard user permissions can
result in cost savings because a non-
administrative user no longer has the ability to
accidentally misconfigure the network or
install an application that might affect system
stability. Windows XP and earlier versions of
Windows are vulnerable to offline attacks that
attempt to obtain a user's data on lost or stolen
computers. Windows Vista includes an agent that
can prevent a Windows Vista-based client from
connecting to your private network if it lacks
current security updates, lacks virus signatures,
or otherwise fails to meet your computer health
requirements [3]. Network Access Protection
(NAP) can be used to protect your network from
remote access clients as well as LAN clients [4].
The agent reports Windows Vista client health
status, such as having current updates and up-to-
date virus signatures installed, to a server-based
Network Access Protection enforcement service.
NAP can enforce health requirements for mobile
computers, remote computers, and computers
directly connected to your private network. The
personal firewall built into Windows Vista builds
on the functionality that is included with
Microsoft Windows XP Service Pack 2. It also
includes application-aware outbound filtering,
which gives you full, directional control over
traffic. Many potentially risky applications, such
as peer-to-peer sharing client applications that
might transmit personal information across the
Internet, are designed to bypass firewalls that
block incoming connections. Windows Vista's
firewall enables enterprise administrators to set
Group Policy settings for applications that should
be allowed or blocked, giving them control over
which applications can communicate on the
network. Windows Vista has
improved support for data protection at the
document, file, directory, and machine level.
The integrated Rights Management client
allows organizations to enforce policies around
document usage [5]. The Encrypting File System,
which provides user-based file and directory
encryption, has been enhanced to allow storage of
encryption keys on smart cards, providing better
protection of encryption keys. In addition, the
new BitLocker Drive Encryption enterprise
feature adds machine-level data protection [6,7].
On a computer with appropriate enabling
hardware, BitLocker Drive Encryption provides
full volume encryption of the system volume,
including Windows system files and the
hibernation file, which helps protect data from
being compromised on a lost or stolen machine.
II. THE NEED FOR SECURITY
The need to connect and collaborate with
partners, suppliers, customers, and employees
anytime and anywhere has expanded network
connectivity requirements beyond traditional
wired LANs to include dial-up remote access,
VPNs, and wireless networks. When addressing
secure network connectivity, administrators need
to consider the following:
Security: Employees not only work from
corporate offices, but also from branch offices,
home offices, or from the road. Providing remote
connectivity requires solutions that are secure,
standards-based, and manageable.
Management complexity: Many vendors offer
dedicated product solutions with little integration
with other products and infrastructure. Setting up
wireless clients with centralized authentication
and policies can be a challenge unless there are
integrated solutions.
Lowering cost: Secure networking can be
expensive if there are multiple products and
technologies with separate licensing, support
contracts, and training.
For example, a secure VPN implementation
may require separate certificate authority for PKI,
separate authentication model, client-side
software, and additional server gateways and
firewalls. By addressing these key secure
connectivity challenges, organizations can achieve
greater employee productivity, decrease costs,
and improve business integration [8].
A. Security
Whereas the LAN once formed a de facto
security boundary, it is now common for
companies to open parts of their internal
networks to suppliers, business partners, and
other stakeholders. By providing greater network
access, companies will need to increase their level
of security to safeguard against unauthorized
access and usage of internal assets. Security
challenges to consider include:
Security procedures and policies that are
adequate to protect LAN data may be ineffective
when the network is opened to outsiders.
Weak authentication used on external
networks can compromise network entry points
and allow unauthorized access.
Sensitive data sent over the Internet or
wireless networks can be compromised without
the proper level of encryption.
Application-aware firewalls are necessary to
ensure traffic is filtered before being allowed onto
the internal network since hackers are now using
more sophisticated application-layer attacks.
B. Management Complexity
Expanding network connectivity brings a set of
technology and process management challenges
that make it difficult for administrators to provide
a centralized and consistent approach to network
access. Management challenges to consider
include:
Consistent network access control: This
requires synchronizing and managing across
multiple network access points such as Internet,
extranets, leased lines, wireless LANs, VPN and
dial-up access, etc.
Access policies: Different users require
different levels of access rights and permissions.
Administrators should consider enforcing policies
based on identity, time, location, and device type.
Single authentication model: A single method
for authentication regardless of the type of access
(dial-up, wireless, VPN, etc.) is highly desired for
ease of management.
C. Lowering Cost
Providing secure network access can increase
employee productivity and expand business
integration; however, deploying, managing, and
maintaining the necessary network access can be
costly. Cost challenges to consider include:
Administrators will spend significant time and
effort if each access method has to be managed
separately with separate authentication and
access control databases.
Security systems are frequently expensive to
acquire, difficult to manage, and obtrusive to end
users' workflow. This may encourage users to find
ways to circumvent systems, or administrators to
minimize their safeguards, leading to less security
instead of more.
In systems with distributed authentication
databases, customers and partners who need
access to data may be forced to wait while the
network staff creates and manages their
credentials, leading to productivity loss.
III. SOLUTIONS FOR SECURE NETWORK
CONNECTIVITY
Microsoft Windows 2000 Server, with its
rich feature set that includes Active Directory,
Certificate Authority, and RRAS (Routing and
Remote Access Service) in combination with other
Microsoft products, such as Windows XP, ISA
Server, and Microsoft Exchange, provides the
foundation that companies can use today to
provide secure network communications to
employees, partners, and suppliers [9]. These
technologies and products work together to
provide three fundamental capabilities that help
deliver secure communications and address
business concerns around security, management
complexity, and cost.
A. Securing the Network Perimeter
The network access points of corporate
networks must be secured against hackers and
unauthorized access. Blocking traffic and shutting
down ports are not sufficient or feasible in an
Internet-connected organization. Having security
solutions that look inside network traffic to
validate application-specific requests mitigates
risks. ISA Server, Microsoft's enterprise firewall,
provides organizations with the stateful-packet
inspection and application-layer firewall
protection required to protect against today's
sophisticated attacks. With ISA Server's
application-level filtering technology, attacks such
as Code Red and Nimda can be mitigated at the
firewall before entering company networks [10].
ISA Server integrates with Microsoft
Management Console (MMC) and Active Directory
to provide a single directory to validate and
manage all access requests for application data or
services. This enables consolidation of access
control and authorization policy in a centrally
managed, replicated, and secure repository. ISA
Server is also designed to work best with
Microsoft Exchange 2000 and Internet
Information Services (IIS) to provide fast and
secure access to e-mail and web content.
B. Providing Strong Authentication and
Encryption
Accessing the corporate network requires
administrators to enforce strong authentication to
validate identity as well as provide strong
encryption to prevent data from being
communicated in the clear. Whether using VPNs
or wireless LANs, Microsoft's Windows 2000 and
Windows XP provide the authentication and
encryption infrastructure to enable secure
connectivity. With Windows 2000's built-in VPN
server and the Windows XP VPN client, organizations
can take advantage of secure standards-based
VPN directly out of the box. Because Microsoft
supports VPN standards such as L2TP/IPSec and
smart card authentication, organizations have
access to the encryption, authentication, and
interoperability that best meet their VPN security
needs. While VPNs are often used to encrypt
traffic over the Internet between users and the
corporate network, encryption can also be
implemented between any Windows 2000,
Windows Server 2003, and Windows XP machine.
Since Microsoft has full standards-based support
for the IPSec security extensions, organizations
can provide robust encryption of all network
traffic, without requiring cumbersome changes to
deployed applications, servers, or network
hardware.
In addition to strong encryption,
authentication requirements can be met through
Windows 2000 support for the IEEE 802.1x
authentication protocol. This allows network
clients and servers to securely authenticate each
other using digital certificates. 802.1x provides
port-level control that can stop interlopers from
connecting to the network and thus prevent any
malicious activity [11]. Companies that want to
build an integrated authentication system that
securely authenticates users against a single
directory, regardless of the access method or
device they are using can take advantage of
Windows 2000's Internet Authentication Service
(IAS) [12]. This built-in industry-standard RADIUS
server interoperates with network access devices
from a multitude of vendors.
C. Securing Wireless Access
In addition to adding remote access
connectivity, customers are also exploring
wireless LANs to provide their mobile laptop
users with anytime, anywhere access.
Authentication and encryption concerns as well as
security weaknesses in the IEEE 802.11b protocol
have slowed the adoption of wireless LANs [13].
Microsoft has tackled the WLAN security problem
in Windows 2000 and Windows XP by working
within the 802.1X standard to support EAP-TLS
(Extensible Authentication Protocol Transport
Layer Security). EAP-TLS provides certificate-
based, mutual authentication for clients and
access points. This counters the rogue access
point threat and supports dynamic session keys
that minimize the key theft problem. EAP can also
be used with smart cards and biometric
authenticators to provide added security.
Windows Vista includes many security features
and improvements to protect client computers
from the latest generation of threats, including
worms, viruses, and other malicious software
(collectively known as malware). Windows 7
extends BitLocker drive encryption support to
removable storage devices, such as flash memory
drives and portable hard drives [14].
Organizations can take advantage of
Microsoft's technologies and products to enable
secure Internet connectivity, secure messaging,
strong user authentication, and VPN and wireless
LAN access to corporate networks. All these solutions
can be controlled by a common management
interface and can be administered using Active
Directory policies. This ensures consistent and
complete application of policies to all access
requests, regardless of where they originate.
It is virtually impossible to build a
completely safe operating system that
accommodates literally hundreds of thousands of
different programs, scripts, applets, etc., written
by many different vendors whose developers may
be good or merely average. However, by providing
advanced security technologies, common
management, and lower cost through integrated
solutions, Microsoft can enable businesses to take
advantage of network connectivity. Enterprise
users with computers with appropriate enabling
hardware benefit from protection of data on lost
or stolen computers with BitLocker Drive
Encryption. Windows Vista includes a new
authentication architecture that is easier for
third-party developers to extend. Biometrics
enhancements include easier reader
configurations, allowing users to manage the
fingerprint data stored on the computer and
control how they log on to Windows 7 [14].
IV. CONCLUSION
To take advantage of the networked world,
organizations must prevent unauthorized users
from accessing their networks, and at the same
time, ensure that authorized users have access
only to authorized assets, using currently available
OSes. Together, these security improvements will
make users more confident in using their PCs.
Whether or not the presented security
mechanisms can be used easily in Windows must
be examined separately for each kind of multimedia
data and OS application. This paper presented
security threats in data transfer over open channel
network environments and their solutions, with
countermeasures available in different versions of
Windows OSes. Their merits and limitations for
data security are discussed. It
is explained that there is no single universal
technique for either achieving robust multimedia
data security or providing guaranteed success in
secure transmission of such multimedia objects
and thus these are still open problems requiring
further investigations.
V. REFERENCES
Harsh Vikram Singh, A. K. Singh, and Anand Mohan, "Minimizing Security Threats in Multimedia Systems," Proc. of 2nd IEEE International Conference on Distributed Framework for Multimedia Applications, pp. 1-5, Penang, Malaysia, 2006.
Available: http://technet.microsoft.com/en-us/library
K. Sangani, "Review - living with Windows Vista," IEEE Engineering & Technology, pp. 52-54, Vol. 1, Aug. 2006.
L.N. Kumar and C. Douligeris, "Demand and service matching at heavy loads: a dynamic bandwidth control mechanism for DQDB MANs," IEEE Transactions on Communications, pp. 1485-1495, Vol. 44, 1996.
S.S. Kohali et al., "The impact of wireless LAN security on performance of different Windows operating systems," Proc. of IEEE Symposium on Computers and Communications 2008, pp. 260-264, July 2008.
D.R. Hayes and S. Qureshi, "A framework for computer forensics investigations involving Microsoft Vista," IEEE Long Island Systems, Applications and Technology Conference, pp. 1-8, May 2008.
http://www.technet.microsoft.com/en-us/library/cc507844.aspx
S. Al-Khayatt et al., "A study of encrypted, tunneling models in virtual private networks," Proc. of IEEE International Conference on Information Technology: Coding and Computing, pp. 139-143, April 2002.
"Routing and Remote Access Service (Remote Access)," Windows Server 2003 Networking Recipes, Chapter 4, pp. 141-189, Springer, 2006.
Kyle Schurman, "Hard Hat Area, X-ray Vision: Virus Penetration," Computer Power User article.
S. Narayan et al., "The Influence of Wireless 802.11g LAN Encryption Methods on Throughput and Round Trip Time for Various Windows Operating Systems," 6th IEEE Annual Communication Networks and Services Research Conference, pp. 171-175, May 2008.
"Internet Authentication Service," Windows Server 2003 Networking Recipes, Chapter 6, pp. 243-287, Springer, 2006.
J. Nam et al., "Load modulation power amplifier with lumped-element combiner for IEEE 802.11b/g WLAN applications," Electronics Letters, pp. 24-50, Vol. 42, January 2006.
K. Sangani, "Lucky seven?" IEEE Engineering & Technology, pp. 32-33, Vol. 4, March 2009.

CONTROLLING IPSEC TRAFFIC IN WINDOWS 2003 SERVER
Siddharth Pandey (1), Royal Kumar Pathak (2), Sumit Pant (3), Rachit Garg (4)
(1) Assistant Professor, (2, 3, 4) MCA Final Year students,
Department of Master of Computer Application, Shri Ram Murti Smarak College of Engineering & Technology, Bareilly
sp.siddharthapndey@gmail.com (1), theroyalpathak@gmail.com (2), sumitpant1988@gmail.com (3), rachitgarg11@gmail.com (4)

_______________________________________________________________________________________________________________________
ABSTRACT

The IP Security protocols are mature enough to
benefit from multiple independent implementations
and worldwide deployment. Towards that goal, we
implemented the protocols for the Windows OS and
a server-based OS. While some differences in the
implementations exist due to the differences in
underlying operating system structures, the design
philosophy is common. The multitasking and
multiprocessing facilities the OS uses for handling
different tasks and for routing are used to
implement the policy engine; a transform table
switch is used to make the addition of security
transformations an easy process; a lightweight
kernel-user communication mechanism is used to
pass key material and other configuration
information from user space to kernel space; and
two distinct ways of intercepting outgoing packets
and applying the IPsec transformations to them are
employed. In this paper, the techniques used in our
implementations are explained, differences in
approaches are analyzed, and hints are given to
potential future implementers of new transforms.

1. INTRODUCTION

IPsec is not a single protocol, but rather a set of
services and protocols that provide a complete
security solution for an IP network. These services
and protocols combine to provide various types of
protection. Since IPsec works at the IP layer, it can
provide these protections for any higher-layer
TCP/IP application or protocol without the need
for additional security methods, which is a
major strength. Some of the kinds of protection
services offered by IPsec include the following: i)
Authentication of the integrity of a message to
ensure that it is not changed en route.

ii) Protection against certain types of security
attacks, such as replay attacks




iii) The ability for devices to negotiate the
security algorithms and keys required to meet
their security needs

iv) Two security modes, tunnel and transport, to
meet different network needs.

In this paper we describe our implementation of
the IPsec protocol using its architecture. In the
Windows 2003 Server OS we use different policies
and mechanisms, using secpol.msc and
services.msc to activate the services. The
implementation was later ported to the Windows
OS (both client and server side) and to other
server-based OSes, and new transforms are added
to support different security requirements, such as
security policies and their associations in the
security databases.

Fig 1. IP Security Protocol Suite

2. OUR IMPLEMENTATION

Our implementation includes security protocols
for incoming and outgoing IP datagrams,
maintenance of the transforms database and the
policy table, setting of security properties on
sockets through setsockopt() calls, as well as a set
of tools for configuring the policy database.
setsockopt() is a function used to set per-socket
options that, here, control the security processing
of outgoing IP packets.
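As a rough illustration of the per-socket interface just mentioned (using Python's socket wrapper for brevity), the snippet below sets a hypothetical IPsec policy option on a socket. IPSEC_POLICY_OPT and the packed value are placeholders: the actual option number and payload layout are defined by the implementation's kernel code, so this is a sketch of the call shape, not a working configuration.

import socket
import struct

IPSEC_POLICY_OPT = 64    # hypothetical option number exposed by the IPsec stack

def request_esp(sock, spi):
    """Ask the stack to apply the ESP transform identified by 'spi' to this socket."""
    policy = struct.pack("!I", spi)                      # illustrative payload layout
    sock.setsockopt(socket.IPPROTO_IP, IPSEC_POLICY_OPT, policy)

# Usage (only meaningful on a kernel that actually defines such an option):
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# request_esp(s, spi=0x1001)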

2.1 Transforms and policy databases

The transforms database, which holds information
such as session keys, replay counters, SPIs and
other transform-specific information, is
implemented as a security database. The elements
are referenced by SPI, remote IP address and
security protocol (since that triple is a unique
identifier of a security association). This allows
fast lookup and packet processing, while minimizing
routing table lookups. Routing table lookups are
performed because the policy engine is implemented
on top of the routing mechanism: it decides whether
some IPsec transforms should be applied to an
outgoing packet. This gives several advantages:
i) reuse of the policy database
ii) faster deployment
iii) less debugging needs to be done
This process is also used for incoming packets. For
these we use the 32-bit security parameter index
carried in the packet, which, together with the
remote IP address and security protocol, identifies
the security association. Both of the core protocols,
ESP [7] and AH [3], are used to provide
authentication and confidentiality services.
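To make the keying scheme concrete, here is a small Python model of such a table, keyed exactly by the triple the text describes. It is an illustration of the data structure, not the paper's kernel code; the field names are assumptions.

from dataclasses import dataclass, field

@dataclass
class SecurityAssociation:
    spi: int
    remote_ip: str
    protocol: str                 # "ESP" or "AH"
    session_key: bytes
    replay_counter: int = 0
    extra: dict = field(default_factory=dict)   # transform-specific information

class TransformsDB:
    """SA table keyed by (SPI, remote IP address, security protocol)."""
    def __init__(self):
        self._table = {}

    def add(self, sa):
        self._table[(sa.spi, sa.remote_ip, sa.protocol)] = sa

    def lookup(self, spi, remote_ip, protocol):
        # One dictionary probe: the fast lookup the text aims for.
        return self._table.get((spi, remote_ip, protocol))

db = TransformsDB()
db.add(SecurityAssociation(0x1001, "192.0.2.10", "ESP", session_key=b"\x00" * 16))
assert db.lookup(0x1001, "192.0.2.10", "ESP") is not None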

2.2 Virtual interfaces Used

Both the Windows OS and the Windows Server 2003
OS implementations provide a virtual interface
(enc0 for the Windows OS; ipsecN, where N is
0, 1, ...). The interface serves several purposes:
i) Packet filtering: when used in a packet filtering
firewall, the interface can be used as a means of
selecting packets that have gone through IPsec
processing. This allows a firewall to only permit
authenticated/encrypted traffic, without having to
change the internals of the packet filter itself.
ii) (Windows OS only): provide a means to send
packets to the IPsec code; in the next subsection we
shall explain why we had to use a virtual interface
for this, and what our approach was in the server
code.
iii) (2003 Server only): allow us to implement the
transport layer mode of IPsec. The lack of an
ip_output() routine in Linux made this a non-trivial
exercise.
In the Server 2003 OS, the enc0 interface is
implemented like the loopback interface, except
that no input routine is defined, as it is not needed.

2.3 Incoming packets

Incoming IP packets that have a Next Protocol field
indicating ESP, AH [3][7] or IP-in-IP are delivered
by ip_intr() to ah_input(), esp_input() or
ipe4_input() respectively. These routines look up
the outer SPI of the IP packet (the one corresponding
to the last IPsec transform applied) in the transforms
database. If an entry is found for an SA (destination
address, SPI, security protocol), the packet and the
relevant entry are passed to the appropriate routine
which performs the necessary (cryptographic)
transforms. If more than one SPI applies
(recursively) to the packet, the appropriate routines
are called in turn. A record is kept of the transforms
applied to the received packet. This lets the
transport layer's input routines compare the
expected level of security to the actual security
services used for the packet and drop it if the
security level was insufficient. If these routines fail,
the packet is discarded and the failure is logged.
Additional measures might be taken in future
releases. Various statistics are kept, and can be seen
via the netstat command or the kernfs/procfs
interface on some platforms. A sysctl interface has
also been provided to turn debugging messages on
or off, if the kernel has been compiled with the
appropriate options.
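The following Python sketch restates that inbound flow: dispatch on the Next Protocol field, unwrap one transform, and repeat while further IPsec headers remain. Packet parsing is stubbed out; the protocol numbers are the standard IANA values for ESP (50), AH (51) and IP-in-IP (4), and the handler and SA-table objects are assumptions rather than the paper's kernel routines.

ESP, AH, IP_IN_IP = 50, 51, 4

def input_packet(pkt, sa_table, handlers):
    """pkt is a dict with 'proto', 'spi', 'dst' and 'payload' keys, standing in
    for a parsed header chain; handlers maps a protocol number to a routine
    (analogous to esp_input()/ah_input()) that verifies or decrypts one layer
    and returns the inner packet."""
    applied = []
    while pkt["proto"] in (ESP, AH, IP_IN_IP):
        sa = sa_table.lookup(pkt["spi"], pkt["dst"], pkt["proto"])
        if sa is None:
            raise ValueError("no SA for packet: discard and log")
        pkt = handlers[pkt["proto"]](pkt, sa)
        applied.append(sa)        # record of applied transforms, later compared
                                  # against the transport layer's expected security
    return pkt, applied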

2.4 Outgoing packet processing

The Windows Server 2003 and Linux
implementations differ in the way they handle
outgoing packets, due to the differences in their
network stacks.

2.4.1 Outgoing packet processing in
Server 2003

In this implementation, the ip_output() routine is
"tapped": shortly after it is called, a lookup in
the policy database is performed. The result of
that query is either no action, in which case the
packet is sent out as is, or the beginning of an
SPI chain which defines the set of transforms that
should be applied to the packet before it is sent
on the network. In the latter case, the
appropriate routines are called successively. If
no failure is reported, statistical counters and
the internal state are updated. Finally, the
modified packet is sent to the appropriate virtual
interface's output routine, indicating that
IPsec has processed the packet. It will then be
re-processed by ip_output(), just for fragmentation
and next-hop route discovery (since the
destination address might have changed as a
result of tunneling).
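Read as a sketch (an interpretation of the flow just described, not the actual Server 2003 code), the outbound path looks roughly like this in Python; policy_lookup, apply_transform and resend stand in for the policy engine, the transform routines and the second pass through ip_output().

def output_packet(pkt, policy_lookup, apply_transform, resend):
    """Consult the policy engine, walk the SPI chain, then hand the packet back
    for fragmentation and next-hop route discovery."""
    chain = policy_lookup(pkt)            # None, or an ordered list of SAs (the SPI chain)
    if not chain:
        return resend(pkt)                # no IPsec action required
    for sa in chain:
        pkt = apply_transform(pkt, sa)    # e.g. ESP encryption or AH authentication
        sa.replay_counter += 1            # update statistics / internal state
    pkt["ipsec_done"] = True              # mark so the second ip_output() pass only
                                          # fragments and routes the packet
    return resend(pkt)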

2.4.2 Outgoing packet processing in Linux

Since Linux doesn't have an ip_output() routine or
its equivalent, we decided to change the way the
policy engine works: for packets that need to go
through the IPsec code, routing entries are
created that point to one of the virtual interfaces.
These virtual interfaces are matched on a one-to-
one basis with real network interfaces. This
allowed us to push security processing to
these pseudo-device drivers, without
radically modifying the routing code. However,
since Linux does not use the Radix tree for its
routing table, we were not able to route based on
arbitrary fields in the packet, as we did in Server
2003 OS. This means that if there exists a security
association with a remote host for a specific
packet flow (for example, a TCP connection), all
packets to that host will be delivered to an ipsec
interface, which then has to decide which of these
packets should be further (cryptographically)
processed. This processing is done by
performing a lookup in a Radix tree that contains
the more detailed policy information.
Otherwise, the output routine of the ipsec
interface behaves like the Server 2003 OS
ip_output(). Using virtual interfaces in this way
has the advantage of presenting a more realistic
MTU to the TCP protocol.
It is not clear however what the MTU that should
be reported in the interface structure is.
Small MTUs[12] will cause no fragmentation due to
IPsec processing, but will decrease throughput by
reducing the actual data size in the packets.
Large MTUs (close to the real network MTU)
maximize the effective bandwidth, but may cause
fragmentation. However, since the greatest
performance cost in software IPsec
implementations is the cryptographic algorithms, it
is unclear whether fragmentation will further
degrade network performance. Determining the
MTU dynamically (for each packet sent) would
require extensive modifications of the TCP code
and impose additional delays in
outgoing packet processing for all such
packets.

2.5 Kernel interfaces

For communication between user processes and
the kernel, we implemented two mechanisms. The
first is the PF_ENCAP [PFENCAP][2] protocol
family, which works similarly to PF_ROUTE. It is
used for manipulating the transforms database
(setting up new security associations, modifying
and deleting them), and is used by the manual
keying utilities as well as potential key management
daemons. In Linux we used Netlink instead, which
is a generic kernel communication mechanism; it
also allows the kernel to ask the key management
daemons to establish a security association with a
remote host. The second mechanism is a
setsockopt()/getsockopt() interface. This allows
processes to set the security properties of their
packet flow and get information about system
defaults. The options are set in the protocol control
block of the socket. Subsequent calls to connect()
or send() will cause either the use of existing SPIs
or a notification to the key management daemon
(through PF_ENCAP or PF_KEY) [2] to negotiate a
security association with the remote host.

2.6 Management utilities
A number of utilities that take advantage of the
PF_ENCAP [2] interface were written to allow for
manual key setup, which is a requirement for all
IPsec-conforming implementations. These utilities
initialize, modify and delete the state kept in the
kernel transform and policy databases, the latter
using the PF_ROUTE socket mechanism.

2.7 Future work

We plan on adding soft state to the tunnel
endpoints, so that Path MTU [PMTU] discovery
information can find its way back to the sender, in
the presence of multiple encapsulations in the
network.

Fig 2. Connection b/w Server & Client

An additional
possible optimization for Path MTU discovery
would be to check what the final size of a
packet about to be processed is; if it is larger
than the MTU of the network interface and the
DF (Don't Fragment) flag is set, there's no need to
actually do the cryptographic processing.
Instead, the appropriate ICMP[12]
message can be sent back. We also plan to
modify TCP's initial MSS resolution to improve
performance in the presence of fragmentation
caused by IPsec-imposed headers. Finally, we are
considering implementing the PF_KEY kernel
communication draft as an alternative to
PF_ENCAP [2].

3. PERFORMANCE

Performance parameters of interest include
latency and throughput. We measured
latency using ping. The measurement
configuration consisted of two machines running
our software. The first was a 166MHz Pentium
equipped
with a 100Mbit/sec ethernet card, the second a
120MHz Pentium with a 10Mbit/sec ethernet card.
We did the test for different packet sizes (512, 1024,
4096 and 8192 bytes of payload) and different
IPsec transforms, pinging from the P166 to the
P120. Note that the scale of the resulting graph is
logarithmic. The graph shows that the cost of
authenticating packets does not really degrade
response time, but that encryption (especially
triple-DES) is a major
bottleneck. The results from the Linux
implementation are similar, which is not
surprising since the computationally intensive
part (encryption) remains the same. In the second
test, we measured how fast the P166 machine could
"push" 15MB of data to the network, using
UDP [9], while applying different transforms on the
packets. The size of the packets was 1 KB. Figure 2
shows the server-client socket connection. In a
third test, we transferred 15 MB of randomly chosen
data from the P166 to a 50MHz SPARC LX with a
10Mbit/sec ethernet card, measuring throughput
with TCP [10] as the transport protocol. It would be
interesting to do the same tests using some
hardware cryptographic device (a DES chip) [14],
but the current regulations (i.e. ITAR) make this
difficult.

4. CONCLUSION

We described the IPsec architecture and some of
the
important parts of our implementation for the
Server
2003 and Linux kernels. Since most of the existing
implementations are proprietary, we believe
this paper will help potential developers in
designing and implementing the standards in the
future. There is still work to be done on our
implementation, as new transforms and interfaces
are defined and standardized.

REFERENCES

[1] [SWIPE] "The Architecture and Implementation of Network-Layer Security Under Unix", Ioannidis, J. and Blaze, M., Fourth Usenix Security Symposium Proceedings, October 2007.
[2] [PFENCAP] "The ENCAP Key Management Protocol, Version 1", Ioannidis, J., Keromytis, A. D. and Provos, N., Work in Progress.
[3] [AH] "IP Authentication Header", Atkinson, R., RFC 1826, September 2007.
[4] [ESP3DES] "The ESP Triple DES-CBC Transform", Metzger, P., Karn, P. and Simpson, W., RFC 1851, October 2005.
[5] M. W. Tobias, Locks, Safes and Security (2/e), Charles Thomas Publisher, Ltd., Springfield, IL, USA, 2000.
[6] Angelos D. Keromytis, University of Pennsylvania (angelos@dsl.cis.upenn.edu), John Ioannidis, AT&T Labs Research (ji@research.att.com), and Jonathan M. Smith.
[7] [ESP] "IP Encapsulating Security Payload", Atkinson, R., RFC 1827, August 1995.
[8] [AHMD5] "IP Authentication using Keyed MD5", Metzger, P. and Simpson, W., RFC 1828, August 1995.
[9] [UDP] "User Datagram Protocol", Postel, J.B., RFC 768, August 1980.
[10] [TCP] "Transmission Control Protocol", Postel, J.B., RFC 793, September 1981.
[10] [IP] "Internet Protocol", Postel, J.B., RFC 791, September 1981.
[11] [IPv6] "Internet Protocol, Version 6 (IPv6) Specification", Deering, S. and Hinden, R., RFC 1883, January 1996.
[12] [ICMP] "Internet Control Message Protocol", Postel, J., RFC 792, September 1981.
[13] [MD5] "The MD5 Message-Digest Algorithm", Rivest, R., RFC 1321, April 1992.
[14] [DES] "NBS FIPS PUB 46 - Data Encryption Standard", National Bureau of Standards, U.S. Department of Commerce, January 1977.
[15] [Anderson] "A Protocol for Secure Communication in Large Distributed Systems", Anderson, D. P. et al., Technical Report UCB/UCSD 87/342, University of California, Berkeley, February 1987.
[16] [SP3] "NISTIR 90-4250: Secure Data Network Systems (SDNS) Network, Transport and Message Security Protocols", National Institute of Standards and Technology, February 1990.
[17] [NLSP] "ISO-IEC DIS 11577 Information Technology - Telecommunications and Information Exchange Between Systems - Network Layer Security Protocol", ISO/IEC JTC1/SC6, November 1992.
KNOWLEDGE DISCOVERY AND DATA MINING COUNTER TERRORISM
Vani S.
IT Department, Park College of Engineering and Technology
Anna University, Coimbatore, India.
v237vv@gmail.com [9566637301]
__________________________________________________________________________________________________________________________________

I. ABSTRACT
Data mining can be used to model crime detection
problems. Crimes are a social nuisance and cost our
society dearly in several ways. Any research that
can help in solving crimes faster will pay for itself.
About 10% of the criminals commit about 50% of
the crimes. Here we look at the use of a clustering
algorithm in a data mining approach to help detect
crime patterns and speed up the process of solving
crime. We applied these techniques to real crime
data from a sheriff's office and validated our
results. We also use a semi-supervised learning
technique for knowledge discovery from the crime
records and to help increase the predictive
accuracy. We also developed a weighting scheme
for attributes to deal with limitations of various
out-of-the-box clustering tools and techniques.
This easy-to-implement data mining framework
works with the geospatial plot of crime and helps
to improve the productivity of detectives and other
law enforcement officers. It can also be applied to
counter-terrorism for homeland security.
Key Words: Data Mining, Clustering, Geospatial
Mining
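As a concrete illustration of the attribute-weighting idea described in the abstract, the following Python sketch scales each encoded attribute by an analyst-chosen weight before handing the records to an off-the-shelf clustering algorithm, so that important attributes dominate the distance computation. The sample records, the encoding and the weights are invented for demonstration and are not the authors' data or weighting scheme.

import numpy as np
from sklearn.cluster import KMeans

# Each row is an encoded crime record: [crime_type_code, latitude, longitude, hour_of_day]
records = np.array([
    [1, 40.71, -74.00, 23],
    [1, 40.72, -74.01, 22],
    [3, 40.60, -73.90, 10],
    [3, 40.61, -73.91, 11],
], dtype=float)

# Heavier weights make an attribute count more in the Euclidean distance used by k-means.
weights = np.array([5.0, 2.0, 2.0, 0.5])   # emphasize crime type and location, de-emphasize time

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(records * weights)
print(labels)    # records with similar weighted attributes land in the same cluster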
II. INTRODUCTION
Terrorists are typically indistinguishable from the
local civilian population. They aren't part of an
organized, conventional military force; rather,
they form highly adaptive organizational webs
based on tribal or religious affinities. They
conduct quasi-military operations using
instruments of legitimate activity found in any
open or modern society, making extensive use of
the Internet, cell phones, the press, schools, and
houses of worship, prisons, hospitals, commercial
vehicles, and financial systems. Terrorists
deliberately attack civilian populations with the
objective to kill as many people as possible and
create chaos and destruction. They see weapons
of mass destruction not as an option of last resort
but as an equalizer, a weapon of choice.
Of the numerous challenges to countering
terrorism, none is more significant than being
able to detect, identify, and preempt terrorists
and terrorist cells whose identities and
whereabouts are unknown a priori. (Alan
Dershowitz's Preemption: A Knife that Cuts Both
Ways offers an extensive discussion of
preemption and the need for a legal structure.) In
our judgment, if preemption is the goal, the key to
detecting terrorists is to look for patterns of
activity indicative of terrorist plots based on
observations of current plots and past terrorist
attacks, including estimates about how terrorists
will adapt to avoid detection.
Our fundamental hypothesis is that if terrorists
plan to launch an attack, the plot must involve
people (the terrorists, their financiers, and so
forth). The transactions all these people conduct
will manifest in databases owned by the public,
commercial, and government sectors and will
leave a signature, detectable clues, in the
information space. Because terrorists operate
worldwide, data associated with their activities
will be mixed with data about people who aren't
terrorists. If the government wants access to this
activity data, then it must also have some way to
protect the privacy of those who aren't involved
in terrorism.
III. IT ENABLED COUNTER TERRORISM
INFRASTRUCTURE IN DEVELOPED
COUNTRIES
Law enforcement agencies in developed countries
have become more vigilant about criminal and
terrorist activities. Countering modern terrorism
is a multi and interdisciplinary activity. That is
why researchers in the natural, computational,
and social sciences as well as engineering,
medicine, and many other fields have directed
their research in science and technology to help
enhance the capabilities in fighting the new
counterterrorism war.
Amongst all the technologies, information
technology has been cited as the most important
tool in making a country safer from terrorism. IT
can support intelligence and knowledge discovery
by collecting, processing, analyzing, and
developing applications for terrorism- and
crime-related data. Federal, state, and local authorities
can use the results to make timely decisions,
select strategies and tactics, and allocate
appropriate resources to detect, prevent, and
respond to attacks.


Figure 1: Counter terrorism framework.
Figure 2: Web-like structure

In its report, National Strategy for Homeland
Security, the US Department of Homeland Security
(DHS) identifies six critical mission areas where
IT can contribute to accomplishing strategic
national security objectives. This hypothesis has
several inherent critical challenges. First, can
counterterrorism analysts imagine and
understand the numerous signatures that
terrorist plans, plots, and activities will create?
Second, if they do understand these signatures,
can analysts detect them when they're embedded
in a world of information noise before the attacks
happen (in this context, noise refers to
transactions corresponding to non-terrorists)?
Finally, can analysts detect these signatures
without adversely violating the privacy or civil
liberties of non-terrorists? Ultimately, the goal
should be to understand the level of improvement
possible in our counterterrorism capabilities if the
government could use advanced information
technologies and access a greater portion of the
information space, but also consider the impact,
if any, on policies such as privacy, and then
mitigate this impact with privacy protection
technology and corresponding policy.
1. Preemptiveness through intelligence and warning
Terrorists must plan and prepare before
executing an attack. They usually employ false
identities as part of their deceptive strategy. By
looking for activity patterns using data mining
techniques, IT makes it possible to detect
deceptive identities. By employing other
surveillance and monitoring techniques, we can
issue timely, critical alerts through intelligence
and warning systems and prevent attacks or
crimes.
2. Smart borders
By sharing travel information on borders, such as
traveler identities, images, fingerprints, vehicles
used, and other characteristics, we can greatly
improve counterterrorism and crime-fighting
capabilities. Technologies such as information
sharing and integration, collaboration and
communication, biometrics, and image and speech
recognition can help achieve this.
3. Domestic counterterrorism
Terrorism has grown into organized crime, like
gang activity and narcotics trafficking, where
terrorists might participate in local crimes to
generate funds for international or domestic
terrorism. IT that helps find this relationship
between domestic and international terrorism is
therefore valuable. In addition, monitoring
criminal use of email and web pages can help
public safety personnel and policy makers.
4. Protecting critical infrastructure and key assets
Roads, bridges, water supplies, and many other
physical service systems are critical infrastructure
and key national assets. Their vulnerabilities
make them potential targets of terrorist attacks.
Virtual (cyber) infrastructures such as the
Internet are also vulnerable to intrusions and
other threats. To monitor these assets, we need
not only physical devices, such as sensors and
detectors, but also advanced information
technologies that can model normal use behaviors
and distinguish abnormal behaviors from them.
Such information can guide the selection of
protective or reactive measures to secure these
assets from attacks.
5. Defending against catastrophic terrorism
Terrorist attacks that use weapons of mass
destruction or other means, like hijacking
commercial airlines for suicide missions, to kill
thousands of civilians at a time can devastate a
society. Information systems that effectively
collect, access, analyze, and report data relevant
to catastrophic events are critical to helping
prevent, detect, and manage responses to these
attacks.
6. Emergency preparedness and responses
Prompt and effective responses reduce the
damage in national emergencies. In addition to
helping defend against catastrophic terrorism, IT
can help design and test optimized response
plans, identify experts, train response
professionals, and manage the consequences of an
attack. Information systems that facilitate social
and psychological support for attack victims can
help society recover from disasters.
Among the ways technology can help find
terrorists before they strike are:
1. Improved data sharing
Combining criminal records and intelligence
information from a variety of federal, state, and
local agencies that can be accessed wirelessly to
identify wanted criminals and suspected
terrorists when they encounter law enforcement
or attempt to enter secure facilities.
2. Smart ID cards with biometric identifiers
Adding chips containing thumbprint scans or
other biometric data to drivers' licenses, as well
as standardized security features for preventing
forgery and fraud.
3. Smart visas and improved border security
Placing biometric information on visas to identify
visitors, keep track of their entry and exit, and
confirm compliance with the terms of their entry,
and protecting unguarded stretches of the
borders.
4. Digital surveillance
Extending conventional principles of law
enforcement and surveillance to the Internet by
permitting surveillance of email and other
electronic data will help prevent cyber crimes and
cyber terrorist attacks.
5. Face recognition technology
Deploying face recognition systems that can detect known
terrorists in crowds at potential targets such as the Olympics and
other national festivals or social, cultural, or
religious events.
IV. COUNTER-TERRORISM DOMAIN
CHALLENGES
Intelligence and security agencies already gather
large amounts of data from various sources. In
addition to the usual difficulties of processing and
analyzing large data stores, counterterrorism and
crime-fighting applications pose some unique IT
problems and challenges.

Table 1: Simple crime example
1 Distribution of activities
Most of the time terrorists have certain tactical
advantages. They are able to choose the time,
place, and method of their attacks. They get fake
identities and use civilians as a shield. As a result,
investigations must cover multiple offenders and
criminal activities in different places at different
times. This is presently difficult, given limited
intelligence and security agency resources.
Moreover, as computer and Internet technologies
advance without adequate laws governing their
usage, criminals are using cyberspace to commit
various types of cyber-crimes under the disguise
of ordinary online transactions and
communications. Table 1 gives a simple
example of a crime list. The type of crime is robbery,
and it is the most important attribute.
Rows 1 and 3 show a simple crime pattern in which
the suspect descriptions match and the victim profiles
are also similar. The aim is to use
data mining to detect much more complex
patterns, since in real life there are many
attributes or factors for a crime and often only
partial information about the crime is available. In the
general case it is not easy for a computer
data analyst or detective to identify these patterns
by simple querying. Clustering techniques
from data mining therefore come in handy for dealing with
enormous amounts of data and with noisy
or missing data about crime incidents.
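To make the clustering idea concrete, the following is a minimal sketch (not the authors' implementation) that clusters a few crime records with k-means after weighting the more significant attributes; the attribute names, weights, and records are purely illustrative.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical records: (crime type, suspect description, victim profile)
records = [
    ("robbery", "tall male, dark jacket", "elderly shopkeeper"),
    ("burglary", "unknown", "apartment"),
    ("robbery", "tall male, dark jacket", "elderly shopkeeper"),
    ("assault", "short male", "young adult"),
]
weights = [3.0, 2.0, 1.0]   # expert-assigned significance of each attribute

# One-hot encode each categorical attribute by hand and apply its weight.
columns = []
for a, w in enumerate(weights):
    for v in sorted({r[a] for r in records}):
        columns.append([w if r[a] == v else 0.0 for r in records])
X = np.array(columns).T     # rows = records, columns = weighted indicator features

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)               # the two matching robberies should fall in one cluster

In practice the expert-assigned weights would come from the semi-supervised scheme mentioned later in the paper, and the records would carry many more attributes.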
2 Heterogeneous sources and formats
Large data volumes and diverse sources and
formats create significant challenges, including
information stovepipes and information
overloads. The intelligence and security domain
differs from conventional domains such as
marketing, finance, and medicine. These domains
can collect data from particular sources, such as a
company's sales records or a patient's hospital
medical histories, but the intelligence and security
domain doesn't have a well-defined data source.
Investigators must gather both authoritative
information (for example, crime incident reports,
telephone records, financial statements, and
immigration and customs records) and open
source information (news stories, journal articles,
books, and Web pages). Data formats range from
structured database records to unstructured text,
image, audio, and video files. Important
information such as criminal associations might
be available but only in unstructured, multilingual
texts, which are difficult to access and retrieve.
Moreover, as data volume increases, extracting
intelligence and knowledge from it becomes more
difficult.
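As a toy illustration of pulling associations out of unstructured text (not from the paper), the sketch below counts co-occurrences of known names within the same sentence of open-source reports; the names and report snippets are invented.

import itertools, re
from collections import Counter

# Invented open-source snippets; in practice these would be news stories or reports.
reports = [
    "Suspect A met Suspect B near the harbour. Suspect B later phoned Suspect C.",
    "Witnesses placed Suspect A and Suspect C at the same address.",
]
known_names = ["Suspect A", "Suspect B", "Suspect C"]

links = Counter()
for report in reports:
    for sentence in re.split(r"[.!?]", report):
        present = [n for n in known_names if n in sentence]
        # every pair of names mentioned in the same sentence counts as one association
        for pair in itertools.combinations(sorted(present), 2):
            links[pair] += 1

print(links.most_common())  # candidate association links, strongest first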
Table 2: The challenging requirements for intelligence collection, analysis and reporting.
3 Crime and intelligence analysis techniques
IT has developed several potentially helpful tools
and methodologies, including data integration,
data analysis, text mining, image and video
processing, and evidence combination. However,
how to employ them in the intelligence and
security domain remains an unanswered
question, as does how to use them effectively in
national security. Turning raw data into
actionable intelligence requires artificial intelligence (AI)
and related technologies such as data
mining, text mining, Web mining, natural language
processing, planning, reasoning, conflict
resolution, link analysis, and search algorithms.
Two fundamental problems usually prevent
government agencies from building an efficient
government-wide information system.
First, government acquisition of information
systems has not been routinely coordinated. Over
time, hundreds of new systems were acquired to
address specific agency requirements. Agencies
have not pursued compatibility across the federal
government or with provincial and local entities.
Organizations have evolved into islands of
technology: distinct networks that obstruct
efficient collaboration.
Second, legal and cultural barriers often prevent
agencies from exchanging and integrating
information. Information-sharing capabilities are
similarly deficient at the state and local levels.
Many states maintain terrorism, gang, and drug
databases that other states cannot access. In
addition, there are deficiencies in the
communications systems used by municipalities
throughout the country. If an attack were to occur
today, most provincial and local first responders
would not be using compatible communications
equipment. Wireless technology used by most
communities is outdated.
These problems can be addressed by
implementing the following:
Integrate information sharing across the federal
government;
Integrate information sharing across state and
local governments, private industry, and citizens;
Adopt common meta-data standards for
electronic information relevant to homeland
security;
Improve public safety emergency
communications; and
Ensure reliable public health information.
V. CONCLUSIONS AND FUTURE DIRECTION
We looked at the use of data mining for
identifying crime patterns using
clustering techniques. Our contribution here was
to formulate crime pattern detection as a machine
learning task and thereby use data mining to
support police detectives in solving crimes. We
identified the significant attributes using an expert-
based semi-supervised learning method and
developed a scheme for weighting the
significant attributes. Our modeling technique
was able to identify the crime patterns from a
large number of crimes, making the job of crime
detectives easier. Some of the limitations of our
study include that crime pattern analysis can only
help the detectives, not replace them. Also, data
mining is sensitive to the quality of input data, which
may be inaccurate, have missing information, or be
prone to data entry errors.
These techniques also need a detective's close involvement in the
initial phases. As a future extension of this study
we will create models for predicting the crime
hot-spots that will help in the deployment of
police at most likely places of crime for any given
window of time, to allow most effective utilization
of police resources. We also plan to look into
developing social link networks to link criminals,
suspects, gangs and study their interrelationships.
Additionally, the ability to search suspect
descriptions in regional and FBI databases, traffic
violation databases from different states, etc., to
aid crime pattern detection or, more
specifically, counterterrorism measures will also
add value to this crime detection paradigm.
VI. REFERENCES
[1]. Hsinchun Chen, Wingyan Chung, Yi Qin,
Michael Chau, Jennifer Jie Xu, Gang Wang, Rong,
Homa Atabakhsh, Crime Data Mining: A
Framework and Some examples
[2]. C McCue, Using Data Mining to Predict and
Prevent Violent Crimes, available at:
http://www.spss.com/dirvideo/richmond.htm?so
urce=dmpage&zone=rtsidebar
[3]. R. Popp et al., Countering Terrorism through
Information Technology, Comm. ACM, vol. 47, no.
3, 2004, pp. 36-43.
[4]. Whitepaper, Oracle's Integration Hub for
Justice and Public Safety, Oracle Corp. 2004,
available
at:http://www.oracle.com/industries/governmen
t/IntegrationHub_Justice.pdf.
[5]. Daily The News, Pakistan, Printed October
12, 2005.
[6]. Jeffrey W. Seifert. CRS Report for Congress
Congressional Research Service, The Library of
Congress, updated December 16, 2004.
[7]. National Research Council, Making the Nation
Safer: The Role of Science and Technology in
Countering Terrorism. Washington, DC:
Committee on Science and Technology for
Countering Terrorism, U.S. National Research
Council 2002.
[8]. L. Sweeney, Weaving Technology and Policy
Together to Maintain Confidentiality, J. Law,
Medicine and Ethics, vol. 25, nos. 2-3, 1997, pp.
98-110.








































































MONTE CARLO- LINE OF ACTION
Sowmya (I Sem M.Tech), Velammal M (Asst. Professor)
Department Of Computer Science and Engineering
The Oxford College Of Engineering, Bangalore
swm.bhat@gmail.com, velammaljegan@yahoo.co.in
_____________________________________________________________________________________________________________________________
ABSTRACT Alpha beta based Line of Action is a
board game, of the same general type as Chess, Go,
or Othello. LOA was invented by Claude Soucie, and
described in A Gamut of Games by Sid Sackson. A
line of Action is played on a standard chessboard,
with the same algebraic notation for ranks and
files. Each player controls twelve checkers, which
are initially arrayed along the edges of the board. The object of the game is to
bring all of one's checkers together into a
contiguous body so that they are connected
vertically, horizontally, or diagonally. The success of
Monte-Carlo Tree Search (MCTS) in many games where
alpha-beta search has failed motivates its use for LOA.

KEYWORDS
Monte-Carlo Tree Search, Lines of Action.
I. INTRODUCTION
For decades alpha beta search has been the
standard approach used by programs for playing
two-person zero-sum games such as chess and
checkers. Over the years many search
enhancements have been proposed for this
framework that further enhances its
effectiveness. This traditional game tree- search
approach has, however, been less successful for
other types of games, in particular where a large
branching factor prevents a deep look ahead or
the complexity of game state evaluations hinders
the construction of an effective evaluation
function.
The alpha-beta search includes
The object of a search is to find a path
from the starting position to a goal
position
In a puzzle-type problem, you (the
searcher) get to choose every move
In a two-player competitive game, you
alternate moves with the other player
The other player doesn't want to reach
your goal
Your search technique must be very
different


The program uses a highly effective MCTS variant
that has been imbued with numerous
enhancements. Modifications were made to both the
selection and the play-out steps. Finally, by carrying
useful tree information around as the game
advances and by fine-tuning various search control
parameters further performance gains were
possible. Collectively these enhancements resulted
in an MCTS variant that outperforms even the world's
best alpha-beta-based LOA player.

II. LINE OF ACTION
In Physics the line of action of a force F expresses
the geometry of how F is applied. It is the line
through the point at which F is applied and along
the direction in which F is applied. The concept is
essential, for instance, for understanding the net
effect of multiple forces applied to a body. As an
example, if two forces of equal magnitude act
upon a rigid body along the same line of action
but in opposite directions, then they have no net
effect; loosely speaking, they cancel one another
out. But if, instead, their lines of action are not
identical, but merely parallel, then their effect is to
create a moment on the body, which tends to
rotate it.

A. How to play
Equipment: An ordinary checkerboard is
all that's needed.
Initial Setup: In the standard version of
the game, the black checkers are placed in
two rows along the top and bottom of the
board, while the white stones are placed
in two rows at the left and right of the
board.
The Object of the Game: Is to move your
pieces until they are all in one connected
group. Diagonals are considered to be
connected.





B. The Rules of the game
Black moves first
Each turn, the player to move moves one
of his pieces, in a straight line, exactly as
many squares as there are pieces of either
colour anywhere along the line of
movement. (These are the Lines of Action.)
You may jump over your own pieces.
You may not jump over your opponent's
pieces, but you can capture them by
landing on them.
C. The "fine print" rules
If one player is reduced by captures to a
single piece, that is a win for the captured
player.
If a move simultaneously creates a win for
both the player moving and the opponent,
the player moving wins. There are
actually quite a few unusual endgames
which are at least theoretically
possible. The canonical "unusual
endgame" is this, where both players
achieve a winning position
simultaneously

III. MONTE-CARLO TREE SEARCH

A play out is the basis of all Monte Carlo methods.
It is a fast game played with random moves from a
starting position to the end of the game,
generating either a score or simply a result
(win/loss). A play-out can be light (completely
random moves) or heavy (moves are biased based
on heuristics, such as pattern libraries, shape,
group status, move history, killer moves, etc.). MCTS is
based on a randomized exploration of the search
space. Using the results of previous explorations,
the algorithm gradually builds up a game tree in
memory, and successively becomes better at
accurately estimating the values of the most
promising moves.
Place the stone on the board you want to
find the value for. This is now your
beginning board position.
Play a series of random moves (presented
from a move generator) from your
beginning board position. Evaluate the
board. Return to the beginning board
position and repeat the play of series a
dozen times, summing the board
evaluations.

Go back to the initial board, place a new
stone, and then play a new series of random games from
that position; repeat for the next stone,
and again, and again.
The opponent's responses could also be
weighted, so that better responses are
more likely to be chosen by the random
move function. This would result in slowly building
a tree of the best responses, i.e., the most
likely moves.
Find the stone that had the best sum.
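A minimal sketch of this plain (flat) Monte-Carlo evaluation is given below; the game rules are passed in as placeholder functions, since the actual LOA move generator and scorer are not part of this paper's text.

import random

def flat_monte_carlo(position, legal_moves, play, playout_score, n_playouts=100):
    """Score every legal move by summing random play-out results (a 'light' play-out).
    legal_moves/play/playout_score stand in for a real Lines of Action engine."""
    best_move, best_sum = None, float("-inf")
    for move in legal_moves(position):
        total = sum(playout_score(play(position, move)) for _ in range(n_playouts))
        if total > best_sum:                 # keep the stone/move with the best summed score
            best_move, best_sum = move, total
    return best_move

# Tiny usage example with a throwaway "game" whose play-outs just score randomly.
if __name__ == "__main__":
    best = flat_monte_carlo(
        position=None,
        legal_moves=lambda pos: ["a", "b", "c"],
        play=lambda pos, m: m,
        playout_score=lambda pos: random.random(),
    )
    print(best)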

A. Initial layout:

A line of Action is played on a
standard chessboard. Each player controls twelve
checkers, which are initially arrayed as in fig 1


Fig1 Initial Position

B. Movement diagrams:

A checker may not jump over an enemy checker.
Thus in the diagram below, white can't play a6-d6,
even though there are three checkers in row 6.
White might instead play a6-c4, moving two
spaces because there are two checkers in the
diagonal (a6-f1) in which white is moving, as in fig 2.
Fig 2 Diagonal move of a checker
A checker may jump over friendly checkers.
Thus black may continue with e8-b5, jumping his
own checker. He moves three spaces because
there are three checkers in the diagonal (a4-e8) in
which he is moving, as in fig 3.
Fig 3 Checker jump over friendly checker
A checker may land on a square occupied by an
enemy checker, resulting in the latter's capture
and removal from the game. For example, white
may play h3-f1, capturing the black checker on f1,
as in fig 4.






Fig 4 Capture of Opponent
A player who is reduced to a single checker wins
the game, because his pieces are by definition
united. If a move results, due to a capture, in each
player having all his pieces in a contiguous body,
then either the player moving wins or the game is
a draw, depending on the rules in force at the
particular tournament.

C. The four strategic steps:

The four strategic steps of MCTS are discussed in
detail below. We will demonstrate how each of
these steps is used in the Monte-Carlo LOA program;
they are represented in the form of a tree as in fig 5.

1) Selection: Selection picks a child to be
searched based on previously gained information. It
controls the balance between exploitation and
exploration. On the one hand, the task often
consists of selecting the move that leads to the
best results so far (exploitation). On the other
hand, the less promising moves still must be tried,
due to the uncertainty of the evaluation
(exploration). We use the UCT (Upper Confidence
Bounds applied to Trees) strategy, enhanced with
Progressive Bias. UCT is easy to implement and
used in many Monte-Carlo Go programs. PB is a
technique to embed domain-knowledge bias into
the UCT formula. It is successfully applied in the
Go program Mango. UCT with PB works as
follows. Let I be the set of nodes immediately
reachable from the current node p. The selection

strategy selects the child k of the node p that
satisfies Formula 1:

k \in \operatorname{argmax}_{i \in I} \left( v_i + C \sqrt{\frac{\ln n_p}{n_i}} + \frac{W \cdot P_{mc}}{n_i + 1} \right) \quad (1)

where v_i is the value of the node i, n_i is the visit
count of i, and n_p is the visit count of p. C is a
coefficient, which has to be tuned experimentally. The last term
is the PB part of the formula. W is a
constant, which has to be set manually (let W =
100). P_mc is the transition probability of a move
category mc. For each move category (e.g.,
capture, blocking) the probability that a move
belonging to that category will be played is
determined. The probability is called the
transition probability. This statistic is obtained
from game records of matches played by expert
players. The transition probability for a move
category P_mc is calculated as follows:

P_{mc} = \frac{n_{played}(mc)}{n_{available}(mc)}

where n_played(mc) is the number of game positions
in which a move belonging to category mc was
played, and n_available(mc) is the number of positions
in which moves belonging to category mc were
available.
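The selection rule can be written as a small helper; the sketch below assumes each child node stores its value, visit count, and the transition probability of the move leading to it (these field names are ours, not the program's).

import math

def select_child(parent, C=1.0, W=100.0):
    """Pick the child maximizing v_i + C*sqrt(ln n_p / n_i) + W*P_mc/(n_i + 1)
    (Formula 1). Child attributes .value, .visits, .transition_probability are assumed."""
    def score(child):
        if child.visits == 0:
            return float("inf")          # try unvisited children first
        explore = C * math.sqrt(math.log(parent.visits) / child.visits)
        bias = W * child.transition_probability / (child.visits + 1)   # Progressive Bias term
        return child.value + explore + bias
    return max(parent.children, key=score)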

2) Play-out: The play-out step begins when we
enter a position that is not a part of the tree yet.
Moves are selected in self-play until the end of the
game. This task might consist of playing plain
random moves or better pseudo-random moves
chosen according to a simulation strategy. It is
well-known that the use of an adequate
simulation strategy improves the level of play
significantly. The main idea is to play interesting
moves according to heuristic knowledge. In our
Monte-Carlo LOA program, the move categories
together with their transition probabilities are
used to select the moves pseudo-randomly during
the play-out.

A simulation requires that the number of
moves per game is limited. When considering the
game of LOA, the simulated game is stopped after
200 moves and scored as a draw. The game is also
stopped when heuristic knowledge indicates that
the game is effectively over. When an evaluation
function returns a position assessment that
exceeds a certain threshold (i.e., 700 points),
which heuristically indicates a decisive advantage,
the game is scored as a win. If the evaluation
function returns a value that is below a mirror
threshold (i.e., -700 points), the game is scored as
a loss.
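A sketch of the play-out loop with the cut-offs described above is shown below; the evaluation and move-selection functions stand in for the program's actual heuristics and move categories.

def play_out(position, evaluate, pseudo_random_move, apply_move,
             max_moves=200, threshold=700):
    """Play pseudo-random moves until the game is decided.
    Returns +1 (win), -1 (loss) or 0 (draw) from the root player's point of view.
    evaluate/pseudo_random_move/apply_move are placeholders for the real LOA engine."""
    for _ in range(max_moves):
        score = evaluate(position)
        if score >= threshold:      # heuristically decisive advantage: score as a win
            return 1
        if score <= -threshold:     # mirror threshold: score as a loss
            return -1
        position = apply_move(position, pseudo_random_move(position))
    return 0                        # stopped after 200 moves: scored as a draw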

3) Expansion: Expansion is the strategic task that
decides whether nodes will be added to the tree.
Here, we apply a simple rule: one node is added
per simulated game. The added leaf node L
corresponds to the first position encountered
during the traversal that was not already stored.

4) Back propagation: Back propagation is the
procedure that propagates the result of a
simulated game k back from the leaf node L,
through the previously traversed node, all the
way up to the root. The result is scored positively
(R_k = +1) if the game is won, and negatively (R_k = -1)
if the game is lost. Draws lead to a result R_k = 0.
A back propagation strategy is applied to the
value v_L of a node. Here, it is computed by taking
the average of the results of all simulated games
made through this node: v_L = (1/n_L) \sum_k R_k, where n_L is the visit count of L.
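Putting the four steps together, a compact, game-agnostic MCTS skeleton might look like the sketch below; the UCT constant, field names, and the play-out function are placeholders rather than the program's actual code.

import math

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def mcts(root_state, legal_moves, apply_move, play_out, n_simulations=1000, C=1.0):
    root = Node(root_state)
    for _ in range(n_simulations):
        node = root
        # 1) Selection: descend with UCT while the node is fully expanded.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = max(node.children, key=lambda c: c.value +
                       C * math.sqrt(math.log(node.visits) / (c.visits + 1e-9)))
        # 2) Expansion: add one new child per simulated game.
        untried = legal_moves(node.state)[len(node.children):]
        if untried:
            node.children.append(Node(apply_move(node.state, untried[0]), parent=node))
            node = node.children[-1]
        # 3) Play-out: finish the game (pseudo-)randomly and score it.
        result = play_out(node.state)
        # 4) Back propagation: average the results along the traversed path.
        while node is not None:
            node.visits += 1
            node.value += (result - node.value) / node.visits
            node = node.parent
            result = -result   # flip the result for the opponent's perspective
    return max(root.children, key=lambda c: c.visits)  # e.g. the most-visited child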


D. Parallelization:

The parallel version of our MC-LOA program uses
the so-called single-run parallelization, also
called root parallelization. It consists of building
multiple MCTS trees in parallel, with one thread per tree.
These threads do not share information with each
other. When the available time is up, all the root
children of the separate MCTS trees are merged
with their corresponding clones. For each group
of clones, the scores of all games played are
added. Based on this grand total, the best move is
selected. This parallelization method only
requires a minimal amount of communication
between threads; root parallelization performs remarkably
well in comparison to other parallelization
methods. However, root parallelization does not
scale well for a larger number of threads. An
alternative is to use tree parallelization, which
had good results in Computer Go. This method
uses one shared tree from which several
simulated
games are played simultaneously.
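Root parallelization is easy to sketch with independent searches whose root statistics are merged; the run_mcts helper is assumed to return per-move score totals, which is our simplification rather than the program's actual interface.

from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def root_parallel_search(root_state, run_mcts, n_threads=4):
    """Run one independent MCTS tree per thread, then merge the root children:
    for each move, add up the scores of all games played and pick the best total.
    run_mcts(state) is assumed to return a dict {move: summed_score}."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = pool.map(run_mcts, [root_state] * n_threads)
    totals = Counter()
    for per_tree_scores in results:
        totals.update(per_tree_scores)        # merge clones of the same root child
    return totals.most_common(1)[0][0]        # move with the best grand total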

Fig 5 Outline of Monte-Carlo Tree Search

IV. MONTE-CARLO TREE SEARCH SOLVERS

Although MCTS is unable to prove the game-
theoretic value, in the long run MCTS equipped
with the UCT formula is able to converge to the
game-theoretic value. For a fixed termination
game like Go, MCTS is able to find the optimal
move relatively fast. But in a sudden-death game
like LOA, where the main line towards the
winning position is narrow, MCTS may often lead
to an erroneous outcome because the nodes'
values in the tree do not converge fast enough to
their game-theoretical value. For example, if we
let MCTS analyse the position in Fig 6 for 5
seconds, it selects c7xc4 as the best move,
winning 67.2% of the simulations. However, this
move is a forced 8-ply loss, while f8-f7 (scoring
48.2%) is a 7-ply win. Only when we let MCTS
search for 60 seconds does it select the optimal move.
For reference, we remark that an alpha-beta search takes
less than a second in this position to select the best move
and prove the win. We designed a new variant
called MCTS-Solver, which is able to prove the
game-theoretical value of a position. The back
propagation and selection mechanisms have been
modified for this variant.

Fig 6 White to move

A. Back propagation:

The play-out step returns the values {1, 0, -1} for
simulations ending in a win, draw, or loss,
respectively. In regular MCTS the same is true for
terminal positions occurring in the search tree
(built by the MCTS expansion step). In the MCTS
Solver, terminal win and loss positions occurring
in the tree are handled differently. Draws are
generally more problematic to prove than wins
and losses; however, because draws happen only
in exceptional cases in LOA, we decided
not to handle them specially for efficiency reasons;
terminal wins and losses are instead assigned +1 or -1,
respectively. A special provision is
then taken when backing such proven values up
the tree. There are three cases to consider, as
shown in Fig.7. First, when a simulation backs up
a proven loss from a child c to a parent p,
the parent node p becomes, and is labelled as, a
proven win; that is, the position is won for
the player at p because the move played leads to a
win.
When backing up a proven win from c to p,
one must, however, also look at the other children
of p to determine ps value. In the second case,
when all child nodes of p are also proven wins,
then the value of p becomes a proven loss,
because all moves lead to a position lost for p
(middle backup diagram in the figure). However,
the third case occurs if there exists at least one
child with a value different from a proven
win. Then we cannot label p as a proven loss.
Instead, p gets updated as if a simulation win were
being backed up from node c. Non-proven values
are backed up as in regular MCTS.
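The three backup cases can be sketched as follows; proven values are represented here by the strings "WIN" and "LOSS" instead of infinities, and the node attributes are our own naming, not the program's.

WIN, LOSS = "WIN", "LOSS"   # stand-ins for the proven game-theoretical values

def back_up_proven(child, parent):
    """Sketch of the three MCTS-Solver backup cases (node attributes assumed:
    .proven, .children, .visits, .value; values are from the player to move at each node)."""
    if child.proven == LOSS:
        # Case 1: a move leading to a lost child is a win for the parent.
        parent.proven = WIN
    elif child.proven == WIN:
        if all(c.proven == WIN for c in parent.children):
            # Case 2: every move leads to a position lost for the parent.
            parent.proven = LOSS
        else:
            # Case 3: cannot prove the parent; update it as an ordinary simulated win.
            parent.visits += 1
            parent.value += (1 - parent.value) / parent.visits
    # Non-proven results are backed up exactly as in regular MCTS.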

B. Selection:

As seen in the previous subsection, a node can
have a proven game-theoretical value (a proven win or loss), so
the question arises how these game-theoretical
values affect the selection strategy. When entering
a node with such a proven value, that value can
simply be returned without any selection taking
place. A more interesting case is when the node
itself has a non-proven value but some of its
children have.
Assume that one or more moves of node p are
proven to lead to the loss for the player to move in
p. It is tempting to discard them in the selection
step based on the argument that one would never
pick them. However, this can lead to
overestimating the value of node p, especially
when moves are pseudo-randomly selected by the
simulation strategy. For example, in Fig.8 we have
three one-ply subtrees. Leaf nodes B and C are
proven to be a loss (for the player to move in A),
as indicated in the figure; the numbers below the other
leaves are the expected pay-off values (also from
the perspective of the player to move in A).
Assume that we select the moves with the same
likelihood. If we pruned the loss nodes, we
would prefer node A above E; the average of A
would be 0.4 versus 0.37 for E. It is easy to see that A
is overestimated, because E has more good moves.
Conversely, if we do not prune proven loss nodes,
we run the risk of underestimation, especially
when we have a strong preference for certain
moves or when we would like to explore our options.
Assume that
we have a strong preference for the first move in
the subtrees of Fig.8. We would prefer node I
above A. It is easy to see that A is underestimated
because I has no good moves at all.

C. Final Move Selection:

For standard MCTS several ways exist to select
the move finally played by the program in the
actual game. Often, it is the child with the highest
visit count, or with the highest value, or a
combination of the two. In practice, it does not
matter too much which of the approaches is used
given that a sufficient amount of simulations for
each root move has been played. However, for
MCTS-Solver it does somewhat matter. Because of
the back propagation of game-theoretical values,
the score of a move can suddenly drop or rise.
Therefore, we have chosen a method called secure
child. It is the child that maximizes the quantity
v + A/\sqrt{n}, where A is a parameter (here, set to
1), v is the node's value, and n is the node's visit
count. For example, if two moves have the same
value, we would prefer the one explored less
often. The rationale has to do with the derivative of
their value: because of the imbalance in the
number of simulations, either the value of the
move more explored must have been dropping, or
the value of the one less explored increasing; in
both cases the one less explored is to be favoured.
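The secure-child rule is essentially a one-liner over the root's children; the attribute names below are assumed.

import math

def secure_child(root_children, A=1.0):
    """Final move selection ('secure child'): maximize v + A / sqrt(n),
    where v is a child's value and n its visit count."""
    return max(root_children, key=lambda c: c.value + A / math.sqrt(c.visits))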


Fig 7 Backup of proven values



Fig 8 Monte-Carlo Subtree

V. MCTS VS. MCTS-SOLVER

In the first series of experiments MCTS and MCTS-
Solver played 1,000 games against each other,
playing both colours equally. They always started
from the same standardized set of 100 three-ply
positions. The thinking time was limited to 5
seconds per move.

Table I. 1,000-game match results

Evaluator              Score (points)   Win %   Winning ratio
MCTS-Solver vs. MCTS   646.5 - 353.5    65%     1.83

The results are given in Table I. MCTS-Solver
outplayed MCTS with a winning score of 65% of
the available points. The winning ratio is 1.83,
meaning that it scored 83% more points than the
opponent. This result shows that the MCTS-Solver
mechanism improves the playing strength of the
Monte-Carlo LOA program.

VI. CONCLUSIONS

This article introduces a new MCTS variant,
called MCTS-Solver. This variant differs from the
traditional MC approaches in that it can prove
game-theoretical outcomes, and thus converges
much faster to the best move in narrow tactical
lines. This is especially important in tactical
sudden-death-like games such as LOA. The
experiments show that a MC-LOA program using
MCTS-Solver defeats the original MCTS program
by an impressive winning score of 65%.
Moreover, when playing against a state-of-the-art
alpha-beta based program, MCTS-Solver performs
much better than a regular MCTS program. Thus,
we may conclude that MCTS-Solver is a genuine
improvement, significantly enhancing MCTS.

VII. REFERENCES
1. Monte-Carlo Tree Search in Lines of Action, Mark H.M. Winands, Yngvi Bjornsson, and Jahn-Takeshi Saito.
2. "Lines of Action - Computer World Champion, including academic research", http://www.personeel.unimaas.nl/m-winands/loa/
3. Sackson, Sid. A Gamut of Games. ISBN 0-486-27347-4.
4. Schmittberger, R. Wayne. New Rules for Classic Games. ISBN 0-471-53621-0 (rules for Lines of Action with example diagrams).
5. Mind Sports Olympiad LOA Results, http://www.boardability.com/result.php?id=lines_of_action, 10 July 2010.













































A SURVEY OF MIDDLEWARE AND NETWORK MANAGEMENT FOR
WIRELESS SENSOR NETWORKS
Mr. G.Srinivasan 1, Ms.S.Divya 2
Lecturer, Student, IV year (CSE), B.E Dept of Computer Science and Engineering, MNM Jain Engineering
College, Chennai
srinivasan_2010@hotmail.com, divyasri1990@gmail.com
________________________________________________________________________________________________________________________

ABSTRACT
A wireless sensor network is a decentralized
wireless network. The network is ad hoc because
it does not rely on a preexisting infrastructure,
such as the routers used in wired networks or the
access points used in managed wireless networks.
Wireless sensor network consists of spatially
distributed autonomous sensors. They are now
used in many industrial application areas
including industrial process, civilian application
areas, monitoring and control, healthcare
applications, home automation and traffic
control. In this paper we survey
middleware and WSN management approaches and
emphasize why they are important.
Index Terms: Sensor Networks,
Middleware, Protocol Stack, Network
Management

I.INTRODUCTION
A network management system is a
collection of tools for network monitoring and
control. A network management system
consists of incremental hardware and software
additions implemented among existing
network components. A centralized network
management system enables the manager to
maintain control over the entire configuration,
balancing resources against need and
optimizing the overall utilization of resources.
A distributed network management system
replaces the single network control center with
interoperable workstations located on LANs
distributed throughout the enterprise.
Distributed management stations are given
limited access for network monitoring and
control. During the past few years
there were various management models
developed [1]. In 1980s computer networks
were at initial stage. They were being
interconnected with large scale. A network
management is having the following functional
areas: Fault Management, Accounting
Management, Configuration and Name
Management, Performance Management and
Security Management.
The middleware concept is very important
in sensor networks. Various middlewares have
been designed in past decades, including
Sensorpedia, TinyDB, Mate, Agilla, TinyLime,
and TinyCubus [2]. The MANNA and
BOSS [1] are good examples for network
management architecture. MANNA provides a
general architecture for managing wireless
sensor network with functional plane,
informational plane and physical plane. The
functional plane is responsible for the
configuration
of the application specific entities; the
information plane represents syntax and
semantics of the entities, the physical plane is
responsible for interface between managed
entities. The BOSS architecture is based on
the UPnP protocol, which provides self-
configuration and self-discovery of devices.
WSN middleware is considered as a software
infrastructure. It is designed for smooth
functioning of networking hardware, operating
systems, network topology and supported
applications. Middleware should support long-
running applications and robust
operation of sensor networks. The virtual
machine approach is very important in the
middleware concept. MagnetOS [10] is one of
the middlewares based on the virtual machine
approach.
Middleware is also classified into the
database approach, the message-oriented
approach, and the application-driven approach. We
discuss various middleware concepts and
network management systems, and this paper
identifies suitable middleware for WSN
management.
II.RELATED WORK
In its earlier stages the wireless sensor network
evolved as a niche technology, and middlewares
were developed along with it. A middleware
acts as an interface between the application and the
operating system. Like a three-tier architecture it
has three levels. Middleware is required to
provide the following services:
standardized system services to diverse
applications
a runtime environment to support and
coordinate multiple applications
mechanisms to achieve adaptive and efficient
utilization of system resources
Stringent power constraints: prolong the
network life time
Sensorpedia [2] is a middleware based
on the social networking and collaboration
principles used by popular websites such as
Wikipedia, Google, and Facebook. TinyDB is a
middleware based on query processing. It is
built on TinyOS; it collects data from
individual nodes and reduces the messages to be
sent. Mate is a stack-based middleware
architecture which breaks a program into capsules of 24 small instructions. These
capsules are capable of self-forwarding. Agilla is
agent-based middleware model. Each node is
having from one to four agents. A single sensor
node can run up to four agents. One node can
run multiple agents at a time. So this
middleware supports multiple applications at a
time. TinyLime is based on the concepts
abstraction model and shared tuple space. It has
lime integration concept, mote interface and
mote level components. TinyCubus is based on
cross-layer framework. It has three frame
works i.e. tiny cross-layer frame work, tiny
configuration engine, tiny data management
frame work. Tenet [3] is an architecture for
embedded sensor networks and is very useful for
tiered embedded networks. In this architecture
the upper tier performs complex tasks and
lower tier motes perform simple tasks. Mobile
agents are very useful for sensor
networks. Agilla [2] is one such mobile-agent-
based middleware. Even though there are various
management approaches for WSN, it is
important to design efficient and
lightweight management that accommodates small
nodes and low-power management features.
Graph Representation
In this paper let us consider three
characteristics for grouping purposes. As
shown in figure 1 we categorize them as
type of management, type of middleware,
and size of the network. The middlewares
are categorized as platform (p),
query (q), stack (s), and agent (a) based.
The size of the network is represented as
the number of nodes deployed, from a few
nodes to hundreds and thousands of
nodes. The management is classified
depending upon cognitive [4] approach
i.e. overlay sensor network for monitoring
and controlling(oM), policy based
management infrastructure(pM), policy
enforcement points(eM). The network
management system is used to manage all
middleware, layers, agents, etc. The H-
WSNMS [5] is a network management
system which is very useful for the reuse of
management components. In this system
the Virtual Commands Set was introduced.
The management function is specified by a
virtual command. The three tier approach
was introduced in this system. The client
tier, agent tier and gateway tier are the
three tier architecture used in H-WSNMS
[5]. Most of the WSN are distributed
application system. So the middleware
should be designed to support such
distributed applications. Distributed
relational databases offer abstraction and
heterogeneity. For middleware languages
c++ or Java are used. TinyOS is the most
widely used operating system in wireless
sensor networks. For TinyOS the language
nesC is used. The nesC is a language
based on C. The IDL [6] depicts the
interface to the remote component.
Mainly CORBA and DCOM [6] provide
heterogeneity across programming
languages.




Fig.1. Graph Representation of Sensor Network

B Agent based sensor network
An agent is software which has all local data.
In the H-WSNMS [5] system, the agent tier
has agent servers that provide the
communication between the management
components and the client tier. There are three
kinds of nodes in sensor network. The
common nodes collect various sensor data.
The sink node usually aggregates data from
common nodes. The gateway nodes connect
the sink nodes to external entities. Commonly
the WSN networks should be designed in such
a way that must be able to reconfigure the
network in case of node failure. The JADE [7]
is the middleware with multi-agent based
architecture for develop Java based
applications. Usually the sensor middleware
collects information from various networks. It
is used for gathering and analyzing data and
provides the aggregate data to the end user.
So while designing network management for
wireless sensor network it is important to
consider the middleware. The CALM [7]
middleware is very efficient and flexible. In
figure 2 the middleware design is indicated
clearly. The middleware acts as an interface
between application and operating system.
Here we indicate the necessity of agent based
middleware while designing. The
characteristics of agents are autonomy,
intelligence, mobility and social ability. The
agents are classified into multi-agent and
mobile agent. They are used to accomplish
tasks on behalf of user. So the WSN will be
designed in efficient way.


Fig .2. Agent based middleware architecture

Distributed Computing:
Distributed computing is a wide area for
implementing atomic objects and shared
memory. There are some relations
between distributed computing and
message-driven approach. Web-base
message-driven middleware architecture
[8] is a good example for real-time
applications. In distributed computing we
can implement various resource
allocations and shared memory
algorithms. Because of middleware
controls the resource sharing between
applications and operating systems the
distributed computing is the vital part of
wireless sensor networks.

C. Design of Network Management
For WSN
We have seen various wireless sensor
network management systems. The
traditional WSN management systems are
heterogeneous and distributed environment
applications. In this paper a new approach for
designing WSN management is introduced.
For a wireless sensor network, power
management is a very tedious task. The WSN
management should be designed in order to
sustain battery power. The Evaluation
Framework [9] covers power saving,
scalability, mobility, heterogeneity, and
usability. Due to battery power loss the
entire network will be changed. So the WSN
management should be designed to
reconfigure the network automatically. The
resource-energy, bandwidth and processing
power will be changed dynamically in
Dynamic Network Organization [10]. So WSN
should support long running application
program. Similarly the intrusion detection is
to be considered while designing WSN
management. FT-CoWiseNets [11] is one of
the Fault Tolerance Framework for Wireless
Sensor Networks.

III. NETWORK MANAGEMENT DESIGN
WITH WNaN
In this network management system the
Wireless Network after Next (WNaN) [12]
protocol is included. Todays networking
protocols all drop packets immediately if any
node along the path looses the route to the
destination. This is unacceptable in most
military environments because links change
often. WNaN includes Content Based Access
(CBA) techniques. It allows users to query
the network to find information. CBA can
also automatically pre-place certain types of
critical data (such as maps), around the
network to minimize the time and
bandwidth used to get the data when queries
are entered. The WNaN protocols are
designed for small handheld devices. They
include energy conserving capabilities to
extend battery life and are targeted for
embedded operating systems and
processors. The significant advantages can
be realized by densely deploying low cost
nodes which have been jointly optimized
with network operations. In figure 3 the
wireless networking stacks IEEE 802.15.4
standard Zigbee and 6LoWPAN are
illustrated. Then another network layer
WNaN is also indicated. The coexistence
with 2.4GHz wireless (Zigbee,Wi-Fi), multi-
year battery life expectancy in embedded
applications are achieved by 6LoWPAN. The
Zigbee protocol stack is most flexible
approach for applications development.





Fig . 3 A WNaN based middleware
architecture

IV. CONCLUSION AND FUTURE WORK

In this paper a WNaN protocol based
wireless sensor network management
architecture has been created. It has a lot of
advantages over traditional wireless
sensor network management systems. Similarly
the importance of agent based middleware is
also explained. In future more concentration
should be given for designing middleware.
Similarly for designing network management
architecture, the power consumption of
batteries should also be considered. The Agent-
based WSN power management [13] is based
on a neural network concept. When battery
power is not sufficient, electrical energy
could be harvested from
mechanical energy such as impulse, shock, and
vibration. So in future a lot of research in
this area will be needed in the power
management and middleware design for
wireless sensor networks.




V.REFERENCES

[1] Wireless Sensor Network Management
and Functionality: An Overview,
DimitriosGEORGOULAS, Keith BLOW
Adptive Communication Networks
Research Group,EE, Aston University, Aston
Triangle, B47ET, United Kingdom
doi:10.4236/wsn.2009.14032 Published
Online November 2009
[2] Extending Middleware Frameworks
for Wireless Sensor Networks, Syed Rehan
Afzal, Christophe Huygens and Wouter
Joosen IBBT DistriNet Research Group,
Department of Computer Science,
Katholieke Universiteit Leuven Leuven,
Belgium
{Rehan.Afzal,Christophe.Huygens,
[3]Programming Models for Sensor
Networks: A Survey, RYO SUGIHARA and
RAJESH K. GUPTA University of California,
San Diego, ACM Transactions on Sensor
Networks, Vol. 4, No. 2, Article 8,
Publication date: March 2008.
[4] Cognitive Management Architecture
for Communities of Wireless Distributed
Networks,Janise Y. McNairWireless and
Mobile Systems Laboratory, University of
Florida Gainesville, Florida
[5] H-WSNMS: A Web-Based
Heterogeneous Wireless Sensor Networks
Management System Architecture Wei
Zhao, Yao Liang, Qun Yu, and Yan Sui
Department of computer and Information
Science Indiana University - Purdue
University Indianapolis
[6] MIDDLEWARE, David E.
Bakken1Washington State University,
School of Electrical Engineering and
Computer Science; PO Box 642752;
Washington State University; Pullman,
WA 99164-2752 USA
[7] Agent Based Sensor Network
Middleware Sporting Heterogeneous
Environments Jong-Wan Yoon, Hyung-rok
Seo, Choon-Sung Nam, and Dong-Ryeol
Shin School of Information and
Communication Engineering,
Sunkyunkwan University, Suwon, Korea
[8] A Distributed Semantic Web-Based
Message-Driven Middleware Architecture,
Xiwei Feng School of Computer and
Communication Engineering,Liaoning
Shihua University Fushun, P.R. China
113001, Xiwei Feng ,Chuanying Jia, Jiaxuan
Yang Navigation College, Dalian Maritime
UniversityDalian, P.R. China 116026
[9] An Evaluation Framework for
middleware approaches on Wireless
Sensor Networks, Shuai Tong Helsinki
University of Technology
[10] Middleware for wireless sensor
networks :Challenges and Approaches,
Md.Atiqur Rahman Helsinki University of
Technology
[11] FT-CoWiseNets: A Fault Tolerance
Framework for Wireless Sensor Networks,
Luciana Moreira Sa de Souza SAP
Research Vincenz-Prissnitz-
Strasse,1Karlsruhe,Germany
[12] A neural network approach for
Wireless sensor network power
management, Ahmad
Hosseingholizadeh,Dr. Abdolreza
Abhari,Department of Computer
Science,Ryerson University, Toronto,
Canada









MILLIMETER-WAVE SYSTEM VEHICLE AREA NETWORK FOR MULTIMEDIA
COMMUNICATION
Veena G.N 1, Ashwini.G 2, Manjunath R Kounte 3
RITM, Bangalore
veenagowda.gowda@gmail.com 1, gashwini123@gmail.com 2, manjunath.kounte@gmail.com 3

_____________________________________________________________________________________________________________________________

ABSTRACT
In order to realize a vehicle area network
for multimedia entertainment systems, which
demand capacities of up to 1 Gbps, and vehicle
control links between many parts of the car, which
demand high reliability and low latency, this paper
proposes a millimeter-wave system that offers high
capacity and low latency transmission.
Propagation measurement results show the
maximum delay spread of 6 - 7 ns can be achieved
with omni antennas without complicated
equalizers. Feasible advances in beam forming
antenna performance will reduce the maximum
delay spread inside the car to only 1 - 2 ns. This
paper proves that millimeter-wave vehicle area
networks can support multimedia communications
as well as control links by employing a basic single
carrier modem with simple rake receivers; no
equalization is needed.

KEY WORDS
Vehicle area network, Multimedia entertainment
systems, Wireless harness, Millimeter-waves.

I. INTRODUCTION
The applications of intra-car wireless
communications include video transmission,
hands free phones, control links, sensor links,
portable navigation systems and multimedia
entertainment systems. The car navigation device
is seen as being the key infrastructure of the
vehicle area network (VAN). VAN candidates
include Bluetooth (IEEE802.15.1), ZigBee
(IEEE802.15.4),

and UWB (IEEE802.15.4a). Bluetooth operates in
the 2.4GHz band and its maximum throughput is
up to 1Mbps. ZigBee operates at up to 2.4GHz and
its maximum throughput is a few hundred kbps.
UWB operates at 3.1-10.6GHz and its maximum
throughput is 480Mbps, because its bandwidth
limitation is 500MHz. These systems cannot
replace the existing wire harness due to large
latency and poor reliability. The bit-rate of UWB is
not high enough to support multimedia
applications with multiple video channels. Given
this background, we propose a 60GHz millimeter-
wave VAN that offers the low latency and high
capacity needed. The propagation characteristics
of 60GHz millimeter-wave links have been
measured but only for indoor environments , no
intra-car environment testing has been done up to
now. This paper rectifies this omission in
confirming the feasibility of the 60GHz VAN.
Another advantage of the 60GHz band is that the
whole bandwidth (57 to 64GHz or 59 to 66GHz
depending on the country) is unlicensed and
uncontaminated unlike UWB. The potential
problems of the millimeter-wave VAN are its
reliability and
transmission range, both of which heavily depend
on the propagation environment within the car.
WPAN (Wireless personal area network) systems
at 60GHz are being standardized in IEEE
(IEEE802.15.3c Task Group) and the process is
close to completion. This WPAN targets indoor
communications over links of up to 10m and
extensive propagation measurements have been
made in various environments. The measurement
environments reflect the different usage models
such as short range kiosk data downloading (sync
and go) and uncompressed video streaming .The
TSV-channel-model was proposed by the authors;
it includes a direct path (LOS: Line of sight) and
angle of arrival in addition to the SV model which
has been used for Wireless LAN (no antenna
directivity).
One of our key goals was to clarify whether
complicated equalizers / OFDM systems were
needed to implement intra-car multimedia
communications. Propagation measurements
prove the feasibility of the millimeter wave VAN
for multimedia communications: it employs a
basic single carrier modem with simple rake
receivers; no equalizers are needed.

II. IN-CAR PROPAGATION MEASUREMENTS

A. Measurement set up
Propagation measurements were performed in a
car in order to investigate the received power and
delay spread between the most likely
transmission and reception points. Figure 1
shows the configuration of the measurement
system. The transmitter antenna, with 2.2dBi antenna
gain (omnidirectional), was set at the car
navigation unit, since the unit




Fig. 1. Measurement set up.

is assumed as the center of car entertainment and
wireless harness, transmitting multiple video
channels and collecting monitoring signals to
display. The omnidirectional antenna has a
monopole configuration for easy fabrication, and
transmits and receives vertically polarized
signals. The receiver antenna has antenna gain of
22dBi (half power beamwidth 15 degrees).
Calibration was carried out by direct connection
to the input and output ports of a network
analyzer without antennas. The measurement
frequency band was set to 61- 64GHz and the test
equipment (HP8510C) was used as a signal
generator and signal analyzer. The measurements
were done by rotating the receive antenna 360
degrees (5 degree step), and the 3GHz band was
traversed. The impulse response was obtained by
inverse Fourier transformation of the measured
frequency response of the received signal, and the
delay spread was calculated.

B. Antenna set up in car

A sport-utility vehicle (SUV) type car was used for
the measurements, as shown in Fig. 2. The car
dimensions are 4455mm (long) x 1765mm (wide)
x 1875mm (high). The transmitting antenna was
fixed at the position of the car navigation unit as
mentioned before, while the receiving antenna
was set at three different locations and rotated in
5 degree steps on the horizontal plane. The
driver's window was kept open to allow
connection of the measurement cables. All
measurement instruments and researchers
remained outside the car, only the antennas,
transmitter and receiver modules were put inside
the car. The receiving antenna was set on the
front seat (Rx1), back seat (Rx2), and the luggage
space (Rx3). This setting created a line of sight
(LOS) environment for Rx1, and non-line of sight
(NLOS) environments for Rx2 and Rx3.



Fig. 2. Configuration of Tx and Rx antennas in car.
(a)Side view. (b)Top view.

C. Measurement results

As expected, many reflected waves were received
from various angles; one cluster of received waves
was observed from the front and back seats. A
second cluster from the rear of the car was
observed in the luggage space. Since the power
level of the second cluster was very small, a single
cluster model was adopted as the car's channel
model. As a general channel model for intra-car
communications, the TSV-model is most suitable
since a strong LOS component was observed at
the front seat. The power is normalized by the
maximum averaged power at the front seat. The
delay spread, S is defined as follows .



Here, TD is the averaged delay time and given by
Eq.(2).

p(i):power density of the impulse response
i:excess time delay variable
M:arrival time of the received multipath
component
NICE-2010
Acharya Institute of Technology, Bangalore-560090 139

i = 1 and N: the first and the last samples of the
delay profiles above threshold level
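As a quick numerical check of Eqs.(1) and (2), the short Java sketch below (the delay and power samples are assumed illustrative values, not measured data) computes the averaged delay time T_D and the RMS delay spread S from a power delay profile:

// Computes the averaged delay time T_D and the RMS delay spread S from a
// power delay profile p(i) sampled at excess delays tau[i] (illustrative values assumed).
public class DelaySpread {
    public static void main(String[] args) {
        double[] tauNs = {0.0, 0.5, 1.0, 2.0, 3.5};        // excess delay of each sample (ns), assumed
        double[] p     = {1.0, 0.4, 0.15, 0.05, 0.01};     // power density above threshold, assumed

        double pSum = 0, tdNum = 0;
        for (int i = 0; i < p.length; i++) { pSum += p[i]; tdNum += tauNs[i] * p[i]; }
        double tD = tdNum / pSum;                          // Eq.(2): averaged delay time

        double sNum = 0;
        for (int i = 0; i < p.length; i++) sNum += Math.pow(tauNs[i] - tD, 2) * p[i];
        double s = Math.sqrt(sNum / pSum);                 // Eq.(1): RMS delay spread

        System.out.printf("T_D = %.3f ns, S = %.3f ns%n", tD, s);
    }
}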

1) Front seat (LOS environment): The strong
direct path component was observed at 50
degrees, and a strong reflected wave came from
the passenger's door at -70 degrees. The delay
spreads for both directions were less than 1ns.
Thus, front seat communications can be realized
by simple single carrier (SC) modems without
equalization.
2) Back seat (NLOS environment): Although this
is nominally an NLOS environment, a relatively
strong wave was received from the Tx antenna
direction (-10 degrees); the delay spread was about
3 ns. Larger components were reflected from the
doors of the car, at 25 and -50 degrees. The delay
spreads of these were as small as 1ns. If a directed / beam
these were as small as 1ns. If a directed / beam
forming antenna is used, equalization can be
rendered unnecessary by steering the antenna
toward the strong reflected wave.
3) Luggage space (NLOS environment):
A large direct wave component was observed and
relatively strong reflected waves were received
from the rear window of the car. The measured
delay spreads were as small as 2ns at 0 and 180
degrees. Luggage space communications may
require
a directed / beam forming antenna for adequate
received signal power.
The measurements show that Rician fading was
replaced by Rayleigh fading as the Rx was moved
rearward from the front seat. The amount of delay
spread depends on the Rx antenna direction and the
maximum delay spread is around 6-7ns (full 360
degree Rx rotation). This indicates that there is no
need for equalizers or OFDM to overcome fading if
directive / beam forming antennas are employed;
single carrier modems with rake receivers are
feasible if the delay spread does not exceed
10 symbol periods.

III. CHANNEL MODEL FOR INTRA-CAR COMMUNICATION

A. Path loss model
Path loss models and impulse response equations
are required to design and evaluate wireless
systems. We can model the path loss by using the
common path loss model given by the following
equation:

PL(d) = PL_0 + 10 n log10(d / d_0)  [dB]

where n is the mean path loss exponent and the
free space propagation loss, PL_0, can be calculated
by the following equation:

PL_0 = 20 log10(4 π d_0 / λ_0)  [dB]

where λ_0 is the free space wavelength. For
example, PL_0 is 48.4dB at the reference distance
of d_0 = 10cm at 62.5 GHz.

Fig. 3. Path loss in the car environment (d0=100mm).

Figure 3 shows the relative received power at each
of the three Rx positions. If the beam of the Rx
antenna can be controlled by enhanced beam forming
technology, the beam direction can be aligned to the
peak received power direction. Thus our data set
contains the best beam direction (i.e. maximizing the
received power) and the second best direction
(second peak power). The mean path loss exponents
were extracted by fitting and approximated as linear
lines as shown in Fig. 3.
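For illustration, the following Java sketch evaluates the path loss model above; it reproduces PL_0 ≈ 48.4 dB at d_0 = 10 cm and 62.5 GHz, while the path loss exponent n and the Tx-Rx distance are assumed example values:

// Evaluates PL(d) = PL0 + 10*n*log10(d/d0) with PL0 = 20*log10(4*pi*d0/lambda0).
public class PathLossModel {
    public static void main(String[] args) {
        double c = 3.0e8;               // speed of light (m/s)
        double f = 62.5e9;              // carrier frequency (Hz)
        double d0 = 0.10;               // reference distance (m)
        double n = 2.0;                 // mean path loss exponent (assumed example value)

        double lambda0 = c / f;                                   // free space wavelength
        double pl0 = 20 * Math.log10(4 * Math.PI * d0 / lambda0); // free space loss at d0

        double d = 1.5;                 // Tx-Rx separation (m), assumed
        double pl = pl0 + 10 * n * Math.log10(d / d0);

        System.out.printf("PL0 = %.1f dB, PL(%.1f m) = %.1f dB%n", pl0, d, pl);
    }
}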
B. Impulse response model
A statistical channel model was developed for
intra-car communications from the measurement
results. A single cluster model with simple
exponential decay was adopted. Our statistical
model was developed by modifying the multi-cluster
SV model. The complex impulse response h(t) of the
single cluster model is defined by the following
equation:

h(t) = Σ_{k=0}^{N} α_k exp(jθ_k) δ(t − τ_k)

where k denotes the index of the rays and N is the
last ray over the threshold level of receiver
sensitivity. α_k, θ_k and τ_k denote the path gain,
associated phase and arrival time of each ray,
respectively. The mean path gain is expressed by

E[α_k^2] = α_0^2 exp(−τ_k / γ)

where γ is the decay factor of the path gain. The
phase θ_k is assumed to be an independent uniform
random variable. For simplicity, the model assumes
that the path gain has no amplitude distribution. The
arrival time is assumed to follow a Poisson
distribution:

p(τ_k | τ_{k−1}) = λ exp[−λ (τ_k − τ_{k−1})]

where λ is the arrival rate of rays.

Impulse response of single cluster channel model.
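The single-cluster model can be simulated directly from the three equations above. The following Java sketch (parameter values are assumptions for illustration, not the extracted measurement parameters) draws Poisson ray arrivals, applies the exponential power decay and assigns uniform random phases:

import java.util.Random;

// Generates one realization of the single-cluster channel model:
// Poisson ray arrivals (rate lambda), exponentially decaying mean power (decay factor gamma)
// and independent uniform phases. Parameter values are assumed.
public class SingleClusterImpulseResponse {
    public static void main(String[] args) {
        double lambda = 1.0;      // ray arrival rate (rays/ns), assumed
        double gamma  = 2.0;      // power decay factor (ns), assumed
        double maxDelayNs = 10.0; // generate rays up to this excess delay
        Random rnd = new Random(1);

        double tau = 0.0;
        while (tau <= maxDelayNs) {
            double power = Math.exp(-tau / gamma);            // mean path gain alpha_k^2
            double amp   = Math.sqrt(power);                  // path gain alpha_k
            double phase = 2 * Math.PI * rnd.nextDouble();    // theta_k ~ U[0, 2*pi)
            System.out.printf("tau=%.2f ns, |alpha|=%.3f, theta=%.2f rad%n", tau, amp, phase);
            tau += -Math.log(1 - rnd.nextDouble()) / lambda;  // exponential inter-arrival time
        }
    }
}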

C. Parameter extraction from measured
impulse response
Two intra-cluster parameters, the decay factor γ and
the arrival rate λ, were extracted from the
measurement data. Each ray's amplitude and
arrival time were extracted by the CLEAN
algorithm; this method is a form of simple peak
detection, given the use of the beam forming
technique. The decay factor was estimated from
the linear regression line for each situation; it is
calculated as the decay time at which the power,
α_k^2, decreases to 1/e. The arrival rate was also
estimated in the same manner from the regression
line, so λ is calculated as the gradient of each line.
Since the delay spreads of both the model and the
measurement agree very well, the model is realistic
in expressing the measured
model is realistic in expressing the measurement
response. Additionally, if the best or second best
beam could be captured by beam forming, the
propagation channel is very close to an AWGN
channel. If the best beam is shadowed, a simple
rake receiver can switch to the second best beam
for higher reliability. Even if we consider other
beam directions, the maximum delay spread
observed is 6 - 7 ns. This proves the feasibility of
the millimeter wave VAN that requires only
simple single carrier modems with simple rake
receivers (no equalization) to implement
multimedia communications.




IV. CONCLUSION

An intra-car multimedia communications system
using the 60GHz band has been proposed in order
to achieve data rates of up to 1 Gbps. Propagation
measurements including angle of arrival analyses
were carried out to clarify the propagation
characteristics inside cars. An omnidirectional
antenna and 22dBi conical horn antenna were
used as the Tx and Rx antennas, respectively. Two
or three propagation paths with low delay spread
of 1 ns were observed if the receiving antenna
was set at the best direction. This suggests the
deployment of a beam forming antenna to reduce
delay spread. As a result, WPAN systems are being
standardized in IEEE802.15.3c; since they can use
single carrier modems without equalizers, they
are excellent candidates for realizing intra-car
multimedia communications systems.

REFERENCES

[1] M. Heddebaut, V. Deniau, and K. Adouane, In-vehicle WLAN radiofrequency communication characteristics, IEEE Trans. Intell. Transport. Syst., vol. 5, no. 2, pp. 114-121, June 2004.
[2] Y. Katayama, K. Terasaka, K. Higashikatsuragi, I. Matsunami, and A. Kajiwara, Experimental evaluation of in-vehicle UWB radio propagation characteristics, IEICE Trans. B, vol. J89-B, no. 9, pp. 1815-1819, Sept. 2006.
[3] C. R. Anderson and T. S. Rappaport, In-building wideband partition loss measurements at 2.5 and 60 GHz, IEEE Trans. Wireless Commun., vol. 3, no. 3, pp. 922-928, May 2004.
[4] H. Xu, V. Kukshya, and T. S. Rappaport, Spatial and temporal characteristics of 60-GHz indoor channels, IEEE J. Select. Areas Commun., vol. 20, no. 3, pp. 620-630, Apr. 2002.
[5] IEEE 802.15 WPAN Millimeter Wave Alternative PHY Task Group 3c (TG3c), http://www.ieee802.org/15/pub/TG3c.html.









INTERWORKED WIMAX-3G CELLULAR DATA NETWORKS USING IMS
B. Dukita Nancy (Final year B.Tech),
IT, Kalasalingam University,
wittynanc@gmail.com
Guided by: Mr. C. Balasubramanian (Lecturer, IT Department, Kalasalingam University)
_______________________________________________________________________________________________________________________
ABSTRACT

This paper proposes an architecture for
interworking all-IP networks and analyzes its
performance. The novelty of this framework is that
it freely enables any 3G cellular technology, such as
the Universal Mobile Telecommunications System
(UMTS) or the CDMA2000 system, to interwork
with a given Broadband Wireless Access (BWA)
system, such as the Worldwide Interoperability for
Microwave Access (WiMAX) network or the
Wireless Local Area Network (WLAN). As a
universal coupling mediator for managing these
dissimilar networks, the IP Multimedia Subsystem
(IMS) along with a security mechanism has been
implemented and used to evaluate the QoS behavior.
Finally, a simulation-based (NS2) platform has been
introduced for the verification of the analytical
model and results.

KEYWORDS

IMS, UMTS, WiMAX, SIP, mobile IP.

I. INTRODUCTION

THE IEEE 802.16e-2005 version of
Broadband Wireless Access (BWA) technology,
which is commercially known as Mobile
Worldwide interoperability for Microwave
Access(WiMAX), is rapidly gaining popularity [1].
This has sparked many debates such as: could this
become the dominant technology of the future 4th
Generation (4G) networks, or, has it got the
potential for making the existing 3rd
Generation (3G) cellular networks obsolete? Then
again, mobile WiMAX networks are still in their
infancy, with many technical challenges lying
ahead. The main hindrance to its competitive edge
is the lack of an infrastructure for the WiMAX
Core Network (CN), which makes it impossible to
become a head-to-head competitor against
current cellular networking infrastructures.
Therefore, from a cooperative perspective, if
mobile WiMAX is interworked with the already
well established 3G cellular networks such as the
Universal Mobile Telecommunications System
(UMTS) or the Code Division Multiple Access
based CDMA2000 system, it is more than likely
that WiMAX may become an essential partner of
the future 4G networks. The primary focus of this
paper is to introduce this novel architecture and
investigate various performance measures such as
vertical handoff delay, transient packet loss, jitter
and signaling cost relating to vertical handoff
management, along with security. The paper
further investigates the validity of the proposed
architecture by using an NS2 simulation platform.
The remainder of this paper is organized as
follows. The next sections present an overview of
the concepts used for mobility and session
management of the proposed framework, followed
by the sections on analytical modeling and
performance analysis. Lastly, an NS2 based model
is included for verification prior to the conclusion.

II. IMS BASED SESSION MANAGEMENT

The UMTS Release 5 was the first to introduce the
IMS within its CN [6], for controlling multimedia
sessions. The key elements of the IMS are the Call
State Control Functions (CSCFs) and the Home
Subscriber Server (HSS). A CSCF can be generalized
as a Session Initiation Protocol (SIP) proxy server
and the HSS as a user profile database. The CSCFs
can be distinguished as follows: Proxy-CSCF
(P-CSCF), Interrogating-CSCF (I-CSCF), and
Serving-CSCF (S-CSCF). The P-CSCF is the first to
receive a SIP request; it forwards this request to the
S-CSCF via the I-CSCF. The task of the I-CSCF is to
select the appropriate S-CSCF by checking with the
HSS. The S-CSCF is the actual SIP server which
performs the user registration and handles session
management in the IMS. The IMS uses SIP as the
default protocol for signaling, session, and mobility
management.

III. WIMAX-3G CELLULAR DATA NETWORKING
Our proposed internetworking architecture is
illustrated in
Fig. 1. Each network is connected to the all-IP CN
via its corresponding gateway (i.e., WiMAX via the
Connectivity
Services Network (CSN) Gateway, UMTS via the
GPRS Gateway Support Node (GGSN), CDMA2000
via the PDSN, and the WLAN via a GGSN emulator).
Each network has a MIP-FA (or HA) at one of its
gateways and a local P-CSCF. The remaining
elements of the IMS and the MIP-HA are located at
the home network of the MN. Thus the IMS is used
for centralized session mobility management and
MIP for terminal mobility management.

Fig. 1. The Proposed Interworking Architecture.

The data flow is routed from source to destination
bypassing the home network. Only the SIP based
session control signaling (call setup, call termination,
and session management) gets routed via the home
network. The session control signaling is forwarded
by the P-CSCF of the visiting network to the S-CSCF
(via the I-CSCF) of the home network. A session
handoff scenario from UMTS to WiMAX can be
described as follows. Following the UMTS system
acquisition, setting up the data pipeline takes place.
The IP address allocation for the MN is initiated by
sending the MIPv4 registration request to its HA via
the GGSN (i.e., the MIP-FA) [12]. Next the MN sends
a SIP registration message to the S-CSCF via the
P-CSCF. Once authorized, a suitable S-CSCF gets
assigned and its subscriber profile is sent to this
designated S-CSCF. After the activation of the PDP
context and service registration, the MN is now ready
to establish a session.
As illustrated in Fig. 2, a SIP INVITE message is sent
from the UMTS interface via the P-CSCF to the
S-CSCF, and finally to the destination. It also has a
request for the precondition
call flow model, since certain preconditions (QoS
levels) must be met prior to session establishment.
Next, the destination responds with a 183 Session
Progress message containing a Session Description
Protocol (SDP) answer. The acknowledgement of the
reception of this provisional response by a
Precondition ACKnowledgement (PRACK) request
follows. When the PRACK request reaches the
destination, a 200 OK response is received with an
SDP answer. Then an UPDATE request is sent by the
source confirming resource reservation. Once the
destination receives the UPDATE request, it generates
a 200 OK response. Then the session will be in
progress (over the UMTS interface).
When the MN roams to WiMAX, session handoff to
WiMAX is triggered. Firstly, the standard WiMAX
link layer access registration procedures and the MIP
registration procedures with the ASN Gateway
(MIP-FA) are performed. It then forwards this
request to the MIP-HA and gets the home IP address
assigned to the WiMAX interface. Subsequently a
MIP Binding Update message is exchanged between
the MN and the CN to avoid triangular routing [11].
Next, the SIP session handoff procedures take place.
This requires sending a SIP Re-INVITE (with the
same Call-ID and other identifiers of the ongoing
session) to the destination, followed by the
precondition reservation for the WiMAX interface.
Once this is successfully done, the new session flow
can be initiated. It is important to note that until such
time that the new data flow is initiated via the
WiMAX interface, the data flow via the UMTS
interface remains active. Thus the model follows a
make-before-break handoff mechanism as proposed
in our previous work [11]. Since the session handoff
architecture at the CN has been centralized, roaming
between UMTS-WiMAX-CDMA2000-WLAN can
easily be accommodated.
IV. WiMAX IMPLEMENTATION

The WiMAX network can be implemented by keeping
one station as the base station; from that station the
flow of data is implemented in this scenario. The
traffic is excluded using the interworking process,
but this can be implemented later. The backhaul of
WiMAX (802.16) is based on the typical connection
to the public wireless networks using optical fibre,
microwave link, cable or any other high speed
connectivity. In a few cases, such as mesh networks,
Point-to-Multi-Point (PMP) connectivity is also used
as a backhaul. Ideally, WiMAX (802.16) should use
Point-to-Point antennas as a backhaul to join
subscriber sites to each other and to base stations
across long distances. A WiMAX base station serves
subscriber stations using Non-Line-of-Sight (NLOS)
or LOS Point-to-Multi-Point connectivity; this
connection is referred to as the last mile
communication. Ideally, WiMAX (802.16) should use
NLOS Point-to-Multi-Point antennas to connect
residential or business subscribers to the WiMAX
Base Station (BS). A Subscriber Station (WiMAX
CPE) typically serves a building using a wired or
wireless LAN.

A traffic flow is a sequence of packets sent from a
particular source to a particular unicast, anycast, or
multicast destination that the source desires to label
as a flow. A flow could consist of all packets in a
specific transport connection or a media stream.
Establishing a TCP connection begins with a
three-way handshake and creates two flows: one
from A to B, the other from B to A, where A and B
are the IP-port source and destination.

Security is required against unauthorized access in
the network, transmission path cut-off, and
destruction of a network constituting member such
as a node. In a network in which the number of bytes
of the ACM (Access Control Message) portion of a
frame format corresponds to the number of times
data passes through the broadband transmission
path, a reception node decides whether to allow data
access according to the number of bytes of the ACM
portion. To provide security, the RFA algorithm must
be implemented here for network security.

V. PERFORMANCE MODELING

A. Packet Loss Analysis
The total packet loss (Pkt_loss) during a session can
be defined as the sum of all lost packets during a
handoff while the MN is receiving downlink data
packets. It is assumed that the packet loss begins
when a layer 2 handoff is detected and all in-flight
packets are lost during the vertical handoff time.
Thus, it can be expressed as follows:

Pkt_loss = λ_d · N_m · (2T_ad + D_Handoff)

B. Signaling Cost Analysis
The resultant signaling cost of mobility management
during a vertical handoff can be analyzed as the
accumulative traffic load from exchanging signaling
messages during the MN's communication session,
which can be defined as:

Cost = P_b · S_message · H_a

where P_b is the probability that each handoff will occur.
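Since the packet loss expression above did not survive extraction cleanly, the following Java sketch is only an illustrative estimate under the common assumption that the packets lost per vertical handoff are approximately the downlink packet arrival rate multiplied by the total disruption time; all variable names and values are assumptions, not the paper's:

// Illustrative sketch only: estimates handoff packet loss as
// (downlink packet arrival rate) x (layer-2 detection + vertical handoff time),
// accumulated over Nm handoffs in the session.
public class HandoffLossEstimate {
    public static void main(String[] args) {
        double lambdaD = 50.0;      // downlink packet arrival rate (packets/s), assumed
        double tAd = 0.05;          // layer-2 handoff detection time (s), assumed
        double dHandoff = 0.21;     // vertical handoff delay (s), e.g. the ~210 ms reported later
        int nM = 3;                 // number of vertical handoffs in the session, assumed

        double lossPerHandoff = lambdaD * (2 * tAd + dHandoff);
        double totalLoss = lossPerHandoff * nM;
        System.out.printf("Packets lost per handoff: %.1f%n", lossPerHandoff);
        System.out.printf("Total packets lost: %.1f%n", totalLoss);
    }
}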

VI. SIMULATION RESULTS AND VALIDATION

In order to investigate the interworking ability of
the presented architecture, simulations were
performed in NS2. A fully functional SIP-IMS model
is constructed and integrated into the existing UMTS
special module. Next, an all-IP based heterogeneous
network with MIP and SIP signaling was created.
Following this, a simple all-IP heterogeneous test
bed is created by interworking a UMTS network with
a WiMAX network. Next, measurements are collected
for investigating a joint MIP-SIP vertical handoff
(say, between UMTS and WiMAX in this case).
The reader is referred to [7] for specific details of
this simulation platform.
The average session setup and vertical handoff delay
(from UMTS to WiMAX) obtained by the NS2 model
for a single VoIP session for the MIP-SIP based
mechanism are 190 ms and 210 ms, respectively.
Further, the setup and handoff delays are close to the
analytical results (180 ms and 166 ms, respectively).
The simulation results also indicate that the handoff
delay is always smaller when MIP is used, in contrast
to a pure-SIP mechanism, which incurs additional
IMS based processing latencies. As the number of
sessions increases, the vertical handoff delay also
shows an exponentially increasing trend. Lastly,
these results are in line with published results for
cases of similar handoff delays.

REFERENCES:-

[1] IEEE,Air Interface for Fixed and Mobile
Broadband Wireless Access
Systems," IEEE P802.16e/D12, Feb. 2005.
[2] S. Khan, S. Khan, S. A. Mahmud, and H. Al-
Raweshidy, Supplementary
interworking architecture for hybrid data
networks (UMTS-WiMAX),"
in Proc. Int. Conf. on Computing in the Global
Information Technology,
Bucharest, Aug. 2006.
[3] F. Xu, et al.,Interworking of WiMAX and 3GPP
networks based on
IMS," IEEE Commun. Mag., vol. 45, pp. 144-150,
2007.
[4] M. Suknaic, M. Grgic, and B. Xovko-Cihlar,
Interconnection between
WLAN and 3GPP networks," in Proc. Int. Symp. on
ELMR, Croatia, June
2006.
[5] A. K. Salkintzis, C. Fors, and R.
Pazhyannur,WLAN-GPRS integration
for next-generation mobile data networks," in
IEEE Wireless Commun.,
vol. 9, pp. 112-124, 2002.
[6] 3GPP, IP multimedia subsystem (IMS)," 3GPP
TS 23.228 Version 6.10.0
Release 6, 2005.
[7] K. Munasinghe and A. Jamalipour, A 3GPP-IMS
based approach for
converging next generation mobile data
networks," in Proc. IEEE Int.
Conf. on Commun. (ICC2007), Glasgow, June 2007.
[8] 3GPP2, All-IP core network multimedia
domain," 3GPP2 X.S0013-002-A v1.0, Nov. 2005.
[9] J. Rosenberg, et al., SIP: Session Initiation
Protocol," RFC 3261, 2002.
[10] A. Mahendran and J. Nasielski, 3GPP/3GPP2
IMS Differences,"
3GPP2, S00-20020513-030, 2002.
[11] C. Perkins, IP Mobility Support for IPv4,"
IETF RFC 3344, 2002.
[12] D. Johnson, C. Perkins, and J. Arkko, Mobility
Support in IPv6," IETF RFC 3775, 2004.
[13] 3GPP2, Wireless IP Network Standard,"
[14] 3GPP,Combined GSM and Mobile IP Mobility
Handling in UMTS IP
CN," 3GPP TR 23.923 v 3.0.0, 2000.
[15] T. M. Chen, Network traffic modeling," The
Handbook of Computer
Networks, Wiley, 2007.
[16] J. Roberts, Internet traffic, QoS and pricing,"
in Proc. IEEE, vol. 92,
pp. 1389-1399, 2004.
[17] V. Paxson and S. Floyd, Wide area traffic: the
failure of Poisson
modeling," in IEEE/ACM Trans. Networking, vol. 3,
pp. 226-244, 1995.
[18] M. Karam and F. Tobagi, Analysis of delay
and delay jitter of voice
traffic in the Internet," Computer Networks, vol.
40, pp. 711-726, 2002.
[19] K. Salah, On the deployment of VoIP in
Ethernet networks: methodology
and case study," in Elsevier Computer Commun.,
vol. 29, pp. 1039-
1054, 2005.
[20] S. C. Lo, et al., Architecture for mobility and
QoS support in all-IP
wireless networks," IEEE J. Select, Areas Commun.,
vol. 22, pp. 691-705,
2004.
[21] L. Kleinrock, Queuing Systems: Theory. John
Wiley and Sons, 1975.
[22] U. Narayan and X. Jiang, Signaling cost
analysis of handoffs in mixed IPv4/IPv6 mobile

environment," in Proc. IEEE Globecom,
Washington,
D.C., Nov. 2007.
[23] Q. Wang and M. A. Abu-Rgheff, Interacting
mobile IP and SIP for
efficient mobility support in all IP wireless
networks," in Proc. IEE Int.
Conf. on 3G Mobile Commun., London, Oct. 2004.
[24] X. Hong, et al., A group mobility model for ad
hoc wireless networks,"
in Proc. ACM Int. Workshop on Modeling, Analysis,
and Simulation of
Wireless and Mobile Systems, Seattle, Washington,
1999.
[25] L. Badia and N. Bui, A group mobility model
based on nodes attractionfor next generation
wireless networks," in Proc. Int. Conf. on Mobile
Technol., Applications and Systems,
Bangkok,2006.

PARALLELISM PERFORMANCE ANALYSIS FOR AGGREGATE QUERIES
THROUGH MULTI CORE PROCESSORS.
P. Mohankumar, Research Scholar¹,
School of Information Technology and Engineering,
Vellore Institute of Technology, Vellore, Tamilnadu, India.
pmohankumar@vit.ac.in
Dr. J. Vaideeswaran², Senior Professor,
School of Computer Science and Engineering,
Vellore Institute of Technology, Vellore, Tamilnadu, India.
_______________________________________________________________________________________________________________________
ABSTRACT.
Data management, as storage and retrieval, has
moved from the notebook to the net and from mobile
to cloud computing as a new era. Over the past two
to three decades, database researchers have faced the
challenging task of how to parallelize and optimize
queries related to real-world problems. The outcome
is in daily use in commercial database applications,
which involve large amounts of data to be processed.
So parallelism for processing these database queries
is highly suggested in order to get desirable
performance. Thus an approach of how to achieve
parallelism for query execution that strives for
maximum resource utilization is analyzed in this
paper.
Key words: parallelism, monitor program, active_group

Introduction
Parallelism of database queries involving aggregate
operations, usually many joins and group-by
operations in day-to-day applications, is highly
recommended in order to obtain acceptable response
times. The main drawback of existing parallel
processing methodologies that treat these kinds of
queries is that they are very sensitive to data
distribution and involve high IO and communication
costs in processing. In order to overcome these
issues, an analysis is made of how to achieve
parallelism on an architectural basis, i.e., how the
processing load is distributed for processing and
balanced for the result in such a way as to overcome
delay and respond in acceptable time. A multicore
processor is chosen as an option, and the possibilities
are explained.

Related works.

S.No | Concept | Architecture | Algorithm | Performance Analysis | Paper
1 | Join performed before group-by operations | Shared disk | Aggregation partition and join partition method | Delay in response time | [3]
2 | Join performed after group-by operations | Shared nothing | Bulk synchronous parallel model; histogram data structure | Minimizes communication cost and avoids data skew for immediate results | [2]
3 | Multi-way join query executed using a left-deep pipeline (joins executed by separate processors) | Shared nothing | Polynomial time (static); interleaving a large class of joins | Increased throughput | [4]
4 | Inter- and intra-operator execution of a query | Multi-core | 2-phase algorithm: 1. query processing, 2. query optimization during execution | No conflicts in concurrency; maximum throughput | [5]


Existing parallel database query processing issues:

How to partition data?
How to partition operations? Each operation is partitioned across all nodes.
Get the best sequential plan, then parallelize; scheduling and pipelining issues.
Inter-query parallelism: query transactions execute in parallel with one another.
Intra-query parallelism: a) intra-operator parallelism (parallelize each individual operation in a query); b) inter-operator parallelism (execute different operations in a query expression in parallel).

Query processing: the activities involved in retrieving data from the database; the typical steps when processing a high-level query (e.g. an SQL query).
Query tree: internal representation of the query.
Execution strategy: how to retrieve the results of the query.
















Proposed work.
An approach of how to achieve parallelism for
multi-way join query processing that strives for
maximum resource utilization is analyzed, i.e.:
1. How a query should be carried out in parallel.
2. How optimisation dynamically refines the query
plan during the execution phase, i.e., rebalancing the
threads of execution into an equilibrium that
guarantees maximum resource utilization. 3. How
concurrency is basically controlled (a task is
characterised by tuple size per page, IO/CPU
boundedness and estimated execution time). In order
to process these tasks it is necessary to analyse the
basic issues under multicores: firstly, the integration
of parallelism into whole-query processing; next,
finding the parallelism in query optimization based
on evaluation plan selection and cost estimation.

[Figure: typical query processing steps - scanning, parsing and validating; intermediate form of query; query optimiser; execution plan; query code generator; code to execute the query; runtime database processor; result of query.]

A data structure is designed to manage the cache and
the partitioning of data during execution in order to
achieve high inter- and intra-operator parallelism.
The process is described below.
A logical, nonexistent component (Active_group) is
created in order to group different cores and caches.
During execution an operator must be distributed
over one or more Active_groups. This can be
explained in two phases.
Phase one: a data structure is created which is
responsible for resource management such as
allocation, deallocation and data transfer between
Active_groups and cores, as well as for the
scheduling process. This is implemented as a new
asynchronous operator model (Active_group) which
uses pipes and buffers responsible for data transfer:
1. allocation of processes; 2. assigning tasks to
processes; 3. transferring the required data to the
processes.
Phase two: a monitor program routine is created
which is responsible for managing the execution of
query processing according to the query evaluation
plan.
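As an illustration only (all class and method names are hypothetical, not the authors' implementation), the minimal Java sketch below mimics the asynchronous operator model just described: an Active_group bundles a pool of worker threads standing in for cores, and a bounded queue acts as the pipe/buffer used for data transfer between operators:

import java.util.concurrent.*;

// Minimal sketch with hypothetical names: an Active_group bundles a pool of worker
// threads ("cores") and a bounded buffer ("pipe") used to transfer tuples between operators.
class ActiveGroup {
    final ExecutorService cores;                 // the cores grouped by this Active_group
    final BlockingQueue<int[]> pipe;             // buffer responsible for data transfer

    ActiveGroup(int nCores, int bufferSize) {
        this.cores = Executors.newFixedThreadPool(nCores);
        this.pipe = new ArrayBlockingQueue<>(bufferSize);
    }

    void submit(Runnable task) { cores.execute(task); }                      // assign a task
    void send(int[] tuple) throws InterruptedException { pipe.put(tuple); }  // transfer data in
    int[] receive() throws InterruptedException { return pipe.take(); }      // transfer data out
    void shutdown() { cores.shutdown(); }

    public static void main(String[] args) throws Exception {
        ActiveGroup group = new ActiveGroup(2, 16);
        // An operator task running inside the group: read one tuple and aggregate it.
        group.submit(() -> {
            try {
                int[] t = group.receive();
                System.out.println("partial sum = " + (t[0] + t[1]));
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        group.send(new int[]{10, 20});           // the coordinator feeds the pipe
        group.shutdown();
    }
}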


Figure: System model for query processing.

Figure: Proposed model for query processing (Task 1 - create group; Task 2 - send data; Task 3 - create count(*); Task 4 - create sum(quantity)).

The proposed model involves three activities: assigning tasks to an Active_group, the monitor routine, and data transfer.
Monitor program routine - makes cache decisions based on cost, frequency of usage and size.
Active_group - a nonexistent logical component used to group different cores and caches.
Consider the following query as an example:
[Select e.dno, sum(salary) from employee e group by e.dno].
Let us assume a concern consists of various
departments, each located at a different location. In
our example, if management needs to get the total
salary to be issued, first the employee table must be
retrieved and then the department tables. The salary
must be aggregated department-wise and the result
must be displayed; i.e., the aggregate operation is
performed on the required tuples stored on disk
(resulting in tuples with partial sums at each
processor), the result of the local aggregation is
partitioned on the group-by attributes, and
aggregation is performed again at each processor to
get the final result. In our model the given query
execution will be as follows: the Active_group will
take care of creating tasks, assigning tasks and
transferring data between different locations, and
the monitor routine will take care of how the query
gets executed, how the results are temporarily
cached and how the final result is displayed without
suffering any delay, i.e., how to minimize the total
execution time of a workload by caching
intermediate/final results of individual queries, and
using these cached results to answer later queries.
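To make the two-phase execution of the example query concrete, the following Java sketch (illustrative only; the sample data and class names are assumptions) performs local partial aggregation on each partition in parallel and then merges the partial sums by the group-by key, mirroring the strategy described above:

import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;

// Sketch of two-phase parallel aggregation for
// SELECT e.dno, SUM(salary) FROM employee e GROUP BY e.dno
public class ParallelGroupBySum {
    record Employee(int dno, double salary) {}

    public static void main(String[] args) throws Exception {
        // Assumed sample data split into partitions (e.g., one per core / location).
        List<List<Employee>> partitions = List.of(
            List.of(new Employee(10, 1000), new Employee(20, 1500)),
            List.of(new Employee(10, 2000), new Employee(30, 1200)),
            List.of(new Employee(20, 800),  new Employee(30, 700)));

        ExecutorService pool = Executors.newFixedThreadPool(partitions.size());

        // Phase 1: local (partial) aggregation on each partition in parallel.
        List<Future<Map<Integer, Double>>> partials = partitions.stream()
            .map(p -> pool.submit(() -> p.stream().collect(
                Collectors.groupingBy(Employee::dno,
                                      Collectors.summingDouble(Employee::salary)))))
            .collect(Collectors.toList());

        // Phase 2: merge the partial sums by the group-by key (dno).
        Map<Integer, Double> result = new TreeMap<>();
        for (Future<Map<Integer, Double>> f : partials)
            f.get().forEach((dno, sum) -> result.merge(dno, sum, Double::sum));

        System.out.println(result);   // {10=3000.0, 20=2300.0, 30=1900.0}
        pool.shutdown();
    }
}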



Conclusion.
An analysis of how to achieve parallelism for queries
involving multiple joins and aggregate operators,
striving for maximum resource utilization, is
discussed in this paper. Moreover, a model is
approached on behalf of the basic query processing
methodology and from an architectural point of view
to overcome delays with respect to multi-cores. This
can further be extended to XML databases with
multiple aggregate operators for parallelism.

REFERENCES:

[1] Takeshi Fukuda, IBM Tokyo Research Laboratory, Parallel processing of multiple aggregate queries on shared-nothing processors, EDBT 1998. www.trl.ibm.com/projects
[2] M. Al Hajj Hassan and M. Bamha, LIFO, University of Orleans, France, Parallel processing of group-by join queries on shared nothing machines, Springer, Very Large Databases, Berlin, 2007.
[3] David Taniar and Rebecca Boon, Performance analysis of group-by after join in parallel databases, chapters 7 & 8, www.sciencedirect.com, Jan. 2003.
[4] Amol Deshpande and Lisa Hellerstein, Flow algorithms for parallel query optimization, IEEE Transactions on Parallel Computing, August 2007.
[5] R. Acker, C. Roth and R. Bayer, Parallel query processing in databases on multicore architectures, Springer-Verlag Berlin Heidelberg, 2008.
[6] Jin Ho Kim, Yun Ho Kim and Soo Hook, An efficient processing of queries with joins and aggregate functions in data warehouse environment, IEEE 13th Workshop on Database and Expert Systems Applications, 2002.
[7] Lei Chen, Christopher Olston and Raghu Ramakrishnan, Parallel evaluation of composite aggregate queries, ICDE, April 2008.
[8] Patrick Valduriez (Hewlett Packard Labs) and Waqar Hasan (INRIA/USA), Open issues in parallel query optimization, SIGMOD Record, vol. 25, September 1996.
[9] Performance and tuning: Optimiser and abstract plans, Sybase Manual, Adaptive Server Enterprise 12.5.1.
[10] D. Taniar (Royal Melbourne Institute of Technology, Australia) and Y. Jiang (School of IT, Victoria University), Aggregate join query processing in parallel database systems, IEEE, 2000.

USE OF INFORMATION TECHNOLOGY AND EMBEDDED SYSTEM
IN PRECISION FARMING
Rahul Dubey, Consultant, Tata Technologies Ltd.
_____________________________________________________________________________________________________________________________

ABSTRACT:

The objective of this research paper is to emphasize
the use of the global positioning system and
embedded systems in the Indian agricultural
scenario. Today Indian farmers are not using
advanced information technologies in farming,
which could improve the yield as well as the farming
equipment. This kind of system can be developed by
using GPS and embedded system capabilities. There
are two systems: one is the GPS and the other is a
data acquisition system which acquires the data
about land marking.

I. INTRODUCTION

In the past few years, new trends have emerged in
the agricultural sector. Precision agriculture
concentrates on providing the means for
observing, assessing and controlling agricultural
practices. It covers a wide range of agricultural
concerns from
daily herd management through horticulture to
field crop production. Figure 1 explains how this
kind of system works.

Precision farming is generally defined as an
information- and technology-based farming system
which enables the farmer to efficiently use the land
area and other resources.


II. PROPOSED METHODOLOGY

The methodology this paper proposes is precision
farming based on the Global Positioning System and
an embedded system. By just setting the values of
latitude and longitude, we can send commands to the
agricultural equipment's Electronic Control Unit
(ECU). After getting inputs from the user, the
embedded system, which is capable of taking the
latitude and longitude of the desired agricultural
field, will send commands to cultivate or harvest the
desired area accordingly.
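As a minimal illustration (not the authors' implementation; the field coordinates, the sample GPS fix and the command strings are all assumptions), the Java sketch below shows how the embedded controller could compare the current GPS position against the latitude/longitude bounds entered by the user before commanding the equipment ECU:

// Hypothetical sketch: decide whether the current GPS fix lies inside the
// rectangular field entered by the user, and only then command the ECU.
public class FieldBoundaryCheck {
    static final double LAT_MIN = 12.9600, LAT_MAX = 12.9650;   // assumed field bounds
    static final double LON_MIN = 77.5000, LON_MAX = 77.5080;

    static boolean insideField(double lat, double lon) {
        return lat >= LAT_MIN && lat <= LAT_MAX && lon >= LON_MIN && lon <= LON_MAX;
    }

    public static void main(String[] args) {
        double lat = 12.9625, lon = 77.5040;     // assumed current GPS fix
        if (insideField(lat, lon)) {
            System.out.println("Inside target field: send CULTIVATE command to ECU");
        } else {
            System.out.println("Outside field: hold equipment");
        }
    }
}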

At the same time, the user can generate reports with
the aid of Geographical Information System tools
(like the ArcGIS software) to get more accurate
results. These reports are essential to make more
precise decisions about the land area taken for
farming and also to estimate the yield.


Figure 1: Interface of satellite and Agricultural
Equipment
Cultivation of a specific crop within a desired area
can be done with this kind of precision farming
system, and decisions can be made on the basis of
inputs from sensors which are spread over the field.

Sensors can play a more important role, e.g. for
measuring soil moisture: by keeping sensors capable
of measuring moisture within a grid, we can estimate
whether the water has reached a particular part of
the field or not.



Figure 2: Embedded system based Precision
farming.

III. CAPABILITIES REQUIRED FOR THE
AGRICULTURAL EQUIPMENT

There are three basic requirements to realize this
kind of system:

(a) Embedded System with GPS functionality.
(b) ECU in the Agriculture Equipment

(c) ECU controlled Harvesting and Cultivating
Equipment.

Embedded Systems are a crucial technology for
competitiveness. The vision of pervasive
computing is that objects, buildings and
environments may be endowed with software
intelligence to improve human interactions both
with the individual objects and with the system as
a whole. Many intelligent embedded systems
move rapidly within a physical environment.
While the best complete algorithms are doubly
exponential, probabilistic algorithms have
emerged that have very good practical
performance, and probabilistic guarantees of
convergence.

The embedded system is the most important part of
this system; it is capable of taking the input from the
GPS (Global Positioning System) and is also capable
of communicating with the ECU (Electronic Control
Unit) of the farming equipment, so that it can convey
the commands to the ECU to act accordingly.

Global Positioning System (GPS) guidance for
ground based application equipment in
agriculture is being adopted rapidly. Among other
benefits, these systems help operators reduce
skips and overlaps in applying fertilizer,
pesticides and other inputs. GPS guidance systems
can be used for all types of agricultural
operations: planting, spraying, fertilizer spreading
and tillage [2,4].


IV. CONCLUSION:

This kind of technology-based system will help in
improving the capabilities of agricultural vehicles
and allows the farmer to take precise decisions.
Lastly, by digitizing the map of the farm, some
important results can be extracted. Geographical
Information System tools will help us to prepare
reports. This system also allows managing the land,
which becomes an added advantage of the system.

REFERENCES

[1] Aline Baggio Delft University of Technology
The Netherlands Wireless sensor networks in
precision agriculture.

[2] Jess Lowenberg-DeBoer-Purdue University
GPS Based Guidance Systems for Agriculture

[3] Sandhyasree Thaskani, Rammurthy, Application
of topology under control wireless sensor networks
in precision agriculture, 41st Mid-Term Symposium
on Taking Telecom and IT Revolution to Rural India.

[4] Public release Version NAVSTAR GPS user
equipment introduction September 1996



SECURED AUTHENTICATION FOR ONLINE BANKING USING MOBILE PHONES
Prof. S. Sumathy¹, R. Hemalatha², V. Jayashree³
¹Assistant Professor (SG), ssumathy@vit.ac.in; ²MS (Software Engineering), hema.ms74@gmail.com; ³MS (Software Engineering), venugopal.jayashree@yahoo.com
VIT University, Vellore-632014, India.
_______________________________________________________________________________________________________________________________
ABSTRACT

Online banking is essentially safe, but fraud is
on the rise, so consumers need to be aware. The
widespread use of online banking means more
convenience for consumers and offers better ways to
monitor account activity. The client accesses the ATM
using a private key, which is the security token that is
sent to the client's mobile through an SMS by the
bank's authentication server. The key is generated by
implementing the SHA256 and Base64 algorithms
using the registered IMSI (International Mobile
Subscriber Identity) and IMEI (International Mobile
Equipment Identity) numbers of the client's mobile.
The SMS based mechanism makes sure that the key
reaches the registered client.

The client is given a PIN and a Master key when
registered to the online banking service. If a client's
mobile is lost, authentication is done using the unique
Master key; otherwise the private key token is used,
thereby making transactions secure and simple
without the need to carry any USB tokens. This
additional functionality provides the client more
security for their transactions. Phishing attacks by
hackers are avoided to an extent in the proposed
methodology. The proposed method has been
implemented and tested in Java (J2ME). Initial results
show the success of the proposed methodology.

Index Terms:
Authentication, OTP, Online Transactions, Security,
IMEI, IMSI, Mobile Phone.

I INTRODUCTION

Today security concerns are on the rise in all areas
such as banks, governmental applications, the
healthcare industry, military organizations,
educational institutions, etc. Member institutions of
the Online Banking Association rated security as the
most important issue of online banking. A new
survey finds 31 percent of bank customers avoid
online transactions because of security reasons. The
proposed methodology guarantees authenticated
access to the online banking service in a secured
manner. Online banking is not different from the
traditional banking process. It makes use of
technology to save time over the paper-based
procedures of traditional banking in order to manage
the banking service efficiently and quickly. Clients
perform transactions on a secure website operated
by their bank.
Transactions in online banking differ from general
internet shopping transactions. Attacks on online
banking deceive the user to steal the login data. A
weak password is easy to remember, but open to
potential attacks; it is not secure in many cases and
therefore the risks are high.
While digital certificates are used against phishing
and pharming, such attacks lead to an increasing
number of phishing websites which capture victims'
passwords. The less the password security relies on
human mediation, the more secure it is. A dynamic
key token is used for performing the banking
operations.

II. LITERATURE CITED
Online banking is a very prominent area and there
are many methods to make the transactions more
secure. One-time passwords, two-factor
authentication and digital certificate verification are
considered to provide more security than general
PIN number authentication [2]. Several methods
regarding online banking security are discussed in
the literature tabulated in Table I.














Security plays a significant role in online banking. A
dynamic private key token is generated for the
client's account for authentication. Authenticated
clients can access the overall account information.

III. DESIGN METHODOLOGY

A. SHA (Secure Hash Algorithm)

Hashing, used in many encryption algorithms is
the transformation of a string of characters into a
shorter fixed-length value or key that represents the
original string. The hashing algorithm is called the
hash function. A cryptographic hash function is a
procedure, which takes a block of data and gives a
fixed-size bit string, the (cryptographic) hash value.

They have many information security applications,
including digital signatures, message authentication
codes, and other forms of authentication. SHA
(Secure Hash Algorithm) is one among a number of
cryptographic hash functions.

It is a series of cryptographic hash functions:
SHA-1, the 160-bit version.
SHA-2, a newer revision with four variants: SHA-224, SHA-256, SHA-384 and SHA-512.
SHA-3, a version under development.

Though SHA-2 has similarities to the SHA-1
algorithm, it includes a significant number of changes
from SHA-1, and security flaws identified in SHA-1
are avoided.

SHA-256 and SHA-512 are new hash functions
computed with 32 and 64-bit words respectively, use
different shift amounts and additive constants,
differing only in the number of rounds. SHA-224 and
SHA-384 are truncated versions of the first two,
computed with different initial values.

SHA-256 is used to authenticate Debian Linux
software packages and in the DKIM message signing
standard. SHA-512 is part of a system to authenticate
archival video. UNIX and Linux vendors are moving
to use 256 and 512-bit SHA-2 for secure password
hashing.




Table I: Literature Cited

Ref No. | Title | Description
1 | One Time Password System [2] | One-time password systems provide additional protection but their use has been limited by cost and inconvenience.
2 | Two Factor Authentication Application [4] | The user is simply requested to possess a Bluetooth enabled handheld device to enforce authentication based on weak credentials.
3 | Security Token for Unified Authentication [5] | An authentication scheme based on a One-Time Password (OTP) MIDlet running on a mobile phone for unified authentication towards any type of service on the Internet.
4 | Online Authentication Protocol [7] | Online authentication is to verify identities through cyber networks.
5 | Noisy Password Scheme [3] | Every time, a user is authenticated by a totally different password with noisy parts.
6 | Delegation based Security for Web Services [6] | It extends the basic security models and supports flexible delegation and evaluation-based access control.
7 | A countable and time-bound password based user authentication scheme [1] | The countable feature limits use to a certain number of times; users are able to log in to the system for a fixed number of times.


B. BASE 64

Base64 is a way of interpreting bits of data to
transmit over a text-only medium, such as the body
of an e-mail. Base64 is a group of similar encoding
schemes that represent binary data in an ASCII string
format by translating it into a radix-64
representation. Base64 encoding schemes are used to
encode binary data to deal with textual data, a
number of applications including email via MIME,
and storing complex data in XML. It ensures that the
data remains unchanged. Base64 just represents data
in a different form and is one of the data
representation schemes that does not encrypt,
compress the data. Basic authentication over the
internet encrypts the username and password using
base64.

It has many variants, of which first known
standardized use of the encoding is now called MIME
Base64. Selection of the character set for the 64
characters required for the base varies between its
variants. The idea is to choose a set of 64 characters
that is both part of a subset common to most
encodings, and also printable.
Base64 can be used in many contexts including
obfuscation of email passwords, transmit and store
text, evade basic anti-spamming tools, encode
character strings, embed binary data in an XML file,
encode binary files (images within scripts) to avoid
depending on external files.
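For illustration, the snippet below uses the standard java.util.Base64 codec of Java SE (the J2ME client described in this paper would use an equivalent encoder) to show that Base64 only changes the representation of the data, not its content:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        String original = "user:PIN1234";                 // assumed sample credential string
        String encoded = Base64.getEncoder()
                .encodeToString(original.getBytes(StandardCharsets.UTF_8));
        String decoded = new String(Base64.getDecoder().decode(encoded),
                StandardCharsets.UTF_8);
        System.out.println(encoded);                      // dXNlcjpQSU4xMjM0
        System.out.println(decoded.equals(original));     // true: representation only, no encryption
    }
}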

Fig. 1 Architectural Design Block Diagram
C. CLIENT DESIGN

A J2ME program is developed and installed on
the mobile phone. The program runs on any J2ME-
enabled mobile phone. The key token program
generates the dynamic key using the mobile
credentials, such as the IMEI and IMSI numbers, and
requests a token from the server via an SMS message. In order
for the user to run the key token program, the user
must enter the username and PIN and generate the
security token.

D. DATABASE DESIGN

A database on the server side is used to store
the client's identification information such as the first
name, last name, username, pin, password, mobile
IMEI number, IMSI number, unique symmetric key
token, and the mobile number for each user. MySQL
is used as a back-end.

E. SERVER DESIGN

A server is implemented to generate the token on the
organization's (bank's) side. The database is connected
to a modem for SMS message exchange. The server
application is multithreaded. The first thread is
responsible for initializing the database and SMS
modem, and listens on the modem for client requests.
The second thread is responsible for verifying the
SMS information and generating and sending the
token. A third thread is used to compare the token. In
order to be set up in the database, the client must
register at the organization. The client's mobile
phone/SIM card identification factors, such as the
IMEI and IMSI numbers, are retrieved and stored in
the database, in addition to the username and PIN.
The software is configured to connect to the server's
GSM modem for the SMS option.
A unique symmetric key is also generated and
installed on both the mobile phone and server.

IV. IMPLEMENTATION

A mobile-based software token system that is
supposed to replace existing hardware and
computer-based software tokens is proposed. The
proposed system is secured and consists of three
parts: (1) software installed on the client's mobile
phone, (2) server software, and (3) a GSM modem
connected to the server. The system has two modes
of operation:
NICE-2010
Acharya Institute of Technology, Bangalore-560090 166


A. Connection-Less Authentication System
A private key security token is generated by
connecting the client to the server without any
physical connection. The server requests the client
to receive an SMS token, and the client will respond
to it.

B. SMS-Based Authentication System
If the client and server are out of synchronisation,
the client can request the security token directly
from the server. The server checks the SMS content
and if correct, returns a randomly generated token to
the mobile phone. The user will then have a given
amount of time to use the token before it expires.

V. SYSTEM DESIGN
This section discusses the modules included in the
secured authentication system. The system has four
main modules, as follows:
A. Registration and Login Module

The user's basic information such as name,
address, age (year/month/date) and mobile
information is registered in the bank while
creating the account.
The mobile information includes the IMEI number
and the IMSI number.
IMEI number: International Mobile Equipment
Identity is unique to each mobile phone and
allows a particular user to be identified by the
device. This is accessible on the mobile phone
and is stored in the servers database for each
client.
IMSI number: International Mobile Subscriber
Identity is a unique number associated with all
GSM and Universal Mobile
Telecommunications System (UMTS) network
mobile phone users. It is stored in the
Subscriber Identity Module (SIM) card in the
mobile phone. This number is stored in the
servers database for each client.

With the above information, PIN number and
Master Key are generated which are unique to
each user. Along with registration details, PIN
number and Master Key are stored in the bank's
database.
The user is logged in by swiping the card and
entering the PIN number.



Fig. 2 Registration Module
B. Token Generation Module

The above factors are concatenated and the
result is hashed using SHA-256, which returns
a message digest.
The digest is then XOR-ed with the PIN
replicated to the same length. The result is then
Base64 encoded, which yields a character
message.
From the encoded message, a six-digit
output is taken as the token number.
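As an illustration of these steps, the following is a minimal, hypothetical Java sketch of the hash/XOR/encode pipeline described above. The class and field names, the factor ordering and the six-digit selection rule are assumptions made for illustration only, not the authors' implementation.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Hypothetical sketch of the token-generation steps described above.
// Field names and the digit-selection rule are illustrative assumptions.
public class TokenGenerator {

    public static String generateToken(String imei, String imsi,
                                       String username, String pin,
                                       String timestamp) throws Exception {
        // 1. Concatenate the identification factors and hash with SHA-256.
        String factors = imei + imsi + username + timestamp;
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(factors.getBytes(StandardCharsets.UTF_8));

        // 2. XOR the digest with the PIN replicated to the digest length.
        byte[] pinBytes = pin.getBytes(StandardCharsets.UTF_8);
        byte[] mixed = new byte[digest.length];
        for (int i = 0; i < digest.length; i++) {
            mixed[i] = (byte) (digest[i] ^ pinBytes[i % pinBytes.length]);
        }

        // 3. Base64-encode the result to obtain a character message.
        String encoded = Base64.getEncoder().encodeToString(mixed);

        // 4. Derive a six-digit token from the encoded message
        //    (here: fold the characters into a number modulo 10^6).
        long acc = 0;
        for (char c : encoded.toCharArray()) {
            acc = (acc * 31 + c) % 1_000_000L;
        }
        return String.format("%06d", acc);
    }
}

Any deterministic digit-selection rule would serve here, provided the phone and the server apply the same rule to the same shared factors.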



Fig. 3 Token Generation Module



C. SMS Module
The generated token is sent to the client's
mobile number through an SMS.



Fig. 4 Client enters the token number received through
SMS


D. ATM Process Module
When the client enters this token number, the
server decrypts the message, extracts the
identification factors, and compares them
with the ones stored in the database.
If successfully authenticated, the client can then
use services such as withdrawal and checking the
account balance.


Fig. 5 ATM Process Module



VI. SYSTEM STUDY

Table II Existing and Proposed System

S. No | Existing System | Proposed System
1 | Transactions are dealt with by swiping the user's account card | Transactions are accessed by handling a security token
2 | Possibility of a shoulder-surfing attack | Secure transactions are maintained
3 | The account cannot be accessed from anywhere (the user must search for an ATM) | The account can be accessed using a mobile phone from anywhere
4 | No storage of customer details for future reference (except the customer's copy) | Customers' digital certificates are stored in the database



A. Existing System


Only swiping the card and entering the PIN number
are required for accessing the ATM machine. If the
card is lost or stolen, the account can easily be
accessed by an unauthorized user. This is neither
secure nor reliable for account maintenance.

B. Proposed System

A solution for authentication of online
banking services using a mobile phone to perform
transactions is proposed and developed. Mobile
phones are more suitable for online banking
authentication than USB flash drives. To provide
more security, separate token numbers are
used for performing banking operations such as
money withdrawal, checking the account balance and
fund transfer. The token number is generated using
the user's mobile number, IMEI, IMSI and PIN
numbers. The SHA-256 and Base64 algorithms are
used for the generation of the token number. The
generated token number is sent to the user's mobile.
On entering the token number received through the
mobile, the user can access the ATM machine. For
each and every transaction, a dynamic password
token is generated, which avoids theft, phishing and
hacking attacks.


VII. CONCLUSION

Most parts of the world have evolved into an
electronic era. Information Technology (IT) and
the Internet play a significant role in people's
daily lives. E-business and e-learning reach people
almost everywhere computers and Internet
connections are available. Single-factor
authentication methods, such as passwords, are no
longer considered secure in the Internet and banking
world. Easy-to-guess passwords, such as names and
ages, are easily found by automated password-
collecting programs.

In most cases, a hardware token is given to
each user for each account. The increasing number
of carried tokens and the cost of manufacturing and
maintaining them are a burden for both the client and
the organization. Mobile devices, and certainly
mobile phones, are now widespread, and many clients
carry a mobile phone at all times. An alternative
is to install the software tokens on the mobile
phone, which helps reduce the manufacturing costs
and the number of devices carried by the client. This
work focuses on the implementation of a two-factor
authentication method using mobile phones. The
proposed system can be implemented in two ways,
either using a free and fast connection-less method
or a slightly more expensive SMS-based method. Both
methods have been successfully implemented and
tested, and are shown to be robust and secure.

The system has several factors that make it
difficult to hack:
1. At least 10 factors related to the mobile phone, SIM
card, user, date and time are used to generate a
difficult-to-guess, unique one-time password (OTP).

2. The system can easily run on any J2ME-enabled
phone.

3. The system has a user-friendly GUI.

4. The OTP is only generated on the registered
mobile phone and SIM card for stronger
authentication.

5. Even if the mobile phone is stolen, an 8-character
PIN must be entered on the phone to generate the
correct token, which is hard for an attacker to
brute-force or guess.

Future developments include a more user-friendly
GUI and extending the proposed work to be
implemented on various mobile phone platforms.
The Bluetooth and WLAN features on mobile phones
can be used for cheaper token generation.

REFERENCES

[1] Iuon-Chang Lin, Chin-Chen Chang. "A countable
and time-bound password-based user authentication
scheme for the applications of electronic commerce".
[2] J. Archer Harris, Department of Computer Science,
James Madison University. "OPA: A One-time
Password System".

[3] Khaled Alghathbar, Hanan A. Mahmoud. "Noisy
Password Scheme: A New One Time Password
System".

[4] Roberto Di Pietro1, Gianluigi Me, Maurizio A.
Strangio."A Two-Factor Mobile Authentication
Scheme for Secure Financial Transactions".


[5] Steffen Hallsteinsen, Ivar Jørstad, Do Van Thanh.
"Using the mobile phone as a security token for
unified authentication".

[6] Wei She Thuraisingham, B. I-Ling Yen.
"Delegation-Based Security Model for Web Services".
[7] Xing Fang Zhan, J. "Online Banking
Authentication Using MobilePhones".

[8] B. Schneier, "Two-Factor Authentication: Too
Little, Too Late", in Inside Risks 178, Communications
of the ACM, 48(4), April 2005.

[9] Do van Thanh, Tore Jonvik. "Strong
authentication with mobile phone as security token".
[10] N. Mallat, M. Rossi, and V. Tuunainen, "Mobile
Banking Services", Communications of the ACM,
47(8), 42-46, May 2004.




































IMPROVED QOS IN MANET BY CLUSTERING USING MOBILITY PREDICTION
S. J. Dharshikha¹, G. S. GangaSree²
Department of Information Technology
Thiagarajar College of Engineering, Madurai, Tamil Nadu, India.
dharshikhasj@tce.edu¹, ganga@tce.edu²

_____________________________________________________________________________________________________________________________

ABSTRACT
Multi-hop packet radio networks, also called mobile
ad hoc networks (MANETs), have a dynamic topology
due to the mobility of their nodes. A notable amount of
energy is consumed every time a signal is sent or
received by a mobile node, and many such signals (and
much power) are spent just updating the positional
information of the nodes in a wireless scenario.
Bandwidth is also wasted on control signals rather than
being used effectively for data communication. To
minimize this overhead, we propose a modified
algorithm that uses the Weighted Clustering Algorithm
(WCA) for cluster formation and mobility prediction
for cluster maintenance. Clustering is an effective
technique for node management in a MANET. Cluster
formation involves electing a mobile node as cluster-
head, which then controls the other nodes in the newly
formed cluster. The connections between nodes and
the cluster-head change rapidly in a mobile ad hoc
network, so cluster maintenance is also essential.
Mobility-prediction-based cluster maintenance involves
finding the next position that a mobile node might take
based on the previous locations it visited. In this paper
we propose to reduce the communication overhead by
predicting node mobility using linear auto-regression
together with cluster formation.

KEY WORDS: Ad-hoc, Clustering, Cluster-Head,
Mobility Prediction.

INTRODUCTION
The rapid advancement in mobile
computing platforms and wireless communication
technology has led us to develop a method to elect
cluster-heads and form clusters [5] in wireless
mobile ad hoc networks [9]. These networks, in which
no fixed infrastructure exists, permit interconnectivity
between work-groups moving in urban and rural areas.
They can also support collaborative operations, for
example distributed scientific research or rescue.
A wireless ad hoc network [8] is a
decentralized wireless network; it is called ad hoc
because each node is willing to forward data for
other nodes, so the determination of which
nodes forward data is made dynamically based on
the network connectivity.
A multi-cluster [5], multi-hop wireless network
should be able to adapt itself dynamically. Some
nodes, known as cluster-heads, are responsible for the
formation of clusters, each consisting of a number of
nodes (analogous to cells in a cellular network), and
for maintenance of the network topology. The set of
cluster-heads is also called the dominant set [9]. A
cluster-head is responsible for resource allocation to
all nodes belonging to its cluster and monitors
communication within the cluster. In a cluster [5],
objects are mutually closer to each other than to
objects in other clusters. The cluster structure needs
to be maintained as new mobile nodes may enter
the network and existing nodes may move out or
lose their battery power [1]; this applies to both
cluster-heads and member nodes.
Prediction of the geographical position of a
mobile node is called mobility prediction.
Linear auto-regression is used here, among the many
techniques available for prediction: the past
positions, i.e. the history, are used to predict the
future positions, and clustering is performed based
on the predicted values. When compared with the
original positions, the resulting clusters are the same.
Thus the signals sent from the member nodes to the
cluster-head to report the current position can be
minimized. This reduces power consumption [1] and
the wastage of bandwidth on signals other than data,
and ultimately increases the stability of the
cluster [11, 18].

RELATED WORK

Recently, a number of clustering algorithms
have been proposed, based on various criteria for
choosing the cluster-head, such as speed and direction,
mobility [2, 3], energy, position, and the number of
neighbors of a given node. These works have
advantages, but also drawbacks such as a high
computational overhead for both clustering
algorithm execution and update operations. We
give a brief description of each of them as follows:

The Highest-Degree Algorithm, also known
as the connectivity-based algorithm [12, 19], is based
on the degree of a node, taken to be the number of
neighbors of that node. Whenever the election
procedure is needed, nodes broadcast their identifier
(ID), which is assumed to be unique in the network.
From the number of received IDs every node computes
its degree, and the one having the maximum degree
becomes cluster-head. The major drawbacks of this
algorithm are that the degree of a node changes very
frequently, so the CHs are not likely to play their role
as cluster-heads for very long, and that, as the
number of ordinary nodes in a cluster increases,
the throughput drops and system performance
degrades. These drawbacks occur because this
approach places no upper bound on the number of
nodes in a cluster.
The Lowest-Identifier (LID) algorithm, also known as
the identifier-based clustering algorithm [13], chooses
the node with the lowest ID as cluster-head; its
system performance is better than Highest-Degree in
terms of throughput. Its major drawbacks are its bias
towards nodes with smaller IDs, which may lead to
the battery drainage of certain nodes, and that it does
not attempt to balance the load uniformly across all
the nodes. In the Distributed Clustering Algorithm
(DCA) [10, 19] and the Distributed Mobility Adaptive
Clustering Algorithm (DMAC) [15, 19], each
node is assigned a weight [4] (a real number above
zero) based on its suitability for being a cluster-head.
A node is chosen as cluster-head if its weight is
higher than any of its neighbors' weights [17];
otherwise, it joins a neighboring cluster-head. The
smaller node ID [7] is chosen in case of a tie. The DCA
assumes that the network topology does
not change during the execution of the algorithm. To
verify the performance of the system, the nodes were
assigned weights that varied linearly with their
speeds [2, 3] but with negative slope. Results showed
that the number of updates required is smaller than
for the Highest-Degree and Lowest-ID heuristics.
Since the node weights [4, 7] were varied in each
simulation cycle, computing the cluster-heads becomes
very expensive, and there are no optimizations of
system parameters such as throughput and power
control [6].
The Weighted Clustering Algorithm (WCA)
[4, 8, 14] obtains 1-hop clusters with one cluster-
head. The election of the cluster-head is based on the
weight of each node [17]. It takes four factors into
consideration, making the selection of the cluster-
head and the maintenance of the cluster more
reasonable. The four factors are the node degree
(number of neighbors), the summed distance to all its
neighboring nodes, mobility [2, 3] and remaining
battery power [1]. Although WCA performs better
than all the previous algorithms, it has the drawbacks
of requiring the weights of all the nodes to be known
before starting the clustering process and of draining
the CHs rapidly. As a result, the overhead induced by
WCA is very high.

PROPOSED ALGORITHM

Assumptions:

The following assumptions are made before
clustering
1. The network topology is static during the
execution of the clustering algorithm.
2. Each mobile node joins exactly one cluster-
head.
3. The optimal number of nodes in the cluster
is assumed to be 8.
4. The coefficients used in the weight calculation
are assigned the following values: w1 = 0.7,
w2 = 0.2, w3 = 0.05, w4 = 0.05. The sum of these
coefficients is 1. They are used to normalize the
factors (spreading degree, distance to neighbors,
mobility of the node, and power consumed) used in
the calculation of the weight of a node. The factors
spreading degree and distance to neighbors are
given more importance and are assigned the higher
coefficient values of 0.7 and 0.2 respectively.

Phase I: Formation of Cluster

Initially, each node broadcasts a beacon
message to notify its presence to its neighbors. A
beacon message contains the state of the node. Each
node builds its neighbor list based on the beacon
messages received. The cluster-head election is based
on the weight values [4, 7] of the nodes, and the node
having the lowest weight is chosen as CH [17]. Each
node computes its weight value using the following
steps:

Step 1: Compute the spreading degree sp from the
difference between the optimal cluster size δ and the
real number of neighbors R(V):

sp = 1 - ( |δ - R(V)| / δ )

Step 2: For every node, the sum of the distances Dv
to all its neighbors is calculated:

Dv = Σ dist(v, v'), where v' ∈ N(v)

Step 3: Calculate the average speed of every node
up to the current time T. This gives the measure of
mobility, Mv [2, 3], based on the X and Y
co-ordinates, i.e. the position of node v at all previous
time instances t.

Step 4: Determine how much battery power has
been consumed, Pv. This is assumed to be higher for
a cluster-head [5] than for an ordinary node, because
a cluster-head takes care of all the members of its
cluster by continuously sending signals.

Step 5: The weight Wv of each node is calculated as

Wv = (w1 × sp) + (w2 × Dv) + (w3 × Mv) + (w4 × Pv)

where sp is the spreading degree, Dv the summed
distance to its neighbors, Mv the mobility of the node
and Pv the power consumed.

Step 6: The node with the smallest Wv is elected as
cluster-head. The neighbors of the chosen cluster-
head are no longer allowed to participate in the
election procedure.

Step 7: The above steps are repeated for the
remaining nodes that have not yet been elected as a
cluster-head or assigned to a cluster.


MODIFIED WEIGHTED CLUSTERING ALGORITHM
FOR A PARTICULAR NODE NAMED V

INITIALIZE:
w1=0.7; //Based on the previous assumption.
w2=0.2;
w3=0.05;
w4=0.05;

ACTION:
Broadcast a beacon signal to all its neighbor
nodes in the transmission range;
Process the beacon signals received from
the neighbor nodes in the network and form the
connection matrix, A;

CALCULATION:
Degree R (V) using A;
Spreading degree, sp;
Sum of the distances, Dv, with all its
neighbors;
Average speed, Mv;
Amount of battery power that has been
consumed, Pv;
Weight of V, Wv;

ACTION:
Broadcast weight value Wv to all its
neighbor nodes;
Process the signals received from the
neighbor nodes in the network and identify the
weights of the neighbors;

CALCULATION:
Find the node with minimum weight in the
neighborhood;

ACTION:
If (Wv is the least weight)
Declare itself as the Cluster-head;
Else
Send request to join the Cluster
formed by the neighbor with least
weight;
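
To make the election rule concrete, the following hypothetical Java sketch computes the node weight of Step 5 and applies the smallest-weight decision of Step 6. The class and method names, and the way neighbor weights are collected, are illustrative assumptions rather than the authors' implementation.

import java.util.List;

// Hypothetical sketch of the per-node weight computation used for
// cluster-head election (coefficients from the assumptions above).
public class WcaWeight {
    static final double W1 = 0.7, W2 = 0.2, W3 = 0.05, W4 = 0.05;
    static final double OPTIMAL_CLUSTER_SIZE = 8.0;   // delta in Step 1

    /** Weight Wv = w1*sp + w2*Dv + w3*Mv + w4*Pv; the smallest weight wins. */
    static double weight(int degree, double sumOfDistances,
                         double avgSpeed, double powerConsumed) {
        double sp = 1.0 - Math.abs(OPTIMAL_CLUSTER_SIZE - degree) / OPTIMAL_CLUSTER_SIZE;
        return W1 * sp + W2 * sumOfDistances + W3 * avgSpeed + W4 * powerConsumed;
    }

    /** A node declares itself cluster-head if its own weight is the minimum
     *  among itself and all the weights advertised by its neighbors. */
    static boolean isClusterHead(double ownWeight, List<Double> neighborWeights) {
        for (double w : neighborWeights) {
            if (w < ownWeight) return false;
        }
        return true;
    }
}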


An example of four clusters with four cluster-
heads and different numbers of member nodes is
shown below.




[Figure: an example network of 14 nodes partitioned into four clusters (I-IV) with their cluster-heads]

Cluster I contains Node 1, Node 2, Node 4, Node 5
and Node 6; its cluster-head is Node 5. The clusters
do not share any node with another cluster.
Cluster II consists of Node 13, Node 14, Node 11
and Node 12; Node 14 is the cluster-head elected in
this scenario.
Similarly, Cluster III consists of Node 10, Node 7,
Node 8 and Node 9, with Node 10 as cluster-head.
Node 3 is a secluded node that forms a cluster of
its own, as it does not have any other connection for
weight calculation.
The dominant set therefore consists of Node 3,
Node 10, Node 14 and Node 5.

Phase II. Cluster Maintenance

The second phase is cluster maintenance. Two
distinct types of operations are defined for cluster
maintenance: the battery-power [1] threshold check
and the handling of node movement outside the
cluster boundary.

Node Movements
Node movements can take the form of a node
joining or a node leaving a cluster. These operations
have only local effects on the clustered topology if
the moving node is a CM node. If the leaving node is
a CH node, cluster reorganization has to be performed
for the nodes in the cluster by invoking the clustering
algorithm.
The concept of linear auto-regression is that,
given a time series of data, the autoregressive (AR)
model is a tool for understanding and predicting
future values in the series; it is used in statistics and
signal processing. For i = 1 ... P,

Xt = Xt-i + ( Σ (Xt-i - Xt-j) / N )

where Xt is the predicted value at time t, based on the
average rate of change of the previous values, N is the
total number of differences calculated, and i and j take
the values 1 and 2 respectively.
Using this regression technique, the time-series
data of the previous positions of a node are analyzed
and the next value in the series is predicted. The
cluster calculation is then done using this predicted
value [4, 8, 14].
It is found that the clusters formed with the
predicted values and with the actual values are the
same; thus the power of the mobile node [6] can be
saved by using the prediction and avoiding the beacon
signals between a mobile node and its cluster-head for
obtaining the geographical position of the node.
Bandwidth that should be used for data transfer is
also saved in this case, and a stable cluster topology
is obtained.
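
As a concrete illustration of the prediction step, the following hypothetical Java sketch extrapolates a node's next coordinate from its position history using the average first difference, in the spirit of the AR predictor above. The class name and the example trace are assumptions for illustration only.

// Hypothetical sketch of the mobility-prediction step: the next position is
// extrapolated from the node's recent position history using the average
// first difference (a simple AR-style predictor, as described above).
public class MobilityPredictor {

    /** history contains the node's past positions, ordered oldest to newest. */
    static double predictNext(double[] history) {
        int n = history.length;
        if (n < 2) return history[n - 1];          // not enough data to extrapolate
        double sumOfDifferences = 0.0;
        for (int k = 1; k < n; k++) {
            sumOfDifferences += history[k] - history[k - 1];
        }
        double avgRateOfChange = sumOfDifferences / (n - 1);   // N differences
        return history[n - 1] + avgRateOfChange;               // previous value + average change
    }

    public static void main(String[] args) {
        double[] xCoordinates = {10.0, 12.5, 15.0, 17.4};      // illustrative trace
        System.out.println("Predicted next x: " + predictNext(xCoordinates));
    }
}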

ALGORITHM FOR NEWLY ARRIVING NODE U

ACTION:
Broadcast a beacon signal to all its neighbor
nodes in the transmission range;
Process the signals received from the
neighbor nodes in the network and form the
connection matrix, A;

CALCULATION:
Degree R (U) using A;
Spreading degree, sp;
Sum of the distances, Du, with all its
neighbors;
Average speed, Mu;
Amount of battery power that has been
consumed, Pu;
Weight of U Wu;

ACTION:
If (a cluster-head already exists in the
neighborhood)
Send request to join the Cluster;
Else
Form a new cluster and declare
itself as the cluster-head;

Battery Power Threshold
The battery power of the nodes participating
in the clustering changes continuously. The cluster-
head's power [1] decreases more rapidly than that of
the cluster members. When the cluster-head's battery
power falls below a threshold, the node is no longer
able to perform its activities and a new head needs to
be chosen from the available members.
ALGORITHM FOR RE-ELECTION OF CLUSTER-HEAD



[Figure: Number of Nodes vs. Minimum Life-Span of Nodes, comparing WCA and MWCA at 3 km/s, 10 km/s and 30 km/s]

ACTION:
Verify the threshold on the Cluster-Head's
Battery power;
If (Battery power < Threshold)
Cluster-Head sends a
LIFE_DOWN message to all its
Neighbors;
All the Member nodes participate
in the Re-Election Procedure
using the Modified Weighted
Clustering Algorithm and the
Node with least weight is
selected as the New Cluster-Head;
Else
Re-election is not needed;
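
The maintenance trigger above can be sketched in a few lines. The following hypothetical Java fragment checks the cluster-head's battery level against a threshold and, if it has fallen below it, broadcasts a LIFE_DOWN notification so that the members re-run the election. The threshold value and the broadcast call are illustrative assumptions.

// Hypothetical sketch of the cluster-maintenance trigger described above.
// The broadcast call and the threshold value are illustrative assumptions.
public class ClusterHeadMonitor {
    static final double BATTERY_THRESHOLD = 0.2;   // assumed fraction of full charge

    /** Returns true if a re-election must be started for this cluster. */
    static boolean checkAndTriggerReElection(double headBatteryLevel) {
        if (headBatteryLevel < BATTERY_THRESHOLD) {
            broadcastLifeDown();    // notify all member nodes
            return true;            // members re-run the MWCA election
        }
        return false;               // re-election is not needed
    }

    static void broadcastLifeDown() {
        System.out.println("LIFE_DOWN broadcast to cluster members");
    }
}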

PERFORMANCE EVALUATION













The graph shows the advantage of using the Modified
Weighted Clustering Algorithm with Mobility Prediction
over the static Weighted Clustering Algorithm. It was
computed for a varying number of mobile nodes (10, 20,
30, 40, 50, 60 and 70) in an NS2 simulation environment.
Based on the calculation of the life span of the nodes,
the Modified WCA with Mobility Prediction performs
better than WCA. The life span of a node is inversely
proportional to the speed with which the node travels.
The life span is higher at all speeds (3 km/s, 10 km/s
and 30 km/s) for nodes clustered using the Modified WCA
than for WCA with the corresponding values. The largest
difference is found in the case of MWCA (3 km/s), which
performs much better than WCA. Using the NS2 simulation
environment, the behaviour of different node populations
is observed and the minimum life span is found to be
highest for the Modified WCA; thus the lower limit of the
minimum life span increases. For a node moving at lower
speed the life span is longer, and this feature is further
enhanced by our new algorithm.





The stability of the clusters formed using WCA and
the Modified WCA is compared [11, 18]. The computation
is for a varying number of mobile nodes (10, 20, 30, 40,
50, 60 and 70) in an NS2 simulation environment. When
the number of nodes is zero all the clusters are highly
stable, but as the number of nodes increases the stability
gradually decreases and stabilizes in the range of 40 to
80 nodes. The stable value for the Modified WCA is
higher, around 0.7, than that of WCA, which is around
0.5. Stability is an essential criterion for efficient
clustering, and in the NS2 simulation the Modified
Weighted Clustering Algorithm (MWCA) is shown to have a
higher stability value than the Weighted Clustering
Algorithm (WCA).

An NS2 simulation for an optimal 50 nodes is carried
out to observe the variation of throughput with the
maximum speed of movement, which are inversely related.
Throughput is the amount of useful information
transmitted between the nodes. The throughput decreases
as the speed with which a node travels increases. At
every speed level, three types of clustering (Lowest-ID,
WCA and the Modified Weighted Clustering Algorithm) are
compared. At speeds from 1 m/s to 7 m/s the Modified
Weighted Clustering Algorithm has better throughput than
the other clustering algorithms, but when the speed
increases to high values such as 8 m/s, 9 m/s and 10 m/s
the throughput of all the algorithms is drastically low;
in such cases the power utilization of the nodes is high.

The graphs show the better performance of the Modified
Weighted Clustering Algorithm in terms of throughput,
stability and life span [11]. Thus the Weighted
Clustering Algorithm, when modified with linear
auto-regression prediction, is a more efficient form of
clustering.

The lower the traffic generation rate, the higher the
availability of network resources. The number of sources
is 100, with varying traffic generation rates and speeds
used for this analysis. An NS2 simulation for 100 nodes
is executed and the traffic load and the normalized
control overhead values are tabulated. When the traffic
load increases, the control traffic required and the
overhead involved also increase, so the overall
normalized control overhead increases. In the case of
the Modified Weighted Clustering Algorithm (MWCA) the
overhead is minimum, because the cluster is maintained
using predicted mobility with high accuracy. As the
mobile node's position is predicted, the overhead
involved in maintaining signal-based position discovery
is avoided. The traffic load, measured in Kbits/s, is
varied from 100 Kbits/s to 900 Kbits/s and the
corresponding control overhead is measured on a scale of
5 and is unitless.

Effective utilization of power, minimum wastage of
bandwidth and more stable clusters help in improving the
overall life span of a node, and thus the lower limit of
the minimum life span increases. For a node moving at
lower speed the life span is longer; this feature is
further enhanced by our new algorithm.

The scalability analysis is done by varying the number
of source nodes in the network to create congestion. The
source variations range from 10 to 100 mobile node pairs,
with the speed of the source nodes varying at 10 m/s. The
connectivity maintenance gives the number of source nodes
connected; the connectivity of nodes provides a path for
data packets to be forwarded without loss, and the
comparison results are shown in the corresponding graph.
An NS2 simulation with a varying node count ranging from
10 to 100 is used to study the percentage of connectivity
in the wireless environment. The maximum possible
connectivity is 100%, where each node is connected with
every other node. The Modified Weighted Clustering
Algorithm (MWCA) shows 100% connectivity as the number
of transmitting nodes increases.

A mobility analysis in the NS2 simulated environment
with 100 nodes is carried out with different pause-time
values, which influence node mobility in the random
waypoint model; the lower the pause time, the higher the
mobility. The number of sources is 100, and varying pause
times are used for this analysis. The packet delivery
ratio is based on the number of packets delivered and the
number of packets sent; the higher the Packet Delivery
Ratio (PDR), the better the performance of the cluster.
The Modified Weighted Clustering Algorithm (MWCA)
technique approaches a ratio of one, or 100% delivery, as
the network becomes more static. Only under dynamic
conditions do the packets collide or get lost. As the
pause time increases, the static nature is enhanced. For
WCA and the Lowest-ID algorithm, even as the nodes become
static, not all the packets are delivered.

CONCLUSION

Effective utilization of power [6], minimum wastage of
bandwidth and more stable clusters help in improving the
QoS in MANETs. The Weighted Clustering Algorithm itself
is improved with the use of mobility prediction in the
cluster-maintenance phase.


REFERENCES

[1]. Ali Bokar, Muslim Bozyigit, & Cevat Sener (2009).
Scalable Energy-Aware Dynamic Task Allocation.
International Conference on Advanced Information
Networking and Applications Workshops.

[2]. Basagni .S. (1999). Distributed Clustering for Ad
Hoc Networks. International Symposium on Parallel
Architectures, Algorithms and Networks, pp. 310- 315.

[3]. Dhurandher.S.K., & Singh.G.V. (2006). Power
Aware Clustering Technique in Wireless Ad Hoc
Networks, International Symposium on Ad Hoc and
Ubiquitous Computing, ISAUHC '06

[4]. Dhurandher S.K., & Singh .G.V. (2005). Weight
Based Adaptive Clustering in Wireless Ad Hoc
Networks. IEEE International Conference on Personal
Wireless Communications, New Delhi, India, 95-100.

[5]. Gerla .M., & Tsai .J. T. C. (1995). Multi-cluster
Mobile Multimedia Radio Network. ACM/Baltzer
Wireless Networks Journal 95, vol. 1, pp. 255-265.

[6]. Hussein, Abu Salem.A.H., & Yousef .A.O. (2008).
A flexible weighted clustering algorithm based on
battery power for Mobile Ad hoc Networks, IEEE
International Symposium on Industrial Electronics.

[7]. Jieying Zhou, Jianfeng Chen, Weicong Xie, & Jing
Li (2007). Improved Weight Clustering Algorithm
for IDs in Mobile AdHoc Network, Wireless
Communications, Networking and Mobile Computing,
2007. WiCom 07, International Conference on volume
, Issue , 21-25.

[8]. Jing Wu , Guo-chang Gu, & Guo-zhao Hou (2009).
A Clustering Algorithm Considering on a Hierarchical
Topologys Stability for Ad Hoc Networks. First
International Workshop on Education Technology and
Computer Science.

[9]. Mohapatra.P., & Krishnamurthy.S.V. (2005). Ad
Hoc Networks Technologies and Protocols, Springer
Science + Business Media.

[10]. Ramanathan.R,. & Redi.J. (2002). A Brief
Overview of Ad Hoc Networks: Challenges and
Directions, IEEE Communication Magazine, 40(5).

[11]. Sharmila Anand John Francis, Elijah Blessing
Rajsingh, & Giss George (2009). Enhancing Stability
of Network through Clustering in Mobile Ad hoc
Networks. Third International Conference on Next
Generation Mobile Applications, Services and
Technologies

[12]. Sucec.J., & Marsic.I. (2002). Clustering overhead
for hierarchical routing in mobile ad hoc networks,
IEEE proceeding.

[13]. Toh.c.k, & Chai K Toh ( 2002). Ad Hoc Mobile
Wireless Networks protocols and Systems, New
Jersey:Prentice Hall PTR.

[14]. Vieu .V.B., Nasser .N., & Mikou .N. (2006). A
Weighted Clustering Algorithm Using Local Cluster-
heads Election for QoS in MANETs. IEEE GLOBECOM.

[15]. Wang .Y., Chen .H., Yang .X., & Zhang .D. (2007).
Wachm: Weight based adaptive clustering for large
scale heterogeneous MANET. Communications and
Information Technologies, ISCIT '07, pp. 936-941.

[16]. Yan Shuailing, Jiang Huawei, & Wang Gaoping
(2008). An Improved Clustering Algorithm Based on
MANET Network. International Symposium on IT in
Medicine and Education.

[17]. Yang .W.d., & Zhang .G.z. (2007). A Weight-
based Clustering Algorithm for mobile Ad Hoc
network. Third International Conference on Wireless
Communications.

[18]. Yoon-cheol Hwang, Yoon-Su Jeong, Sang-Ho Lee,
Jeong-Young Song, & Jin-Il Kim (2008). Advanced
Efficiency and Stability Combined Weight Based
Distributed Clustering Algorithm in MANET, Future
Generation Communication and Networking.

[19]. Yu .J.P. & Chong P.H.J. (2005).A Survey of
Clustering Schemes for Mobile Ad Hoc Networks.
IEEE Communications Surveys and Tutorials, Vol. 7,
No. 1, pp. 32-48.

[20]. Yuji Kawai, & Iwao Sasase (2008). A Stable
Clustering Scheme by Prediction of the Staying Time
in a Cluster for Mobile Ad Hoc Networks. APCC.

ENERGY-EFFICIENT NODE ACTIVATION SCHEME FOR OBJECT-TRACKING IN
DISTRIBUTED SENSOR NETWORKS
Rajashekar Kunabeva, Santosh Kumar.M
Dept.of Information Science and Engineering, GM Institute of Technology, Davanagere
rajashekarnk@gmail.com, mm_san65@yahoo.com
_________________________________________________________________________________________________________________________________

ABSTRACT
Lifetime maximization is one of the critical
elements in the design of sensor-network-based
surveillance and tracking applications. In order to fully
realize the potential of sensor networks, energy
awareness should be incorporated into every stage of
network design and operation. A sensor typically runs
on a battery with a limited lifetime. In this paper we
therefore address the energy management issue in
object-tracking sensor networks. Wireless sensor
networks contain a set of nodes that have limited
computational power and energy supply, so there is a
need for aggressive energy optimization techniques in
WSNs that use the minimum number of nodes for tracking
an object. In the proposed scheme, the currently active
node activates the next node when the object enters that
node's detection area, and then goes to sleep to save
its energy. The lifetime of the sensor network is
increased because only a minimum number of nodes is
engaged in the tracking activity.
Keywords:
Tracking, node activation, energy efficiency,
sensor network, Life time
I. INTRODUCTION
Recent advances in MEMS-based sensor
technology, low-power analog and digital electronics,
and low-power RF design have enabled the
development of relatively inexpensive
and low-power wireless micro-sensors [2, 3, 4].
A wireless sensor network is a network in which a
number of nodes are deployed in a region and connected
to each other by means of wireless communication.
Important limitations of sensor networks are:
Limited energy supply (battery driven)
Low storage memory
Less bandwidth
Limited processing
Considerable channel-induced errors

Advantages over traditional networks include:
Zero configurability
Less energy consumption in sensor nodes
Scalability
Low cost of a sensor node

Wireless sensor networks have found applications in
medical, industrial, military, environment-monitoring,
chemical-detection and underwater domains, and many
more. Object tracking is a killer application of
wireless sensor networks that opens up many research
issues.


Figure 1. Object tracking in a wireless sensor network
field

We consider N sensor nodes arranged in a structured
sensor network to track an object. Each sensor has a
limited sensing area of radius 'R'. Many earlier studies
have tried to minimize communication cost or to trade it
off against computation at the sensor node. We focus
here on two aspects: first, we study the OTSN
application with respect to the number of nodes required
to track an object; second, we avoid redundant data
floating in the network, to reduce unnecessary energy
consumption at relay nodes. We aim to increase the
lifetime of the sensor network by engaging a minimum
number of nodes for tracking an object while putting the
rest of the nodes into sleeping mode.

II OBJECT TRACKING SENSOR NETWORKS

There are many research issues in the design and
implementation of object-tracking sensor networks
(OTSNs), including data fusion, aggregation, routing
and energy conservation. Among these, energy
conservation is one of the most critical. Like other
sensor networks, the OTSN is driven by scarce energy
resources; therefore, energy saving is the major issue
addressed in this paper. In the following, we first
provide some background on OTSNs, describe the
assumptions made in this paper, and discuss the factors
that contribute to the energy consumption and design
complexity of OTSNs.

Sensor: a transducer that converts a physical
phenomenon into transformable signals.
Sensor node: a basic unit in a sensor network, with
an on-board sensor, processor, memory, wireless
modem and battery-driven power supply. The sensor
nodes are enabled for computation, sensing and
communication by the Micro-Controller Unit (MCU),
the sensor components and the RF radio component
respectively. To facilitate energy conservation, most
of today's sensor nodes allow these three basic
components to be deactivated separately when they
are not needed.

Network topology: the sensor network is represented
as a connected graph. In a wireless sensor network a
link represents a one-hop connection, and the
neighbors of a node are those within the radio range
of the node.



Figure 2 shows that each node senses its reachable
area and, upon event detection (i.e. when an object is
detected), data is sent to a designated relay node,
which routes the data towards the gateway. The gateway
forwards it further to the server database, and the data
can be read by issuing a query of interest. As noted
above, most of today's sensor nodes allow the MCU,
sensor and radio components to be deactivated
separately when they are not needed. Thus, in this
paper, we assume that each sensor node is a logical
representation of a set of sensor nodes which
collaboratively decide the properties of a moving
object; in other words, the sensor nodes referred to in
this paper are the sensing leaders or cluster-heads in a
multi-level sensor network. We also assume that the
moving objects are identifiable.

The objects are electronically tagged or can be
identified based on the pre-embedded object-code table
in the sensor nodes, which classifies all the objects,
such as jaguars, elephants and pedestrians. A unique
object ID is assigned to each tracked object.

Two issues of concern are:
Object is tracked with its pinpoint location.

Object is tracked without knowing the exact
location.

When an object is tracked with its exact location, at
least three nodes are needed in the two-dimensional
plane to detect the object and then find its exact
position, where 'r' is the sensing radius of a node.
Each node has a limited detection range, so the
intersection of the three sensing regions gives the
exact location of the object.

















In this paper we assume that the wireless sensor
network simply tracks the object without knowing its
exact location.

III. BASIC SCHEMES
We first study some basic energy-saving schemes for
OTSNs:
Naïve
Scheduled Monitoring (SM)
Continuous Monitoring (CM)

Naïve: In this scheme, all the sensor nodes stay in
active mode to monitor their detection areas all the
time. As such, the objects in the network are always
tracked and their locations are reported to the base
station (every T seconds) by the nodes that have the
objects in their detection areas. We introduce it here
to illustrate the principles used in the other
energy-saving schemes and later use it as a baseline
for comparison with our proposed scheme.

Scheduled Monitoring (SM): Assuming that all the
sensor nodes and the base station are well
synchronized, all the sensor nodes can go to sleep and
only wake up when it is time to monitor their detection
areas and report the sensed results.
Thus, in this scheme, all S nodes are activated for X
seconds and then sleep for (T - X) seconds. Sensor
nodes spend minimal time in active mode and stay in
sleep mode as long as they can. Hence, a significant
amount of energy is saved if the application does not
need frequent reports from the network. However, in
order to capture the moving objects (i.e. to ensure no
missing reports), the number of sensor nodes involved
in object tracking is more than needed.
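
To put rough numbers on this duty cycle, the following small Java calculation estimates the energy spent per reporting period T under Scheduled Monitoring, using the 1 J/s active-mode and 0 J/s sleep-mode figures assumed later in this paper. The node count and timings in the example are illustrative only.

// Back-of-the-envelope sketch of the energy spent per reporting period T
// under Scheduled Monitoring, assuming 1 J/s in active mode and 0 J/s asleep.
public class ScheduledMonitoringEnergy {

    /** Energy consumed by S nodes that stay awake X seconds out of every T. */
    static double energyPerPeriod(int nodeCount, double activeSeconds) {
        double activePowerJoulesPerSec = 1.0;   // assumed active-mode power
        return nodeCount * activeSeconds * activePowerJoulesPerSec;
    }

    public static void main(String[] args) {
        // Example: 22 nodes awake for 1 s out of every 10 s period.
        System.out.println(energyPerPeriod(22, 1.0) + " J per period");
    }
}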

Continuous Monitoring (CM): Instead of having all the
sensor nodes in the field wake up periodically to sense
the whole area, only the sensor node that has the object
in its detection area is activated. An awake node
actively monitors the object until the object enters a
neighboring cell; it may wake up the destination node
(hand-off) W seconds before the object enters. However,
to ensure no missing reports, the active sensor has to
stay awake while there is an object in its detection
area.

The ideal scheme is therefore the one in which the
fewest nodes participate in tracking while the
object-missing rate remains 0%.

IV. PROPOSED SCHEME
The solution space of energy-saving schemes for OTSNs
suggests that the ideal scheme has only one active node.
In our scheme we assume that all the boundary nodes of
the sensor network area are self-activated nodes,
continuously in contact with the cluster head. The next
node is decided and activated by the self-activated
(boundary) node; that node in turn decides and activates
the next node, and this process continues as long as the
object remains in the sensor network area.
It is possible that every time an object enters the
region the same node is activated again and again by the
boundary node, causing the early death of that node. To
avoid this problem, when the object enters the sensor
network region the first (boundary) node is activated,
and it then randomly activates one of the nodes lying
in, or sharing, its detection region.


The node activates the next node to track the moving
object and goes to sleep mode only after receiving an
acknowledgement from the node being activated;
otherwise it stays active.
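
The hand-off just described can be sketched as follows. This hypothetical Java fragment lets the active node randomly pick one of the neighbors sharing its detection region, wake it, and sleep only after an acknowledgement; the Node type, the acknowledgement flag and the selection rule are illustrative abstractions, not a real sensor-node API.

import java.util.List;
import java.util.Random;

// Hypothetical sketch of the activation hand-off described above.
public class HandoffTracker {
    static class Node {
        final int id;
        boolean active;
        Node(int id) { this.id = id; }
    }

    private final Random rng = new Random();

    /** The active node randomly picks one of the neighbors sharing its
     *  detection region, wakes it, and sleeps only after an acknowledgement. */
    Node handOff(Node current, List<Node> candidates, boolean ackReceived) {
        Node next = candidates.get(rng.nextInt(candidates.size()));
        next.active = true;            // activate the chosen neighbor
        if (ackReceived) {
            current.active = false;    // go to sleep to save energy
        }                              // otherwise stay active and retry later
        return next;
    }
}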

V. ASSUMPTIONS AND COMPARISONS

Single-target tracking. We assume that only one
target enters the area under surveillance.
Coverage. We assume that the tracking field is
fully covered by the sensor nodes' sensing ranges.
Node location. The locations of the sensor nodes
are assumed to be unknown.
Time synchronization. All the sensor nodes are
assumed to be well time-synchronized.
Architecture. We assume that all the sensor nodes
are homogeneous and are topologically arranged in a
hierarchical fashion.
We also assume the tracking field is flat, so that
simple two-dimensional Cartesian coordinates can be
used for the object location.
Sink node. We do not assume the existence of sink
nodes. All the tracking behaviour is completed
locally around the target without the intervention
of sink nodes.
Energy consumed by an inactive node is 0 J/s.
Energy consumed by an active node is 1 J/s.
The speed of the moving object is the same from
source to destination, without variation at
intermediate points along the path.

VI SIMULATION RESULTS.

We have performed a Java-based simulation to obtain
results for the proposed and trivial schemes.
The simulation setup consists of 22 sensor nodes
uniformly placed across a 900 x 900 sq. m area.
Battery reduction is taken as 5%. The key contribution
of the proposed scheme is the reduction in energy
consumption. In this paper we have compared the Naïve
scheme with the proposed scheme with respect to energy
consumption and the number of nodes required to track
one object.

Name of scheme  | Total nodes | Active nodes | Energy consumption (Joules)
Naïve           | 22          | 22           | 22
Proposed Scheme | 22          | 6            | 6


The proposed scheme is better than the Naïve scheme
and consumes about 73% less energy.


Fig 5 Energy consumption for Naive scheme


Fig 6 Energy consumption for proposed scheme

CONCLUSION AND FUTURE WORK.

Much of the research carried out so far focuses mainly
on turning off the radio component to save energy. We
have focused on computation and on the number of nodes
required to track a single object moving through the
area. Our scheme activates only one node while the rest
are kept in sleeping mode; hence the scheme consumes
less energy than the Naïve scheme and thus increases
the lifetime of the OTSN.
We intend to enhance this work in future by taking into
consideration:
varying speed of the object along the path
mobility of the base station and nodes
randomly distributed sensor nodes
scalability

REFERENCES
[1] www.en.wikipedia.org
[2] Chandrakasan, Amirtharajah, Cho, Goodman,
Konduri, Kulik, Rabiner, and Wang. Design
Considerations for Distributed Microsensor Systems.
In IEEE 1999 Custom Integrated Circuits Conference
(CICC), pages 279-286, May 1999.
[3] Clare, Pottie, and Agre. Self-Organizing
Distributed Sensor Networks. In SPIE Conference on
Unattended Ground Sensor Technologies and
Applications, pages 229-237, Apr. 1999.
[4] M. Dong, K. Yung, and W. Kaiser. Low Power
Signal Processing Architectures for Network
Microsensors. In Proceedings 1997 International
Symposium on Low Power Electronics and Design,
pages 173-177, Aug. 1997.
[5] J. Elson, L. Girod, and D. Estrin. Fine-Grained
Network Time Synchronization Using Reference
Broadcasts. SIGOPS Oper. Syst. Rev., pages 147-163,
2002.
[6] R. Stoleru, J. A. Stankovic, and S. Son. Robust
Node Localization for Wireless Sensor Networks. In
EmNets, 2007.
[7] J. Fuemmeler and V. Veeravalli. Smart Sleeping
Policies for Energy Efficient Tracking in Sensor
Networks. IEEE Transactions on Signal Processing,
2007.
[8] N. A. Vasanthi, S. Annadurai. Energy Saving
Schedules for Target Tracking Sensor Networks to
Maximize the Network Lifetime. IEEE Proceedings,
2006.
[9] B. Jiang, K. Han, B. Ravindran, H. Cho. Energy
Efficient Sleep Scheduling Based on Moving
Directions in Target Tracking Sensor Networks. IEEE
Proceedings, 2008.
[10] Project Report: Object Tracking in Wireless
Sensor Networks. Pulkit Gambhir, Mohit Rajani, July
25, 2005.
[11] Distributed Energy-Efficient Solutions for Area
Coverage Problems in Wireless Sensor Networks, by
Chinh Trung Vu, under the direction of Yingshu Li.
[12] JDK 1.6 package from www.java.sun.com.





























































RELIABLE DATA SECURITY ARCHITECTURE FOR MOBILE AD HOC
NETWORKS
M. Renuka, M.C.A., M.Phil., (Ph.D.)¹, Dr. P. Thangaraj²
Asst. Professor, Department of Applied Science, SSM College of Engineering, Komarapalayam¹.
Professor & Dean, School of CT & Application, Kongu Engineering College, Perundurai².
mr.renuka@gmail.com, 9965010674¹; ctptr@yahoo.co.in, 9842720572².
_________________________________________________________________________________________________________________________________

ABSTRACT
Mobile ad hoc networks have proved their efficiency
when deployed in different fields, but they are highly
vulnerable to security attacks, which are even more
challenging in wireless networks. Existing research
provides authentication, confidentiality, availability,
secure routing and intrusion detection in ad hoc
networks. Ad hoc network characteristics should be
taken into consideration to design efficient data
security along the path of transmission. This work
presents a reliable data security architecture (RDSA)
that improves the data transmission security of ad hoc
networks using reliable multi-path routing. Reliable
multiple paths between nodes in the ad hoc network
increase the security level of transmitted data. The
original message to be secured is split into parts that
are transmitted over reliable multiple paths, and the
disseminated message parts are encrypted in the course
of transmission to improve security further. Simulation
experiments are conducted on the proposed RDSA approach
and compared with existing ad hoc multi-path security
solutions. RDSA shows better performance than the
generic data security architecture in terms of path
stability and data loss, at 5% and 7% respectively.

1. INTRODUCTION

Security is a critical issue in a mobile ad hoc
network because the primary applications of ad hoc
networks are the military applications, such as the
tactical communications in a battlefield, where the
environment is hostile and the operation is security-
sensitive. As compared with a fixed or a wired
network, the characteristics of an ad hoc network
pose many new challenges in security. For example,
the wireless channels are more susceptible to various
forms of attacks such as passive eavesdropping,
active signal interference, and jamming. The co-
operative nature of ad hoc protocols makes it more
vulnerable to data tampering, impersonation, and
denial of service. The lack of a fixed infrastructure
restricts the applicability of some conventional
security solutions, such as a Public Key Infrastructure
(PKI), which relies on a centralized trusted authority,
and intrusion detection systems, which need a
concentration point to collect audit data. The limited
resources of mobile devices, such as battery power,
also limit the practical deployment of more
comprehensive security schemes in an ad hoc network.
Finally, the continuous and unpredictable ad hoc
mobility clouds the distinction between normalcy and
anomaly, thus making the detection of malicious
behavior difficult.

2. LITERATURE REVIEW

A few research works have been done to
address the security issues in ad hoc networks.
Security issues that have been addressed particularly
for ad hoc networks include key management [1],
secure routing protocols [2], handling node
misbehavior [3], preventing traffic analysis, and so
on [4]. In this paper, we address the data
confidentiality service in an ad hoc network. The data
confidentiality is the protection of data from passive
attacks such as eavesdropping while they are
transmitted across the network. The wireless
channel in a hostile environment is vulnerable to
various forms of attacks, particularly the
eavesdropping. A more severe problem in a MANET
is that mobile nodes might be compromised
themselves (e.g., nodes be captured in a battle field
scenario) and subsequently be used to intercept
secret information relayed by them. In [5], we
proposed a SPREAD (Secure Protocol for REliable
dAta Delivery) scheme to statistically enhance the
data confidentiality service in an ad hoc network.
SPREAD is based on secret sharing and multi-path
routing. Multi-path routing has been extensively
studied in a wired network context for aggregating
bandwidth, reducing blocking probability, and
increasing the fault tolerance, etc. [12]. However, the

shared wireless channel has a significant impact on
the performance of multi-path routing [9]. In this
work we study, by simulation, the security performance
of the Data Security Architecture over multiple paths
with encrypted message parts.

3. MULTI-PATH DATA SECURITY IN AD
HOC NETWORKS

The idea behind the proposed data security in the
multi-path routing protocol is to divide the initial
message into parts, then to encrypt and combine these
parts in pairs, and to use the existence of multiple
paths between nodes in an ad hoc network to increase
the robustness of confidentiality. This is achieved by
sending the encrypted combinations over the different
existing paths between the sender and the receiver. In
our solution, even if an attacker succeeds in obtaining
one or more of the transmitted parts, the probability
that the original message can be reconstructed is low.

3.1 Multi-path Routing Topology

The originality of the proposed approach is that it
does not modify the existing lower-layer protocols. The
constraints applied in the security protocol are: the
sender 'A' and the receiver 'B' are authenticated; a
session key and a message key are used for the
encryption/decryption of frames at the MAC layer and
for the authentication of the terminals; a mechanism
for discovering the topology of the network is
available; and the protocol uses a routing protocol
supporting multi-path routing.

3.2 Multiple path message Transmission

With knowledge of the network topology, the proposed
security model uses n routes (the message is divided
into n - 1 shares). One path is used for signaling, a
second one is used to transmit in plain text a key
share (randomly chosen) used to initiate the
decombination process, and the others (n - 2 paths)
are used to transmit the different shares of the
original message. Therefore the proposed data security
multi-path protocol requires at least 3 links.
3.3 Algorithm for Multi-path message
transmission
The expressions used to describe the multi-path parted
message transmission algorithm are defined below.
m: the message to be sent securely between A and B.
Dividing m into n - 1 parts gives P(m) = {c1, c2, ..., c(n-1)}.
Tp(ntwk): a function invoked periodically to discover the
topology of the ad hoc network; it returns true if the
topology has changed, otherwise false.
Frequency: the frequency of topology refreshing.
N(A, B): the number of links between A and B.
n: an integer with 3 <= n <= N(A, B).
The original message m is divided into (n - 1) shares, each
with a unique identifier. The path numbers assigned to the
message parts are selected randomly and sent on the
signaling channel of the multi-path routing protocol; during
the next message transmission, different paths are used for
the parted messages, usually generated through a
pseudo-random model. One share, the key share, is
transmitted in plain text on one of the n paths and is the
starting point for the receiver to recover the other parts.
Concerning the manner of dividing messages, a channel-coding
approach called Diversity Coding is used to recover from
link failures.
Finally, the n - 1 parts of m are combined in pairs using a
pseudo-random operation related to the final path. On the
nth link, which is considered the signaling channel, the
value of the pseudo-random number and the numbers of the
paths on which the message parts are transmitted are sent.
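As an illustration of the splitting and path-assignment step described above, the following Python sketch divides a message into n - 1 shares, keeps one randomly chosen key share in plain text, combines the remaining shares with the key share, and draws random path numbers. The XOR pairing and all names here are illustrative assumptions, not the paper's exact combining function or Diversity Coding scheme.

import random

def split_message(message: bytes, n_paths: int):
    # Divide m into n-1 roughly equal shares: P(m) = {c1, c2, ..., c(n-1)}
    assert n_paths >= 3, "the scheme needs at least 3 disjoint paths"
    n_shares = n_paths - 1
    size = -(-len(message) // n_shares)                  # ceiling division
    shares = [message[i * size:(i + 1) * size] for i in range(n_shares)]
    # One randomly chosen key share is later sent in plain text to start decombination.
    key_index = random.randrange(n_shares)
    key = shares[key_index].ljust(size, b"\x00")
    # Combine the remaining shares with the key share (illustrative XOR pairing).
    combined = [bytes(a ^ b for a, b in zip(s.ljust(size, b"\x00"), key))
                for i, s in enumerate(shares) if i != key_index]
    # Random, distinct path numbers; path 0 is reserved as the signalling channel.
    data_paths = random.sample(range(1, n_paths), n_shares)
    signalling = {"key_share_path": data_paths[0], "share_paths": data_paths[1:]}
    return shares[key_index], combined, signalling

plain_key, encrypted_shares, signalling = split_message(b"secret message", n_paths=4)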

3.4 Encryption of Messages in the multiple
paths

The message parts on every data channel are
sent encrypted with WPA to reinforce confidentiality.
WPA encryption integrates efficiently with the multi-path
routing protocol in the proposed simulation and gives a
second layer of protection to data confidentiality.
Combining SDMP with WPA thus improves both security and
transmission efficiency. The parts' identifiers are sent so
that the receiver can reconstitute the original message in
the correct order. For fault tolerance, the Diversity
Coding technique, which is based on information redundancy,
is used.

Even if an attacker succeeds in obtaining one
or more parts of the transmitted message, the probability
of reconstructing the message is low: the attacker must
have all the parts, which means he or she has to eavesdrop
on all the used paths or be near A or B. Furthermore, he or
she must know our combining function and be able to break
the WPA encryption. SDMP can be deployed in an ad hoc
network with software modifications only.

4. DATA SECURITY ARCHITECTURE (DSA)

We design an application layer situated on top of
the network (IP) layer that manages the proposed two-level
data security solution for sending data securely. A
specific header, called the DSA header, is added to carry
the information needed to ensure security. The DSA layer
sits between two important layers. The first is the IP
layer, which provides our protocol with important
information about routing, the number of available routes
and their quality, depending on the routing protocol used.
The second is the transport layer (TCP/UDP), which manages
retransmission if needed, especially when the topology has
changed after the data transmission has started.
The Data Security Architecture introduces a
set of features that can be incorporated with low overhead
without modifying lower-layer protocols. Both sender and
receiver must implement the DSA layer to use this protocol.
Before sending data between the sender (A) and the
destination (B), the topology is provided in order to
calculate the number of different routes n between A and B.
If n < 3, an error message is generated; otherwise, the
routes that will be used to transmit data securely are
chosen from the n existing routes according to a cost
function explained in detail in the next section.

4.1 Paths selection in DSA

In an ad hoc network the topology changes
frequently, which makes wireless links unstable. Packets
might be dropped due to bad wireless channel conditions,
collisions between multi-path transmissions at the MAC
layer, or out-of-date routing information. When packet loss
does occur, a non-redundant share allocation prevents the
reconstruction of the message at the intended destination.
To deal with this problem, it is necessary to introduce
some redundancy (if there are enough paths) into the Data
Security Architecture (DSA) to improve reliability, i.e.,
the destination has a better chance of receiving enough
shares to reconstruct the initial message. The decision on
whether to use redundancy is taken according to the average
mobility of the network's nodes and the number of existing
paths.
DSA is based on multipath routing in ad hoc
networks. The question is how to find the desired
multiple paths in a mobile ad hoc network and how
to deliver the different message parts to the
destination using these paths. Routing in an ad hoc
network presents a great challenge because the nodes
are capable of moving and the network topology can
change continuously and unpredictably. The Dynamic
Source Routing (DSR) protocol is capable of
maintaining multiple paths from the source to a
destination. This on-demand protocol works by
broadcasting route inquiry messages throughout
the network and then gathering the replies from the
destination. Even though DSR is able to find multiple
disjoint paths, these paths might not be optimal for the
SDMP scheme, because path selection in such routing
protocols is usually based on hop count or propagation
delay. In our case, however, security must be the essential
parameter in choosing the different paths, so that the
message security is maximized.
A path is considered compromised when any one or
more of the nodes along the path are compromised. For each
path, we consider that if it is compromised, all the shares
allocated to it are compromised; otherwise, all shares on
that path are safe. As the paths are node-disjoint, we
further assume that the probability that one path is
compromised is independent of the others.
Assume that there are n disjoint paths,
path1, path2, ..., pathn, available from the source to
the destination. Use the vector P = (p1, p2, ..., pn) to
denote the security parameters of the paths, where pi
(i = 1, 2, ..., n) is the probability that path i is
compromised. Assume also that p1 <= p2 <= ... <= pn, which
means that the paths are ordered from the most secure to
the least secure. Note that the path security information P
is obtained at the source from the multi-path routing
protocol in use. If one node is compromised, all the shares
travelling through that node are compromised. The
probability pi does not include the probability that the
source or the destination node is compromised; the source
and the destination are both assumed reliable.

The maximum security provided depends only
on the chosen paths. As each pi is a probability
satisfying 0 <= pi <= 1, the more paths we use to
distribute the message parts, the smaller the probability
that all of them are compromised, and the more secure the
delivered message. It is intuitive that a non-redundant
secret sharing scheme provides the maximum security to the
message, because it gives an enemy fewer chances to obtain
all message parts. However, it requires the successful
reception of all the parts and knowledge of the combination
function used.
In an ad hoc network, wireless links are not
stable. Redundancy is a common way to improve the
reliability. It is based on the idea of sending more
information than the minimum requirement, so that the
original message can be reconstructed if any loss
happens in the network. The path selection criterion is
therefore to choose at least the first m most secure
paths among the n existing ones (where m is the number of
message parts). We also assume that the signaling
information is sent on the most secure selected path
because of the sensitive data it contains.
Let qi be the probability that node ni is
compromised. Then the probability that a path from
A to B consisting of nodes A, n1, n2, ..., nl, B is
compromised is
p = 1 - (1 - q1)(1 - q2)...(1 - ql).
Since we consider the protection of messages while they are
transmitted across the network, we assume that the source
and the destination are safe, i.e., qs = qd = 0. Note that
the probability qi indicates the security level of node i
and could be estimated from the feedback of security
monitoring software or hardware such as firewalls and
intrusion detection devices. It could also be assigned
manually by administrators based on the level of physical
protection of nodes, their positions, etc.
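The two probabilities above can be computed directly. The following Python sketch evaluates p = 1 - (1 - q1)(1 - q2)...(1 - ql) per path and, for the non-redundant case, the probability that every chosen path (and hence the whole message) is compromised; the numeric values are illustrative only.

from math import prod

def path_compromise_probability(node_probs):
    # p = 1 - (1 - q1)(1 - q2)...(1 - ql) over the intermediate nodes of one path;
    # the source and destination are assumed safe (qs = qd = 0).
    return 1.0 - prod(1.0 - q for q in node_probs)

def message_compromise_probability(path_probs):
    # Non-redundant sharing: the attacker needs every share, so every chosen
    # path must be compromised and the per-path probabilities multiply.
    return prod(path_probs)

# Illustrative per-node compromise probabilities for three disjoint paths.
paths = [[0.05, 0.10], [0.02, 0.02, 0.08], [0.15]]
p = sorted(path_compromise_probability(q) for q in paths)   # p1 <= p2 <= p3
print(p, message_compromise_probability(p))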

5. EXPERIMENTAL PERFORMANCE

NS-2 is used to simulate an ad hoc network
with 50 nodes randomly deployed in a 500 m by 500 m area.
Nodes have an equal transmission range within each
simulation, and the range can vary between simulations; we
use two transmission ranges of 100 m and 100 m. The effect
of routing protocols is factored out of the simulations, so
we assume that the network topology is known and that the
routes considered are disjoint. In the third and fourth
evaluations (eavesdropping and overhead), node mobility is
chosen randomly in the interval [0, 2 m/s]. In the first
set, all nodes are assumed equally likely to be compromised
with probability qi, so all links have the same cost. In
the second set, each node is assigned a probability
randomly. The maximum number of paths the algorithm is able
to find is independent of the link costs; it depends solely
on the network topology, although the paths selected by the
SDMP algorithm for sending message parts differ for
different link costs.
Under the characteristics of the second simulation set, it
is not necessarily true that more paths imply higher
security, because that depends on the security level of the
paths used. Both cases, equal costs and different costs,
are experimented with each time. The probability of finding
multiple disjoint paths in an ad hoc network is quite high,
which supports the feasibility of our protocol RDSA, which
is based on the use of multiple paths in ad hoc networks.
Under the set-1 conditions (all nodes equally likely to be
compromised with probability qi) there is at least one
opportunity to find multiple paths, and the larger the
transmission range, the more disjoint paths exist.
In this simulation, a message is considered
compromised if there are at least m compromised nodes on
the m disjoint paths. The probability of compromised
messages decreases quickly as the path number increases,
especially when more than 5 paths are used; this
demonstrates the efficiency of using multiple paths in
RDSA. We also observe that when nodes have different
security levels, RDSA selects the more secure paths. In the
third simulation set, we use the same message length, vary
the path number, use redundancy, and consider the nodes
randomly mobile (max 2 m/s). A message is considered
dropped if the destination does not receive all the parts
needed to reconstitute it. Using multiple paths allows us
to profit from redundancy, which decreases the probability
of dropping messages. A high transmission range is also an
important factor in decreasing the probability of dropped
messages.

Figure 1: Reliable path for RDSA and DSA

The reliable paths identified for RDSA are
better than those of DSA as the number of nodes is varied in the
ad hoc network, as shown in Figure 1. The reliability
increases with RDSA because path stability is properly
managed for secure data transmission. Utilizing the
reliable multi-path restricts the adversaries' ability to
affect the data being transmitted. In addition, it involves
only a modest trade-off to obtain robust security in ad hoc
networks, which are very hard to secure.


Figure 2: Data loss measure of RDSA and DSA in ad hoc
networks

The RDSA model proposed in this work
suffers less data loss on the reliable path under adversary
effects than DSA (depicted in Figure 2). The simulation is
carried out for various node mobility factors to measure
the loss during secure data transmission; it shows about 7%
less data loss for RDSA than for DSA.
6. CONCLUSION
The reliable data security architecture
presented in this work provides a highly reliable, secured
data path for on-demand ad hoc networks. The encryption of
parted data messages in MANETs raises the security to the
next level. This represents a step towards a formal
security model that deals with levels of security in
safeguarding message transmissions. In the context of
mobility, RDSA requires that route discovery take place
simultaneously with reliable data path selection.
Consequently, the proposed model prevents adversarial nodes
from breaking up routes, by inserting alternate paths for
the parted messages. The simulation results show that RDSA
with encrypted message parts provides better security than
the conventional DSA (by 5% and 7%) in terms of path
stability and data loss, respectively.

7. REFERENCES
[1] Y.-C. Hu, A. Perrig and D. B. Johnson, Ariadne : a
secure on-demand routing protocol for ad hoc
networks, MobiCom 2002, Sep 2002
[2] W. Lou, Y. Fang, A survey of wireless security in
mobile ad hoc networks: challenges and available
solutions, book chapter in Ad Hoc Wireless
Networking, Kluwer, May 2003
[3] P. Papadimitratos and Z. Haas, Secure Routing
for Mobile Ad Hoc Networks, Proc. SCS Comm.
Networks and Distributed Systems Modeling and
Simulation Conf. (CNDS 02), 2002.
[4] W. Lou, Y. Fang, Securing data delivery in ad hoc
networks, International Workshop on Cryptology
and Network Security (CANS03), Miami, FL, Sep
2003
[5] C.E. Perkins and E.M. Belding-Royer, Ad-Hoc On-
Demand Distance Vector Routing, Proc. Second
Workshop Mobile Computing Systems and
Applications (WMCSA 99), pp. 90-100, 1999.
[6] M.G. Zapata, Secure Ad Hoc On-Demand Distance
Vector Routing, Mobile Computing and Comm. Rev.,
vol. 6, no. 3, pp. 106-107, 2002.
[7] P. Papadimitratos and Z. Haas, Securing Mobile
Ad Hoc Networks, Handbook of Ad Hoc Wireless
Networks, M. Ilyas, ed., CRC Press, 2002.
[8] K. Sanzgiri, B. Dahill, B.N. Levine, C. Shields, and
E.M. Belding-Royer, A Secure Routing Protocol for
Ad Hoc Networks, Proc. IEEE Intl Conf. Network
Protocols (ICNP 02), pp. 78-89, 2002.
[9] Y.-C. Hu, D.B. Johnson, and A. Perrig, SEAD:
Secure Efficient Distance Vector Routing for Mobile
Wireless Ad Hoc Networks, Ad Hoc Networks, vol. 1,
no. 1, pp. 175-192, 2003.
[10] Y.-C. Hu, A. Perrig, and D.B. Johnson, Packet
Leashes: A Defense against Wormhole Attacks in
Wireless Networks, Proc. IEEE INFOCOM, 2003.
[11] Y.-C. Hu and A. Perrig, A Survey of Secure
Wireless Ad Hoc Routing, IEEE Security and Privacy,
vol. 2, no. 3, pp. 28-39, Mar. 2004.
[12] L. Buttyán and I. Vajda, Towards Provable
Security for Ad Hoc Routing Protocols, Proc. ACM
Workshop on Security of Ad Hoc and Sensor Networks
(SASN 04), 2004.
[13] G. Ács, L. Buttyán, and I. Vajda, Provably
Secure On-Demand Source Routing in Mobile Ad Hoc
Networks, Technical Report 159, Intl Assoc. for
Cryptologic Research, 2004.
[14] G. Ács, L. Buttyán, and I. Vajda, Provable
Security of On-Demand Distance Vector Routing in
Wireless Ad Hoc Networks, Proc. European
Workshop on Security and Privacy in Ad Hoc and Sensor
Networks (ESAS 05), pp. 113-127, 2005.
[15] G. Ács, L. Buttyán, and I. Vajda, Provably
Secure On-Demand Source Routing in Mobile Ad Hoc
Networks, IEEE Trans. Mobile Computing, vol. 5, no.
11, pp. 1533-1546, Nov. 2006.


REDUCTION OF MOTION BLUR
apeksha.r.reddy, sahana.bhattramakki
SDMCET
apeksha.r.r@gmail.com, buttersahana@gmail.com
___________________________________________________________________________________________

ABSTRACT
In this project, we propose methods for the reduction
of motion blur using a flutter shutter. We assume a flutter
shutter camera and present a simple, intuitive method to
deblur its images. Instead of leaving the shutter of the
camera open for a long duration, we flutter the shutter
with a particular coded sequence. The blurred image
captured by the camera is the convolution of the scene with
the camera's point spread function. Hence we propose a
method to first estimate the PSF and then deconvolve the
image to obtain an unblurred, clear result. It can be used
in applications where the blur is caused either by motion
of the object or by motion of the camera.
I. INTRODUCTION
Motion blur is a result of relative motion between the
camera and scene during the integration time of the
image. Motion blur is used for aesthetic purposes and
also used in computer graphics to create more
realistic images which are pleasing to eye.[1] Despite
its usefulness to human viewers, motion is often the
bane of photography: the clearest, most detailed
digital photo requires a perfectly stationary camera
and a motionless scene. Relative motion causes
motion blur in the photo. Current practice presumes
a 0th order model of motion; it seeks the longest
possible exposure time for which moving objects will
still appear motionless. Our goal is to address a first-
order motion model: movements with constant
speed rather than constant position. Ideally, the
camera would enable us to obtain a sharp, detailed
record of each moving component of an image, plus
its movement. We take a first step towards this goal by
recoverably encoding large, first-order motion in a
single photograph. We rapidly open and close the
shutter using a pseudo-random binary sequence
during the exposure time so that the motion blur
itself retains decodable details of the moving object.
This greatly simplifies the corresponding image
deblurring process. We then use deconvolution to
compute sharp images of both the moving and
stationary components within it, even those with
occlusions and linear mixing with the background.
Very often, motion blur is simply an undesired effect.
It has plagued
photography since its early days and is still
considered to be an effect that can significantly
degrade image quality. In practice, due to the large
space of possible motion paths, every motion
blurred image tends to be uniquely blurred. This
makes the problem of motion deblurring hard.
Motion blurred images can be restored (up to lost
spatial frequencies) by image deconvolution,
provided that the motion is shift invariant, at least
locally, and that the blur function (point spread
function, or PSF) that caused the blur is known.

1.1 Motivation
Moving cameras cause motion blur. The exposure
time defines a temporal box filter that smears the
moving object across the image by convolution. This
box filter destroys important high-frequency spatial
details so that de- blurring via deconvolution
becomes an ill-posed problem. Rather than leaving
the shutter open for the entire exposure duration, we
flutter the camera's shutter open and closed during
the chosen exposure time with a binary pseudo-
random sequence. The flutter changes the box filter
to a broad-band filter that preserves high-frequency
spatial details in the blurred image and the
corresponding deconvolution becomes a well-posed
problem. We demonstrate that manually-specified
point spread functions are sufficient for several
challenging cases of motion-blur removal including
extremely large motions, textured backgrounds and
partial occlusions [2].
II. MOTION DEBLUR
Steps involved in motion deblurring are:
1. Modulated capture
2. PSF estimation

3. Image deconvolution

2.1 Modulated Capture (Coded exposure)
An image is formed when light energy is integrated
by an image detector over a time interval. Let us
assume that the total light energy received by a pixel
during integration must be above a minimum level
for the light to be detected. This minimum level is
determined by the signal-to-noise characteristics of
the detector. Therefore, given such a minimum level
and an incident flux level, the exposure time required
to ensure detection of the incident light is inversely
proportional to the area of the pixel. In other words,
exposure time is proportional to spatial resolution.
When the detector is linear in its response, the above
relationship between exposure and resolution is also
linear. This is the fundamental tradeoff between the
spatial resolution (number of pixels) and the
temporal resolution (number of images per second).
Rather than leaving the shutter open for the duration
of the exposure, we flutter it open and closed in a
rapid irregular binary sequence. We call the resultant
alteration a coded blur. The fluttering toggles the
integration of motion on and off in such a way that
the resultant point spread function, P(x), has
maximum coverage in the Fourier domain. Although
the object motion is unknown a priori, the temporal
pattern can be chosen so that the convolved
(blurred) image I(x) preserves the higher spatial
frequencies of moving objects and allows us to
recover them by a relatively simple decoding process.

2.1.1 Motion Model
We describe convolution using linear algebra. Let B
denote the blurred input image pixel values. Each pixel of
B is a linear combination of the intensities in the desired
unblurred image X and can be written as
AX = B + h    (2.1)
The matrix A, denoted the smearing matrix, describes the
convolution of the input image with the point spread
function P(x), and h represents the measurement uncertainty
due to noise, quantization error, and model inaccuracies.
For two-dimensional PSFs, A is block-circulant, while for a
one-dimensional PSF, A is circulant. For simplicity, we
will describe the coding and decoding process for the
one-dimensional PSF case. Given a finite exposure time of
T seconds, we subdivide the integration time into m time
slices, called chops, so that each chop is T/m seconds
long. The on/off chop pattern is then a binary sequence of
length m. The motion blur process is a time-to-space
projection where, in the one-dimensional motion case, the
motion in T seconds causes a linear blur of k pixels.
Hence, within one single chop's duration, the smear covers
k/m pixels. The goal is to find the best estimate of the n
pixels of X from the observed n + k - 1 pixels.
The smear matrix A can be obtained as follows. Each pixel
in the unknown image X contributes to a total of k pixels
after smearing. The first column of the circulant matrix A
is the PSF vector of length k followed by n - 1 zeros, and
each column is obtained from the previous one by cyclically
shifting the entries one step forward. Therefore, in the
case of a black background, the linear convolution with
P(x) (or multiplication by the circulant matrix A) is
equivalent to a circular convolution with a PSF vector of
length k padded with n - 1 zeros. In practice, since X has
only n unknown values in the smear direction, one can build
an over-constrained least-squares system by truncating A to
keep only its first n columns. Thus, the size of A becomes
(n + k - 1) x n. In the case of flat blur, the
time-to-space projection of an input signal of length n
with constant values creates a response with a trapezoidal
intensity profile: the ramps have a span of k pixels each
and the plateau is n - k - 1 pixels long.
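A minimal Python sketch of the 1-D case just described, assuming NumPy: it builds the (n + k - 1) x n smearing matrix from a PSF and recovers the motion line by least squares. The chop code and sizes are illustrative, not the paper's values.

import numpy as np

def smear_matrix(psf, n):
    # (n + k - 1) x n matrix whose columns are shifted copies of the length-k PSF.
    k = len(psf)
    A = np.zeros((n + k - 1, n))
    for j in range(n):
        A[j:j + k, j] = psf
    return A

def deblur_least_squares(blurred, psf, n):
    # Least-squares estimate X = A+ B of the n unknown pixels of one motion line
    # from the n + k - 1 observed pixels.
    A = smear_matrix(psf, n)
    x, *_ = np.linalg.lstsq(A, blurred, rcond=None)
    return x

# Toy 1-D example: a coded-exposure PSF smearing a short motion line.
code = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=float)
psf = code / code.sum()
x_true = np.linspace(0.0, 1.0, 32)
b = smear_matrix(psf, x_true.size) @ x_true
x_hat = deblur_least_squares(b, psf, x_true.size)
print(np.allclose(x_hat, x_true, atol=1e-6))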

Code Selection
Our goal is to select a temporal code that improves
the invertibility of the imaging process. We analyze
the invertibility by studying the condition number of
the coding matrix and the variance of the frequency
spectrum of the code. The invertibility of the
smearing matrix A, in the presence of uncertainty
and noise, can be judged by the standard matrix
conditioning analysis. The condition number is the
ratio of the largest to the smallest singular value and
indicates the sensitivity of the solution X to the noise
in the input image B. We note that the Eigen values of
a circulant matrix comprise the magnitude of the DFT
of the first column of the circulant matrix and that
each column in A is the PSF vector padded with
zeros. Based on this observation, we choose a coded
sequence with a broadband frequency response so
that the corresponding condition number for the
smearing matrix is as small as possible. However for
Motion Blurring circular convolution occurs with PSF
vector of length k padded with n-1 zeros where n is
the size of the object in pixels. Given our hardware
constraints, we settled for a compromise value by
experimentation, choosing a sequence of m = 52 chops
with a 50 percent duty cycle, i.e., with 26 ones and 26
zeros. The first and last bits of the code should be 1,
which results in
50C24 = 1.2 x 10^14
choices. Among them, there is a multitude of
potential candidates with an acceptable frequency
magnitude profile but different phase. We computed
a near-optimal code by implementing a randomized
linear search over approximately 3 x 10^6
candidate codes. We chose a code that:
maximizes the minimum of the magnitude of the
DFT values and
minimizes the variance of the DFT values.
The near-optimal code we found is:
10100001110000010100001100111101110101110
01001100111.
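The randomized search itself can be sketched as below in Python/NumPy. The single scalar score that combines the minimum DFT magnitude and the magnitude variance, the padding length, and the trial count are simplifying assumptions; the paper applies the two criteria separately and does not specify these parameters.

import numpy as np

def code_score(code, pad_to=300):
    # Magnitude spectrum of the zero-padded code: we want the smallest magnitude
    # to be large and the variance of the magnitudes to be small (flat, broadband).
    mag = np.abs(np.fft.fft(np.asarray(code, dtype=float), n=pad_to))
    return mag.min() - mag.var()

def random_code_search(m=52, ones=26, trials=20_000, seed=0):
    # Randomized linear search over length-m codes with a 50% duty cycle and the
    # first and last bits fixed to 1, keeping the best-scoring candidate.
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(trials):
        code = np.zeros(m, dtype=int)
        code[0] = code[-1] = 1
        idx = rng.choice(np.arange(1, m - 1), size=ones - 2, replace=False)
        code[idx] = 1
        score = code_score(code)
        if score > best_score:
            best, best_score = code, score
    return "".join(map(str, best))

print(random_code_search())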
2.2 Motion Decoding

2.2.1 Image Deconvolution (Post capture Linear
System
Algorithm)
Consider the problem of deblurring a 1-D signal via
deconvolution. The goal is to estimate the signal S(x)
that was blurred by a linear system's point-spread
function P(x). The measured image signal I(x) is then
known to be
I(x) = P(x) * S(x)    (2.2)
with * denoting convolution. In the ideal case, a good
estimate of the image, S(x), can be recovered via a
deconvolution filter P+(x):
S(x) = P+(x) * I(x)    (2.3)
In the case of band-limited point-spread functions or
point-spread functions with incomplete coverage of
the Fourier domain, information is lost and therefore,
deconvolution is not possible. The iterative
deconvolution technique is applicable for whole
motion blur and assumes the complete signal I(x) is
available, but it fails to handle cases where different
parts of the scene have different PSFs. For example,
in the case of a moving object on a static textured
background, the background contribution to the
blurred image is different from the foreground object
smear. Hence we go for more practical approach such
as linear algebra approach.
2.3 Linear Solution
We use least-squares estimation to solve for the
deblurred image X as
X = A+ B    (2.4)
where A+ is the pseudo-inverse of A in the least-squares
sense. Since the input image can have a motion blur k
different from m, we first expand/shrink the given blurred
image by the factor m/k. We then estimate X and scale it
back by k/m. All the images in this paper have been
deblurred using this simple linear approach with no
additional post-processing.
In the following sections, we focus on one
dimensional PSFs. Motion of real-world objects
within a frame tends to be one dimensional due to
energy and inertial constraints. We refer to the one
dimensional line-like paths for motion as motion
lines. Note that scene features on a given motion line
contribute only to pixels on that motion line and
therefore the motion lines are independent. The
solution for each motion line can be computed
independent of other motion lines. In the explanation
below, without loss of generality, the motion lines are
assumed to be oriented along the horizontal scan
lines. However, in examples such as camera shake,
the PSF is typically a collection of 1-D manifolds in 2-
D and we show how our method can extend to these
PSFs as well.
2.4 Background Estimation:
We now address the problem of motion blur due to
an opaque object moving in front of a stationary
(non-blurred) but non-zero-valued background. This
is a commonplace but difficult case because the
moving object blends with the background and it is,
therefore, not sufficient to know the moving object's
PSF to deblur the image. One also needs to estimate
the background simultaneously. We explore this
problem, classify the cases and show that in some
instances, the unknown background visible at the
edges of the blurred object can be recovered during
the deblurring process. In the case of a non-zero
background, the blurred image is given by,
B = AX + AgXg    (2.5)
where X is the moving foreground object, Xg is the
static background, and Ag is the background
attenuation matrix. Ag is a diagonal matrix whose
elements attenuate the static background and can be
written as
Ag = I - diag(A * I(n+k-1)x1)    (2.6)
where Iqx1 is a vector of length q with all 1s and
diag(v) returns a square matrix with the vector v on its
main diagonal. The analysis of background estimation is
based on the number of background pixels, g, that
contribute to the blurred region. In the blurred region of
size (n + k - 1), when n > k, the background is visible
only near the edges and contributes to only 2k pixels.
However, when n < k, the object smears more than its length
and hence the background is partly visible in all the
blurred pixels. Hence
g = min(2k, n + k - 1)    (2.7)
Given observations at (n + k - 1) pixels, we must estimate
a minimum of n + 2k values. The additional k + 1 unknowns
can be estimated by adding constraints on the object motion
and on the complexity of the texture corresponding to the
background image.
III. RESULTS

In this section we demonstrate the effectiveness of
our method for motion deblurring by using the blurred
image directly.

(a) Input Image1

(b)output image1
Figure 3.1: Image of motion deblur by using directly
blurred image

(a) Input Image2

(b) Output Image2
Figure 3.2: Image of motion deblur by using self
introduced left to right motion blur

(a) Input Image3

(b) Output Image3

Figure 3.3: Image of motion deblur by using self
introduced motion blur from top to bottom
IV. CONCLUSIONS AND FUTURE WORK
4.1 Conclusions
In this project we demonstrated motion deblurring.
We assumed a flutter shutter camera and presented a
method to deblur the image using a particular code
sequence. The method is simple and intuitive and is
applicable to blur caused either by motion of the
object or by motion of the camera.
4.2 Future Work
The nature of coded blur photography points to a
range of new research problems. Exploring the codes:
we have analyzed the coded exposure in the discrete
frequency domain and via matrix conditioning
analysis. However, the relationship among the
various elements (the code sequence, the code length, the
blur length and the corresponding noise after
decoding) requires further investigation in the
continuous domain.
in other areas where a linear mixing model is
inverted during decoding. Deconvolution via coded
exposure exhibits similarities to code division
multiplexing and de-multiplexing of a single
communication channel. Advances from the CDMA
world in simultaneous orthogonal codes or channel
reception with background noise may improve and
broaden results in coded blur photography. The
coding and reconstruction has several similarities
with tomography and coded-aperture imaging, and
exploiting this connection may yield further benefits
in temporal image processing.
BIBLIOGRAPHY
[1] Moshe Ben-Ezra and Shree K. Nayar. Motion-
based motion deblurring.
[2] Ramesh Raskar, Amit Agrawal, and Jack Tumblin.
Coded exposure photography: Motion deblurring using
fluttered shutter. July 2006.
























RESTORATION OF BLURRED IMAGE USING BLIND DECONVOLUTION
ALGORITHM
Ms.S.Ramya
Ms.T.Mercy Christial, Lecturer of IT
Kalasalingam University, Anand Nagar, Krishnankoil, ramyareys@gmail.com
_________________________________________________________________________________________________________________________________

ABSTRACT

Image restoration is the process of recovering the
original image from a degraded image. The aim of the
project is to restore blurred/degraded images using the
Blind Deconvolution algorithm. The fundamental task of
image deblurring is to deconvolve the degraded image with
the PSF that exactly describes the distortion. Firstly, the
original image is degraded using the degradation model;
this can be done with a Gaussian filter, a low-pass filter
used to blur an image. At the edges of the blurred image,
the ringing effect can be detected using the Canny edge
detection method and then removed before the restoration
process. The Blind Deconvolution algorithm is then applied
to the blurred image; it is possible to recover the
original image without specific knowledge of the
degradation filter, additive noise, or PSF. To obtain
effective results, the Penalized Maximum Likelihood (PML)
estimation technique is used with our proposed Blind
Deconvolution algorithm.
Keywords: Blind Deconvolution Algorithm, Canny Edge
Detection, Degradation Model, Image restoration,
PML, PSF

1. INTRODUCTION
Image deblurring is an inverse problem whose
aim is to recover an image which has suffered linear
degradation. The blurring degradation can be
space-invariant or space-variant. Image deblurring methods
can be divided into two classes: non-blind, in which the
blurring operator is known, and blind, in which the
blurring operator is unknown.
Blurring is a form of bandwidth reduction of the
image due to an imperfect image formation process. It can
be caused by relative motion between the camera and the
original scene. Normally, an image is degraded using a
low-pass filter and additive noise; the low-pass filter
blurs/smooths the image using certain functions.
Image restoration aims to improve the quality of the
degraded image. It is used to recover the original image
from its distortions. It is an objective process which
removes the effects of the sensing environment, recovering
the original scene from a degraded or observed image using
knowledge about the nature of the degradation. There are
two broad categories of image restoration: image
deconvolution and blind image deconvolution.
Image Deconvolution is a linear image
restoration problem where the parameters of the
true image are estimated using the observed or
degraded image and a known PSF (Point Spread
Function). Blind Image Deconvolution is a more
difficult image restoration where image recovery is
performed with little or no prior knowledge of the
degrading PSF. The advantages of Deconvolution are
higher resolution and better quality.
This paper is structured as follows: Section 2
describes the degradation model for blurring an
image. Section 3 represents Canny Edge Detection.
Section 4 describes the deblurring algorithm and
overall architecture of this paper. Section 5 describes
the sample results for deblurred images using our
proposed algorithm. Section 6 describes the
conclusion, comparison and future work.

2. DEGRADATION MODEL
In degradation model, the image is blurred using
filters and additive noise. Image can be degraded
using Gaussian Filter and Gaussian Noise. Gaussian
Filter represents the PSF which is a blurring function.
The degraded image can be described by the
following equation:
g = H * f + n    (1)
In equation (1), g is the degraded/blurred image, H is the
space-invariant blurring function, f is the original image,
and n is additive noise. The
following fig.(a) represents the structure of
degradation model.





Fig. (a) Degradation Model

Image deblurring can be done by the technique,
Gaussian Blur. It is the convolution of the image with
2-D Gaussian function.

2.1) Gaussian Filter:
The Gaussian filter is used to blur an image using the
Gaussian function. It requires two parameters, mean and
variance, and performs weighted blurring. The Gaussian
function is of the form
G(x, y) = (1 / (2πσ^2)) * e^(-(x^2 + y^2) / (2σ^2))
where σ^2 is the variance and x and y are the distances
from the origin along the horizontal and vertical axes,
respectively. The Gaussian filter has an efficient
implementation that allows it to create a very blurry image
in a relatively short time.

2.2) Gaussian Noise:
The ability to simulate the behavior and
effects of noise is central to image restoration.
Gaussian noise is a white noise with constant mean
and variance. The default values of mean and
variance are 0 and 0.01 respectively.

2.3) Blurring Parameter:
The parameters needed for blurring an image
are PSF, Blur length, Blur angle and type of noise.
Point Spread Function is a blurring function. When
the intensity of the observed point image is spread
over several pixels, this is known as PSF. Blur length
is the number of pixels by which the image is
degraded, i.e., the number of pixel positions the content
is shifted from its original position. Blur angle is the
angle at which
the image is degraded. Available types of noise are
Gaussian noise, Salt and pepper noise, Poisson noise,
Speckle noise which are used for blurring. In this
paper, we are using Gaussian noise which is also
known as White noise. It requires mean and variance
as parameters.

2.4) Algorithm for Degradation Model
Input:
Load an input image f
Initialize blur length l
Initialize blur angle theta
Assign the type of noise n
PSF (Point Spread Function), h
Procedure I
h = create(f, l, theta)                 % creation of the PSF
Blurred image g = f*h + n
g = filter(f, h, n, convolution)
If g contains ringing at its edges then
Remove the ringing effect using the edgetaper function
Else
Go to Procedure II
End Procedure I
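A minimal Python sketch of the degradation step g = H*f + n of Procedure I, assuming SciPy and the Gaussian PSF of Section 2; a motion-blur variant would replace the Gaussian kernel with a linear motion kernel of the given length and angle. All names and values here are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(image, sigma=3.0, noise_mean=0.0, noise_var=0.01, seed=0):
    # Degradation model g = H*f + n: Gaussian blur (the PSF H) followed by
    # additive Gaussian noise with the default mean 0 and variance 0.01.
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    noise = rng.normal(noise_mean, np.sqrt(noise_var), size=image.shape)
    return np.clip(blurred + noise, 0.0, 1.0)

# f = an original grayscale image with values scaled to [0, 1]
# g = degrade(f)        # blurred, noisy image handed on to the deblurring step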


3. CANNY EDGE DETECTION AND RINGING
EFFECT
The Discrete Fourier Transform used by the
deblurring function creates high frequency drop-off
at the edges of images. This high frequency drop-off
can create an effect called boundary related ringing
in deblurred images. For avoiding this ringing effect
at the edge of image, we have to detect the edge of an
image. There are various edge detection methods
available to detect an edge of the image.
The edges can be detected effectively using the
Canny edge detection method. It differs from other
edge-detection methods such as Sobel, Prewitt,
Roberts, and LoG in that it uses two different thresholds
for detecting both strong and weak edges. Edges are
detected where the derivative of the intensity is larger
than a threshold. The edges are detected in order to check
whether a ringing effect exists in the input image.

3.1) Canny Edge Detector
Canny edge detection method finds edges by looking
for local maxima of the gradient of f(x, y). The
gradient is calculated using the derivative of a
Gaussian Filter. The method uses two thresholds to
detect strong and weak edges, and includes the weak
edges in the output only if they are connected to
strong edges. Therefore, this method is more likely to
detect true weak edges.

3.1.1) Steps involved in canny method:
o The image is smoothed using a Gaussian
filter with a specified standard deviation,
σ, to reduce noise.
o The local gradient, g(x, y), and the edge
direction are computed at each point.
o The edge points determined this way give rise to
ridges in the gradient magnitude image.
These ridge pixels are then thresholded using two
thresholds, T1 and T2, with T1 < T2.
Ridge pixels with values greater than T2 are said to
be strong edge pixels. Ridge pixels with values
between T1 and T2 are said to be weak edge pixels.
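For illustration, the hysteresis-thresholding step can be exercised with OpenCV as sketched below; the smoothing sigma and the two thresholds are example values, not those used in the paper.

import cv2

def detect_edges(gray, t1=50, t2=150, sigma=1.4):
    # Smooth with a Gaussian of the chosen standard deviation, then apply the
    # two-threshold (hysteresis) Canny detector with weak threshold t1 and
    # strong threshold t2, t1 < t2.
    smoothed = cv2.GaussianBlur(gray, (0, 0), sigmaX=sigma)
    return cv2.Canny(smoothed, t1, t2)

# edges = detect_edges(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))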

3.2) Edgetaper for Ringing Effect:
The ringing effect can be avoided using the
edgetaper function, which is used to preprocess the image
before passing it to the deblurring functions. It removes
the high-frequency drop-off at the edges of an image by
blurring the entire image and then replacing the center
pixels of the blurred image with those of the original
image.


4. OVERALL ARCHITECTURE AND DEBLURRING
ALGORITHM
The following fig.(b) represents the overall
architecture of this paper.


Fig. (b) Overall Architecture

The original image is degraded or blurred using the
degradation model to produce the blurred image, which is
the input to the deblurring algorithm. Various algorithms
are available for deblurring. In this paper, we use the
Blind Deconvolution algorithm; its result is the deblurred
image, which can be compared with the original image.

4.1) Blind Deconvolution Algorithm:
The Blind Deconvolution algorithm can be used
effectively when no information about the distortion is
known. It restores the image and the PSF simultaneously.
The algorithm is based on Maximum Likelihood Estimation
(MLE).

4.1.1) Algorithm for Deblurring:
Input:
Blurred image g
Initialize the number of iterations i
Initial PSF h
Weight of the image w          % pixels considered for restoration
a = 0 (default)                % array corresponding to additive noise
Procedure II
If the PSF is not known then
Guess an initial value for the PSF
Else
Specify the PSF of the degraded image
Restored image f = Deconvolution(g, h, i, w, a)
End Procedure II
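A simplified stand-in for Procedure II is sketched below in Python: blind deconvolution by alternating Richardson-Lucy (maximum-likelihood) updates of the image and the PSF, starting from a flat PSF guess. The PSF size, iteration counts, and the update scheme itself are assumptions; this is not the exact MLE routine used in the paper.

import numpy as np
from scipy.signal import fftconvolve

def _center_crop(arr, shape):
    # Crop the central `shape` patch out of `arr`.
    slices = tuple(slice((a - s) // 2, (a - s) // 2 + s)
                   for a, s in zip(arr.shape, shape))
    return arr[slices]

def blind_deconvolve(g, psf_size=(15, 15), outer=20, inner=5, eps=1e-12):
    # Alternating Richardson-Lucy updates on a grayscale image g (float array).
    f = np.full(g.shape, g.mean(), dtype=float)      # flat image guess
    h = np.ones(psf_size) / np.prod(psf_size)        # flat PSF guess
    for _ in range(outer):
        for _ in range(inner):                       # refine the PSF estimate
            ratio = g / (fftconvolve(f, h, mode="same") + eps)
            corr = fftconvolve(ratio, f[::-1, ::-1], mode="same")
            h = np.clip(h * _center_crop(corr, h.shape), 0, None)
            h /= h.sum() + eps
        for _ in range(inner):                       # refine the image estimate
            ratio = g / (fftconvolve(f, h, mode="same") + eps)
            f = np.clip(f * fftconvolve(ratio, h[::-1, ::-1], mode="same"), 0, None)
    return f, h

# f_hat, psf_hat = blind_deconvolve(g)   # g: blurred image from the degradation step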

5. SAMPLE RESULTS
The images below represent the result of the degradation
model using Gaussian blur. The first image is the original
image, whose edges are estimated by the Canny edge
detection method.


Original Image
Edge detection is applicable to grayscale images.
Therefore the original RGB image is converted to a
gray image. After that, Canny edge detection is
applied to obtain the edges of the original image.



Edges of original Image
The original image is blurred using a Gaussian low-pass
filter by specifying the blur parameters. The
following image shows the blurred result.


Blurred Image


Edge of Blurred Image

The sample image after applying the proposed
algorithm is as follows:

Restored Image

6. CONCLUSION
We have presented a method for blind image
deblurring. The method differs from most other
existing methods by only imposing weak restrictions
on the blurring filter, being able to recover images
which have suffered a wide range of degradations.
Good estimates of both the image and the blurring
operator are reached by initially considering the
main image edges.
The restoration quality of our method was
visually and quantitatively better than that of the other
algorithms with which it was compared, such as the Wiener
filter, regularization, and Lucy-Richardson algorithms.
The advantage of the proposed Blind Deconvolution
algorithm is that it deblurs the degraded image without
prior knowledge of the PSF or additive noise, whereas the
other algorithms require knowledge of the blurring
parameters. Future work is to increase the speed of the
deblurring process, that is, to reduce the number of
iterations used for deblurring while achieving better
image quality.

7. REFERENCES

[1] Mariana S.C. Almeida and Luis B. Almeida.,
Blind and Semi-Blind Deblurring of Natural
Images, IEEE Transactions on Image Processing,
Vol 19, pp.36-52, No. 1,January 2010.

[2] Michal Sorel and Jan Flusser, Senior Member,
IEEE., Space-Variant Restoration of Images
Degraded by Camera Motion Blur, IEEE
Transactions on Image Processing, Vol 17, pp.105-
116, No. 2,February 2008.


[3] Jian-Feng Cai, hui ji, Chaoqiang liu, Zuowei
Shen., Blind Motion deblurring using multiple
images, journal of Computational physics., pp.
5057-5071, 2009.

[4] Shao-jie, WU Qiong, li Guo-hui., Blind Image
deconvolution for single motion-blurred
image, Journal., 2009.

[5] D. Kundur and D. Hatzinakos, Blind image
deconvolution, IEEE Sig. Process. Mag., pp. 43-64,
May 1996.

[6] Rafael C. Gonzalez, Richard E. Woods, Steven L.
Eddins, Digital Image Processing Using MATLAB,
Pearson Education, Inc., 2006.


































































TWO-HOP NEIGHBORHOOD ROUTING PROTOCOL TO ENHANCE THE QOS OF
REAL-TIME PACKET DELIVERY FOR WSN
Saravanan.B, Ramesh.V, Mrs.A.Lakshmi
Master Of Technology, Kalasalingam University
saravananboomi@gmail.com, ramesh_8607@yahoo.co.in
___________________________________________________________________________________________
ABSTRACT
A two-hop neighborhood information-based
routing protocol is proposed for real-time wireless
sensor networks. The approach of mapping packet
deadline to a velocity is adopted as in SPEED;
however, our routing decision is made based on the
novel two-hop velocity integrated with an energy
balancing mechanism. Initiative drop control is
embedded to enhance energy utilization efficiency
while reducing the packet deadline miss ratio. Simulation
and comparison show that the new protocol leads to a
lower packet deadline miss ratio and higher energy
efficiency than two existing popular schemes. The
result also indicates a promising direction in
supporting real-time quality-of-service for wireless
sensor networks.

Keywords Deadline miss ratio (DMR), energy
utilization efficiency, quality-of-service (QoS), real-
time, two-hop information, wireless sensor networks
(WSNs).

I. INTRODUCTION

Wireless sensor networks(WSNs) have recently
received increasing attention in the industrial
communication community. A vivid vision of WSN
can be described by the concept of smart dust [1]:
small and cheap sensor nodes are embedded to sense
the surroundings, communicate wirelessly, perform
collaborative signal processing and make the
environment intelligent. With WSN, it is possible to
collect more real-time data than before, from places
which are hazardous or inaccessible by wired
technology. WSN can be used in many ways in
industrial and factory automation [2]. For example,
vibration, pressure or thermal sensors can be
equipped to rotating machinery or conveyer belts to
monitor their health. This helps to detect possible
system failure and to trigger a preventive
maintenance routine before a more costly repair is
needed. WSNs are also useful for tracking leakage
or radiation in chemical plants. Different from some
existing
best-effort services which may not have stringent
packet timeliness requirement and can tolerate a
significant amount of packet loss, these real-time
industrial applications are much
more demanding [3]. Out-of-date data are usually
irrelevant
and may even lead to negative effects to the system
monitoring and control. In industrial WSNs, traffic is
dominated by readings and commands exchanged
between sensors/actuators and control units.
Providing quality-of-service (QoS) in such a scenario
is to enable transmissions of periodic or sporadic
messages within predefined deadlines
in a reliable fashion; timeliness is especially
important for crucial alarm messages. Since the
wireless channel is random and time-varying,
conventional deterministic QoS measures should be
replaced by probabilistic ones. An important
performance measure is the deadline miss ratio
(DMR) which is defined as the ratio of messages that
cannot meet deadlines [4]. Moreover, sensor nodes
usually use battery for energy supply. Hence, energy
efficiency is also an important design goal. It is
usually defined by the energy consumed per
successfully transmitted packet. Furthermore, in
order to avoid network topology holes and achieve a
longer network lifetime, node load and energy
balance need to be considered.
Generally speaking, supporting real-time QoS in WSN
can
be addressed from different layers and mechanisms
[5]. For example, medium access control (MAC) can
offer channel access (one-hop) delay guarantee,
while routing protocol in the network layer can
support multihop QoS. Transmission scheduling can
be used to provide conflict-free channel sharing
based on regular network topology (e.g., tree,
hexagonal layout, etc.) with techniques of topology
control and clock synchronization [6]. Deterministic
service delay bound for real-time applications is
expected. In-network data aggregation is known as a
good complement to routing protocols in reducing
data redundancy and alleviating network congestion.
Cross-layer optimization can provide further
improvement. Among the above, without loss of
generality, routing protocol has always played a
crucial role in supporting end-to-end QoS. Here, we
will focus on this domain and the design in this paper
is oriented to more demanding applications which
emphasize packet delivery timeliness and
end-to-end QoS, e.g., alarm messages should be
transmitted
from sensor nodes to control center in time so as to
take prompt actions. Energy efficiency and load
balance are also among the design goals. It is known
from the literature [3] that for system simplicity most
existing routing protocols are based on one-hop
neighborhood information. However, it is expected
that multihop information can lead to better
performance in many issues including routing,
message broadcasting, and channel access scheduling
[7][10]. For computing two-hop neighborhood
information in wireless ad hoc and sensor networks,
some distributed algorithms and efficient
information exchange schemes are reported in [10]
and [11]. In a network of nodes, computing one-hop
neighbors with messages is trivial while
computing two-hop neighbors seems to increase the
complexity and overheads. However, a complexity
analysis reported in [11] has shown that every node
can obtain the knowledge of two-hop neighborhood
by a total of messages, each of bits, which could be
enough to address the ID and
geographic position of nodes. It is very likely that a
system can perform better if more
information is available and effectively utilized. By
the study
of asymptotic performance of a generic routing with
multihop
routing information [7], it is observed that the
number of hops
required from the source to sink decreases
significantly from
one-hop to two-hop information-based routing.
However, the
further gain from two-hop-based decision to three-
hop-based
decision is less attractive, especially if complexity
increase is
also a concern. In this paper, we propose a two-hop
information-based real-time routing protocol and
show its improvement over one-hop-based protocol
SPEED [12]. The choice of two hops is a tradeoff
between performance improvement and the
complexity cost. The idea of two-hop routing is
straightforward but how to use or integrate the
information effectively so as to improve energy and
real-time performance is generally nontrivial. The
resulting design has the following novel features.
1) Compared with existing protocols that utilize only
one-hop
neighborhood information, it achieves lower
deadline miss
ratio and also higher energy efficiency.
2) A mechanism is embedded which can release
nodes
that are frequently chosen as packet forwarder. An
improvement
of energy balance throughout the network is
achieved.
3) The simulation is built on Mica2-based [13] lossy
link
model, energy model and CSMA/CA MAC setting
(similar
to B-MAC [14]) which are very close to real systems.
The rest of this paper is organized as follows. Section
II discusses
related routing protocols for real-time QoS in WSN
and
explains the motivations. Section III presents our
design. The
performance of proposed protocol is reported in
Section IV.
Simulations and comparisons have shown its
effectiveness.
In Section V, we discuss possible enhancement.
Finally,
Section VI concludes this paper.

II. REAL-TIME ROUTING PROTOCOLS FOR WSN

Generally speaking, there are three classes of routing
policies that favor end-to-end delay performance
guarantee in WSN: (i) tree-based routing; (ii) optimal
routing based on shortestpath- first (SPF) principle
by the knowledge of whole network topology; and
(iii) geographic routing by the knowledge of node
position.
Tree-based routing is popular in industrial WSN
setting.
ZigBee has provided a hierarchical tree routing
scheme in
which packets travel along the edges of the tree
network. This
approach suits the many-to-one traffic model and
does not need routing table. End-to-end QoS (delay,
energy consumption, etc.) can be estimated by the
depth of the tree. However, the hierarchical tree
routing can be very
inefficient when two nodes in different branches but
within mutual radio range want to communicate with each
other since packets must travel through the ZigBee
coordinators. Ad hoc on-demand distance vector
(AODV) routing is thus suggested as a supplement in
this case. As proposed in [15], another solution is to
look up the neighbor table in routing decisions so as
to avoid long paths and thus shorten the worst-case
delay. Another drawback of tree-based routing
is the problem of node energy consumption
balancing. Nodes near the root of the tree will
consume much more energy than the other and
consequently lead to network topology holes.
A tree routing protocol is often not optimal as it does
not
choose the shortest path. AODV is one of the optimal
routing protocols based on the SPF principle. However,
additional overhead (e.g., extra packet and energy
consumption) will be introduced in order to maintain
the routing table. AODV is a reactive routing which is
more favorable when communication is required
infrequently. The route discovery on demand adds
additional latency to packet transmission. This has
been investigated in [17] and an AODV variant is
proposed after introducing a new routing metric in
evaluating path efficiency which includes end-to-end
delay and energy consumption. As a result, the
network lifetime is prolonged and end-to-end
delivery ratio is improved for real-time embedded
systems.
Another QoS aware routing protocol is proposed in
[16] for
WSN. It finds multiple least-cost and energy-efficient
paths by an extended Dijkstra's algorithm and picks the
path that can meet the end-to-end delay requirement
during the connection. In addition, a class-based
queueing model is employed to serve both best-effort
and real-time traffic. Their approach, however, does
not consider the impact of channel access delay.
Besides, the use of a class-based priority queueing
mechanism is too complicated and costly for
resource-limited sensor nodes.
Geographic routing is popular in WSN since it does
not need to maintain routing table and consequently
can reduce network energy consumption. Resulting
algorithms are highly scalable [18]. However,
geographic routing protocols are in general not
optimal since most of them are based on one-hop
decision. In addition, determining node position will
introduce some overheads and energy consumption.
Several solutions exist for finding coordinates, e.g.,
using global positioning system (GPS). Note that for
resource-limited WSN, using GPS can be a problem as
the required positioning chips will increase the price
and energy consumption. This problem can be
alleviated by using positioning chips only in some
nodes, while other nodes calculate positions with the
assistance of their neighbors. On the other hand,
existing localization techniques such as triangulation,
multilateration and diffusion [18] can provide GPS-
free solutions. Some ranging techniques have also
been specified in the IEEE 802.15.4a standard [19],
e.g., estimating distance by measuring the difference
of propagation delays.
In geographic routing, the heuristic greedy
forwarding
protocol SPEED [12] is the first one addressing real-
time
guarantees for WSN. Relay velocity toward a next-
hop node
is identified by dividing the distance progress by its
estimated
forwarding delay. Packet deadline is mapped to a
velocity requirement. The node with the largest relay
velocity higher than the velocity requirement is
selected with the highest probability. If there is no
neighbor node that can meet the requirement, the
packet is dropped probabilistically to regulate
network workload. Meanwhile, back-pressure packet
rerouting in large-delay link is conducted to divert
and reduce packets injected to a congested area. MM-
SPEED [20] extends SPEED by defining multiple
delivery velocities for packets with different
deadlines in supporting different QoS. Real-time
power-aware routing (RPAR) [21] is another variant
of SPEED. A node will adaptively change its
transmission power by the progress towards
destination and packets due time in order to meet
the required velocity in the most energy-efficient
way. Note that all the above protocols are based on
one-hop neighborhood information.
In our proposed scheme, we also adopt the approach of
mapping the packet deadline to a velocity, which is known
to be a good metric for delay-constrained packet delivery.
However, our routing decision is made based on two-hop
neighborhood information and the corresponding metrics.
The scheme is therefore named two-hop velocity-based
routing (THVR). Note that, generally speaking, it is also
possible to employ other metrics, e.g., packet lifetime or
hop count, to
design routing protocols; the idea of two-hop
information-based routing is generic and widely applicable.
Here, we focus on THVR. The routing design and its details
are given in the next section.

III. DESIGN OF THVR FOR RT-WSN

Although two-hop information-based routing is intuitively
helpful in improving the routing decision, an explicit
mechanism is necessary. It is worth noting that THVR
primarily aims at lowering the packet deadline miss ratio
(DMR) for demanding real-time WSN, but it also considers
energy utilization efficiency, which has not been explicitly
addressed in SPEED and MM-SPEED.
As assumed in most geographic routing algorithms, each
node in the network is aware of the geographic location of
itself and of the destination, via GPS or other localization
techniques [18], [19] as mentioned in Section II. This
information can be further exchanged among two-hop
neighbors [10], [22]. Thus, each node is aware of its
immediate and two-hop neighbors, and of their locations.
This is achieved by two rounds of HELLO messages. First,
each node informs its neighbors about its existence (ID,
position, remaining energy, etc.). Next, each node sends a
message to all its neighbors informing them about its
one-hop neighbors. If the network is static or has low
mobility, this exchange can be done once and repeated only
upon node failure. Otherwise, in a mobile network, each
node periodically emits additional HELLO messages to
maintain two-hop information. Entries that are too old are
removed from the neighbor table, as the corresponding
nodes have moved out of one-hop or two-hop range.
As detailed below, our design is mainly composed of three
components: (i) forwarding metric; (ii) delay estimation
and update; and (iii) initiative drop control.
A. Forwarding Metric
To begin with, some definitions are introduced. For each
node i, N(i) is used to denote the set of its one-hop
neighbors. The source and destination nodes are labeled S
and D, respectively, and the distance between a pair of
nodes i and j is denoted by d(i,j). Consequently, the
required end-to-end packet delivery velocity for a deadline
t_set is defined as

S_set = d(S,D) / t_set                                      (1)

An illustration of a node's neighbor set and of its one-hop
and two-hop forwarder sets is shown in the figure.

F(i) is defined as the set of node i's potential forwarders
that make progress towards the destination, i.e.,

F(i) = {j | d(i,D) - d(j,D) > 0, j ∈ N(i)}                  (2)

F2(i,j) is defined to represent the set of corresponding
two-hop potential forwarders, i.e.,

F2(i,j) = {k | d(j,D) - d(k,D) > 0, j ∈ F(i), k ∈ N(j)}     (3)

In SPEED, the core component SNGF (stateless
nondeterministic geographic forwarding) works as follows.
Upon receiving a packet, node i calculates the velocity
provided by each of the forwarding nodes j in F(i), which is
expressible as

S_ij = (d(i,D) - d(j,D)) / Delay_ij                         (4)
In our proposed THVR, similarly to SPEED but using
two-hop information, node i calculates the velocity
provided by each of the two-hop forwarding pairs (j,k), i.e.,

S_ijk = (d(i,D) - d(k,D)) / (Delay_ij + Delay_jk)           (5)

where j ∈ F(i) and k ∈ F2(i,j). The set of node pairs (j,k)
satisfying S_ijk ≥ S_set is denoted by F_S(i). Beyond
comparing the potential forwarding velocities, we also take
into account the nodes' remaining energy levels, and thus
define the following new joint metric:

(6)
where E_j is the remaining energy of forwarder candidate j,
E_j0 is its initial energy, and α is the weighting factor
incorporating the energy level into the joint metric. Note
that a larger α tends to favor end-to-end delay
performance, while a smaller one distributes traffic to
nodes with higher energy levels and results in a better
energy balance. Clearly, the setting of α depends on the
deadline requirement: the larger the deadline, the smaller
α can be.
By (6), the node pair in F_S(i) with the largest metric value
is chosen, and its first node becomes the forwarder. The
routing then proceeds and the mechanism is repeated at
the selected node iteratively. In THVR, the sender searches
for the largest velocity in its two-hop neighborhood before
making the forwarding decision, whereas SPEED [12] is
only one-hop optimized. For example, if there is a topology
hole after the first forwarding node, SPEED runs into a
critical problem and has to activate back-pressure
rerouting. With THVR, this kind of problem can be
alleviated. Inherently, THVR has one extra hop of
prediction capability, like using a telescope when finding
the path.
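
Since the exact normalization used in the joint metric (6) is not reproduced above, the following Python sketch only illustrates the selection logic under an assumed form of the metric: the two-hop velocity is normalized by the largest feasible velocity and combined with the energy ratio E_j/E_j0 through the weighting factor α. All names (choose_forwarder, alpha, the data structures) are illustrative, not taken from the protocol specification.

# Illustrative sketch of THVR forwarder selection (the metric form is assumed, not the paper's exact eq. (6)).
from math import dist

def two_hop_velocity(pos, delay, i, j, k, dest):
    """Eq. (5): two-hop distance progress divided by the estimated two-hop delay."""
    progress = dist(pos[i], pos[dest]) - dist(pos[k], pos[dest])
    return progress / (delay[(i, j)] + delay[(j, k)])

def choose_forwarder(pos, delay, energy, energy0, nbrs, i, dest, t_set, alpha=0.9):
    s_set = dist(pos[i], pos[dest]) / t_set                   # eq. (1): required end-to-end velocity
    pairs = [(j, k) for j in nbrs[i] if dist(pos[i], pos[dest]) > dist(pos[j], pos[dest])
                    for k in nbrs[j] if dist(pos[j], pos[dest]) > dist(pos[k], pos[dest])]
    feasible = [(j, k, two_hop_velocity(pos, delay, i, j, k, dest)) for j, k in pairs]
    feasible = [(j, k, s) for j, k, s in feasible if s >= s_set]
    if not feasible:
        return None                                           # handed over to initiative drop control
    s_max = max(s for _, _, s in feasible)
    def metric(j, k, s):                                      # assumed joint metric: velocity vs. energy
        return alpha * (s / s_max) + (1 - alpha) * (energy[j] / energy0[j])
    j_best, _k, _s = max(feasible, key=lambda t: metric(*t))
    return j_best                                             # the packet is sent to one-hop neighbor j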
B. Delay Estimation
From (5), it is observable that the estimation of the packet
delay from a sender to its potential forwarder plays an
important role in the velocity. In general, the delay of a
packet from a node to its immediate forwarder is
expressible as

Delay_ij = Delay_mac + N_tx · Delay_tx                      (7)

where Delay_mac and Delay_tx represent the MAC
(channel access) delay and the transmission time,
respectively, and N_tx is the transmission count. The
transmission time includes the queueing delay (depending
on the load of the node) and the packet transmission time
(determined by the packet/ACK size and the bandwidth).
The transmission count refers to the number of
(re)transmissions involved, since automatic repeat-request
(ARQ) is adopted when a packet fails to be transmitted due
to collision or a lossy link.
To obtain the packet delay estimate used in (5), we adopt
the method of window mean with exponentially weighted
moving average (WMEWMA), which has been shown in
[23] to give the best estimation performance among
existing techniques. The estimate at time t + T is given as

Delay_est(t + T) = (1 - β) · Delay_est(t) + β · d_new       (8)

where T is the time window, d_new is the newly measured
delay (known from the most recent packets in the window),
and β is the tunable weighting coefficient. It is clear that a
large β emphasizes the new measurement and fits the case
where the delay variance is small, while a small β is more
suitable if the variance is significant. A demonstration of
the delay estimates under different β is plotted in Fig. 2,
while the sum of deviations is indicated in Fig. 3.
With a small β, the delay estimate is insensitive and too
slow to capture the system's immediate fluctuations, and
thus may result in a large deviation sum. However, when β
is too large, the update to the delay estimate becomes too
aggressive, nearly ignoring the historic average, and results
in an even larger deviation sum. As indicated in Fig. 3, the
deviation is lowest when β is set to 0.5, which is generally
quite robust. Note that it is also possible to
design an adaptive tuning mechanism with reference to the
encountered delay variance; however, we do not go into
the details in this paper.
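
As a concrete illustration of the update rule in (8), the following Python snippet shows a minimal WMEWMA-style estimator under the weighted-sum form reconstructed above; the window length, the class and variable names, and the sample values are illustrative, not taken from the protocol.

# Minimal sketch of a WMEWMA-style link delay estimator (assumes the form of eq. (8) given above).
from collections import deque

class DelayEstimator:
    def __init__(self, window=5, beta=0.5, initial=0.0):
        self.window = deque(maxlen=window)   # most recent delay samples within the time window T
        self.beta = beta                     # tunable weighting coefficient
        self.estimate = initial

    def add_sample(self, measured_delay):
        """Record one measured link delay (e.g., the send-to-ACK time)."""
        self.window.append(measured_delay)

    def update(self):
        """Blend the mean of the newly measured delays with the historic estimate."""
        if self.window:
            d_new = sum(self.window) / len(self.window)
            self.estimate = (1 - self.beta) * self.estimate + self.beta * d_new
        return self.estimate

# Example: beta = 0.5 balances responsiveness against stability, as observed in Fig. 3.
est = DelayEstimator(beta=0.5, initial=0.020)
for d in (0.021, 0.019, 0.030, 0.024):
    est.add_sample(d)
print(round(est.update(), 4))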
To identify the link delay of a packet, a sender stamps the
time when the packet is first sent and compares it with the
time when the acknowledgment (ACK) is received. To
propagate the link delay information to the other nodes in
the routing path, after receiving the ACK with the delay
information from its forwarder, the node initiates a
feedback packet containing the updated delay of the
forwarding link to its parent node, i.e., the one that chose it
as a forwarder. Meanwhile, other neighboring nodes that
can overhear the feedback also update their delay records.

Fig. 4 shows an example of link delay update in which node
G is chosen as the forwarder of node E. The link delay is
first updated at E after receiving the ACK from G, and is
then fed back to E's parent node. Nodes B and C overhear
the feedback. As a result, the delay field in their records,
e.g., a two-hop delay table, is updated with the new
information and (9). It should be noted that the two-hop
information enlarges the table of delay profiles stored in
each node. This needs to be considered if the sensor nodes
employed have very limited memory.

C. Initiative Drop Control
If no node in the two-hop forwarding set can provide the
required velocity, the following initiative drop control is
conducted. To begin with, some technical details are
defined. Let the packet loss ratio of each node and the
number of nodes in the forwarding set be given; we then
define the forwarding probability of a node as

(10)

The forwarding probability is jointly decided by the loss
ratio in the forwarding set and the node position. First, a
node that is close to the destination has a higher
forwarding probability. This reflects the fact that a packet
near the destination has already traveled a long way along
the route and many nodes have consumed energy to relay
it, so it is worthwhile to keep trying and see whether it can
finally be delivered successfully. Although the current hop
may not be able to meet the required velocity, the
end-to-end requirement can still be met if the coming hops
have relatively short delays. However, if the packet is still
at a node near the source that cannot meet the velocity,
from the point of view of energy utilization efficiency it is
more efficient to drop it earlier.


Second, by (10), a node whose forwarding candidates have
a lower average loss ratio has a higher forwarding
probability. As shown in Fig. 5, the link layer collects the
node packet loss ratio and feeds it back to the dropping
controller, which calculates the forwarding probability
according to (10). For WSN, the broadcast nature of the
wireless medium allows snooping on the channel; losses
can be determined by tracking the link sequence number in
the packets from each source. Various low-power listening
mechanisms exist [23] that would enable snooping at a
much lower cost. An alternative approach is to use the
received signal strength as an indication of link quality.
Note that the controller is a proportional controller and the
function of the control loop is to force the loss ratio of
neighbors to
converge to the setpoint, e.g., 0. The output of the
controller is deterministic and binary: if the output is 1, the
node forwards the packet to the candidate that provides
the largest metric value regardless of the velocity;
otherwise, the packet is dropped so as to maintain the
delay requirement.
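
Because (10) itself is not reproduced above, the following Python sketch only captures the two qualitative dependencies stated in the text: forwarding becomes more likely when the packet is already close to the destination and when the forwarding candidates' average loss ratio is low. The product form and the example values are assumptions for illustration, not the protocol's actual formula.

# Illustrative sketch of the inputs to initiative drop control; this product form is an assumption, not eq. (10).

def forwarding_probability(d_i_to_dest, d_src_to_dest, candidate_loss_ratios):
    """Higher when the packet is closer to the destination and when candidates are less lossy."""
    progress = 1.0 - min(d_i_to_dest / d_src_to_dest, 1.0)    # approaches 1 near the destination
    avg_loss = sum(candidate_loss_ratios) / len(candidate_loss_ratios)
    return progress * (1.0 - avg_loss)

# A packet that has travelled most of the way, with fairly reliable candidates, gets a high
# forwarding probability; one still near the source with lossy candidates gets a low one.
print(forwarding_probability(30.0, 250.0, [0.1, 0.2]))    # about 0.75
print(forwarding_probability(220.0, 250.0, [0.5, 0.6]))   # about 0.05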

IV. PERFORMANCE EVALUATION

The proposed THVR is simulated in Prowler/Rmase [24],
[25]. Prowler is a probabilistic wireless network simulator
capable of simulating wireless distributed systems from the
application layer down to the physical communication
layer. It provides simple yet realistic radio/MAC models
based on the Berkeley mote platform and supports an
event-driven structure similar to TinyOS/NesC. Rmase
extends Prowler with more options for topologies,
application models, and routing designs.
To stay close to a practical WSN and a realistic
implementation, we set the MAC layer, link quality model,
and energy consumption parameters according to Mica2
Motes [13] with the MPR400 (915 MHz) radio. Nodes are
distributed in a 200 m × 200 m area following a Poisson
point process; the node density (nodes/m²) is chosen by the
method described in [26] to ensure a high level of network
connectivity. To simulate multihop transmissions with a
large enough hop count, we restrict the source nodes to the
lower-left corner of the region, while the sink is fixed at the
location (200 m, 200 m). The size of the neighbor table of
each node is set to 400 bytes for all the tested protocols,
which is found sufficient to store neighbor information
within two hops. Note that THVR requires relatively more
memory per node to maintain two-hop information. In
practice, under our simulation settings the average number
of neighbors within two hops is around 20, with an average
node degree of 6.
A. MAC Settings
Following the default CSMA scheme (similar to B-MAC
[14]) of Mica2 Motes, to initiate a packet transmission a
sensor node generates a random initial waiting time
uniformly distributed in the range [200, 328] bit-times (for
Mica2 Motes, one bit-time equals 1/40000 s) and starts a
timer. Upon timer expiration the channel is sensed; if it is
found idle, the packet is transmitted. Otherwise, the node
backs off and then continues sensing until the channel is
found idle. The backoff time is uniformly distributed in
[100, 130] bit-times. To improve delivery reliability, ARQ is
employed; if the total transmission count exceeds 7, the
packet is dropped. This avoids excessive retries over a bad
link or a too-busy channel.
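
The channel-access behaviour above can be summarised in a few lines of Python. This is only a host-side sketch of the timing rules (bit-time conversion, initial wait, backoff range, and the 7-transmission ARQ cap); the channel and link models passed in are hypothetical stubs, not part of the simulator.

# Sketch of the Mica2-style CSMA timing rules used in the simulation (channel/link models are stubs).
import random

BIT_TIME = 1 / 40000          # one bit-time in seconds for the Mica2 radio

def csma_send(channel_idle, packet_delivered, max_tx=7):
    """Return (success, waiting_time_seconds) after at most max_tx ARQ transmission attempts."""
    elapsed = random.uniform(200, 328) * BIT_TIME             # random initial waiting time, then timer
    for _ in range(max_tx):
        while not channel_idle():                             # channel sensed busy: back off and re-sense
            elapsed += random.uniform(100, 130) * BIT_TIME
        if packet_delivered():                                # transmit; a missing ACK triggers an ARQ retry
            return True, elapsed
    return False, elapsed                                     # dropped after 7 transmissions

# Example with a hypothetical 80%-idle channel and a 70%-reliable link.
ok, wait = csma_send(lambda: random.random() < 0.8, lambda: random.random() < 0.7)
print(ok, round(wait * 1000, 2), "ms of waiting")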
B. Link Model
We adopt the packet reception rate (PRR) model [27] for
lossy WSN links. It is built on experimental measurements
of practical systems with respect to wireless channel
statistics. With standard noncoherent FSK modulation and
Manchester encoding, the PRR of a wireless link is
expressible as a function of the transmitter-receiver
distance, the signal-to-noise ratio (SNR), and the frame size,
which equals 50 bytes including the preamble (2 bytes),
payload, and CRC (2 bytes). Here, we adopt the settings of
[27]. Note that the maximum packet size allowed for Mica2
is 241 bytes. This model takes into account both
distance-dependent path loss and log-normal shadowing in
characterizing wireless links.
C. Energy Model
In a WSN, the energy consumed in a node is mainly due to
packet transmission, reception, and channel sensing to
check whether the channel is clear. The total energy
consumed is thus expressible as the sum of the
transmission, reception, and channel-sensing energies.
D. Simulation Results
In this section, a detailed performance investigation
of THVR
is conducted and compared with SPEED [12] and with the
well-known PRR×distance routing metric proposed in [28],
in which the distance traversed towards the destination is
weighted by the link PRR and which was verified to be
superior to simple greedy geographic routing. In the first
set of simulations, we consider one source node located at
(20 m, 20 m), while the sink is at (200 m, 200 m). The
source generates a CBR flow at 1 packet/s with a packet
frame size of 50 bytes (including preamble, payload, and
CRC). The weighting factor α in the joint metric is set at 0.9
to emphasize end-to-end delay performance. In each run,
500 packets are transmitted. Fig. 6 shows the results under
different deadline requirements ranging from 900 to
1800 ms. As shown in Fig. 6(a), with an increasing deadline
the DMR of every protocol generally decreases, since more
packets can finally be forwarded to the destination within
the longer allowable duration. As the deadline increases,
the DMRs converge to their corresponding levels. It can be
observed that THVR has a lower DMR than all the others in
general; when the deadline is stringent (e.g., less than
1200 ms), the advantage is especially significant. Generally
speaking, SPEED with one parameter setting has a smaller
DMR than with the other, as expected. The PRR×distance
metric, on the other hand, is known to be good at choosing
links for better reliability and routes packets in a
best-effort manner; however, it lacks an explicit
consideration of packet timeliness and delay performance.
Clearly, compared with the other two protocols, the
proposed THVR is able to enhance real-time delivery
through an effective integration of two-hop information,
having an inherently higher capability in path finding.
Fig. 6(b) shows the energy consumed per successfully
transmitted packet. The consumption exhibits a tendency
and characteristics similar to the DMR in Fig. 6(a). With a
high tolerance of packet delay (e.g., a deadline larger than
1400 ms), the DMR tends to be stable and the number of
packets successfully transmitted end to end is also quite
stable, which supports the convergence of the overall
energy consumption. Compared to the other protocols,
Fig. 6(b) clearly shows that THVR is more energy efficient.
One of the major reasons is that THVR is better at
forwarding packets along a small-delay path, which results
in a smaller DMR, a smaller retransmission rate, and thus
higher energy utilization efficiency. Besides, the initiative
drop control has a positive effect on energy saving. As
indicated by Fig. 6, THVR outperforms SPEED and the
geographic routing metric in both DMR and energy
efficiency under the workload of a single CBR flow.
Furthermore, we investigate the performance of THVR
under different workloads. Fig. 7(a) shows the DMR as the
number of sources increases from 1 to 6. Each source
generates a CBR flow at 1 packet/s, while the deadline
requirement is fixed at 1200 ms. The source nodes are
located in the lower-left area, as highlighted and labeled
with their IDs in Fig. 8(a) and (b). Fig. 7(b) shows the
energy consumption performance of the three protocols. It
is clear that as the number of sources increases, both the
DMR and the energy consumption generally increase. The
increase in DMR results from the increased channel-busy
probability, packet collisions at the MAC, and network
congestion caused by the larger number of sources and the
consequent traffic. The comparison indicates that THVR
has a lower DMR and also a lower energy consumption per
successfully transmitted packet, as shown in Fig. 7(a) and
(b), respectively, reflecting the general improvement
brought by THVR. It is worth pointing out that as the
number of sources increases from 4 to 6, SPEED with the
load-balancing setting outperforms the other SPEED
setting; this is due to the benefit of load balance when the
workload is heavy and traffic congestion is more likely to
happen.
Fig. 8 depicts the distribution and magnitude of the energy
consumption of nodes in the WSN. Comparing Fig. 8(a) and
(b), it is observable that one SPEED setting has energy
consumption footprints more concentrated along the
diagonal path, while the other setting spreads the
footprints over a wider area. Comparing the four
distributions in Fig. 8, it is clear that THVR has the most
even energy consumption, shared among a large number of
nodes.

It can be expected that THVR will have a longer system
lifetime due to the better balancing. Their delay
performance in the simulated WSN is shown in Table II.
THVR has the lowest DMR and energy consumption per
successfully transmitted packet. However, it is worth
noting that the value of the weighting factor α should be
chosen carefully in the delay versus load balance tradeoff;
otherwise, the end-to-end delay performance could be
over-sacrificed and consequently much degraded.





V. DISCUSSIONS

Generally, instant two-hop delay updating induces more
overhead than one-hop information updating, and this
issue affects our two-hop-based design as well. As observed
from Fig. 4, a further feedback packet is sent from a child
node to its parent node, as mentioned earlier. We measure
the total amount of overhead (ACK packets) encountered
and plot it (labeled THVR IU) in Fig. 9, comparing it to that
required by SPEED; in our case it is nearly twice that of
SPEED. However, one can consider reducing the overhead
by piggybacking the updated information onto conventional
ACK packets, without further (extra) feedback packets.
These data are then piggybacked and sent together only
when an ACK is to be transmitted. This keeps the number
of feedback packets small, although the resulting ACK size
is larger. With this approach, simulation shows that the
amount of overhead encountered in the WSN is almost the
same as in SPEED; theoretically they are equal, and the
slight difference is due to simulation randomness. The
result is plotted in Fig. 9 and labeled THVR PU. Note that a
drawback of this piggyback solution is that the two-hop
delay information may not be updated frequently enough.
However, since the link delay estimation is based on a
combination of the historical average and the most recent
measurement, there is only a minor difference in
estimation performance even when the update is not
immediate, especially in a WSN with low mobility.
VI. CONCLUSION
In this paper, a two-hop neighborhood information-based
geographic routing protocol is proposed to enhance the
service quality of real-time packet delivery in WSN. We
adopt the approach of mapping the packet deadline to a
velocity, as in SPEED; however, the routing decision is
made based on the two-hop velocity, integrated with an
energy balancing mechanism. An energy-efficient packet
drop control is incorporated to enhance energy utilization
efficiency while keeping the packet deadline miss ratio low.
The actual characteristics of
the physical and MAC layers are captured in the simulation
studies. Simulation results show that, compared with
SPEED and the PRR×distance-based routing, both of which
utilize only one-hop information, THVR achieves a lower
end-to-end deadline miss ratio and higher energy
utilization efficiency.




REFERENCES

[1] J. M. Kahn, R. H. Katz, and K. S. J. Pister, "Next century challenges: Mobile networking for smart dust," in Proc. IEEE/ACM MobiCom, Aug. 1999, pp. 271-278.
[2] A. Willig, "Recent and emerging topics in wireless industrial communications: A selection," IEEE Trans. Ind. Informat., vol. 4, no. 2, pp. 102-124, May 2008.
[3] Y. Li, C. S. Chen, Y.-Q. Song, and Z. Wang, "Real-time QoS support in wireless sensor networks: A survey," in Proc. IFAC FET, Nov. 2007, pp. 373-380.
[4] J. Stankovic, T. Abdelzaher, C. Lu, L. Sha, and J. Hou, "Real-time communication and coordination in embedded sensor networks," Proc. IEEE, vol. 91, no. 7, pp. 1002-1022, 2003.
[5] Y. Li, C. S. Chen, Y.-Q. Song, Z. Wang, and Y. Sun, "A two-hop based real-time routing protocol for wireless sensor networks," in Proc. IEEE WFCS, May 2008, pp. 65-74.
[6] K. S. Prabh and T. F. Abdelzaher, "On scheduling and real-time capacity of hexagonal wireless sensor networks," in Proc. ECRTS, 2007, pp. 136-145.
[7] C. S. Chen, Y. Li, and Y.-Q. Song, "An exploration of geographic routing with k-hop based searching in wireless sensor networks," in Proc. CHINACOM, Aug. 2008, pp. 376-381.
[8] M. A. Spohn and J. J. Garcia-Luna-Aceves, "Enhancing broadcast operations in ad hoc networks with two-hop connected dominating sets," in Proc. IEEE MASS, 2004, pp. 543-545.
[9] W. Lou and J. Wu, "On reducing broadcast redundancy in ad hoc wireless networks," IEEE Trans. Mobile Comput., vol. 1, no. 2, pp. 111-122, 2002.
[10] V. Rajendran, K. Obraczka, and J. J. Garcia-Luna-Aceves, "Energy-efficient, collision-free medium access control for wireless sensor networks," Wireless Networks, vol. 12, no. 1, pp. 63-78, Feb. 2006.
[11] T. He, J. Stankovic, C. Lu, and T. Abdelzaher, "A spatiotemporal communication protocol for wireless sensor networks," IEEE Trans. Parallel Distrib. Syst., vol. 16, no. 10, pp. 995-1006, 2005.
[12] J. Polastre, J. Hill, and D. Culler, "Versatile low power media access for wireless sensor networks," in Proc. ACM SenSys, 2004, pp. 95-107.
[13] M. Zuniga and B. Krishnamachari, "Analyzing the transitional region in low power wireless links," in Proc. IEEE SECON, 2004, pp. 517-526.
[14] K. Seada, M. Zuniga, A. Helmy, and B. Krishnamachari, "Energy-efficient forwarding strategies for geographic routing in lossy wireless sensor networks," in Proc. ACM SenSys, 2004, pp. 108-121.
[15] A. Woo, T. Tong, and D. Culler, "Taming the underlying challenges of reliable multihop routing in sensor networks," in Proc. ACM SenSys, Nov. 2003, pp. 14-27.
[16] W. Lou and J. Wu, "On reducing broadcast redundancy in ad hoc wireless networks," IEEE Trans. Mobile Comput., vol. 1, no. 2, pp. 111-122, 2002.
[17] L. Bao and J. Garcia-Luna-Aceves, "Transmission scheduling in ad hoc networks with directional antennas," in Proc. MobiCom, 2002, pp. 48-58.
[18] O. Chipara, Z. He, G. Xing, Q. Chen, X. Wang, C. Lu, J. Stankovic, and T. Abdelzaher, "Real-time power-aware routing in sensor networks," in Proc. IWQoS, Jun. 2006, pp. 83-92.
[19] E. Felemban, C. G. Lee, and E. Ekici, "MMSPEED: Multipath multi-SPEED protocol for QoS guarantee of reliability and timeliness in wireless sensor networks," IEEE Trans. Mobile Comput., vol. 5, no. 6, pp. 738-754, 2006.
[20] N. Boughanmi and Y.-Q. Song, "A new routing metric for satisfying both energy and delay constraints in wireless sensor networks," J. Signal Process. Syst., vol. 51, no. 2, pp. 137-143, 2008.







GREEDY GRID SCHEDULING ALGORITHM IN STATIC JOB SUBMISSION
ENVIRONMENT
Saumitra Singh
Department of Computer Science
Manav Rachna College of Engineering, Faridabad
saumitra.jk.mcs@gmail.com

________________________________________________________________________________________________________________________________________________________________________________________________
ABSTRACT

Grid scheduling is a technique by which user demands are
met and resources are efficiently utilized. Scheduling
algorithms are used to minimize the jobs' waiting time and
completion time. Most of the minimization algorithms are
implemented in a homogeneous resource environment. In
this paper, the presented algorithm minimizes the average
turnaround time in a heterogeneous resource environment.
The algorithm is based on a greedy approach and is used in
a static job submission environment where all the jobs are
submitted at the same time. Taking all jobs as independent,
the turnaround time of each job is minimized in order to
minimize the average turnaround time of all submitted
jobs.

Keywords
greedy method, grid, heterogeneous, high
performance computing, scheduling.
INTRODUCTION
Using distributed resources to solve applications involving
large volumes of data is known as grid computing [1], [2].
There exist many tools to submit jobs to resources that
have different computational power and are connected via
a Local Area Network (LAN) or a Virtual Private Network
(VPN). The main challenges in grid computing are efficient
resource utilization and minimization of the turnaround
time. The existing system model consists of a web-based
grid network platform with different management policies,
forming a heterogeneous system where the computing cost
and computing performance become significant at each
node [3], [4].
In a grid computing environment, applications are
submitted by users from their terminals for use of grid
resources. The resources include computing power,
communication capacity and storage. An application
consists of a number of jobs, and users want to execute
these jobs in an efficient manner [5]. There are two
possibilities for submitting jobs/data to resources: in one,
the job is submitted to the resource where the input data is
already available; in the other, a resource is selected on the
basis of specific criteria and both the job and the input data
are transferred to it. This paper uses the second approach,
wherein the job is submitted to a scheduler and the data to
a resource identified by the scheduler. In existing
algorithms a resource is selected randomly, sequentially,
or according to its processing power [2], [6], [7]. In this
paper, the proposed algorithm chooses a resource on the
basis of its processing power, the job requirement, and the
time at which the job can start at that resource.
The next section describes the system model. Section 3
describes the proposed scheduling algorithm for the static
job submission environment. In Section 4, the experimental
details and results are presented along with a comparison
with some existing algorithms. In Section 5, conclusions
and suggestions for future improvements are given.
SYSTEM MODEL
A grid is considered as a combination of multiple layers. In
our model the whole system is composed of three layers
(Fig. 1). The first layer is the user application layer, in
which user authentication is done and jobs are submitted
to the scheduler by the user. The second layer contains the
scheduler and the Grid Information Service (GIS); the
scheduler schedules jobs among the various resources
after obtaining resource status information from the GIS.
The second layer is connected to the user through a VPN,
which provides additional security so that only authorized
users can access the services. All the resources reside in
the third layer, where the users' jobs are executed; these
resources are also connected through the VPN.
Fig. 1: Layered architecture.
PROPOSED ALGORITHM
The existing grid scheduling algorithms are based on the
speed of the resources [6], [7]. Each resource of layer 3
(Fig. 1) has a different processing power, and all the
resources of layer 3 are connected via a homogeneous
communication environment in which the communication
delay between the scheduler and the resources is assumed
constant; the jobs submitted at layer 1 are assumed to have
different job requirements.
An algorithm is proposed in this paper which is suitable for
static job submission in a heterogeneous resource
environment connected to the scheduler through a
homogeneous communication environment. A greedy
approach is used to solve the job scheduling problem: "A
greedy algorithm always makes the choice that looks best
at that moment. That is, it makes a locally optimal choice in
the hope that this choice will lead to a globally optimal
solution" [8]. The proposed algorithm uses a similar
approach; it takes every job as independent of the others,
and each job is scheduled on the resource that gives the
minimum turnaround time for that job. The overall
turnaround time of all the jobs is thus minimized. The
parameters used in this algorithm are as follows:
A set of resources, R = {R1, R2, R3, ......, Rn}.
Ji = The submitted i-th job.
Arr_timei = Arrival time of job Ji.
Proc_powerj = Processing power of resource Rj.
Strt_timej = Estimated time at which a job starts execution at resource Rj.
Job_reqi = Length of job Ji.
Schd_valueij = Expected turnaround time of the i-th job at the j-th resource.
Min = The minimum of Schd_valueij among all resources.
Res_id = Id of the currently selected resource having the optimum turnaround time.

The algorithm used to schedule a job is given as
follows:





GREEDY_SCHEDULE
/* The users submit their jobs to the scheduler. */
For all resources Rj
    Strt_timej = 0.0        /* Initialize the start time at each resource. */
End For
Insert all the jobs Ji into a queue Q    /* The jobs are stored in a queue Q. */
While Q is not empty do
    Delete the job Ji from the front of Q
    SUBMIT_NEW_JOB
    UPDATE_STATUS
    Advance the Q pointer
End While
End GREEDY_SCHEDULE

The scheduler uses SUBMIT_NEW_JOB algorithm to
find the best suited resource that minimizes the
turnaround time. The turnaround time is calculated
on the basis of expected completion time of a job. The
detailed SUBMIT_NEW_JOB algorithm is as follows:

SUBMIT_NEW_JOB
Min = infinity
For every resource Rj
    /* Calculate the expected turnaround time of job Ji at resource Rj. */
    Schd_valueij = Strt_timej + (Job_reqi / Proc_powerj)
    If Min is greater than Schd_valueij Then
        Min = Schd_valueij
        Res_id = Rj
    End If
End For
Submit the job Ji to the Res_id resource
Submit the input data of job Ji to the Res_id resource
End SUBMIT_NEW_JOB

Once the scheduler submits a job to a resource, the
resource remains busy for some time processing that job.
The UPDATE_STATUS algorithm is used to find out when
the resource will be available to process a new job. The
UPDATE_STATUS algorithm is given below:

UPDATE_STATUS
/* Res_id is the resource on which the job Ji is
submitted. j is the index of resource on which the
job Ji is submitted and Rj = Res_id*/
Strt_timej = Schd_valueij
End UPDATE_STATUS

The presented algorithm has a time complexity of O(n) for
each job, where n is the number of resources. The
algorithm also requires
additional space to store the resources' current
availability status.
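
As a compact illustration of the three procedures above, the following Python sketch implements the same greedy selection (Schd_value = Strt_time + Job_req / Proc_power, pick the minimum, then update the chosen resource's start time). The job lengths and resource speeds in the example are illustrative values, not the experiment's workload.

# Minimal sketch of GREEDY_SCHEDULE / SUBMIT_NEW_JOB / UPDATE_STATUS for static job submission.

def greedy_schedule(job_lengths, proc_powers):
    """Return per-job resource choices, per-job expected turnarounds, and the average turnaround."""
    strt_time = [0.0] * len(proc_powers)          # estimated time at which each resource becomes free
    assignments, turnarounds = [], []
    for job_req in job_lengths:                   # static environment: all jobs are known up front
        # SUBMIT_NEW_JOB: expected turnaround of this job on every resource
        schd_values = [strt_time[j] + job_req / proc_powers[j] for j in range(len(proc_powers))]
        res_id = min(range(len(proc_powers)), key=lambda j: schd_values[j])
        # UPDATE_STATUS: the chosen resource stays busy until this job completes
        strt_time[res_id] = schd_values[res_id]
        assignments.append(res_id)
        turnarounds.append(schd_values[res_id])
    return assignments, turnarounds, sum(turnarounds) / len(turnarounds)

# Example with three resources of different speeds (MIPS) and a few job lengths (in instructions).
jobs = [9.6e8, 4.8e8, 7.2e8, 2.4e8]
res, tat, avg = greedy_schedule(jobs, proc_powers=[48000, 43000, 54000])
print(res, [round(t, 1) for t in tat], round(avg, 1))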
EXPERIMENT RESULTS
The GridSim simulator [6] is used to simulate the
algorithms; the GridSim toolkit models the heterogeneous
resource environment and the communication
environment. The experiments are performed with three
algorithms: Random Resource Selection, Equal Job
Distribution, and the Proposed Algorithm. The input data is
the same for all three algorithms. The simulation is
conducted with three resources, which are shown in
Table 1.

Table 1: Resources with their architecture and processing power.

Resource             R0          R1          R2
Architecture         Sun Ultra   Sun Ultra   Sun Ultra
OS                   Unix        AIX         Unix
Proc_power (MIPS)    48000       43000       54000

The scheduler submits the jobs to resources according to
these algorithms, which are presented one by one with
their simulation results.

A. Random Resource Selection
In this algorithm the scheduler contacts the GIS to obtain
the resource information and then chooses a resource
randomly [7]. The job is submitted to this chosen resource.
The algorithm is very simple to implement and places little
overhead on the scheduler. The bar chart (Fig. 2) shows the
turnaround time of the different jobs. The completion time
is the time at which the result of a job is available. After
simulation, the average turnaround time is found to be
20105.65 seconds and all the jobs are completed by the
64420.25th second.
Fig. 2: Jobs and turnaround time using Random Resource Selection.
B. Equal Job Distribution
In Equal Job Distribution, we first calculate the total length
of all the jobs and then distribute this length among all the
resources. The main notations used in the formula are as
follows:

L = Total length of all the jobs taken together.
Proc_powerj = Processing power of resource Rj.
tProc_power = Total processing power of all resources.
Loadj = Load assigned to resource Rj.

The formula used to calculate the job distribution is given
below:

Loadj = L * (Proc_powerj / tProc_power)
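
For concreteness, a small Python sketch of this proportional split is shown below, using the processing powers from Table 1 and an illustrative total job length; the numbers are examples, not the experiment's actual workload.

# Sketch of the Equal Job Distribution split: Load_j = L * Proc_power_j / tProc_power.

def distribute_load(total_length, proc_powers):
    t_proc_power = sum(proc_powers)                           # total processing power of all resources
    return [total_length * p / t_proc_power for p in proc_powers]

# Example with the three resources of Table 1 and an illustrative total length L (in instructions).
loads = distribute_load(total_length=1.45e9, proc_powers=[48000, 43000, 54000])
print([round(l / 1e6, 1) for l in loads])                     # load per resource, in millions of instructions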
The turnaround time of each job is shown by the bar chart
in Fig. 3. Experimental results show that the average
turnaround time is 17968.55 seconds and the last result is
output at the 39000.22th second. Equal Job Distribution
reduces the average turnaround time by 10.62% and takes
less time than Random Resource Selection to produce all
the results.
Fig. 3: Jobs and turnaround time using Equal Job Distribution.
C. Proposed Algorithm
In the Proposed Algorithm, the scheduler finds the
resource information with the help of the GIS and
calculates the approximate completion time of each
job on every resource. Using these values, the scheduler
chooses the resource with the minimum completion time
and submits the job to that resource. The turnaround time
of each job is shown in the bar chart in Fig. 4. With this
algorithm, the average turnaround time of the jobs is
17208.77 seconds and all the jobs are completed by the
41840.88th second. The Proposed Algorithm further
reduces the average turnaround time by 4.22% compared
with Equal Job Distribution, although the completion time
of all jobs is somewhat longer than with the Equal Job
Distribution algorithm.
Fig. 4: Jobs and turnaround time using Proposed Algorithm.
CONCLUSION AND FUTURE WORK
The proposed scheduling algorithm reduces the average
turnaround time of all submitted jobs. The considered
environment executes the jobs on different, geographically
distributed resources. It is observed that the Proposed
Algorithm reduces the average turnaround time by 4.22%
compared with Equal Job Distribution (as shown in
Table 2). The algorithm uses a meta-scheduler in which
resource failure is not considered.

Table 2: Algorithms with their average turnaround time and completion time.

Algorithm                    Average Turnaround Time (s)    Completion Time (s)
Random Resource Selection    20105.65                       64420.25
Equal Job Distribution       17968.55                       39000.22
Proposed Algorithm           17208.77                       41840.88

REFERENCES
[1] A. H. Alhusaini, V. K. Prasanna, and C. S. Raghavendra, "A unified resource scheduling framework for heterogeneous computing environments," in Proceedings of the Eighth Heterogeneous Computing Workshop, San Juan, Puerto Rico, pp. 156-165, 1999.
[2] N. Muthuvelu, J. Liu, N. L. Soe, S. Venugopal, A. Sulistio, and R. Buyya, "A dynamic job grouping-based scheduling for deploying applications with fine-grained tasks on global grids," in Proceedings of the 3rd Australasian Workshop on Grid Computing and e-Research (AusGrid 2005), Newcastle, Australia, pp. 41-48, January 30 - February 4, 2005.
[3] I. Foster and C. Kesselman, "The Grid: Blueprint for a New Computing Infrastructure," Morgan Kaufmann Publishers, San Francisco, USA, 1999.
[4] R. Buyya, D. Abramson, and J. Giddy, "Nimrod/G: An architecture for a resource management and scheduling system in a global computation grid," in International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2000), Beijing, China, IEEE Computer Society Press, USA, 2000.
[5] Cong Liu, Sanjeev Baskiyar, and Shuang Li, "A general distributed scalable peer to peer for mixed tasks in grids," in High Performance Computing - HiPC 2007, ISBN 978-3-540-77219-4, pp. 320-330, 2007.
[6] Rajkumar Buyya and Manzur Murshed, "GridSim: A toolkit for the modeling and simulation of distributed resource management and scheduling for Grid computing," Technical Report, Monash University, Nov. 2001; also in Journal of Concurrency and Computation: Practice and Experience (CCPE), pp. 1-32, Wiley Press, May 2002.
[7] Volker Hamscher, Uwe Schwiegelshohn, Achim Streit, and Ramin Yahyapour, "Evaluation of job-scheduling strategies for grid computing," in 1st IEEE/ACM International Workshop on Grid Computing (Grid 2000), Lecture Notes in Computer Science (LNCS), Springer, Berlin, Heidelberg, New York, pp. 191-202, 2000.
[8] T. H. Cormen, C. E. Leiserson, and R. L. Rivest, "Introduction to Algorithms," 2nd edition, MIT Press and McGraw-Hill Book Company, Boston, Massachusetts, ch. 16, pp. 370-403, 2001.



WIRELESS PATIENT MONITORING SYSTEM TO MEASURE HEARTBEAT, BODY &
RESPIRATION TEMPERATURE
Mr. Md. Nadeem A. Siddiqui¹, Mrs. Ch. Hima Bindu, (Ph.D.)², Mr. Y V B Reddy, M.Tech³
¹ P.G. student, ECE Department, QIS College of Engg & Technology, Ongole, Prakasam (Dt), A.P.
² Sr. Associate Professor, QIS College of Engg & Technology, Ongole, Prakasam (Dt), A.P.
³ Associate Professor, QIS College of Engg & Technology, Ongole, Prakasam (Dt), A.P.
siddiqui.nadeem07@gmail.com¹, hb.muvvala@gmail.com², yvbreddy09@gmail.com³
_______________________________________________________________________________________________________________________
ABSTRACT

This project monitors patients' conditions through
bio-medical parameters such as heartbeat, respiration and
body temperature. Different systems are available in the
market, but they are very expensive and difficult to
understand. We use RF technology to overcome these
difficulties.

INTRODUCTION
Embedded Systems
An embedded system is a combination of computer
hardware, software and additional mechanical parts
designed to perform a specific function. An example is the
mobile phone: it is hardly realized that the phone actually
consists of a processor and the software running inside it.
Another example is the TV remote control; very few realize
that there is a microcontroller inside that runs a set of
programs especially for the TV. The wireless patient
monitoring system to measure heartbeat, body and
respiration temperature is also an application of embedded
technology, in which a microcontroller is used to control
the entire device. Using this technology, the system can
also be integrated with cellular systems.
MICRO CONTROLLER
A Microcontroller is a general-purpose device that is
meant to read data, perform limited calculations on
that data and control its environment based on those
calculations. The prime use of a microcontroller is to
control the operation of a machine using a fixed
program that is stored in ROM and that does not
change over the lifetime of the system. A
microcontroller is a highly integrated chip that
includes all or most of the parts needed for a
controller in a single chip. The microcontroller could
be rightly called a one-chip solution.
MICROCONTROLLER Vs MICROPROCESSOR
If a system is developed with a microprocessor, the
designer has to use external memory such as RAM, ROM or
EPROM and external peripherals, and hence the PCB must
be large enough to hold all the required peripherals. A
microcontroller, in contrast, has all these peripheral
facilities on a single chip, so developing a similar system
with a microcontroller reduces the PCB size and the overall
cost of the design. The difference between a
microprocessor and a microcontroller is that a
microprocessor can only process data, whereas a
microcontroller can control external devices in addition to
processing data. If a device has to be switched ON or OFF
with a microprocessor, external ICs are needed to do this
work, but with a microcontroller the device can be
controlled directly without extra ICs. A microcontroller
often deals with bits rather than bytes, as in real-world
applications: switch contacts can be open or closed,
indicators lit or dark, motors turned on or off, and so forth.
PATIENT MONITORING SYSTEM
The patient is fully monitored under certified monitoring
technicians. If an abnormality in the heartbeat is detected,
an alarm buzzer sounds, indicating that there is an urgent
need to monitor and treat the patient according to the
situation. This automatic system plays an important role in
painless care. AAI can be defined as automatic
administration, by a certified monitoring technician, based
on the bio-medical parameters of the patient, eliminating
future side effects and the need for a doctor. This is
essential for performing painless monitoring, so an
automatic administration system is needed to save the
patient's life.
PRESENT SYSTEM
At present, the patient's condition is observed only by
specialists, which may cause many difficulties such as:


1. There is a chance of abnormalities in the heart and a
chance of side effects appearing in the future.
2. If the certified technician fails to monitor the problem
during the predetermined period, the patient may be
disturbed.
3. Other systems developed to monitor the heartbeat do so
by sensing the consciousness level of the patient rather
than by measuring his overall body condition.
4. Different technologies exist to monitor all the
bio-medical parameters, but they are very expensive and
have very complicated hardware.
PROPOSED SYSTEM
Nowadays, embedded systems are used in many medical
applications for controlling biomedical parameters. In this
design, a microcontroller is used for monitoring the
patient's condition automatically, based on various
biomedical parameters such as body temperature, heart
rate, respiration rate, etc.
We can now monitor the heartbeat, body temperature and
respiration temperature automatically. These parameters
are quite helpful in judging the condition of the patient,
and it is therefore necessary to check the patient's
condition without any delay. A fully automatic system is
very helpful for knowing the status of the patient. The
advantages of using the proposed system are:
1. The need for a specialist is eliminated.
2. Heartbeat abnormalities are measured, so future side
effects are avoided.
3. Using a combination of hardware and software, the
system is designed at an affordable cost.

Block Diagram of the System
Fig.: Block diagram of the patient monitoring system

WORKING OF THE SYSTEM

First, the heart beat sensor, temperature sensor and
respiration sensor are placed on the patient, and the
readings are calibrated against predefined values, with the
heartbeat measured in beats per minute in digital form. If
there is any abnormality, the reading goes to the
microcontroller and is then passed to the transmitter; the
microcontroller sets the system to the administer
condition. The reading is also displayed at the transmitter
so that the patient can be aware of the condition.
Simultaneously, the body and respiration temperatures are
measured and transmitted to the receiver section. In the
receiver section, all the parameters are monitored using a
microcontroller and a buzzer. The specialist can observe
the patient's condition using a personal computer as well
as by listening for the buzzer, so that he can treat the
patient according to the condition and save the patient's
life.
Fig: Transmitter Part of the system
Based on the results obtained from the receiver, the
specialist can analyze and record the readings, keeping a
record for planning further treatment. This is an effective
way not only to save the patient's life but also to make the
technology easy for anyone to adopt. The system can be
used in homes as well as in small to medium-sized
hospitals so that they too can treat patients without any
problems. The range of the technology is also quite
reasonable: it can be used up to about 10 metres.



Fig: Receiver section of the system

In this system, we can also measure the respiration rate,
defined as the number of breaths taken per minute.
Normally, the respiration rate is about 17 breaths per
minute, but it can vary depending on age and body size; for
children it can range up to 30-40 breaths per minute, and
as age increases the respiration rate decreases. So in our
system we monitor the heartbeat, body temperature,
respiration temperature and respiration rate, making it a
complete monitoring system.

COMPONENTS REQUIRED FOR THE SYSTEM

1. Temperature Sensor: to measure body temperature.
2. Heart Beat Sensor: to measure the heartbeat in beats
per minute, i.e., in digital form.
3. Micro-Controller: to control the overall operation.
4. A/D Converter: to convert the analog information into a
digital format.
5. Operational Amplifier: to amplify very low-level signals.



MEASUREMENT OF BIO-MEDICAL PARAMETERS

The measurement of bio-medical parameters is a vital
process, since these parameters determine the overall
condition of the patient and play a very significant role in
checking the patient's condition. Based on these
parameters, the microcontroller senses whether there is
an abnormality in the heartbeat or the other parameters,
and passes the signal on through the VIP antenna.
Similarly, the body temperature is measured in both analog
and digital form so that the doctor can understand the
condition properly, and the respiration temperature is also
measured. These are the key links in all sensors designed
to describe and analyze the bio-medical parameters. The
transducers used here are those that find application in
patient monitoring systems and in experimental work on
three parameters, namely body temperature, heartbeat
and respiratory temperature. Both transducers and
thermistors are made in a wide variety of forms suitable
for use in medical applications. They are available as:
1. wafers for applying to the skin surface,
2. tiny beads for inserting into the tissues,
3. clips for simple connection.

TEMPERATURE SENSOR
The most accurate method to measure temperature is to
use thermistors and resistance thermometers. A
thermistor, or thermal resistor, is a two-terminal
semiconductor device whose resistance is temperature
sensitive; the resistance of such devices decreases with an
increase in temperature. Thermistors have a very high
temperature coefficient of resistance, of the order of 3% to
5% per °C, making them ideal temperature transducers.
The temperature coefficient of resistance is normally
negative. The output of the temperature sensor is fed to
the amplifier stages. Resistance thermometers can also be
used to measure body temperature. Important
characteristics of resistance thermometers are a high
temperature coefficient of resistance, stable properties so
that the resistance characteristic does not drift with
repeated heating or cooling or with mechanical strain, and
high resistivity to permit the construction of small sensors.
RESPIRATION SENSOR
The primary functions of the respiratory system are to
supply oxygen to the tissues and to remove carbon dioxide
from them. The action of breathing is controlled by
muscular action causing the volume of the lung to increase
and decrease, affecting a precise and sensitive control of
the tension of carbon dioxide in the arterial blood. Under
normal circumstances this is a rhythmic action.
Respiratory activity can be detected by measuring changes
in the impedance across the thorax. Several types of
transducers have been developed for the measurement of
respiration rate. A strain-gauge type chest transducer is a
suitable transducer for measuring respiratory activity: the
respiratory movement results in changes in the strain
gauge element of the transducer, from which the
respiration temperature can be measured.
HEART BEAT SENSOR
Heart rate is the body's way of indicating how hard it is
working. It is vital that the heartbeat remains normal while
administering anesthesia to the patient. The normal heart
rate is 72 beats per minute. A sensor is designed for
monitoring changes in the heartbeat of the human body.
There are two ways of obtaining heart rate information
from the body:
1. Electrocardiogram (ECG)
2. PULSE
1) The E.C.G or Electrocardiogram, gives the
electrically picked up signals from the limbs due to
the nervous activity of the heart. The electrodes are
pasted on to the 2 hands and the left leg, the right leg
electrode serving as the common or ground
reference. The signals are picked up and amplified by
high gain differential amplifiers and then the
electrocardiogram signal is obtained.
2) The pulse signal refers to the flow of blood that
passes from the heart to the limbs and the peripheral
organs once per beat. Usually, the physician looks for
the pulse on the wrist of the patient. The artery is
near the surface of the skin and hence easily
palpable. This pulse occurs once per heart beat.
These pulse signals can be picked up by keeping a
piezo-electric pick up on the artery site (in the wrist).

We use the pulse signal to measure the heartbeat: a
sample is taken every 20 seconds, and the count is scaled
up to beats per minute and monitored.
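
A small Python sketch of this counting-and-scaling step is shown below; the 20-second window comes from the description above, while the alarm thresholds are illustrative values rather than the system's actual settings.

# Sketch of heartbeat estimation from a 20-second pulse count (the alarm limits are illustrative).

WINDOW_SECONDS = 20

def beats_per_minute(pulse_count, window_seconds=WINDOW_SECONDS):
    """Scale the pulses counted in one window up to beats per minute."""
    return pulse_count * (60 / window_seconds)

def check_heartbeat(pulse_count, low=60, high=100):
    """Return the BPM and whether the buzzer/alarm should be raised."""
    bpm = beats_per_minute(pulse_count)
    return bpm, not (low <= bpm <= high)

bpm, alarm = check_heartbeat(24)       # 24 pulses in 20 s -> 72 beats per minute, no alarm
print(bpm, alarm)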

DESIGN OF A MICROCONTROLLER

The design approach of the microcontroller mirrors that of
the microprocessor. The microprocessor design provides a
very flexible and extensive repertoire of multi-byte
instructions; these instructions work in hardware
configurations that enable large amounts of memory and
I/O to be connected to the address and data bus pins of the
integrated circuit package. The microcontroller design
uses a much more limited set of single- and double-byte
instructions that are used to move code and data from
internal memory to the ALU. The pins are programmable,
i.e., capable of having several different functions depending
on the wishes of the programmer, and the microcontroller
is mainly concerned with getting data from and to its own
pins. The main features of this micro-
controller are discussed below. Based on these features,
one can choose a suitable microcontroller for a given
application. For this application we chose an Atmel
microcontroller; it is very effective and easy to operate.
89C51 MICRO CONTROLLER

The microcontroller used in this system is the AT89C51,
manufactured by Atmel, USA. It is an advanced version of
the 8031.
SERIES : 89C51 Family
TECHNOLOGY : CMOS
The major features of 8-bit micro controller
ATMEL 89C51:
8 Bit CPU optimized for control applications
Extensive Boolean processing (Single-bit
Logic) Capabilities
On-chip Flash Program Memory
On-chip Data RAM
Bi-directional and Individually Addressable
I/O Lines
Multiple 16-Bit Timer/Counters
Full Duplex UART
Multiple Source/Vector/Priority Interrupt
Structure
On-Chip Oscillator and Clock circuitry
On-Chip EPROM
SPI Serial Bus Interface
Watch Dog Timer
Flash ROM
The 4 KB of flash ROM in the microcontroller can be erased
and reprogrammed. If the available memory is not enough
for the program, an external ROM can be interfaced with
this IC. The AT89C51 has 16 address lines, so a maximum
of 2^16 bytes, i.e., 64 KB, of ROM can be interfaced. Both
internal and external ROM can be used simultaneously.
RAM
The microcontroller provides 256 bytes of internal RAM,
which can be used along with external RAM; externally, up
to 64 KB of RAM can be connected to the microcontroller.
Of the internal RAM, the first 128 bytes are available to the
user and the remaining 128 bytes are used as special
function registers (SFRs). These SFRs are used as control
registers for the timers, the serial port, etc.

Input/output ports
Four I/O ports are available in the AT89C51: Port 0, Port 1,
Port 2 and Port 3. These are eight-bit ports and can be
controlled individually. In addition, the ports have pull-up
resistors to maximize their usability.
Interrupts
The AT89C51 provides 5 interrupt sources:
1. 2 external interrupts, INT0 and INT1
2. 2 timer interrupts, TF0 and TF1
3. 1 serial port interrupt.
Memory
The memory is logically separated into program memory
and data memory. This logical separation allows the data
memory to be addressed by an 8-bit address. Program
memory can only be read. There can be up to 64 KB of
directly addressable program memory.
ADC 0808/0809
The ADC 0808/0809 is an 8-bit analog-to-digital converter with an inbuilt 8-channel multiplexer. It is a monolithic CMOS device manufactured by National Semiconductor. It uses the successive approximation technique for the conversion process. The 8-channel multiplexer can directly access any of eight single-ended analog signals. Easy interfacing to microcontrollers is provided by the latched and decoded multiplexer address inputs and the latched TTL TRI-STATE outputs.
The salient features are:
1. High speed and accuracy
2. Minimal temperature dependence
3. Excellent long-term accuracy and repeatability
4. Minimal power consumption (15 mW)
These features make this device ideally suited to applications ranging from process and machine control to consumer and automotive applications.
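To make the successive approximation principle concrete, the following is a minimal Python sketch of how an 8-bit conversion narrows in on the input voltage one bit at a time; it is an illustration of the technique only, not the internal logic of the ADC 0808/0809, and the reference voltage and function names are assumptions.

# Illustrative sketch of successive approximation: starting from the MSB,
# each bit is tentatively set, compared against a DAC level, and kept only
# if the input voltage is at least that level.
def sar_convert(vin, vref=5.0, bits=8):
    """Return the digital code for vin using successive approximation."""
    code = 0
    for bit in reversed(range(bits)):           # MSB first
        trial = code | (1 << bit)               # tentatively set this bit
        dac_level = vref * trial / (1 << bits)  # DAC output for the trial code
        if vin >= dac_level:                    # comparator decision
            code = trial                        # keep the bit
    return code

if __name__ == "__main__":
    # 2.5 V with a 5 V reference lands at mid-scale (code 128).
    print(sar_convert(2.5))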
SOFTWARE DETAILS
A program is required which, when burnt into the EPROM, operates with the AT89C51 to perform the function of monitoring the bio-medical parameters. The program meets the following requirements (a rough simulation sketch is given after the list):
1. To read the inputs from the patient through the sensors provided with the microcontroller.
2. To activate the internal timer and enable it to
interrupt the AT 89C51 whenever the timer
overflows.
3. To read the parameters such as heart beat,
respiration, body temperature once in every
specified interval.
4. To check for the correctness of the parameter
values and activate the alarm set with the
system.
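The sketch below is a hypothetical host-side Python simulation of the control flow described in the list above; the actual program would be firmware on the AT89C51. The sensor-reading functions, sampling interval and alarm thresholds are illustrative assumptions, not values from the paper, and the heartbeat channel would be handled analogously.

# Hypothetical simulation of the monitoring loop: read parameters at a fixed
# interval, check them against limits, and raise an alarm when out of range.
import time
import random

SAMPLE_INTERVAL_S = 1.0                  # "once in every specified interval" (assumed)
TEMP_RANGE = (35.0, 38.5)                # assumed safe body-temperature band (deg C)
RESP_RANGE = (12, 25)                    # assumed safe respiration rate (breaths/min)

def read_temperature():                  # stand-in for an ADC channel read
    return random.uniform(34.0, 40.0)

def read_respiration():
    return random.randint(8, 30)

def sound_alarm(parameter, value):
    print(f"ALARM: {parameter} out of range: {value}")

def monitor_once():
    temp = read_temperature()
    resp = read_respiration()
    print(f"temperature={temp:.1f} C, respiration={resp}/min")
    if not (TEMP_RANGE[0] <= temp <= TEMP_RANGE[1]):
        sound_alarm("temperature", temp)
    if not (RESP_RANGE[0] <= resp <= RESP_RANGE[1]):
        sound_alarm("respiration", resp)

if __name__ == "__main__":
    for _ in range(3):                   # a few sample cycles
        monitor_once()
        time.sleep(SAMPLE_INTERVAL_S)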
TO SUMMARIZE:
By using various electrical circuits, the bio-medical parameters can be measured. The output of these circuits is amplified by means of an amplifier and fed into an A/D converter. The digitized signal is then fed into the input port of the microcontroller.
The microcontroller displays the parameters as digital values on the display device. If the temperature or respiration level rises or falls outside its limits, the buzzer rings automatically under the control of the microcontroller.

Fig: The interface between the receiver and the computer used by the specialist
EXPERIMENTAL RESULTS


Fig: Different results based on patient conditions
FUTURE ENHANCEMENTS
1. Multiple parameters like blood pressure, retinal size, age and weight can be included as controlling parameters in the future.
2. By increasing the radio frequency, the operating range can also be increased.
CONCLUSION
Modern technologies have been developed that promote a comfortable, better and disease-free life. Prevention is better than cure, and protection is smarter than prevention; the wireless patient monitoring system presented here, which measures heartbeat, respiration and body temperature, is one of the efficient protecting systems.
BELIEF, DESIRE AND INTENTION AGENTS: CONCEPTS, ARCHITECTURE AND AN
APPLICATION
Narendra Reddy P.N., R. C. Biradar, S. S. Manvi
Department of Electronics and Communication
Reva Institute of Technology and Management, Bangalore, India
yahoonarendra.reddy@gmail.com, rajashekharbiradar@yahoo.com,
agentsun2102@yahoo.com
___________________________________________________________________________________________________________________________________
ABSTRACT

A Belief, Desire and Intention (BDI) agent is a rational agent having certain mental attitudes, representing the informational, motivational, and deliberative states of the agent, that improve the throughput of the system in terms of its efficiency, reliability and profitability. These mental attitudes determine the system's behavior and are critical for achieving adequate or optimal performance when deliberation is subject to resource bounds.

In this paper, we provide some of the concepts, the architecture and an application of BDI agents. An idealized interpreter and an interaction mechanism for BDI agents are also provided. One application of BDI agents in real-time embedded systems discussed here is a mobile robot navigation system.

KEYWORDS
agents, BDI agents, events, plans, goals, tasks, actions.

I. INTRODUCTION

Agent-oriented software engineering is a relatively new field, born in the early 1990s and still maturing. A piece of software with artificial intelligence that works autonomously is called an agent. At the beginning, the agent was defined in this simple way; with the passage of time, other attributes were added to the definition. Research communities are working to find more efficient ways to design agent software that exhibits all the characteristics of the definition. In 1995, Wooldridge divided agent theory and practice into three parts, i.e., agent theories, architectures and languages, and described agent architectures as software engineering models. Researchers in this area are primarily concerned with the problem of designing software or hardware systems that will satisfy the properties specified by agent theorists [1]. An agent is defined as a piece of software, whether a process, an object, a subroutine or a sub-system, with the properties of autonomy, social ability, reactivity, pro-activeness, mobility, veracity, benevolence and rationality [2].
Goodwin (1993) discussed a few more properties, such as capable, perceptive and successful as general agent properties, and predictive, rational, sound and interpretive as deliberative agent properties [3].
Adaptation and cooperation are further attributes given by Park K. Many other attributes have since been added to this field as the requirements have grown. One widely accepted definition of an agent is: an agent is an encapsulated computer system that is situated in some environment, and that is capable of flexible, autonomous action in that environment in order to meet its design objectives. A well-known agent architecture that reflects many of these notions is the Belief-Desire-Intention (BDI) architecture, which can be summarized in terms of three abstraction layers, called the philosophical (psychological), theoretical, and implementation layers. Beliefs, Desires, and Intentions are seen as high-level, abstract, externally ascribed characteristics. These three characteristics are then mapped through to the design or model layer. In the rest of the paper, BDI agents, their architecture, the interaction mechanism between two BDI agents and a case study that uses these agents in embedded applications are discussed.

II. BDI AGENTS

An agent is an independent processing entity that interacts with the external environment and with other agents to pursue its particular set of goals. The agent pursues its given goals by adopting the appropriate plans, or intentions, according to its current beliefs about the state of the world, so as to perform the role it has been given. Such an intelligent agent is generally referred to as a Belief-Desire-Intention (BDI) agent. In short, belief represents the agent's knowledge, desire represents the agent's goal, and intention lends deliberation to the agent. The agent control loop is: first determine the current beliefs, goals and intentions, then find the applicable plans and decide which plan to apply, and finally start executing the plan.
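A minimal Python sketch of this control loop is given below: beliefs are revised from percepts, an applicable plan is selected for the current goals, and its body is executed. The belief and plan structures are illustrative assumptions, not the interface of any specific BDI platform.

# Minimal sketch of the BDI control loop: update beliefs, pick an applicable
# plan for the current goals, then execute it.
def bdi_loop(perceive, plan_library, goals, beliefs, max_cycles=10):
    for _ in range(max_cycles):
        beliefs.update(perceive())                       # 1. revise beliefs
        applicable = [p for p in plan_library
                      if p["goal"] in goals and p["context"](beliefs)]
        if not applicable:
            continue                                     # nothing to do this cycle
        intention = applicable[0]                        # 2. deliberate: commit to a plan
        for step in intention["body"]:                   # 3. execute the plan body
            step(beliefs)
        goals.discard(intention["goal"])                 # goal achieved
        if not goals:
            break

# Tiny usage example: a single "reach_target" goal.
if __name__ == "__main__":
    plans = [{
        "goal": "reach_target",
        "context": lambda b: b.get("battery", 0) > 10,   # only if enough battery
        "body": [lambda b: print("moving toward target at", b["target"])],
    }]
    bdi_loop(perceive=lambda: {"battery": 80, "target": (3, 4)},
             plan_library=plans,
             goals={"reach_target"},
             beliefs={})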


Fig. 1 Model showing the links between goals and tasks

Figure 1 shows the BDI model, which has proved to be a dominant view in the contemporary philosophy of human mind and action. The BDI model is used as a tool to analyze agents' environments, goals, and behaviours [8]. The originality of BDI agents is said to be that they are either proactive or reactive, and may be partially both at the same time, have a high degree of autonomy, and are situated in and interact with their environment, which is sometimes considered simply as a resource. For direct interaction with the environment and other agents, interfaces are required; interfaces are more important in agents than in objects. When an agent perceives its environment using sensors of any type (either hardware based or software based), it is common to model that in terms of percepts. When an agent interacts with its environment in order to make a change in the environment, this is called an action. The mechanism by which this is accomplished is often called an effector.

A. BDI agent Architecture
The architecture of the BDI agent model is shown in
Figure 2, which has three external ports: inter-agent
communication, control, and input/output. The
control port has three functions: (a) for the system to
activate/ deactivate the agents; (b) for the agent to
inform the system when it finishes its job; (c) for the
agent to synchronize with the system. The
input/output port is used to send and receive
information to and from the host environment.
The inter-agent communication port allows the
agents to send/receive information with other agents
and cooperate with others. Under the BDI model, agents may be given precompiled plans, or they may plan or learn new plans at execution time. Giving a BDI agent pre-compiled plans is a method for ensuring predictable agent behaviour under critical operational conditions, and for ensuring performance. BDI agents are highly suitable for the development of time- and mission-critical systems, as the BDI approach provides for the verification and validation of the model.


Fig. 2 The architecture of BDI agent model.

B. BDI-Agent Interaction Mechanism
BDI agents are generally designed with a specific purpose in mind: they do one or perhaps several tasks very well. If BDI agents must perform more tasks, we can either increase their complexity (which increases the development effort), or we can make them work cooperatively. Usually a complex real-time system can be constructed as a set of independent and cooperating agents, where each agent owns its own intentions and pursues its own set of goals. For cooperation between BDI agents to succeed, effective interaction is required. We can view a collection of BDI agents as working together cooperatively towards a global goal. For such a collection to function coherently, we need a common language and communication medium. Since all of the BDI agents work in an asynchronous mode, messages are only issued on demand. This leads to a newly developed on-demand message passing (ODMP) protocol.
Fig. 3 BDI-Agent Interaction System

A typical example of two BDI agents connected by a communication path is shown in Figure 3. In this mechanism, each BDI agent creates a message queue with priorities: messages marked as urgent are attached to the head of the queue, and messages marked as normal are lined up in FIFO order. Each BDI agent may have multiple current tasks that need to be carried out, and maintains a database recording the status of the other agents in its group. Before sending a message to BDI agent 2, BDI agent 1 needs to check the status of BDI agent 2 in its database; if BDI agent 2 is no longer in service due to some failure, BDI agent 1 stops all its messages to BDI agent 2. On the other hand, when BDI agent 1 receives a message from BDI agent 2, it makes its decision based on the new message and updates its own database as well.
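The following small Python sketch illustrates this per-agent message handling: urgent messages go to the head of the queue, normal ones join the tail in FIFO order, and the peer's status is checked before sending. Class and method names are illustrative assumptions, not the authors' ODMP implementation.

# Sketch of the described mailbox behaviour; not the actual ODMP protocol.
from collections import deque

class BDIAgentMailbox:
    def __init__(self, name):
        self.name = name
        self.queue = deque()                 # pending incoming messages
        self.peer_status = {}                # local "database" of peer states

    def receive(self, msg, urgent=False):
        if urgent:
            self.queue.appendleft(msg)       # urgent: attach to the head
        else:
            self.queue.append(msg)           # normal: FIFO order

    def send(self, peer, msg, urgent=False):
        # Only send if the peer is known to be in service.
        if self.peer_status.get(peer.name) != "in_service":
            print(f"{self.name}: {peer.name} not in service, dropping message")
            return
        peer.receive(msg, urgent)

    def process_next(self):
        return self.queue.popleft() if self.queue else None

if __name__ == "__main__":
    a1, a2 = BDIAgentMailbox("agent1"), BDIAgentMailbox("agent2")
    a1.peer_status["agent2"] = "in_service"
    a1.send(a2, "status report")                  # normal message
    a1.send(a2, "obstacle detected!", urgent=True)
    print(a2.process_next())                      # urgent message is handled first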
In order to provide a media-independent agent interaction mechanism, adapters are necessary. Adapters provide a uniform interface to the agents, regardless of the particular transport used. The transport can be a bus driver for a software agent, or a higher-level communication protocol such as UDP (User Datagram Protocol).

III. BDI AGENT APPLICATION

In this section, we describe how BDI agents can be used in real-time embedded systems. The case study provided here is a mobile robot system. Vision sensors are powerful sensors for a mobile robot for situation awareness and target detection.
However, most computer vision approaches have
underlying assumptions, such as large memory, high
computation power, static vision system, or off-line
processing. When a vision system is on a mobile platform, there are some constraints that are specific compared to a static vision system. First, the system has to be highly robust to adapt to changing illumination and environmental structures. Second, real-time processing is very important to realize natural reactive behavior and prevent any serious damage. Furthermore, a mobile robot system consists of not only a vision system but also navigation and control systems. Most developed mobile robot systems are designed to work efficiently under certain environments, such as structured indoor buildings or outdoor open environments. However, in some emergency response situations, such as an urban search and rescue (USAR) environment or an unknown hazardous environment, the working environment of the mobile robot is unpredictable and changes dynamically. All of these tasks have to be processed by the on-board processor of the robot under real-time constraints. A software-only solution will push the limits of the processing capability, which may lead to a very conservative computation solution and a very challenging real-time scheduler. Furthermore, with one or multiple microprocessor(s) in the system, it is difficult to handle the high frequency of the external I/O; the overhead of the interrupts reduces microprocessor performance.

A. Case study : A Mobile Robot System
In this section, we discuss the BDI-agent-based architecture for a mobile robot navigation system [12]. The block diagram is shown in Fig. 4. Usually one mobile robot system has multiple sensor systems, where each sensor agent is designed to acquire some specific information from the external environment. All of the sensor agents work in parallel and independently. The sensor fusion agent analyzes the sensor information acquired by all of the sensor agents, fuses the sensor data if necessary, and sends the fused sensor data to the higher-level processing agents. The global path planning agent plans an optimized path for the robot to navigate across the global environment, given a starting point and a destination point. The obstacle avoidance agent detours the robot when there are obstacles in the way, to prevent the robot from hitting them. The actuator management agent receives requests from the processing agents and distributes the control commands to the individual actuator agents. The human-robot interface agent receives events or processed environmental information from the sensor fusion agent, and then sends corresponding commands to the robot. The emergency stop agent usually takes commands from a human and sends a request to the actuator management agent to stop the robot. According to hardware/software codesign technology, the hardware and software agents are partitioned as follows: the sensor agents, sensor fusion agent, actuator management agent, and actuator agents are configured in hardware on an FPGA, while all of the processing agents, including the path planning, obstacle avoidance, emergency stop, and human-robot interface agents, are implemented in software on an embedded processor core.



Fig. 4 Multi-agent architecture for a mobile robot
navigation system

Based on the above multi-agent architecture, when a
mobile robot navigates to different environments or
sensors encounter some malfunctions, only affected
sensor agents need to be reconfigured, while the
other agents will continue their own tasks without
any influence. The reconfiguration can be either
triggered by the sensor agents by invoking an
interrupt to processor, or by the processor itself
based on its data checking algorithm.


Fig. 5 The navigation environment

The real-world scenario is shown in Fig. 5. The mobile robot (circle) needs to navigate itself from a fixed starting point in room B to a fixed destination point in room A, while searching for a specific feature target (a square) and avoiding all of the obstacles on its way. A simple color detection algorithm is applied to the vision system to detect the target. The map of the two office rooms is installed on the robot before the navigation, and the obstacles (triangle and oval) are randomly scattered.
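The paper does not specify which planning algorithm the global path planning agent uses; purely as an illustration of how a path around obstacles on a known map can be computed, the sketch below runs a breadth-first search over a small grid in which '#' marks obstacle cells. The map layout is invented for the example.

# Illustrative breadth-first search over a grid map: one simple way a global
# path planning agent could compute an obstacle-free route from start to goal.
from collections import deque

def plan_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    if goal not in came_from:
        return None                      # no obstacle-free path exists
    path, cell = [], goal                # walk back from goal to start
    while cell is not None:
        path.append(cell)
        cell = came_from[cell]
    return list(reversed(path))

if __name__ == "__main__":
    office = ["....#....",
              "..#.#.##.",
              "..#......",
              "....##.#."]
    print(plan_path(office, start=(0, 0), goal=(3, 8)))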

IV. ADVANTAGES

The main advantages of BDI agents are their intuitive basis and clear functional decomposition. Many formal models can use this type of agent easily in a MAS (Multi-Agent System). BDI is a generic agent architecture and is the basis for many previous and current agent architectures. It has been successfully applied to small problems or simulations, such as air traffic control and customer-service applications (Agentis). The Procedural Reasoning System (PRS) was the first BDI system implemented [14]. Its disadvantages include lower efficiency and the trade-off between commitment and reconsideration [15].
This section highlights some of the advantages of this architecture.
Learning: BDI agents have specific mechanisms within the architecture to learn from past behavior and adapt to new situations.
Three Attitudes: Belief represents the agent's knowledge, desire represents the agent's goal, and intention lends deliberation to the agent; this implies that the three attitudes are sufficient.
Logics: The multi-modal logics that underlie BDI (which have complete axiomatizations and are efficiently computable) have high relevance in practice.
Multiple Agents: In addition to explicitly supporting learning, the BDI model explicitly describes mechanisms for interaction with other agents and integration into a multi-agent system.
Explicit Goals: Most BDI implementations have an explicit representation of goals.

V. CONCLUSION

As time passes, systems become more complex and more deeply embedded, so that it is hard for software engineers to analyze and design them with a simple model. The BDI model shows how agents naturally plan their intentions by reasoning about desires on the basis of their beliefs. In this paper, the BDI concept for modelling agent-based systems has been presented, and the development process for agent-based software construction based on the BDI agent model has been discussed. This paper is beneficial for those who want to gain an understanding of BDI agents and their implementation in MAS; any interested reader can go further by using the valuable references from which this material is compiled. We have also discussed one case study that is implemented using BDI agents.

REFERENCES

[1] Goodwin R., Formalizing Properties of Agents, Computer Science Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, 1993.
[2] Wooldridge M. and Jennings N.R., "Intelligent
Agents: Theory and Practice", Knowledge
Engineering Review, 1995.
[3] Park K., Kim J. and Park S., "Goal based agent-oriented software modelling", Software Engineering Conference, IEEE CNF 2000, APSEC 2000 Proceedings, Seventh Asia-Pacific, 5-8 Dec. 2000.
[4] Wooldridge M., Agent-based software
engineering IEE Proc. on Software Engineering,
1997.
[5] Jennings, N. R, "On agent-based software
engineering", Department of Electronics and
Computer Science, University of Southampton,
September 1999.
[6] Chan, K., Sterling, L. and Karunasekera, S., "Agent-
oriented software analysis", Software Engineering
Conference, 2004. IEEE Proceedings. 2004 Australian
2004.
[7] Wagner G., "A UML Profile for Agent-Oriented
Modeling", Eindhoven Univ. of Technology, Faculty of
Technology Management,G.Wagner@tm.tue.nl, 2002.
[8] Shan L. and Zhu H.,"CAMLE: A Caste-Centric
Agent-Oriented Modeling Language and
Environment, Department of Computer Science,
Oxford Brookes University, SELMAS 2004.
[9] Bruegge and Dutoit, Object-Oriented Software Engineering (Prentice Hall, 2000).
[10] "BDI-agents: From Theory to Practice", Proceedings of the First International Conference on Multiagent Systems (ICMAS'95).
[11] Bratman, M. E. (1999) [1987]. Intention, Plans,
and Practical Reason. CSLI Publications.

[12] TAOM4E, Tool for Agent Oriented visual
Modeling for the Eclipse Platform.
[13] Henderson-Sellers B.,Giorgini P.," Agent-
oriented Methodologies", Publisher: Idea Group
Publishing (June 28, 2005).
[14] Winikoff, M., Padgham, L., and Harland, J."
Simplifying the development of intelligent agents" In
M. Stumptner, D. Corbett, and M. J. Brooks (Eds.),
Proceedings of the 14th Australian Joint Conference
on Artificial Intelligence (AI01), Adelaide, 10-14
December 2001.

[15] Huget, M. P. "Agent UML notation for multi-agent
system design", Internet Computing, IEEE, Volume 8,
Issue 4, July-Aug. 2004.
[16] Agent Modeling Language AML,
http://www.whitestein.com/pages/solutions/meth.
html
[17] Multi-Agent Simulation Environment,
http://www.simsesam.de/
[18] Kinny D. and Georgeff M., "Commitment and effectiveness of situated agents", In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91), Sydney, Australia, 1991.
[19] V. Gafni. Robots: a real-time systems
architectural style. In Proc 7th European software
engineering conference, Toulouse, 1999.
[20] Jennings N. R., "On agent-based software
engineering", Artificial Intelligence, vol. 117,
February 2000.
























SEGMENTATION OF HISTOGRAM BASED CLUSTERS IN DIFFERENT COLOR
MODELS BY FUSION
Shaik Basha, Mrs. CH. Hima Bindu
P.G. Student, DECS; Assoc. Prof., Dept. of ECE
QIS College of Engg & Technology, Ongole, Prakasam (Dt), A.P.
bashashaik.r@gmail.com , hb.muvvala@gmail.com
_____________________________________________________________________________________________________________________________
ABSTRACT

This paper presents a new, simple, and efficient
segmentation approach, based on a fusion procedure
which aims at combining several segmentation maps
associated to simple partition models in order to
finally get a more reliable and accurate segmentation
result. The different label fields to be fused in our
application are given by the same and simple
clustering technique on an input image expressed in
different color models. Our fusion strategy aims at
combining these segmentation maps with a final
clustering procedure. This fusion framework remains
simple to implement, fast, and general enough to be applied to various computer vision applications.

Index terms:

textured image segmentation, color models, k-means
clustering, fusion of segmentation maps.

I. INTRODUCTION
Image segmentation is a classic inverse problem
which consists of achieving a compact region-based
description of the image scene by decomposing it
into meaningful or spatially coherent regions sharing
similar attributes. This low-level vision task is often
the preliminary (and also crucial) step in many video
and computer vision applications, such as object
localization or recognition, data compression,
tracking, and image retrieval. Because of its simplicity and
efficiency, clustering approaches were one of the first
techniques used for the segmentation of (textured)
natural images.

After the selection and extraction of the image features (usually based on color or texture, and computed on small overlapping windows centered around the pixel to be classified), the feature samples, handled as vectors, are grouped together in compact but well-separated clusters corresponding to each class of the image. The set of connected pixels belonging to each estimated class thus defines the different regions of the scene. The method known as k-means is one of the most commonly used techniques in the clustering-based segmentation field and, more generally, by far the most popular clustering algorithm used in industrial applications.

Fig: Block diagram of segmentation.
PRESENT SCHEMES
Many other methods have been proposed and studied in recent decades to solve the textured image segmentation problem. Contrary to clustering algorithms, spatial-based segmentation methods exploit the connectivity information between neighboring pixels; examples include mean-shift-based techniques, graph-based methods and region-based split-and-merge procedures, sometimes directly expressed as a global energy function to be optimized.

PROPOSED SCHEME

The segmentation approach proposed in this paper is conceptually different and explores a new strategy of fusing (i.e., efficiently combining) several segmentation maps associated with simpler segmentation models in order to get a final reliable and accurate segmentation result. More precisely, this work proposes a fusion framework which aims at fusing several k-means clustering results applied on an input image expressed in different color models. These different label fields are fused together by a simple k-means clustering technique using, as input features, the local histograms of the class labels previously estimated and associated with each initial clustering result.

II. INITIAL SEGMENTATIONS TO BE FUSED

The initial segmentation maps which will then be fused together by our fusion framework (see Section III) are simply given, in our application, by a k-means [2] clustering technique, applied on an input image expressed in different color spaces, and using as simple cues (i.e., as input multidimensional feature descriptor) the set of values of the re-quantized color histogram (with equidistant binning) estimated around the pixel to be classified. In our application, this local histogram is equally re-quantized (for each of the three color channels) into an Nb = q^3 bins descriptor, computed on an overlapping squared fixed-size (Nw x Nw) neighborhood centered around the pixel to be classified. This estimation can be computed quickly by using a more coarsely re-quantized color space and then computing the bin index that represents each re-quantized color. An estimate of the 125-bin descriptor, characterizing the color distribution for each pixel x to be classified, is given by the following standard bin counting procedure:

h_x(k) = C * SUM over u in Nx of delta( b(u) - k ),   k = 0, ..., Nb - 1

where b(u) is the bin index of the re-quantized color at pixel u, delta is the Kronecker delta function, and C = 1/(Nw)^2 is a normalization constant ensuring that the bins of h_x sum to one.






Fig: Estimation, for each pixel x, of the Nb = q^3 bins descriptor (q = 5) in the RGB color space.





Algorithm. Estimation, for each pixel x, of the Nb = q^3 bins descriptor.

For each pixel u belonging to Nx, with color value (Ru, Gu, Bu), do:
    k <- q^2 * floor(q*Ru/256) + q * floor(q*Gu/256) + floor(q*Bu/256)
    h[k] <- h[k] + 1/(Nw)^2

where Nx is the set of pixel locations within the Nw x Nw neighborhood region centered at x, h[.] is the bins descriptor (an array of Nb floats h[0], h[1], ..., h[Nb-1]), and floor(.) denotes the integer part.
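A small Python/NumPy rendering of this bin-counting procedure is given below, assuming an 8-bit RGB image stored as a NumPy array. The window size nw = 7 is an assumption for illustration (the value is not stated in this section); the bin-index formula and the q = 5 setting follow the algorithm above.

# Build the Nb = q^3 (= 125 for q = 5) normalized color histogram over the
# Nw x Nw window centered at pixel (y, x) of an 8-bit RGB image.
import numpy as np

def local_bin_descriptor(image, y, x, q=5, nw=7):
    half = nw // 2
    window = image[max(0, y - half): y + half + 1,
                   max(0, x - half): x + half + 1].astype(np.int64)
    r, g, b = window[..., 0], window[..., 1], window[..., 2]
    # k = q^2*floor(q*R/256) + q*floor(q*G/256) + floor(q*B/256)
    k = (q * q) * (q * r // 256) + q * (q * g // 256) + (q * b // 256)
    hist = np.bincount(k.ravel(), minlength=q ** 3).astype(np.float64)
    return hist / hist.sum()             # normalize so the bins sum to 1

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    h = local_bin_descriptor(img, y=32, x=32)
    print(h.shape, round(h.sum(), 6))    # -> (125,) 1.0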
In this simple model, a texton (i.e., the repetitive character or element of a textured image, also called a texture primitive) is characterized by a mixture of colors, or more precisely by the values of the re-quantized (local) color histogram. This model is simple to compute and allows significant data reduction.
Finally, these (125-bin) descriptors are grouped together into different clusters (corresponding to each class of the image) by the classical k-means algorithm [2] with the classical Euclidean distance. Each color space has an interesting property which can efficiently be taken into account in order to make the final fusion procedure more reliable. For example, RGB is an additive color system based on tri-chromatic theory and is nonlinear with respect to visual perception; this color space seems to be the optimal one for tracking applications. The HSV space is interesting because it decouples chromatic information from shading effects [13]. The YIQ color channels code the luminance and chrominance information, which is useful in compression applications (both digital and analogue); besides, this system is intended to take advantage of human color response characteristics. XYZ has the advantage of being more psycho-visually linear, although its components are nonlinear in terms of linear component color mixing. The LAB color system approximates human vision, and its L component closely matches the human perception of lightness [1]. The LUV components provide a Euclidean color space yielding a perceptually uniform spacing of colors, approximating a Riemannian space [17]. Each of these complementary properties will be efficiently combined by our fusion of clustering results (FCR) procedure.


Fig: initial segmentation maps

III. FUSION OF SEGMENTATION MAPS
The key idea of the proposed fusion procedure simply consists of considering, for each site (or pixel to be classified), the local histogram of the class (or texton) labels of each segmentation to be fused, computed on a squared fixed-size neighborhood centered around the pixel, as the input feature vector of a final clustering procedure. The proposed fusion procedure is thus simply a problem of clustering the local histograms of (preliminarily estimated) class labels computed around, and associated with, each site. To this end, we use, once again, a k-means clustering procedure exploiting, for this fusion step, a histogram-based similarity measure derived from the Bhattacharyya similarity coefficient. Given a normalized histogram h_x (at pixel location x) and a reference histogram h* (representing one of the cluster centers of each class of the k-means procedure), the Bhattacharyya distance between these two histograms is defined as

D(h_x, h*) = sqrt( 1 - SUM over k of sqrt( h_x(k) * h*(k) ) )

and a k-means algorithm based on this distance converged in all tested examples. The pre-estimated label fields to be fused (see Section II), along with the fusion procedure, can be viewed (and qualitatively explained) as a two-step hierarchical segmentation procedure in which, first, a texton segmentation map (in each color space) is estimated and, second, a final clustering is performed.
This mixture of textons (expressed in the set of color spaces) is then used for a final clustering. We recall that a texton, in our framework, is defined by a nonparametric mixture of colors. Consequently, in this final clustering (the fusion procedure), two sites for which the local class-label histograms (i.e., the mixture of textons in the different color spaces given by the bins histograms) are not too far from each other will be placed in the same class in the resulting fused segmentation. We can notice that none of the initial segmentations can be considered reliable, except the final segmentation result (at bottom right), which visually identifies quite faithfully the different objects of the scene. A final merging step is necessary and is used to avoid over-segmentation for some images. It consists of fusing each region (i.e., set of connected pixels belonging to the same class) of the resulting segmentation map with one of its neighboring regions if the distance between them is below a given threshold (or, if its size is below 50 pixels, with the closest region in the distance sense).
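To make this fusion step concrete, the sketch below computes the Bhattacharyya distance between normalized label histograms and runs a k-means-style loop over a set of such histograms. It is a simplified illustration under assumed initialization and stopping rules, not the authors' full FCR implementation.

# Simplified sketch of the fusion step: cluster per-pixel local histograms of
# class labels with a k-means-style loop using the Bhattacharyya distance
# D(h, h*) = sqrt(1 - sum_k sqrt(h[k]*h*[k])).
import numpy as np

def bhattacharyya_distance(h, h_ref):
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(h * h_ref))))

def kmeans_histograms(hists, n_classes, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = hists[rng.choice(len(hists), n_classes, replace=False)]
    labels = np.zeros(len(hists), dtype=int)
    for _ in range(n_iter):
        # Assignment step: nearest center in the Bhattacharyya sense.
        for i, h in enumerate(hists):
            labels[i] = np.argmin([bhattacharyya_distance(h, c) for c in centers])
        # Update step: mean histogram of each cluster, re-normalized.
        for j in range(n_classes):
            members = hists[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
                centers[j] /= centers[j].sum()
    return labels

if __name__ == "__main__":
    # Toy data: 200 random normalized "label histograms" with 10 bins.
    data = np.random.default_rng(1).random((200, 10))
    data /= data.sum(axis=1, keepdims=True)
    print(kmeans_histograms(data, n_classes=4)[:10])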

IV. EXPERIMENTAL RESULTS
In all the experiments, we have considered our fusion method on initial segmentations obtained with the following parameters: the size of the squared window used to compute the local histograms is set to the same fixed value for the initial segmentations and for the fusion procedure, and the number of bins for each local re-quantized histogram is set to 125 (q = 5). We use segmentations provided by the following color spaces: RGB, HSV, YIQ, XYZ, LAB, and LUV. Several quantitative performance measures will be given for several values (comprised between 6 and 13) of, respectively, the number of classes of the segmentations to be fused and the resulting number of classes of the final fused segmentation map. The optimal value of the merging threshold seems to lie between 0.10 and 0.15.

Fig: Example of the final merging step using the Bhattacharyya distance on different color spaces as merging criterion, on a fused segmented image of the Berkeley database.


Fusion Results



Fig: Results of images when applying fusion:




The comparison is based on the following performance measures: a probabilistic measure called PRI (the probabilistic Rand index; a higher probability is better) and three metrics, VoI (variation of information), GCE (global consistency error) and BDE (boundary displacement error), for which a lower distance is better.



Fig: Final merging step using the Bhattacharyya distance on six color models

V. CONCLUSION

In this paper, we have presented a new
segmentation strategy based on a fusion procedure
whose goal is to combine several segmentation maps
in order to finally get a more reliable and accurate
segmentation result. The initial segmentations to be
fused can be the output result of the same initial and
simple model used on an input image filtered by a
given filter bank, or it can also be provided by
different segmentation models, or by different segmentation results provided by different seeds (or different variations of the parameters) of the same stochastic segmentation model. This fusion
framework remains simple, fast, easily parallelizable,
general enough to be applied to various computer
vision applications, and performs competitively
among the recently reported state-of-the-art
segmentation methods.

REFERENCES

[1] S. Banks, Signal Processing, Image Processing and
Pattern Recognition.
Englewood Cliffs, NJ: Prentice-Hall, 1990.
[2] S. P. Lloyd, Least squares quantization in PCM, IEEE Trans. Inf. Theory, vol. IT-28, no. 2, pp. 129-136, Mar. 1982.
[3] D. Comaniciu and P. Meer, Mean shift: A robust approach toward feature space analysis, IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 5, pp. 603-619, May 2002.
[4] J. Shi and J. Malik, Normalized cuts and image segmentation, IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 8, pp. 888-905, Aug. 2000.
[5] P. Felzenszwalb and D. Huttenlocher, Efficient graph-based image segmentation, Int. J. Comput. Vis., vol. 59, pp. 167-181, 2004.
[6] M. Mignotte, C. Collet, P. Perez, and P. Bouthemy, Sonar image segmentation using a hierarchical MRF model, IEEE Trans. Image Process., vol. 9, no. 7, pp. 1216-1231, Jul. 2000.
[7] M. Mignotte, C. Collet, P. Perez, and P. Bouthemy, Three-class Markovian segmentation of high resolution sonar images, Comput. Vis. Image Understand., vol. 76, no. 3, pp. 191-204, 1999.
[8] F. Destrempes, J.-F. Angers, and M. Mignotte, Fusion of hidden Markov random field models and its Bayesian estimation, IEEE Trans. Image Process., vol. 15, no. 10, pp. 2920-2935, Oct. 2006.
[9] Z. Kato, T. C. Pong, and G. Q. Song, Unsupervised segmentation of color textured images using a multi-layer MRF model, in Proc. Int. Conf. Image Processing, Barcelona, Spain, Sep. 2003, pp. 961-964.
[10] P. Perez, C. Hue, J. Vermaak, and M. Gangnet, Color-based probabilistic tracking, in Proc. Eur. Conf. Computer Vision, Copenhagen, Denmark, Jun. 2002, pp. 661-675.
Industrial Inspection IX, San Jose, CA, Jan. 2001, pp. 102-113.
[12] J.-P. Braquelaire and L. Brun, Comparison and optimization of methods of color image quantization, IEEE Trans. Image Process., vol. 6, no. 7, pp. 10481952, Jul. 1997.
vol. 29, no. 3, pp. 371-381, Mar. 2007.
[13] Z. Kato, A Markov random field image segmentation model for color textured images, Image Vis. Comput., vol. 24, no. 10, pp. 1103-1114, 2006.
[14] E. Maggio and A. Cavallaro, Multi-part target representation for color tracking, in Proc. Int. Conf. Image Processing, Genova, Italy, Sep. 2005, pp. 729-732.
WIRELESS SENSORS NETWORK FOR INDUSTRIAL MONITORING
Tarana Jindal*, Himanshu Sharma
Lecturer: Mangalayatan University, Beswan, Aligarh-202001
(tarana.jindal@mangalayatan.edu.in), Mobile No: 09719557849), (himanshu.sharma@mangalayatan.edu.in),
Mobile No: 09548527451),
Vaibhav Jindal
Student of B.Tech: Aligarh Muslim University, Aligarh-202001
(vicky.elec@gmail.com), Mobile No: 09219250060
ABSTRACT
With developments in sensor
miniaturization, the power and sophistication of
microprocessors and electronics, mobile devices,
communication and networking, standardization of
protocols, wireless sensors and networks are gaining
increasing popularity in industrial applications. A
new technology, the wireless sensor network (WSN), is a term used to describe an emerging class of embedded communication products that provide redundant, fault-tolerant wireless connections and consist of spatially distributed autonomous devices using sensors to cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants, at different locations.
focus is on the wireless solution achieving success in
the challenging industrial environment (Power
plant) by using wireless ad-hoc sensors network.
Wireless sensor networks allow information to be
collected with more monitoring points, providing
awareness into the environmental conditions that
affect overall uptime, safety, or compliance in
industrial environments and flexible monitoring. The
main promise of wireless sensor technology is that it can deliver big savings in cabling, including energy and material savings, process improvements, labor savings, and productivity increases.

I.INTRODUCTION
Wireless sensors networks, applied to monitoring
physical environment, have recently emerged as an
important application resulting from the fusion of
wireless communication and embedded
technologies. Wireless is a game-changing
technology, one that will allow application of more
automation to processes because it changes the
technological and financial balance [1]. The first applications are difficult-to-reach and costly-to-implement solutions, where there is a 90% cost advantage versus wired installations. In order to take advantage of all the benefits wireless technology has to offer, industrial plants must adopt sound policies mitigating risk and ensuring adequate security for processes, people and the environment. With increased economic pressures, the continuing advancement of cost-effective wireless technology and standardization on the horizon, it is clear that we are at the tipping point for wireless to have a real impact on industrial plants. The wireless technology goes far beyond saving on installation and wiring costs. Wireless helps plant operators gather field data more easily, increase asset life through continuous monitoring and improve the safety of their most important asset: their people [16].

Finally, wireless will move into non-critical and then critical control applications. In practice, a wireless sensor network is a network of cheap and simple processing devices that are equipped with environmental sensors for temperature, pressure, humidity, etc., and can communicate with each other using a wireless radio device.

II.ARCHITECTURE OF WIRELESS SENSORS
NETWORK

Wireless sensor networks have emerged recently
as an effective way of monitoring remote or
inhospitable physical environments. A wireless
solution uses radio waves instead of physical wires as the communication medium [2]. A wireless sensor network consists of wireless sensor units, a wireless gateway, and a network or control system. A number of sensors can communicate with their neighboring sensors via the network topology (Fig. 2).

Fig.1.Architecture of wireless sensors network
A. Sensors nodes

Generally sensors collect the data from the external
environment and send signals to receiving devices,
which take the signals to the control system.
Wireless sensors act as a node in the network.
Sensor nodes can be imagined as small computers,
extremely basic in terms of their interfaces and
their components [3]. The deployment of sensor
nodes is the first step in establishing a sensor
network. Since sensor networks contain a large
number of sensor nodes, the nodes must be
deployed in clusters, where the location of each
particular node cannot be fully guaranteed a
priori. Therefore, the number of nodes that must
be deployed in order to completely cover the
whole monitored area is often higher than if a
deterministic procedure were used [14].

The main components of a typical sensor node
include an antenna and a communication device
(radio frequency (RF) transceiver) to allow
communication with other nodes, a memory unit, a
CPU (controller), the sensor unit and the power
source, which is usually provided by batteries (Fig. 2) [4].


Fig.2. Sensor node hardware components

Wireless sensor nodes are typically low-cost, low-
power, small devices equipped with limited
sensing, data processing and wireless
communication capabilities, as well as power
supply. The dimensions of a sensor node are
small enough to allow easy deployment of a large
number of nodes into remote and inhospitable
areas. Once deployed, sensor nodes organize a
network, so that they can combine their partial
observations of the environment. By combining
those partial observations, a network offers to a
user a global view of a monitored area [14].

B. Wireless Gateway

A wireless gateway is a computer networking
device that routes packets from a wireless LAN to
another network. They combine the functions of a
wireless access point, a router, and often provide
firewall functions as well; thus they are converged
devices. A gateway is generally software installed within a router. It understands the protocols used by each network linked to the router and is therefore able to translate between them. If the data application requires a serial connection, for example while utilizing Modbus, a serial connection between the wireless gateway and the information system must be planned. This can be provided by wired or wireless Ethernet connectivity [6]. Wireless or
connectivity, security, and integration. The
Ethernet communication protocols also provide the
means for advanced security implementation.

C. Area of concern

Security: Security is at the top of the list of end-user
concerns regarding the use of wireless technology
in industrial applications.

Reliability: The ability of the network to ensure
reliable data transmission in a state of continuous
change of network structure [7].

Scalability: The ability of the network to grow, in
terms of the number of nodes, without excessive
overhead [7]. Scalability also means ease of deployment: the ability to expand and incorporate new sensor systems, and to include additional legacy systems, without limits.

Mobility: The ability of the network to handle
mobile nodes and changeable data paths [7].

III. WIRELESS SENSORS NETWORK IN
INDUSTRIES

Wireless transmission of data in industrial applications has been around for a long time, but recently it has gained importance, with attention from both market leaders and medium- and small-sized competitors [15]. Wireless sensor technology is now moving rapidly into niche applications in plants (power) and other industrial environments where it can deliver cost advantages and increase flexibility [8]. Industrial environments are uniquely different from office and home environments: high temperatures, excessive airborne particulates, multiple obstacles and long distances separating equipment and systems are special challenges that make it difficult to place and reach sensors, transmission and other data communication devices [12]. To gain a competitive
advantage, many industrial plants are demanding
greater amounts of information, faster methods of
processing it, and the means to distribute it to more
locations [8]. They seek more devices to collect
better information on the physical world, assess its
meaning, and communicate it- often over longer
distances. Successful use of wireless sensors in
systems such as supervisory control and data
acquisition (SCADA) proved that these devices
could effectively address the needs of industrial
applications [15].

On the other hand, with the use of wireless sensor networks these difficulties are eased. No customization, integration, or development is required, and there are no wiring or installation costs. With wireless sensor networks, engineers should not have to spend all their time engineering, configuring, programming and integrating wireless systems; they should be spending time figuring out how to use the data, not how to get the data from one place to another [9].


Fig .3. Wireless Sensing System in industries
A. Characteristics of industrial wireless sensors

Limited power they can harvest or store
Ability to cope with node failure
Dynamic network topology
Heterogeneity of nodes
Large scale of deployment
Long life on batteries (5 years)
Not affected by electrical noise or obstacles
Use Multiple Frequencies
Scalable network

B. Topology

The first major step in a successful wireless implementation is to determine which sensors will be used as wireless devices, which network topology is best suited for the (monitoring) application, and whether bi-directional data flow will be required. Most wireless sensor networks can be configured into self-organizing mesh or star-mesh networks (Fig. 3) [10]. It should be noted that wireless sensors can connect to networks regardless of whether those networks are wireless or wired.

A mesh network is not a LAN, and it is not Ethernet. Mesh networks are said to be self-configuring and self-healing: they automatically determine the best path between the sensors and the gateway. The mesh topology is very reliable.

C. Standards and Protocols

Today, most of the wireless protocols that are
designed specifically for industrial application in a
process environment are proprietary. However,
work on developing the standards for an open
wireless protocol is in an advanced stage. Once
developed, this will help the users by protecting
their investments and would lead to much faster
growth of the wireless applications. Many of the
WSN systems today are based on ZigBee or IEEE
802.15.4 protocols due to their low-power
consumption [11]. The IEEE 802.15.4 protocol
defines the Physical and Medium Access Control
layers in the networking model, providing
communication in the 868 to 915 MHz and 2.4 GHz
ISM bands, and data rates up to 250 kb/s. ZigBee
builds on the 802.15.4 layers to provide security,
reliability through mesh networking topologies,
and interoperability with other devices and
standards. ZigBee also allows user-defined
application objects, or profiles, which provide
customization and flexibility within the protocol
[11]. Some systems use IEEE 802.15.4 with added channel hopping (FHSS) and the Time Synchronized Mesh Protocol (TSMP) on the 2.4 GHz ISM band.

FHSS (Frequency Hopping Spread Spectrum): data is transmitted on a single channel at a time, but the channel changes rapidly and constantly (hopping). This scheme requires low bandwidth and also reduces echo, noise and channel-sharing issues [12]. It provides robust fault tolerance in the face of RF interference.

TSMP (Time Synchronized Mesh Protocol) - TSMP
is a media access and networking protocol that
forms the foundation of reliable, ultra low-power
wireless sensor networking. TSMP also provides
the intelligence required for self-organizing, self-
healing mesh routing [13]. TSMP is a packet-based
protocol where each transmission contains a single
packet and acknowledgements (ACKs) are
generated when a packet has been received
unaltered and complete [13].
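As a generic illustration of the single-packet, acknowledgement-based pattern just described, the Python sketch below retransmits until the receiver confirms the packet arrived unaltered and complete (a CRC check stands in for that test). This is a hypothetical sketch, not the actual TSMP implementation; the link loss probability and function names are assumptions.

# Hypothetical send-until-acknowledged loop with a checksum-verified receiver.
import zlib
import random

def make_packet(seq, payload: bytes):
    return {"seq": seq, "payload": payload, "crc": zlib.crc32(payload)}

def receiver(packet):
    """Return an ACK only if the packet is unaltered and complete."""
    if packet is None:                                   # lost in transit
        return None
    if zlib.crc32(packet["payload"]) != packet["crc"]:   # corrupted
        return None
    return {"ack": packet["seq"]}

def unreliable_link(packet, loss_prob=0.3):
    return None if random.random() < loss_prob else packet

def send_reliably(seq, payload, max_tries=10):
    for attempt in range(1, max_tries + 1):
        ack = receiver(unreliable_link(make_packet(seq, payload)))
        if ack and ack["ack"] == seq:
            return attempt                               # delivered
    raise RuntimeError("packet not acknowledged")

if __name__ == "__main__":
    random.seed(2)
    print("delivered after", send_reliably(1, b"temperature=73C"), "attempt(s)")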

IV. CONCLUSION

Wireless sensor networks have opened the doors to many applications that require monitoring as well as control from remote places, and that need a system which can provide large amounts of data about those applications over long periods of time. This large amount of available data usually allows for new discoveries and future improvements in industries. These technologies also facilitate new applications that were previously not possible using wired sensors. Wireless sensor networks provide adaptive monitoring systems in industrial environments with the flexibility and adaptability needed in plant monitoring. In the future, many industrial plants expect widespread use of interoperable devices for industrial systems, and they will be implementing more and more wireless applications and introducing new protocols to their facilities with the use of open-standards solutions. This will enable more and more products to be available in the market and drive down the cost of this technology.

REFERENCES

[1] Mark T. Hoske, Industrial Wireless Implementation Guide, 8/1/2008.
[2] http://www.rosemount.com, Deploying Industrial Wireless Solutions.
[3] http://wsn.oversigma.com/wiki, Wireless Sensors Network, 13/1/2007.
[4] Bahareh Gholamzadeh and Hooman Nabovati, Concepts for Designing Low Power Wireless Sensor Network, 2008.
[5] Eliana Stavrou, Wireless Sensors Networks: Introduction.
[6] Self-Organizing Network: Best Practices Planning, Installation, and Commissioning Guide, March 2008, http://www.emersonprocess.com/rosemount
[7] Maximizing Data Reliability in Wireless Sensors Networks, http://www.millennialnet.com
[8] U.S. DoE, Industrial Wireless Technology for the 21st Century Report.
[9] Jack Smith, Wireless: new tools, strategies change how plants are monitored, 7/1/2006, http://www.plantengineering.com
[10] John Suh, How to Build Wireless Sensors Networks.
[11] http://www.ni.com, white paper on What is a Wireless Sensors Network?
[12] http://www.bb-elec.com, Industrial Wireless: Selecting a Wireless Technology, May 2008.
[13] Dust Networks, Technical Overview of Time Synchronized Mesh Protocol (TSMP).
[14] Sasa Slijepcevic and Miodrag Potkonjak, Power Efficient Organization of Wireless Sensor Networks, Computer Science Department, University of California, Los Angeles, CA 90095-1596.
[15] Dr. Rajender Thusu, PhD, Wireless Sensor Use is Expanding in Industrial Applications, June 1, 2010.
[16] Tarana Jindal and Shradha Yadav, Implementing a Reliable, Secure and Scalable Wireless Sensor Network, 2009.










































ROUTING TECHNIQUES IN VANETS: A SURVEY
Vishwanath M. Harnal, Prof.Vijayshree R. B.
Department of Electronics and Communication Engineering
Basaveshwar Engineering College, Bagalkot - 587102, India.
vishwaharnal@gmail.com , vijaysri_rb@rediffmail.com
_____________________________________________________________________________________________________________________________
ABSTRACT
Vehicular Ad-Hoc Networks (VANETs) are
the special case of Mobile Ad-hoc Network (MANET).
A vehicular ad hoc network (VANET) is a high
mobility wireless ad hoc network that is targeted to
support vehicular safety, traffic monitoring, and
other commercial applications. In such type of
dynamic networks, efficient information exchange is
necessary. Many routing protocols have been
developed for MANETs over the past few years. But,
due to the high mobility they are not suitable for
VANETs. Some of the routing techniques are
developed for VANETs. In this paper we are
representing special design issues for routing
protocols in VANETs and valuating the performance
of some efficient developed routing protocols for
VANETs.

KEYWORDS: VANETs, MANETs, design issues,
routing protocols,
1. INTRODUCTION
Vehicular ad hoc networks (VANET) using
IEEE802.11p based wireless technology have
recently received considerable attention. The
VANET can be used for driver-vehicle safety
applications and non-safety applications. The
feasibility and quality of the non-safety applications
will be dependent on the topological and dynamical
properties of the ad hoc networks [1], [2].
Routing is an essential issue in VANETs due to
the high node (vehicle) mobility in vehicular
environment. For safety applications, efficient
information exchange among the vehicles and
vehicle-infrastructure is necessary. Analyses of
traditional routing protocols for mobile ad hoc
networks (MANETs) demonstrated that their
performance is poor in VANETs [3], [4]. The main
problem with these protocols (AODV, DSR [5], [6],
etc) in VANETs environments is their route
instability. The traditional node-centric view of the
routes (i.e., an established route is a fixed
succession of nodes between the source and
destination) leads to frequent broken routes in the
presence of VANETs high mobility. Consequently,
many packets are dropped and the overhead due to
route repairs or failure notifications increases
significantly, leading to low delivery ratios and high
transmission delays. Alternatively many routing
techniques are developed specially for high mobile
nodes environment. These protocols can be divided
into two broad categories: Topology-based routing
protocols include all proactive and reactive
protocols whereas position-based routing protocols
include geographic and opportunistic protocols [7].
Proactive protocols maintain a correct routing
table at all times by sending periodic control
messages. Reactive protocols do not maintain
routing table but discover the route when data to
be sent. This is usually done by flooding a control
message from the source at that time. Reactive and
proactive protocols have been designed for unicast
communication but most of them have multicast
extensions e.g. MOLSR [8], MAODV [9]. However,
these protocols have several drawbacks and cannot
be used in VANETs. Proactive protocols have too
high overhead due to their periodic control
message. Reactive protocols consume less
bandwidth but unfortunately, due to the mobility of
vehicles in VANETs, their performance, in terms of
delay, is unacceptable.
Geographic routing protocols use the knowledge of nodes' geographic positions to forward packets towards their destination. This approach reduces complexity because, instead of running an algorithm over the entire network to find globally optimal routes, each successive hop is constructed incrementally. Geographic routing protocols reduce the routing state of each node dramatically compared with topology-based routing protocols and are hence much more scalable. They have to know the position of the destination; location services are usually used to determine the destination node's position [10]. They also let a node know the positions of all its one-hop neighbours: periodic control beacons, containing the nodes' geographic locations, are exchanged between one-hop neighbours for this purpose. GPSR [11] is an example of a geographic routing protocol.
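The core rule used by geographic protocols such as GPSR is greedy forwarding: at each hop, the packet is handed to the one-hop neighbour geographically closest to the destination. The Python sketch below illustrates only that greedy rule; it omits GPSR's perimeter (recovery) mode, so it simply stops when it reaches a local maximum, and the node positions and neighbour lists are invented for the example.

# Minimal sketch of greedy geographic forwarding (no perimeter mode).
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_forward(positions, neighbours, src, dst):
    """Return the list of nodes a packet visits from src toward dst."""
    path, current = [src], src
    while current != dst:
        candidates = neighbours[current]
        if not candidates:
            break
        best = min(candidates, key=lambda n: dist(positions[n], positions[dst]))
        if dist(positions[best], positions[dst]) >= dist(positions[current], positions[dst]):
            break                        # local maximum: greedy forwarding fails here
        path.append(best)
        current = best
    return path

if __name__ == "__main__":
    positions = {"A": (0, 0), "B": (2, 1), "C": (4, 0), "D": (6, 1)}
    neighbours = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
    print(greedy_forward(positions, neighbours, "A", "D"))   # -> ['A', 'B', 'C', 'D']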
Opportunistic routing protocols are essentially
geographical routing protocols in the sense that the
positions of the nodes are used to route the
packets. The main difference with geographic
routing lies in the fact that opportunistic routing
selects relays dynamically from all the nodes that
have received the packet correctly, for example
Two Phase routing protocol (TOPO) for large scale
VANETs. TOPO defines two phases in routing,
namely routing in access and overlay. While overlay
is a graph of high vehicular density roads, (e.g. state
roads, highways), access is the rest of the
areas/roads connecting to the overlay. TOPO utilizes the road and traffic information on the overlay and delivers messages along the overlay to the access area of the destination, where routing can again be handled in a small-scale area. Packet losses due to mobility, fading, collision, etc. are reduced dramatically, and results show that opportunistic routing greatly outperforms its rivals [12].
In this paper we present design issues for routing in high-mobility environments, survey the routing protocols that have been developed for high-mobility vehicular communication, and analyze these protocols for particular applications in the vehicular environment.
2. Vehicular Ad-hoc Networks
Vehicular ad hoc networks (VANETs) are a special case of mobile ad hoc networks (MANETs). Vehicular ad hoc networking is an important component of Intelligent Transportation Systems. Vehicular ad hoc network (VANET) communication has recently become an
increasingly popular research topic in the area of
wireless networking as well as the automotive
industries. The main benefit of vehicular ad hoc
network (VANET) communication is seen in active
safety systems that increase passenger safety by
exchanging warning messages between vehicles.
Other applications and private services are also
permitted in order to lower the cost and to
encourage VANET deployment and adoption.
2.1. Architecture
The simple vehicular network architecture is depicted in Fig. 1. The network consists of three distinct domains: the in-vehicle, ad-hoc, and infrastructure domains. In a VANET, each vehicle is equipped with technology that allows it to communicate with other vehicles as well as with the roadside infrastructure, e.g., base stations, also known as roadside units (RSUs), located at critical sections of the road, such as traffic lights, intersections, or stop signs, to improve the driving experience and make driving safer [13]. Using such communication devices, also known as on-board units (OBUs), vehicles can communicate with each other as well as with RSUs. A VANET is a self-organized network that enables communication between vehicles and RSUs, and the RSUs can be connected to a backbone network so that many other network applications and services, including Internet access, can be provided to the vehicles.


Figure 1: Simple VANET Scenario.
2.2. Applications of VANETs
Vehicular communication networks will provide a
wide range of applications with different
characteristics. This includes safety applications
and also non-safety applications. Some of the
applications in vehicular environment are shown in
Table 1.
Applications based on vehicular communication
range from simple exchange of vehicle status data
to highly complex large-scale traffic management
including infrastructure integration. This section
gives an overview of envisioned application
categories for vehicular networks [14]. This
provides an initial understanding of the properties
of VANET communication and leads to a more
detailed analysis of network characteristics.
3. Routing in VANETs
Routing is an essential building block for VANETs; it determines how data can be delivered from one vehicle to another in the network. The design of efficient routing protocols for VANETs is challenging due to the high node mobility and the movement constraints of mobile nodes. VANETs, as one category of Inter-Vehicle Communication (IVC) networks, are characterized by rapid topology changes and frequent fragmentation.
The communication protocol design affects the performance results. A good communication protocol design can achieve high reliability, high scalability, and high performance. From a networking perspective, there are many challenges: how to offer quality of service, build high-performance routing, provide mobility management, and create secure vehicular ad hoc networks. VANET safety applications include safety warnings and collision avoidance; it is important to improve the non-safety applications as well and to reduce the occurrence of collisions. Conventional topology-based routing schemes are not suitable for VANETs: reactive routing schemes will fail to discover a complete path due to frequent network partition, and proactive routing protocols will be overwhelmed by the rapid topology changes and may even fail to converge during the routing information exchange stage.





Situation/purpose and application examples:

I. Active safety
  1. Dangerous road features: curve speed warning; low bridge warning; warning about violated traffic lights or stop signals
  2. Abnormal traffic and road conditions: vehicle-based road condition warning; infrastructure-based road condition warning; visibility enhancer; work zone warning
  3. Danger of collision: blind spot warning; lane change warning; intersection collision warning; forward/rear collision warning; emergency electronic brake lights; rail collision warning; warning about pedestrians crossing
  4. Crash imminent: pre-crash sensing
  5. Incident occurred: post-crash warning; breakdown warning; SOS service

II. Public service
  1. Emergency response: approaching emergency vehicle warning; emergency vehicle signal preemption; emergency vehicle at scene warning
  2. Support for authorities: electronic license plate; electronic driver's license; vehicle safety inspection; stolen vehicle tracking

III. Improved driving
  1. Enhanced driving: highway merge assistant; left turn assistant; cooperative adaptive cruise control; cooperative glare reduction; in-vehicle signage; adaptive drivetrain management
  2. Traffic efficiency: notification of crash or road surface conditions to a traffic operation center; intelligent traffic flow control; enhanced route guidance and navigation; map download/update; parking spot locator service

IV. Business/Entertainment
  1. Vehicle maintenance: wireless diagnostics; software update/flashing; safety recall notice; just-in-time repair notification
  2. Mobile services: Internet service provisioning; instant messaging; point-of-interest notification
  3. Enterprise solutions: fleet management; rental car processing; area access control; hazardous material cargo tracking
  4. E-payment: toll collection; parking payment; gas payment

Table 1. Overview of applications for VANETs
3.1. Routing design issues
Vehicular ad hoc networks (VANETs) are expected to support a large spectrum of mobile distributed applications ranging from traffic alert dissemination and dynamic route planning to context-aware advertisement and file sharing [1][5]. Considering the large number of nodes participating in these networks and their high mobility, debate still exists about the feasibility of applications using end-to-end multi-hop communication. The main concern is whether the performance of VANET routing protocols can satisfy the throughput and delay requirements of such applications. Some of the important issues in designing routing techniques are explained below.
3.1.1. Mobility management
Nodes are highly mobile in the vehicular environment, so the network topology may change rapidly and unpredictably and the connectivity among the terminals may vary with time; this can affect some safety applications. A VANET should adapt to the traffic and propagation conditions as well as to the mobility patterns of the mobile network nodes [13]. The mobile nodes in the network dynamically establish routing among themselves as they move about, forming their own network on the fly. Hence routing in VANETs must cope with strong mobility patterns.
3.1.2. Time delay
The time delay experienced during packet transmission and reception depends mainly on finding the shortest path between the source and destination nodes and on node reliability. An important characteristic of the routing algorithm is its ability to find this shortest path. As node density increases, the time delay also increases, because routing information must be built for each node and node information must be accessed according to node reliability, which is difficult under the high mobility of the vehicular environment. Hence, a routing protocol with low time delay and fast, reliable delivery is necessary for safety applications in the vehicular environment.
3.1.3. Bandwidth
The main problem that VANETs will face when achieving high penetration rates on dense traffic roads is the limited channel capacity available to support the exchange of safety-related information. In these scenarios all nodes can send two types of safety-related messages: a) periodic messages to make the other cars aware of their state, and b) emergency messages triggered by the detection of a non-safe situation [21]. In order to ensure that both types of messages can be handled efficiently with the existing resources, the wireless channel load resulting from the periodic messages must be limited. Moreover, strict fairness among the vehicles is required because of the safety nature of VANET applications. For safety applications, broadcasting is efficient at high traffic density, but at low traffic density it wastes bandwidth. Hence bandwidth is an important criterion in the design of routing protocols.
3.1.4. Node density
Apart from speed and movement pattern, node density is an important design issue for routing protocols. At low vehicle density, immediate packet forwarding may become impossible; in this case more sophisticated information dissemination is necessary, which can store selected information and forward it when vehicles encounter each other, so the same message may be repeated by the same vehicle multiple times. In high-density situations the opposite must be achieved: a message should be repeated only by selected nodes, because otherwise the channel may become overloaded. In addition, node density is correlated not only with the type of road but also with time. In the daytime the density on highways or in cities is high enough for immediate forwarding, as long as the routing can deal with fragmentation; during the night, however, few vehicles are around even on these kinds of roads.
3.1.5. Node reliability
In the vehicular environment, vehicles may join and leave the network at any time and much more frequently than in other wireless networks. The arrival/departure rate of vehicles depends on their speed, the environment, and the driver's need to be connected to the network. In ad-hoc deployments, communication cannot easily depend on a single vehicle for packet forwarding, because the communication ranges of the communicating vehicles may not cover each other. There is therefore a need to use intermediate nodes to forward packets to the destination vehicle, and these intermediate nodes must be reliable to forward the packets efficiently.
3.1.6. Privacy and Security
Vehicular communication (VC) systems have the potential to improve road safety and driving comfort. Nevertheless, securing their operation is a prerequisite for deployment. It is vital to secure communication in VANETs; otherwise the benefits of these novel networks can turn into a nightmare: an attacker could send falsified information to other nodes, or block others from receiving safety messages. Since periodic safety messages are single-hop broadcasts, the focus has mostly been on securing the application layer. Digital signatures are a good choice because safety messages are normally standalone in VANETs [18]. Because of the large number of network members and variable connectivity to authentication servers, a public key infrastructure (PKI) is a good way to implement authentication. Each vehicle would be given a public/private key pair, and before sending a safety message the vehicle signs it with its private key. In this way the performance of the vehicular communication system can be improved.
3.1.7. Network Congestion
Congestion control in VANETs is a challenging issue. Continuous transmission of packets at high traffic density leads to congestion, and under congestion the source reduces its data rate. However, in VANETs the topology changes within seconds, and a congested node used for forwarding a few seconds ago might not be used at all by the time the source reacts to the congestion. Due to the mainly broadcast/geocast-oriented communication and the highly dynamic network topology, such conventional congestion-control mechanisms are not suitable for VANETs. An appropriate model is therefore needed in which each node locally adapts to the available bandwidth.
3.1.8. Data aggregation
The main difficulty in proactive routing protocols is the overhead during packet transmission. Each vehicle has to pass on the data sent by its neighbours to the other neighbours in its coverage area, which increases the number of packets a vehicle must send. Therefore, data aggregation techniques are used to reduce such overheads. Data aggregation is an interesting approach that drastically reduces the number of transmitted packets by combining several messages related to the same event into one aggregate message, thereby reducing the number of packet transmissions in the VANET.
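As an illustration of this idea (not part of any of the surveyed protocols), a minimal sketch of event-based aggregation might combine all reports that refer to the same event identifier into one aggregate before forwarding; the message fields used below ('event_id', 'position', 'timestamp') are assumptions made for the example.

from collections import defaultdict

def aggregate_by_event(messages):
    """Combine messages describing the same event into one aggregate.

    `messages` is assumed to be a list of dicts with hypothetical
    fields 'event_id', 'position' (x, y) and 'timestamp'; one aggregate
    per event is produced instead of forwarding every report.
    """
    groups = defaultdict(list)
    for msg in messages:
        groups[msg["event_id"]].append(msg)

    aggregates = []
    for event_id, reports in groups.items():
        xs = [r["position"][0] for r in reports]
        ys = [r["position"][1] for r in reports]
        aggregates.append({
            "event_id": event_id,
            "report_count": len(reports),
            "mean_position": (sum(xs) / len(xs), sum(ys) / len(ys)),
            "latest_timestamp": max(r["timestamp"] for r in reports),
        })
    return aggregates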
3.2. Ongoing works on routing techniques in VANETs
Routing is crucial to the success of VANET applications and is complicated by the fact that VANETs are highly mobile and sometimes partitioned. Two types of routing protocols are essential to maintain connectivity in VANETs: unicast protocols for point-to-point communication, and broadcast and multicast protocols for point-to-multipoint communication. The IETF (Internet Engineering Task Force), through its MANET working group, has designed and standardized several routing protocols for MANETs, e.g. OLSR [6], AODV [7] and DSR [8]. Such routing protocols are not suitable for VANETs. Several other protocols have also been proposed for use in VANETs, e.g. GPSR [9], IGF [10], etc. These protocols can be divided into two broad categories: topology-based routing protocols, which include all proactive and reactive protocols, and position-based routing protocols, which include geographic and opportunistic protocols.
3.2.1. Greedy Perimeter Stateless Routing (GPSR)
Greedy Perimeter Stateless Routing (GPSR) [9] is a well-known position-based routing protocol suitable for highly dynamic environments such as inter-vehicle communication on highways. It uses greedy forwarding to send/forward a packet to the neighbouring node that is closest to the destination; each forwarding node in turn hands the packet to its own neighbour closest to the destination, until the packet reaches the destination itself. In GPSR it is assumed that every node in the network knows the exact physical location of its neighbours and that of the destination as well. In regions of the network where such greedy forwarding is not possible, GPSR recovers by forwarding in perimeter mode, in which a packet traverses successively closer faces of a planar subgraph of the full radio network connectivity graph until it reaches a node closer to the destination, where greedy forwarding resumes. GPSR requires less bandwidth and a lower time delay, and hence achieves higher throughput than OLSR and AODV under the specific mobility and higher velocities of vehicular scenarios.
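The greedy forwarding rule described above can be summarized in a few lines; the sketch below is an illustrative simplification (node and position representations are assumed), and the perimeter-mode recovery, which requires planarizing the connectivity graph, is only indicated by a comment.

import math

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_next_hop(my_pos, neighbor_positions, dest_pos):
    """Return the neighbour closest to the destination, or None if no
    neighbour is closer than the current node (a local maximum, where
    GPSR would switch to perimeter-mode recovery).

    `neighbor_positions` is assumed to map neighbour id -> (x, y),
    learned from the periodic position beacons exchanged between
    one-hop neighbours.
    """
    best_id, best_dist = None, distance(my_pos, dest_pos)
    for node_id, pos in neighbor_positions.items():
        d = distance(pos, dest_pos)
        if d < best_dist:
            best_id, best_dist = node_id, d
    return best_id  # None means: fall back to perimeter mode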
3.2.2. Movement-Based Routing Algorithm (MORA)
To improve position-based routing performance in VANETs, F. Granelli et al. have proposed [10] a Movement-Based Routing Algorithm (MORA) for vehicular ad hoc networks and have applied it to GPSR. MORA takes into account the physical location of neighbouring vehicles and their movement direction when selecting the next hop for sending/forwarding packets. More details about MORA can be found in [10].
We believe that considering only the position and the movement direction is not enough for the best next-hop selection in VANETs. The vehicle's driving speed is also important and should be taken into account: a vehicle that is almost out of the communication range should not be selected as a next hop, and this cannot be guaranteed without considering speed. In the following section we describe the MOPR concept applied to GPSR, which takes into account the neighbouring vehicles' positions and movement directions, and their movement speeds as well. Thus, with MOPR, a vehicle that is estimated to move out of the communication range within a short time will not be selected as a next hop for data routing if a better candidate is available.
3.2.3. MOPR-based Geo-routing
To show the performance improvement of MOPR over position-based routing protocols, it has been applied to GPSR. In GPSR, a vehicle does not save any route to a destination and does not use the same path for the whole transmission. For each packet to send or to forward, a vehicle selects a next hop among its neighbours; the selected next hop is used for one packet transfer, and a next-hop selection is performed again for the next packet to send/forward. When applying MOPR to GPSR as it is, the selected paths tend to be the same or longer in terms of hop count compared with basic GPSR [10], and the calculation of the neighbouring links' LS before sending/forwarding each packet takes considerable time, all of which decreases routing performance.
To address this problem, MOPR is applied in a different way. When a vehicle wants to send or forward data, it first estimates, for each neighbour, the future geographic location after a duration of T seconds. It then selects as next hop the neighbour closest to the destination whose estimated future location does not fall outside the communication range after time T. By doing this, MOPR-GPSR avoids the case in which a next hop moves out of the communication range during a data packet transmission, which decreases data loss and link-layer and transport retransmissions and thereby improves routing performance.
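A rough sketch of this next-hop rule is given below, under the assumptions that each neighbour beacon carries a position and velocity, that positions extrapolate linearly over the prediction window T, and that "communication range" refers to the sender's radio range; all names are illustrative and not taken from the MOPR papers.

import math

def predicted_position(pos, velocity, t):
    """Linear extrapolation of a neighbour's position after t seconds."""
    return (pos[0] + velocity[0] * t, pos[1] + velocity[1] * t)

def mopr_next_hop(my_pos, neighbors, dest_pos, comm_range, t):
    """Pick the neighbour closest to the destination whose predicted
    position after t seconds is still within the sender's communication
    range. `neighbors` maps id -> (position, velocity)."""
    best_id, best_dist = None, float("inf")
    for node_id, (pos, vel) in neighbors.items():
        future = predicted_position(pos, vel, t)
        if math.dist(my_pos, future) > comm_range:
            continue  # expected to leave the range; skip this candidate
        d = math.dist(pos, dest_pos)
        if d < best_dist:
            best_id, best_dist = node_id, d
    return best_id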
3.2.4. Road-Based Vehicular Traffic Information (RBVT)
The RBVT routing protocols leverage real-time vehicular traffic information to create road-based paths. RBVT paths can be created on demand or proactively. Two RBVT protocols have been designed and implemented, each illustrating a method of path creation: a reactive protocol, RBVT-R, and a proactive protocol, RBVT-P. The RBVT protocols assume that each vehicle is equipped with a GPS receiver, digital maps (e.g., the TIGER/Line database [21]), and a navigation system that maps GPS positions onto roads. Vehicles exchange packets using short-range wireless interfaces such as IEEE 802.11 [20] and DSRC (Dedicated Short Range Communication) [17].
RBVT-R: Reactive Routing Protocol
RBVT-R is a reactive source-routing protocol for VANETs that creates road-based paths (routes) on demand, using connected road segments. A connected road segment is a segment between two adjacent intersections with enough vehicular traffic to ensure network connectivity. These routes, represented as sequences of intersections, are stored in the data packet headers and used by intermediate nodes to geographically forward packets between intersections.
Route Discovery: When a source node needs to send information to a destination node, RBVT-R initiates a route discovery process. The source creates a route discovery (RD) packet whose header includes the address and location of the source, the address of the destination, and a sequence number; unique addresses are assumed for nodes. The RD is flooded in the region around the source to discover a route towards the destination. If a node receives an RD packet with the same source address and sequence number as a previously received packet, it discards it. When a node receives a new RD, it does not rebroadcast the packet immediately; it holds the packet for a period of time inversely proportional to the distance between itself and the sending node, and once the waiting period is over it re-broadcasts the RD packet. In RBVT-R the route is thus built gradually, with the sequence of traversed intersections accumulating in the RD packet on its way towards the destination.
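One possible way to realize this distance-dependent holding time (farther receivers wait less and therefore rebroadcast first) is sketched below; the tuning constants are assumptions made for the example, not values from the RBVT paper.

def rebroadcast_delay(distance_to_sender, k=1.0, max_delay=0.05):
    """Holding time (seconds) inversely proportional to the distance
    from the sender, capped at max_delay, so that nodes farther from
    the sender rebroadcast the route discovery packet earlier.
    k and max_delay are illustrative tuning constants."""
    if distance_to_sender <= 0:
        return max_delay
    return min(max_delay, k / distance_to_sender)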
Route Reply: Upon receiving the RD packet, the destination node creates a route reply (RR) packet for the source. The route recorded in the RD header is copied into the RR header. This route defines a connected path, composed of road intersections, from source to destination. The destination also adds its current position to the RR header. The RR packet is forwarded along the road segments defined by the intersections stored in its header. Upon receiving the RR packet, the source starts sending data. Each data packet stores the route in its header and is geographically forwarded along this route.
Route Maintenance: Existing routes are updated to adapt to the movements of the source and destination over time, as well as to repair broken paths. Since sources and destinations are moving vehicles, the route created during the route discovery phase is not expected to remain constant. A dynamic route-updating technique at the source keeps the route consistent with the current road-segment positions of the source and destination nodes. This change takes place at the source, which also informs the destination of the new path using route update control packets. The destination then sends a route update packet back to the source; if this update is received at the source, the route is valid and can therefore be used for future data transmissions. A route error occurs when no forwarding node can be found to reach the next intersection in the route. In this case, the node that detects the problem unicasts a route error packet to the source. It has been observed that broken routes are often only temporary. Therefore, to reduce the flooding associated with the route discovery process, the source does not generate a new RD packet as soon as it receives a route error notification. Upon such a notification, it puts the respective route on hold for a certain timeout; packets toward that destination are queued until the expiration of the hold timeout, after which the source attempts to use the same route again. A new RD is generated only after a few consecutive route errors.
RBVT-P: Proactive Road-Based Routing
RBVT-P is a proactive routing algorithm that
periodically discovers and disseminates the road-
based network topology in order to maintain a
relatively consistent view of the network
connectivity at each node. Each vehicle node uses this
(near) real-time graph of the connected road
segments to compute shortest paths to each
intersection. RBVT-P assumes that a source can
query a location service, such as [17], to determine
the position of the destination when it needs to send
data.
Topology Discovery: Proactive routing algorithms [18] use various forms of flooding to discover the network topology. To keep up with VANET mobility, flooding may be required quite often, and the routing overhead would lead to heavy congestion in the network. In RBVT-P, however, the flooding frequency can be limited because the protocol is mainly interested in discovering the road-based network topology. More precisely, the goal of RBVT-P is to capture the real-time view of the traffic on the roads. Thus, the fact that the connectivity between certain nodes on a road segment changes over time does not matter as much, as long as that road segment remains connected; this situation is highly probable on roads with relatively dense vehicular traffic. The road-based network topology is constructed using connectivity packets (CPs) unicast in the network. CPs traverse road segments and store their end points (i.e., intersections) in the packet. CPs are generated periodically by a number of randomly selected nodes in the network. Each node decides independently whether to generate a new CP based on the estimated current number of vehicles in the network, the historic hourly traffic information, and the time interval since it last received a CP update. When creating a new CP, a node defines the road-based perimeter of the region to be covered by the CP and stores it in the CP.
Topology Dissemination: The network topology
information in the CP is extracted and stored in a
route update (RU) packet that is disseminated to all
nodes in the network. Upon receiving an RU packet,
nodes update their local routing table to reflect the
newly received information. Each node maintains a
routing table.
Route Computation: A source node computes the
shortest path to the destination using only those road
segments that are marked as reachable in its routing
table. The sequence of intersections denoting the
path is added to the header of each data packet. This
header includes the timestamp associated with the
route to allow for freshness comparisons at
intermediate nodes. Once the route is computed,
RBVT-P uses loose source routing to forward data
packets in order to improve the forwarding
performance. The idea is to quickly forward the
packet when the intermediate nodes have the same
or older information than the source, but at the same
time, to take advantage of fresher information when
available.
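The route computation step amounts to a shortest-path search restricted to road segments currently marked reachable in the local routing table. A minimal sketch is shown below; the graph representation (intersection -> list of segment tuples) is an assumption made for the example.

import heapq

def shortest_reachable_path(graph, source, dest):
    """Dijkstra over intersections, using only road segments flagged
    reachable. `graph` maps intersection -> list of
    (neighbor_intersection, segment_length, reachable) tuples. Returns
    the sequence of intersections to place in the data packet header,
    or None if the destination is unreachable."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dest:
            path = [dest]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return list(reversed(path))
        for nxt, length, reachable in graph.get(node, []):
            if not reachable:
                continue  # skip road segments not currently connected
            nd = d + length
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    return None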
Route Maintenance: Intermediate nodes with
fresher information update the path in the header of
data packets. In case of route break, the intermediate
node switches to geographical routing, which is used
until the packet reaches a node that has fresher
information, in which case a new route is stored in
the packet header.
3.2.5. Border Node Based Routing (BBR) Protocol
We briefly describe the Border node Based Routing (BBR) protocol for partially connected VANETs, which considers the characteristics of partially connected VANETs while at the same time taking into account the limitations of existing routing approaches for partially connected ad hoc networks [14]. The BBR protocol is mainly based on broadcast and applies the store-and-forward approach used in epidemic routing. Instead of simply flooding the network, a flooding-control scheme is employed using one-hop neighbour information only. The BBR protocol is specifically designed to accommodate the effects of node mobility on data delivery.
The BBR protocol is designed for sending messages from any node to any other node (unicast) or from one node to all other nodes (broadcast). The general design goals are to optimize the broadcast behaviour for low-node-density, high-mobility networks and to deliver messages with high reliability while minimizing delivery delay. The BBR protocol has two basic functional units: a neighbour discovery algorithm and a border node selection algorithm. The neighbour discovery process is responsible for collecting current one-hop neighbour information; as in most proactive topology-based protocols, this step requires periodic beaconing of hello messages. The border node selection process is responsible for selecting the right candidate or candidates for packet forwarding, based on the one-hop neighbour information collected during neighbour discovery.
In the BBR protocol, border nodes are selected per broadcast event. A border node is defined as a node which has the responsibility of saving received broadcast packets and forwarding them when appropriate. The BBR protocol uses a distributed border-node selection algorithm: the decision whether a node is a border node for a particular broadcast event is made independently by each individual node, based on its one-hop neighbour information and the received broadcast information. For a specific broadcast, an ideal candidate to forward a packet would be a node (or nodes) located at the edge of the radio transmission range of the source node. The border node is selected based only on one-hop neighbour information, using a minimum-common-neighbour concept. In this approach, nodes share their nearest-neighbour lists and, through a distributed procedure, determine which node or nodes share the least number of common neighbours; the nodes that satisfy this condition are typically furthest from the forwarding node. An alternative approach, the largest number of uncommon neighbours, was also examined and gave similar results. Position-based protocols, in contrast, use location information to select the neighbour node that is closest to the destination. The net effect is equivalent, but BBR does not require a location service.
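The minimum-common-neighbour selection can be sketched directly from the description above; neighbour tables are assumed to be plain sets of node identifiers exchanged in hello messages.

def select_border_nodes(my_neighbors, neighbor_tables):
    """Return the neighbour(s) sharing the fewest common neighbours
    with the forwarding node; such nodes are typically located near
    the edge of its radio range and act as border nodes.

    `my_neighbors` is a set of neighbour ids; `neighbor_tables` maps
    each neighbour id to that neighbour's own one-hop neighbour set.
    """
    common_counts = {
        n: len(my_neighbors & neighbor_tables.get(n, set()))
        for n in my_neighbors
    }
    if not common_counts:
        return set()
    fewest = min(common_counts.values())
    return {n for n, c in common_counts.items() if c == fewest}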
3.2.6. Two-Phase Routing Protocol (TOPO)
In this protocol, for routing from any given source to destination there are only four combinations based on the positioning of the nodes: source node in access, destination node in access, source node in overlay, and destination node in overlay. If both nodes are in the overlay, overlay routing alone is enough; if only one of them is in the overlay, routing consists of both overlay and access phases; if both of them are in the access area, routing can be done with a pure access phase or a mix of both phases (i.e., access to overlay and then back to access). TOPO borrows the philosophy of Internet routing, where a message from one end user to another is relayed over both the Internet backbone and local access networks [15], [16]. Consider the most general case in VANET routing, with both ends of the route in the access area. The source node is aware of the position of the destination and can therefore calculate the shortest path on the map to that destination using path-planning algorithms; it is argued that it is only necessary and efficient to do this path planning in the overlay. After the paths have been planned, the source node begins TOPO with the Access Routing Phase and delivers the messages to the overlay. The routing then switches to the Overlay Routing Phase, and the messages are delivered along the planned path to the access area of the destination. Finally, TOPO switches back to the Access Routing Phase and sends the messages to the destination node.
4. CONCLUSION
In this paper, we have presented a survey of VANET routing protocols together with a qualitative comparison of their objectives, design approaches, and requirements. While each protocol described in the paper has generally been evaluated against a few other protocols, a comprehensive performance evaluation of all the protocols would offer significant value. In such an evaluation, variables such as traffic density, map layouts, route distance, and radio range could be made consistent for each protocol, and criteria such as routing overhead, packet delivery ratio, average packet delay, delay variance, link reliability, average number of hops, and average buffer sizes could be compared uniformly. Routing protocols for VANETs have diverged from routing protocols for MANETs due to the inherent characteristics of communication on roadways.
Compared to other ad hoc networks, the highly dynamic nature of the VANET environment clearly presents great challenges in designing appropriate routing protocols. The results obtained are valuable because they define the upper performance bound for unicast routing over DSRC-enabled VANETs in both urban and highway environments with typical vehicle speeds and traffic densities. In future work, we plan to further elaborate on redundant routes in urban and highway environments in order to determine the optimum rebroadcast probabilities for selective broadcast routing protocols in these environments.
REFERENCES
[1] C. Lochert, H. Hartenstein, J. Tian, H. Füßler, D. Hermann, and M. Mauve, "A routing strategy for vehicular ad hoc networks in city environments," in Proc. IEEE Intelligent Vehicles Symposium, Columbus, OH, USA, June 2003, pp. 156-161.
[2] M. Jerbi, R. Meraihi, S. Senouci, and Y. Ghamri-Doudane, "GyTAR: Improved Greedy Traffic Aware Routing Protocol for Vehicular Ad Hoc Networks in City Environments," in Proc. International Workshop on Vehicular Ad Hoc Networks, 2006, pp. 88-89.
[3] B. Karp and H. Kung, "GPSR: Greedy Perimeter Stateless Routing for Wireless Networks," in Proc. ACM MOBICOM, 2000, pp. 243-254.
[4] U. Lee, B. Zhou, M. Gerla, E. Magistretti, P. Bellavista, and A. Corradi, "Mobeyes: Smart Mobs for Urban Monitoring with a Vehicular Sensor Network," IEEE Wireless Communications, vol. 13, no. 5, pp. 52-57, 2006.
[5] Y.-J. Kim, R. Govindan, B. Karp, and S. Shenker, "Geographic Routing Made Practical," in Proc. 2nd NSDI, Boston, MA, 2005.
[6] Y.-J. Kim, R. Govindan, B. Karp, and S. Shenker, "On the Pitfalls of Geographic Routing," in Proc. 3rd International Workshop on DIALM-POMC, Cologne, Germany, September 2005.
[7] H. Frey and I. Stojmenovic, "On delivery guarantees of face and combined greedy-face routing in ad hoc and sensor networks," in Proc. 12th Annual International Conference on Mobile Computing and Networking, Los Angeles, CA, USA, September 2006.
[8] B. Karp and H. T. Kung, "Greedy Perimeter Stateless Routing for Wireless Networks," in Proc. 6th Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 2000), Boston, MA, August 2000, pp. 243-254.
[9] F. Granelli, G. Boato, and D. Kliazovich, "MORA: a Movement-Based Routing Algorithm for Vehicle Ad Hoc Networks," IEEE Workshop on Automotive Networking and Applications (AutoNet 2006), San Francisco, USA, December 2006.
[10] U.S. Census Bureau, TIGER/Line 2006 Second Edition, http://www.census.gov/geo/www/tiger/ (last accessed July 2008).
[11] The Institute of Electrical and Electronics Engineers (IEEE), "Wireless LAN Medium Access Control (MAC) and Physical Layer Specifications," http://standards.ieee.org/getieee802/802.11.html
[12] Task 3 Final Report, "Identify Intelligent Vehicle Safety Applications Enabled by DSRC," DOT HS 809 859, March 2005. http://www-nrd.nhtsa.dot.gov/pdf/nrd-12/1665CAMP3web/index.html (last accessed July 2008).

REAL TIME IMAGE ENCODER WITH DCT COMPRESSION ALGORITHM
Sriram R.* and Venkatesh S.*
*Meenakshi College of Engineering, Chennai-78. Email: sri17ram@yahoo.com
________________________________________________________________________________________________________________________
ABSTRACT
The main focus of this paper is to compress an image using the Discrete Cosine Transform (DCT) technique; this is accomplished by reducing the chrominance information of the image. To discard such information, the compressor divides each DCT output value by a quantization coefficient and rounds the result to an integer. Thus JPEG's lossy encoding is more frugal with the grey-scale part of an image and more frivolous with the color. Experiments show that this scheme of image compression reduces the image size by up to 18-22 times the original size, which is an important feature required by real-time systems.

Key words:
Discrete Cosine Transform (DCT), Chrominance,
Quantization, Compression.

I. INTRODUCTION

Currently, the most popular image compression schemes are the JPEG standard for still images and the MPEG standard for video sequences. The system aims at converting an image file to its compressed format by using some encoding techniques. This is accomplished by using a set of primitives that process an image through various steps and produce an image file in compressed format. For typical images, adjacent pixels usually possess a high degree of spatial correlation; that is, a great deal of information about a pixel value can be obtained by investigating its neighboring pixel values.

Predictive techniques exploit this spatial correlation and encode only the new information between successive pixels. Intra-frame coding is one such predictive technique. Intra-frame compression is a technique that can be applied to still images, such as photographs and diagrams, and exploits the redundancy within the images, known as spatial redundancy. Intra-frame compression techniques can also be applied to individual frames of a video sequence.

JPEG Encoder was designed specifically to
discard information that the human eye cannot easily
see. Slight changes in color are not perceived well by
the human eye, while slight changes in intensity
(light and dark) are. Therefore JPEG's lossy encoding
tends to be more frugal with the gray-scale part of an
image and to be more frivolous with the color.

The JPEG specification defines a minimal
subset of the standard called baseline JPEG, which all
JPEG-aware applications are required to support.
This baseline uses an encoding scheme based on the
Discrete Cosine Transform (DCT) to achieve
compression. DCT is a generic name for a class of
operations identified and published some years ago.
DCT-based algorithms have since made their way
into various compression methods.

DCT algorithms are capable of achieving a
high degree of compression with only minimal loss of
data. This scheme is effective only for compressing
continuous-tone images in which the differences
between adjacent pixels are usually small.

II. DOWN SAMPLING THE CHROMINANCE
COMPONENTS

The simplest way of exploiting the eye's
lesser sensitivity to chrominance information is
simply to use fewer pixels for the chrominance
channels. For example, in an image nominally
1000x1000 pixels, we might use a full 1000x1000
luminance pixels but only 500x500 pixels for each
chrominance component.
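As a concrete illustration of this idea, the sketch below averages each 2x2 block of a chrominance channel, halving its resolution in both directions (so a nominal 1000x1000 channel becomes 500x500); NumPy and even image dimensions are assumed for the example.

import numpy as np

def downsample_chroma_2x2(channel):
    """Average each 2x2 block of a chrominance channel (height and
    width assumed even), returning a half-resolution channel."""
    h, w = channel.shape
    blocks = channel.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# Example: a 1000x1000 Cb plane becomes 500x500.
cb = np.random.randint(0, 256, size=(1000, 1000)).astype(np.float64)
cb_small = downsample_chroma_2x2(cb)
assert cb_small.shape == (500, 500)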

When the uncompressed data is supplied in a conventional format (equal resolution for all channels), a JPEG compressor must reduce the resolution of the chrominance channels by down-sampling, or averaging together groups of pixels.

A DCT is applied to each 8x8 block. The DCT converts the spatial image representation into a frequency map: the low-order or "DC" term represents the average value in the block, while successive higher-order ("AC") terms represent the strength of more and more rapid changes across the width or height of the block.

The highest AC term represents the strength of a cosine wave alternating from maximum to minimum at adjacent pixels. The DCT calculation is fairly complex; in fact, this is the most costly step in JPEG compression. The point of doing it is that we have now separated out the high- and low-frequency information present in the image.

The equations for the forward and inverse Discrete Cosine Transforms are given below.

We can discard high-frequency data easily without losing low-frequency information. The DCT step itself is lossless except for round-off errors. If you start with an 8x8 block of 64 values, f(x,y), they can be transformed to a new set of 64 values, F(u,v), by the forward Discrete Cosine Transform and then back to the original 64 values, f(x,y), by the inverse Discrete Cosine Transform.

III. ENCODER IMPLEMENTATION

To discard an appropriate amount of information, the compressor divides each DCT output value by a quantization coefficient and rounds the result to an integer. The larger the quantization coefficient, the more data is lost, because the actual DCT value is represented less accurately. Each of the 64 positions of the DCT output block has its own quantization coefficient, with the higher-order terms being quantized more heavily than the low-order terms (i.e., the higher-order terms have larger quantization coefficients).

Furthermore, separate quantization tables are employed for luminance and chrominance data, with the chrominance data being quantized more heavily than the luminance data. This allows JPEG to further exploit the eye's differing sensitivity to luminance and chrominance. It is this step that is controlled by the "quality" setting of most JPEG compressors. The compressor starts from a built-in table that is appropriate for a medium-quality setting and increases or decreases the value of each table entry in inverse proportion to the requested quality.

The complete quantization tables actually used are recorded in the compressed file so that the decompressor will know how to (approximately) reconstruct the DCT coefficients. The resulting coefficients still contain a significant amount of redundant data; Huffman compression losslessly removes these redundancies, resulting in smaller JPEG data. The encoder connection establishment is shown in figure (1) below.

Figure 1. Connection Establishment of the x86 Processor and the development board.

The system board that we have used is technically called the Prayog Development Board. Prayog is based on the Intel StrongARM microprocessor (SA-1110), a highly integrated microcontroller that incorporates a 32-bit StrongARM processor core, a system control module, multiple communication channels and an LCD controller. PRAYOG can be used as a stand-alone system, as it has all the required hardware interfaces to be a stand-alone computer.

RS-232 is a standard for serial binary single-ended data and control signals connecting a DTE (Data Terminal Equipment) and a DCE (Data Circuit-terminating Equipment). It is commonly used in computer serial ports, so the connection between the x86 processor and the development board is established using RS-232.
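A compact sketch of the encoding steps described in this section (forward DCT of an 8x8 block followed by per-coefficient quantization and rounding) is given below; the quantization table used here is a simple illustrative one that grows with frequency, not the paper's or the JPEG standard's actual tables.

import numpy as np

def dct2_8x8(block):
    """Forward 2-D DCT of an 8x8 block, per the baseline JPEG formula."""
    def c(k):
        return 1.0 / np.sqrt(2.0) if k == 0 else 1.0
    f = np.asarray(block, dtype=np.float64)
    F = np.zeros((8, 8))
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += f[x, y] * np.cos((2 * x + 1) * u * np.pi / 16) \
                                 * np.cos((2 * y + 1) * v * np.pi / 16)
            F[u, v] = 0.25 * c(u) * c(v) * s
    return F

def quantize(F, qtable):
    """Divide each DCT coefficient by its quantization coefficient and
    round to an integer; larger table entries discard more information."""
    return np.rint(F / qtable).astype(int)

# Illustrative table: higher-frequency terms get larger coefficients.
qtable = np.array([[1 + (u + v) * 4 for v in range(8)] for u in range(8)])
block = np.random.randint(0, 256, size=(8, 8))
coeffs = quantize(dct2_8x8(block - 128), qtable)  # level-shift, DCT, quantize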

IV. RESULTS AND DISCUSSIONS

To compile the developed code, the Makefile must be present in the directory where the source code is available. This Makefile is meant to compile the JPEG source code for the x86 processor on the Linux platform.

The connection is made from the x86 processor to the Prayog board through a Local Area Network. The IP addresses of both systems must be known in order to have uninterrupted communication and execution of the program. Here the x86 processor acts as the server. The first window displayed below is used to find the IP address of the NFS server.



The implementation of the connection is
shown below in the second window.



The output of the executed code is displayed below, which indicates the size of the input image and of the output (compressed) image. Figure 2 represents the input image of the encoder system, and the resultant output image is displayed in Figure 3.

Figure 2. Input image of the Encoder system.

Figure 3. Output image of the Encoder system.

V. CONCLUSION

The quality of the image obtained using JPEG compression techniques depends on the quantization values chosen. Low quantization values produce a slight loss of resolution, but no significant loss of picture quality. As the quantization values increase, the compression process starts to become visible as blocking of the image. The use of IPP greatly reduces the development time, and the resulting code is also more optimized.


LOCALIZATION ALGORITHMS IN WIRELESS SENSOR NETWORKS: CURRENT ISSUES AND CHALLENGES
Sunil Kumar, Department of Computer Science & Engineering, Bharat Institute of Technology, Meerut, Uttar Pradesh Technical University, Lucknow (UP), sk4sunilkumar@gmail.com
Kapil Tomar, Department of Information Technology, IIMT, Meerut, Uttar Pradesh Technical University, Lucknow (UP), kkapiltomar@gmail.com
_____________________________________________________________________________________________________________________________
ABSTRACT:

Awareness of location is one of the important and critical issues and challenges in wireless sensor networks. Knowledge of location among the participating nodes is one of the crucial requirements in designing solutions for various issues related to wireless sensor networks (WSNs). Wireless sensor networks are used in environmental applications to perform a number of tasks such as environment monitoring, disaster relief, target tracking, defense and many more. Node localization is required to report the origin of events, assist group querying of sensors, support routing, and answer questions about network coverage. This paper provides an overview of various aspects involved in the design and implementation of wireless sensor network localization systems, along with future research directions and challenges for improving node localization in wireless sensor networks.

KEYWORDS:
Centralized Localization, Distributed Localization,
Beacon-based distributed algorithms, Relaxation-
based distributed algorithms, Coordinate system
stitching based distributed algorithms, Diffusion,
Bounding Box, Gradient, Wireless Sensor Networks.

[1] INTRODUCTION:
A wireless sensor network consists of hundreds or thousands of sensors which are distributed over a region and monitor physical or environmental conditions such as temperature, sound, vibration, pressure and motion. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; they are now used in many industrial and civilian applications, including machine health monitoring, traffic control and home automation. The cost of a sensor node varies with its size and power. A wireless sensor network normally constitutes a wireless ad-hoc network, meaning that each sensor supports a multi-hop routing algorithm in which nodes store and forward data towards the base station. Wireless sensor networks are an active research area in computer science and telecommunications, and a large number of researchers are working in this area.
In most algorithms it is assumed that the sensor nodes are aware of their own locations and also of the locations of their neighbors. Hence localization is a major research area in wireless sensor networks. Localization in a wireless sensor network can be defined as the identification of sensor node positions, and high accuracy of the localization technique is desired. Existing algorithms can be classified into two categories:
1- Range-based techniques. 2- Range-free techniques.
In range-based techniques, the location of a sensor node is determined with the help of distance or angle metrics such as time of arrival (TOA), time difference of arrival (TDOA) [6], angle of arrival (AOA) [9] and received signal strength indicator (RSSI) [8]. Range-based techniques are highly accurate but require expensive hardware and a lot of computation; DV-hop and DV-distance are range-based. In range-free techniques, the position of a sensor node is identified on the basis of information transmitted by nearby anchor nodes, on a hop-count or triangulation basis. The chord selection approach, the 3-D multilateration approach and the centroid scheme are range-free techniques. Range-free techniques are cheaper than range-based ones but have lower accuracy.
GPS is one solution, but for a large number of sensor nodes the straightforward approach of adding GPS to all nodes in the network is not feasible because:
1- In the presence of dense forests, mountains or other obstacles that block the line of sight to GPS satellites, GPS cannot be used.
2- The power consumption of GPS reduces the battery life of the sensor nodes and also the effective lifetime of the sensor network.
3- GPS does not work efficiently in indoor environments or deep under water.
4- The size of the GPS receiver and its antenna increases the sensor node form factor.
For these reasons an alternative to GPS is required which is cost effective, rapidly deployable and operates in diverse environments.
The paper is structured as follows: Section 2 gives the problem definition. Section 3 discusses related work. Section 4 presents current issues in location discovery. Section 5 describes future challenges in the different approaches to improve localization. The final section presents open problems and future directions for improving localization in WSN technology.

[2] PROBLEM DEFINITION:
The goal is to design a localization scheme that localizes randomly deployed sensor nodes with low computation and low communication overhead. Consider the case when we have deployed a sensor network consisting of N sensors at locations S = {S1, S2, ..., SN}. Let Sxi, Syi and Szi denote the x, y and z coordinates of the location of sensor i. Constraining Szi to be 0 gives the 2D version of this problem, but higher accuracy is obtained when 3D positions are used in the wireless sensor network, so a mechanism is required for WSNs that operates in 3D space for real-world problems.
Some sensor nodes are aware of their positions; these nodes are known as anchor nodes or beacons. All other nodes localize themselves with the help of location references received from the anchors. One algorithm for 3D sensor networks [7] considers a network comprised of mobile and static sensor nodes. The mobile nodes are equipped with GPS and are expected to be aware of their positions at any instant of time; they move through the network space and periodically broadcast beacon messages about their locations. Static sensor nodes receive these messages as soon as they enter the communication range of any mobile node, and on receiving such messages a static node calculates its individual position based on the equation of a sphere. A problem arises, however, when all the nodes are moving and changing their coordinates rapidly: no researcher has yet given a unique, efficient and universally accepted algorithm that accurately finds the coordinates of moving nodes in 3D-space WSNs, so this remains a challenge for all researchers working in this field.
Mathematically, the localization problem can therefore be defined as follows. Given a multi-hop network represented by a graph G(V,E) and a set of mobile beacon nodes B with their positions {Xb, Yb} at any instant for all b ∈ B, we want to find the positions {Xu, Yu} of all unknown nodes u ∈ U.
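To make the "equation of a sphere" step concrete, the sketch below estimates an unknown node's 3D position from four or more beacon positions and estimated ranges by linearizing the sphere equations and solving a least-squares system; NumPy is assumed, and in practice the range estimates would come from ranging measurements (e.g. RSSI or TDOA) and would be noisy.

import numpy as np

def multilaterate(beacon_positions, distances):
    """Estimate (x, y, z) from beacon positions and ranges.

    Each beacon i at (xi, yi, zi) with range di satisfies
    (x-xi)^2 + (y-yi)^2 + (z-zi)^2 = di^2. Subtracting the first
    equation from the others removes the quadratic terms and leaves a
    linear system A p = b, solved here in the least-squares sense.
    Requires at least four non-coplanar beacons.
    """
    P = np.asarray(beacon_positions, dtype=np.float64)
    d = np.asarray(distances, dtype=np.float64)
    x0, d0 = P[0], d[0]
    A = 2.0 * (P[1:] - x0)
    b = (d0 ** 2 - d[1:] ** 2) + np.sum(P[1:] ** 2, axis=1) - np.sum(x0 ** 2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Example with four beacon positions heard from mobile, GPS-equipped nodes.
beacons = [(0, 0, 10), (50, 0, 10), (0, 50, 10), (50, 50, 20)]
true = np.array([20.0, 30.0, 0.0])
ranges = [np.linalg.norm(true - np.array(b)) for b in beacons]
print(multilaterate(beacons, ranges))  # close to (20, 30, 0)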

[3] RELATED WORK:
Localization in wireless sensor networks is an active area of research [3] [4]. Techniques worth noting include MDS (multidimensional scaling), PDM (proximity-based map), APIT and APS (ad-hoc positioning system). These techniques give new directions in WSN localization, as they achieve high accuracy at low computation cost. Moreover, error propagation arises due to channel fading and noise corruption; localization schemes that reduce this error propagation have received little attention in the literature. Real-world problems in 3D wireless sensor networks will give rise to new proposals in the future.
The authors of [] have proposed a range-free localization approach for 3D wireless sensor networks in which a GPS-equipped flying anchor moves around the region under surveillance and continuously broadcasts its position information; these messages help the other sensor nodes to compute their locations. This scheme was shown to be better than existing range-free localization schemes for 3D wireless sensor networks. Its basic assumption is that the nodes are static, so it is impractical in cases where sensors are prone to displacement.
This paper gives a comprehensive summary of these techniques along with other existing localization schemes; at the same time it compares the localization techniques and provides future research directions in this area.
Different approaches in location discovery: Existing location discovery schemes consist of two phases: (1) distance (or angle) estimation and (2) distance (or angle) combining. The most popular methods for estimating the distance between two nodes are RSSI, TOA, TDOA, AOA and hop count. For the combining phase, the most popular alternatives are hyperbolic trilateration and maximum likelihood estimation.

[4] CURRENT ISSUES IN LOCATION DISCOVERY:
1- Resource constraints: Nodes must be cheap to fabricate and trivially easy to deploy. Nodes must be cheap, since fifty cents of additional cost per node translates to $500 for a one-thousand-node network. Deployment must be easy as well: thirty seconds of handling time per node to prepare for localization translates to over eight man-hours of work to deploy a 1000-node network. This means designers must actively work to minimize the power cost, hardware cost, and deployment cost of their localization algorithms.
2- Node density: Many localization algorithms are sensitive to node density. For instance, hop-count-based schemes generally require high node density so that the hop-count approximation for distance is accurate (section 1.2.3). Similarly, algorithms that depend on beacon nodes fail when the beacon density is not high enough in a particular region. Thus, when designing or analyzing an algorithm, it is important to note the algorithm's implicit density assumptions, since high node density can sometimes be expensive if not totally infeasible.
3- Environmental obstacles and terrain irregularities: Environmental obstacles and terrain irregularities can also wreak havoc on localization. Large rocks can occlude the line of sight, preventing TDoA ranging, or interfere with radios, introducing error into RSSI ranges and producing incorrect hop-count ranges. Indoors, natural features like walls can impede measurements as well. All of these issues are likely to come up in real deployments, so localization systems should be able to cope with them.
4- Security: Security is a major issue in localization, since data is transferred from beacon nodes to the other nodes. A compromised or malicious mobile beacon that acts as an original mobile beacon can transmit false messages; the resulting errors are harmful to the computation.
5- Non convex topologies: Border nodes are a
problem because less information is
available about them and that information is
of lower quality. This problem is exacerbated
when a sensor network has a non-convex
shape: Sensors outside the main convex body
of the network can often prove unlocalizable.
Even when locations can be found, the
results tend to feature disproportionate
error.

[5] Future challenges in location discovery approaches: Research on localization in wireless sensor networks can be classified into two broad categories: centralized and distributed. Many such algorithms have been proposed, but some aspects remain challenges for the future.
1- Distributed Localization: If each node collects partial data and executes the algorithm itself, the localization algorithm is distributed.
1.1- Beacon-based distributed algorithms:
Categorized into 3 parts:
1.1.1- Diffusion: In diffusion, the most likely position of the node is the centroid [1] [2] of its neighboring known nodes. APIT requires a high ratio of beacons to nodes and longer-range beacons to obtain a good position estimate. For low beacon density, this scheme does not give accurate results.
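A minimal sketch of the centroid (diffusion) idea described above, with hypothetical beacon coordinates; it is an illustration, not the implementation evaluated in the cited works.

```python
def centroid_estimate(beacon_positions):
    """Centroid localization: the unknown node's position is estimated as the
    centroid of the beacons it can hear (the 'diffusion' idea above)."""
    xs = [x for x, _ in beacon_positions]
    ys = [y for _, y in beacon_positions]
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Hypothetical beacons heard by one unknown node
print(centroid_estimate([(0, 0), (10, 0), (10, 10), (0, 10)]))  # -> (5.0, 5.0)
```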
1.1.2- Bounding box: Bounding box forms a bounding region for each node and then tries to refine its position. Collaborative multilateration enables sensor nodes to accurately estimate their locations by using known beacon locations that are several hops away and distance measurements to neighboring nodes. At the same time, it also increases the computational cost.
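For illustration only, the sketch below shows one common form of the bounding-box idea: each beacon constrains the node to an axis-aligned square, the boxes are intersected, and the centre of the intersection is the estimate. The beacon positions and ranges are invented.

```python
def bounding_box_estimate(beacons, ranges):
    """Bounding-box localization: beacon i constrains the node to the square
    [xi - ri, xi + ri] x [yi - ri, yi + ri]; the estimate is the centre of the
    intersection of all such boxes."""
    xmin = max(x - r for (x, _), r in zip(beacons, ranges))
    xmax = min(x + r for (x, _), r in zip(beacons, ranges))
    ymin = max(y - r for (_, y), r in zip(beacons, ranges))
    ymax = min(y + r for (_, y), r in zip(beacons, ranges))
    return (xmin + xmax) / 2.0, (ymin + ymax) / 2.0

# Hypothetical beacons and estimated ranges
print(bounding_box_estimate([(0, 0), (8, 0), (4, 6)], [6.0, 6.0, 5.0]))
```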
1.1.3- Gradient: The limitation of this approach is the error in hop-count distance estimates in the presence of an obstacle.
1.2- Relaxation-based distributed algorithms: The
limitation of this approach is that the algorithm is
susceptible to local minima [5].
1.3- Coordinate system stitching based distributed
algorithms: The advantage of this approach is that no
global resources or communications are needed. The
disadvantage is that convergence may take some
time and that nodes with high mobility may be hard
to cover.
1.4- Hybrid localization algorithms: The limitation of this scheme is that it does not perform well when there are only a few anchors. SHARP gives poor performance for anisotropic networks.
1.5- Interferometric ranging based localization: Localization using this scheme requires a considerably larger set of measurements, which limits the solution to smaller networks.

2- Centralized Localization: If an algorithm collects localization-related data at one station and executes there, it is called centralized.

2.1- MDS-MAP: The advantage of this scheme is that it does not need anchor or beacon nodes to start with. It builds a relative map of the nodes even without anchor nodes; then, with three or more anchor nodes, the relative map is transformed into absolute coordinates. This method works well in situations with low ratios of anchor nodes. A drawback of MDS-MAP [10] is that it requires global information about the network and centralized computation.
2.2- Node localization based on simulated annealing: This algorithm does not propagate localization error. The proposed flip-ambiguity mitigation method is based on the neighborhood information of nodes and works well in a sensor network with medium to high node density. However, when the node density is low, it is possible that a node is flipped yet still maintains the correct neighborhood; in this situation the proposed algorithm fails to identify the flipped node.
2.3- An RSSI-based centralized localization technique: The advantage of this scheme is that it is a practical, self-organizing scheme that can address any outdoor environment [8]. Its limitation is that it is power-consuming, because it requires extensive message generation and much information must be forwarded to the central unit.
[6] Open problems:
Some problems remain open for researchers to give new directions in the future.
1. Interferometric ranging based localization that takes error propagation into account: Error propagation can be a significant problem in this technique. In order to localize a large number of nodes from a small set of anchors, future localization algorithms need to find a way to efficiently limit the error propagation.
2. Robust algorithms for mobile sensor networks: Mobile sensors are useful because they can move to locations that meet sensing coverage requirements. New localization algorithms will need to be developed to accommodate these moving nodes, so devising a robust localization algorithm for next-generation mobile sensor networks remains an open problem.
4. Finding the minimum number of beacon locations: Beacon-based approaches require a set of beacon nodes with known locations, so an optimal and robust scheme would use a minimum number of beacons in a region. Further work is needed to find the minimum number of locations where beacons must be placed so that the whole network can be localized with a certain level of accuracy.
5. Finding localization algorithms for three-dimensional spaces: In real-world applications it is physically impossible to deploy a wireless sensor network on an absolutely planar area, and accurate location information is crucial for all kinds of WSN applications. Good schemes for accurate localization of sensors in three-dimensional space are therefore a promising area of future work.

References:
[1] T. He, C. Huang, B. Blum, J. Stankovic, and T. Abdelzaher, "Range-free localization schemes in large scale sensor networks," in Proceedings of the Ninth Annual International Conference on Mobile Computing and Networking (MobiCom'03), September 2003, San Diego, CA, USA, pp. 81-95.
[2] A. Savvides, H. Park, and M. Srivastava, "The bits and flops of the n-hop multilateration primitive for node localization problems," in Proceedings of the 1st ACM International Workshop on Wireless Sensor Networks and Applications (WSNA'02), September 2002, Atlanta, Georgia, USA, pp. 112-121.
[3] J. Bachrach and C. Taylor, "Localization in Sensor Networks," in Handbook of Sensor Networks: Algorithms and Architectures, I. Stojmenovic, Ed., 2005.
[4] Available HTTP: http://www.cse.ust.hk/~yangzh/yang_pqe.pdf
[5] C. Savarese, J. Rabaey, and J. Beutel, "Locationing in distributed ad-hoc wireless sensor networks," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'01), May 2001, Salt Lake City, Utah, USA, vol. 4, pp. 2037-2040.
[6] J. Bachrach and C. Taylor, "Localization in Sensor Networks."
[7] S. Rao, "Composite approach to deal with localization problems in wireless sensor networks."
[8] R. W. Ouyang, A. K.-S. Wong, and C.-T. Lea, "Received Signal Strength-Based Wireless Localization via Semidefinite Programming: Noncooperative and Cooperative Schemes."
[9] D. Arbul, "Distributed Algorithm for Anchor-Free Network Localization Using Angle of Arrival."
[10] A. Pal, "Localization Algorithms in Wireless Sensor Networks," ISSN 1943-3581, 2010, Vol. 2, No. 1.

VEHICLE TELEMATICS SYSTEM USING GPRS
Phalake M. B. and Bhalerao D. M.
Sinhgad College of Engineering / Mech-Mechatronics, Pune 41
phalakemb@gmail.com, dipashree_b@yahoo.com
___________________________________________________________________________________________________________________________

ABSTRACT

Vehicle Telematics is a term used to describe electronic data exchange between connected vehicles. Next-generation telematics services demand remote access to vehicle operational data. A challenge with vehicles using complicated embedded systems is how to extract the vehicle diagnostic data, evaluate it to expose possible problems, and determine how to fix them.
This paper describes a vehicle data acquisition system based on an embedded system placed in the vehicle and GPRS connectivity. A large amount of data can be transmitted from the vehicle after collecting it from the ECUs fitted in the vehicle. The system is also reliable and cost-effective.
KEYWORDS
Vehicle Telematics, Embedded System, GPRS
1 INTRODUCTION

Nowadays vehicles are equipped with more and more electronic gadgets. These electronic devices play a vital role in controlling engine parameters (i.e. proper functioning of the engine as well as emission parameters), safety, comfort, navigation, entertainment and much more. Most of these devices are connected to each other for gathering information. Further, these electronic modules provide a diagnostic network interface through in-vehicle network protocols (such as KWP2000, LIN and CAN) to read a number of parameters related to the engine and the vehicle body electricals, and also to monitor the Diagnostic Trouble Codes (DTCs). The DTCs provide valuable information on the faults and on the subsystem where the faults have occurred.
There is a need to access the data available in the vehicle for vehicle parameter monitoring at the time of vehicle testing (when the vehicle is in the design stage), for monitoring the vehicle to diagnose and resolve any problem detected when the vehicle is away from its home location, and also from a security point of view.
Earlier, a test vehicle (running on the highway) equipped with data loggers and other test equipment could deliver the collected data only when it came back to the office, so there was a chance of data loss and repeated testing was required in some cases.
To solve these problems we have designed a system built from an embedded system, a GPRS modem and the vehicle's electrical and electronic systems, giving a cost-effective solution for collecting a large amount of correct data.
The test engineer can then access the vehicle data through the GPRS embedded web server fitted in the vehicle. Using this technique, vehicle test engineers and owners are able to access vehicle operational and diagnostic data remotely, as well as control certain parameters and systems from any place in the world, because the data is available on the internet.
2. OVERVIEW AND RELATED WORK
Basically, telematics [1] means the blending of telecommunications and informatics. Vehicle telematics deals with the exchange of data collected from vehicle networks, used for vehicle tracking, vehicle monitoring, toll collection and a number of other purposes. A number of systems are available for remote diagnostics, such as a remote on-line diagnostic system for vehicles based on in-vehicle networking, On-Board Diagnostics (OBD) and GPS [1] [2] [3] [4]. There are also systems that use GSM for monitoring vehicle data by sending it from the vehicle via the SMS service [5].

In the first case (OBD and GPS) the data transmission is costlier and may be redundant. In the second case (using SMS) there is also a chance of losing necessary data, because of the slower data transmission rate and the small message size, and the data transmission becomes costly if continuous monitoring of the vehicle is required.
3. SYSTEM ARCHITECTURE
The use of GPRS [6] is well known, and almost all GSM service providers offer this service, so it is very easy to get connected to the Internet. OBD systems are available in almost all new-generation vehicles, as the number of electronic devices in today's vehicles keeps increasing. By combining these two, it is possible to obtain the necessary data at low cost and in a proper format for vehicle monitoring.
The system architecture shown in Figure 1 contains the basic systems of the vehicle, i.e. the EMS (Engine Monitoring System), the Body Electrical System, and ABS-SRS (Anti-lock Braking System and Supplemental Restraint System). These systems are connected to one of the communication lines available in the vehicle for in-vehicle networking (i.e. K-line, LIN or CAN).



Figure 1: System Architecture

The microcontroller board is an embedded system capable of collecting data from the said ECUs and passing it on to the GPRS modem [7] for further data transmission.
3.1 Vehicle Connectivity
The system is currently suitable for a K-line (KWP 2000) connection. The embedded system equipped with the microcontroller works from the vehicle battery and is properly mounted in the vehicle with the necessary network connections, which are made through the K-line connector. As soon as the system is powered, it monitors the vehicle network and starts fetching the latest values of the pre-configured parameters and the DTCs from the vehicle network.
Communication is established first by enabling the respective device. A fixed set of commands is then used for data exchange between this embedded device and any or all of the ECUs in the vehicle.

3.2 GPRS Connectivity

GPRS connectivity is achieved using a GPRS modem based on the Siemens MC55 module. The MC55 wireless modules are among the smallest tri-band modules on the market today. The module covers the GSM/GPRS networks that exist across the world and consequently enables data exchange in a specific format. A GPRS-enabled mobile SIM card is placed in the adapter provided in the modem. The embedded client implements AT commands [7] to communicate with the mobile device (the modem). A fixed IP address is configured in the modem, and after establishing communication with the internet, data is transferred continuously to that particular IP address by the modem.
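The following is a minimal, illustrative sketch (written in Python with the pyserial package for readability; the actual client runs on the microcontroller) of the kind of standard GSM/GPRS AT command sequence used to attach to the network and define a PDP context. The serial device name, APN and timings are assumptions, and the Siemens-specific TCP/IP socket commands of the MC55 are not shown here.

```python
import time
import serial  # pyserial

def send_at(port, cmd, wait=1.0):
    """Send one AT command and return the modem's raw reply."""
    port.write((cmd + "\r\n").encode("ascii"))
    time.sleep(wait)
    return port.read(port.in_waiting or 1).decode("ascii", errors="replace")

# Assumed serial device and APN; real values depend on the board and SIM operator.
modem = serial.Serial("/dev/ttyS0", baudrate=115200, timeout=1)
for cmd in (
    "AT",                            # is the modem alive?
    "AT+CPIN?",                      # is the SIM ready?
    "AT+CGATT=1",                    # attach to the GPRS service
    'AT+CGDCONT=1,"IP","internet"',  # define the PDP context (APN assumed)
):
    print(cmd, "->", send_at(modem, cmd).strip())
modem.close()
```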

3.3 Software:

The software in the embedded system is suitable for:
1. Communication with vehicle network

2. Communicate with the Modem

For communication with the vehicle network, the K-line (KWP 2000) protocol is used. The software for the physical, data link and application layers is written in the embedded system. First the necessary devices are enabled for communication, and then the data exchange starts. Data is collected from the devices at a fixed interval of 500 ms.

Date       Time     EMS System DTCs  Body Ele System DTCs  ABS Sensor DTCs  UCL Temp (°C)  Fuel Level (L)  Immobiliser status  Brake Oil
1/11/2010  8:12:35  125              556                   NA               35             65              OFF                 HI
1/11/2010  8:20:00  125              556                   NA               70             64              OFF                 HI

Table 1: Data collected on the server machine

While collecting this data, the embedded system also connects to GPRS using AT commands. The connection is established to the fixed IP address configured in the modem. Once a proper connection is established, the data collected from the ECUs is transmitted in packets at a fixed time interval.
This data is received at the said IP address, where a dynamic internet page is running. The data is collected into an Excel file and used for further analysis by the test engineer, as shown in Table 1.
4. FUTURE WORK
This work can be extended to LIN and CAN networks in the future. The system can also be extended to two-way communication, to act on the fault codes detected in the vehicle, by increasing the capability of the microcontroller to store larger amounts of data. Commands can be provided from the internet to an embedded web server built into the vehicle using the built-in TCP/IP stack of the modem or microcontroller.
5. CONCLUSION
This system is designed for vehicle diagnostics from a remote location and for vehicle performance monitoring, but it can be used for a wide variety of applications, such as:
- Vehicle security
- Emergency services
- Fleet management
- Advertisement in the vehicle using an LED display
- Remote fault finding / troubleshooting
Currently this system supports GPRS; it can be extended to 3G technologies for faster and more detailed data transfer, and to other vehicle protocols. The concept can be realized as small portable hardware that can be mounted on any automotive vehicle to enable remote access. It can be extended with a GPS (Global Positioning System) receiver so that all vehicle parameters, including vehicle location, can be monitored remotely, and it can also be extended to provide service support remotely.
6. REFERENCES
[1] K. Y. Cho, C. H. Bae, Y. Chu and M. W. Suh, "Overview of Telematics: A System Architecture Approach," International Journal of Automotive Technology, Vol. 7, No. 4, pp. 509-517 (2006).
[2] William Jenkins, Ron Lewis, Georgios Lazarou, Joseph Picone and Zachary Rowland, "Real-Time Vehicle Performance Monitoring Using Wireless Networking," Human and Systems Engineering, Center for Advanced Vehicular Systems, Mississippi State University, 200 Research Blvd., Mississippi State, Mississippi 39759, USA.
[3] I. Aris, M. F. Zakaria, S. M. Abdullah and R. M. Sidek, "Development of OBD-II Driver Information System," International Journal of Engineering and Technology, Vol. 4, No. 2, 2007, pp. 253-259.
[4] Jyong Lin, Shih-Chang Chen, Yu-Tsen Shih, and Shi-Huang Chen, "A Study on Remote On-Line Diagnostic System for Vehicles by Integrating the Technology of OBD, GPS, and 3G," World Academy of Science, Engineering and Technology 56, 2009.
[5] Vinayak S. Kumbar, Sneha Bharadwaj, Nagalaxmi B. V., Abhijeet Prem Jetly, "Cellular Based Remote Vehicle Data Access."
[6] Christian Bettstetter, Hans-Jörg Vögel, and Jörg Eberspächer, "GSM Phase 2+ General Packet Radio Service GPRS: Architecture, Protocols, and Air Interface," IEEE Communications Surveys, Third Quarter 1999, vol. 2, no. 3.
[7] Siemens Cellular Engine, MC55 AT Command Set, Version 02.06, November 12, 2004, DocId: MC55_ATC_V02.06.

BIOMETRICS IN SECURE E-TRANSACTIONS
Shantha Pai, Sanghamitra Bordoloi¹, Gayathri Kamath²
shantha1990pai@yahoo.com, gayathrikamath@acharya.ac.in
¹ Adhiyamaan College of Engineering, ² Acharya Institute of Technology
___________________________________________________________________________________________________________________________
ABSTRACT
Online shopping using WAP-enabled mobile phones has come into wide use, and credit cards serve as the currency for e-business and e-shopping. On the negative side, hackers and spoofers misuse credit card numbers even though the network has been made secure.
In this paper, we propose a multi-biometric model (integrating voice, fingerprint and facial scanning) that can be embedded in a mobile phone, thus making e-transactions more secure. The paper uses image processing for facial recognition, together with fingerprint and voice recognition. We have also simulated a few graphs for voice recognition and facial verification using MATLAB 6.0.

1. INTRODUCTION
Here we present a multimodal system that can be embedded in a mobile phone, integrating fingerprint, voice and facial scanning. It addresses the problem of the high False Rejection Rate of facial scanners, eliminates the fooling of fingerprint scanners and overcomes the disadvantages of voice recognition models.
A biometric system is a recognition system which makes a personal identification by determining the authenticity of a specific physiological or behavioral characteristic possessed by the user.
This method of identification is preferred over traditional methods involving passwords and PINs for various reasons: the person to be identified is required to be physically present at the point of identification, and identification based on biometric techniques eliminates the need to remember a password or carry an identity card.
Depending on the context in which a biometric system works, it can be classified as either an identification system or a verification (authentication) system: identification involves establishing a person's identity, whereas verification involves confirming or denying a person's claimed identity.
BIOMETRIC SYSTEM COMPONENTS:
What is needed to make it work?
- A capture device (sensor): fingerprint reader, video camera, etc.
- An algorithm for processing (feature extraction) and matching.
- A repository: a place to store the enrolled biometric templates for later comparison; it should be protected (secure area, signed/encrypted, etc.).
In this paper we propose a multi-biometric system obtained by integrating multiple individual biometric models. The rest of the paper is organized as follows: section 2 deals with the need for biometrics in mobile phones, section 3 briefly describes the design of face, voice and fingerprint recognition, section 4 gives the implementation and results, and section 5 concludes the paper with future enhancements.

2. NEED FOR BIOMETRICS IN MOBILE PHONES

WAP-enabled mobile phones give consumers the facility to shop online, and credit cards continue to be an efficient tool for online money transactions. On the other hand, a credit card number can be stolen on its way to its destination and misused by hackers; thus, e-business through a mobile phone becomes insecure. Also, a report on www.download.com stated that much anti-fraud software, such as that provided by ArticSoft and ISC, created back-door entries and was largely involved in data spoofing. In addition, many users and companies were prone to attack by viruses and Trojan horses.
With so many problems faced, service providers turned their attention towards biometrics to prevent data spoofing and to provide secure e-transactions.

Fig 1: PROTOCOL:



Fig 2: FUTURE MOBILE PHONE:


3. DESIGN

3.1 FACE RECOGNITION:

Humans often use faces to recognize individuals
and advancements in computing capability over the
past few decades now enable similar recognitions
automatically.
There are two predominant approaches to the
face recognition problem: geometric (feature based)
and photometric (view based). As researchers' interest in face recognition continued, many different algorithms were developed.
Facial recognition is considered one of the most tedious among all scans. Further, the difficulty of acquiring the face image and the cost of equipment make it more complex.
WAP-enabled phones such as the CX 400K and LG-SD1000, manufactured by LG Electronics, have built-in cameras that can acquire images and transmit them over the internet. The image is sent to the credit card company, which verifies that the face received matches the face in its database. If it matches, the goods are sent; otherwise the order is rejected.

Fig 3: FLOWCHART FOR FACE RECOGNITION:
























3.2 VOICE RECOGNITION:

Voice recognition is the identification of an individual's identity using speech as the identifying characteristic. The voice of the user is processed (speech processing) using the digital signal processor (DSP) which is the prime part of a cell phone; we program this DSP to implement the protection technique.
First, an original voice database of the user is created and stored in the Flash ROM available inside the cell phone. Whenever the user speaks through the cell phone, part of the speech sample is taken and stored. This is compared with the original database to check the identity of the user. If the user is authorized, he is allowed to continue; if not, the transmission is cut abruptly by putting the DSP into an idle state.
The speaker-specific characteristics of speech
are due to difference in physiological and behavioral
aspects of the speech production system in humans.
The main physiological aspect of the human speech
production system is the vocal tract shape. The vocal
tract modifies the spectral content of an acoustic
wave as it passes through it, thereby producing
speech. Therefore, it is common in speaker

verification systems to make use of features derived
only from the vocal tract.
The microphone in the mobile phone captures the speech. Then, using cepstral analysis, an utterance may be represented as a sequence of feature vectors. Based on the comparison of these features, the consumer's transaction is accepted or rejected. The following algorithm may be used in voice verification.

Fig 4 : FLOWCHART FOR VOICE VERIFICATION:
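As a rough illustration of the cepstral analysis mentioned above (not the authors' DSP implementation), the sketch below splits a speech signal into frames and computes a real-cepstrum feature vector per frame using only NumPy; the frame sizes and the synthetic test signal are assumptions.

```python
import numpy as np

def cepstral_features(signal, frame_len=256, hop=128, n_coeffs=13):
    """Represent an utterance as a sequence of real-cepstrum feature vectors.

    cepstrum(frame) = IFFT(log |FFT(frame)|); the first few coefficients
    capture the smooth spectral envelope shaped by the vocal tract.
    """
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    feats = []
    for frame in frames:
        spectrum = np.abs(np.fft.rfft(frame * np.hamming(frame_len))) + 1e-10
        cepstrum = np.fft.irfft(np.log(spectrum))
        feats.append(cepstrum[:n_coeffs])
    return np.array(feats)  # shape: (num_frames, n_coeffs)

# Hypothetical 1-second, 8 kHz test tone standing in for a real recording
t = np.arange(8000) / 8000.0
print(cepstral_features(np.sin(2 * np.pi * 220 * t)).shape)
```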



3.3 FINGERPRINT ACQUISITION:

Fingerprint identification is the process of comparing questioned and known friction-skin ridge impressions from fingers, palms and toes to determine whether the impressions are from the same finger (or palm, toe, etc.). Finger-based scanning is one of the oldest methods used for verification.
In a fingerprint recognition system that uses local features, preprocessing techniques are applied to produce an invariant feature vector. The method gives correct recognition even in the presence of positioning or rotation errors.

Fig 5: FINGERPRINT RECOGNITION SYSTEM:








4 IMPLEMENTATION & RESULTS

4.1 Face recognition
We took two faces with small differences (a small dot is visible on the forehead of the second face) and programmed MATLAB to find the difference between the two. The output is shown below:
Fig 6







Fig 7









Fig 8: Output Graph:
Difference between two images can be found by
MATLAB.
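A minimal sketch of the same idea outside MATLAB (this is not the authors' program): the pixel-wise absolute difference of two grayscale images highlights where they disagree, such as the small dot on the forehead. The two synthetic 64x64 "faces" below are stand-ins for the photographs.

```python
import numpy as np

def image_difference(img_a, img_b):
    """Pixel-wise absolute difference of two equally sized grayscale images."""
    diff = np.abs(img_a.astype(np.int16) - img_b.astype(np.int16))
    return diff, int(diff.sum())  # difference map and a simple mismatch score

# Two hypothetical 64x64 'faces'; the second has a small dark dot added.
face1 = np.full((64, 64), 200, dtype=np.uint8)
face2 = face1.copy()
face2[10:13, 30:33] = 50  # the 'dot on the forehead'
diff_map, score = image_difference(face1, face2)
print("mismatch score:", score, "changed pixels:", int((diff_map > 0).sum()))
```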
The above simulations show that even two persons with almost similar faces, differing only minutely, can be differentiated.
A problem now arises. A man without a beard makes a transaction successfully; a week later he makes another transaction with some hair
grown on his chin, and the system then acquires images of other parts of the face, such as the forehead, nose, ear, etc. Hence, this type of facial scanning system can be used as part of the multi-biometric system.

4.2 Voice Recognition

An audio spectrum showing the variation in
speech by taking time along the x-axis and frequency
along the y-axis is as follows:


Fig 9

We recorded a person saying the letter 'a' directly into a sound recorder and plotted Graph 1. This was simultaneously recorded on a tape recorder and Graph 2 was plotted. The graphs show some minute differences, which prove that this system cannot be fooled by imitation.

Fig 10: using Sound recorder












Fig 11: Using Tape Recorder












As every mobile phone has an in-built microphone and some have a video camera, the need for extra hardware for speech and image acquisition is eliminated.

4.3 Fingerprint Acquisition

The transaction scanner is embedded above the display screen as a transparent layer. The scanner consists of arrays of capacitors of size 0.03m; capacitors of such a small size can be manufactured with MEMS technology. When the consumer places his thumb on the scanner, the points at which his fingerprint touches the screen get discharged whereas the others remain charged. Thus the fingerprint is scanned and then sent for further processing.



5 CONCLUSION & FUTURE ENHANCEMENTS:

This technique, which implements multi-biometrics by combining face recognition, voice recognition and fingerprint recognition, is computationally effective as well as reliable in terms of recognition rates. The robustness of these recognizers makes it ideal for authenticating parties and reducing fraud in applications such as e-transactions. The likelihood of a false positive is extremely low, and its relative speed makes it a biometric with great potential. This mobile multi-biometric model can be embedded in a mobile phone; since the phone is cost-effective and no special hardware is required, it can be implemented effectively.
Future research focuses on an iris recognition system, a static (physiological) biometric method in which the iris image is captured by a standard video camera and converted into an IrisCode using Daugman's algorithm. Using an IrisCode record, the system can authenticate the identity of individuals with 100% accuracy, better than other methods such as fingerprint and face recognition.



NICE-2010
Acharya Institute of Technology, Bangalore-560090 266

REFERENCES:

1) L. Sirovich, M. Meytlis, Symmetry, Probability,
and Recognition in Face Space, PNAS -
Proceedings of the National Academy of
Sciences, Vol. 106, No. 17, 28 April 2009, pp.
6895-6899.
2) Bowyer, KW;Chang, K;Flynn, A survey of
approaches and challenges in 3D and multi-
modal 3D+2D face recognition Journal: COMPUT
VIS IMAGE UNDERSTAND, 101 (1): 1-15 JAN
2006.
3) Y. H. Yahaya, M. Isa, M. I. Aziz, Comput. Sci. Dept., Nat. Defence Univ. of Malaysia, Kuala Lumpur, Malaysia, "Fingerprint Biometric Authentication on Smart Card," Issue Date: 28-30 Dec. 2009, Volume: 2, pp. 671-673.
4) Teddy Ko, Multimodal Biometric Identification
for Large User Population Using Fingerprint,
Face and Iris Recognition, Proceedings of the
34th Applied Imagery and Pattern Recognition
Workshop (AIPR05) 2005 IEEE
5) Biometrics by Samir Nanavathi, Dreamtech
Wiley Publications.
6) Biometrics Made Easy for You, by John Walker.
7) Science and Technology, a supplement of The Hindu.
8) Web sites:
www.biometrics.cse.mds.edu
www.biometriccatalogue.com
www.bioenabletech.com
www.biometricsgroup.com
www.findbiometrics.com
www.nokia.com
www.howstuffworks.com

OPTIMUM COMPATIBILITY WITH THE SAP NET WEAVER BUSINESS
INTELLIGENCE ACCELERATOR
M. Victoria Hebseeba¹, Dr. T. Bhaskara Reddy²
¹ Department of Computer Science, Rayalaseema University, Kurnool
² Department of Computer Science, S.K. University, Anantapur
bhaskareddy_sku@yahoo.co.in, Victoria.hebseeba@gmail.com
______________________________________________________________________________________________________________________________
ABSTRACT

This paper describes the SAP NetWeaver BI Accelerator, its main architectural and technical features, and how the accelerator can be used for optimized performance in any IT landscape at the lowest operational cost. A number of global companies are scrambling to get SAP BIA implemented to mitigate poor query response times with burgeoning data. The Business Intelligence Accelerator was developed for deployment in the IT landscape of any company that stores large and growing volumes of business data in a standard relational database. Currently, most approaches to accessing the data held in such databases confront IT staff with a maintenance challenge. From the standpoint of company decision making, the key benefits of SAP NetWeaver BI with the accelerator are speed, flexibility, and low cost.

I. INTRODUCTION
The twenty first century has come to be known as
the Information Age, where anybody with a huge
source of information and the ability to assimilate
and synthesize data at a lightning pace has the
POWER. Making timely business decisions has
never been easy, but the increasing volume of
available information makes it more difficult than
ever.

The SAP NetWeaver BI Accelerator is a new application that helps analyse large amounts of critical business information up to a hundred times faster than alternative tools available on the market. Compared with most previous approaches, this appliance offers an order-of-magnitude improvement in the speed and flexibility of access to the data. This improvement is significant in business contexts where existing approaches impose quantifiable costs. The application benefits businesses that have high volumes of data: it enables quick access to any data with a low amount of administrative effort and is especially useful for sophisticated scenarios with unpredictable query types, high data volumes and a high frequency of queries. BIA is also useful when aggregates or database indexes are not sufficient, or when these methods become too costly to maintain.

Large-scale businesses handle large data volumes which are fetched in queries and reports by thousands of users. As the data volume increases, performance degrades drastically, and it becomes a challenge to keep the business running smoothly. To address this emerging demand for large-scale, ad hoc analytic activity, SAP developed the NetWeaver BI Accelerator for BI users.

Business enterprises are faced with an uphill task of
collecting unstructured data from various sources
and having to transform that into structured
management information to drive sound decision
making, targeted action and robust business results.
This has resulted in SAP business warehouse
solutions to grow to tens of terabyte territory and
increasing requirement to report data as close to
real-time as possible.

A.PRIMARY MARKET TRENDS

As business analytics gain more mainstream
acceptance, two primary trends are driving the
technical requirements of software platforms to
support decision making.

Growth in the number of end users:
Externally, research shows that already 44% of large
companies provide business intelligence reports to
external users, such as suppliers, customer, partners,
and other stakeholders.

Growth in data volumes:
Thirty percent of companies with $500 million or more in revenue expect their data warehouses to grow by at least 100% over the next three years.
To address these requirements and shortcomings, IT companies are focusing more on performance-tuning techniques.

II. BI ACCELERATOR
The SAP BI Accelerator (BIA) presents itself as an appliance because it combines software and hardware in one package. To create the BI Accelerator appliance, SAP has partnered with Intel, which provides the processors, and with HP and IBM, which provide their respective server and storage technologies.
SAP NetWeaver BI customers adopting the BI Accelerator can expect radical improvements in query performance through sophisticated in-memory data compression and horizontal and vertical data partitioning, with near-zero administrative overhead.

Figure 1. Architecture

Indexing: SAP BI accelerator includes indexes that
are vertically inverted reproductions of all the data
included in Info Cubes (i.e., fact and dimension tables
as well as master Data).
Engine: The second primary component of SAP BI
accelerator is the engine that processes the queries
in memory. The software is running on an
expandable rack of blade servers. The operating
system used for BI Accelerator is 64-bit Linux.
A. HOW BIA WORKS
- Data is loaded from source systems into an SAP Info Cube.
- An index is built for this Info Cube and stored inside the BI Accelerator appliance. These are search-engine indexes built using SAP's TREX search technology. They are stored in a file system using vertical decomposition (a column-based approach, as opposed to the row-based approach that requires more read time). This results in highly compressed data.
- BI Accelerator indexes are loaded into memory, where the query is processed. In memory, joins and aggregations are done at run time. Loading of indexes into memory happens automatically at the first query request, or it can be set to preload whenever new data is loaded.
- At run time, query requests are sent to the analytic engine, which reroutes the query to the BI Accelerator.
- Query results are returned to the end-user application. In addition to having no database license cost, there is also no OS license cost.
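To make the vertical-decomposition idea concrete, here is a toy sketch (not SAP TREX) that stores a small fact table column-wise and aggregates one key figure entirely from the two columns touched by the query; the table contents are invented.

```python
# Toy column store: each column is an independent list, so a query that sums
# 'revenue' by 'region' only has to scan two columns instead of whole rows.
columns = {
    "region":  ["EMEA", "APJ", "EMEA", "AMER", "APJ"],
    "product": ["A", "B", "A", "C", "B"],
    "revenue": [120.0, 80.0, 200.0, 150.0, 60.0],
}

def sum_by(group_col, measure_col, cols):
    """In-memory aggregation at query time, done over columns only."""
    totals = {}
    for key, value in zip(cols[group_col], cols[measure_col]):
        totals[key] = totals.get(key, 0.0) + value
    return totals

print(sum_by("region", "revenue", columns))
# {'EMEA': 320.0, 'APJ': 140.0, 'AMER': 150.0}
```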
Users want to execute reports in a flash and get the desired results within a few seconds, without spending long stretches of time waiting to check whether the report is still running.
There is a need to balance user demands to process more data in less time against the reality of cost control.
Performance-improvement techniques are especially useful for sophisticated scenarios with unpredictable query types, high data volumes, and a high frequency of queries.
B. BI ACCELERATOR INSTALLATION
- During the ramp-up of SAP NetWeaver 2004s, the BI Accelerator will only be available in a box (the BI Accelerator appliance). The box will be delivered with the complete BI Accelerator preinstalled.
- The box may be standalone or fit into an existing customer rack.
- The box will contain blade servers with 64-bit Intel Xeon CPUs in Hewlett-Packard or IBM hardware.
- The OS for the blades is Linux SLES 9.
C. INITIAL SETTINGS
Communication between the BI system and the BI accelerator server takes place using RFC modules. In order to connect a BI accelerator server to the BI system, you first have to maintain the following settings for the RFC destination:
- Set up the RFC destination for the BI accelerator server (transaction SM59).
- Set the RFC destination for the BI accelerator server in transaction RSADMIN; the parameter for the HPA RFC destination has to correspond to the above RFC destination.

D. PURCHASE AND IMPLEMENTATION:

Figure 2 Implementation
E. INDEX MAINTENANCE
Similar to aggregate maintenance via context menu
for a particular Info Cube
Or direct access via transaction RSDDV.

Figure 3. Index Maintenance


A BI Accelerator index is a redundant data store of a BI Info Cube on the BI Accelerator server. The BI Accelerator server is a specific part of the server for the SAP NetWeaver Search and Aggregation Engine (TREX). With the new BIA index maintenance wizard you can create, activate, fill and delete BI Accelerator indexes.
F. BENEFITS:
- Faster query processing and response time: query performance is improved by a factor of 10-100.
- No need for change runs in aggregates, i.e. aggregate change runs due to master data changes are handled by the BI Accelerator rather than on top of Info Cubes.
- High potential scalability: as demands grow, the system scales up by adding blades.
- Faster load times, as aggregate change runs due to master data changes are handled by the BI Accelerator rather than on top of Info Cubes.
- Lower maintenance costs: the BI Accelerator eliminates the need to create relational aggregates, may eliminate the need to deal with an OLAP cache, and may decrease the need for logical partitioning on the NetWeaver BI side (although there are other benefits to logical partitions beyond improving query processing speeds).
- Attractive packaging as an appliance that is preconfigured for analytic processing using SAP software and partners' hardware, which allows non-intrusive implementation.
G. BI ACCELERATOR TEST RESULTS
We performed various lab tests with real SAP NetWeaver BI customer data on MultiProviders with Info Cubes and DSOs containing about 900 million records together, used aggregates, and executed important business-critical queries. A drastic improvement in performance was observed.
One lab test with real SAP NetWeaver BI customer data used a MultiProvider with 9 Info Cubes and about 850 million records together, with customer aggregates in use; the 7 most important, critical queries were tested, and an improvement factor of 25 was observed.

III. CONCLUSIONS

The SAP NetWeaver BI Accelerator extends the traditional environment, crunching through terabytes of data in seconds, enabling faster business insight and turbo-charging your business intelligence solution. It allows quicker access to data that has been warehoused for varying periods of time and that, in the past, has been too complicated to retrieve and utilize effectively. It enables clients to deploy the SAP NetWeaver BI Accelerator in an easy, cost-effective way.

BIA is no longer something exotic. Many of the large BI systems have already implemented BIA, and many more projects are underway in Europe and in the Americas.

The BI Accelerator is a very user-friendly computing appliance with preinstalled software on predefined hardware, and it thus speeds up query performance. It gives customers confidence that they can buy themselves out of performance bottlenecks by adding inexpensive hardware (blade racks).
The software runs on an expandable rack of blade servers. The operating system used for the BW Accelerator is 64-bit SUSE Linux Enterprise Server (SLES). The software is optimized for specific hardware and operating-system combinations.
The hardware partners that deliver the appliance are: IBM BW Accelerator Solution, HP, Fujitsu Siemens, and the Sun BI Accelerator Offering.



Figure 4. Some of SAP reference clients.


REFERENCES

[1] Thomas Schroder, SAP BW Performance
Optimization Guide, 1st ed., Galileo Press,
Germany, (2006)
[2] Andrew J. Ross, SAP Net weaver BI
Accelerator, 1st ed., Galileo Press, Boston
(MA), (2009)
[3] Christian Merwald, Sabine Morlock, Data
Ware Housing with SAP BW7, BI in SAP Net
Weaver 2004s, 1st ed., Rocky Nook Inc., (2009)
[4] Biao Fu, Henry Fu. SAP BW a step by step
guide,2002.
[5] Arshad Khan, SAP and BW Data Warehousing.
How to plan and implement, (2005)
[6] www.sdn.sap.com
[7] http://www.sap.com/platform/netweaver
pdf/BWP_AR_IDC_BI_Accelerator.pdf
[8] Kevin McDonald, Andreas Wilmsmeier,
David C. Dixon, W.H Inmon, Mastering the
SAP Business Information Ware house Wiley
Publishing Inc, USA (2002)
[9] Elizabeth Vitt, Michael Luckevich, Stacia Misner, Making Better Business Intelligence Decisions Faster, Microsoft Press, USA.

FAST INTRUSION DETECTION SYSTEM IN NETWORK BASED ON GENETIC
ALGORITHM
Ismath Unnisa¹, Debjani Nath²
¹ M.Tech, 1st Sem, ² Sr. Lecturer,
Department of CSE, TOCE, Bangalore
panof_i@yahoo.com, debjani.nath@gmail.com

_____________________________________________________________________________________________________________________________

ABSTRACT

The process of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusion is performed by an Intrusion Detection System (IDS). Current Intrusion Detection Systems (IDS) examine all data features to detect intrusion or misuse patterns, yet some of the features may be redundant or contribute little (if anything) to the detection process. The purpose of this study is to identify important input features for building an IDS that is computationally efficient and effective. This paper describes a technique for applying a Genetic Algorithm (GA) to network Intrusion Detection Systems (IDSs).

Keywords- Intrusion Detection, Genetic
Algorithm, Cross Over, Mutation

INTRODUCTION

Intrusion detection systems are increasingly a key part of systems defense. Various approaches to intrusion detection are currently being used, but they are relatively ineffective. Information has become an organization's most precious asset, and organizations have become increasingly dependent on it, since more and more information is stored and processed on network-based systems. The widespread use of e-commerce has increased the need to protect these systems to a very high extent. Intrusion detection has become an integral part of the information security process. Since it is not technically feasible to build a system with no vulnerabilities, intrusion detection continues to be an important area of research. Intrusion detection is useful not only for detecting successful intrusions, but also for monitoring attempts to break security, which provides important information for timely countermeasures. The primary aim of Intrusion Detection Systems (IDS) is to protect the availability, confidentiality and integrity of critical networked information systems. Intrusion Detection Systems (IDS) are defined by both the method used to detect attacks and the placement of the IDS on the network.


Figure1: A generic intrusion detection model

Intrusion detection is classified into two types:
misuse intrusion detection and anomaly intrusion
detection. Misuse intrusion detection uses well
defined patterns of the attack that exploit
weaknesses in the system and application software
to identify the intrusions. Anomaly intrusion
detection identifies deviations from the normal
usage behavior patterns to identify the intrusion. The
motivation for using the Genetic approach is to
improve the accuracy of the intrusion detection
system when compared to using individual
approaches. The Genetic approach combines the best
results from the different individual systems
resulting in more accuracy.

LITERATURE SURVEY

Intrusion Detection Systems are increasingly a key part of systems defense. Whenever an intrusion occurs, the security and value of a computer system are compromised. Network-based attacks make it difficult for legitimate users to access various network services by purposely occupying or sabotaging network resources and services. This can be done by sending large amounts of network traffic, exploiting well-known faults in networking services, and overloading network hosts.
Existing intrusion detection systems, especially commercial ones that must resist intrusion attacks, are based on the misuse detection approach, which means these systems are only able to detect known attack types, and in most cases they tend to be ineffective for various reasons, such as the unavailability of attack patterns, the time needed to develop new attack patterns, insufficient attack data, etc. IDSs can also be divided into two groups depending on where they look for intrusive behavior: network-based IDS (NIDS) and host-based IDS. The former refers to systems that identify intrusions by monitoring traffic through network devices (e.g. the Network Interface Card, NIC). A host-based IDS monitors file and process activities related to the software environment associated with a specific host; some host-based IDSs also listen to network traffic to identify attacks against a host. There are many techniques for intrusion detection, such as neural trees and hybrid algorithms. In this paper we deal with the genetic algorithm.

GENETIC ALGORITHM
A genetic algorithm starts with the selection of two parents from the current population. The initial population is selected at random (which could be by the toss of a coin, computer generated, or by some other means), and the algorithm continues until a certain time has elapsed or a certain condition is met. The three basic operators of genetic algorithms are selection, crossover and mutation. The strength of GAs is their ability to search heuristically for solutions when all else fails, but before the algorithm can be used, the solutions to the problem must be representable in a suitable format, such as a series of 1's and 0's; if this can be done, the GA will do the rest. Figure 2 shows the structure of a simple genetic algorithm. It starts with a randomly generated population and evolves through selection, recombination (crossover), and mutation. Finally, the best individual (chromosome) is picked out as the final result once the optimization criterion is met.


Figure 2: Structure of a simple genetic algorithm.


A genetic algorithm is quite straightforward in
general, but it could be complex in most cases. For
example, during the crossover operation, there could
be one-point crossover or even multiple point
crossovers. There are also parallel implementations
of genetic algorithms. Sometimes series of
parameters (for example, mutation rate, crossover
rate, population size, chromosome size, number of
evolutions or generations, and how the selection is
done) needs to be considered with specific selection
process. The final goal is to search the solution space
in a relatively short period of time.

A. Selection
Suppose we have a population P of n individuals sorted according to the individuals' fitnesses. We need to select two parents for the crossover operator. The selection in our work is done by the weighted roulette-wheel algorithm.
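The paper only names the weighted roulette (fitness-proportionate) selection; the sketch below is one standard way to implement it, with a made-up population of bit strings and fitness values.

```python
import random

def roulette_select(population, fitnesses):
    """Fitness-proportionate ('weighted roulette') selection of one parent."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]

# Hypothetical population of bit strings with fitness values
pop = ["0010", "1001", "0001", "1111", "0111"]
fit = [2.0, 9.0, 1.0, 16.0, 7.0]
parents = (roulette_select(pop, fit), roulette_select(pop, fit))
print(parents)
```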

B. Crossover
In GP the tree structure crossover operation is
implemented by taking randomly selected subtrees
in the individuals and exchanging them.

C. Mutation
We randomly select two individual sequences from the current population and generate a random list of integers that represent the site indices of the mutation positions in the selected sequences. For each randomly selected site position, we interchange the characters at that position between the two sequences. The maximum position value in the random list is the length of the shorter sequence.

For example:
Individual Sequence1: ATCGCCGTACCCGGTAAATTTT
Individual Sequence 2: CGCTTACAAGGCCCC
Random List of interchange positions: 2, 6,10
New mutant child 1: AGCGCAGTAGCCGGTAAATTTT
New mutant child 2: CTCTTCCAACGCCCC
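A short sketch of the mutation step exactly as described above, reproducing the worked example; the interchange positions 2, 6 and 10 are given explicitly here instead of being drawn at random.

```python
def interchange_mutation(seq1, seq2, positions):
    """Swap the characters of two sequences at the given 1-based site positions;
    positions never exceed the length of the shorter sequence."""
    a, b = list(seq1), list(seq2)
    for pos in positions:
        i = pos - 1
        a[i], b[i] = b[i], a[i]
    return "".join(a), "".join(b)

child1, child2 = interchange_mutation(
    "ATCGCCGTACCCGGTAAATTTT", "CGCTTACAAGGCCCC", [2, 6, 10])
print(child1)  # AGCGCAGTAGCCGGTAAATTTT
print(child2)  # CTCTTCCAACGCCCC
```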


INTRUSION DETECTION USING GENETIC
ALGORITHM

Genetic algorithms can be used to evolve simple rules for network traffic. These rules are used to differentiate normal network connections from anomalous connections, where anomalous connections refer to events with a probability of being intrusions. The rules stored in the rule base are usually of the following form:
if { condition } then { act }
For the problem presented above, the condition usually refers to a match between the current network connection and the rules in the IDS, such as source and destination IP addresses and port numbers (used in TCP/IP network protocols), the duration of the connection, the protocol used, etc., indicating the probability of an intrusion. The act field usually refers to an action defined by the security policies within an organization, such as reporting an alert to the system administrator, stopping the connection, logging a message into the system audit files, or all of the above.
For example, a rule can be defined as:
if { the connection has the following information: source IP address 124.12.5.18; destination IP address 130.18.206.55; destination port number 21; connection time 10.1 seconds } then { stop the connection }
This rule can be explained as follows: if there exists a network connection request with source IP address 124.12.5.18, destination IP address 130.18.206.55, destination port number 21, and connection time 10.1 seconds, then stop this connection establishment. This is because the IP address 124.12.5.18 is recognized by the IDS as one of the blacklisted IP addresses; therefore, any service request initiated from it is rejected.
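A minimal sketch of how such an if-condition-then-act rule could be matched against a connection record; the field names and the sample connection are hypothetical, and the rule values are taken from the example above.

```python
def matches(rule, connection):
    """A connection triggers the rule if every field in the condition matches."""
    return all(connection.get(field) == value
               for field, value in rule["condition"].items())

rule = {
    "condition": {
        "src_ip": "124.12.5.18",
        "dst_ip": "130.18.206.55",
        "dst_port": 21,
        "duration_s": 10.1,
    },
    "act": "stop the connection",
}

connection = {"src_ip": "124.12.5.18", "dst_ip": "130.18.206.55",
              "dst_port": 21, "duration_s": 10.1, "protocol": "tcp"}
if matches(rule, connection):
    print(rule["act"])
```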



Figure 3: Flowchart for Genetic Algorithms



Figure 4: Representation of trees

Crossover:

The crossover rules are: two strings from the mating pool are selected (randomly), the crossover location(s) are determined (randomly), and the bits of the strings are interchanged between the crossover location(s). Figure 5 shows the crossover.

Single-point crossover (parent strings on the left, child strings on the right):
(0)  0 0 0 0   ->   0 0 0 1  (1)
(11) 1 0 1 1   ->   1 0 1 0  (10)

Crossover can be beneficial or detrimental; the crossover probability Pc is typically 0.7 to 0.9.

Mutation:

- Flipping of bits of strings at random
- Local search around the current solution
- Maintains the genetic diversity of the population
- Enables the search to climb towards the global optimum
- Mutation probability pm is typically 0.005 to 0.015

Illustration with pm = 0.10:
Original population: 0 0 1 0, 1 0 0 1, 0 0 0 1, 1 1 1 1, 0 1 1 1   -> (2) (9) (1) (16) (7)
After mutation:      0 0 1 0, 1 0 0 1, 0 1 0 1, 1 1 1 1, 0 1 1 0   -> (2) (9) (5) (16) (6)
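The sketch below implements the two operators illustrated above: single-point crossover of two bit strings and per-bit mutation with probability pm; the crossover point and random seed are fixed only so the output is reproducible.

```python
import random

def single_point_crossover(p1, p2, point):
    """Exchange the bits of two equal-length strings after the crossover point."""
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(bits, pm):
    """Flip each bit independently with probability pm."""
    return "".join(b if random.random() > pm else ("1" if b == "0" else "0")
                   for b in bits)

random.seed(1)  # fixed only to make the illustration reproducible
# Parents 0000 (=0) and 1011 (=11) crossed at the last bit -> 0001 (=1), 1010 (=10)
print(single_point_crossover("0000", "1011", 3))
# Population mutated with pm = 0.10, as in the illustration above
print([mutate(s, 0.10) for s in ["0010", "1001", "0001", "1111", "0111"]])
```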

SYSTEM ARCHITECTURE

Figure 6. Architecture of applying GA to intrusion detection

Figure 6 shows the structure of this implementation. We need to collect enough historical data that includes both normal and anomalous network connections. A data set for testing IDSs, represented in the Tcpdump binary format, is a good choice. This is the first part of the system architecture. The data set is analyzed by the network sniffers, and the results are fed into the GA for fitness evaluation. The GA is then executed and the rule set is generated. These rules are stored in a database to be used by the IDS.

CONCLUSION
In this paper, we discussed a methodology for applying genetic algorithms to network intrusion detection techniques. A brief overview of Intrusion Detection Systems (IDS), genetic algorithms, and related detection techniques was given, the system architecture was introduced, and factors affecting the GA were addressed in detail. We presented a genetic algorithm for intrusion detection systems (IDS) aimed at improving detection/classification performance by reducing the input features. Other techniques used for intrusion detection include neural trees, immune systems, ant colony optimization, and so on.

REFERENCES
[1] R. G. Bace, Intrusion Detection, Technical Publishing (ISBN 1-57870-185-6).
[2] T. Lunt, Detecting intruders in computer systems, Conference on Auditing and Computer Technology, 1993.
[3] Wei Li, Mississippi State University, Mississippi State, MS 39762.
[4] GAlib: A C++ Library of Genetic Algorithm Components, http://lancet.mit.edu/ga/
[5] Bezroukov, Nikolai. 19 July 2003. Intrusion Detection (general issues). Softpanorama: Open Source Software Educational Society. URL: http://www.softpanorama.org/Security/intrusion_detection.shtml (30 Oct. 2003).
[6] Crosbie, Mark, and Gene Spafford. 1995. Applying Genetic Programming to Intrusion Detection. In Proceedings of the 1995 AAAI Fall Symposium on Genetic Programming, pp. 1-8. Cambridge, Massachusetts. URL: http://citeseer.nj.nec.com/crosbie95applying.html (30 Oct. 2003).

ON FIVE-TECHNIQUE ROUTING FOR SPANNING TREE & FPGA.

B.G. Prasanthi¹, Dr. T. Bhaskara Reddy²
Dept. of Computer Science & Technology, S.K. University, Anantapur.
Email: journal.balaji@gmail.com
____________________________________________________________________________________________________________________________________

ABSTRACT
Spanning tree is one of the well-known redundancy problems. It has many applications in VLSI layout and routing, wired communication and various other fields. The paper presents a new performance- and routability-driven routing algorithm for the Spanning Tree Protocol (STP). A key contribution of our work is overcoming the four essential limitations of the previous routing algorithms: avoiding redundancy and increasing efficiency with better utilization of storage and hardware resources. To this end we formulate an exact routing density, net and delay calculation that is based on a precise analysis of the structure of the spanning tree protocol and utilize it consistently in global and detailed routing. With the introduction of the proposed accurate routing, we describe a new routing algorithm called the Integrated Autonomous Router System, which is fast and yet produces remarkable routing results in terms of routability, net and path delays. We performed extensive experiments to show the effectiveness of our algorithm.

For the interconnection of multiple LANs via bridges, how the QoS of a flow depends on the length of its end-to-end forwarding path is reported in [1]. A review of the dynamic effects that occur in large local area networks (LANs) has also been described; effects are considered at three different levels: single-network segments, connections between adjacent segments, and problems that appear only in large networks. In [3][4] there are intricacies involved like path redundancy, change in topology, and election of the root. Paper [5] deals with providing efficient routing between devices in computer networks; specifically, it assumes the existence of a spanning tree from each access point to all devices within the transitive transmission range of the access point. [6] studies the problem of constructing a minimum-weight spanning tree (MST) in a distributed network, one of the most important problems in the area of distributed computing. [7] describes an algorithm that tries to eliminate loops in bridged networks. In this study the correctness of the STP algorithm is formally verified using Extended Rebeca [8]. It resolves the route over any arbitrary network topology between any pair of communicating end-stations to be the shortest possible [9][10].
INTRODUCTION
1 Spanning Tree Operation:
The process of forwarding a frame from a source towards its destination is a two-step process. The first step is learning, and the second is forwarding of the frame. If the destination address is not known, the frame will be forwarded to all ports (except the port from which it arrived).
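
A minimal model of this learn-then-forward behaviour (the class and method names are ours, for illustration only):

class Bridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                      # learned: source MAC -> ingress port

    def receive(self, frame, in_port):
        # Step 1: learning - remember which port the source address lives behind.
        self.mac_table[frame["src"]] = in_port
        # Step 2: forwarding - send towards the known port, or flood if unknown.
        out = self.mac_table.get(frame["dst"])
        if out is None:
            return self.ports - {in_port}        # flood to all ports except the arrival port
        return {out} if out != in_port else set()

bridge = Bridge(ports=[1, 2, 3])
print(bridge.receive({"src": "AA", "dst": "BB"}, in_port=1))  # BB unknown -> flooded to {2, 3}
print(bridge.receive({"src": "BB", "dst": "AA"}, in_port=2))  # AA learned -> {1}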

Spanning tree is needed in networks with
redundancy, where undesirable loops can form.

To provide path redundancy, STP defines a tree that
spans all switches in an extended network. STP
forces certain redundant data paths into a standby
(blocked) state. If one network segment in the
Spanning Tree Protocol becomes unreachable, or if
Spanning Tree Protocol costs change, the spanning
tree algorithm reconfigures the spanning-tree
topology and reestablishes the link by activating the
standby path.

There are further intricacies involved, like election of
root, and propagation of change in topology etc, but
describing them would be beyond the scope of this
document.

1.1 Problem definition and conceptual solution
1) The Spanning Tree Protocol ensures a loop-free network, but at a cost: the redundant link will be blocked, and no traffic can flow through it. When network users buy a chassis and are charged on a per-port basis, this becomes a considerable cost. It can be even more serious for core or central layer 2 switches, as it is important to have redundant links between such chassis. Consider Fig. 1, where there are two paths for traffic to flow from B to C. With STP active, only the path via the root will be open.

Although a loop is avoided, this clearly has two disadvantages:

1) Assume 1 Gb links connecting each of the switches; from B to C there is effectively 2 Gbps of bandwidth, but only 1 Gb is practically available.
2) To reach B from C, the shorter path is directly from B to C, but since it is blocked, the path B-A-C has to be taken.


Technique 1 Solution

1) As a solution, we introduce the concept of channeling among such redundant links. This is to be configured by the administrator, preferably over the high-traffic zones and backbone switches. The channel will typically be formed between switches B and C (the non-root devices here) and will tell the software that there exists a redundant path, one directly and another via the root, A.

The administrator can configure a traffic threshold beyond which he might want the switch to use the redundant path (B to C).

The channel will consider the links B-C and B-A-C as one bundle, sharing common properties.

An alternative solution used in the field now is to make the switches A, B and C roots for different VLANs, so that different sets of links become blocking and hence load balancing is achieved to an extent. But if one VLAN is known to send more traffic than the others, or if we have just 1 or 2 VLANs, the problem is not effectively solved.
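
A minimal sketch of the threshold decision described above (assuming the administrator has configured the channel and a utilisation threshold; the function name and figures are illustrative, not from the paper):

def pick_path(traffic_bps, threshold_bps, direct_path, root_path):
    # Spill over onto the redundant (STP-blocked) link only once traffic crosses the threshold.
    return direct_path if traffic_bps > threshold_bps else root_path

# Example: a 1 Gb/s uplink with an 800 Mb/s threshold configured on the B-C channel.
print(pick_path(950e6, 800e6, direct_path=["B", "C"], root_path=["B", "A", "C"]))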

2) For networks having a huge number of layer 2 devices, the convergence time is often not acceptable, even for a minor topology change. For instance, consider a network as in Figure 2: for a change/toggle in a link, say X (marked in the figure), the TCN exchange and then stabilizing the network to be safe for traffic forwarding would take at least 30-60 seconds.

Algorithm

Set the nodes according to the path criticality estimated using the measure.
For each set ni in the list
    Find the edges in the net according to edge-path pi.
    If (edge-path pi does not already exist)
        Perform shortest path from the signal source.
        Select the k-1 shortest two-terminal nets whose routes match the signal flow of ni.
    Else
        Assign edge-path pi to the particular pair of terminals.
        Construct a minimum spanning tree and generate a two-terminal net for each edge of the spanning tree.
    End if
End for

1.2 Problem definition and conceptual solution

In the growing LAN network, the aim is to control redundant link failures in layer 2 and to overcome the issues faced by administrators in configuring different types of protocols.

[Figure 1: Switches A (ROOT), B and C connecting Hosts 1, 2 and 3, with the B-C link in blocking state. A further diagram shows Switch B with ports B1-B4 (uplinks to A and C) and B's MAC address table: B3 - MAC 6, B1 - MAC 5, B1 - MAC 1.]















There are four stages a node has to pass through, from source to destination, before sending packets: the blocking, listening, learning and forwarding states. Having every node pass through all four stages is time consuming, so in order to speed up data forwarding, an optimized solution is provided for the above fixed network scenario.

Technique 2 Solution

Switches B and C can be configured so that the switch port begins forwarding as soon as the end system is connected, bypassing the listening and learning states and eliminating up to 30 seconds of delay before the end system can begin sending and receiving traffic.

Switches A and D can be configured on the access switch which is directly connected to the root bridge; this is effective only for frames received from the root as well as from the blocked channel port.

Limitation:
This concept, however, has the limitation that the administrator has to apply additional configuration on the STP-blocked ports, and the switch has the overhead of maintaining two sets of MAC addresses.
1.3 Spanning tree zones

For this second level of optimization, consider fig. 2 below, divided into zones.
Figure 6

The administrator configures the topology
into zones. Spanning tree now elects roots within the
zones.

Assume that in zone 1, A becomes the root and E in zone 3. Now, A only has to talk to the roots of other zones. Among the roots, one is again elected as the primary root, which as per the normal spanning tree BPDU exchange will be the one with the lowest MAC. This way the additional overhead is small. Now, A sees zone 2 and zone 3 as just one device each. Spanning tree convergence will result in the port between zone 3 and zone 2 being blocked, i.e. E to H, via G. Switches like G here should be transparent to the BPDU exchange between zone roots.
Technique 3 Solution

To solve such issues in larger networks, we introduce the concept of division into zones or areas. This is similar to what routing protocols like OSPF use. The administrator has the task of dividing the network into zones. Each zone will have a primary root based on selection criteria similar to selecting the root in a combined network.

Consider a zone comprising switches E, F and G, and assume E is the root here. E now has the responsibility of talking (directly or indirectly) to the outside world. What we achieve from this is that topology changes or fluctuations within a zone, say among switches A, B, C and D, will be contained and resolved without putting the whole stream of downstream switches into the discarding or blocking state.
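
A sketch of the two-level election implied above (zone roots first, then a primary root chosen by lowest MAC among the zone roots, as in normal STP; the data layout and MAC values are ours):

def elect_roots(zones):
    # zones: zone name -> {switch name: MAC address}. Lowest MAC wins, as in normal STP.
    zone_roots = {zone: min(switches, key=switches.get) for zone, switches in zones.items()}
    macs = {sw: mac for switches in zones.values() for sw, mac in switches.items()}
    primary = min(zone_roots.values(), key=macs.get)   # lowest MAC among the zone roots
    return zone_roots, primary

zones = {
    "zone 1": {"A": "00:01", "B": "00:02", "C": "00:03", "D": "00:04"},
    "zone 3": {"E": "00:05", "F": "00:06", "G": "00:07"},
}
print(elect_roots(zones))   # zone 1 -> A, zone 3 -> E, primary root A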


This done, now to reap the benefits. Consider a breakage of link Y (fig 6), which is in forwarding mode among E, F and G. Intra-zone spanning tree will immediately activate forwarding on the link via F. All this will be transparent to zones 1 and 2, and with a smaller number of switches per zone, convergence is a matter of milliseconds. The same holds for a case of topology change.

[Figure 2: Example topology of switches A-I divided into zones, with the root switches and the link X marked.]


The Spanning Tree Zone concept is mainly for huge clusters of switches which have spanning tree configured on them. These take high convergence times, even to address a link failure or topology change at one end. By dividing the network into zones and using the advanced features of the switches, the convergence time and recalculation overhead are considerably reduced and are not felt across the network.
1.4 Problem definition and conceptual solution

This algorithm describes a technique to generate and manage unique integer indexes from a specified range of integers. The generated index can be used in any application where a unique integer from a specified range needs to serve as the key to a particular record, and where the index needs to be reused once the record is freed.

The efficiency of this algorithm lies in its simplicity and its capability to manage a large range of indexes with minimal resources, in terms of both running time and memory requirements.

Technique 4 Solution


This algorithm is best suited for problems where 0- or 1-based unique indexes (non 0- or 1-based indexes can also be managed by calculating a fixed offset) must be managed with frequent operations such as checking whether an index is free, finding the first free index, and reserving and freeing indexes, with optimal memory usage in the average case.

One application, though not the only one, is index generation for MIB tables, where a unique index needs to be used with a conceptual row for creation/retrieval/destroy operations.


Algorithm

The algorithm works on bit states (0 or 1); one of the states is used to indicate a free index and the other a used index. Thus 1 bit of memory is required to represent one integer index, and the state of this bit determines whether the index is free or occupied.

For example, to generate 512 integer indexes we need 512 bits of memory, that is, 64 bytes. In addition, we need a few more bytes to make searching for indexes faster and more efficient.





The picture above illustrates the core of the algorithm for an 8-bit base size. Level 0 has 8 bits of memory, each of which can point to another byte of memory at level 1. Again, at level 1 each bit can point to another byte at the next level. Thus level 1 has 8 bytes of memory and level 2 has 64 bytes of memory. If level 2 is our final level, then each bit at level 2 represents a unique index.

In the above example, level 2 has 64 bytes and is capable of managing 64*8 = 512 unique indexes.

Occupying an Index:
Initially all bits at all levels will be set to
0. This indicates all indexes are free to
be used.

reserveIndex(index)
    h = MAX_LEVEL - 1
    while h >= 0
        p = index % BASE_SIZE
        index = index / BASE_SIZE
        idx_db[h][index] = idx_db[h][index] OR (1 << p)
        if idx_db[h][index] == ALL_SET
            h = h - 1
        else
            break



Reserve index 1

Reserve index 7, Case 2a
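
A runnable sketch of the hierarchy described above, for an 8-bit base size and three levels (512 indexes); the helper names are ours, and only the reserve and find-first-free operations are shown:

BASE_SIZE = 8
MAX_LEVEL = 3                        # levels 0..2; bits at level 2 are the actual indexes
ALL_SET = (1 << BASE_SIZE) - 1

# idx_db[h][i] is one BASE_SIZE-bit word; a set bit means "used" (or "subtree full" above the last level).
idx_db = [[0] * (BASE_SIZE ** h) for h in range(MAX_LEVEL)]

def reserve_index(index):
    h = MAX_LEVEL - 1
    while h >= 0:
        p = index % BASE_SIZE
        index = index // BASE_SIZE
        idx_db[h][index] |= (1 << p)           # mark used at this level
        if idx_db[h][index] == ALL_SET:        # word is full: propagate "full" one level up
            h -= 1
        else:
            break

def first_free_index():
    index = 0
    for h in range(MAX_LEVEL):                 # walk down, always following the first clear bit
        word = idx_db[h][index]
        if word == ALL_SET:
            return None                        # no free index left
        p = next(b for b in range(BASE_SIZE) if not word & (1 << b))
        index = index * BASE_SIZE + p
    return index

reserve_index(1)
reserve_index(7)
print(first_free_index())                      # -> 0, the lowest index still free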


1.5 Problem definition and conceptual solution

Routers and forwarders in a router domain are likely to have different amounts of hardware resources. The least capable switch should not hold the performance of the router network domain to ransom. This write-up lists some smart choices the forwarders and routers in the router domain can make to most efficiently utilize the hardware forwarding capabilities of each node in the network.

In a router domain, all forwarders and routers belonging to a single router domain keep every other forwarder or router updated with all the host routes for each host known and present in the network. This is done for every subnet in the router domain. Each forwarder and router installs all the routes in the fast-path forwarding database (for example, hardware forwarding tables).

In this new scheme, all the routers and forwarders learn all the routes and build the topology table for all subnets, but the difference is that the router/forwarder can choose to put only a subset of the routes in the fast-path (hardware) database. This saves hardware resources without sacrificing network performance in most cases. In the cases where the chosen network path is potentially suboptimal, a further set of enhancements loosens the optimization to further improve network performance without wasting forwarding resources.

Technique 5 Solution


Our algorithm produces better results compared to the above one, as it is more aggressive, but it needs more information to get its job done right, such as a guarantee of no routing loops. It needs to do *one* of the following:

A new router route-distribution protocol is used that propagates the link-state topology of the network, and every host route is advertised with the source forwarder (the first forwarder to which the host is connected). If the link-state topology is known along with the source forwarder for the host route, any forwarder or router in the network can safely compute the adjacencies that can be used to reach the host (these adjacencies may have different load-distribution values), but there is a guarantee that there will never be a routing loop. The forwarder can ignore the load distribution when performing the equality check in step IIa below, and just treat all the adjacencies as equal cost. Picking up the less optimal non-looped paths enables the algorithm in step IIa to perform an aggressive less-specific match, as it takes all the paths to reach the destination, irrespective of the load distribution.

Another alternative is to play with the link costs such that the cost of the distribution links is less than that of the forwarder-to-router uplinks, then use EIGRP for route distribution and exploit a concept similar to variance. The EIGRP concepts of feasible distance, reported distance and feasible successor are exploited to always prevent a routing loop while still picking up less optimal paths. If the less-specific route's adjacency set consists of the same adjacency members as the host route, treat the routes as equal; if not, increase the member set of the host route by including the feasible successors, and if the sets then have the same members, the optimization can be done. Picking up the less optimal non-looped paths enables the algorithm in step IIa to perform an aggressive less-specific match, as it takes all the paths to reach the destination, irrespective of the load distribution. That is why it is called an adaptive optimization algorithm.
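
The set comparison at the heart of this optimisation can be sketched as follows (a simplified sketch; step IIa itself is not reproduced here, and the function name and adjacency labels are ours):

def can_suppress_host_route(host_adjs, covering_adjs, feasible_successors=frozenset()):
    # Suppress the host route from the fast path if the covering (less specific) route
    # reaches the destination over the same adjacency set, ignoring load distribution.
    if set(host_adjs) == set(covering_adjs):
        return True
    # Otherwise widen the host route's set with loop-free feasible successors and retry.
    return set(host_adjs) | set(feasible_successors) == set(covering_adjs)

# The host route uses uplink1 only; the subnet route uses uplink1 and uplink2, but uplink2
# is a feasible successor, so the host route need not occupy a hardware table entry.
print(can_suppress_host_route({"uplink1"}, {"uplink1", "uplink2"}, {"uplink2"}))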

Algorithm

(1) During a Measurement-Period, t in (Cx-1, Mx)
for every neighbor node j do
    Sij <- the monitoring scheme for the link from node i to node j
    if Sij == PASSIVE or ACTIVE then
        monitor egress traffic to node j
    else if Sij == COOPERATIVE then
        monitor egress traffic from node i to node k that node j overhears
    end if
    if node i received a cooperation request (.) from node j then
        overhear cross traffic from node j to node .
    end if
end for
(2) At the end of a Measurement-Period, t = Mx
for every neighbor j do
    record measurement results from node i to node j
    if node i received a cooperation request (.) from node j then
        send node j a report of overhearing traffic from node j to node .
    end if
end for
(3) During an Update-Period, t in (Mx, Mx + Ux)
    process measurement report(s) from other nodes, if any
(4) At the end of an Update-Period, t = Mx + Ux (or, t = Cx)
for every neighbor j do
    calculate the quality of the link from node i to j using Eq. (2.1)
    run the transition algorithm (in Figure 2.2) for node j
    if transition to COOPERATIVE then
        choose node k that node j can overhear
        send a cooperation request (k) to node j
    else if transition to ACTIVE then
        schedule active probe packets
    end if
end for

Experimental results

We have implemented the proposed IARS routing algorithm in the C programming language. We explore the trade-offs and performance between SEGA and TRACER with four circuits, and the results are as follows:

Circuits    SEGA   TRACER   ours
2 bit alu    947     714     618
4 bit alu   1392    1037    1444
apex 7       339     299     320
term 1       129     136     131


, Bangalore-560090
monitor egress traffic from node i to node k that
if node i received a cooperation request (.) from
overhear cross traffic from node j to node .
Period, t = Mx
record measurement results from node i to node j
if node i received a cooperation request (.) from
send node j a report of overhearing traffic from node
(Mx,Mx + Ux)
process a measurement report(s) from other nodes,
Period, t = Mx + Ux (or, t = Cx)
calculate the quality of link from node i to j using Eq.
run the transition algorithm (in Figure 2.2) for node j
choose node k that node j can overhear
send a cooperation request (k) to node j
We have implemented the IARS proposed routing
algorithm in C programming language .We explore
the tradeoffs and performance between SEGA and
circuits and the results are as
ours
618
1444
320
131

Circuits TRACER
2 bit alu 66
4 bit alu 207
apex 7 30
term 1 16


Circuit
s

Routability
Performan
ce
Types
Trac
e
Delay Trac
e
Delay
alu2 9
714
11
707
alu4 11
1037
15
1096
apex7 8
299
13
258
term1 7
136
16
103

0
1000
2000
3000
4000
5000
2 bit
alu
4 bit
alu
apex 7
0
50
100
150
200
250
0 2 4
NICE-2010
560090 281

ours
70
200
18
12

Performan
Routability
Performa
nce
Delay Trac
e
Delay Tra
ce
Delay
707
9
514
11
692
1096
12
70
14
922
258
11
131
11
147
103
9
84
8
89
apex 7 term 1
Path delay
ours
Path delay
TRACER
Path delay
SEGA
6
Router
time
TRACER
Router
time
ours



CONCLUSION
We have presented a new performance- and routability-driven routing algorithm for STP. The key contribution of our work is the formulation of much more reliable and accurate metrics derived from a careful analysis of the interconnection of the switch block. This then led us to develop an efficient routing technique called the Integrated Autonomous Router System, which is well suited to improving the metrics very effectively. Extensive experimental data showed that the proposed routing algorithm is very effective in improving the overall performance of the design as well as the routability. In summary, compared to the results produced by SEGA and TRACER for the benchmark circuits, our algorithm reduced the delay of the longest net by 53.8 percent (on average) and the delay of the longest path by 46.8 percent, even with about 1.1625 times less execution time.


REFERENCES

[1] King-Shan Lui, Whay Chiou Lee, A transparent spanning tree bridge protocol with alternate routing, SIGCOMM Computer Communication Review, Vol 32, pp. 33-46.

[2] L. Bosack and C. Hedrick, Problems in Large LANs, IEEE Network Magazine, 2(1), pp. 22-28, Jan.

[3] Duato, Jos Serrano, A New Transparent Bridge Protocol for LAN Internetworking using Topologies with Active Loops, Proceedings of the 1998 International Conference on Parallel Processing, pp. 295-303, August 10-14, 1998.

[4] King-Shan Lui, Whay Chiou Lee, Spanning Tree Alternate Routing Bridge Protocol, University of Illinois at Urbana-Champaign, Champaign, IL.

[5] Roy Friedman, Efficient route discovery in hybrid networks, Technion Institute of Technology, pp. 69-75, 2008.

[6] Michael Elkin, A faster distributed protocol for constructing a minimum spanning tree, Journal of Computer and System Sciences, Vol 72, pp. 1282-1308, 2006.

[7] Hussein Hojjat, Hootan Nakhost, Marjan Sirjani, Formal verification of Spanning tree protocol, Electronic Notes in Theoretical Computer Science, Vol 159, pp. 139-154, 2006.

[8] N. Linge, E. Ball, R. Tasker, A Bridge protocol for creating a spanning tree topology within an IEEE 802 extended LAN environment, Computer Networks and ISDN Systems, Vol 13, pp. 323-332.

[9] Understanding and Designing Networks using Spanning Tree and UplinkFast Groups.

[10] EtherChannel concepts.






















A PROPOSED ARCHITECTURE FOR SCADA NETWORK SECURITY
Ms.V.Lakshmi priya,
Kalasalingam University, Anand Nagar, Krishnankoil
lakshmipriya.0810@gmail.com
Mr.C.Bala Subramanian, Lecturer of IT
Kalasalingam University, Anand Nagar, Krishnankoil
baluece@gmail.com
_____________________________________________________________________________________________________________________________
ABSTRACT
Nowadays security plays a major role in networks. The growing size of such proprietary networks creates increased opportunity for successful attack. Supervisory control and data-acquisition (SCADA) networks are vulnerable to attack from both internal and external intruders. The proposed mechanism is used to provide continuous monitoring and authentication to the network. Authentication is provided for the individual users and the nodes. Data is collected from the various wireless sensor nodes. An active mode of operation is used to block unauthorized nodes. After that the packets are given to the main system. MD5 (Message-Digest algorithm 5) is a widely used cryptographic hash function with a 128-bit hash value, and it is used to verify the packets. The packets may be normal or malicious packets. Normal packets are given to the overall network. The malicious packets are given to the sink node for finding the compromised node in the network.
Keywords: Supervisory control and data-
acquisition (SCADA) system, Wireless sensor
nodes, Packet format.

1. INTRODUCTION
A SCADA system is a security mechanism which is used to protect the network from attackers [both internal and external intruders]. The main aim of the SCADA system is to provide reliability, security and availability. This system is used for all real-time environments. In previous work, various types of modes have been used, and each mode has some limitations.
In earlier systems, passive modes were mostly used. Passive mode is just like a mirror page: it will monitor and analyze the packet flow and then send the packet to the destination. All the packets, including malicious packets, are passed through in the traffic. Only information is obtained from this mode of operation. Figure 1 is the example diagram for the passive mode of operation.
After that we go for the half-active mode. It is also somewhat similar to the previous one: it monitors and gathers information, and if any malicious packet occurs, it halts the transaction. Figure 2 is the example diagram for the active mode of operation. Since these modes have various limitations, we go for the new proposed system, in which continuous monitoring is present and loss of packets may be avoided. The trust system intercepts and reacts to status messages and commands from network nodes destined for the master control station and other nodes in the network. The trust system's cost-effective, modular acquisition and employment options are well suited for meeting a wide range of implementation requirements.



Fig.1 Passive mode of operation

The previous existing system had an active mode with TCP/IP. Active mode [1] is much better than the previous modes of operation. The whole network is continuously monitored, the packet flow is analyzed, and a report is sent to the system.

Fig.2 Active mode of operation

In our proposed system, the same active mode of operation is used. The information is collected from the various wireless sensor nodes [2]. In contrast, in the existing one the data or packets are collected only from the part of the network which is present near the system; it is not able to collect packets from the wireless nodes.

II. SCADA SYSTEM
In the past, a lot of these control systems
operated in isolated environments with
proprietary technologies. Consequently, they
faced little to no cyber-security risk from external
attackers. But today, modernization and the
adoption of available commercial technologies
have resulted in these systems becoming
increasingly connected and interdependent.
Security has been lagging during the increased
modernization of these systems.
It is fairly common for devices in the control space to use default passwords for access and control. The problem is
further complicated by the move toward
commercial, off-the-shelf (COTS) appliances and
systems being integrated with the networks or
part of the control systems themselves. While
cutting costs and eliminating some of the
proprietary nature of control systems, these
appliances and systems bring with them the well-
known passwords and vulnerabilities that each
product may be subject to. Often these COTS
systems may end up providing a point of entry for
an attacker into the critical control network.
Attacks focusing on inserting faulty data
can originate at the sensors on the
communication networks that carry the data.
Sensors that provide information about the
control systems are subject to data falsification.
They are the core of the control system and
provide a fairly centralized point of control and
data aggregation. These systems are subject to
directed exploits in the control system software,
exploits against the operating system, Trojans,
malware, spyware, and pretty much any attack that other computers are subject to.

III. LITERATURE REVIEW

A. Wireless Sensor Mesh Networks in Highly
Critical Systems
The SCADA systems here are mainly focused on wireless sensor nodes [2]. These sensors are used to measure environmental data, such as temperature, pressure, vibration, light intensity, etc. Basically, these sensors are located in various remote areas. The main aim of a sensor is to collect information from its current location. These devices should be autonomous, and lightweight devices are used. The hardware and software for the node should be reliable and efficient. By using these wireless sensor nodes, the whole system can be protected from intruders.
Various types of attacks are present. The sniffing attack is one of the main attacks present in the system; a sniffing attack may be carried out by both an insider and an outsider. Other common attacks present in the system are the jamming attack, the sinkhole attack and the wormhole attack.

B. Understanding Trust and Security in SCADA
System
In real-time environments, various types of architecture are used. Each architecture gives certain ways to implement our network in an efficient manner. In this work we have a trusted secure networked architecture based on interlocking rings (SNAIR). The whole architecture is considered as a ring; initially it starts from zero. The architecture should be well developed and then checked for reliability.
There are many real-time applications, such as oil and gas, air traffic and railways, power generation and transmission, manufacturing, and water management.

C. A Trust System Architecture for SCADA
Network Security
The main aim of the trust system is to
improve security by using existing utility systems.
In the networking systems, the trust system is
placed in the broader context. It will improve the
security and flexibility using TCP traffic. A trust
system can perform at or near the real-time
requirements that the supervisory control and
data acquisition (SCADA) network requires even
with the overhead of TCP/IP and UDP/IP
communications, Internet Protocol Security
(IPsec) encryption, firewall rules, format check,
and access control functions.
The main aspect is to share the
information with regional utilities with some
enhanced security features. There are several modes of operation: passive mode, half-active mode, tunnel mode and gateway mode. Each mode of operation has some limitations and security problems. This trust system uses an active mode of operation, which means that there is no need to upgrade the system every time. The active mode of operation is used because restructuring and the need for constant upgrading are reduced.
trust system intercepts and reacts to status
messages and commands from network nodes
destined for the master control station and other
nodes in the network.
The original trust system only had an
active mode router-based implementation. This
paper introduces passive mode, half-active mode,
and tunnel/gateway mode trust systems to
greatly add to the range of situations where
security can be added to existing SCADA systems.
The new trust system implementations allow firewall and intrusion detection security to be embedded through tunneled connections when SCADA traffic must pass through the Internet or other unsecured networks. Passive and half-active implementations also allow for trust systems in environments where router replacements or direct modifications are not possible.

D. Improving Security for SCADA Control
Systems
A SCADA system is a common process automation system which is used to gather data from sensors and instruments located at remote sites and to transmit the data to a central site for either control or monitoring purposes. The reality is that a growing number of worms and viruses spread by exploiting software design, operations and human interfaces, so solutions for preventing such attacks are becoming more important. Security knowledge is likely to include policy, standards, design and attack patterns, threat models, code samples, reference architectures, and a secure development framework.
Information security management principles and processes need to be applied to SCADA systems without exception. More effort should be spent on reducing the vulnerabilities and improving the security operations of these systems. Methods for risk management that are based on automated tools and intelligent techniques are more beneficial to SCADA systems because they require minimal or no human intervention in controlling the processes. The evolution of SCADA systems allows us to better understand many security concerns. The regulatory environment is placing increased demands on SCADA systems, driving data capture and retention, documentation, training, security, policy, and reporting requirements.

Fig 3. Integrated SCADA Architecture

This is the general diagrammatic format of the integrated SCADA architecture. In this, all the hosts and various Remote Terminal Units are connected to the SCADA nodes, and a LAN is used for the local area connection. Information security management principles and processes need to be applied to SCADA systems without exception. More effort should be spent on reducing the vulnerabilities and improving the security operations of these systems.
In addition, the LANs that these architectures use raise a new set of security concerns, leading to the introduction of features such as encrypted data sets and dedicated access mechanisms in information assurance applications. Even without any connection to the Internet, these systems are still vulnerable to external or internal attackers who can exploit vulnerabilities in software such as operating systems, custom and vendor software, data storage software, databases, and applications. Modern products are often based on component architectures using commercial off-the-shelf (COTS) elements as units. Most SCADA systems are not protected with appropriate security safeguards.

IV.EXISTING SYSTEM
In the existing system, isolated environments were mostly used because at that time the attackers' target levels were low. Nowadays the technologies have improved, so the degree of attack has also increased. Security has been lagging during the increased modernization of these systems. Various modes of operation are used, with passive modes used most often. Passive mode is just like a mirror page: it will monitor and analyze the packet flow and then send the packet to the destination. All the packets, including malicious packets, are passed through in the traffic. Only information is obtained from this mode of operation. Figure 1 is the example diagram for the passive mode of operation.
After that we go for the half-active mode. It is also somewhat similar to the previous one: it monitors and gathers information, and if any malicious packet occurs, it halts the transaction. Figure 2 is the example diagram for the active mode of operation.
The previous system had an active mode with TCP/IP. Active mode [1] is much better than the previous modes of operation. The whole network is continuously monitored, the packet flow is analyzed, and a report is sent to the system. In our proposed system, the same active mode of operation is used. The information is collected from the various wireless sensor nodes [2]. In contrast, in the existing one the data or packets are collected only from the part of the network which is present near the system; it is not able to collect packets from the wireless nodes. Because of these various limitations, we go for the new proposed system, in which continuous monitoring is present and loss of packets may be avoided.

i. Limitation of the System
The existing system has many limitations. Here, the passive mode of operation is used, so the system will maintain an intruders list. The passive mode of operation will collect all the intruders' information, but this mode of operation allows the intruder to pass packets into the network; the network only learns the information about the intruder after he sends the packet. This is the main demerit of the existing system. So we go for the new proposed system, which will overcome all the above-mentioned problems.

V. PROPOSED SYSTEM
Nowadays security plays a major role in the network. The growing size of such proprietary networks creates increased opportunity for successful attack. Supervisory control and data-acquisition (SCADA) networks are vulnerable to attack from both internal and external intruders. The proposed mechanism is used to provide continuous monitoring and authentication to the network. Authentication is provided for the individual users and the nodes. Data is collected from the various wireless sensor nodes.

Fig.3 Architecture diagram for proposed system

The active mode of operation is used to block unauthorized nodes. After that the packets are given to the main system. MD5 (Message-Digest algorithm 5) is a widely used cryptographic hash function with a 128-bit hash value, and it is used to verify the packets. The packets may be normal or malicious packets. Normal packets are given to the overall network. The malicious packets are given to the sink node for finding the compromised node in the network.
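
A minimal sketch of the MD5 check described above (assuming each packet carries the 128-bit digest of its payload; the packet layout and function name are ours, not from the paper):

import hashlib

def classify_packet(payload: bytes, received_digest: str) -> str:
    # Recompute the MD5 hash of the payload and compare it with the digest carried in the packet.
    if hashlib.md5(payload).hexdigest() == received_digest:
        return "normal"       # forwarded on to the overall network
    return "malicious"        # handed to the sink node to locate the compromised node

payload = b"sensor reading: 27.4 C"
print(classify_packet(payload, hashlib.md5(payload).hexdigest()))  # -> normal
print(classify_packet(payload, "0" * 32))                          # -> malicious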

5.1 Trust System Solutions:
Even with technical training, regular application of the latest patches, security software and hardware, and dedicated specialists for round-the-clock monitoring, the most heavily defended IT networks see their share of system compromises throughout the year from Internet connections. The trust system records
suspicious event details useful for IT and security
personnel to prove to management the types and
quantities of attacks against the network. These
records should prove useful when investing in
security purchases.

6. CONCLUSION
The proposed system will provide a secured environment to the user. An active mode of operation is used, and continuously monitoring the network will provide the secured environment. Authorization is provided by the master: when an unauthorized user enters the network, that particular IP address is blocked. The existing system focused on the technical operation of the system by augmenting routers to protect user datagram protocol (UDP) based traffic.
In our proposed system, the data are collected from the wireless sensor nodes (WSN). By using this we are able to collect data from various areas. These are protected by digital certificates to prevent unauthorized users from intercepting the information or introducing false data into the SCADA system. Authorization is the main security part of this project. My future work is to develop the proposed system with some additional security features. A checking process will be provided for both users and packets. Continuous assessment is the main part of this project; by using this we are able to avoid malicious packets as well as malicious users.
In conclusion, we believe that as long as the security issues are adequately addressed, the proposed system should be able to achieve great success in future.

7. REFERENCES

[1] Bowen, C.L., T.K. Buennemeyer, and R.W. Thomas (2005), Next generation SCADA security: Best practices and client puzzles, in Proc. 6th Annu. IEEE SMC Information Assurance Workshop, West Point, NY, pp. 426-427.

[2] Birman, K.P., J. Chen, K. M. Hopkinson, R. J. Thomas, J. S. Thorp, R. Van Renesse, and W. Vogels (May 2005), Overcoming communications challenges in software for monitoring and controlling power systems, Proc. IEEE, vol. 93, no. 5, pp. 1028-1041.

[3] Byres, E. J., Hoffman, D., & Kube, N. (2006), On shaky ground - A study of security vulnerabilities in control protocols, 5th American Nuclear Society International Topical Meeting on Nuclear Plant Instrumentation, Controls, and Human Machine Interface Technology, American Nuclear Society, Albuquerque, NM.

[4] Cristina Alcaraz & Javier Lopez (Jul 2010), A Security Analysis for Wireless Sensor Mesh Networks in Highly Critical Systems, IEEE Trans., vol. 40, no. 4.

[5] Clifford Neuman (2006), Understanding Trust and Security in SCADA Systems, Proc. IEEE, vol. 93, no. 5, pp. 1028-1041.

[6] Clint Bodungen, Jeff Whitney & Chris Paul, SCADA Security, Compliance, and Liability - A Survival Guide.

[7] Gregory M. Coates, Kenneth M. Hopkinson, Scott R. Graham, Stuart H. Kurkowski (Jan 2010), A Trust System Architecture for SCADA systems, IEEE Trans., vol. 25, no. 1.

[8] Kenneth P. Birman, Jie Chen, Ken Hopkinson, Bob Thomas, Jim Thorp, Werner Vogels, Overcoming Communications Challenges in Software for Monitoring and Controlling Power Systems.

[9] Mariana Hentea (2008), Improving Security for SCADA Control Systems, vol. 3.

[10] Niedermayer, H., A. Klenk, and G. Carle (2006), The networking perspective on security performance - a measurement study, presented at the 13th GI/ITG Conf. Measurement, Modeling, and Evaluation of Computer and Communication Systems, Nürnberg, Germany.

[11] Na, L., N. Zhang, S. Das, and B. Thuraisingham (2009), Privacy preservation in wireless sensor networks: A state-of-the-art survey, Ad Hoc Netw., vol. 7, no. 8, pp. 1501-1514.

[12] Roosta, T., D. Nilsson, U. Lindqvist, and A. Valdes (2008), An intrusion detection system for wireless process control systems, in Proc. 5th IEEE Int. Conf. Mobile Ad Hoc Sens. Syst. (MASS), pp. 866-872.
