$$E[X_0] = \bar{x}_0, \qquad E[X_0 X_0'] = P_0^x,$$

$$E[U_0] = \bar{u}_0, \qquad E[U_0 U_0'] = P_0^u,$$

$$E[X_0 U_0'] = P_0^{xu}.$$
As we can see in (3), the input vector $U_k$ is not observable through the measurement process. Treating $X_k$ and $U_k$ as the augmented system state, the AUSKE is described by
$$\hat{X}^{Aug}_{k+1|k+1} = \hat{X}^{Aug}_{k+1|k} + K^{Aug}_{k+1}\big(Z_{k+1} - H^{Aug}_{k+1}\hat{X}^{Aug}_{k+1|k}\big) \qquad (4)$$

$$\hat{X}^{Aug}_{k+1|k} = A^{Aug}_k \hat{X}^{Aug}_{k|k} \qquad (5)$$

$$K^{Aug}_{k+1} = P^{Aug}_{k+1|k}\,(H^{Aug}_{k+1})'\,\big[H^{Aug}_{k+1}P^{Aug}_{k+1|k}(H^{Aug}_{k+1})' + R_{k+1}\big]^{-1} \qquad (6)$$

$$P^{Aug}_{k+1|k} = A^{Aug}_k P^{Aug}_{k|k}\,(A^{Aug}_k)' + Q_k \qquad (7)$$

$$P^{Aug}_{k+1|k+1} = \big(I - K^{Aug}_{k+1}H^{Aug}_{k+1}\big)P^{Aug}_{k+1|k} \qquad (8)$$
JOURNAL OF COMPUTING, VOLUME 4, ISSUE 9, SEPTEMBER 2012, ISSN (Online) 2151-9617
https://sites.google.com/site/journalofcomputing
WWW.JOURNALOFCOMPUTING.ORG
2012 Journal of Computing Press, NY, USA, ISSN 2151-9617

where

$$X^{Aug}_k = \begin{pmatrix} X_k \\ U_k \end{pmatrix}, \qquad K^{Aug}_k = \begin{pmatrix} K^x_k \\ K^u_k \end{pmatrix}, \qquad P^{Aug}_k = \begin{pmatrix} P^x_k & P^{xu}_k \\ (P^{xu}_k)' & P^u_k \end{pmatrix},$$

$$A^{Aug}_k = \begin{pmatrix} A_k & B_k \\ 0_{m\times n} & C_k \end{pmatrix}, \qquad H^{Aug}_k = \begin{pmatrix} H_k & 0_{p\times m} \end{pmatrix}, \qquad Q_k = \begin{pmatrix} Q^x_k & Q^{xu}_k \\ (Q^{xu}_k)' & Q^u_k \end{pmatrix}.$$
Here the superscript $Aug$ denotes the augmented system state, $I$ denotes the identity matrix of appropriate dimension, and $0_{m\times n}$ is an $m \times n$ zero matrix. It is clear from (4)-(8) that the computational cost of the AUSKE grows with the augmented state dimension. The reason for this computational complexity is the extra computation of the $P^{xu}_k$ terms at each sample time $k$. Therefore, if this term can be eliminated, the computational effort can be reduced. In this paper, we propose a new optimal two-stage Kalman estimator that does not compute the term $P^{xu}_k$ explicitly. The proposed scheme reduces the computational cost of the AUSKE by partitioning equations (4)-(8) into two subsystems [17].
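As a concrete illustration of the augmented construction above, the block matrices can be assembled as follows. This is a sketch with assumed NumPy names (`build_augmented`, `Qx`, `Qxu`, `Qu`), not code from the paper:

```python
import numpy as np

def build_augmented(A, B, C, H, Qx, Qxu, Qu):
    """Assemble A^Aug, H^Aug and the augmented Q_k from the block
    matrices; n = state dim, m = input dim, p = measurement dim."""
    n, m = B.shape
    p = H.shape[0]
    A_aug = np.block([[A, B],
                      [np.zeros((m, n)), C]])   # A^Aug with 0_{m x n} block
    H_aug = np.hstack([H, np.zeros((p, m))])    # H^Aug = (H_k  0_{p x m})
    Q_aug = np.block([[Qx, Qxu],
                      [Qxu.T, Qu]])             # augmented process noise
    return A_aug, H_aug, Q_aug
```

Every recursion (4)-(8) then operates on $(n+m)$-dimensional matrices, which is exactly where the extra cost of carrying the $P^{xu}$ blocks enters.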
III. DERIVATION OF THE OPTIMAL PARTITIONED STATE KALMAN ESTIMATOR
The design of the new two-stage Kalman estimator is as follows. First, define a modified input-free model and design a modified input-free filter by ignoring the input term. Second, derive an input filter that compensates the modified input-free filter so as to minimize the mean square error. These two filters are combined into a new scheme that is equivalent to the AUSKE. The key step of the derivation is the relation between the measurement residues of the two different filters: one is the measurement residue of the input-free filter, which does not consider the unknown input vector, and the other is the measurement residue of the input filter. Based on the measurement residues of the two filters, an input estimation algorithm is derived using the minimum mean square estimation technique [17].
The input-free model can be obtained by ignoring the input term ($U_k = 0$) in (1):

$$\bar{X}_{k+1} = A_k \bar{X}_k + W^x_k \qquad (9)$$
where the state vector of the input-free model is denoted by $\bar{X}_k$. The input-free filter is simply a Kalman filter based on the model (9) and the measurement equation (3):
$$\hat{\bar{X}}_{k+1|k+1} = \hat{\bar{X}}_{k+1|k} + \bar{K}_{k+1}\big(Z_{k+1} - H_{k+1}\hat{\bar{X}}_{k+1|k}\big) \qquad (10)$$

$$\hat{\bar{X}}_{k+1|k} = A_k \hat{\bar{X}}_{k|k} \qquad (11)$$

$$\bar{K}_{k+1} = \bar{P}^x_{k+1|k} H'_{k+1}\big[H_{k+1}\bar{P}^x_{k+1|k}H'_{k+1} + R_{k+1}\big]^{-1} \qquad (12)$$

$$\bar{P}^x_{k+1|k} = A_k \bar{P}^x_{k|k} A'_k + Q^x_k \qquad (13)$$

$$\bar{P}^x_{k+1|k+1} = \big(I - \bar{K}_{k+1}H_{k+1}\big)\bar{P}^x_{k+1|k} \qquad (14)$$
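Since the input-free filter (10)-(14) is just a standard Kalman filter on the model (9) with the measurement equation (3), one iteration can be sketched as below (NumPy; the function and variable names are assumptions, not from the paper):

```python
import numpy as np

def input_free_step(xbar, Pbar, z, A, H, Qx, R):
    """One predict/update cycle of the input-free filter (10)-(14)."""
    xbar_pred = A @ xbar                          # (11) time update
    Pbar_pred = A @ Pbar @ A.T + Qx               # (13) covariance prediction
    S = H @ Pbar_pred @ H.T + R                   # innovation covariance
    Kbar = Pbar_pred @ H.T @ np.linalg.inv(S)     # (12) gain
    residue = z - H @ xbar_pred                   # measurement residue
    xbar_upd = xbar_pred + Kbar @ residue         # (10) measurement update
    Pbar_upd = (np.eye(len(xbar)) - Kbar @ H) @ Pbar_pred  # (14)
    return xbar_upd, Pbar_upd, Kbar, residue
```

The residue returned here is the quantity the input estimator later consumes, so it is exposed alongside the state.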
In the following, we propose an expression that relates the state vector of the input model, $X_k$, to the state vector of the input-free model, $\bar{X}_k$. The state vector of the input model in (1) can be written in terms of the state of the input-free model in (9) at each sample time [17]:
$$X_1 = A_0 X_0 + B_0 U_0 + W^x_0$$

$$\quad = A_0 \bar{X}_0 + B_0 C_0^{-1}\big[C_0 U_0 + W^u_0\big] - B_0 C_0^{-1} W^u_0 + W^x_0$$

$$\quad = \bar{X}_1 + B_0 C_0^{-1} U_1 - B_0 C_0^{-1} W^u_0 \qquad (15)$$

Since $C_k$ is a nonsingular matrix, $U_0$ in (15) is replaced using (2). It is assumed in (15) that $\bar{X}_0 = X_0$, so $\bar{X}_1 = A_0 \bar{X}_0 + W^x_0 = A_0 X_0 + W^x_0$.
Following the same procedure, we can derive the expression for $X_2$ by using (15):

$$X_2 = A_1 X_1 + B_1 U_1 + W^x_1$$

$$\quad = A_1\big[\bar{X}_1 + B_0 C_0^{-1} U_1 - B_0 C_0^{-1} W^u_0\big] + B_1 U_1 + W^x_1$$

$$\quad = A_1\bar{X}_1 + W^x_1 + \big[A_1 B_0 C_0^{-1} + B_1\big] U_1 - A_1 B_0 C_0^{-1} W^u_0$$

$$\quad = \bar{X}_2 + \big[A_1 B_0 C_0^{-1} + B_1\big] C_1^{-1}\big[C_1 U_1 + W^u_1\big] - \big[A_1 B_0 C_0^{-1} + B_1\big] C_1^{-1} W^u_1 - A_1 B_0 C_0^{-1} W^u_0$$

$$\quad = \bar{X}_2 + \big[A_1 B_0 C_0^{-1} + B_1\big] C_1^{-1} U_2 - \big[A_1 B_0 C_0^{-1} + B_1\big] C_1^{-1} W^u_1 - A_1 B_0 C_0^{-1} W^u_0 \qquad (16)$$
For an arbitrary sample time $k$, we have

$$X_{k+1} = \bar{X}_{k+1} + M_{k+1} U_{k+1} + \sum_{i=0}^{k} \Theta_{k+1,i}\, W^u_i \qquad (17)$$

where

$$M_{k+1} = \big[A_k M_k + B_k\big] C_k^{-1}, \quad k = 1, 2, 3, \ldots, \qquad M_1 = B_0 C_0^{-1} \qquad (18)$$

$$\Theta_{k+1,i} = -\Big(\prod_{j=i+1}^{k} A_j\Big)\big[A_i M_i + B_i\big] C_i^{-1}, \quad i = 1, \ldots, k, \qquad \Theta_{k+1,0} = -\Big(\prod_{j=1}^{k} A_j\Big) B_0 C_0^{-1} \qquad (19)$$
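The recursion (18) is cheap to implement; a minimal sketch (NumPy, with assumed names):

```python
import numpy as np

def next_M(M_k, A_k, B_k, C_k):
    """(18): M_{k+1} = [A_k M_k + B_k] C_k^{-1}.
    Seed the recursion with M_1 = B_0 C_0^{-1}."""
    return (A_k @ M_k + B_k) @ np.linalg.inv(C_k)
```

Only one $n \times m$ matrix is propagated per step, in contrast to the full $P^{xu}_k$ block of the augmented filter.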
By using the zero-mean property of $W^u_k$ and one-step prediction logic from the dynamic equation (17), the predicted state is obtained as follows [17]:

$$\hat{X}_{k+1|k} = \hat{\bar{X}}_{k+1|k} + M_{k+1} U_{k+1} \qquad (20)$$
The updated state for the input model can be expressed using the state of the input-free model as

$$\hat{X}_{k+1|k+1} = \hat{\bar{X}}_{k+1|k+1} + N_{k+1} U_{k+1} \qquad (21)$$
where $N_{k+1}$ in (21) must be calculated. Hence, the relation between the two innovation matrices $M_k$ and $N_k$ needs to be determined. Suppose that the updated state $\hat{X}_{k+1|k+1}$ is

$$\hat{X}_{k+1|k+1} = \hat{X}_{k+1|k} + \bar{K}_{k+1}\big(Z_{k+1} - H_{k+1}\hat{X}_{k+1|k}\big) \qquad (22)$$
By some manipulation and using (10) and (22), an expression relating $N_{k+1}$ and $M_{k+1}$ can be obtained:

$$N_{k+1} U_{k+1} = \hat{X}_{k+1|k+1} - \hat{\bar{X}}_{k+1|k+1}$$

$$\quad = \big[\hat{X}_{k+1|k} + \bar{K}_{k+1}(Z_{k+1} - H_{k+1}\hat{X}_{k+1|k})\big] - \big[\hat{\bar{X}}_{k+1|k} + \bar{K}_{k+1}(Z_{k+1} - H_{k+1}\hat{\bar{X}}_{k+1|k})\big]$$

$$\quad = \big[I - \bar{K}_{k+1} H_{k+1}\big]\big[\hat{X}_{k+1|k} - \hat{\bar{X}}_{k+1|k}\big]$$

$$\quad = \big[I - \bar{K}_{k+1} H_{k+1}\big] M_{k+1} U_{k+1} \qquad (23)$$
The equation relating $N_{k+1}$ and $M_{k+1}$ is obtained by comparing both sides of the above equation:

$$N_{k+1} = \big[I - \bar{K}_{k+1} H_{k+1}\big] M_{k+1} \qquad (24)$$
Once the initial value of $M_k$ is chosen, $M_{k+1}$ can be calculated recursively by (18), and $N_{k+1}$ can be obtained from (24) at each iteration. Note that $\bar{K}_{k+1}$ for the input filter is equal to $\bar{K}_{k+1}$ for the input-free filter, since the input term is assumed to be nonrandom. In addition, both filters have the same covariance matrices. In other words, the equality of the filter gains for $X_k$ and $\bar{X}_k$ arises from the identical error covariance matrices of $X_k$ and $\bar{X}_k$, denoted by $\bar{P}^x_{k|k}$. The innovations $\tilde{\bar{Z}}_{k+1}$ and $\tilde{Z}_{k+1}$ are defined as the measurement residues of the input-free model and the input model, respectively [17]:
$$\tilde{\bar{Z}}_{k+1} = Z_{k+1} - H_{k+1}\hat{\bar{X}}_{k+1|k} \qquad (25)$$

$$\tilde{Z}_{k+1} = Z_{k+1} - H_{k+1}\hat{X}_{k+1|k} \qquad (26)$$
The difference of these two innovations is defined as

$$\Delta\tilde{Z}_{k+1} = \tilde{\bar{Z}}_{k+1} - \tilde{Z}_{k+1} = H_{k+1}\big[\hat{X}_{k+1|k} - \hat{\bar{X}}_{k+1|k}\big] \qquad (27)$$
Using (27) and (20), we can represent $U_{k+1}$ in an equation that relates the input state to the measurement residue:

$$\tilde{\bar{Z}}_{k+1} = H_{k+1}\big[\hat{X}_{k+1|k} - \hat{\bar{X}}_{k+1|k}\big] + \tilde{Z}_{k+1} = H_{k+1} M_{k+1} U_{k+1} + \tilde{Z}_{k+1} \qquad (28)$$
It is clear from the input-free filter (10)-(14), which is a standard Kalman filter, that the measurement residue $\tilde{\bar{Z}}_{k+1}$ exists and is easily obtainable, while the measurement residue $\tilde{Z}_{k+1}$ is actually not available. Note that (28) is in the standard observation form $Z_{k+1} = H_{k+1}\theta + v_{k+1}$ and can therefore be viewed as an observation model of $U_{k+1}$. In (28), the measurement residue $\tilde{\bar{Z}}_{k+1}$ is a non-zero-mean white random process, since the input term introduces a bias in the innovation $\tilde{\bar{Z}}_{k+1}$. The amount of this bias supplies the information about the existence of the input. In contrast, the measurement residue $\tilde{Z}_{k+1}$ is a zero-mean white random process ($E[\tilde{Z}_{k+1}] = 0$) with covariance matrix $P^z_{k+1|k}$. In other words, we would like to estimate $U_{k+1}$ so as to minimize the error covariance matrix $P^u_{k+1}$ under the constraint $E[\tilde{Z}_{k+1}] = E[Z_{k+1} - H_{k+1}\hat{X}_{k+1|k}] = 0$, or $E[X_{k+1}] = \hat{X}_{k+1|k}$, which yields an unbiased estimate.
The desired form of the filtering solution for estimating the unknown vector $U_{k+1}$ is a difference equation expressing $\hat{U}_{k+1|k+1}$ in terms of $\hat{U}_{k|k}$ and $\tilde{\bar{Z}}_{k+1}$. In the following, we derive a recursive algorithm to estimate $U_{k+1}$ that minimizes the error covariance matrix of the input vector, $P^u_{k+1}$. Suppose that $\tilde{U}_{k+1|k}$ denotes the input state residue:

$$\tilde{U}_{k+1|k} = U_{k+1} - \hat{U}_{k|k} \qquad (29)$$
Using (28) and the zero-mean property of $\tilde{Z}_{k+1}$, we obtain a recursive algorithm in the form of a Kalman filter [17]:

$$\hat{U}_{k+1|k+1} = \hat{U}_{k|k} + K^u_{k+1}\big[\tilde{\bar{Z}}_{k+1} - H_{k+1} M_{k+1} \hat{U}_{k|k}\big] \qquad (30)$$
Using the residue of $U_{k+1}$ defined in (29) together with (30) and (28) gives

$$\tilde{U}_{k+1|k+1} = U_{k+1} - \hat{U}_{k+1|k+1}$$

$$\quad = U_{k+1} - \hat{U}_{k|k} - K^u_{k+1}\big[H_{k+1}M_{k+1}U_{k+1} + \tilde{Z}_{k+1} - H_{k+1}M_{k+1}\hat{U}_{k|k}\big]$$

$$\quad = \tilde{U}_{k+1|k} - K^u_{k+1}H_{k+1}M_{k+1}\tilde{U}_{k+1|k} - K^u_{k+1}\tilde{Z}_{k+1}$$

$$\quad = \big[I - K^u_{k+1}H_{k+1}M_{k+1}\big]\tilde{U}_{k+1|k} - K^u_{k+1}\tilde{Z}_{k+1} \qquad (31)$$
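The algebra behind (31) can be spot-checked numerically: build the biased residue from (28), update via (30), and compare the resulting input residue with the closed form (31). All names and dimensions below are assumptions made for the check:

```python
import numpy as np

rng = np.random.default_rng(1)
m_u, p = 2, 3                        # assumed input / measurement dims
U = rng.normal(size=m_u)             # true input U_{k+1}
u_prior = rng.normal(size=m_u)       # prior estimate
HM = rng.normal(size=(p, m_u))       # product H_{k+1} M_{k+1}
Z_tilde = rng.normal(size=p)         # input-model residue
Ku = rng.normal(size=(m_u, p))       # an arbitrary gain K^u_{k+1}

Z_bar = HM @ U + Z_tilde                          # residue relation (28)
u_post = u_prior + Ku @ (Z_bar - HM @ u_prior)    # update (30)

lhs = U - u_post                                            # residue per (29)
rhs = (np.eye(m_u) - Ku @ HM) @ (U - u_prior) - Ku @ Z_tilde  # closed form (31)
assert np.allclose(lhs, rhs)
```

The identity holds for any gain, which is why the gain can later be chosen purely to minimize the covariance.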
The covariance matrix of $\tilde{U}_{k+1|k+1}$ defined in (31) is

$$P^u_{k+1|k+1} = \big(I - K^u_{k+1}H_{k+1}M_{k+1}\big)P^u_{k+1|k}\big(I - K^u_{k+1}H_{k+1}M_{k+1}\big)' + K^u_{k+1}P^z_{k+1|k}(K^u_{k+1})'$$

$$\quad - \big(I - K^u_{k+1}H_{k+1}M_{k+1}\big)P^{uz}_{k+1|k}(K^u_{k+1})' - K^u_{k+1}P^{zu}_{k+1|k}\big(I - K^u_{k+1}H_{k+1}M_{k+1}\big)' \qquad (32)$$
where the covariance matrix of $\tilde{Z}_{k+1}$, denoted by $P^z_{k+1|k}$, must be calculated. From (26):

$$P^z_{k+1|k} = H_{k+1}\bar{P}^x_{k+1|k}H'_{k+1} + R_{k+1} \qquad (33)$$
It is seen that the input state residue $\tilde{U}_{k+1|k}$ is time-correlated with the measurement residue of the input model, $\tilde{Z}_{k+1}$; the cross covariance is denoted by $P^{zu}_{k+1|k}$:

$$P^{zu}_{k+1|k} = E\big\{\tilde{Z}_{k+1}\,\tilde{U}'_{k+1|k}\big\} \qquad (34)$$
The extra computation of this cross covariance matrix $P^{zu}_{k+1|k}$ (which is related to $P^{xu}_{k+1|k}$) is the reason for the computational complexity of the augmented-state methods. Therefore, if this term can be eliminated, the computational effort can be reduced. In the following, we propose an expression relating $P^{zu}_{k+1|k}$ to $P^u_{k+1|k}$. Since the magnitude of the input term $U_{k+1}$ in (20) is unknown, we can only use its estimate:

$$\hat{X}_{k+1|k} = \hat{\bar{X}}_{k+1|k} + M_{k+1}\hat{U}_{k|k} \qquad (35)$$
Using equations (17), (3), (26), and (35) gives

$$\tilde{Z}_{k+1} = Z_{k+1} - H_{k+1}\hat{X}_{k+1|k}$$

$$\quad = H_{k+1}\big[X_{k+1} - \hat{X}_{k+1|k}\big] + V_{k+1}$$

$$\quad = H_{k+1}\Big[\bar{X}_{k+1} + M_{k+1}U_{k+1} + \sum_{i=0}^{k}\Theta_{k+1,i}W^u_i - \hat{\bar{X}}_{k+1|k} - M_{k+1}\hat{U}_{k|k}\Big] + V_{k+1}$$

$$\quad = H_{k+1}\big[\bar{X}_{k+1} - \hat{\bar{X}}_{k+1|k}\big] + H_{k+1}M_{k+1}\big[U_{k+1} - \hat{U}_{k|k}\big] + H_{k+1}\sum_{i=0}^{k}\Theta_{k+1,i}W^u_i + V_{k+1} \qquad (36)$$

$$\tilde{Z}_{k+1} = H_{k+1}\tilde{\bar{X}}_{k+1|k} + H_{k+1}M_{k+1}\tilde{U}_{k+1|k} + H_{k+1}\sum_{i=0}^{k}\Theta_{k+1,i}W^u_i + V_{k+1} \qquad (37)$$
It should be noted that the term $H_{k+1}\tilde{\bar{X}}_{k+1|k} + V_{k+1}$ in (37) is not equal to $\tilde{\bar{Z}}_{k+1}$ in (25). Using (37), the cross covariance matrix $P^{zu}_{k+1|k}$ can be calculated:
$$P^{zu}_{k+1|k} = E\big[\tilde{Z}_{k+1}\,\tilde{U}'_{k+1|k}\big]$$

$$\quad = E\Big[\Big(H_{k+1}\tilde{\bar{X}}_{k+1|k} + H_{k+1}M_{k+1}\tilde{U}_{k+1|k} + H_{k+1}\sum_{i=0}^{k}\Theta_{k+1,i}W^u_i + V_{k+1}\Big)\tilde{U}'_{k+1|k}\Big]$$

$$\quad = H_{k+1}M_{k+1}E\big\{\tilde{U}_{k+1|k}\,\tilde{U}'_{k+1|k}\big\}$$

$$\quad = H_{k+1}M_{k+1}P^u_{k+1|k} \qquad (38)$$
where $E[\tilde{\bar{X}}_{k+1|k}\,\tilde{U}'_{k+1|k}]$, $E\big[\sum_{i=0}^{k}\Theta_{k+1,i}W^u_i\,\tilde{U}'_{k+1|k}\big]$ and $E[V_{k+1}\,\tilde{U}'_{k+1|k}]$ are all equal to zero. Substituting (38) into (32), the covariance of the optimal estimate of $U_{k+1}$ becomes [17]:
$$P^u_{k+1|k+1} = \big(I - K^u_{k+1}H_{k+1}M_{k+1}\big)P^u_{k+1|k}\big(I - K^u_{k+1}H_{k+1}M_{k+1}\big)' + K^u_{k+1}P^z_{k+1|k}(K^u_{k+1})'$$

$$\quad - \big(I - K^u_{k+1}H_{k+1}M_{k+1}\big)P^u_{k+1|k}M'_{k+1}H'_{k+1}(K^u_{k+1})' - K^u_{k+1}H_{k+1}M_{k+1}P^u_{k+1|k}\big(I - K^u_{k+1}H_{k+1}M_{k+1}\big)' \qquad (40)$$
Expanding and collecting terms yields

$$P^u_{k+1|k+1} = P^u_{k+1|k} + 3K^u_{k+1}H_{k+1}M_{k+1}P^u_{k+1|k}M'_{k+1}H'_{k+1}(K^u_{k+1})' + K^u_{k+1}P^z_{k+1|k}(K^u_{k+1})'$$

$$\quad - 2K^u_{k+1}H_{k+1}M_{k+1}P^u_{k+1|k} - 2P^u_{k+1|k}M'_{k+1}H'_{k+1}(K^u_{k+1})' \qquad (41)$$
The second and third terms of the above equation can be considered as $K^u_{k+1}WW'(K^u_{k+1})'$, where

$$WW' = 3H_{k+1}M_{k+1}P^u_{k+1|k}M'_{k+1}H'_{k+1} + P^z_{k+1|k} \qquad (42)$$
Since the covariance matrices $P^u_{k+1|k}$ and $P^z_{k+1|k}$ are symmetric, we can find a decomposition of the form $WW'$ for an appropriate matrix $W$. Then

$$P^u_{k+1|k+1} = P^u_{k+1|k} + \big[K^u_{k+1}W - D\big]\big[K^u_{k+1}W - D\big]' - DD' \qquad (43)$$
Comparing (41) and (43), the term $D$ is defined as

$$D = 2P^u_{k+1|k}M'_{k+1}H'_{k+1}(W')^{-1} \qquad (44)$$
The minimum $P^u_{k+1|k+1}$ is obtained by setting $K^u_{k+1}W$ equal to $D$. Therefore,

$$K^u_{k+1} = DW^{-1} = 2P^u_{k+1|k}M'_{k+1}H'_{k+1}(WW')^{-1}$$

$$\quad = 2P^u_{k+1|k}M'_{k+1}H'_{k+1}\big[3H_{k+1}M_{k+1}P^u_{k+1|k}M'_{k+1}H'_{k+1} + P^z_{k+1|k}\big]^{-1} \qquad (45)$$
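A quick numerical spot-check (random symmetric positive-definite matrices, with assumed dimensions and names) confirms that with the gain (45), the covariance (41) collapses to the compact form $[I - 2K^u_{k+1}H_{k+1}M_{k+1}]P^u_{k+1|k}$ of (46):

```python
import numpy as np

rng = np.random.default_rng(0)
m_u, p = 2, 3
Lu = rng.normal(size=(m_u, m_u))
Pu = Lu @ Lu.T + m_u * np.eye(m_u)      # P^u_{k+1|k}, SPD by construction
Lz = rng.normal(size=(p, p))
Pz = Lz @ Lz.T + p * np.eye(p)          # P^z_{k+1|k}, SPD by construction
HM = rng.normal(size=(p, m_u))          # product H_{k+1} M_{k+1}

S = 3 * HM @ Pu @ HM.T + Pz             # W W' of (42)
Ku = 2 * Pu @ HM.T @ np.linalg.inv(S)   # gain (45)

P41 = (Pu + Ku @ S @ Ku.T
       - 2 * Ku @ HM @ Pu
       - 2 * Pu @ HM.T @ Ku.T)          # covariance (41)
P46 = (np.eye(m_u) - 2 * Ku @ HM) @ Pu  # compact form (46)
assert np.allclose(P41, P46)
```

The check relies only on the symmetry of $P^u$ and $P^z$, mirroring the assumption used to define the decomposition (42)-(44).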
The minimum error covariance $P^u_{k+1|k+1}$ is obtained as

$$P^u_{k+1|k+1} = P^u_{k+1|k} - DD'$$

$$\quad = P^u_{k+1|k} - 4P^u_{k+1|k}M'_{k+1}H'_{k+1}\big[3H_{k+1}M_{k+1}P^u_{k+1|k}M'_{k+1}H'_{k+1} + P^z_{k+1|k}\big]^{-1}H_{k+1}M_{k+1}P^u_{k+1|k}$$

$$\quad = \big[I - 2K^u_{k+1}H_{k+1}M_{k+1}\big]P^u_{k+1|k} \qquad (46)$$
Based on (2), we have

$$P^u_{k+1|k} = C_k P^u_{k|k} C'_k + Q^u_k \qquad (47)$$
If the value of $K^u_{k+1}$ given by (45) is substituted into (30), the estimation of $U_{k+1}$ leads to a minimum error covariance [17].
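Putting the pieces together, one full iteration of the proposed two-stage estimator might look as follows. This is a sketch under assumed names and shapes, combining the input-free filter (10)-(14), the coupling recursions (18) and (24), the input filter (30) with gain (45), and the covariances (46)-(47); it is not the authors' reference implementation:

```python
import numpy as np

def two_stage_step(xbar, Pbar, u_hat, Pu, M, z, A, B, C, H, Qx, Qu, R):
    n, m = B.shape
    # ---- input-free filter: a standard KF on the input-free model ----
    xbar_pred = A @ xbar                                   # (11)
    Pbar_pred = A @ Pbar @ A.T + Qx                        # (13)
    S = H @ Pbar_pred @ H.T + R                            # equals P^z by (33)
    Kbar = Pbar_pred @ H.T @ np.linalg.inv(S)              # (12)
    z_res = z - H @ xbar_pred                              # residue, cf. (25)
    xbar_upd = xbar_pred + Kbar @ z_res                    # (10)
    Pbar_upd = (np.eye(n) - Kbar @ H) @ Pbar_pred          # (14)
    # ---- coupling matrices ----
    M_next = (A @ M + B) @ np.linalg.inv(C)                # (18)
    N_next = (np.eye(n) - Kbar @ H) @ M_next               # (24)
    # ---- input filter ----
    Pu_pred = C @ Pu @ C.T + Qu                            # (47)
    HM = H @ M_next
    Su = 3 * HM @ Pu_pred @ HM.T + S                       # W W' of (42)
    Ku = 2 * Pu_pred @ HM.T @ np.linalg.inv(Su)            # (45)
    u_upd = u_hat + Ku @ (z_res - HM @ u_hat)              # (30)
    Pu_upd = (np.eye(m) - 2 * Ku @ HM) @ Pu_pred           # (46)
    # ---- combined state estimate ----
    x_upd = xbar_upd + N_next @ u_upd                      # (21)
    return x_upd, xbar_upd, Pbar_upd, u_upd, Pu_upd, M_next
```

Note that no $P^{xu}$ block is ever formed; the two filters exchange only the residue and the $n \times m$ matrix $M$, which is the computational saving claimed over the AUSKE.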
IV. CONCLUSION
In this article, we introduced cloud computing and examined its influence on present-day processes. Cloud computing is a combination of various computing entities, globally separated but electronically connected. As the geography of computation moves toward corporate server rooms, it brings more issues, including security concerns such as virtualization security, distributed computing, application security, identity management, access control, and authentication. Strong user authentication is the paramount requirement for cloud computing, since it restricts illegal access to the cloud server. In cloud computing, providers can offer consumers two provisioning plans for computing resources, namely reservation and on-demand plans. In general, the cost of computing resources provisioned under the reservation plan is cheaper than under the on-demand plan, since the consumer pays the provider in advance; with the reservation plan, the consumer can therefore reduce the total resource provisioning cost.
Although creating a cloud computing architecture that is scalable and usable for sharing all kinds of resources involves many problems and complexities, it can be used to optimize and offload IT requirements. These days many technologies are migrating from traditional systems into the cloud, and cloud computing has been developed and adopted in many countries, across many industries and applications, and its range of use keeps increasing in different countries and applications. Although there is some worry about security in cloud computing, the number of people who store their personal information on the servers of third-party companies such as Google is increasing. We presented our new
idea for improving its security: using a two-stage Kalman estimator to predict and update data about the amount of load on different cloud sections, and thereby to improve the security level of such a network. Given the many advantages of cloud computing, especially the cost reduction of large-scale implementation, capital investment in this field is increasing. Cloud computing is advancing at a fast rate and will mature with fewer deficiencies than other technologies. It is predicted that cloud computing will be the basic platform for IT in the next 20 years [16].
REFERENCES
[1] David C. Wyld; the cloudy future of government
IT: cloud computing and the public sector around
the world, IJWesT, Vol. 1, Num. 1, Jan. 2010.
[2] Jean-Daniel Cryans, Alain April, Alain Abran;
criteria to compare cloud computing with current
database technology, R. Dumke et al. (Eds.):
IWSM / MetriKon / Mensura 2008, LNCS 5338,
pp. 114-126, 2008.
[3] Anil Madhavapeddy, Richard Mortier, Jon
Crowcroft, Steven Hand; multiscale not
multicore: efficient heterogeneous cloud
computing, published by the British Informatics
Society Ltd. Proceedings of ACM-BCS Visions of
Computer Science 2010.
[4] Harold C. Lim, Shivnath Babu, Jeffrey S. Chase,
Sujay S. Parekh; automated control in cloud
computing: challenges and opportunities,
ACDC09, June 19, Barcelona, Spain.
[5] N. Sainath, S. Muralikrishna, P.V.S. Srinivas; a
framework of cloud computing in the real world;
Advances in Computational Sciences and
Technology, ISSN 0973-6107, Vol. 3, Num. 2,
(2010), pp. 175-190.
[6] Kyle Chard, Simon Caton, Omer Rana, Kris
Bubendorfer; social cloud: cloud computing in
social networks.
[7] G. Bruce Berriman, Eva Deelman, Paul Groth,
Gideon Juve; the application of cloud computing
to the creation of image mosaics and management
of their provenance.
[8] Roy Campbell, Indranil Gupta, Michael Heath,
Steven Y. Ko, Michael Kozuch, Marcel Kunze,
Thomas Kwan, Kevin Lai, Hing Yan Lee, Martha
Lyons, Dejan Milojicic, David O'Hallaron, Yeng
Chai Soh; open cirrus TM cloud computing
testbed: federated data centers for open source
systems and services research.
[9] Rajkumar Buyya, Chee Shin Yeo, Srikumar
Venugopal, James Broberg, Ivona Brandic; cloud
computing and Emerging IT platforms: Vision,
Hype, and Reality for delivering computing as the
5th utility.
[10] Lamia Youseff, Maria Butrico, Dilma Da Silva;
toward a unified ontology of cloud computing.
[11] Daniel A. Menasce, Paul Ngo; understanding
cloud computing: experimentation and capacity
planning; Proc. 2009, Computer Measurement
Group Conf. Dallas, TX. Dec. 2009.
[12] Won Kim; cloud computing: today and
tomorrow; JOT, Vol. 8, No. 1, Jan-Feb 2009.
[13] Richard Chow, Philippe Golle, Markus Jakobsson,
Elaine Shi, Jessica Staddon, Ryusuke Masuoka,
Jesus Molina; controlling data in the cloud:
outsourcing computation without outsourcing
control; CCSW09, Nov. 13, 2009, Chicago,
Illinois, USA.
[14] Bo Peng, Bin Cui, Xiaoming Li; implementation
issues of a cloud computing platform; Bulletin of
the IEEE computer society technical committee on
data engineering.
[15] Daniel Nurmi, Rich Wolski, Chris Grzegorczyk,
Graziano Obertelli, Sunil Soman, Lamia Youseff,
Dmitrii Zagorodnov; the eucalyptus open-source
cloud computing system.
[16] Mehdi Darbandi, Hoda Purhosein; Perusal about
influences of Cloud Computing on the processes
of these days and presenting new ideas about its
security; Int. IEEE Conf., Baku, Azerbaijan.
[17] H. Khaloozade, M. Darbandi; optimal partitioned state Kalman estimator; Int. IEEE Conf., Harbin, China.
Mohsen Panahi, Mehdi Darbandi, Mohammad Abedi, Ali Hamzenejad