
RANDOM PROCESSES

ABDELKADER BENHARI

Markov Chains, Poisson Processes and Queueing Theory
Summary

Markov Chains
- Discrete Time Markov Chains
  - Homogeneous and non-homogeneous Markov chains
  - Transient and steady state Markov chains
- Continuous Time Markov Chains
  - Homogeneous and non-homogeneous Markov chains
  - Transient and steady state Markov chains

The Poisson Process
- Properties of the Poisson Process
- Interarrival times
- Memoryless property and the residual lifetime paradox
- Superposition of Poisson processes
- Random selection of Poisson points
- Bulk arrivals and compound Poisson processes

Queueing Theory
- Little's Law
- Queueing system notation
- Stationary analysis of elementary queueing systems: M/M/1, M/M/m, M/M/1/K, ...
Markov Processes

The definition of a Markov process: the future of the process X(t) does not depend on its past, only on its present:

$$\Pr\{X(t_{k+1}) = x_{k+1} \mid X(t_k) = x_k, \ldots, X(t_0) = x_0\} = \Pr\{X(t_{k+1}) = x_{k+1} \mid X(t_k) = x_k\}$$

Since we are dealing with chains, X(t) takes discrete values from a finite or a countably infinite set.

For a discrete-time Markov chain, the notation is also simplified to

$$\Pr\{X_{k+1} = x_{k+1} \mid X_k = x_k, \ldots, X_0 = x_0\} = \Pr\{X_{k+1} = x_{k+1} \mid X_k = x_k\}$$

where x_k is the value of the state at the kth step.
Transition Probability

Define the one-step transition probabilities

$$p_{ij}(k) = \Pr\{X_{k+1} = j \mid X_k = i\}$$

Clearly, for all i, k, and all feasible transitions from state i,

$$\sum_{j} p_{ij}(k) = 1$$

Define the n-step transition probabilities

$$p_{ij}(k, k+n) = \Pr\{X_{k+n} = j \mid X_k = i\}$$

(Diagram in the original: paths from state x_i at step k through intermediate states x_1, ..., x_R at an intermediate step u to state x_j at step k+n.)
Chapman-Kolmogorov Equations

Using total probability over the state occupied at an intermediate step u, k <= u <= k+n,

$$p_{ij}(k, k+n) = \sum_{r=1}^{R} \Pr\{X_{k+n} = j \mid X_u = r, X_k = i\} \Pr\{X_u = r \mid X_k = i\}$$

Using the memoryless property of Markov chains,

$$\Pr\{X_{k+n} = j \mid X_u = r, X_k = i\} = \Pr\{X_{k+n} = j \mid X_u = r\}$$

Therefore, we obtain the Chapman-Kolmogorov equation

$$p_{ij}(k, k+n) = \sum_{r=1}^{R} p_{ir}(k, u)\, p_{rj}(u, k+n), \qquad k \le u \le k+n$$
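As a quick numerical illustration (a sketch, not part of the original slides; the matrix is the two-processor example that appears later in this section), for a homogeneous chain the Chapman-Kolmogorov equation reduces to matrix multiplication, and any intermediate split point u gives the same product:

```python
import numpy as np

# One-step transition matrix of a 3-state homogeneous chain (rows sum to 1).
P = np.array([[0.5,   0.5,   0.0 ],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.3 ]])

# n-step transition probabilities: H(k, k+n) = P^n for a homogeneous chain.
n = 5
H_n = np.linalg.matrix_power(P, n)

# Chapman-Kolmogorov: splitting at any intermediate step u gives the same result.
u = 2
H_split = np.linalg.matrix_power(P, u) @ np.linalg.matrix_power(P, n - u)
assert np.allclose(H_n, H_split)
print(H_n)
```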
Matrix Form

Define the matrix

$$\mathbf{H}(k, k+n) = \big[\, p_{ij}(k, k+n)\, \big]$$

We can re-write the Chapman-Kolmogorov equation as

$$\mathbf{H}(k, k+n) = \mathbf{H}(k, u)\, \mathbf{H}(u, k+n)$$

Choose u = k+n-1; then

$$\mathbf{H}(k, k+n) = \mathbf{H}(k, k+n-1)\, \mathbf{H}(k+n-1, k+n) = \mathbf{H}(k, k+n-1)\, \mathbf{P}(k+n-1)$$

where P(k+n-1) is the one-step transition probability matrix. This is the forward Chapman-Kolmogorov equation.

Choose u = k+1; then

$$\mathbf{H}(k, k+n) = \mathbf{H}(k, k+1)\, \mathbf{H}(k+1, k+n) = \mathbf{P}(k)\, \mathbf{H}(k+1, k+n)$$

where P(k) is the one-step transition probability matrix. This is the backward Chapman-Kolmogorov equation.
Homogeneous Markov Chains

The one-step transition probabilities are independent of the time k:

$$\mathbf{P} = [p_{ij}] \quad \text{with} \quad p_{ij} = \Pr\{X_{k+1} = j \mid X_k = i\} \text{ for all } k$$

Even though the one-step transition probabilities are independent of k, this does not mean that the joint probability of X_{k+1} and X_k is also independent of k. Note that

$$\Pr\{X_{k+1} = j, X_k = i\} = \Pr\{X_{k+1} = j \mid X_k = i\} \Pr\{X_k = i\} = p_{ij} \Pr\{X_k = i\}$$
Example

Consider a two-processor computer system where time is divided into time slots and that operates as follows:
- At most one job can arrive during any time slot, and this can happen with probability α.
- Jobs are served by whichever processor is available; if both are available, the job is given to processor 1.
- If both processors are busy, then the job is lost.
- When a processor is busy, it can complete the job with probability β during any one time slot.
- If a job is submitted during a slot when both processors are busy but at least one processor completes a job, then the job is accepted (departures occur before arrivals).

Describe the Markov chain that models this system.
Example: Markov Chain

In the state transition diagram of the Markov chain (states 0, 1, 2 = number of busy processors), each transition is simply marked with the transition probability:

$$p_{00} = 1-\alpha, \qquad p_{01} = \alpha, \qquad p_{02} = 0$$

$$p_{10} = \beta(1-\alpha), \qquad p_{11} = (1-\beta)(1-\alpha) + \alpha\beta, \qquad p_{12} = \alpha(1-\beta)$$

$$p_{20} = \beta^2(1-\alpha), \qquad p_{21} = 2\beta(1-\beta)(1-\alpha) + \alpha\beta^2, \qquad p_{22} = (1-\beta)^2 + 2\alpha\beta(1-\beta)$$
Example: Markov Chain

Suppose that α = 0.5 and β = 0.7; then

$$\mathbf{P} = [p_{ij}] = \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.35 & 0.5 & 0.15 \\ 0.245 & 0.455 & 0.3 \end{bmatrix}$$
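A short sketch (assuming the slot dynamics described above) that builds P from α and β and reproduces the numeric matrix:

```python
import numpy as np

def two_processor_P(alpha, beta):
    """One-step transition matrix for the two-processor slotted system.

    States 0, 1, 2 count busy processors; departures occur before arrivals,
    a job arrives with probability alpha, and each busy processor finishes
    with probability beta (as in the slides)."""
    return np.array([
        [1 - alpha,           alpha,                                     0.0],
        [beta * (1 - alpha),  (1-beta)*(1-alpha) + alpha*beta,           alpha*(1-beta)],
        [beta**2 * (1-alpha), 2*beta*(1-beta)*(1-alpha) + alpha*beta**2,
                              (1-beta)**2 + 2*alpha*beta*(1-beta)],
    ])

P = two_processor_P(0.5, 0.7)
print(P)   # matches [[0.5, 0.5, 0], [0.35, 0.5, 0.15], [0.245, 0.455, 0.3]]
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution
```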
State Holding Times

Suppose that at step k the Markov chain has transitioned into state X_k = i. An interesting question is how long it will stay at state i.

Let V(i) be the random variable that represents the number of time slots that X_k = i. We are interested in the quantity Pr{V(i) = n}:

$$\Pr\{V(i) = n\} = \Pr\{X_{k+n} \ne i,\ X_{k+n-1} = i, \ldots, X_{k+1} = i \mid X_k = i\}$$

Conditioning repeatedly on the most recent state and using the Markov property, each of the n-1 intermediate steps remains at i with probability p_ii and the final step leaves i with probability 1 - p_ii, so

$$\Pr\{V(i) = n\} = (1 - p_{ii})\, p_{ii}^{\,n-1}$$

This is the geometric distribution with parameter p_ii. Clearly, V(i) has the memoryless property.
State Probabilities

An interesting quantity is the probability of finding the chain at the various states, i.e., we define

$$\pi_i(k) = \Pr\{X_k = i\}$$

For all possible states, we define the vector

$$\boldsymbol{\pi}(k) = [\pi_0(k),\ \pi_1(k),\ \ldots]$$

Using total probability we can write

$$\pi_j(k) = \sum_i \Pr\{X_k = j \mid X_{k-1} = i\} \Pr\{X_{k-1} = i\} = \sum_i p_{ij}(k-1)\, \pi_i(k-1)$$

In vector form, one can write

$$\boldsymbol{\pi}(k) = \boldsymbol{\pi}(k-1)\, \mathbf{P}(k-1) \qquad \text{or, for a homogeneous Markov chain,} \qquad \boldsymbol{\pi}(k) = \boldsymbol{\pi}(k-1)\, \mathbf{P}$$
State Probabilities Example

Suppose that

$$\boldsymbol{\pi}(0) = [1\ 0\ 0] \quad \text{with} \quad \mathbf{P} = \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.35 & 0.5 & 0.15 \\ 0.245 & 0.455 & 0.3 \end{bmatrix}$$

Find π(k) for k = 1, 2, ...:

$$\boldsymbol{\pi}(1) = \boldsymbol{\pi}(0)\, \mathbf{P} = [1\ 0\ 0] \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.35 & 0.5 & 0.15 \\ 0.245 & 0.455 & 0.3 \end{bmatrix} = [0.5\ 0.5\ 0]$$

Transient behavior of the system: MCTransient.m. In general, the transient behavior is obtained by solving the difference equation

$$\boldsymbol{\pi}(k) = \boldsymbol{\pi}(k-1)\, \mathbf{P}$$
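The slides reference a MATLAB script (MCTransient.m) that is not reproduced here; a minimal Python analogue of the same iteration might look like this:

```python
import numpy as np

P = np.array([[0.5,   0.5,   0.0 ],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.3 ]])
pi = np.array([1.0, 0.0, 0.0])   # pi(0)

# Iterate pi(k) = pi(k-1) P and watch the transient die out.
for k in range(1, 21):
    pi = pi @ P
    print(k, np.round(pi, 4))    # converges toward the stationary distribution
```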
Classification of States

Definitions:
- State j is reachable from state i if the probability to go from i to j in n > 0 steps is greater than zero (equivalently, if in the state transition diagram there is a path from i to j).
- A subset S of the state space X is closed if p_ij = 0 for every i ∈ S and j ∉ S.
- A state i is said to be absorbing if it is a single-element closed set.
- A closed set S of states is irreducible if any state j ∈ S is reachable from every state i ∈ S.
- A Markov chain is said to be irreducible if the state space X is irreducible.
Example

Irreducible Markov chain (diagram in the original): states 0, 1, 2 with transitions p00, p01, p10, p12, p21, p22; every state is reachable from every other state.

Reducible Markov chain (diagram in the original): states 0, 1, 2, 3, 4 with transitions p00, p01, p10, p12, p14, p22, p23, p32, p33. State 4 is an absorbing state (a single-element closed set), and {2, 3} forms a closed irreducible set.
Transient and Recurrent States

Hitting time (first passage time from i to j):

$$T_{ij} = \min\{k > 0 : X_0 = i,\ X_k = j\}$$

The recurrence time T_ii is the first time that the Markov chain returns to state i.

Let ρ_i be the probability that the state will return back to i given that it starts from i. Then,

$$\rho_i = \sum_{k=1}^{\infty} \Pr\{T_{ii} = k\}$$

The event that the chain will return to state i given that it started from i is equivalent to T_ii < ∞; therefore we can write

$$\rho_i = \sum_{k=1}^{\infty} \Pr\{T_{ii} = k\} = \Pr\{T_{ii} < \infty\}$$

A state is recurrent if ρ_i = 1 and transient if ρ_i < 1.
Theorems

- If a Markov chain has a finite state space, then at least one of the states is recurrent.
- If state i is recurrent and state j is reachable from state i, then state j is also recurrent.
- If S is a finite closed irreducible set of states, then every state in S is recurrent.
Positive and Null Recurrent States

Let M_i be the mean recurrence time of state i:

$$M_i = E[T_{ii}] = \sum_{k=1}^{\infty} k \Pr\{T_{ii} = k\}$$

A state is said to be positive recurrent if M_i < ∞. If M_i = ∞ then the state is said to be null recurrent.

Theorems:
- If state i is positive recurrent and state j is reachable from state i, then state j is also positive recurrent.
- If S is a closed irreducible set of states, then every state in S is positive recurrent, or every state in S is null recurrent, or every state in S is transient.
- If S is a finite closed irreducible set of states, then every state in S is positive recurrent.
Example

For the reducible chain above (states 0, 1, 2, 3, 4 with transitions p00, p01, p10, p12, p14, p22, p23, p32, p33): states 0 and 1 are transient states, states 2 and 3 are positive recurrent states, and the absorbing state 4 is a recurrent state.
Periodic and Aperiodic States

Suppose that the structure of the Markov chain is such that state i is visited after a number of steps that is an integer multiple of an integer d > 1. Then the state is called periodic with period d. If no such integer exists (i.e., d = 1), then the state is called aperiodic.

Example: the chain 0 <-> 1 <-> 2 with

$$\mathbf{P} = \begin{bmatrix} 0 & 1 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 1 & 0 \end{bmatrix}$$

has periodic states with period d = 2.
Steady State Analysis

Recall that the probability of finding the chain at state i after the kth step is given by

$$\pi_i(k) = \Pr\{X_k = i\}, \qquad \boldsymbol{\pi}(k) = [\pi_0(k),\ \pi_1(k),\ \ldots]$$

An interesting question is what happens in the long run, i.e.,

$$\pi_i = \lim_{k \to \infty} \pi_i(k)$$

This is referred to as the steady state, or equilibrium, or stationary state probability.

Questions:
- Do these limits exist?
- If they exist, do they converge to a legitimate probability distribution, i.e., Σ_i π_i = 1?
- How do we evaluate π_j, for all j?
Steady State Analysis

Recall the recursive probability

$$\boldsymbol{\pi}(k+1) = \boldsymbol{\pi}(k)\, \mathbf{P}$$

If a steady state exists, then π(k+1) ≈ π(k), and therefore the steady state probabilities are given by the solution to the equations

$$\boldsymbol{\pi} = \boldsymbol{\pi}\, \mathbf{P} \qquad \text{and} \qquad \sum_i \pi_i = 1$$

In an irreducible Markov chain, the presence of periodic states prevents the existence of a steady state probability. Example (periodic.m):

$$\mathbf{P} = \begin{bmatrix} 0 & 1 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 1 & 0 \end{bmatrix}, \qquad \boldsymbol{\pi}(0) = [1\ 0\ 0]$$
Steady State Analysis

THEOREM: In an irreducible aperiodic Markov chain consisting of positive recurrent states, a unique stationary state probability vector π exists such that π_j > 0 and

$$\pi_j = \lim_{k \to \infty} \pi_j(k) = \frac{1}{M_j}$$

where M_j is the mean recurrence time of state j. The steady state vector π is determined by solving

$$\boldsymbol{\pi} = \boldsymbol{\pi}\, \mathbf{P} \qquad \text{and} \qquad \sum_i \pi_i = 1$$

Such a chain is called an ergodic Markov chain.
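A minimal sketch (not from the slides) for computing the stationary vector numerically, by solving the balance equations together with the normalization constraint:

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P, sum(pi) = 1 for an ergodic finite chain."""
    n = P.shape[0]
    # Transposed balance equations (P^T - I) pi^T = 0, augmented with the
    # normalization row sum(pi) = 1; solve in the least-squares sense.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.5,   0.5,   0.0 ],
              [0.35,  0.5,   0.15],
              [0.245, 0.455, 0.3 ]])
print(stationary(P))   # agrees with iterating pi(k) = pi(k-1) P to convergence
```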
Birth-Death Example

Consider the discrete-time birth-death chain on states 0, 1, 2, ... where, from any state i, the chain moves down with probability p and up with probability 1-p (at state 0 it stays put with probability p):

$$\mathbf{P} = \begin{bmatrix} p & 1-p & 0 & 0 & \cdots \\ p & 0 & 1-p & 0 & \cdots \\ 0 & p & 0 & 1-p & \cdots \\ \vdots & & \ddots & \ddots & \ddots \end{bmatrix}$$

Thus, to find the steady state vector we need to solve

$$\boldsymbol{\pi} = \boldsymbol{\pi}\, \mathbf{P} \qquad \text{and} \qquad \sum_i \pi_i = 1$$
Birth-Death Example

In other words,

$$\pi_0 = p\,\pi_0 + p\,\pi_1$$

$$\pi_j = (1-p)\,\pi_{j-1} + p\,\pi_{j+1}, \qquad j = 1, 2, \ldots$$

Solving these equations we get

$$\pi_1 = \frac{1-p}{p}\,\pi_0, \qquad \pi_2 = \left(\frac{1-p}{p}\right)^2 \pi_0$$

In general,

$$\pi_j = \left(\frac{1-p}{p}\right)^j \pi_0$$

Summing all terms we get

$$\pi_0 \sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i = 1$$
Birth-Death Example

Therefore, for all states j we get

$$\pi_j = \left(\frac{1-p}{p}\right)^j \pi_0, \qquad \pi_0 = \left[\sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i\right]^{-1}$$

If p < 1/2, then (1-p)/p > 1 and the sum diverges, so

$$\pi_j = 0 \ \text{ for all } j$$

and all states are transient.

If p > 1/2, then (1-p)/p < 1, the geometric sum converges, and

$$\pi_0 = \left[\sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i\right]^{-1} = \frac{2p-1}{p} > 0, \qquad \pi_j = \left(\frac{1-p}{p}\right)^j \frac{2p-1}{p} \ \text{ for all } j$$

and all states are positive recurrent.
Birth-Death Example

If p = 1/2, then

$$\sum_{i=0}^{\infty} \left(\frac{1-p}{p}\right)^i = \sum_{i=0}^{\infty} 1 = \infty \qquad \Rightarrow \qquad \pi_j = 0 \ \text{ for all } j$$

and all states are null recurrent.
Reducible Markov Chains

The state space splits into a transient set T and one or more closed irreducible sets S_1, S_2, .... In steady state, we know that the Markov chain will eventually end up in an irreducible set (where the previous analysis still holds) or in an absorbing state. The only question that arises, in case there are two or more irreducible sets, is the probability that it will end up in each set.

Suppose we start from a transient state i. Then, there are two ways to go to a given closed set S:
- in one step, or
- by going to some r ∈ T after one step, and from there eventually to S.

Define

$$\beta_i(S) = \Pr\{X_k \in S \mid X_0 = i\}, \qquad k = 1, 2, \ldots$$
Reducible Markov Chains

First consider the one-step transition:

$$\Pr\{X_1 \in S \mid X_0 = i\} = \sum_{j \in S} p_{ij}$$

Next consider the general case for k = 2, 3, ...: the chain first moves to some transient state r ∈ T and then reaches S from there. Using the memoryless property,

$$\Pr\{X_k \in S,\ X_1 = r \in T, \ldots \mid X_0 = i\} = \beta_r(S)\, p_{ir}$$

Therefore,

$$\beta_i(S) = \sum_{j \in S} p_{ij} + \sum_{r \in T} p_{ir}\, \beta_r(S)$$
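A small sketch (with a hypothetical 4-state chain, not from the slides) that solves the linear system β_i(S) = Σ_{j∈S} p_ij + Σ_{r∈T} p_ir β_r(S) for the transient states:

```python
import numpy as np

# Hypothetical chain: states 0, 1 transient (T); states 2 and 3 absorbing.
P = np.array([[0.2, 0.5, 0.2, 0.1],
              [0.3, 0.1, 0.1, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

T = [0, 1]   # transient states
S = [2]      # target closed set: absorbing state 2

# beta = b + Q beta, where Q is P restricted to T and b is the one-step mass into S.
Q = P[np.ix_(T, T)]
b = P[np.ix_(T, S)].sum(axis=1)
beta = np.linalg.solve(np.eye(len(T)) - Q, b)
print(beta)   # beta_i(S) for each i in T
```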
Continuous-Time Markov Chains

In this case, transitions can occur at any time. Recall the Markov (memoryless) property

$$\Pr\{X(t_{k+1}) = x_{k+1} \mid X(t_k) = x_k, \ldots, X(t_0) = x_0\} = \Pr\{X(t_{k+1}) = x_{k+1} \mid X(t_k) = x_k\}$$

where t_1 < t_2 < ... < t_k.

Recall that the Markov property implies that
- X(t_{k+1}) depends only on X(t_k) (state memory), and
- it does not matter how long the chain has been in state X(t_k) (age memory).

The transition probabilities now need to be defined for every time instant, i.e., p_ij(t) is the probability that the chain transitions from state i to j at time t.
Transition Function

Define the transition function

$$p_{ij}(s, t) = \Pr\{X(t) = j \mid X(s) = i\}, \qquad s \le t$$

The continuous-time analogue of the Chapman-Kolmogorov equation is obtained by conditioning on the state at an intermediate time u, s <= u <= t:

$$p_{ij}(s, t) = \sum_r \Pr\{X(t) = j \mid X(u) = r, X(s) = i\} \Pr\{X(u) = r \mid X(s) = i\}$$

Using the memoryless property,

$$\Pr\{X(t) = j \mid X(u) = r, X(s) = i\} = \Pr\{X(t) = j \mid X(u) = r\}$$

therefore

$$p_{ij}(s, t) = \sum_r \Pr\{X(t) = j \mid X(u) = r\} \Pr\{X(u) = r \mid X(s) = i\}$$

Define H(s, t) = [p_ij(s, t)], i, j = 1, 2, ...; then

$$\mathbf{H}(s, t) = \mathbf{H}(s, u)\, \mathbf{H}(u, t), \qquad s \le u \le t$$

Note that H(s, s) = I.
Transition Rate Matrix

Consider the Chapman-Kolmogorov equation for s <= t <= t + Δt:

$$\mathbf{H}(s, t+\Delta t) = \mathbf{H}(s, t)\, \mathbf{H}(t, t+\Delta t)$$

Subtracting H(s, t) from both sides and dividing by Δt,

$$\frac{\mathbf{H}(s, t+\Delta t) - \mathbf{H}(s, t)}{\Delta t} = \mathbf{H}(s, t)\, \frac{\mathbf{H}(t, t+\Delta t) - \mathbf{I}}{\Delta t}$$

Taking the limit as Δt → 0,

$$\frac{\partial \mathbf{H}(s, t)}{\partial t} = \mathbf{H}(s, t)\, \mathbf{Q}(t)$$

where the transition rate matrix Q(t) is given by

$$\mathbf{Q}(t) = \lim_{\Delta t \to 0} \frac{\mathbf{H}(t, t+\Delta t) - \mathbf{I}}{\Delta t}$$
Homogeneous Case

In the homogeneous case, the transition functions do not depend on s and t separately, but only on the difference t - s; thus

$$p_{ij}(s, t) = p_{ij}(t - s)$$

It follows that

$$\mathbf{H}(s, t) = \mathbf{P}(t - s)$$

and the transition rate matrix

$$\mathbf{Q}(t) = \lim_{\Delta t \to 0} \frac{\mathbf{H}(t, t+\Delta t) - \mathbf{I}}{\Delta t} = \lim_{\Delta t \to 0} \frac{\mathbf{P}(\Delta t) - \mathbf{I}}{\Delta t} = \mathbf{Q}, \ \text{ a constant}$$

Thus

$$\frac{\partial \mathbf{P}(t)}{\partial t} = \mathbf{P}(t)\, \mathbf{Q} \quad \text{with} \quad \mathbf{P}(0) = \mathbf{I} \qquad \Rightarrow \qquad \mathbf{P}(t) = e^{\mathbf{Q}t}$$
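A quick numeric sketch of P(t) = e^{Qt} using SciPy's matrix exponential (the two-state rate matrix below is an assumed example, not from the slides):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state rate matrix (rows sum to 0).
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

for t in (0.0, 0.1, 1.0, 10.0):
    P_t = expm(Q * t)            # P(t) = e^{Qt}
    print(t, np.round(P_t, 4))   # rows sum to 1; P(0) = I; rows converge to pi as t grows
```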
State Holding Time

(Figure-only slides in the original. The state holding times of a homogeneous continuous-time Markov chain are exponentially distributed, which is used in the derivation of Q below.)
Transition Rate Matrix Q

Recall that

$$\frac{\partial \mathbf{P}(t)}{\partial t} = \mathbf{P}(t)\, \mathbf{Q}$$

Evaluating this at t = 0, where P(0) = I, we have

$$\mathbf{Q} = \frac{\partial \mathbf{P}(t)}{\partial t}\bigg|_{t=0} \qquad \Rightarrow \qquad q_{ij} = \frac{\partial p_{ij}(t)}{\partial t}\bigg|_{t=0}$$

If i ≠ j: exponential residual lifetime,

$$p_{ij}(t) = \Pr\{\tau_{ij} < t\} = 1 - e^{-\lambda_{ij} t} \qquad \Rightarrow \qquad q_{ij} = \frac{\partial p_{ij}(t)}{\partial t}\bigg|_{t=0} = \lambda_{ij}\, e^{-\lambda_{ij} t}\Big|_{t=0} = \lambda_{ij}$$

In other words, q_ij is the rate of the Poisson process that activates the event that makes the transition from i to j.
Transition Rate Matrix Q

If i = j: exponential residual lifetime,

$$p_{ii}(t) = \Pr\{\tau_{ii} > t\} = e^{-\lambda_{ii} t} \qquad \Rightarrow \qquad q_{ii} = \frac{\partial p_{ii}(t)}{\partial t}\bigg|_{t=0} = -\lambda_{ii}\, e^{-\lambda_{ii} t}\Big|_{t=0} = -\lambda_{ii}$$

Here 1 - p_ii(t) is the probability that the chain leaves state i by time t.

Note that for each row i, the sum Σ_j p_ij(t) = 1 implies

$$\sum_j q_{ij} = 0 \qquad \Rightarrow \qquad q_{ii} = -\sum_{j \ne i} q_{ij}$$
Transition Probabilities P

Suppose that state transitions occur at random points in time T_1 < T_2 < ... < T_k < ..., and let X_k be the state after the transition at T_k. Define

$$P_{ij} = \Pr\{X_{k+1} = j \mid X_k = i\}$$

Recall that in the case of the superposition of two or more Poisson processes, the probability that the next event is from process j is given by λ_j / Λ. In this case, we have

$$P_{ij} = \frac{q_{ij}}{-q_{ii}} = \frac{q_{ij}}{\sum_{r \ne i} q_{ir}}, \quad i \ne j, \qquad \text{and} \qquad P_{ii} = 0$$
Example

Assume a computer system where jobs arrive according to a Poisson process with rate λ. Each job is processed using a First In First Out (FIFO) policy. The processing time of each job is exponential with rate μ. The computer has a buffer to store up to two jobs that wait for processing; jobs that find the buffer full are lost.

- Draw the state transition diagram.
- Find the rate transition matrix Q.
- Find the state transition matrix P.
Example

State transition diagram: states 0, 1, 2, 3 (number of jobs in the system), with arrival transitions at rate λ and departure transitions at rate μ.

The rate transition matrix is given by

$$\mathbf{Q} = \begin{bmatrix} -\lambda & \lambda & 0 & 0 \\ \mu & -(\lambda+\mu) & \lambda & 0 \\ 0 & \mu & -(\lambda+\mu) & \lambda \\ 0 & 0 & \mu & -\mu \end{bmatrix}$$

The state transition matrix is given by

$$\mathbf{P} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ \frac{\mu}{\lambda+\mu} & 0 & \frac{\lambda}{\lambda+\mu} & 0 \\ 0 & \frac{\mu}{\lambda+\mu} & 0 & \frac{\lambda}{\lambda+\mu} \\ 0 & 0 & 1 & 0 \end{bmatrix}$$
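A short sketch that builds Q for this example (assumed values λ = 1.0, μ = 1.5) and derives the embedded jump matrix P from it via P_ij = q_ij / (-q_ii):

```python
import numpy as np

lam, mu = 1.0, 1.5   # assumed arrival and service rates

# Rate matrix for states 0..3 (jobs in system); rows sum to 0.
Q = np.array([[-lam,       lam,        0.0,  0.0],
              [  mu, -(lam+mu),        lam,  0.0],
              [ 0.0,        mu, -(lam+mu),   lam],
              [ 0.0,       0.0,         mu,  -mu]])

# Embedded (jump-chain) transition matrix: P_ij = q_ij / (-q_ii), P_ii = 0.
rates_out = -np.diag(Q)
P = Q / rates_out[:, None]
np.fill_diagonal(P, 0.0)
print(P)   # rows sum to 1
```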
State Probabilities and Transient Analysis

Similar to the discrete-time case, we define

$$\pi_j(t) = \Pr\{X(t) = j\}$$

In vector form,

$$\boldsymbol{\pi}(t) = [\pi_1(t),\ \pi_2(t),\ \ldots]$$

with initial probabilities π(0). Using our previous notation (for a homogeneous chain),

$$\boldsymbol{\pi}(t) = \boldsymbol{\pi}(0)\, \mathbf{P}(t) = \boldsymbol{\pi}(0)\, e^{\mathbf{Q}t}$$

Obtaining a general closed-form solution is not easy! Differentiating with respect to t gives us more insight:

$$\frac{d\boldsymbol{\pi}(t)}{dt} = \boldsymbol{\pi}(t)\, \mathbf{Q} \qquad \Leftrightarrow \qquad \frac{d\pi_j(t)}{dt} = q_{jj}\,\pi_j(t) + \sum_{i \ne j} q_{ij}\,\pi_i(t)$$

Note: since P(t) = e^{Qt}, we have P'(t) = Q e^{Qt} = e^{Qt} Q.
Probability Fluid View

We view π_j(t) as the level of a probability fluid that is stored at each node j (0 = empty, 1 = full). Since q_jj = -Σ_{r≠j} q_jr,

$$\frac{d\pi_j(t)}{dt} = \underbrace{\sum_{i \ne j} q_{ij}\,\pi_i(t)}_{\text{inflow}} \ -\ \underbrace{\Big(\sum_{r \ne j} q_{jr}\Big)\pi_j(t)}_{\text{outflow}}$$

The change in the probability fluid at node j is the inflow minus the outflow.
Steady State Analysis

Often we are interested in the long-run probabilistic behavior of the Markov chain, i.e.,

$$\pi_j = \lim_{t \to \infty} \pi_j(t)$$

These are referred to as steady state probabilities, or equilibrium state probabilities, or stationary state probabilities. As with the discrete-time case, we need to address the following questions:
- Under what conditions do the limits exist?
- If they exist, do they form legitimate probabilities?
- How can we evaluate these limits?
Steady State Analysis

Theorem: In an irreducible continuous-time Markov chain consisting of positive recurrent states, a unique stationary state probability vector π with

$$\pi_j = \lim_{t \to \infty} \pi_j(t)$$

exists. These probabilities are independent of the initial state probability and can be obtained by solving

$$\boldsymbol{\pi}\, \mathbf{Q} = \mathbf{0} \qquad \text{and} \qquad \sum_j \pi_j = 1$$

Using the probability fluid view, in steady state the fluid level at each node stops changing:

$$0 = q_{jj}\,\pi_j + \sum_{i \ne j} q_{ij}\,\pi_i \qquad \text{(inflow = outflow)}$$
Example

For the previous example (states 0, 1, 2, 3 with the rate matrix Q above), what are the steady state probabilities?

Solve

$$\boldsymbol{\pi}\, \mathbf{Q} = [\pi_0\ \pi_1\ \pi_2\ \pi_3] \begin{bmatrix} -\lambda & \lambda & 0 & 0 \\ \mu & -(\lambda+\mu) & \lambda & 0 \\ 0 & \mu & -(\lambda+\mu) & \lambda \\ 0 & 0 & \mu & -\mu \end{bmatrix} = \mathbf{0}$$

subject to

$$\pi_0 + \pi_1 + \pi_2 + \pi_3 = 1$$
Example

The solution is obtained as follows:

$$-\lambda \pi_0 + \mu \pi_1 = 0 \qquad \Rightarrow \qquad \pi_1 = \frac{\lambda}{\mu}\,\pi_0$$

$$\lambda \pi_0 - (\lambda+\mu)\pi_1 + \mu \pi_2 = 0 \qquad \Rightarrow \qquad \pi_2 = \left(\frac{\lambda}{\mu}\right)^2 \pi_0$$

$$\lambda \pi_1 - (\lambda+\mu)\pi_2 + \mu \pi_3 = 0 \qquad \Rightarrow \qquad \pi_3 = \left(\frac{\lambda}{\mu}\right)^3 \pi_0$$

Finally, π_0 + π_1 + π_2 + π_3 = 1 gives

$$\pi_0 = \frac{1}{1 + \dfrac{\lambda}{\mu} + \left(\dfrac{\lambda}{\mu}\right)^2 + \left(\dfrac{\lambda}{\mu}\right)^3}$$
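A numerical cross-check (same assumed λ = 1.0, μ = 1.5 as before), solving πQ = 0 with the normalization constraint and comparing against the closed form:

```python
import numpy as np

lam, mu = 1.0, 1.5
Q = np.array([[-lam,       lam,        0.0,  0.0],
              [  mu, -(lam+mu),        lam,  0.0],
              [ 0.0,        mu, -(lam+mu),   lam],
              [ 0.0,       0.0,         mu,  -mu]])

# Solve pi Q = 0, sum(pi) = 1: stack Q^T with a row of ones.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

rho = lam / mu
pi_closed = np.array([rho**j for j in range(4)])
pi_closed /= pi_closed.sum()
assert np.allclose(pi, pi_closed)   # matches the geometric closed-form solution
print(pi)
```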
Birth-Death Chain

Consider the continuous-time birth-death chain with birth rates λ_i and death rates μ_i:

$$\mathbf{Q} = \begin{bmatrix} -\lambda_0 & \lambda_0 & 0 & 0 & \cdots \\ \mu_1 & -(\lambda_1+\mu_1) & \lambda_1 & 0 & \cdots \\ 0 & \mu_2 & -(\lambda_2+\mu_2) & \lambda_2 & \cdots \\ \vdots & & \ddots & \ddots & \ddots \end{bmatrix}$$

Find the steady state probabilities. Similarly to the previous example, we solve

$$\boldsymbol{\pi}\, \mathbf{Q} = \mathbf{0} \qquad \text{and} \qquad \sum_{i=0}^{\infty} \pi_i = 1$$
Example

The solution is obtained as follows:

$$-\lambda_0 \pi_0 + \mu_1 \pi_1 = 0 \qquad \Rightarrow \qquad \pi_1 = \frac{\lambda_0}{\mu_1}\,\pi_0$$

$$\lambda_0 \pi_0 - (\lambda_1+\mu_1)\pi_1 + \mu_2 \pi_2 = 0 \qquad \Rightarrow \qquad \pi_2 = \frac{\lambda_0 \lambda_1}{\mu_1 \mu_2}\,\pi_0$$

In general,

$$\lambda_{j-1}\pi_{j-1} - (\lambda_j+\mu_j)\pi_j + \mu_{j+1}\pi_{j+1} = 0 \qquad \Rightarrow \qquad \pi_j = \frac{\lambda_0 \lambda_1 \cdots \lambda_{j-1}}{\mu_1 \mu_2 \cdots \mu_j}\,\pi_0$$

Making the sum equal to 1,

$$\pi_0 \left(1 + \sum_{j=1}^{\infty} \frac{\lambda_0 \cdots \lambda_{j-1}}{\mu_1 \cdots \mu_j}\right) = 1$$

A solution exists if

$$S = 1 + \sum_{j=1}^{\infty} \frac{\lambda_0 \cdots \lambda_{j-1}}{\mu_1 \cdots \mu_j} < \infty$$
Uniformization of Markov Chains

In general, discrete-time models are easier to work with, and computers (which are needed to solve such models) operate in discrete time. Thus, we need a way to turn continuous-time Markov chains into discrete-time Markov chains.

Uniformization procedure:
- Recall that the total rate out of state i is -q_ii = Λ(i). Pick a uniform rate γ such that γ ≥ Λ(i) for all states i.
- The difference γ - Λ(i) implies a fictitious event that returns the chain back to state i (a self-loop).

Let P^U_ij be the transition probability from state i to state j for the discrete-time uniformized Markov chain; then

$$P^U_{ij} = \begin{cases} \dfrac{q_{ij}}{\gamma} & \text{if } i \ne j \\[2ex] 1 - \dfrac{\sum_{j \ne i} q_{ij}}{\gamma} & \text{if } i = j \end{cases}$$
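A minimal sketch of the construction (reusing the buffer example's Q, with γ chosen as the largest exit rate); note that the definition above is just P^U = I + Q/γ in matrix form:

```python
import numpy as np

lam, mu = 1.0, 1.5
Q = np.array([[-lam,       lam,        0.0,  0.0],
              [  mu, -(lam+mu),        lam,  0.0],
              [ 0.0,        mu, -(lam+mu),   lam],
              [ 0.0,       0.0,         mu,  -mu]])

gamma = max(-np.diag(Q))                 # uniform rate >= every exit rate Lambda(i)
P_U = np.eye(Q.shape[0]) + Q / gamma     # off-diagonal q_ij/gamma, self-loops on diagonal

assert np.allclose(P_U.sum(axis=1), 1.0) and (P_U >= 0).all()
print(P_U)
```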
Poisson Processes

Summary
- The Poisson Process
- Properties of the Poisson Process
- Interarrival times
- Memoryless property and the residual lifetime paradox
- Superposition of Poisson processes
- Random selection of Poisson points
- Bulk arrivals and compound Poisson processes
The Poisson Counting Process

Let {N(t)} be the process which counts the number of events that have occurred in the interval [0, t). For any 0 ≤ t_1 ≤ ... ≤ t_k,

$$0 = N(0) \le N(t_1) \le N(t_2) \le \cdots \le N(t_k) \le \cdots$$

(Sample path in the original figure: a staircase function of t that jumps by 1 at each event instant t_1, t_2, t_3, ..., t_k.)

Process with independent increments: the random variables N(t_1), N(t_1, t_2), ..., N(t_{k-1}, t_k), ... are mutually independent, where

$$N(t_{k-1}, t_k) = N(t_k) - N(t_{k-1})$$

Process with stationary independent increments: the random variable N(t_{k-1}, t_k) does not depend on t_{k-1}, t_k separately, but only on the difference t_k - t_{k-1}.
The Poisson Counting Process

Assumptions:
- At most one event can occur at any time instant (no two or more events can occur at the same time).
- The process has stationary independent increments:

$$\Pr\{N(t_{k-1}, t_k) = n\} = \Pr\{N(t_k - t_{k-1}) = n\}$$

Given that a process satisfies the above assumptions, find

$$P_n(t) = \Pr\{N(t) = n\}, \qquad n = 0, 1, 2, \ldots$$
The Poisson Process

Step 1: Determine P_0(t) = Pr{N(t) = 0}. Starting from

$$\Pr\{N(t+s) = 0\} = \Pr\{N(t) = 0 \text{ and } N(t, t+s) = 0\} = \Pr\{N(t) = 0\} \Pr\{N(s) = 0\}$$

(by stationary independent increments), we get

$$P_0(t+s) = P_0(t)\, P_0(s)$$

Lemma: Let g(t) be a differentiable function for all t ≥ 0 such that g(0) = 1 and g(t) ≤ 1 for all t > 0. Then for any t, s ≥ 0,

$$g(t+s) = g(t)\, g(s) \qquad \Rightarrow \qquad g(t) = e^{-\lambda t} \ \text{ for some } \lambda > 0$$
The Poisson Process

Therefore,

$$P_0(t) = \Pr\{N(t) = 0\} = e^{-\lambda t}$$

Step 2: Determine P_0(Δt) for a small Δt:

$$P_0(\Delta t) = e^{-\lambda \Delta t} = 1 - \lambda \Delta t + \frac{(\lambda \Delta t)^2}{2!} - \frac{(\lambda \Delta t)^3}{3!} + \cdots = 1 - \lambda \Delta t + o(\Delta t)$$

Step 3: Determine P_n(Δt) for a small Δt. For n = 2, 3, ..., since by assumption no two events can occur at the same time,

$$P_n(\Delta t) = \Pr\{N(\Delta t) = n\} = o(\Delta t)$$

As a result, for n = 1,

$$P_1(\Delta t) = \lambda \Delta t + o(\Delta t)$$
The Poisson Process

Step 4: Determine P_n(t + Δt) for any n. Conditioning on the number of events up to t,

$$P_n(t+\Delta t) = \Pr\{N(t+\Delta t) = n\} = \sum_{k=0}^{n} P_{n-k}(t)\, P_k(\Delta t)$$

$$= P_n(t)\, P_0(\Delta t) + P_{n-1}(t)\, P_1(\Delta t) + o(\Delta t) = P_n(t)\,[1 - \lambda \Delta t] + P_{n-1}(t)\,\lambda \Delta t + o(\Delta t)$$

Moving terms between sides,

$$\frac{P_n(t+\Delta t) - P_n(t)}{\Delta t} = -\lambda P_n(t) + \lambda P_{n-1}(t) + \frac{o(\Delta t)}{\Delta t}$$

Taking the limit as Δt → 0,

$$\frac{dP_n(t)}{dt} = -\lambda P_n(t) + \lambda P_{n-1}(t)$$
The Poisson Process

Step 5: Solve the differential equation to obtain

$$P_n(t) = \Pr\{N(t) = n\} = \frac{(\lambda t)^n}{n!}\, e^{-\lambda t}, \qquad t > 0, \quad n = 0, 1, 2, \ldots$$

This expression is known as the Poisson distribution, and it fully characterizes the stochastic process {N(t)} in [0, t) under the assumptions that
- no two events can occur at exactly the same time, and
- the process has independent stationary increments.

You should verify that

$$E[N(t)] = \lambda t \qquad \text{and} \qquad \operatorname{var}[N(t)] = \lambda t$$

The parameter λ has the interpretation of the rate at which events arrive.
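A quick empirical check (a sketch with assumed rate λ = 2.0 and horizon t = 3.0) that counting exponential interarrival epochs reproduces the Poisson mean and variance λt:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, trials = 2.0, 3.0, 100_000

# Draw more interarrival times than could ever fit in [0, t), then count
# how many arrival epochs land before t.  40 >> lam*t = 6, so the
# truncation error is negligible.
inter = rng.exponential(1.0 / lam, size=(trials, 40))
arrival_epochs = np.cumsum(inter, axis=1)
counts = (arrival_epochs < t).sum(axis=1)

print(counts.mean(), counts.var())   # both close to lam * t = 6.0
```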
Properties of the Poisson Process: Interevent Times

Let t_{k-1} be the time when the (k-1)th event has occurred, and let V_k denote the (random variable) interevent time between the (k-1)th and kth events. What is the cdf of V_k, G_k(t)?

$$G_k(t) = \Pr\{V_k \le t\} = 1 - \Pr\{V_k > t\}$$

$$= 1 - \Pr\{0 \text{ arrivals in the interval } [t_{k-1}, t_{k-1}+t)\}$$

$$= 1 - \Pr\{N(t) = 0\} \qquad \text{(stationary independent increments)}$$

$$G_k(t) = 1 - e^{-\lambda t}$$

The interevent times follow the exponential distribution.
Properties of the Poisson Process: Exponential Interevent Times

The process {V_k}, k = 1, 2, ..., that corresponds to the interevent times of a Poisson process is an iid stochastic sequence with cdf

$$G(t) = \Pr\{V_k \le t\} = 1 - e^{-\lambda t}$$

Therefore, the Poisson process is also a renewal process. The corresponding pdf is

$$g(t) = \lambda e^{-\lambda t}, \qquad t \ge 0$$

One can easily show that

$$E[V_k] = \frac{1}{\lambda} \qquad \text{and} \qquad \operatorname{var}[V_k] = \frac{1}{\lambda^2}$$
Properties of the Poisson Process: Memoryless Property

Let t_k be the time when the previous event has occurred, and let V denote the time until the next event. Assuming that we have been at the current state for z time units, let Y = V - z be the remaining time until the next event. What is the cdf of Y?

$$F_Y(t) = \Pr\{Y \le t\} = \Pr\{V \le z + t \mid V > z\}$$

$$= \frac{\Pr\{V > z \text{ and } V \le z + t\}}{\Pr\{V > z\}} = \frac{\Pr\{z < V \le z + t\}}{1 - \Pr\{V \le z\}}$$

$$= \frac{G(z+t) - G(z)}{1 - G(z)} = \frac{\big(1 - e^{-\lambda(z+t)}\big) - \big(1 - e^{-\lambda z}\big)}{e^{-\lambda z}}$$

$$F_Y(t) = 1 - e^{-\lambda t} = G(t)$$

Memoryless! It does not matter that we have already spent z time units at the current state.
Memoryless Property

This is a unique property of the exponential distribution. If a process has the memoryless property, then it must be exponential, i.e.,

$$\Pr\{V \le z + t \mid V > z\} = \Pr\{V \le t\} \qquad \Rightarrow \qquad \Pr\{V \le t\} = 1 - e^{-\lambda t}$$

Poisson process ⇔ exponential interevent times G(t) = 1 - e^{-λt} ⇔ memoryless property.
Superposition of Poisson Processes

Consider a DES with m events, each modeled as a Poisson process with rate λ_i, i = 1, ..., m. What is the resulting process?

Suppose at time t_k we observe event 1. Let Y_1 be the time until the next event 1; its cdf is G_1(t) = 1 - exp{-λ_1 t}. Let Y_1, ..., Y_m denote the residual times until the next occurrence of the corresponding events. By the memoryless property (Y_j = V_j - z_j), their cdfs are

$$G_i(t) = 1 - e^{-\lambda_i t}$$

Let Y* be the time until the next event (of any type):

$$Y^* = \min_i \{Y_i\}$$

Therefore, we need to find

$$G_{Y^*}(t) = \Pr\{Y^* \le t\}$$
Superposition of Poisson Processes

$$G_{Y^*}(t) = \Pr\{Y^* \le t\} = \Pr\{\min_i\{Y_i\} \le t\} = 1 - \Pr\{\min_i\{Y_i\} > t\}$$

$$= 1 - \Pr\{Y_1 > t, \ldots, Y_m > t\} = 1 - \prod_{i=1}^{m} \Pr\{Y_i > t\} \qquad \text{(independence)}$$

$$= 1 - \prod_{i=1}^{m} e^{-\lambda_i t} = 1 - e^{-\Lambda t}, \qquad \text{where } \Lambda = \sum_{i=1}^{m} \lambda_i$$

The superposition of m Poisson processes is also a Poisson process, with rate equal to the sum of the rates of the individual processes.
Superposition of Poisson Processes

Suppose that at time t_k an event has occurred. What is the probability that the next event to occur is event j?

Without loss of generality, let j = 1 and define Y' = min{Y_i : i = 2, ..., m}. Then

$$\Pr\{\text{next event is } j = 1\} = \Pr\{Y_1 \le Y'\}$$

Since Y' is exponentially distributed with rate Λ' = Σ_{i=2}^m λ_i (its cdf is 1 - exp{-Λ' t}),

$$\Pr\{Y_1 \le Y'\} = \int_0^{\infty} \left(\int_0^{y'} \lambda_1 e^{-\lambda_1 y_1}\, dy_1\right) \Lambda' e^{-\Lambda' y'}\, dy' = \int_0^{\infty} \left(1 - e^{-\lambda_1 y'}\right) \Lambda' e^{-\Lambda' y'}\, dy' = \frac{\lambda_1}{\Lambda}$$

where Λ = Σ_{i=1}^m λ_i.
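A small simulation sketch (with assumed rates, not from the slides) checking both results: E[Y*] = 1/Λ for the merged stream, and each event type wins the race with probability λ_i/Λ:

```python
import numpy as np

rng = np.random.default_rng(1)
rates = np.array([0.5, 1.0, 2.5])          # assumed rates of the m processes
Lam, trials = rates.sum(), 100_000

# Race the residual times: Y* = min_i Y_i, winner = argmin.
samples = rng.exponential(1.0 / rates, size=(trials, len(rates)))
y_star = samples.min(axis=1)
winners = samples.argmin(axis=1)

print(y_star.mean(), 1.0 / Lam)            # E[Y*] = 1/Lambda
print(np.bincount(winners) / trials)       # empirical win frequencies
print(rates / Lam)                         # theoretical lambda_i / Lambda
```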
Residual Lifetime Paradox

Suppose that buses pass by the bus station according to a Poisson process with rate λ. A passenger arrives at the bus station at some random point. How long does the passenger have to wait? (Let V be the interval between buses b_k and b_{k+1} around the passenger's arrival, Z the age since bus b_k, and Y the residual time until bus b_{k+1}.)

Solution 1: E[V] = 1/λ. Therefore, since the passenger will (on average) arrive in the middle of the interval, he has to wait for E[Y] = E[V]/2 = 1/(2λ). But using the memoryless property, the time until the next bus is exponentially distributed with rate λ, therefore E[Y] = 1/λ, not 1/(2λ)!

Solution 2: Using the memoryless property, the time until the next bus is exponentially distributed with rate λ, therefore E[Y] = 1/λ. But note that E[Z] = 1/λ, therefore E[V] = E[Z] + E[Y] = 2/λ, not 1/λ!

(The resolution: a randomly arriving passenger is more likely to land in a long interarrival interval, so the interval containing the arrival has mean 2/λ rather than the unconditioned mean 1/λ.)
Random Selection of Poisson Points

Let t_1, t_2, ..., t_i, ... represent the random arrival points of a Poisson process X(t) with parameter λt. Associated with each point, we define an independent Bernoulli random variable N_i where

$$P\{N_i = 1\} = p, \qquad P\{N_i = 0\} = 1 - p = q$$

Define

$$Y(t) = \sum_{i=1}^{X(t)} N_i, \qquad Z(t) = \sum_{i=1}^{X(t)} (1 - N_i) = X(t) - Y(t)$$

We claim that both Y(t) and Z(t) are independent Poisson processes with parameters λpt and λqt respectively.
Random Selection of Poisson Points

Proof:

$$P\{Y(t) = k\} = \sum_{n=k}^{\infty} P\{Y(t) = k \mid X(t) = n\}\, P\{X(t) = n\}$$

where

$$P\{X(t) = n\} = e^{-\lambda t}\, \frac{(\lambda t)^n}{n!}, \qquad P\{Y(t) = k \mid X(t) = n\} = \binom{n}{k} p^k q^{n-k}, \quad 0 \le k \le n$$

Hence

$$P\{Y(t) = k\} = e^{-\lambda t} \sum_{n=k}^{\infty} \frac{(\lambda t)^n}{n!} \binom{n}{k} p^k q^{n-k} = e^{-\lambda t}\, \frac{(\lambda p t)^k}{k!} \sum_{n=k}^{\infty} \frac{(\lambda q t)^{n-k}}{(n-k)!}$$

$$= e^{-\lambda t}\, \frac{(\lambda p t)^k}{k!}\, e^{\lambda q t} = e^{-\lambda p t}\, \frac{(\lambda p t)^k}{k!}, \qquad k = 0, 1, 2, \ldots$$

so Y(t) ~ P(λpt).
Random Selection of Poisson Points

Similarly, Z(t) ~ P(λqt). More generally,

$$P\{Y(t) = k, Z(t) = m\} = P\{Y(t) = k, X(t) - Y(t) = m\} = P\{Y(t) = k \mid X(t) = k+m\}\, P\{X(t) = k+m\}$$

$$= \binom{k+m}{k} p^k q^m\, e^{-\lambda t}\, \frac{(\lambda t)^{k+m}}{(k+m)!} = e^{-\lambda p t}\, \frac{(\lambda p t)^k}{k!} \cdot e^{-\lambda q t}\, \frac{(\lambda q t)^m}{m!}$$

$$= P\{Y(t) = k\}\, P\{Z(t) = m\}$$

which shows that Y(t) and Z(t) are independent.
Bulk Arrivals and Compound Poisson Processes

Consider a random number of events C_i occurring simultaneously at each event instant of a Poisson process. (Figure in the original: (a) a Poisson process with one event at each instant t_1, t_2, ..., t_n; (b) a compound Poisson process with batches of size C_1 = 3, C_2 = 2, ..., C_i = 4, ... at the same instants.)

Let P{C_i = k} = p_k, k = 0, 1, 2, ...; then

$$X(t) = \sum_{i=1}^{N(t)} C_i$$

is a compound Poisson process, where N(t) is an ordinary Poisson process.

$$P\{X(t) = m\} = \sum_{n=0}^{\infty} P\{X(t) = m \mid N(t) = n\}\, P\{N(t) = n\}$$
Bulk Arrivals and Compound Poisson Processes

Define the generating functions

$$\phi_C(z) = E\big[z^{C_i}\big] = \sum_{k=0}^{\infty} P\{C_i = k\}\, z^k, \qquad \phi_X(z) = E\big[z^{X(t)}\big] = \sum_{m=0}^{\infty} P\{X(t) = m\}\, z^m$$

Then

$$\phi_X(z) = \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} P\{X(t) = m \mid N(t) = n\}\, P\{N(t) = n\}\, z^m = \sum_{n=0}^{\infty} E\Big[z^{\sum_{i=1}^{n} C_i}\Big]\, P\{N(t) = n\}$$

$$= \sum_{n=0}^{\infty} \big(\phi_C(z)\big)^n\, P\{N(t) = n\} = e^{-\lambda t (1 - \phi_C(z))}$$
Bulk Arrivals and Compound Poisson Processes

We can write

$$\phi_X(z) = e^{-\lambda_1 t (1 - z)}\, e^{-\lambda_2 t (1 - z^2)} \cdots e^{-\lambda_k t (1 - z^k)} \cdots$$

where λ_k = λ p_k, which shows that the compound Poisson process can be expressed as the sum of integer-scaled independent Poisson processes m_1(t), m_2(t), ...; thus

$$X(t) = \sum_{k=1}^{\infty} k\, m_k(t)$$

More generally, every linear combination of independent Poisson processes represents a compound Poisson process.
Queueing Theory I

Summary
- Little's Law
- Queueing System Notation
- Stationary Analysis of Elementary Queueing Systems: M/M/1, M/M/m, M/M/1/K, ...
Little's Law

Let a(t) be the process that counts the number of arrivals up to time t, and d(t) the process that counts the number of departures up to t. Then the number in the system is N(t) = a(t) - d(t), and the area between the two counting curves up to t is γ(t).

- Average arrival rate (up to t): λ_t = a(t)/t
- Average time each customer spends in the system: T_t = γ(t)/a(t)
- Average number in the system: N_t = γ(t)/t
Little's Law

Combining the three definitions,

$$N_t = \frac{\gamma(t)}{t} = \frac{a(t)}{t} \cdot \frac{\gamma(t)}{a(t)} = \lambda_t\, T_t$$

Taking the limit as t goes to infinity,

$$E[N] = \lambda\, E[T]$$

where E[N] is the expected number of customers in the system, λ is the arrival rate into the system, and E[T] is the expected time in the system.
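A simulation sketch for a single-server FIFO queue (assumed λ = 0.8, μ = 1.0, i.e., an M/M/1 queue) that checks E[N] = λ E[T] empirically:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mu, n_jobs = 0.8, 1.0, 50_000

# Single FIFO server: departure_k = max(arrival_k, departure_{k-1}) + service_k.
arrivals = np.cumsum(rng.exponential(1.0 / lam, n_jobs))
services = rng.exponential(1.0 / mu, n_jobs)
departures = np.empty(n_jobs)
prev = 0.0
for k in range(n_jobs):
    prev = max(arrivals[k], prev) + services[k]
    departures[k] = prev

T = departures - arrivals            # per-customer system times
N_avg = T.sum() / departures[-1]     # time-average number in system, gamma(t)/t
print(N_avg, lam * T.mean())         # Little's law: both ~ rho/(1-rho) = 4
```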
Generality of Little's Law

Little's Law E[N] = λ E[T] is a pretty general result:
- It does not depend on the arrival process distribution.
- It does not depend on the service process distribution.
- It does not depend on the number of servers and buffers in the system.

It applies to any queueing network with aggregate arrival rate λ.
Specification of Queueing Systems

- Customer arrival and service stochastic models
- Structural parameters
  - Number of servers
  - Storage capacity
- Operating policies
  - Customer class differentiation (are all customers treated the same or do some have priority over others?)
  - Scheduling/queueing policies (which customer is served next)
  - Admission policies (which/when customers are admitted)
Queueing System Notation

A/B/m/K/N

- A: arrival process (M: Markovian, D: deterministic, Er: Erlang, G: general)
- B: service process (M: Markovian, D: deterministic, Er: Erlang, G: general)
- m: number of servers, m = 1, 2, ...
- K: storage capacity, K = 1, 2, ... (if infinite, it is omitted)
- N: number of customers, N = 1, 2, ... (for closed networks; otherwise it is omitted)
Performance Measures of Interest

We are interested in steady state behavior: even though it is possible to pursue transient results, it is a significantly more difficult task.

- E[S]: average system time (average time spent in the system)
- E[W]: average waiting time (average time spent waiting in queue(s))
- E[X]: average queue length
- E[U]: average utilization (fraction of time that the resources are being used)
- E[R]: average throughput (rate at which customers leave the system)
- E[L]: average customer loss (rate at which customers are lost, or probability that a customer is lost)
Recall the Birth-Death Chain Example

Consider the continuous-time birth-death chain with birth rates λ_j and death rates μ_j. At steady state, we obtain the balance equations

$$-\lambda_0 \pi_0 + \mu_1 \pi_1 = 0 \qquad \Rightarrow \qquad \pi_1 = \frac{\lambda_0}{\mu_1}\,\pi_0$$

In general,

$$\lambda_{j-1}\pi_{j-1} - (\lambda_j+\mu_j)\pi_j + \mu_{j+1}\pi_{j+1} = 0 \qquad \Rightarrow \qquad \pi_j = \frac{\lambda_0 \cdots \lambda_{j-1}}{\mu_1 \cdots \mu_j}\,\pi_0$$

Making the sum equal to 1,

$$\pi_0 \left(1 + \sum_{j=1}^{\infty} \frac{\lambda_0 \cdots \lambda_{j-1}}{\mu_1 \cdots \mu_j}\right) = 1$$

A solution exists if

$$S = 1 + \sum_{j=1}^{\infty} \frac{\lambda_0 \cdots \lambda_{j-1}}{\mu_1 \cdots \mu_j} < \infty$$
M/M/1 Example

Meaning: Poisson arrivals, exponentially distributed service times, one server, and an infinite capacity buffer.

Using the birth-death result with λ_j = λ and μ_j = μ, we obtain

$$\pi_j = \left(\frac{\lambda}{\mu}\right)^j \pi_0 = \rho^j \pi_0, \qquad j = 0, 1, 2, \ldots$$

Therefore, for ρ = λ/μ < 1,

$$\pi_0 = \left[\sum_{j=0}^{\infty} \rho^j\right]^{-1} = 1 - \rho, \qquad \pi_j = \rho^j (1 - \rho), \quad j = 1, 2, \ldots$$
M/M/1 Performance Metrics

Server utilization:

$$E[U] = \sum_{j=1}^{\infty} \pi_j = 1 - \pi_0 = \rho$$

Throughput:

$$E[R] = \mu (1 - \pi_0) = \mu \rho = \lambda$$

Expected queue length:

$$E[X] = \sum_{j=0}^{\infty} j\,\pi_j = (1-\rho) \sum_{j=0}^{\infty} j\,\rho^j = (1-\rho)\,\rho\, \frac{d}{d\rho}\left\{\sum_{j=0}^{\infty} \rho^j\right\} = (1-\rho)\,\rho\, \frac{d}{d\rho}\left\{\frac{1}{1-\rho}\right\} = \frac{\rho}{1-\rho}$$
M/M/1 Performance Metrics

Average system time (using Little's Law, E[X] = λ E[S]):

$$E[S] = \frac{1}{\lambda}\, E[X] = \frac{1}{\lambda} \cdot \frac{\rho}{1-\rho} = \frac{1}{\mu(1-\rho)}$$

Average waiting time in queue (S = W + Z, where Z is the service time with E[Z] = 1/μ):

$$E[W] = E[S] - E[Z] = \frac{1}{\mu(1-\rho)} - \frac{1}{\mu} = \frac{\rho}{\mu(1-\rho)}$$
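A compact sketch collecting the M/M/1 formulas above (assumed λ = 0.8, μ = 1.0 for the demo):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics; requires rho = lam/mu < 1."""
    rho = lam / mu
    assert rho < 1, "queue is unstable"
    EX = rho / (1 - rho)            # mean number in system
    ES = 1 / (mu * (1 - rho))       # mean system time (Little: EX = lam * ES)
    EW = ES - 1 / mu                # mean waiting time in queue
    return {"rho": rho, "E[X]": EX, "E[S]": ES, "E[W]": EW}

print(mm1_metrics(0.8, 1.0))   # E[X] = 4, E[S] = 5, E[W] = 4
```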
M/M/1 Performance Metrics Examples

(Figure in the original: E[S] and E[W] plotted against rho, with the vertical axis labeled "Delay (time units) / Number of customers" and a fixed parameter value 0.5; the curves grow without bound as rho approaches 1.)
PASTA Property

PASTA: Poisson Arrivals See Time Averages.

Let π_j(t) = Pr{system state X(t) = j} and a_j(t) = Pr{an arriving customer at t finds X(t) = j}. In general, π_j(t) ≠ a_j(t)!

Suppose a D/D/1 system with interarrival times equal to 1 and service times equal to 0.5 (arrivals at t = 0, 1, 2, ..., each departing 0.5 later). Then π_0(t) = 0.5 and π_1(t) = 0.5, while a_0(t) = 1 and a_1(t) = 0!
Theorem

For a queueing system, when the arrival process is Poisson and independent of the service process, the probability that an arriving customer finds j customers in the system is equal to the probability that the system is at state j. In other words,

$$a_j(t) = \pi_j(t) = \Pr\{X(t) = j\}, \qquad j = 0, 1, \ldots$$

Proof: Let A(t, t+Δt) denote the event that an arrival occurs in the interval (t, t+Δt). Then

$$a_j(t) = \lim_{\Delta t \to 0} \Pr\{X(t) = j \mid A(t, t+\Delta t)\} = \lim_{\Delta t \to 0} \frac{\Pr\{X(t) = j,\ A(t, t+\Delta t)\}}{\Pr\{A(t, t+\Delta t)\}}$$

$$= \lim_{\Delta t \to 0} \frac{\Pr\{A(t, t+\Delta t) \mid X(t) = j\}\, \Pr\{X(t) = j\}}{\Pr\{A(t, t+\Delta t)\}} = \Pr\{X(t) = j\} = \pi_j(t)$$

since, for a Poisson arrival process independent of the service process, Pr{A(t, t+Δt) | X(t) = j} = Pr{A(t, t+Δt)}.
M/M/m Queueing System

Meaning: Poisson arrivals, exponentially distributed service times, m identical servers, and an infinite capacity buffer.

This is a birth-death chain with

$$\lambda_j = \lambda \qquad \text{and} \qquad \mu_j = \begin{cases} j\mu & \text{if } 0 \le j \le m \\ m\mu & \text{if } j > m \end{cases}$$
M/M/m Queueing System

Using the general birth-death result, with ρ = λ/(mμ),

$$\pi_j = \frac{1}{j!}\left(\frac{\lambda}{\mu}\right)^j \pi_0 = \frac{(m\rho)^j}{j!}\,\pi_0 \quad \text{if } j \le m, \qquad \pi_j = \frac{m^m \rho^j}{m!}\,\pi_0 \quad \text{if } j \ge m$$

(the two expressions agree at j = m). To find π_0, make the probabilities sum to 1:

$$\pi_0 = \left[\sum_{j=0}^{m-1} \frac{(m\rho)^j}{j!} + \frac{(m\rho)^m}{m!} \cdot \frac{1}{1-\rho}\right]^{-1}$$
M/M/m Performance Metrics

Expected number of busy servers: the number of busy servers is j when j < m and m when j ≥ m, so

$$E[U] = \sum_{j=1}^{m-1} j\,\pi_j + m \Pr\{X \ge m\}$$

Substituting the state probabilities and simplifying the two sums,

$$E[U] = \pi_0\left[\sum_{j=1}^{m-1} \frac{(m\rho)^j}{(j-1)!} + \frac{m\,(m\rho)^m}{m!\,(1-\rho)}\right] = m\rho = \frac{\lambda}{\mu}$$

Thus each server is busy a fraction ρ = λ/(mμ) of the time.
M/M/m Performance Metrics

Throughput:

$$E[R] = \sum_{j=1}^{m-1} j\mu\,\pi_j + \sum_{j=m}^{\infty} m\mu\,\pi_j = \mu \cdot m\rho = \lambda$$

Expected queue length:

$$E[X] = \sum_{j=0}^{\infty} j\,\pi_j = m\rho + \rho\, \frac{(m\rho)^m}{m!\,(1-\rho)^2}\, \pi_0$$

Using Little's Law,

$$E[S] = \frac{E[X]}{\lambda} = \frac{1}{\mu} + \frac{\rho\,(m\rho)^m}{\lambda\, m!\,(1-\rho)^2}\, \pi_0$$

Average waiting time in queue:

$$E[W] = E[S] - \frac{1}{\mu}$$
M/M/m Performance Metrics

Queueing probability (the probability that an arriving customer finds all m servers busy and has to wait):

$$P_Q = \Pr\{X \ge m\} = \sum_{j=m}^{\infty} \pi_j = \frac{(m\rho)^m}{m!\,(1-\rho)}\, \pi_0$$

This is the Erlang C formula.
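A small sketch of the Erlang C computation (π_0 taken from the normalization above; the demo values λ = 1, μ = 0.75, m = 2 match system B of the example that follows):

```python
from math import factorial

def erlang_c(lam, mu, m):
    """Probability an arrival waits in an M/M/m queue (Erlang C)."""
    rho = lam / (m * mu)
    assert rho < 1, "queue is unstable"
    a = lam / mu                      # offered load, a = m * rho
    pi0 = 1.0 / (sum(a**j / factorial(j) for j in range(m))
                 + a**m / (factorial(m) * (1 - rho)))
    return a**m / (factorial(m) * (1 - rho)) * pi0

print(erlang_c(1.0, 0.75, 2))   # ~0.533 for the two-server example below
```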
Example

Suppose that customers arrive according to a Poisson process with rate λ = 1. You are given the following three options:

A. Install a single server with processing capacity μ_1 = 1.5.
B. Install two identical servers with processing capacities μ_2 = 0.75 and μ_3 = 0.75, sharing a single queue.
C. Split the incoming traffic into two queues, each receiving a customer with probability 0.5, and have the servers with μ_2 = 0.75 and μ_3 = 0.75 serve one queue each.
Example

Throughput: It is easy to see that all three systems have the same throughput, E[R_A] = E[R_B] = E[R_C] = λ.

Server utilization:

$$E[U_A] = \frac{\lambda}{\mu_1} = \frac{1}{1.5} = \frac{2}{3}$$

$$E[U_B] = \frac{\lambda}{2\mu_2} = \frac{1}{2 \times 0.75} = \frac{2}{3} \qquad \text{(each server is 2/3 utilized)}$$

$$E[U_C] = \frac{0.5\,\lambda}{\mu_2} = \frac{0.5}{0.75} = \frac{2}{3}$$

Therefore, all servers are similarly loaded.
Example

Probability of being idle:

$$\pi_0^A = 1 - \rho_A = 1 - \frac{2}{3} = \frac{1}{3}$$

For system B, using the M/M/m result with m = 2 and ρ = 2/3,

$$\pi_0^B = \left[1 + 2\rho + \frac{(2\rho)^2}{2\,(1-\rho)}\right]^{-1} = \left[1 + \frac{4}{3} + \frac{8}{3}\right]^{-1} = \frac{1}{5}$$

For each server of system C (an M/M/1 queue with arrival rate λ/2),

$$\pi_0^C = 1 - \frac{0.5}{0.75} = \frac{1}{3}$$
Example

Queue length and delay:

$$E[X_A] = \frac{\rho}{1-\rho} = \frac{2/3}{1/3} = 2, \qquad E[S_A] = \frac{E[X_A]}{\lambda} = 2$$

$$E[X_B] = m\rho + \rho\,\frac{(m\rho)^m}{m!\,(1-\rho)^2}\,\pi_0^B = \frac{12}{5}, \qquad E[S_B] = \frac{E[X_B]}{\lambda} = \frac{12}{5}$$

For each queue of system C (M/M/1 with arrival rate λ/2 = 0.5):

$$E[X_C] = \frac{0.5/0.75}{1 - 0.5/0.75} = 2 \ \text{ per queue (4 in total)}, \qquad E[S_C] = \frac{E[X_C]}{\lambda/2} = 4$$

So the single fast server (A) gives the smallest delay, the shared two-server system (B) is second, and splitting the traffic (C) is worst.
M/M/∞ Queueing System

Special case of the M/M/m system with m going to ∞:

$$\lambda_j = \lambda \qquad \text{and} \qquad \mu_j = j\mu \ \text{ for all } j$$

Using the birth-death result, with ρ = λ/μ the state probabilities are given by

$$\pi_j = \frac{\rho^j}{j!}\,\pi_0, \qquad \pi_0 = \left[\sum_{j=0}^{\infty} \frac{\rho^j}{j!}\right]^{-1} = e^{-\rho} \qquad \Rightarrow \qquad \pi_j = \frac{\rho^j}{j!}\, e^{-\rho}$$

System utilization and throughput:

$$E[U] = 1 - \pi_0 = 1 - e^{-\rho}, \qquad E[R] = \lambda$$

Expected number in the system (= the expected number of busy servers):

$$E[X] = \sum_{j=0}^{\infty} j\, \frac{\rho^j}{j!}\, e^{-\rho} = \rho\, e^{-\rho} \sum_{j=1}^{\infty} \frac{\rho^{j-1}}{(j-1)!} = \rho$$

Using Little's Law,

$$E[S] = \frac{E[X]}{\lambda} = \frac{1}{\mu}$$

No queueing!
M/M/1/K: Finite Buffer Capacity

Meaning: Poisson arrivals, exponentially distributed service times, one server, and a finite capacity buffer K.

Using the birth-death result with λ_j = λ and μ_j = μ, we obtain

$$\pi_j = \left(\frac{\lambda}{\mu}\right)^j \pi_0 = \rho^j \pi_0, \qquad j = 0, 1, \ldots, K$$

Therefore, for ρ = λ/μ,

$$\pi_0 = \left[\sum_{j=0}^{K} \rho^j\right]^{-1} = \frac{1-\rho}{1-\rho^{K+1}}, \qquad \pi_j = \frac{1-\rho}{1-\rho^{K+1}}\, \rho^j, \quad j = 1, 2, \ldots, K$$

(No stability condition is needed here: the chain is finite, so ρ need not be less than 1.)
M/M/1/K Performance Metrics

Server utilization:

$$E[U] = 1 - \pi_0 = \frac{\rho - \rho^{K+1}}{1-\rho^{K+1}} = \rho\, \frac{1-\rho^{K}}{1-\rho^{K+1}}$$

Throughput:

$$E[R] = \mu(1 - \pi_0) = \lambda\, \frac{1-\rho^{K}}{1-\rho^{K+1}} < \lambda$$

Blocking probability (the probability that an arriving customer finds the queue full, i.e., the system at state K):

$$P_B = \pi_K = \frac{1-\rho}{1-\rho^{K+1}}\, \rho^K$$
M/M/1/K Performance Metrics

Expected queue length:

$$E[X] = \sum_{j=0}^{K} j\,\pi_j = \frac{1-\rho}{1-\rho^{K+1}}\, \rho\, \frac{d}{d\rho}\left\{\sum_{j=0}^{K} \rho^j\right\} = \frac{1-\rho}{1-\rho^{K+1}}\, \rho\, \frac{d}{d\rho}\left\{\frac{1-\rho^{K+1}}{1-\rho}\right\}$$

$$= \frac{\rho}{1-\rho} - \frac{(K+1)\,\rho^{K+1}}{1-\rho^{K+1}}$$

System time (using Little's Law with the net arrival rate, i.e., the rate of customers actually admitted, since there are no losses among admitted customers):

$$E[S] = \frac{E[X]}{\lambda\,(1 - \pi_K)}$$
M/M/m/m Queueing System

Meaning: Poisson arrivals, exponentially distributed service times, m servers, and no additional storage capacity (a customer that finds all m servers busy is lost).

Using the birth-death result with λ_j = λ and μ_j = jμ, we obtain

$$\pi_j = \frac{1}{j!}\left(\frac{\lambda}{\mu}\right)^j \pi_0 = \frac{\rho^j}{j!}\,\pi_0, \qquad j = 0, 1, \ldots, m$$

Therefore, for ρ = λ/μ,

$$\pi_0 = \left[\sum_{j=0}^{m} \frac{\rho^j}{j!}\right]^{-1}, \qquad \pi_j = \frac{\rho^j}{j!}\,\pi_0, \quad j = 1, 2, \ldots, m$$
M/M/m/m Performance Metrics

Blocking probability (the probability that an arriving customer finds all servers busy, i.e., the system at state m):

$$P_B = \pi_m = \frac{\rho^m / m!}{\sum_{j=0}^{m} \rho^j / j!}$$

This is the Erlang B formula.

Throughput:

$$E[R] = \lambda\,(1 - \pi_m) = \lambda\left(1 - \frac{\rho^m/m!}{\sum_{j=0}^{m} \rho^j/j!}\right) < \lambda$$
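A corresponding sketch for Erlang B, using the standard recursion B(0) = 1, B(k) = ρB(k-1)/(k + ρB(k-1)), which is algebraically equivalent to the formula above but avoids large factorials:

```python
def erlang_b(rho, m):
    """Blocking probability of an M/M/m/m system (Erlang B).

    Standard recursion: B(0) = 1,  B(k) = rho*B(k-1) / (k + rho*B(k-1))."""
    B = 1.0
    for k in range(1, m + 1):
        B = rho * B / (k + rho * B)
    return B

# Example (assumed numbers): offered load rho = 5 Erlangs on m = 8 servers.
print(erlang_b(5.0, 8))
```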
M/M/1//N Closed Queueing System

Meaning: exponentially distributed service times, one server, and the number of customers fixed at N; each customer, after leaving the server, spends an exponentially distributed "thinking" time with rate λ before rejoining the queue. This is a birth-death chain with

$$\lambda_j = (N-j)\,\lambda \qquad \text{and} \qquad \mu_j = \mu$$

Using the birth-death result, we obtain

$$\pi_j = \frac{N!}{(N-j)!}\left(\frac{\lambda}{\mu}\right)^j \pi_0, \qquad j = 1, 2, \ldots, N$$

$$\pi_0 = \left[\sum_{j=0}^{N} \frac{N!}{(N-j)!}\left(\frac{\lambda}{\mu}\right)^j\right]^{-1}$$
M/M/1//N Closed Queueing System

Response time: the time from the moment the customer enters the queue until it completes service.

The throughput of the server is μ(1 - π_0). For the queue, using Little's Law we get

$$E[S] = \frac{E[X]}{\mu\,(1 - \pi_0)}$$

In the thinking part, the N - E[X] thinking customers (on average) generate arrivals at rate λ each, and at equilibrium this flow equals the throughput:

$$\lambda\,(N - E[X]) = \mu\,(1 - \pi_0)$$

Therefore,

$$E[S] = \frac{E[X]}{\mu\,(1-\pi_0)} = \frac{N}{\mu\,(1-\pi_0)} - \frac{1}{\lambda}$$
