QUEUEING SYSTEMS
VOLUME I: THEORY

Leonard Kleinrock
Professor
Computer Science Department
School of Engineering and Applied Science
University of California, Los Angeles

A Wiley-Interscience Publication
à Qui Sait Attendre (1890)
permitted by Sections 107 or 108 of the 1976 United States Copyright Act without the permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the Permissions Department, John Wiley & Sons, Inc.

ISBN 0-471-49110-1
Preface
How much time did you waste waiting in line this week? It seems we cannot escape frequent delays, and they are getting progressively worse! In this text we study the phenomena of standing, waiting, and serving, and we call this study queueing theory.
Any system in which arrivals place demands upon a finite-capacity resource may be termed a queueing system. In particular, if the arrival times of these demands are unpredictable, or if the size of these demands is unpredictable, then conflicts for the use of the resource will arise and queues of waiting customers will form. The lengths of these queues depend upon two aspects of the flow pattern: first, they depend upon the average rate at which demands are placed upon the resource; and second, they depend upon the statistical fluctuations of this rate. Certainly, when the average rate exceeds the capacity, then the system breaks down and unbounded queues will begin to form; it is the effect of this average overload which then dominates the growth of queues. However, even if the average rate is less than the system capacity, then here, too, we have the formation of queues due to the statistical fluctuations and spurts of arrivals that may occur; the effect of these variations is greatly magnified when the average load approaches (but does not necessarily exceed) the system capacity.

The simplicity of these queueing structures is deceptive, and in our studies we will often find ourselves in deep analytic waters. Fortunately, a familiar and fundamental law of science permeates our queueing investigations. This law is the conservation of flow, which states that the rate at which flow increases within a system is equal to the difference between the flow rate into and the flow rate out of that system. This observation permits us to write down the basic system equations for rather complex structures in a relatively easy fashion.
The purpose of this book, then, is to present the theory of queues at the first-year graduate level. It is assumed that the student has been exposed to a first course in probability theory; however, in Appendix II of this text we give a probability theory refresher and state the basic principles that we shall need. It is also helpful (but not necessary) if the student has had some exposure to transforms, although in this case we present a rather complete
are keyed to the page where they appear in order to simplify the task of locating the explanatory material associated with each result.

Each chapter contains its own list of references keyed alphabetically to the author and year; for example, [KLEI 74] would reference this book. All equations of importance have been marked with the symbol ■, and it is these which are included in the summary of important equations. Each chapter includes a set of exercises which, in some cases, extend the material in that chapter; the reader is urged to work them out.
the face of the real world's complicated models, the mathematicians proceeded to advance the field of queueing theory rapidly and elegantly. The frontiers of this research proceeded into the far reaches of deep and complex mathematics. It was soon found that the really interesting models did not yield to solution and the field quieted down considerably. It was mainly with the advent of digital computers that once again the tools of queueing theory were brought to bear on a class of practical problems, but this time with great success. The fact is that at present, one of the few tools we have for analyzing the performance of computer systems is that of queueing theory, and this explains its popularity among engineers and scientists today. A wealth of new problems is being formulated in terms of this theory, and new tools and methods are being developed to meet the challenge of these problems. Moreover, the application of digital computers in solving the equations of queueing theory has spawned new interest in the field. It is hoped that this two-volume series will provide the reader with an appreciation for and competence in the methods of analysis and application as we now see them.
I take great pleasure in closing this Preface by acknowledging those individuals and institutions that made it possible for me to bring this book into being. First, I would like to thank all those who participated in creating the stimulating environment of the Computer Science Department at UCLA, which encouraged and fostered my effort in this direction. Acknowledgment is due the Advanced Research Projects Agency of the Department of Defense, which enabled me to participate in some of the most exciting and advanced computer systems and networks ever developed. Furthermore, the John Simon Guggenheim Foundation provided me with a Fellowship for the academic year 1971-1972, during which time I was able to further pursue my investigations. Hundreds of students who have passed through my queueing-systems courses have in major and minor ways contributed to the creation of this book, and I am happy to acknowledge the special help offered by Arne Nilsson, Johnny Wong, Simon Lam, Fouad Tobagi, Farouk Kamoun, Robert Rice, and Thomas Sikes. My academic and professional colleagues have all been very supportive of this endeavor. To the typists I owe all. By far the largest portion of this book was typed by Charlotte La Roche, and I will be forever in her debt. To Diana Skocypec and Cynthia Ellman I give my deepest thanks for carrying out the enormous task of proofreading and correction-making in a rapid, enthusiastic, and supportive fashion. Others who contributed in major ways are Barbara Warren, Jean Dubinsky, Jean D'Fucci, and Gloria Roy. I owe a great debt of thanks to my family (and especially to my wife, Stella) who have stood by me and supported me well beyond the call of duty or marriage contract. Lastly, I would certainly be remiss in omitting an acknowledgment to my ever-faithful dictating machine, which was constantly talking back to me.

LEONARD KLEINROCK

March, 1974
Contents
VOLUME I
PART I: PRELIMINARIES
Chapter 3
Chapter 8
Epilogue
Appendix I: Transform Theory Refresher: z-Transform and Laplace Transform

Appendix II: Probability Theory Refresher
Glossary of Notation
Index
VOLUME II
Chapter 1

1. Notation
2. General Results
3. Markov, Birth-Death, and Poisson Processes
4. The M/M/1 Queue
5. The M/M/m Queueing System
6. Markovian Queueing Networks
7. The M/G/1 Queue
8. The G/M/1 Queue
9. The G/M/m Queue
10. The G/G/1 Queue
Chapter 2
Chapter 3    Priority Queueing

1. The Model
2. An Approach for Calculating Average Waiting Times
3. The Delay Cycle, Generalized Busy Periods, and Waiting Time Distributions
4. Conservation Laws
5. The Last-Come-First-Serve Queueing Discipline
Chapter 4
Chapter 5    Computer-Communication Networks

1. Resource Sharing
2. Some Contrasts and Trade-Offs
3. Network Structures and Packet Switching
4. The ARPANET: An Operational Description of an Existing Network
5. Definitions, the Model, and the Problem Statements
6. Delay Analysis
7. The Capacity Assignment Problem
8. The Traffic Flow Assignment Problem
9. The Capacity and Flow Assignment Problem
10. Some Topological Considerations: Applications to the ARPANET
11. Satellite Packet Switching
12. Ground Radio Packet Switching
Chapter 6    Computer-Communication Networks: Measurement, Flow Control, and ARPANET Traps
Glossary

Summary of Results

Index
QUEUEING SYSTEMS
VOLUME I: THEORY
PART I

PRELIMINARIES
It is difficult to see the forest for the trees (especially if one is in a mob
rather than in a well-ordered queue). Likewise, it is often difficult to see the
impact of a collection of mathematical results as you try to master them; it is
only after one gains the understanding and appreciation for their application
to real-world problems that one can say with confidence that he understands
the use of a set of tools.
The two chapters contained in this preliminary part are each extreme in
opposite directions. The first chapter gives a global picture of where queueing
systems arise and why they are important. Entertaining examples are provided
as we lure the reader on. In the second chapter, on random processes, we
plunge deeply into mathematical definitions and techniques (quickly losing
sight of our long-range goals); the reader is urged not to falter under this
siege since it is perhaps the worst he will meet in passing through the text.
Specifically, Chapter 2 begins with some very useful graphical means for
displaying the dynamics of customer behavior in a queueing system. We then
introduce stochastic processes through the study of customer arrival, behavior, and backlog in a very general queueing system and carefully lead the
reader to one of the most significant results in queueing theory, namely,
Little's result, using very simple arguments. Having thus introduced the
concept of a stochastic process we then offer a rather compact treatment
which compares many well-known (but not well-distinguished) processes and
casts them in a common terminology and notation, leading finally to Figure
2.4 in which we see the basic relationships among these processes; the reader
is quickly brought to realize the central role played by the Poisson process
because of its position as the common intersection of all the stochastic
processes considered in this chapter. We then give a treatment of Markov
chains in discrete and continuous time; these sections are perhaps the toughest sledding for the novice, and it is perfectly acceptable if he passes over some
of this material on a first reading. At the conclusion of Section 2.4 we find
ourselves face to face with the important birth-death processes and it is here
One of life's more disagreeable activities, namely, waiting in line, is the delightful subject of this book. One might reasonably ask, "What does it profit a man to study such unpleasant phenomena?" The answer, of course, is that through understanding we gain compassion, and it is exactly this which we need since people will be waiting in longer and longer queues as civilization progresses, and we must find ways to tolerate these unpleasant situations. Think for a moment how much time is spent in one's daily activities waiting in some form of a queue: waiting for breakfast; stopped at a traffic light; slowed down on the highways and freeways; delayed at the entrance to one's parking facility; queued for access to an elevator; standing in line for the morning coffee; holding the telephone as it rings, and so on. The list is endless, and too often also are the queues.

The orderliness of queues varies from place to place around the world. For example, the English are terribly susceptible to formation of orderly queues, whereas some of the Mediterranean peoples consider the idea ludicrous (have you ever tried clearing the embarkation procedure at the Port of Brindisi?). A common slogan in the U.S. Army is, "Hurry up and wait." Such is the nature of the phenomena we wish to study.
1.1. SYSTEMS OF FLOW
the railway network, the dam, the telephone or telegraph network, the supermarket checkout counter, and the computer processing system, respectively. The "finite capacity" refers to the fact that the channel can satisfy the demands (placed upon it by the commodity) at a finite rate only. It is clear that the analyses of many of these systems require analytic tools drawn from a variety of disciplines and, as we shall see, queueing theory is just one such discipline.

When one analyzes systems of flow, they naturally break into two classes: steady and unsteady flow. The first class consists of those systems in which the flow proceeds in a predictable fashion. That is, the quantity of flow is exactly known and is constant over the interval of interest; the time when that flow appears at the channel, and how much of a demand that flow places upon the channel, is known and constant. These systems are trivial to analyze in the case of a single channel. For example, consider a pineapple factory in which empty tin cans are being transported along a conveyor belt to a point at which they must be filled with pineapple slices and must then proceed further down the conveyor belt for additional operations. In this case, assume that the cans arrive at a constant rate of one can per second and that the pineapple-filling operation takes nine-tenths of one second per can. These numbers are constant for all cans and all filling operations. Clearly this system will function in a reliable and smooth fashion as long as the assumptions stated above continue to hold. We may say that the arrival rate R is one can per second and the maximum service rate (or capacity) C is 1/0.9 = 1.111... filling operations per second. The example above is for the case R < C. However, if we have the condition R > C, we all know what happens: cans and/or pineapple slices begin to inundate and overflow in the factory! Thus we see that the mean capacity of the system must exceed the average flow requirements if chaotic congestion is to be avoided; this is true for all systems of flow. This simple observation tells most of the story. Such systems are of little interest theoretically.
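The single-channel steady-flow case can be sketched in a few lines. This is only an illustrative check (the function name and printed cases are ours, not the text's), using the pineapple-factory numbers R = 1 and C = 1/0.9.

```python
# Deterministic steady flow: a single channel keeps up only if the
# constant arrival rate R is below the capacity C.

def channel_is_stable(R: float, C: float) -> bool:
    """Return True when the constant arrival rate R is below capacity C."""
    return R < C

# The pineapple factory from the text: one can arrives per second,
# and each filling operation takes 0.9 second.
R = 1.0           # cans per second
C = 1.0 / 0.9     # filling operations per second (= 1.111...)

print(channel_is_stable(R, C))    # → True: the line keeps up
print(channel_is_stable(1.2, C))  # → False: cans pile up without bound
```

When R > C the backlog grows roughly like (R − C)·t, which is exactly the "inundation" described above.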
The more interesting case of steady flow is that of a network of channels. For stable flow, we obviously require that R < C on each channel in the network. However, we now run into some serious combinatorial problems. For example, let us consider a railway network in the fictitious land of Hatafla. See Figure 1.1. The scenario here is that figs grown in the city of Abra must be transported to the destination city of Cadabra, making use of the railway network shown. The numbers on each channel (section of railway) in Figure 1.1 refer to the maximum number of bushels of figs which that channel can handle per day. We are now confronted with the following fig flow problem: How many bushels of figs per day can be sent from Abra to Cadabra, and in what fashion shall this flow of figs take place? The answer to such questions of maximal "traffic" flow in a variety of networks is nicely
[Figure 1.1: A railway network in the land of Hatafla, with cities Abra, Zeus, Nonabel, Sucsamad, Oriac, and Cadabra.]
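The fig flow problem is a maximum-flow question, and the Ford-Fulkerson method cited in this chapter's references [FORD 62] solves it. Below is a minimal sketch (the Edmonds-Karp, BFS-based variant) on a network that borrows the city names of Figure 1.1; the capacities are invented for illustration, since the figure's numbers are not reproduced here.

```python
# Max-flow sketch in the spirit of the fig-flow problem. Node names
# follow Figure 1.1; the bushels-per-day capacities are hypothetical.
from collections import deque

def max_flow(cap, source, sink):
    """Edmonds-Karp (BFS-based Ford-Fulkerson) on a capacity dict."""
    # Residual capacities, with zero-capacity reverse edges added.
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow        # no augmenting path remains
        # Recover the path, find its bottleneck, and augment.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Hypothetical capacities (bushels of figs per day) on each section.
capacities = {
    "Abra":     {"Zeus": 10, "Sucsamad": 8},
    "Zeus":     {"Nonabel": 6, "Sucsamad": 3},
    "Sucsamad": {"Oriac": 9},
    "Nonabel":  {"Cadabra": 7},
    "Oriac":    {"Cadabra": 12},
    "Cadabra":  {},
}
print(max_flow(capacities, "Abra", "Cadabra"))  # → 15
```

The answer, 15 bushels per day here, matches the capacity of the minimum cut separating Abra from Cadabra (6 + 9), which is the max-flow min-cut theorem at work.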
contained and described within queueing theory, to which much of this text devotes itself. This requires a background in probability theory as well as an understanding of complex variables and some of the usual transform-calculus methods; this material is reviewed in Appendices I and II.
As in the case of deterministic flow, we may enlarge our scope of problems to that of networks of channels in which random flow is encountered. An example of such a system would be that of a computer network. Such a system consists of computers connected together by a set of communication lines where the capacity of these lines for carrying information is finite. Let us return to the fictitious land of Hatafla and assume that the railway network considered earlier is now in fact a computer network. Assume that users located at Abra require computational effort on the facility at Cadabra. The particular times at which these requests are made are themselves unpredictable, and the commands or instructions that describe these requests are also of unpredictable length. It is these commands which must be transmitted to Cadabra over our communication net as messages. When a message is inserted into the network at Abra, and after an appropriate decision rule (referred to as a routing procedure) is accessed, then the message proceeds through the network along some path. If a portion of this path is busy, and it may well be, then the message must queue up in front of the busy channel and wait for it to become free. Constant decisions must be made regarding the flow of messages and routing procedures. Hopefully, the message will eventually emerge at Cadabra, the computation will be performed, and the results will then be inserted into the network for delivery back at Abra. It is clear that the problems exemplified by our computer network involve a variety of extremely complex queueing problems, as well as network flow and decision problems. In an earlier work [KLEI 64] the author addressed himself to certain aspects of these questions. We develop the analysis of these systems later in Volume II, Chapter 5.
Having thus classified* systems of flow, we hope that the reader understands where in the general scheme of things the field of queueing theory may be placed. The methods from this theory are central to analyzing most stochastic flow problems, and it is clear from an examination of the current literature that the field, and in particular its applications, are growing in a viable and purposeful fashion.

* The classification described above places queueing systems within the class of systems of flow. This approach identifies and emphasizes the fields of application for queueing theory. An alternative approach would have been to place queueing theory as belonging to the field of applied stochastic processes; this classification would have emphasized the mathematical structure of queueing theory rather than its applications. The point of view taken in this two-volume book is the former one, namely, with application of the theory as its major goal rather than extension of the mathematical formalism and results.
A(t) = P[time between arrivals ≤ t]   (1.1)
The assumption in most of queueing theory is that these interarrival times are independent, identically distributed random variables (and, therefore, the stream of arrivals forms a stationary renewal process; see Chapter 2). Thus, only the distribution A(t), which describes the time between arrivals, is usually of significance. The second statistical quantity that must be described is the amount of demand these arrivals place upon the channel; this is usually referred to as the service time, whose probability distribution is denoted by B(x), that is,

B(x) = P[service time ≤ x]   (1.2)

Here service time refers to the length of time that a customer spends in the service facility.
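To make the distribution B(x) concrete, the sketch below assumes exponentially distributed service times (a common modeling choice, not something the text has imposed at this point) and compares the empirical B(x) from samples against its closed form; A(t) could be illustrated the same way.

```python
# Illustrative only: the exponential distribution is an assumed model
# here, not a requirement of the definitions in the text.
import math
import random

random.seed(1)
mean_service = 0.5     # average seconds of demand per arrival

# Draw service-time samples and form the empirical B(x) = P[service <= x].
services = [random.expovariate(1.0 / mean_service) for _ in range(100_000)]

x = 0.5                                      # evaluate B(x) at x = 0.5 sec
empirical = sum(s <= x for s in services) / len(services)
exact = 1.0 - math.exp(-x / mean_service)    # exponential CDF: 1 - e^(-x/0.5)
print(round(exact, 3))                       # → 0.632
print(round(empirical, 3))                   # close to the exact value
```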
Now regarding the structure and discipline of the service facility, one must specify a variety of additional quantities. One of these is the extent of storage capacity available to hold waiting customers, and typically this quantity is described in terms of the variable K; often K is taken to be infinite. An additional specification involves the number of service stations available, and if more than one is available, then perhaps the distribution of service time will differ for each, in which case the distribution B(x) will include a subscript to indicate that fact. On the other hand, it is sometimes the case that the arriving stream consists of more than one identifiable class of customers; in such a case the interarrival distribution A(t) as well as the service distribution B(x) may each be characteristic of each class and will be identified again by use of a subscript on these distributions. Another important structural description of a queueing system is that of the queueing discipline; this describes the order in which customers are taken from the queue and allowed into service. For example, some standard queueing disciplines are first-come-first-serve (FCFS), last-come-first-serve (LCFS), and random order of service. When the arriving customers are distinguishable according to groups, then we encounter the case of priority queueing disciplines, in which priority

* The notation P[A] denotes, as usual, the "probability of the event A."
among groups may be established. A further statement regarding the availability of the service facility is also necessary in case the service facility is occasionally required to pay attention to other tasks (as, for example, its own breakdown). Beyond this, queueing systems may enjoy customer behavior in the form of defections from the queue, jockeying among the many queues, balking before entering a queue, bribing for queue position, cheating for queue position, and a variety of other interesting and not-unexpected humanlike characteristics. We will encounter these as we move through the text in an orderly fashion (first-come-first-serve according to page number).
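The disciplines named above differ only in which waiting customer the server takes next. The helper below is a toy illustration of FCFS, LCFS, and random order of service; the function and names are ours, not notation from the text.

```python
# Toy illustration of queueing disciplines: FCFS and LCFS differ only
# in which end of the waiting line the server draws from.
from collections import deque
import random

def next_customer(queue: deque, discipline: str):
    """Remove and return the customer chosen by the given discipline."""
    if discipline == "FCFS":          # first-come-first-serve
        return queue.popleft()
    if discipline == "LCFS":          # last-come-first-serve
        return queue.pop()
    if discipline == "random":        # random order of service
        i = random.randrange(len(queue))
        queue.rotate(-i)              # bring position i to the front
        chosen = queue.popleft()
        queue.rotate(i)               # restore the order of the rest
        return chosen
    raise ValueError(discipline)

q = deque(["C1", "C2", "C3"])           # C1 arrived first
print(next_customer(q.copy(), "FCFS"))  # → C1
print(next_customer(q.copy(), "LCFS"))  # → C3
```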
Now that we have indicated how one must specify a queueing system, it is appropriate that we identify the measures of performance and effectiveness that we shall obtain by analysis. Basically, we are interested in the waiting time for a customer, the number of customers in the system, the length of a busy period (the continuous interval during which the server is busy), the length of an idle period, and the current work backlog expressed in units of time. All these quantities are random variables and thus we seek their complete probabilistic description (i.e., their probability distribution function). Usually, however, to give the distribution function is to give more than one can easily make use of. Consequently, we often settle for the first few moments (mean, variance, etc.).

Happily, we shall begin with simple considerations and develop the tools in a straightforward fashion, paying attention to the essential details of analysis. In the following pages we will encounter a variety of simple queueing problems, simple at least in the sense of description and usually rather sophisticated in terms of solution. However, in order to do this properly, we first devote our efforts in the following chapter to describing some of the important random processes that make up the arrival and service processes in our queueing systems.
REFERENCES

FORD 62  Ford, L. R. and D. R. Fulkerson, Flows in Networks, Princeton University Press (Princeton, N.J.), 1962.

FRAN 71  Frank, H. and I. T. Frisch, Communication, Transmission, and Transportation Networks, Addison-Wesley (Reading, Mass.), 1971.

KLEI 64  Kleinrock, L., Communication Nets; Stochastic Message Flow and Delay, McGraw-Hill (New York), 1964, out of print. Reprinted by Dover (New York), 1972.
We assume that the reader is familiar with the basic elementary notions, terminology, and concepts of probability theory. The particular aspects of that theory which we require are presented in summary fashion in Appendix II to serve as a review for those readers desiring a quick refresher and reminder; it is recommended that the material therein be reviewed, especially Section II.4 on transforms, generating functions, and characteristic functions. Included in Appendix II are the following important definitions, concepts, and results:
Thus, we may portray our system as in Figure 2.1, in which the box represents the queueing system and the flow of customers both in and out of the system is shown. One can immediately define some random processes of interest. For example, we are interested in N(t), where

N(t) ≜ number of customers in the system at time t   (2.2)

[Figure 2.1: A queueing system, represented as a box with customers flowing in and out.]
The details of these stochastic processes may be observed first by defining the following variables and then by displaying these variables on an appropriate time diagram to be discussed below. We begin with the definitions. Recalling that the nth customer is denoted by C_n, we define his arrival time to the queueing system as

τ_n ≜ arrival time for C_n   (2.4)

t_n ≜ interarrival time between C_{n−1} and C_n = τ_n − τ_{n−1}   (2.5)

P[t_n ≤ t] = A(t)   (2.6)

x_n ≜ service time for C_n   (2.7)

P[x_n ≤ x] = B(x)   (2.8)

w_n ≜ waiting time (in queue) for C_n   (2.9)

The total time spent in the system by C_n is the sum of his waiting time* and service time, which we denote by

s_n = w_n + x_n   (2.10)

Thus we have defined for the nth customer his arrival time, "his" interarrival time, his service time, his waiting time, and his system time. We find
* The terms "waiting time" and "queueing time" have conflicting definitions within the body of queueing-theory literature. The former sometimes refers to the total time spent in the system, and the latter then refers to the total time spent in the queue; however, these two definitions are occasionally reversed. We attempt to remove that confusion by defining waiting and queueing time to be the same quantity, namely, the time spent waiting in queue (but not being served); a more appropriate term perhaps would be "wasted time." The total time spent in the system will be referred to as "system time" (occasionally known as "flow time").
it convenient to define the limiting random variable

t̃ ≜ lim_{n→∞} t_n   (2.11)

which we denote by t_n → t̃. (We have already required that the interarrival times t_n have a distribution independent of n, but this will not necessarily be the case with many other random variables of interest.) The typical notation
for the probability distribution function (PDF) will be

P[t_n ≤ t] = A_n(t)   (2.12)

and for the limiting PDF

P[t̃ ≤ t] = A(t)

This we denote by A_n(t) → A(t); of course, for the interarrival time we have assumed that A_n(t) = A(t), which gives rise to Eq. (2.6). Similarly, the probability density function (pdf) for t_n and t̃ will be a_n(t) and a(t), respectively, and will be denoted as a_n(t) → a(t). Finally, the Laplace transform (see Appendix II) of these pdf's will be denoted by A_n*(s) and A*(s), respectively, with the obvious notation A_n*(s) → A*(s). The use of the letter A (and a) is meant as a cue to remind the reader that they refer to the interarrival time. Of course, the moments of the interarrival time are of interest and they will be denoted as follows*:

E[t_n] ≜ t̄_n   (2.13)

According to our usual notation, the mean interarrival time for the limiting random variable will be given† by t̄ in the sense that t̄_n → t̄. As it turns out, t̄, which is the average interarrival time between customers, is used so frequently in our equations that a special notation has been adopted as follows:

t̄ ≜ 1/λ   (2.14)

Special notation is also adopted for the moments of the interarrival time:

E[(t̃)^k] ≜ a_k   (2.15)

a ≜ a_1   (2.16)

* The notation E[ ] denotes the expectation of the quantity within square brackets. As shown, we also adopt the overbar notation to denote expectation.
† Actually, we should use the notation t with both a tilde and a bar, but this is excessive and will be simplified to t̄. The same simplification will be applied to many of our other random variables.
t̄ = 1/λ = a_1 = a   (2.17)

That is, three special notations exist for the mean interarrival time; in particular, the use of the symbol a is very common, and various of these forms will be used throughout the text as appropriate. Summarizing the information with regard to the interarrival time, we have the following shorthand glossary:

t_n → t̃,   A_n(t) → A(t),   a_n(t) → a(t),   A_n*(s) → A*(s),   t̄ = 1/λ = a_1 = a,   E[(t̃)^k] = a_k   (2.18)
and, similarly, for the service time:

x_n → x̃,   B_n(x) → B(x),   b_n(x) → b(x),   B_n*(s) → B*(s),   x̄ = 1/μ = b_1 = b,   E[(x̃)^k] = b_k   (2.19)
For the waiting time the corresponding notation is

w_n → w̃   (2.20)

and for the system time

s_n ≜ system time for C_n,   s_n → s̃,   s_n(y) → s(y),   S_n*(s) → S*(s)   (2.21)
[Figure 2.2: Time-diagram notation for queues, showing for customers C_{n−1}, C_n, C_{n+1}, C_{n+2} their arrival times τ_n, interarrival times t_n, waiting times w_n, and service times x_n, in the queue and in the server.]

α(t) ≜ number of arrivals in (0, t)   (2.22)
δ(t) ≜ number of departures in (0, t)   (2.23)

[Figure 2.3: Sample functions of α(t) (number of arrivals) and δ(t) (number of departures), drawn as staircase functions of time.]
Sample functions for these two stochastic processes are shown in Figure 2.3. Clearly N(t), the number in the system at time t, must be given by

N(t) = α(t) − δ(t)

On the other hand, the total area between these two curves up to some point, say t, represents the total time all customers have spent in the system (measured in units of customer-seconds) during the interval (0, t); let us denote this cumulative area by γ(t). Moreover, let λ_t be defined as the average arrival rate (customers per second) during the interval (0, t); that is,

λ_t = α(t)/t   (2.24)

We may define T_t as the system time per customer averaged over all customers in the interval (0, t); since γ(t) represents the accumulated customer-seconds up to time t, we may divide by the number of arrivals up to that point to obtain

T_t = γ(t)/α(t)

Lastly, let us define N_t as the average number of customers in the queueing system during the interval (0, t); this may be obtained by dividing the accumulated number of customer-seconds by the total interval length t, thus:

N_t = γ(t)/t

Combining these last three equations, we see that

N_t = λ_t T_t
Let us now assume that our queueing system is such that the following limits exist as t → ∞:

λ = lim_{t→∞} λ_t

T = lim_{t→∞} T_t

Note that we are using our former definitions for λ and T representing the average customer arrival rate and the average system time, respectively. If these last two limits exist, then so will the limit for N_t, which we denote by N, now representing the average number of customers in the system; that is,

N = λT   ■ (2.25)
This last is the result we were seeking and is known as Little's result. It states that the average number of customers in a queueing system is equal to the average arrival rate of customers to that system times the average time spent in that system.* The above proof does not depend upon any specific assumptions regarding the arrival distribution A(t) or the service time distribution B(x); nor does it depend upon the number of servers in the system or upon the particular queueing discipline within the system. This result existed as a "folk theorem" for many years; the first to establish its validity in a formal way was J. D. C. Little [LITT 61], with some later simplifications by W. S. Jewell [JEWE 67], S. Eilon [EILO 69], and S. Stidham [STID 74]. It is important to note that we have not precisely defined the boundary around our queueing system. For example, the box in Figure 2.1 could apply to the entire system composed of queue and server, in which case N and T as defined refer to quantities for the entire system; on the other hand, we could have considered the boundary of the queueing system to contain only the queue itself, in which case the relationship would have been

N_q = λW   ■ (2.26)

where N_q represents the average number of customers in the queue and, as defined earlier, W refers to the average time spent waiting in the queue. As a third possible alternative, the queueing system defined could have surrounded

* An intuitive proof of Little's result depends on the observation that an arriving customer should find the same average number, N, in the system as he leaves behind upon his departure. This latter quantity is simply the arrival rate λ times his average time in system, T.
only the server (or servers) itself; in this case our equation would have reduced to

    N_s = λx̄    (2.27)

where N_s refers to the average number of customers in the service facility (or facilities) and x̄, of course, refers to the average time spent in the service box. Note that it is always true that

    T = x̄ + W    (2.28)
The queueing system could refer to a specific class of customers, perhaps based on priority or some other attribute of that class, in which case the same relationship would apply. In other words, the average arrival rate of customers to a "queueing system" times the average time spent by customers in that "system" is equal to the average number of customers in the "system," regardless of how we define that "system."
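Little's result can be checked numerically. The sketch below (invented parameters: a single-server first-come-first-served queue with exponential interarrival and service times) builds arrival and departure instants with the standard start-of-service recursion, then compares the time-average number in system with the product of the measured arrival rate and the average system time:

```python
import random

random.seed(1)

lam, mu, n = 0.5, 1.0, 200_000       # assumed arrival and service rates
arrivals, depart = [], []
t, prev_depart, total_time_in_system = 0.0, 0.0, 0.0

# Generate arrival instants and, via the start-of-service recursion,
# the corresponding departure instants.
for _ in range(n):
    t += random.expovariate(lam)                  # next arrival instant
    start = max(t, prev_depart)                   # service begins when server frees
    prev_depart = start + random.expovariate(mu)  # departure instant
    arrivals.append(t)
    depart.append(prev_depart)
    total_time_in_system += prev_depart - t

T_avg = total_time_in_system / n                  # average time in system, T
horizon = depart[-1]
lam_avg = n / horizon                             # measured arrival rate, lambda

# Time-average number in system: integrate the step function N(t).
events = [(a, +1) for a in arrivals] + [(d, -1) for d in depart]
events.sort()
area, last_t, count = 0.0, 0.0, 0
for time, delta in events:
    area += count * (time - last_t)
    last_t, count = time, count + delta
N_avg = area / horizon

# Little's result: N = lambda * T, regardless of the service discipline.
assert abs(N_avg - lam_avg * T_avg) <= 0.01 * N_avg
```

The agreement here is essentially exact, since the area under N(t) is just the sum of the individual times in system; this is the heart of the intuitive proof given in the footnote.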
We now discuss a basic parameter ρ, which is commonly referred to as the utilization factor. The utilization factor is in a fundamental sense really the ratio R/C, which we introduced in Chapter 1. It is the ratio of the rate at which "work" enters the system to the maximum rate (capacity) at which the system can perform this work; the work an arriving customer brings into the system equals the number of seconds of service he requires. So, in the case of a single-server system, the definition for ρ becomes

    ρ ≜ (average arrival rate of customers) × (average service time)
      = λx̄    (2.29)

This last is true since a single-server system has a maximum capacity for doing work, which equals 1 sec/sec, and each arriving customer brings an amount of work equal to x̄ sec; since, on the average, λ customers arrive per second, λx̄ sec of work are brought in by customers each second that passes, on the average. In the case of multiple servers (say, m servers) the definition remains the same when one considers the ratio R/C, where now the work capacity of the system is m sec/sec; expressed in terms of system parameters we then have

    ρ = λx̄/m    (2.30)
Equations (2.29) and (2.30) apply in the case when the maximum service rate is independent of the system state; if this is not the case, then a more careful definition must be provided. The rate at which work enters the system is sometimes referred to as the traffic intensity of the system and is usually expressed in Erlangs; in single-server systems, the utilization factor is equal to the traffic intensity, whereas for m multiple servers, the traffic intensity equals mρ. So long as 0 ≤ ρ < 1, then ρ may be interpreted as

    ρ = E[fraction of busy servers]    (2.31)

[In the case of an infinite number of servers, the utilization factor ρ plays no important part, and instead we are interested in the number of busy servers (and its expectation).]
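A small numeric illustration of Eqs. (2.29) and (2.30), with invented figures:

```python
# Traffic intensity (in Erlangs) and utilization, single- vs. multi-server.
lam = 12.0        # customers per second (illustrative)
x_bar = 0.25      # mean service time, seconds per customer (illustrative)

a = lam * x_bar           # traffic intensity in Erlangs: 3.0
rho_single = a            # single server: rho = lam * x_bar  (Eq. 2.29)
m = 4
rho_multi = a / m         # m servers: rho = lam * x_bar / m  (Eq. 2.30)

assert rho_single == 3.0 and rho_single > 1   # one server would be overloaded
assert rho_multi == 0.75 and rho_multi < 1    # four servers give a stable system
```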
Indeed, for the system G/G/1 to be stable, it must be that R < C, that is, 0 ≤ ρ < 1. Occasionally, we permit the case ρ = 1 within the range of stability (in particular for the system D/D/1). Stability here once again refers to the fact that limiting distributions for all random variables of interest exist, and that all customers are eventually served. In such a case we may carry out the following simple calculation. We let τ be an arbitrarily long time interval; during this interval we expect (by the law of large numbers) with probability 1 that the number of arrivals will be very nearly equal to λτ. Moreover, let us define p_0 as the probability that the server is idle at some randomly selected time. We may, therefore, say that during the interval τ, the server is busy for τ − τp_0 sec, and so with probability 1, the number of customers served during the interval τ is very nearly (τ − τp_0)/x̄. We may now equate the number of arrivals to the number served during this interval, which gives, for large τ, λτ = (τ − τp_0)/x̄, and so

    ρ = λx̄ = 1 − p_0    (2.32)

The interpretation here is that ρ is merely the fraction of time the server is busy; this supports the conclusion in Eq. (2.27), in which λx̄ = ρ was shown equal to the average number of customers in the service facility.
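Equation (2.32) can likewise be checked by simulation; the sketch below (assumed rates) measures the idle probability p_0 of a single server as the unoccupied fraction of the simulated horizon:

```python
import random

random.seed(4)

lam, mu, n = 0.6, 1.0, 100_000       # assumed rates, so rho = 0.6
t, prev_depart, busy = 0.0, 0.0, 0.0

for _ in range(n):
    t += random.expovariate(lam)          # arrival instant
    x = random.expovariate(mu)            # service requirement (work brought in)
    start = max(t, prev_depart)
    prev_depart = start + x
    busy += x                             # server is busy exactly while serving

p0 = 1.0 - busy / prev_depart             # measured idle probability
rho = lam * (1.0 / mu)

assert abs((1.0 - p0) - rho) < 0.01       # Eq. (2.32): rho = 1 - p0
```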
This, then, is a rapid look at an overall queueing system in which we have exposed some of the basic stochastic processes, as well as some of the important definitions and notation we will encounter. Moreover, we have established Little's result, which permits us to calculate the average number in the system once we have calculated the average time in the system (or vice versa). Now let us move on to a more careful study of the important stochastic processes in our queueing systems.
2.2. DEFINITION AND CLASSIFICATION OF STOCHASTIC PROCESSES

    F_X(x; t) ≜ P[X(t_1) ≤ x_1, . . . , X(t_n) ≤ x_n]    (2.33)
for all x = (x_1, x_2, . . . , x_n), t = (t_1, t_2, . . . , t_n), and n. As mentioned there, this is a formidable task; fortunately, many interesting stochastic processes permit a simpler description. In any case, it is the function F_X(x; t) that really describes the dependencies among the random variables of the stochastic process. Below we describe some of the usual types of stochastic processes that are characterized by different kinds of dependency relations among their random variables. We provide this classification in order to give the reader a global view of this field so that he may better understand in which particular regions he is operating as we proceed with our study of queueing theory and its related stochastic processes.
(a) Stationary Processes. As we discuss at the very end of Appendix II, a stochastic process X(t) is said to be stationary if F_X(x; t) is invariant to shifts in time for all values of its arguments; that is, given any constant τ the following must hold:

    F_X(x; t + τ) = F_X(x; t)    (2.34)

(b) Independent Processes. The simplest form of dependency structure is, in fact, no dependency at all; that is, the random variables are independent and the joint distribution factors:

    F_X(x; t) = F_{X_1}(x_1; t_1)F_{X_2}(x_2; t_2) · · · F_{X_n}(x_n; t_n)    (2.35)

In this case we are stretching things somewhat by calling such a sequence a random process, since there is no structure or dependence among the random variables. In the case of a continuous random process, such an independent process may be defined, and it is commonly referred to as "white noise" (an example is the time derivative of Brownian motion).
(c) Markov Processes. In 1907 A. A. Markov published a paper [MARK 07] in which he defined and investigated the properties of what are now known as Markov processes. In fact, what he created was a simple and highly useful form of dependency among the random variables forming a stochastic process, which we now describe.

A Markov process with a discrete state space is referred to as a Markov chain. The discrete-time Markov chain is the easiest to conceptualize and understand. A set of random variables {X_n} forms a Markov chain if the probability that the next value (state) is x_{n+1} depends only upon the current value (state) x_n and not upon any previous values. Thus we have a random sequence in which the dependency extends backwards one unit in time. That is, the way in which the entire past history affects the future of the process is completely summarized in the current value of the process.
In the case of a discrete-time Markov chain the instants when state changes may occur are preordained to be at the integers 0, 1, 2, . . . , n, . . . . In the case of the continuous-time Markov chain, however, the transitions between states may take place at any instant in time. Thus we are led to consider the random variable that describes how long the process remains in its current (discrete) state before making a transition to some other state. Because the Markov property insists that the past history be completely summarized in the specification of the current state, we are not free to require that a specification also be given as to how long the process has been in its current state! This imposes a heavy constraint on the distribution of time that the process may remain in a given state. In fact, as we shall see in Eq. (2.85), this state time must be exponentially distributed. In a real sense, then, the exponential distribution is a continuous distribution that is "memoryless" (we will discuss this notion at considerable length later in this chapter). Similarly, in the discrete-time Markov chain, the process may remain in the given state for a time that must be geometrically distributed; this is the only discrete probability mass function that is memoryless. This memoryless property is required of all Markov chains and restricts the generality of the processes one would like to consider.
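The memoryless property of the exponential distribution is easy to demonstrate empirically; the sketch below estimates P[X > s + t | X > s] and P[X > t] from samples (the values of s and t are arbitrary choices):

```python
import random

random.seed(2)

# Empirical check that the exponential distribution is memoryless:
# P[X > s + t | X > s] should equal P[X > t].  (The geometric
# distribution admits the same check in discrete time.)
samples = [random.expovariate(1.0) for _ in range(500_000)]
s, t = 0.7, 1.1

tail_t = sum(x > t for x in samples) / len(samples)
beyond_s = [x for x in samples if x > s]
cond_tail = sum(x > s + t for x in beyond_s) / len(beyond_s)

assert abs(cond_tail - tail_t) < 0.01   # the two tails agree
```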
Expressed analytically, the Markov property may be written as

    P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, . . . , X(t_1) = x_1]
        = P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n]    (2.36)

where t_1 < t_2 < · · · < t_n < t_{n+1} and x_i is included in some discrete state space.
The consideration of Markov processes is central to the study of queueing theory, and much of this text is devoted to that study. Therefore, a good portion of this chapter deals with discrete- and continuous-time Markov chains.

(d) Birth-Death Processes. A very important special class of Markov chains has come to be known as the birth-death process. These may be either discrete- or continuous-time processes in which the defining condition is that state transitions take place between neighboring states only. That is, one may choose the set of integers as the discrete state space (with no loss of generality), and then the birth-death process requires that if X_n = i, then X_{n+1} = i − 1, i, or i + 1 and no other. As we shall see, birth-death processes have played a significant role in the development of queueing theory. For the moment, however, let us proceed with our general view of stochastic processes to see how each fits into the general scheme of things.
    S_n = X_1 + X_2 + · · · + X_n    n = 1, 2, . . .    (2.37)
walk, whereas if they are taken from a continuum, then we have a continuous-time random walk. In any case, we assume that the interval between these transitions is distributed in an arbitrary way, and so a random walk is a special case of a semi-Markov process.* In the case when the common distribution for X_n is a discrete distribution, we have a discrete-state random walk; in this case the transition probability p_ij of going from state i to state j will depend only upon the difference in indices j − i (which we denote by q_{j−i}).

An example of a continuous-time random walk is that of Brownian motion; in the discrete-time case an example is the total number of heads observed in a sequence of independent coin tosses.

A random walk is occasionally referred to as a process with "independent increments."
(g) Renewal Processes. A renewal process is related† to a random walk. However, the interest is not in following a particle among many states but rather in counting transitions that take place as a function of time. That is, we consider the real time axis, on which is laid out a sequence of points; the distribution of time between adjacent points is an arbitrary common distribution, and each point corresponds to an instant of a state transition. We assume that the process begins in state 0 [i.e., X(0) = 0] and increases by unity at each transition epoch; that is, X(t) equals the number of state transitions that have taken place by t. In this sense it is a special case of a random walk in which q_1 = 1 and q_i = 0 for i ≠ 1. We may think of Eq. (2.37) as describing a renewal process in which S_n is the random variable denoting the time at which the nth transition takes place. As earlier, the sequence {X_n} is a set of independent identically distributed random variables, where X_n now represents the time between the (n − 1)th and nth transitions. One should be careful to distinguish the interpretation of Eq. (2.37) when it applies to renewal processes as here and when it applies to a random walk as earlier. The difference is that here in the renewal process the equation describes the time of the nth renewal or transition, whereas in the random walk it describes the state of the process, and the time between state transitions is some other random variable.

An important example of a renewal process is the set of arrival instants to the G/G/m queue. In this case, X_n is identified with the interarrival time.
* Usually, the distribution of time between transitions is of little concern in a random walk; emphasis is placed on the value (position) S_n after n transitions. Often, it is assumed that this distribution of interval time is memoryless, thereby making the random walk a special case of Markov processes; we are more generous in our definition here and permit an arbitrary distribution.

† It may be considered to be a special case of the random walk as defined in (f) above. A renewal process is occasionally referred to as a recurrent process.
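A renewal process is easy to generate; the sketch below (using an arbitrary common interrenewal distribution, uniform on [0, 2]) forms the renewal instants S_n of Eq. (2.37) and counts the transitions X(t) that have occurred by time t:

```python
import random

random.seed(3)

# Interrenewal times X_n are i.i.d. with an arbitrary common distribution
# (here uniform on [0, 2], so the mean interval is 1).
n = 100_000
X = [random.uniform(0.0, 2.0) for _ in range(n)]

S, total = [], 0.0
for x in X:
    total += x
    S.append(total)              # S[k-1] = time of the k-th renewal

t = 50_000.0
count = sum(s <= t for s in S)   # X(t): number of renewals by time t

# The renewal counting rate X(t)/t approaches 1/E[X_n] = 1.
assert abs(count / t - 1.0) < 0.02
```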
Figure 2.4 Relationships among the interesting random processes. SMP: Semi-Markov process; MP: Markov process; RW: Random walk; RP: Renewal process; BD: Birth-Death process.
So there we have it: a self-consistent classification of some interesting stochastic processes. In order to aid the reader in understanding the relationships among Markov processes, semi-Markov processes, and their special cases, we have prepared the diagram of Figure 2.4, which shows these relationships for discrete-state systems. The figure is in the form of a Venn diagram. Moreover, the symbol p_ij denotes the probability of making a transition next to state j, given that the process is currently in state i. Also, f_τ denotes the distribution of time between transitions; to say that "f_τ is memoryless" implies that if it is a discrete-time process, then f_τ is a geometric distribution, whereas if it is a continuous-time process, then f_τ is an exponential distribution. Furthermore, it is implied that f_τ may be a function both of the current and the next state for the process.

The figure shows that birth-death processes form a subset of Markov processes, which themselves form a subset of the class of semi-Markov processes. Similarly, renewal processes form a subset of random-walk processes, which also are a subset of semi-Markov processes. Moreover, there are some renewal processes that may also be classified as birth-death
2.3. DISCRETE-TIME MARKOV CHAINS
    P[X_n = j | X_1 = i_1, X_2 = i_2, . . . , X_{n-1} = i_{n-1}] = P[X_n = j | X_{n-1} = i_{n-1}]    (2.38)
In terms of our example, this definition merely states that the city next to be visited by the hippie depends only upon the city in which he is currently located and not upon all the previous cities he has visited. In this sense the memory of the random process, or Markov chain, goes back only to the most recent position of the particle (hippie). When X_n = j (the hippie is in city j on day n), then the system is said to be in state E_j at time n (or at the nth step). To get our hippie started on day 0 we begin with some initial probability distribution P[X_0 = j]. The expression on the right side of Eq. (2.38) is referred to as the (one-step) transition probability and gives the conditional probability of making a transition from state E_{i_{n-1}} at step n − 1 to state E_j at the nth step in the process. It is clear that if we are given the initial state probability distribution and the transition probabilities, then we can uniquely find the probability of being in various states at time n [see Eqs. (2.55) and (2.56) below].
If it turns out that the transition probabilities are independent of n, then we have what is referred to as a homogeneous Markov chain, and in that case we make the further definition

    p_ij ≜ P[X_n = j | X_{n-1} = i]    (2.39)

which gives the probability of going to state E_j on the next step, given that we are currently at state E_i. What follows refers to homogeneous Markov chains only. These chains are such that their transition probabilities are stationary with time*; therefore, given the current city or state (pun), the probability of various states m steps into the future depends only upon m and not upon the current time; it is expedient to define the m-step transition probabilities as

    p_ij^(m) ≜ P[X_{n+m} = j | X_n = i]    (2.40)
From the Markov property given in Eq. (2.38) it is easy to establish the following recursive formula for calculating p_ij^(m):

    p_ij^(m) = Σ_k p_ik^(m-1) p_kj    m = 2, 3, . . .    (2.41)
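The recursion of Eq. (2.41) can be sketched directly; the 2-state transition matrix below is a hypothetical example:

```python
# Chapman-Kolmogorov recursion (Eq. 2.41) for m-step transition probabilities.
# P is an illustrative 2-state homogeneous chain.
P = [[0.9, 0.1],
     [0.4, 0.6]]

def step(Pm, P):
    """One application of p_ij^(m) = sum_k p_ik^(m-1) * p_kj."""
    n = len(P)
    return [[sum(Pm[i][k] * P[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Pm = P
for _ in range(4):           # compute the 5-step probabilities p_ij^(5)
    Pm = step(Pm, P)

# Each row must still be a probability distribution.
for row in Pm:
    assert abs(sum(row) - 1.0) < 1e-12

# For a 2-state chain the answer is known in closed form:
# p_00^(m) = 0.8 + 0.2 * (0.5)^m, since the second eigenvalue is 0.5.
assert abs(Pm[0][0] - (0.8 + 0.2 * 0.5 ** 5)) < 1e-12
```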
Further, let A be the set of all states in a Markov chain. Then a subset of states A_1 is said to be closed if no one-step transition is possible from any state in A_1 to any state in A_1^c (the complement of the set A_1). If A_1 consists of a single state, say E_i, then it is called an absorbing state; a necessary and sufficient condition for E_i to be an absorbing state is p_ii = 1. If A is closed and does not contain any proper subset that is closed, then we have an irreducible Markov chain, as defined above. On the other hand, if A contains proper subsets that are closed, then the chain is said to be reducible. If a closed subset of a reducible Markov chain contains no closed subsets of itself, then it is referred to as an irreducible sub-Markov chain; these sub-chains may be studied independently of the other states.
It may be that our hippie prefers not to return to a previously visited city. However, due to his mode of travel this may well happen, and it is important for us to define this quantity. Accordingly, let f_j^(n) be the probability that the first return to state E_j occurs n steps after leaving it; then

    f_j = Σ_{n=1}^∞ f_j^(n) = probability of ever returning to state E_j
It is now possible to classify states of a Markov chain according to the value obtained for f_j. In particular, if f_j = 1, then state E_j is said to be recurrent; if, on the other hand, f_j < 1, then state E_j is said to be transient. Furthermore, if the only possible steps at which our hippie can return to state E_j are γ, 2γ, 3γ, . . . (where γ > 1 and is the largest such integer), then state E_j is said to be periodic with period γ; if γ = 1, then E_j is aperiodic.
Considering states for which f_j = 1, we may then define the mean recurrence time of E_j as

    M_j ≜ Σ_{n=1}^∞ n f_j^(n)    (2.42)
* Many of the interesting Markov chains that one encounters in queueing theory are irreducible.
This is merely the average time to return to E_j. With this we may then classify states even further. In particular, if M_j = ∞, then E_j is said to be recurrent null, whereas if M_j < ∞, then E_j is said to be recurrent nonnull. Let us define π_j^(n) to be the probability of finding the system in state E_j at the nth step, that is,

    π_j^(n) ≜ P[X_n = j]    (2.43)

We may now state (without proof) two important theorems. The first comments on the set of states for an irreducible Markov chain: its states are either all transient, all recurrent null, or all recurrent nonnull, and if periodic, all states have the same period. The second asserts that in an irreducible and aperiodic homogeneous Markov chain the limiting probabilities

    π_j = lim_{n→∞} π_j^(n)    (2.44)

always exist and are independent of the initial state probability distribution. Moreover, either

(a) all states are transient or all states are recurrent null, in which cases π_j = 0 for all j and there exists no stationary distribution, or

(b) all states are recurrent nonnull and then π_j > 0 for all j, in which case the set {π_j} is a stationary probability distribution and

    π_j = 1/M_j    (2.45)

In the latter case the quantities π_j are uniquely determined through the relations

    Σ_j π_j = 1    (2.46)

    π_j = Σ_i π_i p_ij    (2.47)
Collecting the limiting probabilities into the stationary probability vector

    π ≜ [π_0, π_1, π_2, . . .]    (2.49)

we may write Eq. (2.47) compactly as

    π = πP    (2.50)

For the hippie's chain the transition probability matrix is

    P = |  0   3/4  1/4 |
        | 1/4   0   3/4 |
        | 1/4  1/4  1/2 |
and so we may solve Eq. (2.50) by considering the three equations derivable from it, that is,

    π_0 = (1/4)π_1 + (1/4)π_2
    π_1 = (3/4)π_0 + (1/4)π_2    (2.51)
    π_2 = (1/4)π_0 + (3/4)π_1 + (1/2)π_2
Note from Eq. (2.51) that the first of these three equations equals the negative sum of the second and third, indicating that there is a linear dependence among them. It always will be the case that one of the equations will be linearly dependent on the others, and it is therefore necessary to introduce the additional conservation relationship as given in Eq. (2.46) in order to solve the system. In our example we then require

    π_0 + π_1 + π_2 = 1    (2.52)
Thus the solution is obtained by simultaneously solving any two of the equations given by Eq. (2.51) along with Eq. (2.52). Solving, we obtain

    π_0 = 5/25 = 0.20
    π_1 = 7/25 = 0.28    (2.53)
    π_2 = 13/25 = 0.52

that is,

    π = [0.20, 0.28, 0.52]    (2.54)
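The solution (2.53) is easy to verify exactly; the sketch below checks that it satisfies Eqs. (2.50) and (2.46) in rational arithmetic:

```python
from fractions import Fraction as F

# The hippie's transition matrix, in exact rationals.
P = [[F(0),    F(3, 4), F(1, 4)],
     [F(1, 4), F(0),    F(3, 4)],
     [F(1, 4), F(1, 4), F(1, 2)]]
pi = [F(5, 25), F(7, 25), F(13, 25)]

# pi = pi P  (Eq. 2.50), computed exactly.
balance = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
assert balance == pi

# The conservation relation (2.46).
assert sum(pi) == 1
assert [float(p) for p in pi] == [0.2, 0.28, 0.52]
```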
Now, using the definition of transition probability and making use of Definition (2.48), we have a method for calculating π^(n) expressible in terms of P and the initial state distribution π^(0). That is,

    π^(n) = π^(n-1)P    n = 1, 2, . . .    (2.55)

    π^(n) = π^(0)P^n    n = 1, 2, . . .    (2.56)

Equation (2.55) gives the general method for calculating the state probabilities n steps into a process, given a transition probability matrix P and an initial state vector π^(0). From our earlier definitions, we have the stationary probability vector

    π = lim_{n→∞} π^(n)

assuming the limit exists. (From Theorem 2, we know that this will be the case if we have an irreducible aperiodic homogeneous Markov chain.)
and so

    π = πP

which is Eq. (2.50) again. Note that the solution for π is independent of the initial state vector. Applying this to our example, let us assume that our hippie begins in the city of Abra at time 0 with probability 1, that is,

    π^(0) = [1, 0, 0]    (2.57)

From this we may calculate the sequence of values π^(n), and these are given in the chart below. The limiting value π as given in Eq. (2.53) is also entered in this chart.
    n          0      1      2      3      4      ∞
    π_0^(n)    1      0      0.250  0.187  0.203  0.20
    π_1^(n)    0      0.75   0.062  0.359  0.254  0.28
    π_2^(n)    0      0.25   0.688  0.454  0.543  0.52
We may alternatively have chosen to assume that the hippie begins in the city of Zeus with probability 1, which would give rise to the initial state vector

    π^(0) = [0, 1, 0]    (2.58)

and which results in the following table:

    n          0      1      2      3      4      ∞
    π_0^(n)    0      0.25   0.187  0.203  0.199  0.20
    π_1^(n)    1      0      0.375  0.250  0.289  0.28
    π_2^(n)    0      0.75   0.438  0.547  0.512  0.52
The third possibility, of course, is to start the hippie in the remaining city, giving the initial state vector

    π^(0) = [0, 0, 1]    (2.59)

and the following table:

    n          0      1      2      3      4      ∞
    π_0^(n)    0      0.25   0.187  0.203  0.199  0.20
    π_1^(n)    0      0.25   0.313  0.266  0.285  0.28
    π_2^(n)    1      0.50   0.500  0.531  0.516  0.52
From these three tables we see that after only four steps the quantities π_i^(n) for a given value of i are almost identical regardless of the city in which we began. The rapidity with which these quantities converge, as we shall soon see, depends upon the eigenvalues of P. In all cases, however, we observe that the limiting values at infinity are rapidly approached and, as stated earlier, are independent of the initial position of the particle.
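The chart's entries can be reproduced by iterating Eq. (2.55); the sketch below starts from the initial vector of Eq. (2.57):

```python
# Iterate pi(n) = pi(n-1) P  (Eq. 2.55) from pi(0) = [1, 0, 0]  (Eq. 2.57).
P = [[0.0,  0.75, 0.25],
     [0.25, 0.0,  0.75],
     [0.25, 0.25, 0.50]]

pi = [1.0, 0.0, 0.0]
history = [pi]
for _ in range(20):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    history.append(pi)

# The n = 4 row of the chart: (0.203, 0.254, 0.543), to chart precision.
assert all(abs(a - b) < 5e-4 for a, b in zip(history[4], (0.203, 0.254, 0.543)))

# After 20 steps the limit (2.53) is reached to high accuracy.
assert all(abs(a - b) < 1e-9 for a, b in zip(pi, (0.20, 0.28, 0.52)))
```

Starting instead from [0, 1, 0] or [0, 0, 1] reproduces the other two tables, converging to the same limit.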
In order to get a better physical feel for what is occurring, it is instructive to follow the probabilities for the various states of the Markov chain as time evolves. To this end we introduce the notion of barycentric coordinates, which are extremely useful in portraying probability vectors. Consider a probability vector with N components (i.e., a Markov process with N states in our case) and a tetrahedron in N − 1 dimensions. In our example N = 3, and so our tetrahedron becomes an equilateral triangle in two dimensions. In general, we let the height of this tetrahedron be unity. Any probability vector π^(n) may be represented as a point in this (N − 1)-space by identifying each component of that probability vector with a distance from one face of the tetrahedron. That is, we measure from face j a distance equal to the probability associated with that component π_j^(n); if we do this for each face and therefore for each component, we will specify one point within the tetrahedron, and that point correctly identifies our probability vector. Each unique probability vector will map into a unique point in this space, and it is easy to determine the probability measure from its location in that space. In our example we may plot the three initial state vectors as given in Eqs. (2.57)-(2.59) as shown in Figure 2.6. The numbers in parentheses represent which probability components are to be measured from the face associated with those numbers. The initial state vector corresponding to Eq. (2.59), for
[Figure 2.6: the equilateral triangle (height = 1) of barycentric coordinates, with the three initial state vectors [1, 0, 0], [0, 1, 0], and [0, 0, 1] at its vertices.]
    Π(z) ≜ Σ_{n=0}^∞ π^(n) z^n    (2.60)

This transform will certainly exist in the unit disk, that is, |z| ≤ 1. We now apply the transform method* to Eq. (2.55) over its range of application (n = 1, 2, . . .); this we do by first multiplying that equation by z^n and then summing from 1 to infinity, thus

    Σ_{n=1}^∞ π^(n) z^n = z (Σ_{n=1}^∞ π^(n-1) z^{n-1}) P    (2.61)

* The steps involved in applying this method are summarized on pp. 74-5 of this chapter.
The parenthetical term on the right-hand side of this last equation is recognized as Π(z) simply by changing the index of summation, and the left-hand side is Π(z) − π^(0). Thus we find

    Π(z) − π^(0) = zΠ(z)P

and so

    Π(z) = π^(0)[I − zP]^{-1}    (2.62)

Since π^(n) = π^(0)P^n from Eq. (2.56), we recognize the transform pair

    [I − zP]^{-1} ⇔ P^n    (2.63)
Of course, P^n is precisely what we are looking for in order to obtain our transient solution, since it will directly give us π^(n) from Eq. (2.56). All that is required, therefore, is that we form the matrix inverse indicated in Eq. (2.63). In general this becomes a rather complex task when the number of states in our Markov chain is at all large. Nevertheless, this is one formal procedure for carrying out the transient analysis.
Let us apply these techniques to our hippie hitchhiking example. Recall that the transition probability matrix P was given by

    P = |  0   3/4  1/4 |
        | 1/4   0   3/4 |
        | 1/4  1/4  1/2 |
    I − zP = | 1         −(3/4)z   −(1/4)z     |
             | −(1/4)z   1         −(3/4)z     |
             | −(1/4)z   −(1/4)z   1 − (1/2)z  |
Next, in order to find the inverse of this matrix we must form its determinant, thus:

    det(I − zP) = 1 − (1/2)z − (7/16)z² − (1/16)z³
                = (1 − z)[1 + (1/4)z]²

The inverse is then the adjugate matrix divided by this determinant:

    [I − zP]^{-1} = 1 / {(1 − z)[1 + (1/4)z]²} ×

        | 1 − (1/2)z − (3/16)z²   (3/4)z − (5/16)z²       (1/4)z + (9/16)z²  |
        | (1/4)z + (1/16)z²       1 − (1/2)z − (1/16)z²   (3/4)z + (1/16)z²  |
        | (1/4)z + (1/16)z²       (1/4)z + (3/16)z²       1 − (3/16)z²       |
Having found the matrix inverse, we are now faced with finding the inverse transform of this matrix, which will yield P^n. This we do as usual by carrying out a partial-fraction expansion (see Appendix I). The fact that we have a matrix presents no problem; we merely note that each element in the matrix is itself a rational function of z, which must be expanded in partial fractions term by term. (This task is simplified if the matrix is written as the sum of three matrices: a constant matrix; a constant matrix times z; and a constant matrix times z².) Since we have three roots in the denominator of our rational functions, we expect three terms in our partial-fraction expansion. Carrying
out this expansion, we obtain

    [I − zP]^{-1} = (1/(1 − z)) (1/25) |  5   7  13 |
                                       |  5   7  13 |
                                       |  5   7  13 |

                  + (1/[1 + (1/4)z]²) (1/5) | 0  −8   8 |
                                            | 0   2  −2 |
                                            | 0   2  −2 |

                  + (1/[1 + (1/4)z]) (1/25) |  20   33  −53 |
                                            |  −5    8   −3 |
                                            |  −5  −17   22 |    (2.64)
We observe immediately from this expansion that the matrix associated with the root (1 − z) gives precisely the equilibrium solution we found by direct methods [see Eq. (2.53)]; the fact that each row of this matrix is identical reflects the fact that the equilibrium solution is independent of the initial state. The other matrices, associated with roots greater than unity in absolute value, will always be what are known as differential matrices (each of whose rows must sum to zero). Inverting on z we finally obtain (by our tables in Appendix I)
    P^n = (1/25) |  5   7  13 |  + (1/5)(n + 1)(−1/4)^n | 0  −8   8 |
                 |  5   7  13 |                         | 0   2  −2 |
                 |  5   7  13 |                         | 0   2  −2 |

        + (1/25)(−1/4)^n |  20   33  −53 |
                         |  −5    8   −3 |
                         |  −5  −17   22 |    n = 0, 1, 2, . . .    (2.65)
This is then the complete solution, since application of Eq. (2.56) directly gives π^(n), which is the transient solution we were seeking. Note that for n = 0 we obtain the identity matrix, whereas for n = 1 we must, of course, obtain the transition probability matrix P. Furthermore, we see that in this case we have two transient matrices, which decay in the limit, leaving only the constant matrix representing our equilibrium solution. When we think about the decay of the transient, we are reminded of the shrinking triangles in Figure 2.6. Since the transients decay at a rate related to the characteristic values (one over the zeros of the determinant), we therefore expect the permitted positions in Figure 2.6 to shrink with n in a similar fashion. In fact, it can be shown that these triangles shrink by a constant factor each time n increases by 1. This shrinkage factor for any Markov process can be shown to be equal to the absolute value of the product of the characteristic values of its transition probability matrix; in our example the characteristic values are 1, −1/4, and −1/4. The absolute value of their product is 1/16, and this indeed is the factor by which the area of our triangles decreases each time n is increased.
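The closed form of Eq. (2.65) can be checked against direct matrix powers; the sketch below compares the two for the first several values of n:

```python
# The three matrices of Eq. (2.65): equilibrium part (over 25), the
# (n+1)(-1/4)^n part (over 5), and the (-1/4)^n part (over 25).
E = [[5, 7, 13], [5, 7, 13], [5, 7, 13]]
F = [[0, -8, 8], [0, 2, -2], [0, 2, -2]]
G = [[20, 33, -53], [-5, 8, -3], [-5, -17, 22]]

def closed_form(n):
    c = (-0.25) ** n
    return [[E[i][j] / 25 + (n + 1) * c * F[i][j] / 5 + c * G[i][j] / 25
             for j in range(3)] for i in range(3)]

P = [[0.0, 0.75, 0.25], [0.25, 0.0, 0.75], [0.25, 0.25, 0.50]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Pn = [[float(i == j) for j in range(3)] for i in range(3)]   # P^0 = I
for n in range(6):
    cf = closed_form(n)
    assert all(abs(Pn[i][j] - cf[i][j]) < 1e-12
               for i in range(3) for j in range(3))
    Pn = matmul(Pn, P)       # advance to P^(n+1)
```

As the text notes, n = 0 yields the identity matrix and n = 1 yields P itself; the two transient matrices decay geometrically at rate 1/4.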
    P[remaining in E_j for exactly m steps] = (p_jj)^{m-1}(1 − p_jj)    m = 1, 2, . . .    (2.66)

This, of course, is the geometric distribution, as we claimed. A similar argument will be given later for the continuous-time Markov chain.
So far we have concerned ourselves principally with homogeneous Markov processes. Recall that a homogeneous Markov chain is one for which the transition probabilities are independent of time. Among the quantities we were able to calculate was the m-step transition probability p_ij^(m), which gave the probability of passing from state E_i to state E_j in m steps; the recursive formula for this calculation was given in Eq. (2.41). We now wish to take a more general point of view and permit the transition probabilities to depend upon time. We intend to derive a relationship not unlike Eq. (2.41), which will form our point of departure for many further developments in the application of Markov processes to queueing problems. For the time being we continue to restrict ourselves to discrete-time, discrete-state Markov chains.

Generalizing the homogeneous definition for the multistep transition probabilities given in Eq. (2.40), we now define

    p_ij(m, n) ≜ P[X_n = j | X_m = i]    (2.67)

which gives the probability that the system will be in state E_j at step n, given that it was in state E_i at step m.

* The memoryless property is discussed in some detail later.
Summing over all the mutually exclusive intermediate states E_k that the process may occupy at some step q between m and n, we have

    p_ij(m, n) = Σ_k P[X_n = j, X_q = k | X_m = i]    (2.68)

for m ≤ q ≤ n. This last equation must hold for any stochastic process (not necessarily Markovian), since we are considering all mutually exclusive and exhaustive possibilities. From the definition of conditional probability we may rewrite this last equation as
    p_ij(m, n) = Σ_k P[X_q = k | X_m = i] P[X_n = j | X_m = i, X_q = k]    (2.69)

Due to the Markov property, however, the last factor simplifies:

    P[X_n = j | X_m = i, X_q = k] = P[X_n = j | X_q = k]

Applying this to Eq. (2.69) and making use of our definition in Eq. (2.67), we finally arrive at

    p_ij(m, n) = Σ_k p_ik(m, q) p_kj(q, n)    (2.70)
Let us choose q = n − 1; then Eq. (2.70) becomes

    p_ij(m, n) = Σ_k p_ik(m, n − 1) p_kj(n − 1, n)    (2.71)

We denote the one-step transition probabilities at step n by

    p_ij(n) ≜ p_ij(n, n + 1)    (2.72)

and collect the transition probabilities into the matrices

    H(m, n) ≜ [p_ij(m, n)],    P(n) ≜ [p_ij(n)]    (2.73)

With this notation, Eq. (2.71) may be written as

    p_ij(m, n) = Σ_k p_ik(m, n − 1) p_kj(n − 1)    (2.74)

or, in matrix form,

    H(m, n) = H(m, n − 1)P(n − 1)    (2.75)
Equations (2.74) and (2.75) are known as the forward Chapman-Kolmogorov equations for discrete-time Markov chains, since they are written at the forward (most recent time) end of the interval. On the other hand, we could have chosen q = m + 1, in which case we obtain

    p_ij(m, n) = Σ_k p_ik(m) p_kj(m + 1, n)    (2.76)

or, in matrix form,

    H(m, n) = P(m)H(m + 1, n)    (2.77)

these are the backward Chapman-Kolmogorov equations. The common solution to both the forward and backward equations is

    H(m, n) = P(m)P(m + 1) · · · P(n − 1)    m ≤ n − 1    (2.78)
That this solves Eqs. (2.75) and (2.77) may be established by direct substitution. We observe in the homogeneous case that this yields H(m, n) = P^{n−m}, as we have seen earlier. By similar arguments we find that the time-dependent probabilities {π_j^(n)} defined earlier may now be obtained through the following equation:

    π^(n+1) = π^(n)P(n)

whose solution is

    π^(n+1) = π^(0)P(0)P(1) · · · P(n)    (2.79)

These last two equations correspond to Eqs. (2.55) and (2.56), respectively, for the homogeneous case. The Chapman-Kolmogorov equations give us a means for describing the time-dependent probabilities of many interesting queueing systems that we develop in later chapters.*
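The product solution (2.78) and both the forward (2.75) and backward (2.77) forms can be exercised on a small example; the 2-state time-varying matrix P(n) below is purely illustrative:

```python
# Nonhomogeneous chain: H(m, n) = P(m) P(m+1) ... P(n-1)  (Eq. 2.78).
def P(n):
    a = 0.5 + 0.4 / (n + 1)          # illustrative time-varying probability
    return [[a, 1 - a], [1 - a, a]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def H(m, n):
    out = [[1.0, 0.0], [0.0, 1.0]]   # H(m, m) = I
    for q in range(m, n):
        out = matmul(out, P(q))
    return out

lhs = H(0, 5)

# Forward form (2.75): H(m, n) = H(m, n-1) P(n-1)
rhs = matmul(H(0, 4), P(4))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))

# Backward form (2.77): H(m, n) = P(m) H(m+1, n)
rhs2 = matmul(P(0), H(1, 5))
assert all(abs(lhs[i][j] - rhs2[i][j]) < 1e-12 for i in range(2) for j in range(2))
```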
Before leaving discrete-time Markov chains, we wish to introduce the special case of discrete-time birth-death processes. A birth-death process is an example of a Markov process that may be thought of as modeling changes in the size of a population. In what follows we say that the system is in state E_k when the population consists of k members. We further assume that changes in population size occur by at most one; that is, a "birth" will change the population's size to one greater, whereas a "death" will lower the population size to one less. In considering birth-death processes we do not permit multiple births or bulk disasters; such possibilities will be considered
* It is clear from this development that all Markov processes must satisfy the Chapman-Kolmogorov equations. Let us note, however, that all processes that satisfy the Chapman-Kolmogorov equation are not necessarily Markov processes; see, for example, p. 203 of [PARZ 62].
later in the text and correspond to random walks. We will consider the Markov chain to be homogeneous in that the transition probabilities p_{ij} do not change with time; however, certainly they will be a function of the state of the system. Thus we have that for our discrete-time birth-death process
p_{ij} = { d_i,             j = i − 1
           1 − b_i − d_i,   j = i
           b_i,             j = i + 1            (2.80)
           0,               otherwise
Here d_i is the probability that at the next time step a single death will occur, driving the population size down to i − 1, given that the population size now is i. Similarly, b_i is the probability that a single birth will occur, given that the current size is i, thereby driving the population size to i + 1 at the next time step. 1 − b_i − d_i is the probability that neither of these events will occur and that at the next time step the population size will not change. Only these three possibilities are permitted. Clearly d_0 = 0, since we can have no deaths when there is no one in the population to die. However, contrary to intuition we do permit b_0 > 0; this corresponds to a birth when there are no members in the population. Whereas this may seem to be spontaneous generation, or perhaps divine creation, it does provide a meaningful model in terms of queueing theory. The model is as follows: The population corresponds to the customers in the queueing system; a death corresponds to a customer departure from that system; and a birth corresponds to a customer arrival to that system. Thus we see it is perfectly feasible to have an arrival (a birth) to an empty system! The stationary probability transition matrix for the general birth-death process then appears as follows:
P = | 1−b_0     b_0                                       |
    | d_1       1−b_1−d_1   b_1                           |
    |           d_2         1−b_2−d_2   b_2               |
    |                       d_3         1−b_3−d_3   b_3   |
    |                                   . . .             |

If we are dealing with a finite chain, then the last row of this matrix would be [0 0 · · · 0 d_N  1−d_N], which illustrates the fact that no births are permitted when the population has reached its maximum size N. We see that the P
matrix has nonzero terms only along the main diagonal and along the diagonals directly above and below it. This is a highly specialized form for the transition probability matrix, and as such we might expect that it can be solved. To solve the birth-death process means to find the solution for the state probabilities π^{(n)}. As we have seen, the general form of solution for these probabilities is given in Eqs. (2.55) and (2.56), and the equation that describes the limiting solution (as n → ∞) is given in Eq. (2.50). We also demonstrated earlier the z-transform method for finding the solution. Of course, due to this special structure of the birth-death transition matrix, we might expect a more explicit solution. We defer discussion of the solution to the material on continuous-time Markov chains, which we now investigate.
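The tridiagonal structure just described is easy to exercise numerically. The following sketch (with hypothetical birth and death probabilities, not from the text) builds P for a finite discrete-time birth-death chain and iterates π^{(n+1)} = π^{(n)}P toward the limiting distribution:

```python
def birth_death_matrix(b, d):
    """Tridiagonal P for a finite birth-death chain; b[i], d[i] are the
    birth and death probabilities in state i (b[last] = d[0] = 0)."""
    N = len(b)
    P = [[0.0] * N for _ in range(N)]
    for i in range(N):
        if i > 0:
            P[i][i - 1] = d[i]
        if i < N - 1:
            P[i][i + 1] = b[i]
        P[i][i] = 1.0 - b[i] - d[i]
    return P

def step(pi, P):
    """One step of pi(n+1) = pi(n) P."""
    return [sum(pi[i] * P[i][j] for i in range(len(pi)))
            for j in range(len(pi))]

b = [0.3, 0.3, 0.3, 0.0]      # no birth out of the top state
d = [0.0, 0.2, 0.2, 0.2]      # no death out of the empty state
P = birth_death_matrix(b, d)

pi = [1.0, 0.0, 0.0, 0.0]     # start with an empty system
for _ in range(2000):
    pi = step(pi, P)
assert abs(sum(pi) - 1.0) < 1e-9
```

For this chain the limiting probabilities satisfy the neighbor ratios π_{i+1}/π_i = b_i/d_{i+1}, a fact established in general in Chapter 3; here the iteration converges to exactly those ratios (0.3/0.2 = 1.5).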
2.4. CONTINUOUS-TIME MARKOV CHAINS
If we allow our particle in motion to occupy positions (take on values) from a discrete set, but permit it to change positions or states at any point in time, then we say we have a continuous-time Markov chain. We may continue to use our example of the hippie hitchhiking from city to city, where now his transitions between cities may occur at any time of day or night. We let X(t) denote the city in which we find our hippie at time t. X(t) will take on values from a discrete set, which we will choose to be the ordered integers and which will be in one-to-one correspondence with the cities which our hippie may visit.
In the case of a continuous-time Markov chain, we have the following definition:

DEFINITION: The random process X(t) forms a continuous-time Markov chain if for all integers n and for any sequence t_1, t_2, . . . , t_{n+1} such that t_1 < t_2 < · · · < t_{n+1} we have

P[X(t_{n+1}) = j | X(t_1) = i_1, X(t_2) = i_2, . . . , X(t_n) = i_n]
        = P[X(t_{n+1}) = j | X(t_n) = i_n]   (2.81)

That is, the entire past history of the process is summarized in its most recent state:

P[X(t) = j | X(τ), τ ≤ s] = P[X(t) = j | X(s)]   for s < t

Moreover, we shall not concern
ourselves with some of the deeper questions of convergence of limits in passing from discrete to continuous time; for a careful treatment the reader is referred to [PARZ 62, FELL 66].

Earlier we stated for any Markov process that the time which the process spends in any state must be "memoryless"; this implies that discrete-time Markov chains must have geometrically distributed state times [which we have already proved in Eq. (2.66)] and that continuous-time Markov chains must have exponentially distributed state times. Let us now prove this last statement. For this purpose let τ_i be a random variable that represents the time which the process spends in state E_i. Recall the Markov property, which states that the way in which the past trajectory of the process influences the future development is completely specified by giving the current state of the process. In particular, we need not specify how long the process has been in its current state. This means that the remaining time in E_i must have a distribution that depends only upon i and not upon how long the process has been in E_i. We may write this in the following form:
P[τ_i > s + t | τ_i > s] = h(t)   (2.82)

where h(t) is a function only of the additional time t (and not of the expended time s).* We may rewrite this conditional probability as follows:

P[τ_i > s + t | τ_i > s] = P[τ_i > s + t, τ_i > s] / P[τ_i > s]
                         = P[τ_i > s + t] / P[τ_i > s]

since τ_i > s + t implies τ_i > s. Combining this with Eq. (2.82) we have

P[τ_i > s + t] = P[τ_i > s] h(t)

Setting s = 0 and observing that P[τ_i > 0] = 1, we have immediately that h(t) = P[τ_i > t], and so

P[τ_i > s + t] = P[τ_i > s] P[τ_i > t]   (2.83)
Let us denote the pdf for τ_i by f_{τ_i}(t). Then

f_{τ_i}(t) = (d/dt) P[τ_i ≤ t] = (d/dt)(1 − P[τ_i > t]) = −(d/dt) P[τ_i > t]   (2.84)

Now let us differentiate both sides of Eq. (2.83) with respect to s:

(d/ds) P[τ_i > s + t] = −f_{τ_i}(s) P[τ_i > t]

where we have taken advantage of Eq. (2.84). Dividing both sides by P[τ_i > t] and setting s = 0 we have

(dP[τ_i > t]/dt) / P[τ_i > t] = −f_{τ_i}(0)

Solving this differential equation we obtain

P[τ_i > t] = e^{−f_{τ_i}(0) t}

or

f_{τ_i}(t) = f_{τ_i}(0) e^{−f_{τ_i}(0) t}   (2.85)

which holds for t ≥ 0. There we have it: the pdf for the time the process spends in state E_i is exponentially distributed with the parameter f_{τ_i}(0), which may depend upon the state E_i. We will have much more to say about this exponential distribution and its importance in Markov processes shortly.
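The functional equation (2.83) that forced this conclusion can be illustrated numerically. In the sketch below (not from the text; the rate 1.7 is an arbitrary choice) the exponential survivor function satisfies P[τ > s + t] = P[τ > s]P[τ > t] to machine precision, while a non-exponential survivor function, here that of a uniform random variable, visibly violates it:

```python
import math

a = 1.7  # an arbitrary, hypothetical rate parameter

def surv_exp(t):
    """Survivor function P[tau > t] of an exponential with rate a."""
    return math.exp(-a * t)

def surv_uniform(t):
    """Survivor function of a Uniform(0, 1) random variable."""
    return max(0.0, 1.0 - t) if t < 1 else 0.0

s, t = 0.4, 0.9
# Memoryless identity of Eq. (2.83) holds for the exponential ...
assert abs(surv_exp(s + t) - surv_exp(s) * surv_exp(t)) < 1e-12
# ... and fails for the uniform distribution.
assert abs(surv_uniform(s + t) - surv_uniform(s) * surv_uniform(t)) > 0.05
```

This is the continuous-time counterpart of the geometric distribution's memorylessness in the discrete-time case.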
In the case of a discrete-time homogeneous Markov chain we defined the transition probabilities as p_{ij} = P[X_n = j | X_{n−1} = i] and also the m-step transition probabilities as p_{ij}^{(m)} = P[X_{n+m} = j | X_n = i]; these quantities were independent of n due to the homogeneity of the Markov chain. In the case of the nonhomogeneous Markov chain we found it necessary to identify points along the time axis in an absolute fashion and were led to the important transition probability definition p_{ij}(m, n) = P[X_n = j | X_m = i]. In a completely analogous way we must now define for our continuous-time Markov chains the following time-dependent transition probability:

p_{ij}(s, t) ≜ P[X(t) = j | X(s) = i]   (2.86)
Consider the following three successive time instants for our continuous-time chain: s ≤ u ≤ t. We may then refer back to Figure 2.7 and identify some sample paths for what we will now consider to be a continuous-time Markov chain; the critical observation once again is that in passing from state E_i at time s to state E_j at time t, the process must pass through some intermediate state E_k at the intermediate time u. We then proceed exactly as we did in deriving Eq. (2.70) and arrive at the following Chapman-Kolmogorov equation for continuous-time Markov chains:

p_{ij}(s, t) = Σ_k p_{ik}(s, u) p_{kj}(u, t),   s ≤ u ≤ t   (2.87)

where i, j = 0, 1, 2, . . . . We may put this equation into matrix form if we first define the matrix consisting of elements p_{ij}(s, t) as

H(s, t) ≜ [p_{ij}(s, t)]   (2.88)

in which case Eq. (2.87) becomes

H(s, t) = H(s, u) H(u, t),   s ≤ u ≤ t   (2.89)

Returning for a moment to the discrete-time case, Eq. (2.75) gives

H(m, n) − H(m, n − 1) = H(m, n − 1) P(n − 1) − H(m, n − 1)
                      = H(m, n − 1)[P(n − 1) − I]   (2.90)
We must now consider some limits. Just as in the discrete case we defined P(n) = H(n, n + 1), we find it convenient in this continuous-time case to define the matrix

Q(t) ≜ lim_{Δt→0} [H(t, t + Δt) − I] / Δt   (2.91)

in terms of which the forward equation (2.90) passes over to the differential form

∂H(s, t)/∂t = H(s, t) Q(t)   (2.92)

The matrix Q(t), known as the infinitesimal generator of the process, has elements

Q(t) = [q_{ij}(t)]   (2.93)

given by

q_{jj}(t) = lim_{Δt→0} [p_{jj}(t, t + Δt) − 1] / Δt   (2.94)

q_{ij}(t) = lim_{Δt→0} p_{ij}(t, t + Δt) / Δt,   i ≠ j   (2.95)
These limits have the following interpretation. If the system at time t is in state E_i then the probability that a transition occurs (to any state other than E_i) during the interval (t, t + Δt) is given by −q_{ii}(t) Δt + o(Δt).* Thus we may say that −q_{ii}(t) is the rate at which the process departs from state E_i when it is in that state. Similarly, given that the system is in state E_i at time t, the conditional probability that it will make a transition from this state to state E_j in the time interval (t, t + Δt) is given by q_{ij}(t) Δt + o(Δt). Thus
* As usual, the notation o(Δt) denotes any function that goes to zero with Δt faster than Δt itself; that is,

lim_{Δt→0} o(Δt)/Δt = 0
Σ_{j≠i} q_{ij}(t) = −q_{ii}(t)   (2.96)

Thus we have interpreted the terms in Eq. (2.92); this is nothing more than the forward Chapman-Kolmogorov equation for the continuous-time Markov chain.
In a similar fashion, beginning with Eq. (2.77), we may derive the backward Chapman-Kolmogorov equation

∂H(s, t)/∂s = −Q(s) H(s, t)   (2.97)
The forward and backward matrix equations just derived may be expressed through their individual terms as follows. The forward equation gives us [with the additional condition that the passage to the limit in Eq. (2.95) is uniform in i for fixed j]

∂p_{ij}(s, t)/∂t = q_{jj}(t) p_{ij}(s, t) + Σ_{k≠j} q_{kj}(t) p_{ik}(s, t)   (2.98)

and the backward equation gives us

∂p_{ij}(s, t)/∂s = −q_{ii}(s) p_{ij}(s, t) − Σ_{k≠i} q_{ik}(s) p_{kj}(s, t)   (2.99)

The initial state E_i at the initial time s affects the solution of this set of differential equations only through the initial conditions

p_{ij}(s, s) = { 1   if j = i
               { 0   if j ≠ i

p_{ij}(t, t) = { 1   if i = j
               { 0   if i ≠ j

These equations [(2.98) and (2.99)] uniquely determine the transition probabilities p_{ij}(s, t) and must, of course, also satisfy Eq. (2.87) as well as the initial conditions.
In matrix notation we may exhibit the solution to the forward and backward Eqs. (2.92) and (2.97), respectively, in a straightforward manner; the result is*

H(s, t) = exp[∫_s^t Q(u) du]   (2.100)

We observe that this solution also satisfies Eq. (2.89) and is a continuous-time analog to the discrete-time solution given in Eq. (2.78).
Now for the state probabilities themselves: In analogy with π_j^{(n)} we now define

π_j(t) ≜ P[X(t) = j]   (2.101)

as well as the vector of these probabilities

π(t) ≜ [π_0(t), π_1(t), π_2(t), . . .]   (2.102)

If we are given the initial state distribution π(0) then we can solve for the time-dependent state probabilities from

π(t) = π(0) H(0, t)   (2.103)

that is,

π(t) = π(0) exp[∫_0^t Q(u) du]   (2.104)

This corresponds to the discrete-time solution given in Eq. (2.79). The matrix differential equation corresponding to Eq. (2.103) is easily seen to be

dπ(t)/dt = π(t) Q(t)   (2.105)
* Here the exponential of a matrix is defined through the usual power series; for a constant matrix P,

e^{Pt} = I + Pt + P²t²/2! + P³t³/3! + · · ·
For the homogeneous continuous-time Markov chain, in which the transition probabilities depend only upon the elapsed time and the rates q_{ij}(t) do not vary with t, we adopt the simplified notation

p_{ij}(t) ≜ p_{ij}(s, s + t),   i, j = 0, 1, 2, . . .   (2.106)

H(t) ≜ H(s, s + t) = [p_{ij}(t)]   (2.107)

q_{ij} ≜ q_{ij}(t)   (2.108)

Q ≜ Q(t) = [q_{ij}]   (2.109)
In this case we may list in rapid order the corresponding results. First, the Chapman-Kolmogorov equations become

p_{ij}(s + t) = Σ_k p_{ik}(s) p_{kj}(t)   (2.110)

or, in matrix form,

H(s + t) = H(s) H(t)   (2.111)

The forward equation for the individual transition probabilities becomes

dp_{ij}(t)/dt = q_{jj} p_{ij}(t) + Σ_{k≠j} q_{kj} p_{ik}(t)

and the forward and backward matrix equations become, respectively,

dH(t)/dt = H(t) Q   (2.112)

and

dH(t)/dt = Q H(t)   (2.113)

whose common solution [with H(0) = I] is

H(t) = e^{Qt}
Now for the state probabilities themselves we have the differential equation

dπ_j(t)/dt = q_{jj} π_j(t) + Σ_{k≠j} q_{kj} π_k(t)   (2.114)

or, in vector form,

dπ(t)/dt = π(t) Q
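For a small homogeneous chain these results can be checked directly. The sketch below (not from the text; the two transition rates are hypothetical) evaluates π(t) = π(0)e^{Qt} for a two-state chain by truncating the power series for the matrix exponential, and compares it against the closed-form solution of dπ/dt = πQ:

```python
import math

lam, mu = 2.0, 3.0                     # hypothetical rates 0 -> 1 and 1 -> 0
Q = [[-lam, lam], [mu, -mu]]           # infinitesimal generator

def expm2(M, terms=60):
    """e^M for a 2 x 2 matrix via I + M + M^2/2! + ... (fine for small ||M||)."""
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = [[sum(term[i][k] * M[k][j] for k in range(2)) / n
                 for j in range(2)] for i in range(2)]
        out = [[out[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return out

t = 0.7
H = expm2([[q * t for q in row] for row in Q])       # H(t) = e^{Qt}
pi0 = [1.0, 0.0]                                     # start in state 0
pi_t = [sum(pi0[i] * H[i][j] for i in range(2)) for j in range(2)]

# Closed form for this chain: pi_1(t) = (lam/(lam+mu)) (1 - e^{-(lam+mu)t}).
exact = lam / (lam + mu) * (1.0 - math.exp(-(lam + mu) * t))
assert abs(pi_t[1] - exact) < 1e-9
```

The truncated series is adequate here because ||Qt|| is small; production codes use more robust matrix-exponential algorithms.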
For an irreducible homogeneous Markov chain it can be shown that the following limits always exist and are independent of the initial state of the chain, namely,

lim_{t→∞} p_{ij}(t) = π_j

This set {π_j} will form the limiting state probability distribution. For an ergodic Markov chain we will have the further limit, which will be independent of the initial distribution, namely,

lim_{t→∞} π_j(t) = π_j

This limiting distribution is given uniquely as the solution of the following system of linear equations:

q_{jj} π_j + Σ_{k≠j} q_{kj} π_k = 0   (2.115)

or, in vector form,

π Q = 0   (2.116)

together with the conservation relation Σ_j π_j = 1.
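In practice Eqs. (2.115)-(2.116) are solved by replacing one (redundant) balance equation with the normalization. The sketch below (hypothetical rates, not from the text) does this for a three-state birth-death chain using a small Gaussian-elimination routine:

```python
lam0, lam1 = 1.0, 1.0          # hypothetical birth rates
mu1, mu2 = 2.0, 2.0            # hypothetical death rates
Q = [[-lam0, lam0, 0.0],
     [mu1, -(lam1 + mu1), lam1],
     [0.0, mu2, -mu2]]

# Two balance equations sum_i pi_i Q[i][j] = 0 (j = 0, 1), plus sum pi = 1.
A = [[Q[i][0] for i in range(3)],
     [Q[i][1] for i in range(3)],
     [1.0, 1.0, 1.0]]
b = [0.0, 0.0, 1.0]

def solve3(A, b):
    """Gaussian elimination with partial pivoting on a 3 x 3 system."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

pi = solve3(A, b)
# For a birth-death chain, balance gives pi_1 = pi_0 lam0/mu1, etc.
assert abs(pi[1] - pi[0] * lam0 / mu1) < 1e-12
assert abs(sum(pi) - 1.0) < 1e-12
```

With these rates the solution is π = (4/7, 2/7, 1/7), exhibiting the neighbor-by-neighbor balance that Section 2.5 develops for general birth-death processes.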
Indeed it is fair to say that much of the balance of this textbook depends upon additional material from the theory of stochastic processes and will be developed as needed. For the time being we choose to specialize the results we have obtained from the continuous-time Markov chains to the class of birth-death processes, which, as we have forewarned, play a major role in queueing systems analysis. This will lead us directly to the important Poisson process.
2.5. BIRTH-DEATH PROCESSES
* This is true in the one-dimensional case. Later, in Chapter 4, we consider multidimensional systems for which the states are described by discrete vectors, and then each state has two neighbors in each dimension. For example, in the two-dimensional case, the state descriptor is a couplet (k_1, k_2), denoted by k_1,k_2, whose four neighbors are k_1−1,k_2; k_1+1,k_2; k_1,k_2−1; and k_1,k_2+1.
For the continuous-time birth-death process, transitions are again permitted between neighboring states only, so that the infinitesimal rates are

q_{k,k+1} = λ_k,   q_{k,k−1} = μ_k   (2.117)

and, by conservation of probability [Eq. (2.96)],

q_{kk} = −(λ_k + μ_k)   (2.118)
Thus our infinitesimal generator for the general homogeneous birth-death process takes the form

Q = | −λ_0    λ_0                                          |
    | μ_1     −(λ_1+μ_1)   λ_1                             |
    |         μ_2          −(λ_2+μ_2)   λ_2                |
    |                      μ_3          −(λ_3+μ_3)   λ_3   |
    |                                   . . .              |
Note that except for the main, upper, and lower diagonals, all terms are zero. To be more explicit, the assumptions we need for the birth-death process are that it is a homogeneous Markov chain X(t) on the states 0, 1, 2, . . . , that births and deaths are independent (this follows directly from the Markov property), and

B1: P[exactly 1 birth in (t, t + Δt) | k in population] = λ_k Δt + o(Δt)

D1: P[exactly 1 death in (t, t + Δt) | k in population] = μ_k Δt + o(Δt)

B2: P[exactly 0 births in (t, t + Δt) | k in population] = 1 − λ_k Δt + o(Δt)

D2: P[exactly 0 deaths in (t, t + Δt) | k in population] = 1 − μ_k Δt + o(Δt)
From these assumptions it follows that multiple events in an interval of length Δt occur with probability o(Δt). Our interest is in the probability that the population size is k at time t, namely

P_k(t) ≜ P[X(t) = k]   (2.119)
This calculation could be carried out directly by using our result in Eq. (2.114) for π_j(t) and our specific values for q_{ij}. However, since the derivation of these equations for the birth-death process is so straightforward and follows from first principles, we choose not to use the heavy machinery we developed in the previous section, which tends to camouflage the simplicity of the basic approach, but rather to rederive them below. The reader is encouraged to identify the parallel steps in this development and compare them to the more general steps taken earlier. Note in terms of our previous definition that P_k(t) = π_k(t). Moreover, we are "suppressing" the initial conditions temporarily, and will introduce them only when required.
We begin by expressing the Chapman-Kolmogorov dynamics, which are quite trivial in this case. In particular, we focus on the possible motions of our particle (that is, the number of members in our population) during an interval (t, t + Δt). We will find ourselves in state E_k at time t + Δt if one of the three following (mutually exclusive and exhaustive) eventualities occurred:

1. that we had k in the population at time t and no state changes occurred;
2. that we had k − 1 in the population at time t and we had a birth during the interval (t, t + Δt);
3. that we had k + 1 members in the population at time t and we had one death during the interval (t, t + Δt).
These three cases are portrayed in Figure 2.8. The probability for the first of these possibilities is merely the probability P_k(t) that we were in state E_k at time t times the probability p_{k,k}(Δt) that we moved from state E_k to state E_k (i.e., had neither a birth nor a death) during the next Δt seconds; this is represented by the first term on the right-hand side of Eq. (2.120) below. The second and third terms on the right-hand side of that equation correspond, respectively, to the second and third cases listed above. We need not concern ourselves specifically with transitions from states other than nearest neighbors to state E_k since we have assumed that such transitions in an interval of length Δt occur with probability o(Δt).
* We use X(t) here to denote the number in system at time t to be consistent with the use of X(t) for our general stochastic process. Certainly we could have used N(t) as defined earlier; we use N(t) outside of this chapter.
P_k(t + Δt) = P_k(t) p_{k,k}(Δt) + P_{k−1}(t) p_{k−1,k}(Δt)
              + P_{k+1}(t) p_{k+1,k}(Δt) + o(Δt),   k ≥ 1   (2.120)
We may add the three probabilities above since these events are clearly mutually exclusive. Of course, Eq. (2.120) only makes sense in the case k ≥ 1, since clearly we could not have had −1 members in the population. For the case k = 0 we need the special boundary equation given by

P_0(t + Δt) = P_0(t) p_{0,0}(Δt) + P_1(t) p_{1,0}(Δt) + o(Δt),   k = 0   (2.121)
in these equations. Carrying out this operation, our equations convert to

P_k(t + Δt) = P_k(t)[1 − λ_k Δt − μ_k Δt] + P_{k−1}(t) λ_{k−1} Δt
              + P_{k+1}(t) μ_{k+1} Δt + o(Δt),   k ≥ 1   (2.123)

P_0(t + Δt) = P_0(t)[1 − λ_0 Δt] + P_1(t) μ_1 Δt + o(Δt),   k = 0   (2.124)
If we now subtract P_k(t) from both sides of each equation and divide by Δt, we have the following:

[P_k(t + Δt) − P_k(t)]/Δt = −(λ_k + μ_k) P_k(t) + λ_{k−1} P_{k−1}(t)
                            + μ_{k+1} P_{k+1}(t) + o(Δt)/Δt,   k ≥ 1   (2.125)

[P_0(t + Δt) − P_0(t)]/Δt = −λ_0 P_0(t) + μ_1 P_1(t) + o(Δt)/Δt,   k = 0   (2.126)

Taking the limit as Δt → 0, we obtain

dP_k(t)/dt = −(λ_k + μ_k) P_k(t) + λ_{k−1} P_{k−1}(t) + μ_{k+1} P_{k+1}(t),   k ≥ 1
                                                                                 (2.127)
dP_0(t)/dt = −λ_0 P_0(t) + μ_1 P_1(t),   k = 0
The set of equations given by (2.127) is clearly a set of differential-difference equations and represents the dynamics of our probability system; we recognize them as Eq. (2.114), and their solution will give the behavior of P_k(t). It remains for us to solve them. (Note that this set was obtained from first principles rather than from the general machinery of the previous section.) There is an extremely useful way to interpret these equations in terms of the flow of probability into and out of each state. This
notion is crucial and provides for us a simple intuitive means for writing down the equations of motion for the probabilities P_k(t). Specifically, if we focus upon state E_k we observe that the rate at which probability "flows" into this state at time t is given by

Flow rate into E_k = λ_{k−1} P_{k−1}(t) + μ_{k+1} P_{k+1}(t)

whereas the rate at which probability flows out of this state is

Flow rate out of E_k = (λ_k + μ_k) P_k(t)

Clearly the difference between these two is the effective probability flow rate into this state, that is,

dP_k(t)/dt = λ_{k−1} P_{k−1}(t) + μ_{k+1} P_{k+1}(t) − (λ_k + μ_k) P_k(t)   (2.128)
But this is exactly Eq. (2.127)! Of course, we have not attended to the details for the boundary state E_0, but it is easy to see that the rate argument just given leads to the correct equation for k = 0. Observe that each term in Eq. (2.128) is of the form: probability of being in a particular state at time t multiplied by the infinitesimal rate of leaving that state. It is clear that what we have done is to draw an imaginary boundary surrounding state E_k and have calculated the probability flow rates crossing that boundary, where we place opposite signs on flows entering as opposed to leaving; this total computation is then set equal to the time derivative of the probability of being in that state.
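The flow equations themselves are straightforward to integrate numerically. The sketch below (hypothetical constant rates, truncated to a finite number of states; not from the text) applies a simple Euler step to Eq. (2.127)/(2.128) and confirms that the flow bookkeeping conserves total probability:

```python
lam, mu, N = 1.0, 1.5, 10     # hypothetical rates; truncate at N states

def flow_derivative(P):
    """dP_k/dt from the flow-rate argument of Eq. (2.128)."""
    dP = [0.0] * N
    for k in range(N):
        rate_in = (lam * P[k - 1] if k > 0 else 0.0) \
                  + (mu * P[k + 1] if k < N - 1 else 0.0)
        rate_out = ((lam if k < N - 1 else 0.0)
                    + (mu if k > 0 else 0.0)) * P[k]
        dP[k] = rate_in - rate_out
    return dP

P = [1.0] + [0.0] * (N - 1)   # start with an empty system
h = 0.001
for _ in range(5000):          # Euler integration out to t = 5
    dP = flow_derivative(P)
    P = [p + h * d for p, d in zip(P, dP)]

assert abs(sum(P) - 1.0) < 1e-9   # probability is conserved
assert all(p >= 0.0 for p in P)
```

Because every flow term appears once with a plus sign and once with a minus sign across the state boundaries, the derivatives sum to zero exactly, which is the numerical counterpart of the conservation law Eq. (2.122).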
Actually there is no reason for selecting a single state as the "system" for which the flow equations must hold. In fact one may enclose any number of states within a contour and then write a flow equation for all flow crossing that boundary. The only danger in dealing with such a conglomerate set is that one may write down a dependent set of equations rather than an independent set; on the other hand, if one systematically encloses each state singly and writes down a conservation law for each, then one is guaranteed to have an independent set of equations for the system, with the qualification that the conservation of probability given by Eq. (2.122) must also be applied.* Thus we have a simple inspection technique for arriving at the equations of motion for the birth-death process. As we shall see later, this approach is perfectly suitable for other Markov processes (including semi-Markov processes) and will be used extensively; these observations also lead us to the notion of global and local balance equations (see Chapter 4).
At this point it is important for the reader to recognize and accept the fact that the birth-death process described above is capable of providing the model for a great variety of queueing systems. As a first and most important special case, consider the pure birth process, in which we set λ_k = λ for all k and μ_k = 0; Eq. (2.127) then reduces to
* When the number of states is finite (say, K states) then any set of K − 1 single-node state equations will be independent. The additional equation needed is Eq. (2.122).
dP_k(t)/dt = −λ P_k(t) + λ P_{k−1}(t),   k ≥ 1
                                                 (2.129)
dP_0(t)/dt = −λ P_0(t),   k = 0
For simplicity we assume that the system begins at time 0 with 0 members, that is,

P_k(0) = { 1,   k = 0
         { 0,   k ≠ 0    (2.130)

Solving Eqs. (2.129) subject to this initial condition, we obtain the celebrated Poisson distribution

P_k(t) = (λt)^k e^{−λt} / k!,   k ≥ 0,  t ≥ 0   (2.131)
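Equation (2.131) can be verified against the differential equations it solves. The sketch below (not from the text; the rate and step sizes are arbitrary choices) Euler-integrates the pure-birth equations (2.129) from the initial condition (2.130) and compares the result with the Poisson formula:

```python
import math

lam, K = 0.8, 20              # hypothetical rate; truncation level K
P = [1.0] + [0.0] * K         # initial condition (2.130)
h, steps = 0.0005, 4000       # Euler step; integrate out to t = 2
for _ in range(steps):
    dP = [-lam * P[0]] + [lam * (P[k - 1] - P[k]) for k in range(1, K + 1)]
    P = [p + h * d for p, d in zip(P, dP)]

t = h * steps
for k in range(5):
    # Compare against P_k(t) = (lam t)^k e^{-lam t} / k! of Eq. (2.131).
    exact = (lam * t) ** k * math.exp(-lam * t) / math.factorial(k)
    assert abs(P[k] - exact) < 0.01
```

The agreement is limited only by the first-order Euler discretization; a smaller step h tightens it.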
The mean number of arrivals in the interval (0, t), denoted E[K], is

E[K] = Σ_{k=0}^∞ k P_k(t)
     = e^{−λt} Σ_{k=0}^∞ k (λt)^k / k!
     = e^{−λt} λt Σ_{k=1}^∞ (λt)^{k−1} / (k − 1)!
     = e^{−λt} λt Σ_{k=0}^∞ (λt)^k / k!

and so

E[K] = λt   (2.132)
Similarly, the second factorial moment is

E[K(K − 1)] = Σ_{k=0}^∞ k(k − 1) P_k(t)
            = e^{−λt} Σ_{k=0}^∞ k(k − 1) (λt)^k / k!
            = e^{−λt} (λt)² Σ_{k=2}^∞ (λt)^{k−2} / (k − 2)!
            = (λt)²
Now forming the variance in terms of this last quantity and in terms of E[K], we have

σ_K² = E[K(K − 1)] + E[K] − (E[K])² = (λt)² + λt − (λt)²

and so

σ_K² = λt   (2.133)

Thus we see that the mean and variance of the Poisson process are identical and each equal to λt.
In Figure 2.10 we plot the family of curves P_k(t) as a function of k and as a function of λt (a convenient normalizing form for t).
Recollect from Eq. (II.27) in Appendix II that the z-transform (probability generating function) for the probability mass distribution of a discrete random variable K, where

g_k = P[K = k]

is given by

G(z) = E[z^K] = Σ_k z^k g_k

for |z| ≤ 1. Applying this to the Poisson distribution derived above we have

E[z^K] = Σ_{k=0}^∞ z^k P_k(t) = Σ_{k=0}^∞ e^{−λt} (λtz)^k / k! = e^{−λt + λtz}
2.5.
63
Ik II)
~I
;/
Figure 2.10 The Poisson distribution .
and so

G(z) = E[z^K] = e^{λt(z−1)}   (2.134)

The mean may be recovered by differentiating the transform:

E[K] = (∂/∂z) E[z^K] |_{z=1} = λt e^{λt(z−1)} |_{z=1} = λt

Also

σ_K² = G^{(2)}(1) + G^{(1)}(1) − [G^{(1)}(1)]² = (λt)² + λt − (λt)² = λt
64
I "k.'
We have introduced the Poisson process here as a pure birth process and we have found an expression for P_k(t), the probability distribution for the number of arrivals during a given interval of length t. Now let us consider the joint distribution of the arrival instants when it is known beforehand that exactly k arrivals have occurred during that interval. We break the interval (0, t) into 2k + 1 intervals as shown in Figure 2.11. We are interested in A_k, which is defined to be the event that exactly one arrival occurs in each of the intervals {β_i} and that no arrival occurs in any of the intervals {α_i}. We wish to calculate the probability that the event A_k occurs given that exactly k arrivals have occurred in the interval (0, t); from the definition of conditional probability we thus have

P[A_k | exactly k arrivals in (0, t)]
    = P[A_k and exactly k arrivals in (0, t)] / P[exactly k arrivals in (0, t)]   (2.135)
When we consider Poisson arrivals in nonoverlapping intervals, we are considering independent events whose joint probability may be calculated as the product of the individual probabilities (i.e., the Poisson process has independent increments). We note from Eq. (2.131), therefore, that

P[one arrival in an interval of length β_i] = λβ_i e^{−λβ_i}

and

P[no arrival in an interval of length α_i] = e^{−λα_i}

Using this in Eq. (2.135) we have directly

P[A_k | exactly k arrivals in (0, t)] = [∏_{i=1}^k λβ_i e^{−λβ_i}] [∏_i e^{−λα_i}] / [(λt)^k e^{−λt} / k!]
                                      = k! ∏_{i=1}^k (β_i / t)   (2.136)
On the other hand, let us consider a new process that selects k points in the interval (0, t) independently, where each point is uniformly distributed over this interval. Let us now make the same calculation that we did for the Poisson process, namely,

P[A_k | k points in (0, t)] = k! ∏_{i=1}^k (β_i / t)   (2.137)
where the term k! comes about since we do not distinguish among the permutations of the k points among the k chosen intervals. We observe that the two conditional probabilities given in Eqs. (2.136) and (2.137) are the same and, therefore, conclude that if an interval of length t contains exactly k arrivals from a Poisson process, then the joint distribution of the instants when these arrivals occurred is the same as the distribution of k points uniformly distributed over the same interval.
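This uniformity property is easy to see in simulation. The sketch below (not from the text; the rate, interval, and sample sizes are arbitrary choices) generates Poisson arrivals on (0, 1), conditions on exactly two arrivals, and checks that the earlier instant behaves like the minimum of two uniform points, whose mean is 1/3:

```python
import random

rng = random.Random(7)
lam, T, k = 3.0, 1.0, 2
first_times = []
while len(first_times) < 4000:
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(lam)   # exponential interarrival gaps
        if t > T:
            break
        arrivals.append(t)
    if len(arrivals) == k:          # condition on exactly k arrivals in (0, T)
        first_times.append(arrivals[0])

mean_first = sum(first_times) / len(first_times)
# Min of 2 independent Uniform(0, 1) points has mean 1/3.
assert abs(mean_first - 1.0 / 3.0) < 0.02
```

The rejection step (keeping only realizations with exactly k arrivals) implements the conditioning in Eq. (2.135) directly.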
Furthermore, it is easy to show from the properties of our birth process that the Poisson process is one with independent increments; that is, defining X(s, s + t) as the number of arrivals in the interval (s, s + t), then the following is true:

P[X(s, s + t) = k] = (λt)^k e^{−λt} / k!

independent of the history of arrivals prior to s. Let us now consider the interarrival times of the Poisson process. Denote by t̃ the random variable describing the time between successive arrivals, with PDF A(t) ≜ P[t̃ ≤ t]. Clearly t̃ > t if and only if no arrivals occur in (0, t); that is, P[t̃ > t] = P_0(t), and so

A(t) = 1 − P_0(t) = 1 − e^{−λt},   t ≥ 0   (2.138)

with the corresponding pdf

a(t) = dA(t)/dt = λ e^{−λt},   t ≥ 0   (2.139)
Consider now the time until the next arrival, given that t_0 sec has already elapsed since the last arrival:

P[t̃ ≤ t + t_0 | t̃ > t_0] = P[t_0 < t̃ ≤ t + t_0] / P[t̃ > t_0]
                          = (e^{−λt_0} − e^{−λ(t_0+t)}) / e^{−λt_0}

and so

P[t̃ ≤ t + t_0 | t̃ > t_0] = 1 − e^{−λt}   (2.140)
This result shows that the distribution of remaining time until the next arrival, given that t_0 sec has elapsed since the last arrival, is identically equal to the unconditional distribution of interarrival time. The impact of this statement is that our probabilistic feeling regarding the time until a future arrival occurs is independent of how long it has been since the last arrival occurred. That is, the future of an exponentially distributed random variable is independent of its past; this is the memoryless property. Let us use Eq. (2.140) to calculate the probability that an
arrival occurs within the next Δt sec. From Eq. (2.140) we have

P[t̃ ≤ t_0 + Δt | t̃ > t_0] = 1 − e^{−λΔt}
                           = 1 − [1 − λΔt + (λΔt)²/2! − · · ·]
                           = λΔt + o(Δt)   (2.141)

Equation (2.141) tells us, given that an arrival has not yet occurred, that the probability of it occurring in the next interval of length Δt sec is λΔt + o(Δt). But this is exactly assumption B1 from the opening paragraphs of this
section. Furthermore, the probability of no arrival in the interval (t_0, t_0 + Δt) is calculated as

P[t̃ > t_0 + Δt | t̃ > t_0] = 1 − P[t̃ ≤ t_0 + Δt | t̃ > t_0]
                           = 1 − (1 − e^{−λΔt})
                           = e^{−λΔt}
                           = 1 − λΔt + (λΔt)²/2! − · · ·
                           = 1 − λΔt + o(Δt)

which is just assumption B2. Finally,

P[more than one arrival in (t_0, t_0 + Δt)]
    = 1 − P[none in (t_0, t_0 + Δt)] − P[one in (t_0, t_0 + Δt)]
    = 1 − [1 − λΔt + o(Δt)] − [λΔt + o(Δt)]
    = o(Δt)
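The memoryless property behind these calculations also shows up directly in simulation. The sketch below (not from the text; the rate, elapsed time, and sample size are arbitrary choices) conditions exponential samples on having already exceeded t_0 and checks that the residual time has the same mean 1/λ as a fresh interarrival time:

```python
import random

rng = random.Random(42)
lam, t0 = 2.0, 0.75
residuals = []
while len(residuals) < 5000:
    x = rng.expovariate(lam)
    if x > t0:                    # condition on having waited past t0
        residuals.append(x - t0)  # remaining time beyond t0

mean_residual = sum(residuals) / len(residuals)
# By Eq. (2.140) the residual is again exponential with mean 1/lam.
assert abs(mean_residual - 1.0 / lam) < 0.03
```

For any non-exponential interarrival distribution this equality of conditional and unconditional means would in general fail.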
Let us now compute the mean interarrival time:

t̄ = ∫_0^∞ t a(t) dt = ∫_0^∞ t λ e^{−λt} dt
We use a trick here to evaluate the (simple) integral by recognizing that the integrand is no more than the partial derivative of the following integral:
∫_0^∞ t λ e^{−λt} dt = −λ (∂/∂λ) ∫_0^∞ e^{−λt} dt = −λ (∂/∂λ)(1/λ) = 1/λ

and so

t̄ = 1/λ   (2.142)
Thus we have that the average interarrival time for an exponential distribution is given by 1/λ. This result is intuitively pleasing if we examine Eq. (2.141) and observe that the probability of an arrival in an interval of length Δt is given by λΔt [+ o(Δt)], and thus λ itself must be the average rate of arrivals; the average time between arrivals must then be 1/λ. In order to evaluate the variance, we first calculate the second moment for the interarrival time as follows:
E[(t̃)²] = ∫_0^∞ t² λ e^{−λt} dt = 2/λ²

from which

σ_{t̃}² = E[(t̃)²] − (t̄)² = 2/λ² − (1/λ)²

and so

σ_{t̃}² = 1/λ²   (2.143)
As usual, these two moments could more easily have been calculated by first considering the Laplace transform of the probability density function for this random variable. The notation for the Laplace transform of the interarrival pdf is A*(s). In this special case of the exponential distribution we then have the following:

A*(s) ≜ ∫_0^∞ e^{−st} a(t) dt = ∫_0^∞ e^{−st} λ e^{−λt} dt

and so

A*(s) = λ / (s + λ)   (2.144)
Equation (2.144) thus gives the Laplace transform for the exponential density function. From Appendix II we recognize that the mean of this density function is given by

t̄ = −dA*(s)/ds |_{s=0} = λ / (s + λ)² |_{s=0} = 1/λ
The second moment is also calculated in a similar fashion:

E[(t̃)²] = d²A*(s)/ds² |_{s=0} = 2λ / (s + λ)³ |_{s=0} = 2/λ²
and so

σ_{t̃}² = E[(t̃)²] − (t̄)² = 2/λ² − 1/λ² = 1/λ²

Thus we see the ease with which moments can be calculated by making use of transforms. Note also that the coefficient of variation [see Eq. (II.23)] for the exponential is

C_{t̃} = σ_{t̃} / t̄ = 1   (2.145)
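The transform route to moments can be mimicked numerically. In the sketch below (not from the text; the rate is an arbitrary choice) central finite differences of A*(s) = λ/(s + λ) at s = 0 reproduce the mean 1/λ and second moment 2/λ² of Eqs. (2.142)-(2.143):

```python
lam = 2.5  # hypothetical rate parameter

def A_star(s):
    """Laplace transform of the exponential pdf, Eq. (2.144)."""
    return lam / (s + lam)

h = 1e-4
# Mean: -dA*/ds at s = 0, via a central difference.
mean = -(A_star(h) - A_star(-h)) / (2 * h)
# Second moment: d^2 A*/ds^2 at s = 0, via a second central difference.
second = (A_star(h) - 2 * A_star(0.0) + A_star(-h)) / h**2

assert abs(mean - 1.0 / lam) < 1e-6
assert abs(second - 2.0 / lam ** 2) < 1e-4
```

This mirrors, step for step, the sign-alternating moment formula E[t̃^n] = (−1)^n d^n A*(s)/ds^n at s = 0 used in the text.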
It will be of further interest to us later in the text to be able to calculate the pdf for the time interval X required in order to collect k arrivals from a Poisson process. Let us define this random variable in terms of the random variables t̃_n, where t̃_n = time between the nth and (n − 1)th arrival (where the "zeroth" arrival is assumed to occur at time 0). Thus

X = t̃_1 + t̃_2 + · · · + t̃_k

We define f_X(x) to be the pdf for this random variable. From Appendix II we should immediately recognize that the density of X is given by the convolution of the densities on each of the t̃_n's, since they are independently distributed. Of course, this convolution operation is a bit lengthy to carry out, so let us use our further result in Appendix II, which tells us that the Laplace transform of the pdf for the sum of independent random variables is equal to the product of the Laplace transforms of the density for each. In our case each t̃_n has a common exponential distribution and therefore the Laplace transform for the pdf of X will merely be the kth power of A*(s), where A*(s) is given by Eq. (2.144); that is, defining

X*(s) ≜ ∫_0^∞ e^{−sx} f_X(x) dx

we thus have

X*(s) = [A*(s)]^k = (λ / (s + λ))^k   (2.146)
Inverting this transform (see Appendix II) we obtain

f_X(x) = λ(λx)^{k−1} e^{−λx} / (k − 1)!,   x ≥ 0   (2.147)

This family of density functions (one for each value of k) is referred to as the family of Erlang distributions. We will have considerable use for this family later when we discuss the method of stages in Chapter 4.
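The Erlang result can be checked by simulation. The sketch below (not from the text; rate, order, and sample sizes are arbitrary choices) draws X as a sum of k independent exponentials, confirms its mean k/λ, and verifies that the density of Eq. (2.147) integrates to one:

```python
import math
import random

rng = random.Random(3)
lam, k = 2.0, 4
# X = sum of k i.i.d. exponential interarrival times.
samples = [sum(rng.expovariate(lam) for _ in range(k)) for _ in range(20000)]

mean = sum(samples) / len(samples)
assert abs(mean - k / lam) < 0.04     # E[X] = k / lam

def erlang_pdf(x):
    """f_X(x) of Eq. (2.147) for the k-stage Erlang with rate lam."""
    return lam * (lam * x) ** (k - 1) / math.factorial(k - 1) * math.exp(-lam * x)

# Midpoint-rule integration of the pdf out to x = 15/lam (tail is negligible).
dx = 0.001
total = sum(erlang_pdf((i + 0.5) * dx) * dx for i in range(int(15 / lam / dx)))
assert abs(total - 1.0) < 1e-3
```

The simulated variance likewise comes out near k/λ², as the transform (2.146) predicts, since independent summands add in variance.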
So much for the Poisson arrival process and its relation to the exponential distribution. Let us now return to the birth-death equations and consider a more general pure birth process in which we permit state-dependent birth rates λ_k (for the Poisson process, we had λ_k = λ). We once again insist that the death rates μ_k = 0. From Eq. (2.127) this yields the set of equations

dP_k(t)/dt = −λ_k P_k(t) + λ_{k−1} P_{k−1}(t),   k ≥ 1
                                                         (2.148)
dP_0(t)/dt = −λ_0 P_0(t),   k = 0
Again, let us assume the initial distribution as given in Eq. (2.130), which states that (with probability one) the population begins with 0 members at time 0. Solving for P_0(t) we have

P_0(t) = e^{−λ_0 t}
The general solution* for P_k(t) may be obtained recursively from

P_k(t) = λ_{k−1} e^{−λ_k t} ∫_0^t e^{λ_k u} P_{k−1}(u) du,   k = 1, 2, . . .   (2.149)

For the first two values of k (assuming distinct birth rates) this gives explicitly

P_1(t) = λ_0 (e^{−λ_0 t} − e^{−λ_1 t}) / (λ_1 − λ_0)

P_2(t) = λ_0 λ_1 [ e^{−λ_0 t} / ((λ_1 − λ_0)(λ_2 − λ_0))
                  + e^{−λ_1 t} / ((λ_0 − λ_1)(λ_2 − λ_1))
                  + e^{−λ_2 t} / ((λ_0 − λ_2)(λ_1 − λ_2)) ]
The reverse special case is the pure death process, in which λ_k = 0 for all k and in which the population, starting with N members at time 0, dies off at a constant rate μ_k = μ. Eq. (2.127) then gives

dP_k(t)/dt = −μ P_k(t) + μ P_{k+1}(t),   0 < k < N

dP_N(t)/dt = −μ P_N(t),   k = N

dP_0(t)/dt = μ P_1(t),   k = 0

whose solution [with P_N(0) = 1] is

P_k(t) = (μt)^{N−k} e^{−μt} / (N − k)!,   0 < k ≤ N   (2.150)

and, for the boundary state,

dP_0(t)/dt = μ (μt)^{N−1} e^{−μt} / (N − 1)!,   k = 0
We are now in a position to describe the M/M/1 queue, the simplest interesting birth-death queueing system, in which both the birth and death rates are constant:

λ_k = λ,   k = 0, 1, 2, . . .
μ_k = μ,   k = 1, 2, 3, . . .

The interarrival-time and service-time distributions for this system are

A(t) = 1 − e^{−λt}
                        (2.151)
B(x) = 1 − e^{−μx}
It should be clear why A(t) is of exponential form from our earlier discussion relating the exponential interarrival distribution to the Poisson arrival process. In a similar fashion, since the death rate is constant (μ_k = μ, k = 1, 2, . . .), the same reasoning leads to the observation that the time between deaths is also exponentially distributed (in this case with parameter μ). However, deaths correspond in the queueing system to service completions, and so the service times are exponentially distributed as given by B(x). For this system the equations of motion (2.127) become
dP_k(t)/dt = −(λ + μ) P_k(t) + λ P_{k−1}(t) + μ P_{k+1}(t),   k ≥ 1
                                                                     (2.152)
dP_0(t)/dt = −λ P_0(t) + μ P_1(t),   k = 0
Many methods are available for solving this set of equations. Here, we choose to use the method of z-transforms developed in Appendix I. We have already seen one application of this method earlier in this chapter [when we defined the transform in Eq. (2.60) and applied it to the system of equations (2.55) to obtain the algebraic equation (2.61)]. Recall that the steps involved in applying the method of z-transforms to the solution of a set of difference equations may be summarized as follows:
1. Multiply the kth equation by z^k.
2. Sum the equations over all permitted values of k.
3. Using the properties of the z-transform, express the resulting sums in terms of the transform of the unknown sequence, adding and subtracting any missing terms.
4. Incorporate the boundary (k = 0) equation.
5. Solve the resulting algebraic or differential* equation. Use the conservation relationship, Eq. (2.122), to eliminate the last unknown term.†
6. Invert the solution to get an explicit solution in terms of k.
7. If step 6 cannot be carried out, then moments may be obtained by differentiating with respect to z and setting z = 1.
To this end, let

P(z, t) ≜ Σ_{k=0}^∞ P_k(t) z^k   (2.153)
Next we multiply the kth differential equation by z^k (step 1) and then sum over all permitted k (k = 1, 2, . . .) (step 2) to yield a single differential equation for the z-transform of P_k(t):

Σ_{k=1}^∞ z^k dP_k(t)/dt = −(λ + μ) Σ_{k=1}^∞ z^k P_k(t) + λ Σ_{k=1}^∞ z^k P_{k−1}(t) + μ Σ_{k=1}^∞ z^k P_{k+1}(t)

Property 14 from Table I.1 in Appendix I permits us to move the differentiation operator outside the summation sign in this last equation. This summation then appears very much like P(z, t) as defined above, except that it is missing the term for k = 0; the same is true of the first summation on the right-hand side of this last equation. In these two cases we need merely add and subtract the term P_0(t)z^0, which permits us to form the transform we are seeking. The second summation on the right-hand side is clearly λzP(z, t) since it contains an extra factor of z, but no missing terms. The last summation is missing a factor of z as well as the first two terms of this sum. We have now
* We sometimes obtain a differential equation at this stage if our original set of difference equations was, in fact, a set of differential-difference equations. When this occurs, we are effectively back to step 1 of this procedure as far as the differential variable (usually time) is concerned. We then proceed through steps 1-5 a second time using the Laplace transform for this new variable; our transform multiplier becomes e^{-st}, our sums become integrals, and our "tricks" become the properties associated with Laplace transforms (see Appendix I). Similar "returns to step 1" occur whenever a function of more than one variable is transformed; for each discrete variable we require a z-transform and, for each continuous variable, we require a Laplace transform.

† When additional unknowns remain, we must appeal to the analyticity of the transform and observe that in its region of analyticity the transform must have a zero to cancel each pole (singularity) if the transform is to remain bounded. These additional conditions completely remove any remaining unknowns. This procedure will often be used and explained in the next few chapters.
    ∂/∂t [P(z, t) - P_0(t)] = -(λ + μ)[P(z, t) - P_0(t)] + λzP(z, t) + (μ/z)[P(z, t) - P_0(t) - zP_1(t)]            (2.154)
The equation for k = 0 has so far not been used and we now apply it as described in step 4 (k = 0), which permits us to eliminate certain terms in Eq. (2.154):

    ∂P(z, t)/∂t = -(λ + μ)P(z, t) + μP_0(t) + λzP(z, t) + (μ/z)[P(z, t) - P_0(t)]

which simplifies to

    ∂P(z, t)/∂t = [(1 - z)/z][(μ - λz)P(z, t) - μP_0(t)]            (2.155)
In order to remove the time dependence we define the Laplace transform*

    P*(z, s) ≜ ∫_{0+}^∞ P(z, t) e^{-st} dt            (2.156)

Returning to step 1, applying this transform to Eq. (2.155), and taking advantage of property 11 in Table I.3 in Appendix I, we obtain

    sP*(z, s) - P(z, 0+) = [(1 - z)/z][(μ - λz)P*(z, s) - μP_0*(s)]            (2.157)

where

    P_0*(s) ≜ ∫_{0+}^∞ e^{-st} P_0(t) dt            (2.158)

Solving for P*(z, s), we obtain

    P*(z, s) = [zP(z, 0+) - μ(1 - z)P_0*(s)] / [sz - (1 - z)(μ - λz)]            (2.159)

* For convenience we take the lower limit of integration to be 0+ rather than our usual convention of using 0- with the nonnegative random variables we often deal with. As a consequence, we must include the initial condition P(z, 0+) in Eq. (2.157).
Let us carry this argument just a bit further. From the definition in Eq. (2.153) we see that

    P(z, 0+) = Σ_{k=0}^∞ P_k(0+) z^k            (2.160)

Of course, P_k(0+) is just our initial condition; whereas earlier we took the simple point of view that the system was empty at time 0 [that is, P_0(0+) = 1 and all other terms P_k(0+) = 0 for k ≠ 0], we now generalize and permit i customers to be present at time 0, that is,

    P_k(0+) = 1        k = i
                                    (2.161)
    P_k(0+) = 0        k ≠ i

When i = 0 we have our original initial condition. Substituting Eq. (2.161) into Eq. (2.160) we see immediately that

    P(z, 0+) = z^i            (2.162)
We are almost finished with step 5 except for the fact that the unknown function P_0*(s) appears in our equation. The second footnote to step 5 tells us how to proceed. From here on the analysis becomes a bit complex and it is beyond our desire at this point to continue the calculation; instead we relegate the excruciating details to the exercises below (see Exercise 2.20). It suffices to say that P_0*(s) is determined through the denominator roots of Eq. (2.162), which then leaves us with an explicit expression for our double transform. We are now at step 6 and must attempt to invert on both of the transform variables; the exercises require the reader to show that the result of this inversion yields the final solution for our transient analysis, namely,

    P_k(t) = e^{-(λ+μ)t} [ ρ^{(k-i)/2} I_{k-i}(at) + ρ^{(k-i-1)/2} I_{k+i+1}(at)
                           + (1 - ρ) ρ^k Σ_{j=k+i+2}^∞ ρ^{-j/2} I_j(at) ]            (2.163)

where

    a ≜ 2μρ^{1/2} = 2(λμ)^{1/2}            (2.164)

    ρ ≜ λ/μ            (2.165)

and

    I_k(x) ≜ Σ_{m=0}^∞ (x/2)^{k+2m} / ((k + m)! m!)            (2.166)
EXERCISES
2.1. Consider K independent sources of customers where the interarrival time between customers for each source is exponentially distributed with parameter λ_k (i.e., each source is a Poisson process). Now consider the arrival stream formed by merging the input from each of the K sources defined above. Prove that this merged stream is also Poisson with parameter λ = λ_1 + λ_2 + ⋯ + λ_K.
2.2.
Referring back to the previous problem, consid.er thi.s mer ged Poi sson
stream and now assume that we wish to break It up Into several branches. Let Pi be the probability that a customer from .the mer ged strea m
is assigned to the substream i, If the overall rate IS A cu stomers per
second, and if the substream probabilities Pi are ch osen for e~ch
customer independently, then show that each of the se substreams IS a
Poi sson process with rate APi'
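Both of these exercises are easy to explore by simulation. The sketch below is our own Python code (all names, rates, and the run length are illustrative, not from the text); it merges three Poisson streams and then splits the result probabilistically, checking the empirical rates:

```python
import random

# Merging independent Poisson streams gives a Poisson stream whose rate is
# the sum of the rates (Exercise 2.1); random splitting with probabilities
# p_i yields Poisson substreams of rate lambda * p_i (Exercise 2.2).
random.seed(1)

def poisson_arrivals(rate, horizon):
    """Arrival epochs of a Poisson process of the given rate on [0, horizon]."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)   # exponential interarrival times
        if t > horizon:
            return times
        times.append(t)

horizon = 10000.0
rates = [0.3, 0.5, 1.2]                 # three independent sources
merged = sorted(t for r in rates for t in poisson_arrivals(r, horizon))
merged_rate = len(merged) / horizon     # should be close to sum(rates) = 2.0

# Split the merged stream: each arrival joins substream 0 with prob 0.25.
probs = [0.25, 0.75]
streams = [[], []]
for t in merged:
    streams[0 if random.random() < probs[0] else 1].append(t)
split_rates = [len(s) / horizon for s in streams]   # ~0.5 and ~1.5
```

Under the stated seed and horizon the empirical rates land within a few percent of the predicted values.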
2.3.
where I_k(x) is the modified Bessel function of the first kind of order k. This last expression is most disheartening. What it has to say is that an appropriate model for the simplest interesting queueing system (discussed further in the next chapter) leads to an ugly expression for the time-dependent behavior of its state probabilities. As a consequence, we can only hope for greater complexity and obscurity in attempting to find time-dependent behavior of more general queueing systems.
More will be said about time-dependent results later in the text. Our main
purpose now is to focus upon the equilibrium behavior of queueing systems
rather than upon their transient behavior (which is far more difficult). In the
next chapter the equilibrium behavior for birth-death queueing systems will
be studied and in Chapter 4 more general Markovian queues in equilibrium
will be considered. Only when we reach Chapter 5, Chapter 8, and then
Chapter 2 (Volume II) will the time-dependent behavior be considered again .
Let us now proceed to the simplest equilibrium behavior.
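Ugly though it is, Eq. (2.163) is perfectly computable. The following Python sketch is our own construction (the truncation levels and parameter values are illustrative); it evaluates the Bessel series of Eq. (2.166) directly and checks the result against a crude numerical integration of the differential equations (2.152):

```python
import math

# Transient M/M/1 probabilities via Eq. (2.163), with I_k(x) computed from
# the series of Eq. (2.166); checked against Euler integration of (2.152).
def bessel_i(k, x, terms=120):
    k = abs(k)                          # I_{-k}(x) = I_k(x) for integer k
    term = (x / 2.0) ** k / math.factorial(k)
    total = term
    for m in range(1, terms):
        term *= (x / 2.0) ** 2 / (m * (k + m))
        total += term
    return total

def p_k_transient(k, t, lam, mu, i=0, jmax=120):
    """Eq. (2.163) with i customers initially present (series truncated)."""
    rho = lam / mu
    a = 2.0 * mu * math.sqrt(rho)       # Eq. (2.164)
    tail = sum(rho ** (-j / 2.0) * bessel_i(j, a * t)
               for j in range(k + i + 2, jmax))
    return math.exp(-(lam + mu) * t) * (
        rho ** ((k - i) / 2.0) * bessel_i(k - i, a * t)
        + rho ** ((k - i - 1) / 2.0) * bessel_i(k + i + 1, a * t)
        + (1.0 - rho) * rho ** k * tail)

def p_k_numeric(t, lam, mu, K=80, steps=20000):
    """Euler integration of Eq. (2.152) on a truncated state space."""
    P = [1.0] + [0.0] * K               # system empty at t = 0 (i = 0)
    dt = t / steps
    for _ in range(steps):
        dP = [0.0] * (K + 1)
        dP[0] = -lam * P[0] + mu * P[1]
        for k in range(1, K):
            dP[k] = -(lam + mu) * P[k] + lam * P[k - 1] + mu * P[k + 1]
        P = [p + d * dt for p, d in zip(P, dP)]
    return P
```

With λ = 0.5 and μ = 1 the two computations agree to a few parts in a thousand at t = 5, and the transient probabilities sum to unity.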
⋯ = π_3? (Give ⋯)
(a)
(b)
(c)

2.7. Consider a Markov chain with states E_0, E_1, E_2, . . . and with transition probabilities

    p_ij = Σ_{n=0}^{i} (i choose n) p^n q^{i-n} e^{-λ} λ^{j-n} / (j - n)!

(c) π_i = equilibrium probability of E_i
(d) Recursively (i.e., repeatedly) apply the result in (c) to itself and show that the nth recursion gives

    P(z) ≜ Σ_i π_i z^i = ⋯ [1 + p(z - 1)] ⋯

(e)

2.8.
2.9. Consider a pure birth process with constant birth rate λ. Let us consider an interval of length T, which we divide up into m segments each of length T/m. Define Δt ≜ T/m.
(a) For Δt small, find the probability that a single arrival occurs in each of exactly k of the m intervals and that no arrivals occur in the remaining m - k intervals.
(b) Consider the limit as Δt → 0, that is, as m → ∞ for fixed T, and evaluate the probability P_k(T) that exactly k arrivals occur in the interval of length T.
2.10. ⋯

    z e^{-λt} / (1 - z + z e^{-λt})

with cases k = 0, k = 1, k ≠ 0, k ≠ 1.
2.12.
(a)
(b) Let

    P(z, t) = Σ_{k=0}^∞ P_k(t) z^k

and find the partial differential equation that P(z, t) must satisfy.
(c) Show that the solution to this equation is

    P(z, t) = exp[(λ/μ)(1 - e^{-μt})(z - 1)]

(d)
2.13. Consider a system in which the birth rate decreases and the death rate increases as the number in the system k increases, that is,

    λ_k = (K - k)λ        k ≤ K
    λ_k = 0               k > K

(a)
(c) P_k(t) ⋯ P_0(t) ⋯
(d)
(e)

2.15. ⋯        k = 1, 2, . . .

    P(z, t) = Σ_{k=0}^∞ P_k(t) z^k

(c) What is the value of P(1, t)? Give a verbal interpretation for the expression.
(d)
(e)
2.16. Define the transform

    P_k*(s) ≜ ∫_0^∞ P_k(t) e^{-st} dt

For our initial condition we will assume P_0(t) = 1 for t = 0. Transform Eq. (2.148) to obtain a set of linear difference equations in {P_k*(s)}.
(a) Show that the solution to the set of equations is

    P_k*(s) = Π_{i=0}^{k-1} λ_i / Π_{i=0}^{k} (s + λ_i)

(b) ⋯ λ_i = λ (i = 0, 1, 2, . . .).
2.17.

2.18.

2.19. ⋯ = μe^{-μt}
2.20. In this problem we wish to proceed from Eq. (2.162) to the transient solution in Eq. (2.163). Since P*(z, s) must converge in the region |z| ≤ 1 for Re(s) > 0, then, in this region, the zeros of the denominator in Eq. (2.162) must also be zeros of the numerator.
(a) Find those two values of z that give the denominator zeros, and denote them by α_1(s), α_2(s), where |α_2(s)| < |α_1(s)|.
(b) Using Rouché's theorem (see Appendix I) show that the denominator of P*(z, s) has a single zero within the unit disk |z| ≤ 1.
(c) Requiring that the numerator of P*(z, s) vanish at z = α_2(s) from our earlier considerations, find an explicit expression for P_0*(s).
(d) Write P*(z, s) in terms of α_1(s) = α_1 and α_2(s) = α_2. Then show that this equation may be reduced to

    P*(z, s) = ⋯ (⋯ - α_2) / [λα_1(1 - z/α_1)] ⋯

(e) Using the fact that |α_2| < 1 and that α_1α_2 = μ/λ, show that the inversion on z yields the following expression for P_k*(s), which is the Laplace transform for our transient probabilities P_k(t):

    ⋯ [s + (s² - 4λμ)^{1/2}]^{-k} ⋯

(f) ⋯ where ρ and a are as defined in Eqs. (2.164), (2.165) and where I_k(x) is the modified Bessel function of the first kind of order k as defined in Eq. (2.166). Using these facts and the simple relations among Bessel functions, namely,
    X(t) = Σ_{i=1}^{⋯} X_i

    e^{λt[φ_x(u) - 1]}
PART II

ELEMENTARY QUEUEING THEORY
Elementary here means that all the systems we consider are pure Markovian and, therefore, our state description is convenient and manageable. In Part I we developed the time-dependent equations for the behavior of birth-death processes; here in Chapter 3 we address the equilibrium solution for these systems. The key equation in this chapter is Eq. (3.11), and the balance of the material is the simple application of that formula. It, in fact, is no more than the solution to the equation π = πP derived in Chapter 2. The key tool used here is again that which we find throughout the text, namely, the calculation of flow rates across the boundaries of a closed system. In the case of equilibrium we merely ask that the rate of flow into a system be equal to the rate of flow out of it. The application of these basic results is more than just an exercise, for it is here that we first obtain some equations of use in engineering and designing queueing systems. The classical M/M/1 queue is studied and some of its important performance measures are evaluated. More complex models involving finite storage, multiple servers, finite customer population, and the like, are developed in the balance of this chapter. In Chapter 4 we leave the birth-death systems and allow more general Markovian queues, once again to be studied in equilibrium. We find that the techniques here are similar to our earlier ones, but that no general solution such as Eq. (3.11) is available; each system is a case unto itself, and so we are rapidly led into the solution of difference equations, which forces us to look carefully at the method of z-transforms for these solutions. The ingenious method of stages introduced by Erlang is considered here and its generality discussed. At the end of the chapter we introduce (for later use in Volume II) networks of Markovian queues, in which we take exquisite advantage of the memoryless properties that Markovian queues provide even in a network environment. At this point, however, we have essentially exhausted the use of the memoryless distribution and we must depart from that crutch in the following parts.
3
Birth-Death Queueing Systems
in Equilibrium
† In addition to these equations, one requires the conservation relation given in Eq. (2.122) and a set of initial conditions {P_k(0)}.
3.1. GENERAL EQUILIBRIUM SOLUTION

    p_k ≜ lim_{t→∞} P_k(t)            (3.1)

    0 = -(λ_k + μ_k)p_k + λ_{k-1}p_{k-1} + μ_{k+1}p_{k+1}        k ≥ 1            (3.2)

    0 = -λ_0 p_0 + μ_1 p_1        k = 0            (3.3)

The annoying task of providing a separate equation for k = 0 may be overcome by agreeing once and for all that the following birth and death coefficients are identically zero:

    λ_{-1} = λ_{-2} = ⋯ = 0
    μ_0 = μ_{-1} = μ_{-2} = ⋯ = 0

With this convention we may write, for all k,

    0 = -(λ_k + μ_k)p_k + λ_{k-1}p_{k-1} + μ_{k+1}p_{k+1}        k = 0, 1, 2, . . .            (3.4)

along with the conservation relation

    Σ_{k=0}^∞ p_k = 1            (3.5)
Recall from the previous chapter that the limit given in Eq. (3.1) is independent of the initial conditions. Just as we used the state-transition-rate diagram as an inspection technique for writing down the equations of motion in Chapter 2, so may we use the same concept in writing down the equilibrium equations [Eqs. (3.2) and (3.3)] directly from that diagram. In this equilibrium case it is clear that flow must be conserved in the sense that the input flow must equal the output flow from a given state. For example, if we look at Figure 2.9 once again and concentrate on state E_k in equilibrium, we observe that the flow into E_k is

    λ_{k-1}p_{k-1} + μ_{k+1}p_{k+1}

and the flow out of E_k is

    (λ_k + μ_k)p_k
Equating these two flows yields Eq. (3.4) directly. Similarly, for state E_0 we have

    λ_0 p_0 = μ_1 p_1

and so

    p_1 = (λ_0/μ_1) p_0            (3.8)

For state E_1 we have

    0 = -(λ_1 + μ_1)p_1 + λ_0 p_0 + μ_2 p_2

which, using Eq. (3.8), gives

    p_2 = (λ_0 λ_1 / μ_1 μ_2) p_0            (3.9)
If we examine Eqs. (3.8) and (3.9) we may justifiably guess that the general solution to Eq. (3.4) must be

    p_k = p_0 (λ_0 λ_1 ⋯ λ_{k-1}) / (μ_1 μ_2 ⋯ μ_k)            (3.10)

To validate this assertion we need merely use the inductive argument and apply Eq. (3.10) to Eq. (3.4), solving for p_{k+1}. Carrying out this operation we do, in fact, find that (3.10) is the solution to the general birth-death process in this steady-state or limiting case. We have thus expressed all equilibrium probabilities p_k in terms of a single unknown constant p_0:

    p_k = p_0 Π_{i=0}^{k-1} (λ_i / μ_{i+1})        k = 0, 1, 2, . . .            (3.11)

To find p_0 we use the conservation relation, Eq. (3.5), which gives

    p_0 = 1 / [1 + Σ_{k=1}^∞ Π_{i=0}^{k-1} (λ_i / μ_{i+1})]            (3.12)
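Equations (3.11) and (3.12) translate directly into a small computation. The Python sketch below is our own code (the truncation level K is our device, valid whenever the tail probabilities beyond K are negligible); it accepts arbitrary birth and death coefficients given as functions of k:

```python
# Equilibrium probabilities of a birth-death chain via Eqs. (3.11)-(3.12),
# computed on a truncated state space {0, 1, ..., K}.
def birth_death_equilibrium(lam, mu, K):
    products = [1.0]                        # k = 0 term of Eq. (3.11)
    for k in range(1, K + 1):
        # running product of lambda_i / mu_{i+1}, i = 0 .. k-1
        products.append(products[-1] * lam(k - 1) / mu(k))
    p0 = 1.0 / sum(products)                # Eq. (3.12), truncated at K
    return [p0 * prod for prod in products]

# Example: constant coefficients (the M/M/1 system of the next section).
p = birth_death_equilibrium(lambda k: 0.5, lambda k: 1.0, K=200)
```

With constant coefficients λ_k = λ and μ_k = μ it reproduces the geometric distribution (1 - ρ)ρ^k derived in the next section.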
    ⋯ = p_0 = ⋯            (3.16)

    g_{-1} = 0

and so the constant in Eq. (3.16) must be 0. Setting g_k equal to 0, we immediately obtain from Eq. (3.14)

    p_{k+1} = (λ_k / μ_{k+1}) p_k            (3.17)

    p_{k+1} = ⋯            (3.19)
* It is easy to construct counterexamples to this case, and so we require the precise arguments which follow.
All states will be ergodic if and only if

    Ergodic:            S_1 < ∞,  S_2 = ∞

On the other hand, all states will be recurrent null if and only if

    Recurrent null:     S_1 = ∞,  S_2 = ∞

and all states will be transient if and only if

    Transient:          S_1 = ∞,  S_2 < ∞

It is the ergodic case that gives rise to the equilibrium probabilities {p_k} and that is of most interest to our studies. We note that the condition for ergodicity is met whenever the sequence {λ_k/μ_k} remains below unity from some k onwards, that is, if there exists some k_0 such that for all k ≥ k_0 we have

    λ_k / μ_k < 1            (3.20)

We will find this to be true in most of the queueing systems we study.
We are now ready to apply our general solution as given in Eqs. (3.11) and (3.12) to some very important special cases. Before we launch headlong into that discussion, let us put at ease those readers who feel that the birth-death constraints of permitting only nearest-neighbor transitions are too confining. It is true that the solution given in Eqs. (3.11) and (3.12) applies only to nearest-neighbor birth-death processes. However, rest assured that the equilibrium methods we have described can be extended to systems more general than nearest-neighbor systems; these generalizations are considered in Chapter 4.
3.2. M/M/1: THE CLASSICAL QUEUEING SYSTEM

    λ_k = λ        k = 0, 1, 2, . . .
    μ_k = μ        k = 1, 2, 3, . . .

That is, we set all birth coefficients equal to a constant λ and all death coefficients equal to a constant μ.*

* In this case, the average interarrival time is t̄ = 1/λ and the average service time is x̄ = 1/μ; this follows since t̃ and x̃ are both exponentially distributed.
Figure 3.1 State-transition-rate diagram for M/M/1.
    p_k = p_0 Π_{i=0}^{k-1} (λ/μ)

or

    p_k = p_0 (λ/μ)^k            (3.21)

The result is immediate. The conditions for our system to be ergodic (and, therefore, to have an equilibrium solution p_k > 0) are that S_1 < ∞ and S_2 = ∞; in this case the first condition becomes

    Σ_{k=0}^∞ (λ/μ)^k < ∞

The series on the left-hand side of the inequality will converge if and only if λ/μ < 1. The second condition for ergodicity becomes

    Σ_{k=0}^∞ (1/λ)(μ/λ)^k = ∞

This last condition will be satisfied if λ/μ ≤ 1; thus the necessary and sufficient condition for ergodicity in the M/M/1 queue is simply λ < μ. In order to solve for p_0 we use Eq. (3.12) [or Eq. (3.5) as suits the reader] and obtain
Since λ < μ, we have

    p_0 = 1 / Σ_{k=0}^∞ (λ/μ)^k = 1 / (1/(1 - λ/μ))

Thus

    p_0 = 1 - λ/μ            (3.22)

From Eq. (2.29) we have ρ = λ/μ. From our stability conditions, we therefore require that 0 ≤ ρ < 1; note that this insures that p_0 > 0. From Eq. (3.21) we have, finally,*

    p_k = (1 - ρ)ρ^k        k = 0, 1, 2, . . .            (3.23)
    N̄ = Σ_{k=0}^∞ k p_k = (1 - ρ) Σ_{k=0}^∞ k ρ^k

Using the trick similar to the one used in deriving Eq. (2.142) we have

    N̄ = (1 - ρ) ρ (∂/∂ρ) Σ_{k=0}^∞ ρ^k = (1 - ρ) ρ (∂/∂ρ) (1/(1 - ρ))

    N̄ = ρ / (1 - ρ)            (3.24)
* If we inspect the transient solution for M/M/1 given in Eq. (2.163), we see the term (1 - ρ)ρ^k; the reader may verify that, for ρ < 1, the limit of the transient solution agrees with our solution here.
Figure 3.2 The distribution of the number in the system: bars of height 1 - ρ, (1 - ρ)ρ, (1 - ρ)ρ², . . .

    σ_N² ≜ Σ_{k=0}^∞ (k - N̄)² p_k = ρ / (1 - ρ)²            (3.25)
We may now apply Little's result directly from Eq. (2.25) in order to obtain

Figure 3.3 The average number in the system M/M/1.

Figure 3.4

    T = N̄ / λ

    T = (ρ/(1 - ρ))(1/λ)

    T = (1/μ) / (1 - ρ)            (3.26)
* We observe at ρ = 1 that the system behavior is unstable; this is not surprising if one recalls that ρ < 1 was our condition for ergodicity. What is perhaps surprising is that the behavior of the average number N̄ and of the average system time T deteriorates so badly as ρ → 1 from below; we had seen for steady flow systems in Chapter 1 that so long as R < C (which corresponds to the case ρ < 1) no queue formed and smooth, rapid flow proceeded through the system. Here in the M/M/1 queue we find this is no longer true and that we pay an extreme penalty when we attempt to run the system near (but below) its capacity.
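These closed-form results are easily confirmed numerically. The sketch below is our own (the truncation of the state space at 2000 terms is an illustrative device, valid since the geometric tail is negligible):

```python
# Numerical confirmation of the M/M/1 results: p_k = (1 - rho) * rho**k,
# N_bar = rho / (1 - rho), and T = (1/mu) / (1 - rho) via Little's result.
lam, mu = 0.8, 1.0
rho = lam / mu

p = [(1 - rho) * rho ** k for k in range(2000)]   # Eq. (3.23), truncated
N_bar = sum(k * pk for k, pk in enumerate(p))     # average number in system
T = N_bar / lam                                   # Little's result: T = N_bar / lambda
```

At ρ = 0.8 this gives N̄ = 4 and T = 5, in agreement with ρ/(1 - ρ) and 1/(μ - λ); the partial sum Σ_{i≥5} p_i also reproduces the tail probability ρ⁵.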
We may also calculate the probability of finding at least k customers in the system:

    P[≥ k in system] = Σ_{i=k}^∞ p_i = Σ_{i=k}^∞ (1 - ρ)ρ^i

    P[≥ k in system] = ρ^k            (3.27)
Thus we see that the probability of exceeding some limit on the number of customers in the system is a geometrically decreasing function of that number and decays very rapidly. With the tools at hand we are now in a position to develop the probability density function for the time spent in the system. However, we defer that development until we treat the more general case of M/G/1 in Chapter 5 [see Eq. (5.118)]. Meanwhile, we proceed to discuss numerous other birth-death queues in equilibrium.
3.3. DISCOURAGED ARRIVALS

This next example considers a case where arrivals tend to get discouraged when more and more people are present in the system. One possible way to model this effect is to choose the birth and death coefficients as follows:

    λ_k = α/(k + 1)        k = 0, 1, 2, . . .
    μ_k = μ                k = 1, 2, 3, . . .
Figure 3.5 State-transition-rate diagram for discouraged arrivals.
The state-transition-rate diagram for this case is as shown in Figure 3.5. We apply Eq. (3.11) immediately to obtain

    p_k = p_0 Π_{i=0}^{k-1} [α/(i + 1)] / μ = p_0 (α/μ)^k (1/k!)            (3.28)

From Eq. (3.12),

    p_0 = [Σ_{k=0}^∞ (α/μ)^k / k!]^{-1} = e^{-α/μ}            (3.29)

so the system is ergodic for all α/μ < ∞. The utilization (the probability that the server is busy) is

    ρ = 1 - p_0 = 1 - e^{-α/μ}            (3.30)

Going back to Eq. (3.28) we thus have

    p_k = ((α/μ)^k / k!) e^{-α/μ}        k = 0, 1, 2, . . .            (3.31)
fl
In o rder to calculate T , the average time spent in the system , we may use
Little's result agai n. For this we require A, which is dir ectly calculated from
p = AX = Al fl ; thus from Eq. (3.30)
). =
flP
= fl(l
e -o / P )
(3.32)
oc
= L kJ.kPk- The
reader should
3.4. M/M/∞: RESPONSIVE SERVERS (INFINITE NUMBER OF SERVERS)

    λ_k = λ         k = 0, 1, 2, . . .
    μ_k = kμ        k = 1, 2, 3, . . .
Here the state-transition-rate diagram is that shown in Figure 3.6. Going directly to Eq. (3.11) for the solution we obtain

    p_k = p_0 Π_{i=0}^{k-1} λ/((i + 1)μ) = p_0 (λ/μ)^k (1/k!)            (3.33)

Solving for p_0 we find

    p_k = ((λ/μ)^k / k!) e^{-λ/μ}        k = 0, 1, 2, . . .            (3.34)

so the number in the system is Poisson distributed with mean

    N̄ = λ/μ

Here, too, the ergodic condition is simply λ/μ < ∞. It appears then that a system of discouraged arrivals behaves exactly the same as a system that includes a responsive server. However, Little's result provides a different (and simpler) form for T here than that given in Eq. (3.32); thus

    T = 1/μ

This answer is, of course, obvious since if we use the interpretation where each arriving customer is granted his own server, then his time in system will be merely his service time, which clearly equals 1/μ on the average.
Figure 3.6 State-transition-rate diagram for the infinite-server case M/M/oo.
3.5. M/M/m: THE m-SERVER CASE

    λ_k = λ         k = 0, 1, 2, . . .

    μ_k = kμ        0 ≤ k ≤ m
    μ_k = mμ        m ≤ k

From Eq. (3.20) it is easily seen that the condition for ergodicity is λ/mμ < 1. The state-transition-rate diagram is shown in Figure 3.7. When we go to solve for p_k from Eq. (3.11) we find that we must separate the solution into two parts, since the dependence of μ_k upon k is also in two parts. Accordingly, for k ≤ m,

    p_k = p_0 Π_{i=0}^{k-1} λ/((i + 1)μ) = p_0 (λ/μ)^k (1/k!)            (3.35)

Similarly, for k ≥ m,

    p_k = p_0 Π_{i=0}^{m-1} λ/((i + 1)μ) Π_{i=m}^{k-1} λ/(mμ) = p_0 (λ/μ)^k / (m! m^{k-m})            (3.36)
Collecting together the results from Eqs. (3.35) and (3.36) we have

    p_k = p_0 (mρ)^k / k!             k ≤ m
                                                    (3.37)
    p_k = p_0 ρ^k m^m / m!            k ≥ m

where

    ρ ≜ λ/(mμ) < 1            (3.38)

Figure 3.7 State-transition-rate diagram for the m-server system M/M/m.
To solve for p_0 we again use Eq. (3.12):

    p_0 = [Σ_{k=0}^{m-1} (mρ)^k/k! + Σ_{k=m}^∞ (mρ)^k/(m! m^{k-m})]^{-1}

and so

    p_0 = [Σ_{k=0}^{m-1} (mρ)^k/k! + ((mρ)^m/m!)(1/(1 - ρ))]^{-1}            (3.39)

The probability that an arriving customer is forced to join the queue is

    P[queueing] = Σ_{k=m}^∞ p_k

Thus

    P[queueing] = ((mρ)^m/m!)(1/(1 - ρ)) / [Σ_{k=0}^{m-1} (mρ)^k/k! + ((mρ)^m/m!)(1/(1 - ρ))]            (3.40)

This probability of queueing is Erlang's C formula, denoted C(m, λ/μ).
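Equation (3.40) is simple to compute. The sketch below is our own code (names are illustrative):

```python
import math

# Erlang's C formula, Eq. (3.40): the probability that an arriving customer
# must queue in an M/M/m system.  Requires rho = lam/(m*mu) < 1.
def erlang_c(m, lam, mu):
    rho = lam / (m * mu)
    assert rho < 1.0                            # ergodicity condition
    a = m * rho                                 # offered load lam/mu
    queue_term = a ** m / math.factorial(m) / (1.0 - rho)
    denom = sum(a ** k / math.factorial(k) for k in range(m)) + queue_term
    return queue_term / denom                   # P[queueing]
```

For m = 1 the formula reduces to P[queueing] = ρ, as expected from the M/M/1 results of Section 3.2, and adding servers at fixed load reduces the queueing probability.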
3.6. M/M/1/K: FINITE STORAGE
Figure 3.8 State-transition-rate diagram for the case of finite storage room M/M/1/K.
    λ_k = λ        k < K
    λ_k = 0        k ≥ K

    μ_k = μ        k = 1, 2, . . . , K
From Eq. (3.20), we see that this system is always ergodic. The state-transition-rate diagram for this finite Markov chain is shown in Figure 3.8. Proceeding directly with Eq. (3.11) we obtain

    p_k = p_0 Π_{i=0}^{k-1} (λ/μ) = p_0 (λ/μ)^k        k ≤ K            (3.41)

Of course, we also have

    p_k = 0        k > K            (3.42)

In order to solve for p_0 we use Eqs. (3.41) and (3.42) in Eq. (3.12) to obtain

    p_0 = [Σ_{k=0}^{K} (λ/μ)^k]^{-1}

and so

    p_0 = (1 - λ/μ) / (1 - (λ/μ)^{K+1})

Thus, finally,

    p_k = [(1 - λ/μ) / (1 - (λ/μ)^{K+1})] (λ/μ)^k        k ≤ K
                                                                      (3.43)
    p_k = 0        otherwise
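A small sketch (our own code) of Eq. (3.43); for λ/μ < 1 and large K the distribution approaches the M/M/1 result (1 - ρ)ρ^k:

```python
# Finite-storage distribution of Eq. (3.43).  The lam/mu = 1 branch takes the
# limiting (uniform) value of the formula.
def mm1k_probs(lam, mu, K):
    r = lam / mu
    p0 = (1.0 - r) / (1.0 - r ** (K + 1)) if r != 1.0 else 1.0 / (K + 1)
    return [p0 * r ** k for k in range(K + 1)]

p = mm1k_probs(0.5, 1.0, 10)
blocking = p[-1]    # with Poisson arrivals, an arriving customer is turned
                    # away with probability p_K
```

Note that, unlike the infinite-storage case, this system is stable for every λ/μ; the price paid is the loss of the customers who arrive to find the buffer full.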
Figure 3.9 State-transition-rate diagram for m-server loss system M/M/m/m.
For the case of blocked calls cleared (K = 1) we have

    p_0 = 1 / (1 + λ/μ)              k = 0

    p_k = (λ/μ) / (1 + λ/μ)          k = K = 1            (3.44)

    p_k = 0                          otherwise
3.7. M/M/m/m: m-SERVER LOSS SYSTEMS

    λ_k = λ         k < m
    λ_k = 0         k ≥ m

    μ_k = kμ        k = 1, 2, . . . , m

Proceeding with Eq. (3.11),

    p_k = p_0 Π_{i=0}^{k-1} λ/((i + 1)μ)

or

    p_k = p_0 (λ/μ)^k (1/k!)        0 ≤ k ≤ m            (3.45)

Solving for p_0,

    p_0 = [Σ_{k=0}^{m} (λ/μ)^k / k!]^{-1}
The probability that a newly arriving customer is lost is p_m; this is Erlang's loss formula, B(m, λ/μ):

    p_m = ((λ/μ)^m / m!) / Σ_{k=0}^{m} (λ/μ)^k / k!            (3.46)
3.8. M/M/1//M: FINITE CUSTOMER POPULATION - SINGLE SERVER†

    λ_k = λ(M - k)        0 ≤ k ≤ M  (0 otherwise)
    μ_k = μ               k = 1, 2, . . .

Proceeding with Eq. (3.11),

    p_k = p_0 Π_{i=0}^{k-1} λ(M - i)/μ

* Europeans use the notation E_{1,m}(λ/μ).
† Recall that a blank entry in either of the last two optional positions in this notation means an entry of ∞; thus here we have the system M/M/1/∞/M.
Figure 3.10 State-transition-rate diagram for the finite population, single-server system M/M/1//M.
    p_k = p_0 (λ/μ)^k M!/(M - k)!        0 ≤ k ≤ M            (3.47)
    p_k = 0                              k > M

and

    p_0 = [Σ_{k=0}^{M} (λ/μ)^k M!/(M - k)!]^{-1}            (3.48)
3.9. M/M/∞//M: FINITE CUSTOMER POPULATION - "INFINITE" NUMBER OF SERVERS

We again consider the finite population case, but now provide a separate server for each customer in the system. We model this as follows:

    λ_k = λ(M - k)        0 ≤ k ≤ M  (0 otherwise)
    μ_k = kμ               k = 1, 2, . . .
Clearly, this too is an ergodic system. The finite state-transition-rate diagram is shown in Figure 3.11. Solving this system, we have from Eq. (3.11)

    p_k = p_0 Π_{i=0}^{k-1} λ(M - i)/((i + 1)μ)

       = p_0 (λ/μ)^k M!/(k!(M - k)!)        0 ≤ k ≤ M            (3.49)

Figure 3.11 State-transition-rate diagram for "infinite"-server finite population system M/M/∞//M.
    p_k = (λ/μ)^k (M!/(k!(M - k)!)) / (1 + λ/μ)^M        0 ≤ k ≤ M
                                                                      (3.50)
    p_k = 0        otherwise
We may easily calculate the expected number of people in the system from

    N̄ = Σ_{k=0}^{M} k (λ/μ)^k (M!/(k!(M - k)!)) / (1 + λ/μ)^M = M(λ/μ) / (1 + λ/μ)
3.10. M/M/m/K/M: FINITE POPULATION, m-SERVER CASE, FINITE STORAGE

This rather general system is the most complicated we have so far considered and will reduce to all of the previous cases (except the example of discouraged arrivals) as we permit the parameters of this system to vary. We assume we have a finite population of M customers, each with an "arriving" parameter λ. In addition, the system has m servers, each with parameter μ. The system also has finite storage room such that the total number of customers in the system (queueing plus those in service) is no more than K. We assume M ≥ K ≥ m; customers arriving to find K already in the system are "lost" and return immediately to the arriving state as if they had just completed service. This leads to the following set of birth-death coefficients:

    λ_k = λ(M - k)        0 ≤ k ≤ K - 1
    λ_k = 0               otherwise

    μ_k = kμ        0 ≤ k ≤ m
    μ_k = mμ        m ≤ k ≤ K
Figure 3.12 State-transition-rate diagram for M/M/m/K/M.
For 0 ≤ k ≤ m - 1 we have

    p_k = p_0 Π_{i=0}^{k-1} λ(M - i)/((i + 1)μ) = p_0 (λ/μ)^k M!/(k!(M - k)!)        0 ≤ k ≤ m - 1            (3.51)

For m ≤ k ≤ K we have

    p_k = p_0 Π_{i=0}^{m-1} λ(M - i)/((i + 1)μ) Π_{i=m}^{k-1} λ(M - i)/(mμ)

       = p_0 (λ/μ)^k (M!/(k!(M - k)!)) k!/(m! m^{k-m})        m ≤ k ≤ K            (3.52)
The expression for p_0 is rather complex and will not be given here, although it may be computed in a straightforward manner. In the case of a pure loss system (i.e., M ≥ K = m), the stationary state probabilities are given by

    p_k = (M!/(k!(M - k)!)) (λ/μ)^k / Σ_{i=0}^{m} (M!/(i!(M - i)!)) (λ/μ)^i        k = 0, 1, . . . , m            (3.53)
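Equation (3.53) translates directly into code (a sketch of our own; parameter values are illustrative):

```python
import math

# The pure-loss probabilities of Eq. (3.53): a truncated binomial-like
# distribution over the states k = 0, ..., m.
def engset_probs(M, m, lam, mu):
    r = lam / mu
    weights = [math.comb(M, k) * r ** k for k in range(m + 1)]
    total = sum(weights)
    return [w / total for w in weights]

p = engset_probs(M=10, m=4, lam=0.2, mu=1.0)
```

As a sanity check, for M = K = m the normalizing sum runs over all k = 0, . . . , M and Eq. (3.53) reduces to the binomial distribution of Eq. (3.50).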
theory. In the next chapter (4) we consider the equilibrium solution for Markovian queues; in Chapter 5 we will generalize to semi-Markov processes in which the service time distribution B(x) is permitted to be general, and in Chapter 6 we revert back to the exponential service time case, but permit the interarrival time distribution A(t) to be general; in both of these cases an imbedded Markov chain will be identified and solved. Only when both A(t) and B(x) are nonexponential do we require the methods of advanced queueing theory discussed in Chapter 8. (There are some special nonexponential distributions that may be described with the theory of Markov processes and these too are discussed in Chapter 4.)
EXERCISES
3.1. Consider ⋯
(a)
(b)
3.2.

    λ_k = λ         0 ≤ k ≤ K
    λ_k = 2λ        K < k

    μ_k = μ         k = 1, 2, . . .

(a)
(b)

3.3. ⋯ lim_{t→∞} P_k(t)
3.4.

3.5.

3.6. Consider the birth-death coefficients

    λ_k = αk(K_2 - k)        k = K_1, K_1 + 1, . . . , K_2
    μ_k = βk(k - K_1)        k = K_1, K_1 + 1, . . . , K_2

where K_1 ≤ K_2 and where these coefficients are zero outside the range K_1 ≤ k ≤ K_2. Solve for p_k (assuming that the system initially contains K_1 ≤ k ≤ K_2 customers).
3.7. Consider an M/M/m system that is to serve the pooled sum of two Poisson arrival streams; the ith stream has an average arrival rate given by λ_i and exponentially distributed service times with mean 1/μ_i (i = 1, 2). The first stream is an ordinary stream whereby each arrival requires exactly one of the m servers; if all m servers are busy then any newly arriving customer of type 1 is lost. Customers from the second class each require the simultaneous use of m_0 servers (and will occupy them all simultaneously for the same exponentially distributed amount of time whose mean is 1/μ_2 sec); if a customer from this class finds fewer than m_0 idle servers then he too is lost to the system. Find the fraction of type 1 customers and the fraction of type 2 customers that are lost.
3.8. Consider a finite customer population system with a single server such as that considered in Section 3.8; let the parameters M, λ be replaced by M, λ′. It can be shown that if M → ∞ and λ′ → 0 such that lim Mλ′ = λ, then the finite population system becomes an infinite population system with exponential interarrival times (at a mean rate of λ customers per second). Now consider the case of Section 3.10; the parameters of that case are now to be denoted M, λ′, m, μ, K in the obvious way. Show what value these parameters must take on if they are to represent the earlier cases described in Sections 3.2, 3.4, 3.5, 3.6, 3.7, 3.8, or 3.9.
3.9. Using the definition for B(m, λ/μ) in Section 3.7 and the definition of C(m, λ/μ) given in Section 3.5, establish the following for λ/μ > 0, m = 1, 2, . . .:

(a) B(m, λ/μ) < ⋯ Σ_{k=m} (λ/μ)^k/k! ⋯ < C(m, λ/μ)

(b)
3.10. ⋯ C(m, λ/μ) ⋯
(c)

3.11. ⋯ have occurred at the end of this interval as well as any customers who are about to leave at this point. Solve for the expected value of the number of customers at these points.

    P(z_1, z_2) ≜ Σ_{k=0}^∞ Σ_{j=0}^∞ p_{kj} z_1^k z_2^j

show that

    ⋯ {∂/∂z_2 ⋯ + μ[1 - z_1 ⋯ ∂/∂z_1 ⋯]} P(z_1, z_2) = μ ⋯ P(0, z_2) ⋯

(c)

3.12. ⋯ [Stage 1 → Stage 2]
Both servers are of the exponential type with rates μ_1 and μ_2, respectively. Let

    P(z) = Σ_{k=0}^∞ p_k z^k

(d) Find p_k.
3.13.

3.14.
    ⋯            (4.1)

    (λ + μ)p_1 = λp_0 + μp_2            (4.2)

    μp_2 = λ_2 p_0 + λp_1            (4.3)

where Eqs. (4.1), (4.2), and (4.3) correspond to the flow conservation for states E_0, E_1, and E_2, respectively. Observe also that the last equation is exactly the sum of the first two; we always have exactly one redundant equation in these finite Markov chains. We know that the additional equation required is

    p_0 + p_1 + p_2 = 1            (4.4)
Voilà! Simple as pie. In fact, it is as "simple" as inverting a set of simultaneous linear equations.

We take advantage of this inspection technique in solving a number of Markov chains in equilibrium in the balance of this chapter.*

As in the previous chapter we are here concerned with the limiting probability defined as p_k = lim_{t→∞} P[N(t) = k], assuming it exists. This probability may be interpreted as giving the proportion of time that the system spends in state E_k. One could, in fact, estimate this probability by measuring how often the system contained k customers as compared to the total measurement time. Another quantity of interest (perhaps of greater interest) in queueing systems is the probability that an arriving customer finds the system in state E_k; that is, we consider the equilibrium probability r_k in the case of an ergodic system. One might intuitively feel that in all cases p_k = r_k, but it is easy to show that this is not generally true. For example, let us consider the (non-Markovian) system D/D/1 in which arrivals are uniformly spaced in time such that we get one arrival every t̄ sec exactly; the service-time requirements are identical for all customers and equal, say,

* It should also be clear that this inspection technique permits us to write down the time-dependent state probabilities P_k(t) directly, as we have already seen for the case of birth-death processes; these time-dependent equations will in fact be exactly Eq. (2.114).
where P_k(t) is, as before, the probability that the system is in state E_k at time t and where R_k(t) is the probability that a customer arriving at time t finds the system in state E_k. Specifically, for our system with Poisson arrivals we define A(t, t + Δt) to be the event that an arrival occurs in the interval (t, t + Δt); then we have

    R_k(t) ≜ lim_{Δt→0} P[N(t) = k | A(t, t + Δt)]            (4.5)

[where N(t) gives the number in system at time t]. Using our definition of conditional probability we may rewrite R_k(t) as

    R_k(t) = lim_{Δt→0} P[N(t) = k, A(t, t + Δt)] / P[A(t, t + Δt)]

Now for the case of Poisson arrivals we know (due to the memoryless property) that the event A(t, t + Δt) must be independent of the number in the system at time t (and also of the time t itself); consequently P[A(t, t + Δt) | N(t) = k] = P[A(t, t + Δt)], and so we have

    R_k(t) = lim_{Δt→0} P[N(t) = k] P[A(t, t + Δt)] / P[A(t, t + Δt)]

or

    R_k(t) = P_k(t)            (4.6)
This is what we set out to prove, namely, that the time-dependent probability of an arrival finding the system in state E_k is exactly equal to the time-dependent probability of the system being in state E_k. Clearly this also applies to the equilibrium probability r_k that an arrival finds k customers in the system and the proportion of time p_k that the system finds itself with k customers. This equivalence does not surprise us in view of the memoryless property of the Poisson process, which as we have just shown generates a sequence of arrivals that take a really "random look" at the system.
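The equivalence r_k = p_k can also be observed in simulation. The following event-driven sketch of an M/M/1 queue is our own code (the parameters, seed, and run length are illustrative); it compares the fraction of time the system is empty with the fraction of arrivals that find it empty:

```python
import random

# Arrivals from a Poisson process "see time averages": compare the
# time-average probability of state E_0 with the fraction of arrivals
# that find the system in E_0.
random.seed(7)
lam, mu = 0.5, 1.0
t, n, end_service = 0.0, 0, float("inf")
next_arrival = random.expovariate(lam)
time_in_state, seen_by_arrivals, arrivals = {}, {}, 0

while arrivals < 200000:
    t_next = min(next_arrival, end_service)
    time_in_state[n] = time_in_state.get(n, 0.0) + (t_next - t)
    t = t_next
    if next_arrival <= end_service:            # an arrival occurs
        seen_by_arrivals[n] = seen_by_arrivals.get(n, 0) + 1
        arrivals += 1
        if n == 0:                             # server was idle; start service
            end_service = t + random.expovariate(mu)
        n += 1
        next_arrival = t + random.expovariate(lam)
    else:                                      # a service completion
        n -= 1
        end_service = (t + random.expovariate(mu)) if n > 0 else float("inf")

p0_time = time_in_state[0] / t                 # proportion of time in E_0
r0_arrivals = seen_by_arrivals[0] / arrivals   # fraction of arrivals finding E_0
```

With ρ = 0.5 both quantities settle near 1 - ρ = 0.5, and, more to the point, near each other.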
4.2. THE METHOD OF STAGES - ERLANGIAN DISTRIBUTION E_r
The "method of stages" permits one to study queueing systems that are more general than the birth-death systems. This ingenious method is a further testimonial to the brilliance of A. K. Erlang, who developed it early in this century, long before our tools of modern probability theory were available. Erlang recognized the extreme simplicity of the exponential distribution and its great power in solving Markovian queueing systems. However, he also recognized that the exponential distribution was not always an appropriate candidate for representing the true situation with regard to service times (and interarrival times). He must also have observed that to allow a more general service distribution would have destroyed the Markovian property and then would have required some more complicated solution method.* The inherent beauty of the Markov chain was not to be given up so easily. What Erlang conceived was the notion of decomposing the service-time† distribution into a collection of structured exponential distributions. The principle on which the method of stages is based is the memoryless property of the exponential distribution; again we repeat that this lack of memory is reflected by the fact that the distribution of time remaining for an exponentially distributed random variable is independent of the acquired "age" of that random variable.
Consider the diagram of Figure 4.2. In this figure we are defining a service facility with an exponentially distributed service-time pdf given by

$$b(x) = \frac{dB(x)}{dx} = \mu e^{-\mu x} \qquad x \ge 0 \qquad (4.7)$$
The notation of the figure shows an oval which represents the service facility and is labeled with the symbol $\mu$, which represents the service-rate parameter

* As we shall see in Chapter 5, a newer approach to this problem, the "method of imbedded Markov chains," was not available at the time of Erlang.
† Identical observations apply also to the interarrival time distribution.
[Figure 4.2: A single-stage (exponential) service facility with rate $\mu$.]
as in Eq. (4.7). The reader will recall from Chapter 2 that the exponential distribution has a mean and variance given by

$$E[\tilde{x}] = \frac{1}{\mu} \qquad \sigma_b^2 = \frac{1}{\mu^2} \qquad (4.8)$$

Figure 4.3 shows a two-stage service facility in which each stage is exponentially distributed with parameter $2\mu$; that is, each stage has the pdf $h(y) = 2\mu e^{-2\mu y}$, $y \ge 0$. Thus the mean and variance for $h(y)$ are $E[\tilde{y}] = 1/2\mu$ and $\sigma_y^2 = (1/2\mu)^2$.
The fashion in which this two-stage service facility functions is that upon departure of a customer from this facility a new customer is allowed to enter from the left. This new customer enters stage 1 and remains there for an amount of time randomly chosen from $h(y)$. Upon his departure from this first stage he then proceeds immediately into the second stage and spends an amount of time there equal to a random variable drawn independently once again from $h(y)$. After this second random interval expires he then departs from the service facility, and at this point only may a new customer enter the facility from the left. We see, then, that one, and only one, customer is
[Figure 4.3: A two-stage service facility; each stage is exponential with rate $2\mu$.]
allowed into the box entitled "service facility" at any time.* This implies that at least one of the two service stages must always be empty. We now inquire as to the specific distribution of total time spent in the service facility. Clearly this is a random variable, which is the sum of two independent and identically distributed random variables. Thus, as shown in Appendix II, we must form the convolution of the density function associated with each of the two summands. Alternatively, we may calculate the Laplace transform of the service-time pdf as being equal to the product of the Laplace transforms of the pdf's associated with each of the summands. Since both random variables are (independent and) identically distributed, we must form the product of a function with itself. First, as always, we define the appropriate transforms as
$$B^*(s) \triangleq \int_0^\infty e^{-sx}\,b(x)\,dx \qquad (4.9)$$

$$H^*(s) \triangleq \int_0^\infty e^{-sy}\,h(y)\,dy \qquad (4.10)$$

so that

$$B^*(s) = [H^*(s)]^2$$

But we already know the transform of the exponential from Eq. (2.144), and so

$$H^*(s) = \frac{2\mu}{s + 2\mu}$$

Thus

$$B^*(s) = \left(\frac{2\mu}{s + 2\mu}\right)^2 \qquad (4.11)$$
We must now invert Eq. (4.11). However, the reader may recall that we have already seen this form in Eq. (2.146), with its inverse in Eq. (2.147). Applying that result we have

$$b(x) = 2\mu(2\mu x)e^{-2\mu x} \qquad x \ge 0 \qquad (4.12)$$
We may now calculate the mean and variance of this two-stage system in one of three possible ways: by arguing on the basis of the structure in Figure 4.3; by using the moment-generating properties of $B^*(s)$; or by direct calculation

* As an example of a two-stage service facility in which only one stage may be active at a time, consider a courtroom in a small town. A queue of defendants forms, waiting for trial. The judge tries a case (the first service stage) and then fines the defendant. The second stage consists of paying the fine to the court clerk. However, in this small town, the judge is also the clerk, and so he moves over to the clerk's desk, collects the fine, releases the defendant, goes back to his bench, and then accepts the next defendant into "service."
from the density function given in Eq. (4.12). We choose the first of these three methods since it is most straightforward (the reader may verify the other two for his own satisfaction). Since the time spent in service is the sum of two random variables, it is clear that the expected time in service is the sum of the expectations of each. Thus we have

$$E[\tilde{x}] = 2E[\tilde{y}] = \frac{1}{\mu}$$

Similarly, since the two random variables being summed are independent, we may sum their variances to find the variance of the sum:

$$\sigma_b^2 = \sigma_h^2 + \sigma_h^2 = \frac{1}{2\mu^2}$$
Note that we have arranged matters such that the mean time in service in the single-stage system of Figure 4.2 and the two-stage system of Figure 4.3 is the same. We accomplished this by speeding up each of the two service stages by a factor of 2. Note further that the variance of the two-stage system is one-half the variance of the one-stage system.
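These two moments are easy to confirm by sampling. The following Monte Carlo sketch is mine, not from the text: it draws service times as the sum of two independent exponential stages of rate $2\mu$ and checks that the sample mean sits near $1/\mu$ and the sample variance near $1/2\mu^2$.

```python
import random

def two_stage_moments(mu=1.0, n=200_000, seed=7):
    """Sample the two-stage facility: each service time is the sum of two
    independent exponential stage times, each with rate 2*mu."""
    random.seed(seed)
    xs = [random.expovariate(2 * mu) + random.expovariate(2 * mu)
          for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var

mean, var = two_stage_moments()
print(mean, var)   # expect roughly 1/mu = 1.0 and 1/(2*mu**2) = 0.5
```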
The previous paragraph introduced the notion of a two-stage service facility, but we have yet to discuss the crucial point. Let us consider the state variable for a queueing system with Poisson arrivals and a two-stage exponential server as given in Figure 4.3. As always, as part of our state description, we must record the number of customers waiting in the queue. In addition we must supply sufficient information about the service facility so as to summarize the relevant past history. Owing to the memoryless property of the exponential distribution, it is enough to indicate which of the following three possible situations may be found within the service facility: either both stages are idle (indicating an empty service facility); or the first stage is busy and the second stage is idle; or the first stage is idle and the second stage is busy. This service-facility state information may be supplied by identifying the stage of service in which the customer may be found. Our state description then becomes a two-dimensional vector that specifies the number of customers in queue and the number of stages yet to be completed by our customer in service. The time this customer has already spent in his current stage of service is irrelevant in calculating the future behavior of the system. Once again we have a Markov process with a discrete (two-dimensional) state space!
The method generalizes, and so now we consider the case in which we provide an $r$-stage service facility, as shown in Figure 4.4. In this system, of course, when a customer departs by exiting from the right side of the oval service facility, a new customer may then enter from the left side and proceed
one stage at a time through the sequence of $r$ stages. Upon his departure from the $r$th stage the customer leaves the facility. Each of the $r$ stages is exponentially distributed with parameter $r\mu$, so that each stage time has the pdf

[Figure 4.4: An $r$-stage (Erlangian) service facility; each stage is exponential with rate $r\mu$.]

$$h(y) = r\mu e^{-r\mu y} \qquad y \ge 0 \qquad (4.13)$$
The total time that a customer spends in this service facility is the sum of $r$ independent, identically distributed random variables, each chosen from the distribution given in Eq. (4.13). We have the following expectation and variance associated with each stage:

$$E[\tilde{y}] = \frac{1}{r\mu} \qquad \sigma_y^2 = \left(\frac{1}{r\mu}\right)^2$$
It should be clear to the reader that we have chosen each stage in this system to have a service rate equal to $r\mu$ in order that the mean service time remain constant:

$$E[\tilde{x}] = r\left(\frac{1}{r\mu}\right) = \frac{1}{\mu}$$

Similarly, since the stage times are independent we may add the variances to obtain

$$\sigma_b^2 = r\left(\frac{1}{r\mu}\right)^2 = \frac{1}{r\mu^2} \qquad C_b = \frac{\sigma_b}{E[\tilde{x}]} = \frac{1}{\sqrt{r}} \qquad (4.14)$$

As before, the Laplace transform of the total service-time pdf is the $r$-fold product of the stage transforms:

$$B^*(s) = \left(\frac{r\mu}{s + r\mu}\right)^r \qquad (4.15)$$
Inverting this transform [compare Eqs. (2.146) and (2.147)] we obtain the $r$-stage Erlangian pdf

$$b(x) = \frac{r\mu(r\mu x)^{r-1}e^{-r\mu x}}{(r-1)!} \qquad x \ge 0 \qquad (4.16)$$

whose standard deviation is

$$\sigma_b = \left(\frac{1}{\sqrt{r}}\right)\frac{1}{\mu}$$
Thus we see that the standard deviation for the $r$-stage Erlangian distribution is $1/\sqrt{r}$ times the standard deviation for the single stage. It should be clear to the sophisticated reader that as $r$ increases, the density function given by Eq. (4.16) must approach that of the normal or Gaussian distribution, due to the central limit theorem. This is indeed true, but we give more in Eq. (4.16) by specifying the actual sequence of distributions as $r$ increases, to show the fashion in which the limit is approached. In Figure 4.5 we show the family of $r$-stage Erlangian distributions (compare with Figure 2.10). From this figure we observe that the mean holds constant as the width, or standard deviation, of the density shrinks by $1/\sqrt{r}$. Below, we show that the limit (as $r$ goes to infinity) for this density function must, in fact, be a unit impulse function
(see Appendix I) at the point $x = 1/\mu$; this implies that the time spent in an infinite-stage Erlangian service facility approaches a constant with probability 1 (this constant, of course, equals the mean $1/\mu$). We see further that the peak of the family shown moves to the right in a regular fashion. To calculate the location of the peak, we differentiate the density function as given in Eq. (4.16) and set this derivative equal to zero to obtain
$$\frac{d\,b(x)}{dx} = \frac{(r\mu)^2\left[(r-1)(r\mu x)^{r-2} - (r\mu x)^{r-1}\right]e^{-r\mu x}}{(r-1)!} = 0$$

or

$$(r - 1) = r\mu x$$

and so we have

$$x_{\text{peak}} = \frac{1}{\mu}\left(\frac{r-1}{r}\right) \qquad (4.17)$$
Thus we see that the location of the peak moves rather quickly toward its final location at $1/\mu$.
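Eq. (4.17) can be checked directly by locating the maximum of the density in Eq. (4.16) on a fine grid; the sketch below is my own (the helper names are invented) and does this for a few values of $r$.

```python
import math

def erlang_pdf(x, r, mu=1.0):
    """The r-stage Erlangian density of Eq. (4.16)."""
    return r * mu * (r * mu * x) ** (r - 1) * math.exp(-r * mu * x) / math.factorial(r - 1)

def numeric_peak(r, mu=1.0, points=30_001, xmax=3.0):
    """Grid search for the x that maximizes the density."""
    xs = [i * xmax / (points - 1) for i in range(points)]
    return max(xs, key=lambda x: erlang_pdf(x, r, mu))

for r in (2, 5, 10):
    print(r, numeric_peak(r), (r - 1) / r)   # grid peak vs Eq. (4.17)
```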
We now show that the limiting distribution is, in fact, a unit impulse by considering the limit of the Laplace transform given in Eq. (4.15):

$$\lim_{r\to\infty} B^*(s) = \lim_{r\to\infty}\left(\frac{r\mu}{s + r\mu}\right)^r = \lim_{r\to\infty}\left(\frac{1}{1 + s/r\mu}\right)^r$$

and so

$$\lim_{r\to\infty} B^*(s) = e^{-s/\mu} \qquad (4.18)$$

This is exactly the transform of a unit impulse located at $x = 1/\mu$, which establishes the claim. We denote the $r$-stage Erlangian
distribution by the symbol $E_r$ (not to be confused with the notation for the state of a random process). Since our state variable is discrete, we are in a position to analyze the queueing system* M/E_r/1. This we do in the following section. Moreover, we will use the same technique in Section 4.4 to decompose the interarrival-time distribution $A(t)$ into an $r$-stage Erlangian distribution. Note in these next two sections that we neurotically require at least one of our distributions to be a pure exponential (this is also true for Chapters 5 and 6).
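The limiting behavior asserted in Eq. (4.18) is also easy to watch numerically; the fragment below is my own sketch, evaluating the transform of Eq. (4.15) at a fixed $s$ for increasing $r$ and comparing it with $e^{-s/\mu}$.

```python
import math

def b_star(s, r, mu=1.0):
    """Transform of the r-stage Erlangian service time, Eq. (4.15)."""
    return (r * mu / (s + r * mu)) ** r

s, mu = 0.7, 1.0
limit = math.exp(-s / mu)            # Eq. (4.18)
errors = [abs(b_star(s, r, mu) - limit) for r in (1, 10, 100, 1000)]
print(errors)   # steadily shrinking gap to the impulse transform
```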
4.3. THE QUEUE M/E_r/1

We now consider the queueing system M/E_r/1, with exponential interarrival times

$$a(t) = \lambda e^{-\lambda t} \qquad t \ge 0 \qquad (4.19)$$

and $r$-stage Erlangian service times

$$b(x) = \frac{r\mu(r\mu x)^{r-1}e^{-r\mu x}}{(r-1)!} \qquad x \ge 0$$

As our state variable we choose the total number of service stages remaining in the system;† when there are $k$ customers in the system and the customer in service is found in stage $i$ of his service ($1 \le i \le r$), this number is $j = rk - i + 1$. Thus the equilibrium probability $p_k$ of finding $k$ customers in the system is related to the stage probabilities $P_j$ by

$$p_k = \sum_{j=(k-1)r+1}^{kr} P_j \qquad k = 1, 2, 3, \ldots \qquad (4.20)$$
* Clearly this is a special case of the system M/G/1, which we will analyze in Chapter 5 using the imbedded Markov chain approach.
† Note that this converts our proposed two-dimensional state vector into a one-dimensional description.
[Figure 4.6: State-transition-rate diagram for the number of service stages in M/E_r/1; each arrival adds $r$ stages at rate $\lambda$, and stage completions occur at rate $r\mu$.]

In what follows we adopt the convention

$$P_j = 0 \qquad j < 0 \qquad (4.21)$$
We may now write down the system state equations immediately by using our flow-conservation inspection method. (Note that we are writing the forward equations in equilibrium.) Thus we have

$$\lambda P_0 = r\mu P_1 \qquad (4.22)$$

$$(\lambda + r\mu)P_j = r\mu P_{j+1} + \lambda P_{j-r} \qquad j = 1, 2, \ldots \qquad (4.23)$$
(4.23)
Let us now use ou r " familiar" meth od of solving difference equat ions,
namely the z-tra nsform. Thus we define
co
P(z) =
2. P ;z;
j=o
As usual, we multiply thej th equation given in Eq. (4.23) by z; and then sum
over all applicable j. Thi s yields
co
co
co
.',
Expressing each of these sums in terms of $P(z)$ we obtain

$$(\lambda + r\mu)[P(z) - P_0] = \lambda z^r P(z) + \frac{r\mu}{z}[P(z) - P_0 - P_1 z]$$

The first term on the right-hand side of this last equation is obtained by taking special note of Eq. (4.21). Simplifying, we have

$$P(z) = \frac{P_0[\lambda + r\mu - (r\mu/z)] - r\mu P_1}{\lambda + r\mu - \lambda z^r - (r\mu/z)}$$

We may now use Eq. (4.22) to simplify this last further:

$$P(z) = \frac{r\mu P_0[1 - (1/z)]}{\lambda + r\mu - \lambda z^r - (r\mu/z)}$$

yielding finally

$$P(z) = \frac{r\mu P_0(1 - z)}{r\mu + \lambda z^{r+1} - (\lambda + r\mu)z} \qquad (4.24)$$

We may evaluate the constant $P_0$ by recognizing that $P(1) = 1$ and using L'Hospital's rule; thus

$$P(1) = 1 = \frac{r\mu P_0}{r\mu - \lambda r}$$

and so

$$P_0 = 1 - \frac{\lambda}{\mu}$$
In this system the arrival rate is $\lambda$ and the average service time is held fixed at $1/\mu$, independent of $r$. Thus we recognize that our utilization factor is

$$\rho = \lambda\bar{x} = \frac{\lambda}{\mu} \qquad (4.25)$$

Substituting back into Eq. (4.24) we find

$$P(z) = \frac{r\mu(1 - \rho)(1 - z)}{r\mu + \lambda z^{r+1} - (\lambda + r\mu)z} \qquad (4.26)$$
We must now invert this z-transform to find the distribution of the number of stages in the system.

The case $r = 1$, which is clearly the system M/M/1, presents no difficulties; this case yields

$$P(z) = \frac{\mu(1 - \rho)(1 - z)}{\mu + \lambda z^2 - (\lambda + \mu)z} = \frac{(1 - \rho)(1 - z)}{1 + \rho z^2 - (1 + \rho)z}$$

The denominator factors into $(1 - z)(1 - \rho z)$, and so canceling the common term $(1 - z)$ we obtain
$$P(z) = \frac{1 - \rho}{1 - \rho z}$$

which inverts by inspection to the familiar geometric distribution $P_j = (1 - \rho)\rho^j$. For general $r$ the denominator of Eq. (4.26) is a polynomial of degree $r + 1$; the factor $(1 - z)$ may again be canceled, leaving $r$ roots $z_1, z_2, \ldots, z_r$, in terms of which*

$$P(z) = \frac{1 - \rho}{\prod_{i=1}^{r}(1 - z/z_i)}$$

A partial-fraction expansion of this last then gives the distribution of the number of stages in the system as

$$P_j = (1 - \rho)\sum_{i=1}^{r} A_i(z_i)^{-j} \qquad j = 1, 2, \ldots \qquad (4.29)$$

where the constants $A_i$ come from the expansion.

* Many of the analytic problems in queueing theory reduce to the (difficult) task of locating the roots of a function.
and where, as before, $P_0 = 1 - \rho$. Thus we see for the system M/E_r/1 that the distribution of the number of stages in the system is a weighted sum of geometric distributions. The waiting-time distribution may be calculated using the methods developed later in Chapter 5.
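As a numerical sanity check on Eqs. (4.21)-(4.26), my own sketch below (the helper names are invented) generates the stage probabilities $P_j$ directly from the balance equations, verifies that they sum to one, and compares their z-transform against the closed form of Eq. (4.26).

```python
def merl_stage_probs(lam=0.5, mu=1.0, r=3, jmax=400):
    """Iterate the balance equations of M/E_r/1:
    P_0 = 1 - lam/mu, lam*P_0 = r*mu*P_1          [Eq. (4.22)], and
    (lam + r*mu)*P_j = r*mu*P_{j+1} + lam*P_{j-r} [Eq. (4.23)]."""
    rmu = r * mu
    P = [0.0] * (jmax + 1)
    P[0] = 1.0 - lam / mu
    P[1] = lam * P[0] / rmu
    for j in range(1, jmax):
        below = P[j - r] if j >= r else 0.0    # convention of Eq. (4.21)
        P[j + 1] = ((lam + rmu) * P[j] - lam * below) / rmu
    return P

lam, mu, r = 0.5, 1.0, 3
P = merl_stage_probs(lam, mu, r)
print(sum(P))                       # close to 1

z = 0.4
lhs = sum(p * z ** j for j, p in enumerate(P))
rhs = (r * mu * (1 - lam / mu) * (1 - z)) / (r * mu + lam * z ** (r + 1) - (lam + r * mu) * z)
print(lhs, rhs)                     # the recursion reproduces Eq. (4.26)
```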
4.4. THE QUEUE E_r/M/1

We now consider the system with $r$-stage Erlangian interarrival times and exponential service; that is,

$$a(t) = \frac{r\lambda(r\lambda t)^{r-1}e^{-r\lambda t}}{(r-1)!} \qquad t \ge 0 \qquad (4.30)$$

$$b(x) = \mu e^{-\mu x} \qquad x \ge 0 \qquad (4.31)$$
Here the roles of interarrival time and service time are interchanged from those of the previous section; in many ways these two systems are duals of each other. The system operates as follows: Given that an arrival has just occurred, one immediately introduces a new "arriving" customer into an $r$-stage Erlangian facility much like that in Figure 4.4; however, rather than consider this to be a service facility, we consider it to be an "arriving" facility. When this arriving customer is inserted from the left side he must then pass through $r$ exponential stages, each with parameter $r\lambda$. It is clear that the pdf of the time spent in the arriving facility will be given by Eq. (4.30). When he exits from the right side of the arriving facility he is then said to "arrive" to the queueing system E_r/M/1. Immediately upon his arrival, a new customer (taken from an infinite pool of available customers) is inserted into the left side of the arriving box and the process is repeated. Once having arrived, the customer joins the queue, waits for service, and is then served according to the distribution given in Eq. (4.31). It is clear that an appropriate state description for this system is to specify not only the number of customers in the system, but also to identify which stage in the arriving facility the arriving customer now occupies. We will consider that each customer who has already arrived (but not yet departed) is contributing $r$ stages of "arrival"; in addition we will count the number of stages so far completed by the arriving customer as a further contribution to the number of arrival stages in the system. Thus our state description will consist of the total number of stages of arrival currently in the system; when we find $k$ customers in the system and when our arriving customer is in the $i$th stage of arrival ($1 \le i \le r$), then the total number of stages of arrival in the system is given by

$$j = rk + i - 1$$
Once again let us use a definition analogous to that in Eq. (4.20), so that $P_j$ is the equilibrium probability of finding $j$ arrival stages in the system; as always, $p_k$ will be the equilibrium probability for the number of customers in the system, and clearly they are related through

$$p_k = \sum_{j=rk}^{r(k+1)-1} P_j$$

[Figure 4.7: State-transition-rate diagram for number of stages: E_r/M/1.]
The system we have defined is an irreducible ergodic Markov chain with its state-transition-rate diagram for stages given in Figure 4.7. Note that when a customer departs from service, he "removes" $r$ stages of "arrival" from the system. Using our inspection method, we may write down the equilibrium equations as

$$r\lambda P_0 = \mu P_r \qquad (4.32)$$

$$r\lambda P_j = r\lambda P_{j-1} + \mu P_{j+r} \qquad 1 \le j \le r - 1 \qquad (4.33)$$

$$(r\lambda + \mu)P_j = r\lambda P_{j-1} + \mu P_{j+r} \qquad r \le j \qquad (4.34)$$
Again defining the transform

$$P(z) = \sum_{j=0}^{\infty} P_j z^j$$

we multiply the $j$th equation by $z^j$ and sum; Eqs. (4.33) and (4.34) together give

$$\sum_{j=1}^{\infty}(\mu + r\lambda)P_j z^j - \sum_{j=1}^{r-1}\mu P_j z^j = \sum_{j=1}^{\infty}r\lambda P_{j-1} z^j + \sum_{j=1}^{\infty}\mu P_{j+r} z^j$$

which in terms of $P(z)$ becomes

$$(\mu + r\lambda)[P(z) - P_0] - \sum_{j=1}^{r-1}\mu P_j z^j = r\lambda z P(z) + \frac{\mu}{z^r}\left[P(z) - \sum_{j=0}^{r} P_j z^j\right]$$

We may now use Eq. (4.32) to eliminate the term $P_r$ and then finally solve for our transform to obtain

$$P(z) = \frac{(1 - z^r)\sum_{j=0}^{r-1} P_j z^j}{r\rho z^{r+1} - (1 + r\rho)z^r + 1} \qquad (4.35)$$

where $\rho \triangleq \lambda/\mu$.
Of the $r + 1$ zeros of the denominator of Eq. (4.35), it may be shown that $r$ of them lie in the closed unit disk (among them $z = 1$) and must also be zeros of the numerator, leaving a single pole at the remaining zero $z_0$, with $|z_0| > 1$. Thus $P(z)$ takes the form

$$P(z) = \frac{1 - z^r}{K(1 - z)(1 - z/z_0)}$$

The constant $K$ follows from the normalization $P(1) = 1$:

$$K = \frac{r}{1 - 1/z_0}$$

and so we have

$$P(z) = \frac{(1 - z^r)(1 - 1/z_0)}{r(1 - z)(1 - z/z_0)} \qquad (4.36)$$
We note that $z^r F(z) \Leftrightarrow f_{j-r}$, where we recall that the notation $\Leftrightarrow$ indicates a transform pair. With this observation, then, we write $P(z) = (1 - z^r)F(z)$, where

$$F(z) = \frac{1 - 1/z_0}{r(1 - z)(1 - z/z_0)} \Leftrightarrow f_j$$

so that

$$P_j = f_j - f_{j-r} \qquad (4.37)$$

Carrying out the partial-fraction expansion of $F(z)$, we see by inspection that

$$f_j = \begin{cases} \dfrac{1 - z_0^{-(j+1)}}{r} & j \ge 0 \\[4pt] 0 & j < 0 \end{cases} \qquad (4.38)$$

and therefore

$$P_j = \frac{z_0^{r-j-1}(1 - z_0^{-r})}{r} \qquad j \ge r \qquad (4.39)$$

We may simplify this last expression by recognizing that the denominator of Eq. (4.35) must equal zero for $z = z_0$; this observation leads to the equality $r\rho(z_0 - 1) = 1 - z_0^{-r}$, and so Eq. (4.39) becomes

$$P_j = \rho(z_0 - 1)z_0^{r-j-1} \qquad j \ge r \qquad (4.40)$$
Collecting these results for all $j$, we have

$$P_j = \begin{cases} \dfrac{1 - z_0^{-(j+1)}}{r} & 0 \le j < r \\[4pt] \rho(z_0 - 1)z_0^{r-j-1} & j \ge r \end{cases} \qquad (4.41)$$

Using our earlier relationship between $p_k$ and $P_j$ we find (the reader should check this algebra for himself) that the distribution of the number of customers in the system is given by

$$p_k = \begin{cases} 1 - \rho & k = 0 \\ \rho(z_0^r - 1)z_0^{-rk} & k > 0 \end{cases} \qquad (4.42)$$
We note that this distribution for number of customers is geometric, with a slightly modified first term. We could at this point calculate the waiting-time distribution, but we will postpone that until we study the system G/M/1 in Chapter 6.
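The root $z_0$ and the resulting distribution are easy to compute. The sketch below is my own (the bisection bounds and names are mine): it finds $z_0$ for the denominator of Eq. (4.35) and then confirms that the probabilities of Eq. (4.41) satisfy the balance equation (4.33).

```python
def find_z0(rho, r, hi=100.0, iters=200):
    """Bisect for the single root z0 > 1 of the denominator of Eq. (4.35),
    f(z) = r*rho*z**(r+1) - (1 + r*rho)*z**r + 1.  (z = 1 is always a root;
    f is negative just above 1 and grows without bound.)"""
    f = lambda z: r * rho * z ** (r + 1) - (1 + r * rho) * z ** r + 1
    lo = 1.0 + 1e-6
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

lam, mu, r = 0.5, 1.0, 2
rho = lam / mu
z0 = find_z0(rho, r)
print(z0)    # for r = 2, rho = 0.5 the remaining factor gives the golden ratio

def P(j):
    """Stage probabilities, Eq. (4.41)."""
    if j < r:
        return (1 - z0 ** (-(j + 1))) / r
    return rho * (z0 - 1) * z0 ** (r - j - 1)

# Balance equation (4.33) at j = 1 (valid for 1 <= j <= r-1):
print(r * lam * P(1), r * lam * P(0) + mu * P(1 + r))

# Customer distribution, Eq. (4.42), sums to one:
pk = [1 - rho] + [rho * (z0 ** r - 1) * z0 ** (-r * k) for k in range(1, 200)]
print(sum(pk))
```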
4.5. BULK ARRIVAL SYSTEMS

Observe that the system M/E_r/1 just studied may be viewed in terms of its stages as a system in which, at each (Poisson) arrival instant, a "bulk" of exactly $r$ units arrives, each unit requiring an exponentially distributed amount of service.* This observation suggests the natural generalization in which the bulk size is itself a random variable with distribution

$$g_i \triangleq P[\text{bulk size is } i] \qquad (4.43)$$

(As an example, one may think of random-size families arriving at the doctor's office for individual vaccinations.) As usual, we will assume that the arrival rate (of bulks) is $\lambda$. Taking the number of customers in the system as our state variable, we have the state-transition-rate diagram of Figure 4.8. In this figure we have shown details only for state $E_k$, for clarity. Thus we find that we can enter $E_k$ from any state below it (since we permit bulks of any size to arrive); similarly, we can move from state $E_k$ to any state above it, the net rate at which we leave $E_k$ being $\lambda g_1 + \lambda g_2 + \cdots = \lambda\sum_{i=1}^{\infty} g_i = \lambda$. If, as usual, we define $p_k$ to be the equilibrium probability for the number of customers in the system, then we may write down the following equilibrium equations:

* To make the correspondence complete, the parameter for this exponential distribution should indeed be $r\mu$. However, in the following development we will choose the parameter merely to be $\mu$, and recall this fact whenever we compare the bulk arrival system to the system M/E_r/1.
$$(\lambda + \mu)p_k = \mu p_{k+1} + \sum_{i=0}^{k-1} p_i \lambda g_{k-i} \qquad k \ge 1 \qquad (4.44)$$

$$\lambda p_0 = \mu p_1 \qquad (4.45)$$
Equation (4.44) has equated the rate out of state $E_k$ (the left-hand side) to the rate into that state, where the first term refers to a service completion and the second term (the sum) refers to all possible ways that arrivals may occur and drive us into state $E_k$ from below. Equation (4.45) is the single boundary equation for the state $E_0$. As usual, we shall solve these equations using the method of z-transforms; thus we have

$$(\lambda + \mu)\sum_{k=1}^{\infty} p_k z^k = \frac{\mu}{z}\sum_{k=1}^{\infty} p_{k+1} z^{k+1} + \sum_{k=1}^{\infty}\sum_{i=0}^{k-1} p_i \lambda g_{k-i} z^k \qquad (4.46)$$
We may interchange the order of summation for the double sum such that

$$\sum_{k=1}^{\infty}\sum_{i=0}^{k-1} p_i \lambda g_{k-i} z^k = \lambda\sum_{i=0}^{\infty} p_i z^i\sum_{k=i+1}^{\infty} g_{k-i} z^{k-i} \qquad (4.47)$$
The z-transform we are seeking is

$$P(z) = \sum_{k=0}^{\infty} p_k z^k$$

and we see from Eq. (4.47) that we should define the z-transform for the distribution of bulk size as*

$$G(z) \triangleq \sum_{k=1}^{\infty} g_k z^k \qquad (4.48)$$
* We could just as well have permitted $g_0 > 0$, which would then have allowed zero-size bulks to arrive, and this would have put self-loops in our state-transition diagram corresponding to null arrivals. Had we done so, then the definition for $G(z)$ would have ranged from zero to infinity, and everything we say below applies for this case as well.
In terms of these transforms, our equation becomes

$$(\lambda + \mu)[P(z) - p_0] = \frac{\mu}{z}[P(z) - p_0 - p_1 z] + \lambda P(z)G(z)$$

Note that the product $P(z)G(z)$ is a manifestation of the convolution property in Table I.1 of Appendix I, since we have in effect formed the transform of the convolution of the sequence $\{p_k\}$ with that of $\{g_k\}$ in Eq. (4.44). Applying the boundary equation (4.45) and simplifying, we have

$$P(z) = \frac{\mu p_0(1 - z)}{\mu(1 - z) - \lambda z[1 - G(z)]} \qquad (4.49)$$

The constant $p_0$ follows, as usual, from the condition $P(1) = 1$ and L'Hospital's rule:

$$p_0 = 1 - \rho \qquad \rho \triangleq \frac{\lambda\bar{g}}{\mu} \qquad (4.50)$$

where $\bar{g}$ is the average bulk size.
It is instructive to consider the special case where all bulk sizes are the same, namely,

$$g_k = \begin{cases} 1 & k = r \\ 0 & k \ne r \end{cases}$$

Clearly, this is the simplified bulk system discussed in the beginning of this section; it corresponds exactly to the system M/E_r/1 (where we must make the minor modification, as indicated in our earlier footnote, that $\mu$ must now be replaced by $r\mu$). We find immediately that $G(z) = z^r$, and after substituting this into our solution Eq. (4.49) we find that it corresponds exactly to our earlier solution Eq. (4.26), as of course it must.
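That correspondence can be confirmed numerically; the fragment below is my own (both helper names are invented), evaluating Eq. (4.49) with $G(z) = z^r$ and service rate $r\mu$ against Eq. (4.26) at several points.

```python
def bulk_P(z, lam, mu_bulk, r):
    """Eq. (4.49) with G(z) = z**r; here mu_bulk plays the role of mu,
    and p0 = 1 - lam*r/mu_bulk since the mean bulk size is r."""
    p0 = 1 - lam * r / mu_bulk
    return mu_bulk * p0 * (1 - z) / (mu_bulk * (1 - z) - lam * z * (1 - z ** r))

def merl_P(z, lam, mu, r):
    """Eq. (4.26), the stage transform for M/E_r/1."""
    rho = lam / mu
    return r * mu * (1 - rho) * (1 - z) / (r * mu + lam * z ** (r + 1) - (lam + r * mu) * z)

lam, mu, r = 0.5, 1.0, 3
for z in (0.1, 0.35, 0.6, 0.9):
    print(z, bulk_P(z, lam, r * mu, r), merl_P(z, lam, mu, r))  # identical columns
```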
4.6. BULK SERVICE SYSTEMS

We now consider the "dual" of the previous system, namely, one with Poisson arrivals (at rate $\lambda$) in which the server serves a bulk of customers simultaneously: whenever the server becomes free he accepts $r$ customers (or the entire queue, if fewer than $r$ are waiting) for collective service, and the time required to serve the batch is exponentially distributed with parameter $\mu$. The equilibrium equations for the number of customers in the system are

$$(\lambda + \mu)p_k = \lambda p_{k-1} + \mu p_{k+r} \qquad k \ge 1$$

$$\lambda p_0 = \mu(p_1 + p_2 + \cdots + p_r) \qquad (4.51)$$

We again introduce the transform

$$P(z) = \sum_{k=0}^{\infty} p_k z^k$$
We then multiply the $k$th equation by $z^k$, sum, and identify $P(z)$ to obtain in the usual way

$$(\lambda + \mu)[P(z) - p_0] = \frac{\mu}{z^r}\left[P(z) - \sum_{k=0}^{r} p_k z^k\right] + \lambda z P(z)$$

Solving for the transform,

$$P(z) = \frac{\mu\sum_{k=0}^{r} p_k z^k - (\lambda + \mu)p_0 z^r}{\lambda z^{r+1} - (\lambda + \mu)z^r + \mu}$$
From our boundary Eq. (4.51) we see that the negative term in the numerator of this last equation may be written as

$$-z^r(\lambda p_0 + \mu p_0) = -\mu z^r\sum_{k=0}^{r} p_k$$

and so (dividing numerator and denominator by $\mu$, and noting that the $k = r$ term of the resulting sum vanishes) we have

$$P(z) = \frac{\sum_{k=0}^{r-1} p_k(z^k - z^r)}{r\rho z^{r+1} - (1 + r\rho)z^r + 1} \qquad (4.52)$$

where we have defined $\rho = \lambda/\mu r$ since, for this system, up to $r$ customers may be served simultaneously in an interval whose average length is $1/\mu$ sec.
The denominator of Eq. (4.52) is the same polynomial we met in Eq. (4.35); again it has exactly one root $z_0$ outside the unit disk, and the numerator $\sum_{k=0}^{r-1} p_k(z^k - z^r)$ must vanish at each of the $r$ roots in the closed unit disk [one of which is $z = 1$, accounting for a common factor $(1 - z)$]. Taking advantage of this we may then cancel common factors in the numerator and denominator of Eq. (4.52), the normalization $P(1) = 1$ fixing the remaining constant, to obtain

$$P(z) = \frac{1 - 1/z_0}{1 - z/z_0} \qquad (4.53)$$

This last we may invert by inspection to obtain, finally, the distribution for the number of customers in our bulk service system:

$$p_k = \left(1 - \frac{1}{z_0}\right)\left(\frac{1}{z_0}\right)^k \qquad k = 0, 1, 2, \ldots \qquad (4.54)$$
Once again we see the familiar geometric distribution appear in the solution
of our Markovian queueing systems!
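Here too everything reduces to a single root; the sketch below is my own, locating $z_0$ for the denominator of Eq. (4.52) and verifying that the geometric solution of Eq. (4.54) satisfies both the interior balance equations and the boundary equation (4.51).

```python
def find_z0(rho, r, hi=100.0, iters=200):
    """Root z0 > 1 of r*rho*z**(r+1) - (1 + r*rho)*z**r + 1 = 0, the same
    polynomial met in the Er/M/1 analysis; found by bisection."""
    f = lambda z: r * rho * z ** (r + 1) - (1 + r * rho) * z ** r + 1
    lo = 1.0 + 1e-6
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

lam, mu, r = 0.9, 0.5, 3            # rho = lam/(mu*r) = 0.6 < 1
rho = lam / (mu * r)
z0 = find_z0(rho, r)
p = [(1 - 1 / z0) * z0 ** (-k) for k in range(300)]   # Eq. (4.54)

# interior balance: (lam + mu) p_k = lam p_{k-1} + mu p_{k+r}
checks = [abs((lam + mu) * p[k] - (lam * p[k - 1] + mu * p[k + r])) for k in (1, 2, 7)]
# boundary, Eq. (4.51): lam p_0 = mu (p_1 + ... + p_r)
boundary = abs(lam * p[0] - mu * sum(p[1:r + 1]))
print(max(checks), boundary, sum(p))
```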
4.7. SERIES-PARALLEL STAGES: GENERALIZATIONS

The $r$-stage Erlangian distribution provides a coefficient of variation that is less than that of the exponential distribution [from Eq. (4.14) we see that $C_b = 1/\sqrt{r}$, whereas for $r = 1$ the exponential gives $C_b = 1$], and so in some sense Erlangian random variables are "more regular" than exponential variables. This situation is certainly less than completely general.
One direction for generalization would be to remove the restriction that one of our two basic queueing distributions must be exponential; that is, we certainly could consider the system E_{r_a}/E_{r_b}/1, in which we have an $r_a$-stage Erlangian distribution for the interarrival times and an $r_b$-stage Erlangian distribution for the service times.* On the other hand, we could attempt to generalize by broadening the class of distributions we consider beyond that of the Erlangian. This we do next.
We wish to find a stage-type arrangement that gives larger coefficients of variation than the exponential. One might consider a generalization of the $r$-stage Erlangian in which we permit each stage to have a different service rate (say, the $i$th stage has rate $\mu_i$). Perhaps this will extend the range of $C_b$ above unity. In this case we will have, instead of Eq. (4.15), a Laplace transform for the service-time pdf given by

$$B^*(s) = \left(\frac{\mu_1}{s + \mu_1}\right)\left(\frac{\mu_2}{s + \mu_2}\right)\cdots\left(\frac{\mu_r}{s + \mu_r}\right) \qquad (4.55)$$
The service-time density $b(x)$ will merely be the convolution of $r$ exponential densities, each with its own parameter $\mu_i$. The squared coefficient of variation in this case is easily shown [see Eq. (II.26), Appendix II] to be

$$C_b^2 = \frac{\sum_{i=1}^{r} 1/\mu_i^2}{\left(\sum_{i=1}^{r} 1/\mu_i\right)^2}$$

But for real $a_i \ge 0$ it is always true that $\sum_i a_i^2 \le \left(\sum_i a_i\right)^2$, since the right-hand side contains the left-hand side plus the sum of all the nonnegative cross terms. Choosing $a_i = 1/\mu_i$, we find that $C_b^2 \le 1$. Thus, unfortunately, no generalization to larger coefficients of variation is obtained this way.
We previously found that sending a customer through an increasing sequence of faster exponential stages in series tended to reduce the variability of the service time, and so one might expect that sending him through a parallel arrangement would increase the variability. This in fact is true. Let us therefore consider the two-stage parallel service system shown in Figure 4.10. The situation may be contrasted to the service structure shown in Figure 4.3. In Figure 4.10 an entering customer approaches the large oval (which represents the service facility) from the left. Upon entry into the

* We consider this shortly.
[Figure 4.10: A two-stage parallel (hyperexponential) service facility.]

facility, the customer proceeds to stage 1 (with rate $\mu_1$) with probability $\alpha_1$, or to stage 2 (with rate $\mu_2$) with probability $\alpha_2$, where

$$\alpha_1 + \alpha_2 = 1 \qquad (4.56)$$

He remains in the chosen stage for an exponentially distributed interval, and upon its completion he departs (only then may a new customer enter). The Laplace transform of the service-time pdf is therefore the probabilistic mixture

$$B^*(s) = \alpha_1\left(\frac{\mu_1}{s + \mu_1}\right) + \alpha_2\left(\frac{\mu_2}{s + \mu_2}\right)$$
[Figure 4.11: An $R$-stage parallel (hyperexponential $H_R$) service facility.]

More generally, with $R$ parallel stages, where stage $i$ has rate $\mu_i$ and is chosen with probability $\alpha_i$ $\left(\sum_{i=1}^{R}\alpha_i = 1\right)$, the mean service time is

$$\bar{x} = \sum_{i=1}^{R} \frac{\alpha_i}{\mu_i} \qquad (4.58)$$
Now Eq. (II.35), the Cauchy-Schwarz inequality, may also be expressed as follows (for $a_i$, $b_i$ real):

$$\left(\sum_i a_i b_i\right)^2 \le \left(\sum_i a_i^2\right)\left(\sum_i b_i^2\right) \qquad (4.59)$$

(This is often referred to as the Cauchy inequality.) If we make the association $a_i = \sqrt{\alpha_i}$, $b_i = \sqrt{\alpha_i}/\mu_i$, then Eq. (4.59) shows

$$\left(\sum_i \frac{\alpha_i}{\mu_i}\right)^2 \le \left(\sum_i \alpha_i\right)\left(\sum_i \frac{\alpha_i}{\mu_i^2}\right)$$

But from Eq. (4.56) the first factor on the right-hand side of this inequality is just unity; since the second moment of the $H_R$ distribution is $\overline{x^2} = \sum_i 2\alpha_i/\mu_i^2$, this result along with Eq. (4.58) permits us to write

$$C_b^2 = \frac{\overline{x^2}}{(\bar{x})^2} - 1 \ge 1 \qquad (4.60)$$

which proves the desired result.
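A quick numerical confirmation (my own sketch, with invented names) is to sample random hyperexponential parameter sets and check that the squared coefficient of variation, computed from the first two moments, never falls below one.

```python
import random

def hyperexp_cb2(alphas, mus):
    """C_b^2 for an H_R facility: first moment sum(a_i/mu_i),
    second moment sum(2*a_i/mu_i**2)."""
    m1 = sum(a / m for a, m in zip(alphas, mus))
    m2 = sum(2 * a / m ** 2 for a, m in zip(alphas, mus))
    return m2 / m1 ** 2 - 1

random.seed(3)
worst = float("inf")
for _ in range(1000):
    R = random.randint(2, 5)
    w = [random.random() for _ in range(R)]
    total = sum(w)
    alphas = [x / total for x in w]
    mus = [random.uniform(0.1, 10.0) for _ in range(R)]
    worst = min(worst, hyperexp_cb2(alphas, mus))
print(worst)   # never below 1, in line with Eq. (4.60)
```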
One might expect that an analysis by the method of stages exists for the systems M/H_R/1, H_R/M/1, and H_{R_a}/H_{R_b}/1, and this is indeed true. The reason that the analysis can proceed is that we may take account of the nonexponential character of the service (or arrival) facility merely by specifying which stage within the service (or arrival) facility the customer currently occupies. This information, along with a statement regarding the number of customers in the system, creates a Markov chain, which may then be studied much as was done earlier in this chapter.

For example, the system M/H_2/1 would have the state-transition-rate diagram shown in Figure 4.12. In this figure the designation $k_i$ implies that the system contains $k$ customers and that the customer in service is located in stage $i$ ($i = 1, 2$). The transitions for higher-numbered states are identical to the transitions between states $1_i$ and $2_i$.
We are now led directly into the following generalization of series stages and parallel stages; specifically, we are free to combine series and parallel stages into a single series-parallel service facility.

[Figure 4.13: A series-parallel service facility.]

Recall that the $R$-stage parallel (hyperexponential) facility of Figure 4.11 yields the service-time pdf

$$b(x) = \sum_{i=1}^{R}\alpha_i\mu_i e^{-\mu_i x} \qquad x \ge 0 \qquad (4.61)$$
[Figure 4.14: A general series-parallel service facility with rates $\mu_1, \mu_2, \ldots$]

For such a series-parallel arrangement, in which a customer traverses the first $i$ exponential stages and then departs with probability $A_i$, the service-time transform takes the form

$$B^*(s) = \sum_i A_i\prod_{j=1}^{i}\frac{\mu_j}{s + \mu_j} \qquad (4.63)$$
MARKOVIAN QU EU ES IN EQUILIBRIUM
The most general pdf we might consider for the service time may have poles located anywhere in the negative half s-plane [that is, for Re(s) < 0]. Cox [COX 55] has studied this problem and suggests that complex values for the exponential parameters $\mu_i$ be permitted; the argument is that whereas this corresponds to no physically realizable exponential stage, so long as we provide poles in complex-conjugate pairs, the entire service facility will have a real pdf, which corresponds to the feasible cases. If we permit complex-conjugate pairs of poles, then we have complete generality in synthesizing any rational function of $s$ for our service-time transform $B^*(s)$. In addition, we have in effect outlined a method of solving these systems by keeping track of the state of the service facility. Moreover, we can similarly construct an interarrival-time distribution from series-parallel stages, and thereby we are capable of considering any G/G/1 system where the distributions have transforms that are rational functions of $s$.
It is further true that any nonrational function of $s$ may be approximated arbitrarily closely with rational functions.* Thus in principle we have solved a very general problem. Let us discuss this method of solution. The state description clearly will be the number of customers in the system, the stage in which the arriving customer finds himself within the (stage-type) arriving box, and the stage in which the customer finds himself in service. From this we may draw a (horribly complicated) state-transition diagram. Once we have this diagram we may (by inspection) write down the equilibrium equations in a rather straightforward manner; this large set of equations will typically have many boundary conditions. However, these equations will all be linear in the unknowns, and so the solution method is straightforward (albeit extremely tedious). What more natural setup for a computer solution could one ask for? Indeed, a digital computer is extremely adept at solving large sets of linear equations (such a task is much easier for a digital computer to handle than is a small set of nonlinear equations). In carrying out the digital solution of this (typically infinite) set of linear equations, we must reduce it to a finite set; this can only be done in an approximate way, by first deciding at what point we are satisfied in truncating the sequence $p_0, p_1, p_2, \ldots$. Then we may solve the finite set and perhaps extrapolate the
* In a real sense, then, we are faced with an approximation problem: how may we "best" approximate a given distribution by one that has a rational transform? If we are given a pdf in numerical form, then Prony's method [WHIT 44] is one acceptable procedure. On the other hand, if the pdf is given analytically, it is difficult to describe a general procedure for suitable approximation. Of course one would like to make these approximations with the fewest number of stages possible. We comment that if one wishes to fit the first and second moments of a given distribution by the method of stages, then the number of stages cannot be significantly less than $1/C_b^2$; unfortunately, this implies that when the distribution tends to concentrate around a fixed value, the number of stages required grows rather quickly.
solution to the infinite set; all this is in the way of approximation, and hopefully we are able to carry out the computation far enough so that the neglected terms are indeed negligible.
One must not overemphasize the usefulness of this procedure; this solution method is not as yet automated, but it does at least in principle provide a method of approach. Other analytic methods for handling the more complex queueing situations are discussed in the balance of this book.
4.8. NETWORKS OF MARKOVIAN QUEUES

[Figure 4.15: Two queues in tandem; node one is fed by a Poisson stream at rate $\lambda$.]

Consider the two-node tandem network of Figure 4.15, in which customers arrive at node one in a Poisson stream at rate $\lambda$ and are served there by an exponential server of rate $\mu$; upon leaving node one they proceed directly to node two, which consists of an
exponential server, also of rate $\mu$. The basic question is to solve for the interarrival-time distribution feeding node two; this certainly will be equivalent to the interdeparture-time distribution from node one. Let $d(t)$ be the pdf describing the interdeparture process from node one and, as usual, let its Laplace transform be denoted by $D^*(s)$. Let us now calculate $D^*(s)$. When a customer departs from node one, either a second customer is available in the queue and ready to be taken into service immediately, or the queue is empty. In the first case, the time until this next customer departs from node one will be distributed exactly as a service time, and in that case we will have

$$D^*(s)\big|_{\text{node one nonempty}} = B^*(s)$$
On the other hand, if the node is empty upon this first customer's departure, then we must wait for the sum of two intervals: the first being the time until the second customer arrives, and the next being his service time; since these two intervals are independently distributed, the pdf of the sum must be the convolution of the pdf's for each. Certainly, then, the transform of the sum's pdf will be the product of the transforms of the individual pdf's, and so we have

$$D^*(s)\big|_{\text{node one empty}} = \frac{\lambda}{s + \lambda}\,B^*(s)$$
where we have given the explicit expression for the transform of the interarrival-time density. Since we have an exponential server we may also write $B^*(s) = \mu/(s + \mu)$; furthermore, as we shall discuss in Chapter 5, the probability of a departure leaving behind an empty system is the same as the probability of an arrival finding an empty system, namely, $1 - \rho$. This permits us to write down the unconditional transform for the interdeparture-time density as

$$D^*(s) = (1 - \rho)\,D^*(s)\big|_{\text{node one empty}} + \rho\,D^*(s)\big|_{\text{node one nonempty}}$$

$$= (1 - \rho)\left(\frac{\lambda}{s + \lambda}\right)\left(\frac{\mu}{s + \mu}\right) + \rho\left(\frac{\mu}{s + \mu}\right) = \frac{\lambda}{s + \lambda} \qquad (4.65)$$

Inverting this transform, we have

$$d(t) = \lambda e^{-\lambda t} \qquad t \ge 0$$
Thus we find the remarkable conclusion that the interdeparture times are exponentially distributed with the same parameter as the interarrival times! In other words (in the case of a stable stationary queueing system), a Poisson process driving an exponential server generates a Poisson process for departures. This startling result is usually referred to as Burke's theorem [BURK 56]; a number of others also studied the problem (see, for example, the discussion in [SAAT 65]). In fact, Burke's theorem says more, namely, that the steady-state output of a stable M/M/m queue with input parameter $\lambda$ and service-time parameter $\mu$ for each of the $m$ channels is in fact a Poisson process at the same rate $\lambda$. Burke also established that the output process is independent of the other processes in the system. It has also been shown that the M/M/m system is the only such FCFS system with this property. Returning now to Figure 4.15 we see therefore that node two is driven by an independent Poisson arrival process and therefore it too behaves like an M/M/1 system and so may be analyzed independently of node one. In fact Burke's theorem tells us that we may connect many multiple-server nodes (each server with an exponential pdf) together in a feedforward* network fashion and still preserve this node-by-node decomposition.
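Burke's theorem lends itself to a quick numerical check. The sketch below (the function name, seed, and parameter values are my own choices, not the text's) steps an M/M/1 queue between arrival and departure events and then examines the interdeparture gaps; by Eq. (4.65) they should look exponential with parameter $\lambda$:

```python
import random

def mm1_interdepartures(lam, mu, n, seed=1):
    """Simulate a stable M/M/1 queue and return n interdeparture times."""
    rng = random.Random(seed)
    t, queue = 0.0, 0
    next_arr = rng.expovariate(lam)
    next_dep = float("inf")
    deps = []
    while len(deps) < n + 1:
        if next_arr < next_dep:              # next event is an arrival
            t = next_arr
            queue += 1
            if queue == 1:                   # server was idle: begin service
                next_dep = t + rng.expovariate(mu)
            next_arr = t + rng.expovariate(lam)
        else:                                # next event is a departure
            t = next_dep
            queue -= 1
            deps.append(t)
            next_dep = t + rng.expovariate(mu) if queue else float("inf")
    return [b - a for a, b in zip(deps, deps[1:])]

gaps = mm1_interdepartures(lam=0.5, mu=1.0, n=200_000)
mean = sum(gaps) / len(gaps)
cv2 = sum((g - mean) ** 2 for g in gaps) / len(gaps) / mean ** 2
print(round(mean, 2), round(cv2, 2))  # near 2.0 (= 1/lambda) and 1.0 (exponential)
```

The squared coefficient of variation near 1 is what distinguishes an exponential gap distribution from merely having the right mean.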
Jackson [JACK 57] addressed himself to this question by considering an arbitrary network of queues. The system he studied consists of $N$ nodes, where the $i$th node consists of $m_i$ exponential servers each with parameter $\mu_i$; further, the $i$th node receives arrivals from outside the system in the form of a Poisson process at rate $\gamma_i$. Thus if $N = 1$ then we have an M/M/m system. Upon leaving the $i$th node a customer then proceeds to the $j$th node with probability $r_{ij}$; this formulation permits the case where $r_{ii} > 0$. On the other hand, after completing service in the $i$th node the probability that the customer departs from the network (never to return again) is given by $1 - \sum_{j=1}^{N} r_{ij}$. We must calculate the total average arrival rate of customers to a given node. To do so, we must sum the (Poisson) arrivals from outside the system plus the arrivals (not necessarily Poisson) from all internal nodes; that is, denoting the total average arrival rate to node $i$ by $\lambda_i$, we easily find that this set of parameters must satisfy the following equations:
$$\lambda_i = \gamma_i + \sum_{j=1}^{N} \lambda_j r_{ji}, \qquad i = 1, 2, \ldots, N \qquad (4.66)$$
In order for all nodes in this system to represent ergodic Markov chains we require that $\lambda_i < m_i \mu_i$ for all $i$; again we caution the reader not to confuse the nodes in this discussion with the system states of each node from our earlier discussions.

* Specifically we do not permit feedback paths since these may destroy the Poisson nature of the departure stream. In spite of this, the following discussion of Jackson's work points out that even networks with feedback are such that the individual nodes behave as if they were fed totally by Poisson arrivals, when in fact they are not.
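Eq. (4.66) is a linear system in the $\lambda_i$; since the routing matrix is substochastic it can be solved by simple fixed-point iteration. A sketch on a hypothetical three-node network (the external rates and routing probabilities below are invented for illustration):

```python
def traffic_rates(gamma, R, iters=1000):
    """Solve lambda_i = gamma_i + sum_j lambda_j * R[j][i]  (Eq. 4.66)
    by fixed-point iteration; converges when each row of R sums to < 1."""
    N = len(gamma)
    lam = list(gamma)
    for _ in range(iters):
        lam = [gamma[i] + sum(lam[j] * R[j][i] for j in range(N))
               for i in range(N)]
    return lam

# hypothetical 3-node open network; R[i][j] = r_ij, row sums < 1
gamma = [1.0, 0.0, 0.5]
R = [[0.0, 0.6, 0.3],
     [0.2, 0.0, 0.5],
     [0.0, 0.4, 0.0]]
lam = traffic_rates(gamma, R)
print([round(x, 4) for x in lam])  # [1.2805, 1.4024, 1.5854]
```

Each iteration is a contraction in the $\ell_1$ norm with factor equal to the largest row sum of $R$ (here 0.9), so the iterates converge geometrically to the unique solution.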
That is, each node behaves as if it were an independent M/M/m system, the equilibrium joint distribution factoring as
$$p(k_1, k_2, \ldots, k_N) = p_1(k_1)\,p_2(k_2)\cdots p_N(k_N) \qquad (4.67)$$
where $p_i(k_i)$ is given as the solution to the classical M/M/m system [see, for example, Eqs. (3.37)-(3.39) with the obvious change in notation]! This last result is commonly referred to as Jackson's theorem. Once again we see the "product" form of solution for Markovian queues in equilibrium.
A modification of Jackson's network of queues was considered by Gordon and Newell [GORD 67]. The modification they investigated was that of a closed Markovian network in the sense that a fixed and finite number of customers, say $K$, are considered to be in the system and are trapped in that system in the sense that no others may enter and none of these may leave; this corresponds to Jackson's case in which $\sum_{j=1}^{N} r_{ij} = 1$ and $\gamma_i = 0$ for all $i$. (An interesting example of this class of systems known as cyclic queues had been considered earlier by Koenigsberg [KOEN 58]; a cyclic queue is a tandem queue in which the last stage is connected back to the first.) In the general case considered by Gordon and Newell we do not quite expect a product solution, since there is a dependency among the elements of the state vector $(k_1, k_2, \ldots, k_N)$ as follows:
$$\sum_{i=1}^{N} k_i = K \qquad (4.68)$$
As is the case for Jackson's model we assume that this discrete-state Markov process is irreducible and therefore a unique equilibrium probability distribution exists for $p(k_1, k_2, \ldots, k_N)$. In this model, however, there is a finite number of states; in particular it is easy to see that the number of distinguishable states of the system is equal to the number of ways in which one can place $K$ customers among the $N$ nodes, and is equal to the binomial coefficient
$$\binom{N+K-1}{N-1}$$
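This count is just the number of compositions of $K$ into $N$ nonnegative parts; a quick sketch checking it by brute-force enumeration (the example values are arbitrary):

```python
from itertools import product
from math import comb

def n_states(N, K):
    """Number of ways to place K customers among N nodes."""
    return comb(N + K - 1, N - 1)

def enumerate_states(N, K):
    """Brute-force list of all state vectors (k_1, ..., k_N) with sum K."""
    return [k for k in product(range(K + 1), repeat=N) if sum(k) == K]

print(n_states(3, 2), len(enumerate_states(3, 2)))  # 6 6
```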
The following equations describe the behavior of the equilibrium distribution of customers in this closed system and may be written by inspection as
$$p(k_1, \ldots, k_N)\sum_{i=1}^{N}\mu_i\,\alpha_i(k_i) = \sum_{i=1}^{N}\sum_{j=1}^{N}\delta_{k_j-1}\,\mu_i\,\alpha_i(k_i+1)\,r_{ij}\,p(k_1, \ldots, k_i+1, \ldots, k_j-1, \ldots, k_N) \qquad (4.69)$$
where the discrete unit step-function defined in Appendix I takes the form
$$\delta_k = \begin{cases} 1 & k = 0, 1, 2, \ldots \\ 0 & k < 0 \end{cases} \qquad (4.70)$$
and is included in the equilibrium equations to indicate the fact that the service rate must be zero when a given node is empty; furthermore we define
$$\alpha_i(k_i) = \begin{cases} k_i & k_i < m_i \\ m_i & k_i \ge m_i \end{cases}$$
which merely gives the number of customers in service in the $i$th node when there are $k_i$ customers at that node. As usual the left-hand side of Eq. (4.69) describes the flow of probability out of state $(k_1, k_2, \ldots, k_N)$ whereas the right-hand side accounts for the flow of probability into that state from neighboring states. Let us proceed to write down the solution to these equations. We define the function $\beta_i(k_i)$ as follows:
$$\beta_i(k_i) = \begin{cases} k_i! & k_i \le m_i \\ m_i!\,m_i^{\,k_i - m_i} & k_i > m_i \end{cases}$$
Consider a set of numbers $\{x_i\}$, which are solutions to the following set of linear equations:
$$\mu_i x_i = \sum_{j=1}^{N} \mu_j x_j r_{ji}, \qquad i = 1, 2, \ldots, N \qquad (4.71)$$
In terms of these quantities the equilibrium distribution may be written as
$$p(k_1, \ldots, k_N) = \frac{1}{G(K)}\prod_{i=1}^{N}\frac{x_i^{\,k_i}}{\beta_i(k_i)} \qquad (4.72)$$
where the normalization constant is
$$G(K) = \sum_{\mathbf k \in A}\;\prod_{i=1}^{N}\frac{x_i^{\,k_i}}{\beta_i(k_i)} \qquad (4.73)$$
Here we imply that the summation is taken over all state vectors $\mathbf k = (k_1, \ldots, k_N)$ that lie in the set $A$, and this is the set of all state vectors for which Eq. (4.68) holds. This then is the solution to the closed finite queueing network problem, and we observe once again that it has the product form.
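For single-server nodes ($m_i = 1$, hence $\beta_i(k_i) = 1$) this product form is easy to evaluate by enumerating the set $A$ directly. A sketch (the rates are arbitrary; for a purely cyclic routing, $x_i = 1/\mu_i$ satisfies Eq. (4.71) since every $\mu_i x_i$ is then the same constant):

```python
from itertools import product

def product_form(x, K):
    """Closed-network product-form solution for single-server nodes:
    p(k) = x_1^k_1 * ... * x_N^k_N / G(K), with G(K) as in Eq. (4.73)."""
    N = len(x)
    states = [k for k in product(range(K + 1), repeat=N) if sum(k) == K]
    weight = {}
    for k in states:
        w = 1.0
        for xi, ki in zip(x, k):
            w *= xi ** ki
        weight[k] = w
    G = sum(weight.values())            # normalization constant G(K)
    return {k: w / G for k, w in weight.items()}

# e.g. a cyclic three-node network (node 1 -> 3 -> 2 -> 1)
mu = [1.0, 2.0, 4.0]
p = product_form([1.0 / m for m in mu], K=2)
print(round(p[2, 0, 0], 4))  # 0.4571
```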
We may expose the product formulation somewhat further by considering the case where $K \to \infty$. As it turns out, the quantities $x_i/m_i$ are critical in this calculation; we will assume that there exists a unique such ratio that is largest, and we will renumber the nodes such that $x_1/m_1 > x_i/m_i$ ($i \ne 1$). It can then be shown that $p(k_1, k_2, \ldots, k_N) \to 0$ for any state in which $k_1 < \infty$. This implies that an infinite number of customers will form in node one, and this node is often referred to as the "bottleneck" for the given network. On the other hand, however, the marginal distribution $p(k_2, \ldots, k_N)$ is well-defined in the limit and takes the form
(4.74)
Thus we see the product solution directly for this marginal distribution and, of course, it is similar to Jackson's theorem in Eq. (4.67); note that in one case we have an open system (one that permits external arrivals) and in the other case we have a closed system. As we shall see in Chapter 4, Volume II, this model has significant applications in time-shared and multi-access computer systems.

Jackson [JACK 63] earlier considered an even more general open queueing system, which includes the closed system just considered as a special case. The new wrinkles introduced by Jackson are, first, that the customer arrival process is permitted to depend upon the total number of customers in the system (using this, he easily creates closed networks) and, second, that the service rate at any node may be a function of the number of customers in that node. Thus defining
$$S(\mathbf k) \triangleq k_1 + k_2 + \cdots + k_N$$
we then permit the total arrival rate to be a function of $S(\mathbf k)$ when the system state is given by the vector $\mathbf k$. Similarly we define the exponential service rate at node $i$ to be $\mu_{ik_i}$ when there are $k_i$ customers at that node (including those in service). As earlier, we have the node transition probabilities $r_{ij}$ ($i, j = 1, 2, \ldots, N$) with the following additional definitions: $r_{0i}$ is the probability that the next externally generated arrival will enter the network at node $i$; $r_{i,N+1}$ is the probability that a customer leaving node $i$ departs from the system; and $r_{0,N+1}$ is the probability that the next arrival will require no service from the system and leave immediately upon arrival. Thus we see that in this case $\gamma_i = r_{0i}\,\gamma(S(\mathbf k))$, where $\gamma(S(\mathbf k))$ is the total external arrival rate to the system [conditioned on the number of customers $S(\mathbf k)$ at the moment] from our external Poisson process. It can be seen that the probability of a customer arriving at node $i_1$ and then passing through the node sequence $i_2, i_3, \ldots, i_n$ and then departing is given by $r_{0i_1} r_{i_1 i_2} r_{i_2 i_3} \cdots r_{i_{n-1} i_n} r_{i_n, N+1}$. Rather than seek the solution of Eq. (4.66) for the traffic rates, since they are functions of the total number of customers in the system, we instead seek the solution for the following equivalent set:
$$e_i = r_{0i} + \sum_{j=1}^{N} e_j r_{ji}, \qquad i = 1, 2, \ldots, N \qquad (4.75)$$
[In the case where the arrival rates are independent of the number in the system, Eqs. (4.66) and (4.75) differ by a multiplicative factor equal to the total arrival rate of customers to the system.] We assume that the solution to Eq. (4.75) exists, is unique, and is such that $e_i \ge 0$ for all $i$; this is equivalent to assuming that with probability 1 a customer's journey through the network is of finite length. $e_i$ is, in fact, the expected number of times a customer will visit node $i$ in passing through the network.

Let us define the time-dependent state probabilities as
$$P_{\mathbf k}(t) = P[\text{system (vector) state at time } t \text{ is } \mathbf k] \qquad (4.76)$$
By our usual methods we may write down the differential-difference equations governing these probabilities as follows:
$$\frac{dP_{\mathbf k}(t)}{dt} = -\left[\gamma(S(\mathbf k))\sum_{i=1}^{N} r_{0i} + \sum_{i=1}^{N}\mu_{ik_i}(1 - r_{ii})\right]P_{\mathbf k}(t) + \sum_{i=1}^{N}\gamma(S(\mathbf k)-1)\,r_{0i}\,P_{\mathbf k(i-)}(t) + \sum_{i=1}^{N}\mu_{i,k_i+1}\,r_{i,N+1}\,P_{\mathbf k(i+)}(t) + \sum_{i=1}^{N}\sum_{\substack{j=1 \\ j \ne i}}^{N}\mu_{j,k_j+1}\,r_{ji}\,P_{\mathbf k(i,j)}(t) \qquad (4.77)$$
where terms are omitted when any component of the vector argument goes negative; $\mathbf k(i-) = \mathbf k$ except for its $i$th component, which takes on the value $k_i - 1$; $\mathbf k(i+) = \mathbf k$ except for its $i$th component, which takes on the value $k_i + 1$; and $\mathbf k(i,j) = \mathbf k$ except that its $i$th component is $k_i - 1$ and its $j$th component is $k_j + 1$, where $i \ne j$. Complex as this notation appears, its interpretation should be rather straightforward for the reader. Jackson shows that the equilibrium distribution is unique (if it exists) and defines it in our earlier notation to be $\lim_{t \to \infty} P_{\mathbf k}(t) \triangleq p_{\mathbf k} \triangleq p(k_1, k_2, \ldots, k_N)$. In order to give the equilibrium solution for $p_{\mathbf k}$ we must unfortunately define the following further notation:
$$F(K) \triangleq \prod_{S(\mathbf k)=0}^{K-1} \gamma(S(\mathbf k)), \qquad K = 0, 1, 2, \ldots \qquad (4.78)$$
$$f(\mathbf k) \triangleq \prod_{i=1}^{N}\prod_{j=1}^{k_i}\frac{e_i}{\mu_{ij}} \qquad (4.79)$$
$$H(K) \triangleq \sum_{\mathbf k \in A} f(\mathbf k) \qquad (4.80)$$
$$G \triangleq \sum_{K=0}^{\infty} F(K)\,H(K) \qquad (4.81)$$
where the set $A$ shown in Eq. (4.80) is the same as that defined for Eq. (4.73). In terms of these definitions, Jackson's more general theorem states that if $G < \infty$ then a unique equilibrium-state probability distribution exists for the general state-dependent networks and is given by
$$p_{\mathbf k} = \frac{f(\mathbf k)\,F(S(\mathbf k))}{G} \qquad (4.82)$$
Again we detect the product form of solution. It is also possible to show that in the case when arrivals are independent of the total number in the system [that is, $\gamma \triangleq \gamma(S(\mathbf k))$] then even in the case of state-dependent service rates Jackson's first theorem applies, namely, that the joint pdf factors into the product of the individual pdf's given in Eq. (4.67). In fact $p_i(k_i)$ turns out to be the same as the probability distribution for the number of customers in a single-node system where arrivals come from a Poisson process at rate $\gamma e_i$ and with the state-dependent service rates $\mu_{ik}$, such as we have derived for our general birth-death process in Chapter 3. Thus one impact of Jackson's second theorem is that for the constant-arrival-rate case, the equilibrium probability distributions of number of customers in the system at individual centers are independent of other centers; in addition, each of these distributions is identical to the well-known single-node service center with the same parameters.* A remarkable result!
This last theorem is perhaps as far as one can go with simple Markovian networks, since it seems to extend Burke's theorem in its most general sense.† When one relaxes the Markovian assumption on arrivals and/or service times, then extreme complexity in the interdeparture process arises not only from its marginal distribution, but also from its lack of independence on other state variables.

These Markovian queueing networks lead to rather depressing sets of (linear) system equations; this is due to the enormous (yet finite) state description. It is indeed remarkable that such systems do possess reasonably straightforward solutions. The key to solution lies in the observation that these systems may be represented as Markovian population processes, as neatly described by Kingman [KING 69] and as recently pursued by Chandy [CHAN 72]. In particular, a Markov population process is a continuous-time Markov chain over the set of finite-dimensional state vectors $\mathbf k = (k_1, k_2, \ldots, k_N)$ for which transitions are permitted only between states‡: $\mathbf k$ and $\mathbf k(i+)$ (an external arrival at node $i$); $\mathbf k$ and $\mathbf k(i-)$ (an external departure from node $i$); and $\mathbf k$ and $\mathbf k(i,j)$ (an internal transfer from node $i$ to node $j$). Kingman gives an elegant discussion of the interesting classes and properties of these processes (using the notion and properties of reversible Markov chains). Chandy discusses some of these issues by observing that the equilibrium probabilities for the system states obey not only the global-balance equations that we have so far seen (and which typically lead to product-form solutions) but also that this system of equations may be decomposed into many sets of smaller systems of equations, each of which is simpler to solve. This transformed set is referred to as the set of "local"-balance equations, which we now proceed to discuss.

The concept of local balance is most valuable when one deals with a network of queues. However, the concept does apply to single-node Markovian queues, and in fact we have already seen an example of local balance at play.
* This model also permits one to handle the closed queueing systems studied by Gordon and Newell. In order to create the constant total number of customers one need merely set $\gamma(k) = 0$ for $k \ge K$ and $\gamma(K-1) = \infty$, where $K$ is the fixed number one wishes to contain within the system. In order to keep the node transition probabilities identical in the open and closed systems, let us denote the former as earlier by $r_{ij}$ and the latter now by $r_{ij}'$; to make the limit of Jackson's general system equivalent to the closed system of Gordon and Newell we then require $r_{ij}' = r_{ij} + (r_{i,N+1})(r_{0j})$.
† In Chapter 4, Volume II, we describe some recent results that do in fact extend the model to handle different customer classes and different service disciplines at each node (permitting, in some cases, more general service-time distributions).
‡ See the definitions following Eq. (4.77).
As an example, consider a closed cyclic network of three single-server exponential nodes (rates $\mu_1$, $\mu_2$, $\mu_3$) around which $K = 2$ customers circulate. [Figure: Node 1 feeds Node 3, Node 3 feeds Node 2, and Node 2 feeds back to Node 1.]
Clearly we have $r_{13} = r_{32} = r_{21} = 1$ and $r_{ij} = 0$ otherwise. Our state description is merely the triplet $(k_1, k_2, k_3)$, where as usual $k_i$ gives the number of customers in node $i$ and where we require, of course, that $k_1 + k_2 + k_3 = 2$. For this network we will therefore have exactly
$$\binom{N+K-1}{N-1} = \binom{4}{2} = 6$$
states, namely $(2,0,0)$, $(0,2,0)$, $(0,0,2)$, $(1,1,0)$, $(1,0,1)$, and $(0,1,1)$. The global-balance equations for these six states are
$$\mu_1 p(2,0,0) = \mu_2 p(1,1,0) \qquad (4.83)$$
$$\mu_2 p(0,2,0) = \mu_3 p(0,1,1) \qquad (4.84)$$
$$\mu_3 p(0,0,2) = \mu_1 p(1,0,1) \qquad (4.85)$$
$$(\mu_1 + \mu_2)\,p(1,1,0) = \mu_2 p(0,2,0) + \mu_3 p(1,0,1) \qquad (4.86)$$
$$(\mu_2 + \mu_3)\,p(0,1,1) = \mu_3 p(0,0,2) + \mu_1 p(1,1,0) \qquad (4.87)$$
$$(\mu_1 + \mu_3)\,p(1,0,1) = \mu_2 p(0,1,1) + \mu_1 p(2,0,0) \qquad (4.88)$$
Each of these global-balance equations is of the form whereby the left-hand side represents the flow out of a state and the right-hand side represents the flow into that state. Equations (4.83)-(4.85) are already local-balance equations, as we shall see; Eqs. (4.86)-(4.88) have been written so that the first term on the left-hand side of each equation balances the first term on the right-hand side of the equation, and likewise for the second terms. Thus Eq. (4.86) gives rise to the following local-balance equations:
$$\mu_1 p(1,1,0) = \mu_2 p(0,2,0) \qquad (4.89)$$
$$\mu_2 p(1,1,0) = \mu_3 p(1,0,1) \qquad (4.90)$$
Note, for example, that Eq. (4.89) takes the rate out of state $(1,1,0)$ due to a departure from node 1 and equates it to the rate into that state due to arrivals at node 1; similarly, Eq. (4.90) does likewise for departures and arrivals at node 2. This is the principle of local balance, and we see therefore that Eqs. (4.83)-(4.85) are already of this form. Thus we generate nine local-balance equations* (four of which must therefore be redundant when we consider the conservation of probability), each of which is extremely simple and therefore permits a straightforward solution to be found. If this set of equations does indeed have a solution, then they certainly guarantee that the global equations are satisfied and therefore that the solution we have found is the unique solution to the original global equations. The reader may easily verify the following solution:
easily verify the following solution :
\,
p(l , 0, I) = fil
- P( 2,0,0)
fi 3
pel, 1,0)
= fil p(2, 0, 0)
fi'
_ (/l l) 2
p(O, 1, 1) -
p(_ , 0, 0)
fl 2fl3
p(O , 0,
2)= (~rp(2, 0, 0)
fl l)2p( 2, 0, 0)
p(O, 2, 0) = ( ;:.
p(2 , 0, 0) = [1
+ PI +
fl3
!!:l
fl 3
(fll)1 fl 2
(4.91)
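The claimed solution is easy to check numerically against the balance equations (4.83)-(4.88); a quick sketch with arbitrary service rates (the variable names are mine):

```python
# verify that the solution (4.91) satisfies the balance equations (4.83)-(4.88)
mu1, mu2, mu3 = 1.0, 2.0, 4.0      # arbitrary service rates

p200 = 1.0                         # work relative to p(2,0,0); normalization cancels
p110 = (mu1 / mu2) * p200
p101 = (mu1 / mu3) * p200
p020 = (mu1 / mu2) ** 2 * p200
p011 = mu1 ** 2 / (mu2 * mu3) * p200
p002 = (mu1 / mu3) ** 2 * p200

checks = [
    (mu1 * p200, mu2 * p110),                        # (4.83)
    (mu2 * p020, mu3 * p011),                        # (4.84)
    (mu3 * p002, mu1 * p101),                        # (4.85)
    ((mu1 + mu2) * p110, mu2 * p020 + mu3 * p101),   # (4.86)
    ((mu2 + mu3) * p011, mu3 * p002 + mu1 * p110),   # (4.87)
    ((mu1 + mu3) * p101, mu2 * p011 + mu1 * p200),   # (4.88)
]
print(all(abs(a - b) < 1e-12 for a, b in checks))  # True
```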
Had we allowed all possible transitions among nodes (rather than the cyclic behavior in this example) then the state-transition-rate diagram would have

* The reader should write them out directly from Figure 4.17.
[Figure 4.18: State-transition-rate diagram for an arbitrarily connected network with N = 3, K = 4.]
permitted transitions in both directions where now only unidirectional transitions are permitted; however, it will always be true that only transitions to nearest-neighbor states (in this two-dimensional diagram) are permitted, so that such a diagram can always be drawn in a planar fashion. For example, had we allowed four customers in an arbitrarily connected three-node network, then the state-transition-rate diagram would have been as shown in Figure 4.18. In this diagram we represent possible transitions between nodes by an undirected branch (representing two one-way branches in opposite directions). Also, we have collected together sets of branches by joining them with a heavy line, and these are meant to represent branches whose contributions appear in the same local-balance equation. These diagrams can be extended to higher dimensions when there are more than three nodes in the system. In particular, with four nodes we get a tetrahedron (that is, a three-dimensional simplex). In general, with $N$ nodes we will get an $(N-1)$-dimensional simplex with $K + 1$ nodes along each edge (where $K$ = number of customers in the closed system). We note in these diagrams that all nodes lying in a given straight line (parallel to any base of the simplex) maintain one component of the state vector at a constant value, and that this value increases or decreases by unity as one moves to a parallel set of nodes. The local-balance equations are identified as balancing flow in that set of branches that connects a given node on one of these constant lines to all other nodes on that constant line adjacent and parallel to this node, and that decreases by unity that component that had been held constant. In summary, then, the local-balance equations are trivial to write down, and if one can succeed in finding a solution that satisfies them, then one has found the solution to the global-balance equations as well!
As we see, most of these Markovian networks lead to rather complex systems of linear equations. Wallace and Rosenberg [WALL 66] propose a numerical solution method for a large class of these equations which is computationally efficient. They discuss a computer program designed to evaluate the equilibrium probability distributions of state variables in very large finite Markovian queueing networks. Specifically, it is designed to solve the equilibrium equations of the form given in Eqs. (2.50) and (2.116), namely, $\boldsymbol\pi = \boldsymbol\pi P$ and $\boldsymbol\pi Q = 0$. The procedure is of the "power-iteration" type such that if $\boldsymbol\pi(i)$ is the $i$th iterate then $\boldsymbol\pi(i+1) = \boldsymbol\pi(i)R$ is the $(i+1)$th iterate; the matrix $R$ is either equal to the matrix $\alpha P + (1-\alpha)I$ (where $\alpha$ is a scalar) or equal to the matrix $\beta Q + I$ (where $\beta$ is a scalar and $I$ is the identity matrix), depending upon which of the two above equations is to be solved. The scalars $\alpha$ and $\beta$ are chosen carefully so as to give efficient convergence to the solution of these equations. The speed of solution is quite remarkable, and the reader is referred to [WALL 66] and its references for further details.
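A sketch of the power-iteration idea for the $\boldsymbol\pi Q = 0$ form (the generator below is a hypothetical truncated M/M/1 example, not taken from [WALL 66]; here $\beta$ is simply chosen small enough that $\beta Q + I$ is a stochastic matrix, rather than by the careful rules of that paper):

```python
def equilibrium_pi(Q, beta, iters=2000):
    """Power iteration pi(i+1) = pi(i) @ (beta*Q + I), solving pi Q = 0.
    Q is an infinitesimal generator (rows sum to 0); beta must satisfy
    beta * |Q[i][i]| <= 1 so that R = beta*Q + I is stochastic."""
    n = len(Q)
    R = [[beta * Q[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * R[i][j] for i in range(n)) for j in range(n)]
    return pi

# hypothetical generator: M/M/1 birth-death chain truncated at 3 customers
lam, mu = 1.0, 2.0
Q = [[-lam,        lam,          0,            0],
     [  mu, -(lam + mu),       lam,            0],
     [   0,          mu, -(lam + mu),        lam],
     [   0,           0,          mu,        -mu]]
pi = equilibrium_pi(Q, beta=0.3)
print([round(x, 4) for x in pi])  # [0.5333, 0.2667, 0.1333, 0.0667], geometric with rho = 1/2
```

Because each row of $\beta Q + I$ sums to 1 and has nonnegative entries, the iteration is just repeated application of a stochastic matrix, whose stationary vector coincides with the solution of $\boldsymbol\pi Q = 0$.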
Thus ends our study of purely Markovian systems in equilibrium. The unifying feature throughout Chapters 3 and 4 has been that these systems give rise to product-type solutions; one is therefore urged to look for solutions of this form whenever Markovian queueing systems are encountered. In the next chapter we permit either $A(t)$ or $B(x)$ (but not both) to be of arbitrary form, requiring the other to remain in exponential form.
REFERENCES
BURK 56  Burke, P. J., "The Output of a Queuing System," Operations Research, 4, 699-704 (1956).
CHAN 72  Chandy, K. M., "The Analysis and Solutions for General Queueing Networks," Proc. Sixth Annual Princeton Conference on Information Sciences and Systems, Princeton University, March 1972.
COX 55   Cox, D. R., "A Use of Complex Probabilities in the Theory of Stochastic Processes," Proceedings of the Cambridge Philosophical Society, 51, 313-319 (1955).
JACK 63  Jackson, J. R., "Jobshop-Like Queueing Systems," Management Science, 10, 131-142 (1963).
KING 69  Kingman, J. F. C., "Markov Population Processes," Journal of Applied Probability, 6, 1-18 (1969).
SAAT 65
WHIT 44
EXERCISES
4.1.
(a)
(b)
(c)
4.2.
o <i < n,
< i < k,
of the arrival instants, one new customer will enter the system with probability 1/2, or two new customers will enter simultaneously with probability 1/2.
(a) Draw the state-transition-rate diagram for this system.
(b) Using the method of non-nearest-neighbor systems, write down the equilibrium equations for $p_k$.
(c) Find $P(z)$ and also evaluate any constants in this expression so that $P(z)$ is given in terms only of $\lambda$ and $\mu$. If possible eliminate any common factors in the numerator and denominator of this expression [this makes life simpler for you in part (d)].
(d) From part (c) find the expected number of customers in the system.
(e) Repeat part (c) using the results obtained in Section 4.5 directly.
4.7. For the bulk arrival system of Section 4.5, assume (for $0 < \alpha < 1$) that
$$g_i = (1 - \alpha)\alpha^i, \qquad i = 0, 1, 2, \ldots$$
For the bulk arrival system studied in Section 4.5, find the mean $\bar N$ and variance $\sigma_N^2$ for the number of customers in the system. Express your answers in terms of the moments of the bulk arrival distribution.
Consider an M/M/1 system with the following variation: Whenever the server becomes free, he accepts two customers (if at least two are available) from the queue into service simultaneously. Of these two customers, only one receives service; when the service for this one is completed, both customers depart (and so the other customer got a "free ride").

If only one customer is available in the queue when the server becomes free, then that customer is accepted alone and is serviced; if a new customer happens to arrive when this single customer is being served, then the new customer joins the old one in service and this new customer receives a "free ride."

In all cases, the service time is exponentially distributed with mean $1/\mu$ sec and the average (Poisson) arrival rate is $\lambda$ customers per second.
(a) Draw the appropriate state diagram.
(b) Write down the appropriate difference equations for $p_k$ = equilibrium probability of finding $k$ customers in the system.
(c) Solve for $P(z)$ in terms of $p_0$ and $p_1$.
(d) Express $p_1$ in terms of $p_0$.
We consider the denominator polynomial in Eq. (4.35) for the system $E_r$/M/1. Of the $r + 1$ roots, we know that one occurs at $z = 1$. Use Rouché's theorem (see Appendix I) to show that exactly $r - 1$ of the remaining $r$ roots lie in the unit disk $|z| \le 1$ and therefore exactly one root, say $z_0$, lies in the region $|z_0| > 1$.
Show that the solution to Eq. (4.71) gives a set of variables $\{x_i\}$ which guarantees that Eq. (4.72) is indeed the solution to Eq. (4.69).
(a) Draw the state-transition-rate diagram showing local balance for the case ($N = 3$, $K = 5$) with the following structure:
[Diagram: the network structure for this exercise, with nodes 1 and 2 between a source and a sink.]
where the indicated branch probability lies strictly between 0 and 1, and nodes 0 and $N + 1$ are the "source" and "sink" nodes, respectively. We also have (for some integer $K$) the constraint $k_1 + k_2 = K$.
PART III

INTERMEDIATE QUEUEING THEORY
We are here concerned with those queueing systems for which we can still apply certain simplifications due to their Markovian nature. We encounter those systems that are representable as imbedded Markov chains, namely, the M/G/1 and the G/M/m queues. In Chapter 5 we rapidly develop the basic equilibrium equations for M/G/1, giving the notorious Pollaczek-Khinchin equations for queue length and waiting time. We next discuss the busy period and, finally, introduce some moderately advanced techniques for studying these systems, even commenting a bit on the time-dependent solutions. Similarly for the queue G/M/m in Chapter 6, we find that we can make some very specific statements about the equilibrium system behavior and, in fact, find that the conditional distribution of waiting time will always be exponential regardless of the interarrival time distribution! Similarly, the conditional queue-length distribution is shown to be geometric. We note in this part that the methods of solution are quite different from those studied in Part II, but that much of the underlying behavior is similar; in particular, the mean queue size, the mean waiting time, and the mean busy-period duration are all inversely proportional to $1 - \rho$ as earlier. In Chapter 7 we briefly investigate a rather pleasing interpretation of transforms in terms of probabilities.

The techniques we used in Chapter 3 [the explicit product solution of Eq. (3.11)] and in Chapter 4 (flow conservation) are replaced by an indirect z-transform approach in Chapter 5. However, in Chapter 6, we return once again to the flow conservation inherent in the $\boldsymbol\pi = \boldsymbol\pi P$ solution.
5

The Queue M/G/1
We consider the M/G/1 queue, in which customers arrive according to a Poisson process with interarrival time distribution
$$A(t) = 1 - e^{-\lambda t}, \qquad t \ge 0$$
with an average arrival rate of $\lambda$ customers per second, a mean interarrival time of $1/\lambda$ sec, and a variance $\sigma_a^2 = 1/\lambda^2$. As defined in Chapter 2 we denote the $k$th moment of the service time by
$$\overline{x^k} = \int_0^\infty x^k\,b(x)\,dx$$
provide $X_0(t)$, the expended service time, which is continuous. We have thus evolved from a discrete-state description to a continuous-state description, and this essential difference complicates the analysis.

It is possible to proceed with a general theory based upon the couplet $[N(t), X_0(t)]$ as a state vector, and such a method of solution is referred to as the method of supplementary variables. For a treatment of this sort the reader is referred to Cox [COX 55] and Kendall [KEND 53]; Henderson [HEND 72] also discusses this method, but chooses the remaining service time instead of the expended service time as the supplementary variable. In this text we choose to use the method of the imbedded Markov chain as discussed below. However, before we proceed with the method itself, it is clear that we should understand some properties of the expended service time; this we do in the following section.
5.2. THE PARADOX OF RESIDUAL LIFE: A BIT OF RENEWAL THEORY
We are concerned here with the case where an arriving customer finds a partially served customer in the service facility. Problems of this sort occur repeatedly in our studies, and so we wish to place this situation in a more general context. We begin with an apparent paradox illustrated through the following example. Assume that our hippie from Chapter 2 arrives at a roadside cafe at an arbitrary instant in time and begins hitchhiking. Assume further that automobiles arrive at this cafe according to a Poisson process at an average rate of $\lambda$ cars per minute. How long must the hippie wait, on the average, until the next car comes along?

There are two apparently logical answers to this question. First, we might argue that since the average time between automobile arrivals is $1/\lambda$ min, and since the hippie arrives at a random point in time, then "obviously" the hippie will wait on the average $1/2\lambda$ min. On the other hand, we observe that since the Poisson process is memoryless, the time until the next arrival is independent of how long it has been since the previous arrival, and therefore the hippie will wait on the average $1/\lambda$ min; this second argument can be extended to show that the average time from the last arrival until the hippie begins hitchhiking is also $1/\lambda$ min. The second solution therefore implies that the average time between the last car and the next car to arrive will be $2/\lambda$ min! It appears that this interval is twice as long as it should be for a Poisson process! Nevertheless, the second solution is the correct one, and so we are faced with an apparent paradox!
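The paradox is easy to reproduce numerically. A minimal sketch (the rate, horizon, and sample count are arbitrary choices of mine) that drops observers at random instants into a simulated Poisson stream and measures their wait for the next car:

```python
import random
from bisect import bisect_right

rng = random.Random(7)
lam = 2.0                                  # cars per minute
arrivals, t = [], 0.0
while t < 10_000.0:                        # generate a long Poisson arrival stream
    t += rng.expovariate(lam)
    arrivals.append(t)

waits = []
for _ in range(100_000):                   # drop the hippie at random instants
    u = rng.uniform(0.0, arrivals[-1] - 1.0)
    i = bisect_right(arrivals, u)          # index of the first car after time u
    waits.append(arrivals[i] - u)

print(round(sum(waits) / len(waits), 2))   # close to 1/lam = 0.5, not 1/(2*lam) = 0.25
```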
Let us discuss the solution to this problem in the case of an arbitrary interarrival time distribution. This study properly belongs to renewal theory, and we quote results freely from that field; most of these results can be found
[Figure 5.1: The time axis showing the renewal intervals; the hippie arrives at time $t$, between automobiles $A_{n-1}$ and $A_n$; $X$ is the selected interval, $X_0$ its expended portion, and $Y$ the remaining portion.]
in the excellent monograph by Cox [COX 62] or in the fine expository article by Smith [SMIT 58]; the reader is also encouraged to see Feller [FELL 66]. The basic diagram is that given in Figure 5.1. In this figure we let $A_k$ denote the $k$th automobile, which we assume arrives at time $\tau_k$. We assume that the intervals $\tau_{k+1} - \tau_k$ are independent and identically distributed random variables with distribution given by
$$F(x) \triangleq P[\tau_{k+1} - \tau_k \le x] \qquad (5.1)$$
and pdf given by
$$f(x) \triangleq \frac{dF(x)}{dx} \qquad (5.2)$$
Let us now choose a random point in time, say $t$, when our hippie arrives at the roadside cafe. In this figure, $A_{n-1}$ is the last automobile to arrive prior to $t$ and $A_n$ will be the first automobile to arrive after $t$. We let $X$ denote this "special" interarrival time and we let $Y$ denote the time that our hippie must wait until the next arrival. Clearly, the sequence of arrival points $\{\tau_k\}$ forms a renewal process; renewal theory discusses the instantaneous replacement of components. In this case, $\{\tau_k\}$ forms the sequence of instants when the old component fails and is replaced by a new component. In the language of renewal theory, $X$ is said to be the lifetime of the component under consideration, $Y$ is said to be the residual life of that component at time $t$, and $X_0 = X - Y$ is referred to as the age of that component at time $t$. Let us adopt that terminology and proceed to find the pdf for $X$ and $Y$, the lifetime and residual life of our selected component. We assume that the renewal process has been operating for an arbitrarily long time since we are interested only in limiting distributions.

The amazing result we will find is that $X$ is not distributed according to $F(x)$. In terms of our earlier example this means that the interval which the hippie happens to select by his arrival at the cafe is not a typical interval. In fact, herein lies the solution to our paradox: a long interval is more likely
to contain our chosen random point $t$ than is a short interval. Let the residual life $Y$ have the PDF
$$\hat F(y) \triangleq P[Y \le y] \qquad (5.3)$$
and pdf
$$\hat f(y) \triangleq \frac{d\hat F(y)}{dy} \qquad (5.4)$$
Similarly, let the selected lifetime $X$ have a pdf $f_X(x)$ and PDF $F_X(x)$ where
$$F_X(x) \triangleq P[X \le x] \qquad (5.5)$$
In Exercise 5.2 we direct the reader through a rigorous derivation for the residual lifetime density $\hat f(\cdot)$. Rather than proceed through those details, let us give an intuitive derivation for the density that takes advantage of our physical intuition regarding this problem. Our basic observation is that long intervals between renewal points occupy larger segments of the time axis than do shorter intervals, and therefore it is more likely that our random point $t$ will fall in a long interval. If we accept this, then we recognize that the probability that an interval of length $x$ is chosen should be proportional to the length ($x$) as well as to the relative occurrence of such intervals [which is given by $f(x)\,dx$]. Thus, for the selected interval, we may write
f_X(x) dx = K x f(x) dx    (5.6)
where the left-hand side is P[x < X ≤ x + dx] and the right-hand side
expresses the linear weighting with respect to interval length and includes a
constant K, which must be evaluated so as to properly normalize this density.
Integrating both sides of Eq. (5.6) we find that K = 1/m₁, where

m₁ ≜ E[T_k - T_{k-1}]    (5.7)
and is the common average time between renewals (between arrivals of
automobiles). Thus we have shown that the density associated with the
selected interval is given in terms of the density of typical intervals by

f_X(x) = x f(x) / m₁    (5.8)
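The length-biased density of Eq. (5.8) is easy to check empirically. The sketch below (the function name and parameters are our own) drops random observers onto a long simulated renewal process and averages the length of the interval each observer lands in; by Eq. (5.8) this mean should be ∫ x · x f(x)/m₁ dx = m₂/m₁, which for exponential intervals of mean 1 equals 2 rather than 1:

```python
import bisect
import random

random.seed(2)

def selected_interval_mean(sampler, t_max=50000.0, observers=5000):
    # Lay down renewal points T_0 = 0, T_1, T_2, ... out past t_max.
    points = [0.0]
    while points[-1] < t_max:
        points.append(points[-1] + sampler())
    total = 0.0
    for _ in range(observers):
        t = random.uniform(0.1 * t_max, 0.9 * t_max)  # a random observer
        i = bisect.bisect_right(points, t)            # first renewal after t
        total += points[i] - points[i - 1]            # interval containing t
    return total / observers

# Exponential(1) intervals: m1 = 1, m2 = 2, so the selected interval
# should average m2/m1 = 2 -- twice a "typical" interval.
x_mean = selected_interval_mean(lambda: random.expovariate(1.0))
```

The factor of two here is exactly the hippie's predicament: his random arrival is twice as likely to land in an interval twice as long.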
This is our first result. Let us proceed now to find the density of residual life
f̂(y). If we are told that X = x, then the probability that the residual life Y
does not exceed the value y is given by

P[Y ≤ y | X = x] = y/x
THE QUEUE M/G/1

for 0 ≤ y ≤ x; this last is true since we have randomly chosen a point within
this selected interval, and therefore this point must be uniformly distributed
within that interval. Thus we may write down the joint density of X and Y as
P[y < Y ≤ y + dy, x < X ≤ x + dx] = f(x) dy dx / m₁    (5.9)

for 0 ≤ y ≤ x. Integrating over x we obtain f̂(y), which is the unconditional
density for Y, namely,

f̂(y) dy = ∫_{x=y}^∞ (f(x)/m₁) dy dx

f̂(y) = (1 - F(y)) / m₁    (5.10)
This is our second result. It gives the density of residual life in terms of the
common distribution of interval length and its mean.*
Let us express this last result in terms of transforms. Using our usual
transform notation we have the following correspondences:

f(x) ⇔ F*(s)
f̂(x) ⇔ F̂*(s)

Clearly, all the random variables we have been discussing in this section are
nonnegative, and so the relationship in Eq. (5.10) may be transformed directly
by use of entry 5 in Table I.4 and entry 13 in Table I.3 to give

F̂*(s) = (1 - F*(s)) / (s m₁)    (5.11)
It is now a trivial matter to find the moments of residual life in terms of the
moments of the lifetimes themselves. We denote the nth moment of the lifetime by m_n and the nth moment of the residual life by r_n, that is,

m_n ≜ E[(T_{k+1} - T_k)^n]    (5.12)

r_n ≜ E[Y^n]    (5.13)

Using our moment formula Eq. (II.26), we may differentiate Eq. (5.11) to
obtain the moments of residual life. As s → 0 we obtain indeterminate forms.

* It may also be shown that the limiting pdf for age (X_0) is the same as for residual life (Y)
given in Eq. (5.10).
Resolving these indeterminate forms, we find

r_n = m_{n+1} / ((n + 1) m₁)    (5.14)
This important formula is most often used to evaluate r₁, the mean residual
life, which is found equal to

r₁ = m₂ / (2 m₁)    (5.15)

and may also be expressed in terms of the variance of the lifetime
(σ² ≜ m₂ - m₁²) to give

r₁ = m₁/2 + σ²/(2 m₁)    (5.16)
This last form shows that the correct answer to the hippie paradox is m₁/2,
half the mean interarrival time, only if the variance is zero (regularly spaced
arrivals); however, for Poisson arrivals, m₁ = 1/λ and σ² = 1/λ², giving
r₁ = 1/λ = m₁, which confirms our earlier solution to the hippie paradox of
residual life. Note that m₁/2 ≤ r₁ and that r₁ will grow without bound as σ² → ∞.
The result for the mean residual life (r₁) is rather counterintuitive; we
will see it appear again and again.
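Equation (5.16) is also easy to confirm by simulation. The sketch below (names and parameters ours) drops random observers onto a long renewal process and averages the time Y until the next renewal; for exponential intervals (m₁ = 1, σ² = 1) it should return r₁ ≈ 1, while for regularly spaced arrivals (σ² = 0) it should return r₁ ≈ 1/2:

```python
import bisect
import random

random.seed(1)

def mean_residual_life(sampler, t_max=50000.0, observers=5000):
    # Renewal points T_0 = 0, T_1, T_2, ... out past t_max.
    points = [0.0]
    while points[-1] < t_max:
        points.append(points[-1] + sampler())
    total = 0.0
    for _ in range(observers):
        t = random.uniform(0.1 * t_max, 0.9 * t_max)  # random observer
        i = bisect.bisect_right(points, t)            # first renewal after t
        total += points[i] - t                        # residual life Y
    return total / observers

# Poisson arrivals: r1 = m1/2 + sigma^2/(2 m1) = 1/2 + 1/2 = 1 = m1.
r1_exp = mean_residual_life(lambda: random.expovariate(1.0))
# Regularly spaced arrivals: sigma^2 = 0, so r1 = m1/2 = 1/2.
r1_det = mean_residual_life(lambda: 1.0)
```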
Before leaving renewal theory we take this opportunity to quote some other
useful results. In the language of renewal theory the age-dependent failure
rate r(x) is defined as the instantaneous rate at which a component will fail
given that it has already attained an age of x; that is, r(x) dx ≜ P[x <
lifetime of component ≤ x + dx | lifetime > x]. From first principles, we see
that this conditional density is

r(x) = f(x) / (1 - F(x))    (5.17)
where once again f(x) and F(x) refer to the common distribution of component lifetime. The renewal function H(x) is defined to be

H(x) ≜ E[number of renewals in (0, x)]    (5.18)

and the renewal density h(x) is merely the renewal rate at time x, defined by

h(x) ≜ dH(x)/dx    (5.19)
Renewal theory seems to be obsessed with limit theorems, and one of the
important results is the renewal theorem, which states that

lim_{x→∞} h(x) = 1/m₁    (5.20)
This merely says that in the limit one cannot identify when the renewal
process began, and so the rate at which components are renewed is equal to
the inverse of the average time between renewals (m₁). We note that h(x) is
not a pdf; in fact, its integral diverges in the typical case. Nevertheless, it
does possess a Laplace transform, which we denote by H*(s). It is easy to
show that the following relationship exists between this transform and the
transform of the underlying pdf for renewals, namely:

H*(s) = F*(s) / (1 - F*(s))    (5.21)
This last is merely the transform expression of the integral equation of renewal
theory, which may be written as

h(x) = f(x) + ∫₀ˣ h(x - t) f(t) dt    (5.22)
More will not be said about renewal theory at this point. Again the reader
is urged to consult the references mentioned above.
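The renewal theorem of Eq. (5.20) lends itself to a quick empirical illustration. The sketch below (a construction of our own, with arbitrary parameters) counts renewals in a window far from the origin: by Eq. (5.20) the expected count should be (window length)/m₁ regardless of the lifetime distribution. Here lifetimes are uniform on (0, 2), so m₁ = 1 and a window of length 10 should contain about 10 renewals:

```python
import random

random.seed(6)

def renewals_in_window(runs=2000):
    # Lifetimes uniform on (0, 2): m1 = 1.  Count renewals in (100, 110].
    total = 0
    for _ in range(runs):
        t = 0.0
        while t <= 110.0:
            t += random.uniform(0.0, 2.0)   # next renewal instant
            if 100.0 < t <= 110.0:
                total += 1
    return total / runs

avg = renewals_in_window()   # should be near 10/m1 = 10
```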
clear that if we specify the number of customers left behind by a departing customer, then we can calculate this same quantity at some point in the future
given only the additional inputs to the system. Certainly, we have specified the
expended service time at these instants: it is in fact zero for the customer (if
any) currently in service, since he has just at that instant entered service!*
(There are other sets of points with this property; for example, the set of
points that occur exactly 1 sec after customers enter service: if we specify the
number in the system at these instants, then we are capable of solving for the
number of customers in the system at such future instants of time. Such a set
as just described is not as useful as the departure instants, since we must
worry about the case where a customer in service does not remain for a
duration exceeding 1 sec.)
The reader should recognize that what we are describing is, in fact, a semi-Markov process in which the state transitions occur at customer departure
instants. At these instants we define the imbedded Markov chain to be the
number of customers present in the system immediately following the
departure. The transitions take place only at the imbedded points and form
a discrete-state space. The distribution of time between state transitions is
equal to the service time distribution B(x) whenever a departure leaves behind
at least one customer, whereas it equals the convolution of the interarrival-time distribution (exponentially distributed) with b(x) in the case that the
departure leaves behind an empty system. In any case, the behavior of the
chain at these imbedded points is completely describable as a Markov process,
and the results we have discussed in Chapter 2 are applicable.
Our approach then is to focus attention upon departure instants from
service and to specify as our state variable the number of customers left behind
by such a departing customer. We will proceed to solve for the system
behavior at these instants in time. Fortunately, the solution at these imbedded
Markov points happens also to provide the solution for all points in time.†
In Exercise 5.7 the reader is asked to rederive some M/G/1 results using the
method of supplementary variables; this method is good at all points in time
and (as it must) turns out to give results identical to those we get here by using the
imbedded Markov chain approach. This proves once again that our solution
* Moreover, we assume that no service has been expended on any other customer in the
queue.
† This happy circumstance is due to the fact that we have a Poisson input and therefore
(as shown in Section 4.1) an arriving customer takes what amounts to a "random" look
at the system. Furthermore, in Exercise 5.6 we assist the reader in proving that the limiting
distribution for the number of customers left behind by a departure is the same as the
limiting distribution of customers found by a new arrival for any system that changes state
by unit step values (positive or negative); this result is true for arbitrary arrival- and
arbitrary service-time distributions! Thus, for M/G/1, arrivals, departures, and random
observers all see the same distribution of number in the system.
is good for all time. In the following pages we establish results for the queue-length distribution, the waiting-time distribution, and the busy-period
distribution (all in terms of transforms); the waiting-time and busy-period
duration results are in no way restricted by the imbedding we have described.
So even if the other methods were not available, these results would still hold
and would be unconstrained by the imbedding process. As a final reassurance to the reader we now offer an intuitive justification for the equivalence between the limiting distributions seen by departures and arrivals.
Taking the state of the system as the number of customers therein, we may
observe the changes in system state as time evolves; if we follow the system
state in continuous time, then we observe that these changes are of the
nearest-neighbor type. In particular, if we let E_k be the system state when k
customers are in the system, then we see that the only transitions from this
state are E_k → E_{k+1} and E_k → E_{k-1} (where this last can occur only if k > 0).
This is denoted in Figure 5.2. We now make the observation that the number
of transitions of the type E_k → E_{k+1} can differ by at most one from the
number of transitions of the type E_{k+1} → E_k. The former correspond to
customer arrivals and occur at the arrival instants; the latter refer to customer
departures and occur at the departure instants. After the system has been in
operation for an arbitrarily long time, the number of such transitions upward
must essentially equal the number of transitions downward. Since this up-and-down motion with respect to E_k occurs with essentially the same
frequency, we may therefore conclude that the system states found by arrivals
must have the same limiting distribution (r_k) as the system states left behind
by departures (which we denote by d_k). Thus, if we let N(t) be the number
in the system at time t, we may summarize our two conclusions as follows:
1. If in any (perhaps non-Markovian) system N(t) makes only discontinuous changes of size (plus or minus) one, then if either one of the
following limiting distributions exists, so does the other and they are
equal (see Exercise 5.6):

r_k ≜ lim_{t→∞} P[arrival at t finds k customers in system]    (5.23)

d_k ≜ lim_{t→∞} P[departure at t leaves k customers behind]    (5.24)

2. Thus, for M/G/1, arrivals, departures, and random observers all see
the same distribution of number in the system.
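The up-and-down counting argument can be tested directly on a simulated sample path. The sketch below (an FCFS single-server simulation of our own construction) builds an M/D/1 path via the recurrence D_i = max(A_i, D_{i-1}) + S_i and then compares the empirical distribution of the number found by arrivals with the number left behind by departures; since the counts of upward and downward crossings of each state differ by at most one, the two empirical distributions agree to within roughly 1/n:

```python
import bisect
import random

random.seed(3)

def arrival_departure_dists(lam, service, n=100000):
    # FCFS single server: departure times obey D_i = max(A_i, D_{i-1}) + S_i.
    A, D = [], []
    t = 0.0
    for i in range(n):
        t += random.expovariate(lam)
        A.append(t)
        start = t if i == 0 else max(t, D[-1])
        D.append(start + service())
    found = [0] * 10     # number in system seen by each arrival
    left = [0] * 10      # number left behind by each departure
    for i in range(n):
        # earlier arrivals whose departures are still pending (D nondecreasing):
        k = i - bisect.bisect_right(D, A[i], 0, i)
        if k < 10:
            found[k] += 1
        # later arrivals already present at the departure instant D_i:
        k = bisect.bisect_right(A, D[i]) - (i + 1)
        if k < 10:
            left[k] += 1
    return [c / n for c in found], [c / n for c in left]

r_emp, d_emp = arrival_departure_dists(0.5, lambda: 1.0)   # M/D/1 at rho = 1/2
```

Both distributions should also put probability 1 - ρ = 0.5 on the empty state.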
The transition probabilities of the imbedded Markov chain are defined by

p_ij ≜ P[q_{n+1} = j | q_n = i]    (5.25)

and the transition probability matrix takes the form

        | α₀  α₁  α₂  α₃  ... |
        | α₀  α₁  α₂  α₃  ... |
P =     | 0   α₀  α₁  α₂  ... |
        | 0   0   α₀  α₁  ... |
        | .   .   .   .       |

where

α_k ≜ P[v_{n+1} = k]    (5.26)

and v_{n+1} denotes the number of customers arriving during the service of
customer C_{n+1}.
For example, the jth component of the first row of this matrix gives the
probability that the previous customer left behind an empty system and that
during the service of C_{n+1} exactly j customers arrived (all of whom were
left behind by the departure of C_{n+1}); similarly, for other than the first row,
the entry p_ij for j ≥ i - 1 gives the probability that exactly j - i + 1
customers arrived during the service period for C_{n+1}, given that C_n left behind
exactly i customers; of these i customers one was indeed C_{n+1} and this
accounts for the +1 term in this last computation. The state-transition-probability diagram for this Markov chain is shown in Figure 5.3, in which
we show only transitions out of E_i.

Let us now calculate α_k. We observe first of all that the arrival process (a
Poisson process at a rate of λ customers per second) is independent of the state
of the queueing system. Similarly, x_n, the service time for C_n, is independent
of the arrival process and of earlier service times.
Figure 5.3 State-transition-probability diagram for the M/G/1 imbedded Markov
chain.
Thus we have

α_k = P[ṽ = k] = ∫₀^∞ P[ṽ = k | x̃ = x] b(x) dx    (5.27)
where again b(x) = dB(x)/dx is the pdf for service time. Since we have a
Poisson arrival process, we may replace the probability beneath the integral
by the expression given in Eq. (2.131), that is,

α_k = ∫₀^∞ ((λx)^k / k!) e^{-λx} b(x) dx    (5.28)
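As a concrete check (a sketch of our own), for exponential service b(x) = μe^{-μx} the integral in Eq. (5.28) can be evaluated in closed form, α_k = μλ^k/(λ + μ)^{k+1}, a geometric distribution; a direct numerical quadrature of Eq. (5.28) agrees:

```python
import math

lam, mu = 1.0, 2.0

def alpha_quad(k, upper=60.0, steps=60000):
    # Midpoint-rule quadrature of Eq. (5.28) with b(x) = mu * e^(-mu*x).
    dx = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += ((lam * x) ** k / math.factorial(k)) \
                 * math.exp(-lam * x) * mu * math.exp(-mu * x) * dx
    return total

# Closed form obtained by integrating term by term:
closed = [mu * lam ** k / (lam + mu) ** (k + 1) for k in range(6)]
quad = [alpha_quad(k) for k in range(6)]
```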
merely the limiting probability that a departing customer will leave behind k
customers, namely,

p_k ≜ P[q̃ = k]    (5.29)
In the following section we find the mean value E[q̃], and in the section
following that we find the z-transform for p_k. Here we work with the limiting
random variable

q̃ = lim_{n→∞} q_n    (5.30)

which certainly will exist in the case where our imbedded chain is ergodic.
Our first step is to find an equation relating the random variable q_{n+1} to
the random variable q_n by considering two cases. The first is shown in Figure
5.4 (using our time-diagram notation) and corresponds to the case where C_n
leaves behind a nonempty system (i.e., q_n > 0). Note that we are assuming a
first-come-first-served queueing discipline, although this assumption only
affects waiting times and not queue lengths or busy periods. We see from
Figure 5.4 that q_n is clearly greater than zero since C_{n+1} is already in the
system when C_n departs. We purposely do not show when customer C_{n+2}
arrives since that is unimportant to our developing argument. We wish now
to find an expression for q_{n+1}, the number of customers left behind when C_{n+1}
departs. This is clearly equal to q_n, the number of customers present
when C_n departed, less 1 (since customer C_{n+1} departs himself), plus the
number of customers that arrive during the service interval x_{n+1}. This last
term is clearly equal to v_{n+1} by definition and is shown as a "set" of arrivals
Figure 5.4
Figure 5.5
q_{n+1} = q_n - 1 + v_{n+1},    q_n > 0    (5.31)
Now consider the second case, where q_n = 0, that is, our departing customer leaves behind an empty system; this is illustrated in Figure 5.5. In this
case we see that q_n is clearly zero since C_{n+1} has not yet arrived by the time
C_n departs. Thus q_{n+1}, the number of customers left behind by the departure
of C_{n+1}, is merely equal to the number of arrivals during his service time.
Thus

q_{n+1} = v_{n+1},    q_n = 0    (5.32)
Collecting together Eq. (5.31) and Eq. (5.32) we have

q_{n+1} = q_n - 1 + v_{n+1},    q_n > 0
q_{n+1} = v_{n+1},              q_n = 0    (5.33)

These two cases may be combined if we define the shifted discrete step function

Δ_k = 1,    k = 1, 2, ...
Δ_k = 0,    k ≤ 0    (5.34)

for then we may write

q_{n+1} = q_n - Δ_{q_n} + v_{n+1}    (5.35)
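The recurrence of Eq. (5.35) can be iterated directly to simulate the imbedded chain; the sketch below (parameters ours) does so for M/M/1 at ρ = 1/2, generating v_{n+1} by counting the Poisson arrivals that fall within each service time. The long-run average of q_n should approach ρ/(1 - ρ) = 1, the M/M/1 value we recover analytically later in this section:

```python
import random

random.seed(4)

lam, mu = 0.5, 1.0        # rho = lam/mu = 0.5
q, total, N = 0, 0, 200000
for _ in range(N):
    x = random.expovariate(mu)        # service time of the next customer
    v, t = 0, random.expovariate(lam)
    while t < x:                      # Poisson arrivals during that service
        v += 1
        t += random.expovariate(lam)
    q = q - (1 if q > 0 else 0) + v   # Eq. (5.35)
    total += q
mean_q = total / N                    # should approach rho/(1-rho) = 1
```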
Equation (5.35) is the key equation for the study of M/G/1 systems. It
remains for us to extract from Eq. (5.35) the mean value* for q_n. As usual, we
concern ourselves not with the time-dependent behavior (which is indexed
by the subscript n) but rather with the limiting distribution for the random
variable q_n, which we denote by q̃. Accordingly we assume that the jth
moment of q_n exists in the limit as n goes to infinity, independent of n,
namely,

lim_{n→∞} E[q_n^j] = E[q̃^j]    (5.36)
Consider now the quantity E[Δ_{q̃}]. From the definition (5.34) we have

E[Δ_{q̃}] = Σ_{k=0}^∞ Δ_k P[q̃ = k]
         = Δ₀ P[q̃ = 0] + Δ₁ P[q̃ = 1] + Δ₂ P[q̃ = 2] + ···    (5.37)
* We could at this point proceed to the next section to obtain the (z-transform of the)
limiting distribution for number in system and from that expression evaluate the average
number in system. Instead, let us calculate the average number in system directly from
Eq. (5.35) following the method of Kendall [KEND 51]; we choose to carry out this extra
work to demonstrate to the student the simplicity of the argument.
Since Δ₀ = 0 and Δ_k = 1 for k ≥ 1, this becomes

E[Δ_{q̃}] = 0·(P[q̃ = 0]) + 1·(P[q̃ > 0])

or

E[Δ_{q̃}] = P[q̃ > 0]    (5.38)

But a departure leaves behind a nonempty system exactly when the system is
busy, and we know that

P[busy system] = ρ

so that E[Δ_{q̃}] = ρ.
Moreover, taking expectations on both sides of Eq. (5.35) and letting n → ∞,
we find E[ṽ] = E[Δ_{q̃}], and so we also have

v̄ = ρ    (5.41)

To extract the mean queue length we square both sides of Eq. (5.35):

q_{n+1}² = q_n² + Δ_{q_n}² + v_{n+1}² - 2q_n Δ_{q_n} - 2Δ_{q_n} v_{n+1} + 2q_n v_{n+1}

Now Δ_{q_n}² = Δ_{q_n} and q_n Δ_{q_n} = q_n (since Δ_{q_n} vanishes only when
q_n = 0). Taking expectations, using the independence of v_{n+1} and q_n, and
letting n → ∞ so that E[q̃²] cancels from the two sides, we obtain

0 = E[Δ_{q̃}] + E[ṽ²] - 2E[q̃] + 2E[q̃]E[ṽ] - 2E[Δ_{q̃}]E[ṽ]

Solving for q̄ = E[q̃] and using E[Δ_{q̃}] = E[ṽ] = ρ, we find

q̄ = ρ + (E[ṽ²] - E[ṽ]) / (2(1 - ρ))    (5.43)
It remains for us to find the first two moments of ṽ, the number of customers
arriving during a service time. To this end we define its z-transform

V(z) ≜ E[z^ṽ] = Σ_{k=0}^∞ P[ṽ = k] z^k    (5.44)
From Eq. (5.28) we then have

V(z) = Σ_{k=0}^∞ ∫₀^∞ ((λx)^k / k!) e^{-λx} b(x) dx z^k

Our summation and integral are well behaved, and we may interchange the
order of these two operations to obtain

V(z) = ∫₀^∞ e^{-λx} ( Σ_{k=0}^∞ (λxz)^k / k! ) b(x) dx

V(z) = ∫₀^∞ e^{-(λ-λz)x} b(x) dx    (5.45)
At this point we define (as usual) the Laplace transform B*(s) for the service-time pdf as

B*(s) ≜ ∫₀^∞ e^{-sx} b(x) dx

We note that Eq. (5.45) is of this form, with the complex variable s replaced
by λ - λz, and so we recognize the important result that

V(z) = B*(λ - λz)    (5.46)
This last equation is extremely useful and represents a relationship between
the z-transform of the probability distribution of the random variable ṽ and
the Laplace transform of the pdf of the random variable x̃ when the Laplace
transform is evaluated at the critical point λ - λz. These two random variables are such that ṽ represents the number of arrivals occurring during the
interval x̃, where the arrival process is Poisson at an average rate of λ arrivals
per second. We will shortly have occasion to incorporate this interpretation
of Eq. (5.46) in our further results.
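This interpretation is easy to check numerically (a sketch with parameters of our own choosing): for a constant service time x we have B*(s) = e^{-sx}, so Eq. (5.46) predicts E[z^ṽ] = e^{-λx(1-z)}, which is just the generating function of a Poisson variate with mean λx:

```python
import math
import random

random.seed(5)

lam, x, z = 2.0, 1.5, 0.6
n = 200000
acc = 0.0
for _ in range(n):
    v, t = 0, random.expovariate(lam)
    while t < x:                 # count Poisson(lam) arrivals in (0, x)
        v += 1
        t += random.expovariate(lam)
    acc += z ** v
vz_mc = acc / n                              # Monte Carlo estimate of V(z)
vz_formula = math.exp(-lam * x * (1 - z))    # B*(lam - lam*z) with B*(s) = e^(-s*x)
```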
From Appendix II we note that various derivatives of z-transforms
evaluated at z = 1 give the various moments of the random variable under
consideration. Similarly, the appropriate derivative of the Laplace transform
evaluated at its argument s = 0 also gives rise to moments. In particular,
from that appendix we recall that

B*^{(k)}(0) ≜ d^k B*(s)/ds^k |_{s=0} = (-1)^k E[x̃^k]    (5.47)

V^{(1)}(1) ≜ dV(z)/dz |_{z=1} = E[ṽ]    (5.48)

V^{(2)}(1) ≜ d²V(z)/dz² |_{z=1} = E[ṽ²] - E[ṽ]    (5.49)
In order to simplify the notation for these limiting derivative operations, we
have used the more usual superscript notation with the argument replaced
by its limit. Furthermore, we now resort to the overbar notation to denote
the expected value of the random variable below that bar.† Thus Eqs. (5.47)-(5.49) become

B*^{(k)}(0) = (-1)^k x̄^k    (5.50)

V^{(1)}(1) = v̄    (5.51)

V^{(2)}(1) = v̄² - v̄    (5.52)
To evaluate these quantities we differentiate Eq. (5.46):

V^{(1)}(z) ≜ dV(z)/dz    (5.53)

V^{(1)}(z) = dB*(λ - λz)/dz    (5.54)
† Recall from Eq. (2.19) that E[x̃^k] ≜ x̄^k = b_k (rather than the more cumbersome notation
one might expect). We take the same liberties with ṽ and q̃, namely, E[ṽ^k] = v̄^k
and E[q̃^k] = q̄^k.
Carrying out this differentiation, we find

dB*(λ - λz)/dz = -λ dB*(y)/dy    (5.55)

where

y = λ - λz    (5.56)

Setting z = 1 (so that y = 0), we have

V^{(1)}(1) = -λ dB*(y)/dy |_{y=0} = -λ B*^{(1)}(0)    (5.57)

and so from Eq. (5.50)

V^{(1)}(1) = λ x̄    (5.58)
But λx̄ is just ρ, and we have once again established that which we knew from
Eq. (5.41), namely, v̄ = ρ. (This certainly is encouraging.) We may continue
to pick up higher moments by differentiating Eq. (5.54) once again to
obtain

d²V(z)/dz² = d²B*(λ - λz)/dz²    (5.59)
Using the first derivative of B*(y) we now form its second derivative as
follows:

d²B*(λ - λz)/dz² = d/dz [-λ dB*(y)/dy] = -λ (d²B*(y)/dy²)(dy/dz)

or

d²B*(λ - λz)/dz² = λ² d²B*(y)/dy²    (5.60)
Evaluating this at z = 1 (y = 0) gives V^{(2)}(1) = λ² B*^{(2)}(0) = λ² x̄², and
substituting this into Eq. (5.43) we obtain

q̄ = ρ + λ² x̄² / (2(1 - ρ))    (5.62)
This is the result we were after! It expresses the average queue size at customer
departure instants in terms of known quantities, namely, the utilization
factor (ρ = λx̄), λ, and x̄² (the second moment of the service-time
distribution). Let us rewrite this result in terms of C_b² ≜ σ_b²/(x̄)², the squared
coefficient of variation for service time:

q̄ = ρ + ρ² (1 + C_b²) / (2(1 - ρ))    (5.63)
This last is the extremely well-known formula for the average number of
customers in an M/G/1 system and is commonly* referred to as the Pollaczek-Khinchin (P-K) mean-value formula. Note with emphasis that this average
depends only upon the first two moments (x̄ and x̄²) of the service-time
distribution. Moreover, observe that q̄ grows linearly with the variance of the
service-time distribution (or, if you will, linearly with its squared coefficient
of variation).
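The mean-value formula of Eq. (5.63) is a one-liner in code; the sketch below (the function name is ours) checks it against the special cases worked out in this chapter, including the M/H₂/1 example introduced a bit later (ρ = 5/8, C_b² = 31/25, q̄ ≈ 1.79):

```python
def pk_mean_number(rho, cb2):
    # Pollaczek-Khinchin mean-value formula, Eq. (5.63)
    return rho + rho * rho * (1.0 + cb2) / (2.0 * (1.0 - rho))

q_mm1 = pk_mean_number(0.5, 1.0)                 # M/M/1: rho/(1-rho) = 1.0
q_md1 = pk_mean_number(0.5, 0.0)                 # M/D/1: rho + rho^2/(2(1-rho)) = 0.75
q_mh2 = pk_mean_number(5.0 / 8.0, 31.0 / 25.0)   # M/H2/1 example: about 1.79
```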
The P-K mean-value formula provides an expression for q̄ that represents
the average number of customers in the system at departure instants; however, we already know that this also represents the average number at the
arrival instants and, in fact, at all points in time. We already have a notation
for the average number of customers in the system, namely N̄, which we
introduced in Chapter 2 and have used in previous chapters; we will continue
to use the N̄ notation outside of this chapter. Furthermore, we have defined
N̄_q to be the average number of customers in the queue (not counting the
customer in service). Let us take a moment to develop a relationship between
these two quantities. By definition we have

N̄ = Σ_{k=0}^∞ k P[q̃ = k]    (5.64)
Similarly, we may calculate the average queue size by subtracting unity from
this previous calculation so long as there is at least one customer in the
system, that is (note the lower limit),

N̄_q = Σ_{k=1}^∞ (k - 1) P[q̃ = k]

N̄_q = Σ_{k=1}^∞ k P[q̃ = k] - Σ_{k=1}^∞ P[q̃ = k]

N̄_q = N̄ - P[q̃ > 0] = N̄ - ρ    (5.65)
As a first example, consider the system M/M/1, for which C_b² = 1. The P-K
mean-value formula then gives

q̄ = ρ + ρ² (2) / (2(1 - ρ))

or

q̄ = ρ / (1 - ρ)    M/M/1    (5.66)
Equation (5.66) gives the expected number of customers left behind by a
departing customer. Compare this to the expression for the average number of
customers in an M/M/1 system as given in Eq. (3.24). They are identical and
lend validity to our earlier statements that the method of the imbedded
Markov chain in the M/G/1 case gives rise to a solution that is good at all
points in time. As a second example, let us consider the service-time distribution in which service time is a constant and equal to x̄. Such systems are
described by the notation M/D/1, as we mentioned earlier. In this case
clearly C_b² = 0 and so we have

q̄ = ρ + ρ² / (2(1 - ρ))

q̄ = ρ/(1 - ρ) - ρ²/(2(1 - ρ))    M/D/1    (5.67)

Thus the M/D/1 system has ρ²/(2(1 - ρ)) fewer customers on the average than
the M/M/1 system, demonstrating the earlier statement that q̄ increases with
the variance of the service-time distribution.
As a third example we consider the system M/H₂/1, whose service-time pdf is
the two-stage hyperexponential

b(x) = (1/4) λ e^{-λx} + (3/4) 2λ e^{-2λx},    x ≥ 0    (5.68)

That is, the service facility consists of two parallel service stages, as shown in
Figure 5.6. Note that λ is also the arrival rate, as usual. We may immediately
calculate x̄ = 5/(8λ) and σ_b² = 31/(64λ²), which yields C_b² = 31/25. Thus
q̄ = ρ + ρ²(2.24)/(2(1 - ρ))

q̄ = ρ/(1 - ρ) + 0.12ρ²/(1 - ρ)

Thus we see the (small) increase in q̄ for the (small) increase in C_b² over the
value of unity for M/M/1. We note in this example that ρ is fixed at ρ =
λx̄ = 5/8; therefore q̄ = 1.79, whereas for M/M/1 at this value of ρ we get
q̄ = 1.66. We have introduced this M/H₂/1 example here since we intend to
carry it (and the M/M/1 example) through our M/G/1 discussion.
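The moments just quoted are easy to verify numerically. The sketch below assumes the two-stage hyperexponential pdf b(x) = (1/4)λe^{-λx} + (3/4)2λe^{-2λx} of Eq. (5.68) (the form consistent with the moments stated in the text) and integrates it directly:

```python
import math

lam = 1.0

def b(x):
    # Two-stage hyperexponential service pdf of Eq. (5.68)
    return 0.25 * lam * math.exp(-lam * x) \
        + 0.75 * 2.0 * lam * math.exp(-2.0 * lam * x)

dx = 0.001
xs = [(i + 0.5) * dx for i in range(int(50.0 / dx))]
m1 = sum(x * b(x) for x in xs) * dx        # first moment: 5/(8 lam)
m2 = sum(x * x * b(x) for x in xs) * dx    # second moment
var = m2 - m1 * m1                         # sigma_b^2: 31/(64 lam^2)
cb2 = var / (m1 * m1)                      # 31/25 = 1.24
```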
The main result of this section is the Pollaczek-Khinchin formula for
the mean number in system, as given in Eq. (5.63). This result becomes a
special case of our results in the next section, but we feel that its development
has been useful as a pedagogical device. Moreover, in obtaining this result
we established the basic equation for M/G/1 given in Eq. (5.35). We also
obtained the general relationship between V(z) and B*(s), as given in Eq.
(5.46); from this we are able to obtain the moments for the number of arrivals
during a service interval.
We have not as yet derived any results regarding time spent in the system;
we are now in a position to do so. We recall Little's result:

N̄ = λT

This result relates the expected number of customers N̄ in a system to λ, the
arrival rate of customers, and to T, their average time in the system. For
M/G/1 we have derived Eq. (5.63), which is the expected number in the
system at customer departure instants. We may therefore apply Little's result
to this expected number in order to obtain the average time spent in the
system (queue + service). We know that q̄ also represents the average
number of customers found at random, and so we may equate q̄ = N̄. Thus
we have
N̄ = ρ + ρ² (1 + C_b²) / (2(1 - ρ)) = λT

and so

T = x̄ + ρ x̄ (1 + C_b²) / (2(1 - ρ))    (5.69)
This last is easily interpreted. The average total time spent in system is clearly
the average time spent in service plus the average time spent in the queue.
The first term above is merely the average service time, and thus the second
term must represent the average queueing time (which we denote by W).
Thus we have that the average queueing time is

W = ρ x̄ (1 + C_b²) / (2(1 - ρ))

or

W = W₀ / (1 - ρ)    (5.70)
where W₀ ≜ λx̄²/2; W₀ is the average remaining service time for the customer
(if any) found in service by a new arrival (work it out using the mean residual
life formula). A particularly nice normalization factor is now apparent.
Consider T, the average time spent in system. It is natural to compare this
time to x̄, the average service time required of the system by a customer.
Thus the ratio T/x̄ expresses the ratio of time spent in system to time required
of the system and represents the factor by which the system inconveniences
customers due to the fact that they are sharing the system with other customers. If we use this normalization in Eqs. (5.69) and (5.70), we arrive at the
following, where now time is expressed in units of average service intervals:

T/x̄ = 1 + ρ(1 + C_b²)/(2(1 - ρ))    (5.71)

W/x̄ = ρ(1 + C_b²)/(2(1 - ρ))    (5.72)
Each of these last two equations is also referred to as the P-K mean-value
formula [along with Eq. (5.63)]. Here we see the linear fashion in which the
statistical fluctuations of the input processes create delays (i.e., 1 + C_b² is
the sum of the squared interarrival-time and service-time coefficients of
variation). Further, we see the highly nonlinear dependence of delays upon
the average load ρ.
Let us now compare the mean normalized queueing time for the systems
M/M/1 and M/D/1; these have a squared coefficient of variation C_b² equal to
1 and 0, respectively. Applying this to Eq. (5.72) we have

W/x̄ = ρ/(1 - ρ)    M/M/1    (5.73)

W/x̄ = ρ/(2(1 - ρ))    M/D/1    (5.74)
Note that the system with constant service time (M/D/1) has half the average
waiting time of the system with exponentially distributed service time
(M/M/1). Thus, as we commented earlier, the time in the system and the
number in the system both grow in proportion to the variance of the
service-time distribution.
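The factor of two just noted falls out of Eq. (5.70) directly; in the sketch below (function name ours) we compute W from λ and the first two service moments x̄ and x̄²:

```python
def pk_wait(lam, x1, x2):
    # Eq. (5.70): W = W0/(1 - rho), with W0 = lam * x2 / 2
    rho = lam * x1
    return (lam * x2 / 2.0) / (1.0 - rho)

# rho = 0.5 and unit-mean service in both cases:
w_mm1 = pk_wait(0.5, 1.0, 2.0)   # exponential service: x2 = 2*x1^2
w_md1 = pk_wait(0.5, 1.0, 1.0)   # constant service:    x2 = x1^2
```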
Let us now proceed to find the distribution of the number in the system.
taking expectations we were able to obtain P-K formulas that gave the
expected number in the system [Eq. (5.63)] and the normalized expected
time in the system [Eq. (5.71)]. If we were now to seek the second moment of
the number in the system, we could obtain this quantity by first cubing Eq.
(5.75) and then taking expectations. In this operation it is clear that the
expectation E[q̃³] would cancel on both sides of the equation once the limit
on n was taken; this would then leave an expression for the second moment of
q̃. Similarly, all higher moments can be obtained by raising Eq. (5.75) to
successively higher powers and then forming expectations.* In this section,
however, we choose to go after the distribution for q_n itself (actually we
consider the limiting random variable q̃). As it turns out, we will obtain a
result that gives the z-transform for this distribution rather than the distribution itself. In principle, these last two are completely equivalent; in
practice, we sometimes face great difficulty in inverting from the z-transform
back to the distribution. Nevertheless, we can pick off the moments of the
distribution of q̃ from the z-transform in extremely simple fashion by making
use of the usual properties of transforms and their derivatives.

Let us now proceed to calculate the z-transform for the probability of
finding k customers in the system immediately following the departure of a
customer. We begin by defining the z-transform for the random variable q_n
as
Q_n(z) ≜ Σ_{k=0}^∞ P[q_n = k] z^k    (5.76)
From Appendix II (and from the definition of expected value) we have that
this z-transform (or probability generating function) is also given by

Q_n(z) = E[z^{q_n}]    (5.77)

In the limit as n → ∞ we have

Q(z) ≜ lim_{n→∞} Q_n(z) = Σ_{k=0}^∞ P[q̃ = k] z^k = E[z^{q̃}]    (5.78)
As is usual in these definitions for transforms, the sum on the right-hand side
of Eq. (5.76) converges to Eq. (5.77) only within some circle of convergence
in the z-plane, which defines a maximum value for |z| (certainly |z| ≤ 1
is allowed).
The system M/G/1 is characterized by Eq. (5.75). We therefore use both
sides of this equation as an exponent for z as follows:
* Specifically, the kth power leads to an expression for E[q̃^{k-1}] that involves the first k
moments of service time.
Q_{n+1}(z) = E[z^{q_{n+1}}] = E[z^{q_n - Δ_{q_n}} z^{v_{n+1}}]    (5.79)
We now observe, as earlier, that the random variable v_{n+1} (which represents
the number of arrivals during the service of C_{n+1}) is independent of the
random variable q_n (which is the number of customers left behind upon the
departure of C_n). Since this is true, the two factors within the expectation
on the right-hand side of Eq. (5.79) must themselves be independent (since
functions of independent random variables are also independent). We may
thus write the expectation of the product in that equation as the product of
the expectations:

Q_{n+1}(z) = E[z^{q_n - Δ_{q_n}}] E[z^{v_{n+1}}]    (5.80)

The second of these two expectations we again recognize as being independent
of the subscript n + 1; we thus remove the subscript and consider the random
variable ṽ again. From Eq. (5.44) we then recognize that the second expectation on the right-hand side of Eq. (5.80) is merely V(z). We thus have

Q_{n+1}(z) = V(z) E[z^{q_n - Δ_{q_n}}]    (5.81)
The only complicating factor in this last equation is the expectation. Let us
examine this term separately; from the definition of expectation we have

E[z^{q_n - Δ_{q_n}}] = Σ_{k=0}^∞ P[q_n = k] z^{k - Δ_k}

The difficult part of this summation is that the exponent on z contains Δ_k,
which takes on one of two values according to the value of k. In order to
simplify this special behavior we write the summation by exposing the first
term separately:

E[z^{q_n - Δ_{q_n}}] = P[q_n = 0] z⁰ + Σ_{k=1}^∞ P[q_n = k] z^{k-1}    (5.82)
Regarding the sum in this last equation, we see that it is almost of the form
given in Eq. (5.76); the differences are that we have one fewer power of z
and also that we are missing the first term in the sum. Both these deficiencies
may be corrected as follows:

Σ_{k=1}^∞ P[q_n = k] z^{k-1} = (1/z) Σ_{k=0}^∞ P[q_n = k] z^k - (1/z) P[q_n = 0] z⁰    (5.83)
Applying this to Eq. (5.82) and recognizing that the sum on the right-hand
side of Eq. (5.83) is merely Q_n(z), we have

E[z^{q_n - Δ_{q_n}}] = P[q_n = 0] + (1/z)(Q_n(z) - P[q_n = 0])

We now take the limit as n goes to infinity and recognize the limiting value
expressed in Eq. (5.36). We thus have

Q(z) = V(z)( P[q̃ = 0] + (1/z)(Q(z) - P[q̃ = 0]) )    (5.84)
Solving for Q(z), we obtain

Q(z) = V(z) P[q̃ = 0] (1 - 1/z) / (1 - V(z)/z)    (5.85)
Finally we multiply numerator and denominator of this last by (-z), use
our result in Eq. (5.46), and note that P[q̃ = 0] = 1 - ρ (the probability of
an idle system), to arrive at the well-known equation that gives the
z-transform for the number of customers in the system:

Q(z) = B*(λ - λz) (1 - ρ)(1 - z) / (B*(λ - λz) - z)    (5.86)

We shall refer to this as one form of the Pollaczek-Khinchin (P-K) transform
equation.†
The P-K transform equation readily yields the moments for the distribution
of the number of customers in the system. Using the moment-generating
properties of our transform expressed in Eqs. (5.50)-(5.52), we see that
certainly Q(1) = 1; when we attempt to set z = 1 in Eq. (5.86), we obtain
an indeterminate form,‡ and so we are required to use L'Hospital's rule. In
carrying out this operation we find that we must evaluate lim dB*(λ - λz)/dz
as z → 1, which was carried out in the previous section and shown to be
equal to ρ. This computation verifies that Q(1) = 1. In Exercise 5.5, the
reader is asked to show that Q^{(1)}(1) = q̄.
I'
!
_ (5.8..6)
† This formula was found in 1932 by A. Y. Khinchin [KHIN 32]. Shortly we will derive two other equations (each of which follows from and implies this equation), which we also refer to as P-K transform equations; these were studied by F. Pollaczek [POLL 30] in 1930 and Khinchin in 1932. See also the footnote on p. 177.
‡ We note that the denominator of the P-K transform equation must always contain the factor (1 − z) since B*(0) = 1.
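The indeterminate form at z = 1 is easy to examine numerically. The sketch below is our own illustration (not from the text), assuming an M/D/1 system with deterministic service time x̄, so that B*(s) = e^(−s x̄); the rates λ = 1, x̄ = 0.5 are arbitrary choices. It checks that Q(z) → 1 as z → 1 and that a numerical derivative at z = 1 reproduces q̄ from the P-K mean-value result of the previous section.

```python
import math

lam, xbar = 1.0, 0.5          # arrival rate and (deterministic) service time
rho = lam * xbar              # utilization, rho = 0.5

def B_star(s):
    # Laplace transform of a deterministic service time: B*(s) = exp(-s*xbar)
    return math.exp(-s * xbar)

def Q(z):
    # P-K transform equation (5.86)
    v = B_star(lam - lam * z)
    return v * (1 - rho) * (1 - z) / (v - z)

# Q(z) -> 1 as z -> 1 (the 0/0 form has limit 1, as L'Hospital's rule shows)
q_at_one = Q(1 - 1e-6)        # approximately 1

# q-bar = Q'(1), estimated by a central difference; the P-K mean-value result
# q-bar = rho + lam^2 E[x^2] / (2(1-rho)) gives 0.75 for these M/D/1 parameters
h = 1e-4
q_mean = (Q(1 + h) - Q(1 - h)) / (2 * h)
```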
As a first example, consider the system M/M/1, for which the service-time transform is

    B*(s) = μ / (s + μ)    (5.87)

Applying this to the P-K transform equation we have

    Q(z) = [μ/(λ − λz + μ)] (1 − ρ)(1 − z) / ([μ/(λ − λz + μ)] − z)

which simplifies to

    Q(z) = (1 − ρ) / (1 − ρz)    M/M/1    (5.88)
Equation (5.88) is the solution for the z-transform of the distribution of the number of people in the system. We can reach a point such as this with many service-time distributions B(x); for the exponential distribution we can evaluate the inverse transform (by inspection!). We find immediately that

    P[q̃ = k] = (1 − ρ)ρ^k    M/M/1    (5.89)

This then is the familiar solution for M/M/1. If the reader refers back to Eq. (3.23), he will find the same function for the probability of k customers in the M/M/1 system. However, Eq. (3.23) gives the solution for all points in time whereas Eq. (5.89) gives the solution only at the imbedded Markov points (namely, at the departure instants for customers). The fact that these two answers are identical is no surprise for two reasons: first, because we told you so (we said that the imbedded Markov points give solutions that are good at all points); and second, because we recognize that the M/M/1 system forms a continuous-time Markov chain.
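A quick numerical check of this simplification (our own sketch, not from the text; the rates λ = 1, μ = 2 are arbitrary) confirms that the full P-K transform and the simplified geometric form agree:

```python
lam, mu = 1.0, 2.0
rho = lam / mu

def B_star(s):
    # exponential service: B*(s) = mu/(s + mu), Eq. (5.87)
    return mu / (s + mu)

def Q_pk(z):
    # P-K transform equation (5.86) specialized with the M/M/1 B*(s)
    v = B_star(lam - lam * z)
    return v * (1 - rho) * (1 - z) / (v - z)

def Q_geom(z):
    # the simplified form (5.88)
    return (1 - rho) / (1 - rho * z)

# the two expressions agree for |z| < 1 (away from the removable point z = 1)
for z in [0.0, 0.3, 0.7, 0.95]:
    assert abs(Q_pk(z) - Q_geom(z)) < 1e-12
```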
As a second example, we consider the system M/H₂/1 whose pdf for service time was given in Eq. (5.68). By inspection we may find B*(s), which gives

    B*(s) = (7λs + 8λ²) / [4(s + λ)(s + 2λ)]    (5.90)
Applying this to the P-K transform equation and simplifying, we find

    Q(z) = (1 − ρ)[1 − (7/15)z] / {[1 − (2/5)z][1 − (2/3)z]}

A partial-fraction expansion gives

    Q(z) = (1 − ρ)[ (1/4)/(1 − (2/5)z) + (3/4)/(1 − (2/3)z) ]

This last may be inverted by inspection (by now the reader should recognize the sixth entry in Table I.2) to give

    P[q̃ = k] = (1 − ρ)[ (1/4)(2/5)^k + (3/4)(2/3)^k ]    (5.91)

Lastly, we note that the value for ρ has already been calculated at 5/8, and so for a final solution we have

    P[q̃ = k] = (3/8)[ (1/4)(2/5)^k + (3/4)(2/3)^k ]    k = 0, 1, 2, . . .    (5.92)
It should not surprise us to find this sum of geometric terms for our solution. Further examples will be found in the exercises. For now we terminate the discussion of how many customers are in the system and proceed with the calculation of how long a customer spends in the system.
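Before moving on, Eq. (5.92) admits a quick sanity check (ours, not the book's): the probabilities should sum to one, and their mean should agree with the P-K mean-value result q̄ = ρ + λ²E[x²]/(2(1 − ρ)) from the previous section, where E[x²] = 7/(8λ²) for this H₂ density:

```python
rho = 5.0 / 8.0

def p(k):
    # Eq. (5.92): queue-length distribution for the M/H2/1 example
    return (1 - rho) * (0.25 * (2 / 5) ** k + 0.75 * (2 / 3) ** k)

total = sum(p(k) for k in range(200))
mean = sum(k * p(k) for k in range(200))

# with lam^2 E[x^2] = 7/8, the P-K mean is 5/8 + (7/8)/(2*(3/8)) = 43/24
expected_mean = 5 / 8 + (7 / 8) / (2 * (3 / 8))
```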
5.7. THE DISTRIBUTION OF WAITING TIME

Let us return to the relationship derived earlier in Eq. (5.44):

    V(z) = B*(λ − λz)    (5.93)
Figure 5.7  The v_n arrivals during the service interval x_n for C_n.
In Figure 5.7, the reader is reminded of the structure from which we obtained this equation. Recall that V(z) is the z-transform of the number of customer arrivals in a particular interval, where the arrival process is Poisson at a rate of λ customers per second. The particular time interval involved happens to be the service interval for C_n; this interval has distribution B(x) with Laplace transform B*(s). The derived relation between V(z) and B*(s) is given in Eq. (5.93). The important observation to make now is that a relationship of this form must exist between any two random variables where the one identifies the number of customer arrivals from a Poisson process and the other describes the time interval over which we are counting these customer arrivals. It clearly makes no difference what the interpretation of this time interval is, only that we give the distribution of its length; in Eq. (5.93) it just so happens that the interval involved is a service interval. Let us now direct our attention to Figure 5.8, which concentrates on the time spent in the system for C_n. In this figure we have traced the history of C_n. The interval labeled w_n identifies the time from when C_n enters the queue until that customer leaves the queue and enters service; it is clearly the waiting time in queue for C_n. We have also identified the service time x_n for C_n. We may thus
express the total time in system for C_n as

    s_n = w_n + x_n    (5.94)

Figure 5.8  Time in system s_n for C_n.
We have earlier defined q_n as the number of customers left behind upon the departure of C_n. In considering a first-come-first-served system it is clear that all those customers present upon the arrival of C_n must depart before he does; consequently, those customers that C_n leaves behind him (a total of q_n) must be precisely those who arrive during his stay in the system. Thus, referring to Figure 5.8, we may identify those customers who arrive during the time interval s_n as being our previously defined random variable q_n. The reader is now asked to compare Figures 5.7 and 5.8. In both cases we have a Poisson arrival process at rate λ customers per second. In Figure 5.7 we inquire into the number of arrivals (v_n) during the interval whose duration is given by x_n; in Figure 5.8 we inquire into the number of arrivals (q_n) during an interval whose duration is given by s_n. We now define the distribution for the total time spent in system for C_n as
    S_n(y) ≜ P[s_n ≤ y]    (5.95)
Since we are assuming ergodicity, we recognize immediately that the limit of this distribution (as n goes to infinity) must be independent of n. We denote this limit by S(y) and the limiting random variable by s̃ [i.e., S_n(y) → S(y) and s_n → s̃]. Thus

    S(y) ≜ P[s̃ ≤ y]    (5.96)
Finally, we define the Laplace transform of the pdf for total time in system as

    S*(s) ≜ ∫₀^∞ e^(−sy) dS(y) = E[e^(−s s̃)]    (5.97)
With these definitions we go back to the analogy between Figures 5.7 and 5.8. Clearly, since v_n is analogous to q_n, then V(z) must be analogous to Q(z), since each describes the generating function for the respective number distribution. Similarly, since x_n is analogous to s_n, then B*(s) must be analogous to S*(s). We have therefore by direct analogy from Eq. (5.93) that†

    Q(z) = S*(λ − λz)    (5.98)
Since we already have an explicit expression for Q(z) as given in the P-K transform equation, we may therefore use that with Eq. (5.98) to give an explicit expression for S*(s) as

    S*(λ − λz) = B*(λ − λz) (1 − ρ)(1 − z) / [B*(λ − λz) − z]    (5.99)
† This can be derived directly by the unconvinced reader in a fashion similar to that which led to Eqs. (5.28) and (5.46).
This last equation is just crying for the obvious change of variable

    s = λ − λz

which gives

    z = 1 − s/λ

Making this change of variable in Eq. (5.99) we then have

    S*(s) = B*(s) s(1 − ρ) / [s − λ + λB*(s)]    (5.100)
Equation (5.100) is the desired explicit expression for the Laplace transform of the distribution of total time spent in the M/G/1 system. It is given in terms of known quantities derivable from the initial statement of the problem [namely, the specification of the service-time distribution B(x) and the parameters λ and x̄]. This is the second of the three equations that we refer to as the P-K transform equation.
From Eq. (5.100) it is trivial to derive the Laplace transform of the distribution of waiting time, which we shall denote by W*(s). We define the PDF for C_n's waiting time (in queue) to be W_n(y), that is,

    W_n(y) ≜ P[w_n ≤ y]    (5.101)

Furthermore, we define the limiting quantities (as n → ∞), W_n(y) → W(y) and w_n → w̃, so that

    W(y) ≜ P[w̃ ≤ y]    (5.102)

    W*(s) ≜ ∫₀^∞ e^(−sy) dW(y) = E[e^(−s w̃)]    (5.103)
From Eq. (5.94) we may derive the distribution of w̃ from the distribution of s̃ and x̃ (we drop subscript notation now since we are considering equilibrium behavior). Since a customer's service time is independent of his queueing time, we have that s̃, the time spent in system for some customer, is the sum of two independent random variables: w̃ (his queueing time) and x̃ (his service time). That is, Eq. (5.94) has the limiting form

    s̃ = w̃ + x̃    (5.104)
As derived in Appendix II the Laplace transform of the pdf of a random variable that is itself the sum of two independent random variables is equal to the product of the Laplace transforms for the pdf of each. Consequently, we have

    S*(s) = W*(s)B*(s)
Using this last with Eq. (5.100) and dividing through by B*(s), we obtain immediately

    W*(s) = s(1 − ρ) / [s − λ + λB*(s)]    (5.105)

This is the desired expression for the Laplace transform of the queueing (waiting)-time distribution. Here we have the third equation that will be referred to as the P-K transform equation.
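As an illustration (our own numerical sketch, with the arbitrary rates λ = 1, μ = 2), Eq. (5.105) can be checked by extracting the mean wait −dW*/ds at s = 0 numerically and comparing it with the known M/M/1 mean wait ρ/(μ − λ):

```python
lam, mu = 1.0, 2.0
rho = lam / mu

def W_star(s):
    # Eq. (5.105) with exponential service, B*(s) = mu/(s + mu)
    B = mu / (s + mu)
    return s * (1 - rho) / (s - lam + lam * B)

# W*(0) is a removable 0/0 point with limit 1; evaluate nearby instead
w_at_zero = W_star(1e-8)                      # approximately 1

# mean wait = -W*'(0), by central difference; for M/M/1 this should equal
# rho/(mu - lam) = 0.5 (equivalently lam*E[x^2] / (2(1-rho)))
h = 1e-5
mean_wait = -(W_star(h) - W_star(-h)) / (2 * h)
```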
Let us rewrite the P-K transform equation for waiting time as follows:

    W*(s) = (1 − ρ) / ( 1 − ρ [1 − B*(s)] / (s x̄) )    (5.106)

We now define

    B̂*(s) ≜ [1 − B*(s)] / (s x̄)    (5.107)

and are therefore permitted to write

    W*(s) = (1 − ρ) / [1 − ρB̂*(s)]    (5.108)
This observation is truly amazing since we recognized at the outset that the problem with the M/G/1 analysis was to take account of the expended service time for the man in service. From that investigation we found that the residual service time remaining for the customer in service had a pdf given by b̂(x), whose Laplace transform is given in Eq. (5.107). In a sense there is a poetic justice in its appearance at this point in the final solution. Let us follow Beneš [BENE 56] in inverting this transform in terms of these residual service time densities. Equation (5.108) may be expanded as the following power series:

    W*(s) = (1 − ρ) Σ_{k=0}^∞ ρ^k [B̂*(s)]^k    (5.109)
From Appendix I we know that the kth power of a Laplace transform corresponds to the k-fold convolution of the inverse transform with itself. As in Appendix I the symbol ⊛ is used to denote the convolution operator, and we now choose to denote the k-fold convolution of a function f(x) with itself by the use of a parenthetical subscript as follows:

    f_(k)(x) ≜ f(x) ⊛ f(x) ⊛ ··· ⊛ f(x)    (k-fold)    (5.110)
Using this notation we may by inspection invert Eq. (5.109) to obtain the waiting-time pdf, which we denote by w(y) ≜ dW(y)/dy; it is given by

    w(y) = Σ_{k=0}^∞ (1 − ρ)ρ^k b̂_(k)(y)    (5.111)
This is a most intriguing result! It states that the waiting-time pdf is given by a weighted sum of convolved residual service time pdf's. The interesting observation is that the weighting factor is simply (1 − ρ)ρ^k, which we now recognize to be the probability distribution for the number of customers in an M/M/1 system. Tempting as it is to try to give a physical explanation for the simplicity of this result and its relation to M/M/1, no satisfactory, intuitive explanation has been found to explain this dramatic form. We note that the contribution to the waiting-time density decreases geometrically with ρ in this series. Thus, for ρ not especially close to unity, we expect the high-order terms to be of less and less significance, and one practical application of this equation is to provide a rapidly converging approximation to the density of waiting time.
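For the exponential-service case this series can be summed explicitly, since the residual density of an exponential service is again exponential and its k-fold convolution is an Erlang density. The sketch below (our own check, with the arbitrary rates λ = 1, μ = 2) truncates the series of Eq. (5.111) for y > 0 and compares it with the closed form λ(1 − ρ)e^(−μ(1−ρ)y) known for M/M/1:

```python
import math

lam, mu = 1.0, 2.0
rho = lam / mu

def w_series(y, terms=60):
    # Eq. (5.111) for y > 0 (the k = 0 term is an impulse at the origin and is
    # omitted here): the k-fold convolution of the exponential residual density
    # is the Erlang-k density mu^k y^(k-1) e^(-mu y)/(k-1)!
    total = 0.0
    for k in range(1, terms):
        erlang_k = mu**k * y**(k - 1) * math.exp(-mu * y) / math.factorial(k - 1)
        total += (1 - rho) * rho**k * erlang_k
    return total

def w_closed(y):
    # known M/M/1 waiting-time density for y > 0
    return lam * (1 - rho) * math.exp(-mu * (1 - rho) * y)

for y in [0.1, 0.5, 1.0, 3.0]:
    assert abs(w_series(y) - w_closed(y)) < 1e-9
```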
So far in this section we have established two principal results, namely, the P-K transform equations for time in system and time in queue given in Eqs. (5.100) and (5.105), respectively. In the previous section we have already given the first moment of these two random variables [see Eqs. (5.69) and (5.70)]. We wish now to give a recurrence formula for the moments of the waiting time. We denote the kth moment of the waiting time E[w^k], as usual, by \overline{w^k}. Takács [TAKA 62b] has shown that if \overline{x^{i+1}} is finite, then so also are w̄, \overline{w²}, . . . , \overline{w^i}; we now adopt our slightly simplified notation for the ith moment of service time as follows: b_i ≜ \overline{x^i}. The Takács recurrence formula is

    \overline{w^k} = (λ/(1 − ρ)) Σ_{i=1}^{k} \binom{k}{i} [b_{i+1}/(i + 1)] \overline{w^{k−i}}    (5.112)

with \overline{w^0} ≜ 1.
For k = 1 the recurrence recovers the mean wait [cf. Eq. (5.69)],

    w̄ = λ\overline{x²} / [2(1 − ρ)]    (5.113)

and for k = 2 it yields

    \overline{w²} = 2(w̄)² + λ\overline{x³} / [3(1 − ρ)]    (5.114)
In order to obtain similar moments for the total time in system, that is, \overline{s^k}, we need merely take advantage of Eq. (5.104); from this equation we find

    s^k = (w̃ + x̃)^k    (5.115)

Using the binomial expansion and the independence between waiting time and service time for a given customer, we find

    \overline{s^k} = Σ_{i=0}^{k} \binom{k}{i} \overline{w^{k−i}} b_i    (5.116)
Thus calculating the moments of the waiting time from Eq. (5.112) also permits us to calculate the moments of time in system from this last equation. In Exercise 5.25, we derive a relationship between \overline{s^k} and the moments of the number in system; the simplest of these is Little's result, and the others are useful generalizations.
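The recurrence of Eq. (5.112) is easy to mechanize. The sketch below is our own (with hypothetical rates λ = 1, μ = 2 and exponential service, whose moments are b_i = i!/μ^i); it computes the first two waiting-time moments and checks them against the closed forms, the mean λb₂/2(1 − ρ) and the second moment of Eq. (5.114):

```python
from math import comb, factorial

lam, mu = 1.0, 2.0
rho = lam / mu
b = [factorial(i) / mu**i for i in range(5)]   # b[i] = E[x^i] for exponential service

def waiting_moments(K):
    # Takacs recurrence, Eq. (5.112), with W[0] = 1 by convention
    W = [1.0]
    for k in range(1, K + 1):
        W.append(lam / (1 - rho) *
                 sum(comb(k, i) * b[i + 1] / (i + 1) * W[k - i]
                     for i in range(1, k + 1)))
    return W

W = waiting_moments(2)
w1 = lam * b[2] / (2 * (1 - rho))              # mean wait (0.5 for these rates)
w2 = 2 * w1**2 + lam * b[3] / (3 * (1 - rho))  # Eq. (5.114) (1.0 for these rates)
```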
At the end of Section 3.2, we promised the reader that we would develop the pdf for the time spent in the system for an M/M/1 queueing system. We are now in a position to fulfill that promise. Let us in fact find both the distribution of waiting time and the distribution of system time for customers in M/M/1. Using Eq. (5.87) for the system M/M/1 we may calculate S*(s) from Eq. (5.100) as follows:
    S*(s) = [μ/(s + μ)] s(1 − ρ) / [s − λ + λμ/(s + μ)]

which reduces to

    S*(s) = μ(1 − ρ) / [s + μ(1 − ρ)]    M/M/1    (5.117)

This equation gives the Laplace transform of the pdf for time in the system, which we denote, as usual, by s(y) ≜ dS(y)/dy. Fortunately (as is usual with the case M/M/1), we recognize the inverse of this transform by inspection. Thus we have immediately that

    s(y) = μ(1 − ρ)e^{−μ(1−ρ)y}    y ≥ 0    M/M/1    (5.118)

and the corresponding PDF for time in system is

    S(y) = 1 − e^{−μ(1−ρ)y}    y ≥ 0    M/M/1    (5.119)

Similarly, from Eq. (5.105) we may calculate the Laplace transform of the waiting-time pdf:

    W*(s) = s(1 − ρ) / [s − λ + λμ/(s + μ)] = (s + μ)(1 − ρ) / [s + μ(1 − ρ)]    (5.120)

Since the numerator and denominator are of the same degree, we divide numerator by denominator to reduce the degree of the numerator by one, giving

    W*(s) = (1 − ρ) + λ(1 − ρ) / [s + μ(1 − ρ)]    (5.121)
Inverting Eq. (5.121) by inspection (the constant term corresponds to an impulse at the origin), we obtain the waiting-time pdf

    w(y) = (1 − ρ)u₀(y) + λ(1 − ρ)e^{−μ(1−ρ)y}    y ≥ 0    M/M/1    (5.122)

and the waiting-time distribution

    W(y) = 1 − ρe^{−μ(1−ρ)y}    y ≥ 0    M/M/1    (5.123)
For M/M/1 the probability of finding k customers in the system upon arrival is

    p_k = (1 − ρ)ρ^k    (5.124)
A simple exponential form for the tail of the waiting-time distribution (that is, the probabilities associated with long waits) can be derived for the system M/G/1. We postpone a discussion of this asymptotic result until Chapter 2, Volume II, in which we establish this result for the more general system G/G/1.
We repeat again that this is the same expression we found in Eq. (5.89) and we know by now that this result applies for all points in time. We wish to form the Laplace transform of the pdf of total time in the system by considering this Laplace transform conditioned on the number of customers found in the system upon arrival of a new customer. We begin as generally as possible and first consider the system M/G/1. In particular, we define the conditional distribution

    S(y | k) ≜ P[customer's total time in system ≤ y | arriving customer finds k in system]

and its transform

    S*(s | k) ≜ ∫₀^∞ e^{−sy} dS(y | k)    (5.125)
Now it is clear that if a customer finds no one in system upon his arrival, then he must spend an amount of time in the system exactly equal to his own service time, and so we have

    S*(s | 0) = B*(s)
On the other hand, if our arriving customer finds exactly one customer ahead of him, then he remains in the system for a time equal to the time to finish the man in service, plus his own service time; since these two intervals are independent, then the Laplace transform of the density of this sum must be the product of the Laplace transform of each density, giving

    S*(s | 1) = B̂*(s)B*(s)

where B̂*(s) is, again, the transform for the pdf for residual service time. Similarly, if our arriving customer finds k in front of him, then his total system time is the sum of the residual service time of the man in service, the k − 1 full service times of those waiting ahead of him, and his own service time. These k + 1 random variables are all independent, and k of them are drawn from the same distribution B(x). Thus we have the k-fold product of B*(s) with B̂*(s), giving

    S*(s | k) = B̂*(s)[B*(s)]^k    (5.126)
Equation (5.126) holds for M/G/1. Now for our M/M/1 problem, we have that B*(s) = μ/(s + μ) and, similarly, for B̂*(s) (memoryless); thus we have

    S*(s | k) = [μ/(s + μ)]^{k+1}    (5.127)
In order to obtain S*(s) we need merely weight the transform S*(s | k) with the probability p_k of our customer finding k in the system upon his arrival, namely,

    S*(s) = Σ_{k=0}^∞ S*(s | k) p_k

Using Eqs. (5.124) and (5.127) we have

    S*(s) = Σ_{k=0}^∞ [μ/(s + μ)]^{k+1} (1 − ρ)ρ^k

Summing this geometric series we obtain

    S*(s) = μ(1 − ρ) / [s + μ(1 − ρ)]    (5.128)

which is identical to Eq. (5.117), as it must be.
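The geometric-series step is easy to confirm numerically (our own sketch, with the arbitrary rates λ = 1, μ = 2):

```python
lam, mu = 1.0, 2.0
rho = lam / mu

def S_star(s, terms=200):
    # weight S*(s|k) = (mu/(s+mu))^(k+1) of Eq. (5.127) by p_k = (1-rho) rho^k
    return sum((mu / (s + mu)) ** (k + 1) * (1 - rho) * rho**k
               for k in range(terms))

for s in [0.1, 1.0, 5.0]:
    closed = mu * (1 - rho) / (s + mu * (1 - rho))   # Eq. (5.128) = Eq. (5.117)
    assert abs(S_star(s) - closed) < 1e-12
```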
As a further example we return to the system M/H₂/1. Using Eq. (5.90) in Eq. (5.105) we find

    W*(s) = 4s(1 − ρ)(s + λ)(s + 2λ) / [4(s − λ)(s + λ)(s + 2λ) + 8λ³ + 7λ²s]

Factoring the denominator gives

    W*(s) = (1 − ρ)(s + λ)(s + 2λ) / {[s + (3/2)λ][s + (1/2)λ]}

Once again, we must divide numerator by denominator to reduce the degree of the numerator by one, giving

    W*(s) = (1 − ρ) + λ(1 − ρ)[s + (5/4)λ] / {[s + (3/2)λ][s + (1/2)λ]}

A partial-fraction expansion then yields

    W*(s) = (1 − ρ)[ 1 + (λ/4)/(s + (3/2)λ) + (3λ/4)/(s + (1/2)λ) ]
This we may now invert by inspection to obtain the pdf for waiting time (and recalling that ρ = 5/8):

    w(y) = (3/8)u₀(y) + (3λ/32)e^{−3λy/2} + (9λ/32)e^{−λy/2}    y ≥ 0    (5.129)
This completes our discussion of the waiting-time and system-time distributions for M/G/1. We now introduce the busy period, an important stochastic process in queueing systems.
5.8. THE BUSY PERIOD AND ITS DURATION

Let us recall the basic notation:

    τ_n = arrival time of C_n
    t_n = τ_n − τ_{n−1} = interarrival time between C_{n−1} and C_n
    x_n = service time of C_n

We now recall the important stochastic process U(t) as defined in Eq. (2.3):

    U(t) ≜ the unfinished work in the system at time t
         ≜ the remaining time required to empty the system of all customers present at time t
Figure 5.10  (a) The unfinished work, the busy period, and (b) the customer history.

In Figure 5.10a we assume that the system is initially empty and that customer C₁
enters the system at time τ₁ and brings with him an amount of work (that is, a required service time) of size x₁. This customer finds the system idle and therefore his arrival terminates the previous idle period and initiates a new busy period. Prior to his arrival we assumed the system to be empty and therefore the unfinished work was clearly zero. At the instant of the arrival of C₁ the system backlog or unfinished work jumps to the size x₁, since it would take this long to empty the system if we allowed no further entries beyond this instant. As time progresses from τ₁ and the server works on C₁, this unfinished work reduces at the rate of 1 sec/sec and so U(t) decreases with slope equal to −1. t₂ sec later, at time τ₂, we observe that C₂ enters the system and forces the unfinished work U(t) to make another vertical jump of magnitude x₂ equal to the service time for C₂. The function then decreases again at a rate of 1 sec/sec until customer C₃ enters at time τ₃, forcing a vertical jump again of size x₃. U(t) continues to decrease as the server works on the customers in the system until it reaches the instant τ₁ + Y₁, at which time he has successfully emptied the system of all customers and of all work. This
then terminates the busy period and initiates a new idle period. The idle period is terminated at time τ₄ when C₄ enters. This second busy period serves only one customer before the system goes idle again. The third busy period serves two customers. And so it continues. For reference we show in Figure 5.10b our usual double-time-axis representation for the same sequence of customer arrivals and service times drawn to the same scale as Figure 5.10a and under an assumed first-come-first-served discipline. Thus we can say that U(t) is a function which has vertical jumps at the customer-arrival instants (these jumps equaling the service times for those customers) and decreases at a rate of 1 sec/sec so long as it is positive; when it reaches a value of zero, it remains there until the next customer arrival. This stochastic process is a continuous-state Markov process subject to discontinuous jumps; we have not seen such as this before.

Observe for Figure 5.10a that the departure instants may be obtained by extrapolating the linearly decreasing portion of U(t) down to the horizontal axis; at these intercepts, a customer departure occurs and a new customer service begins. Again we emphasize that the last observation is good only for the first-come-first-served system. What is important, however, is to observe that the function U(t) itself is independent of the order of service! The only requirement for this last statement to hold is that the server remain busy as long as some customer is in the system and that no customers depart before they are completely served; such a system is said to be "work conserving" (see Chapter 3, Volume II). The truth of this independence is evident when one considers the definition of U(t).
Now for the idle-period and busy-period distributions. Recall

    A(t) = P[t_n ≤ t] = 1 − e^{−λt}    t ≥ 0    (5.130)
    B(x) = P[x_n ≤ x]

where A(t) and B(x) are each independent of n. Our interest lies in the two following distributions:

    F(y) ≜ P[I ≤ y] ≜ idle-period distribution    (5.131)
    G(y) ≜ P[Y ≤ y] ≜ busy-period distribution    (5.132)
The calculation of the idle-period distribution is trivial for the system M/G/1. Observe that when the system terminates a busy period, a new idle period must begin, and this idle period will terminate immediately upon the arrival of the next customer. Since we have a memoryless interarrival distribution, the time until the next customer arrival is distributed according to Eq. (5.130), and therefore we have

    F(y) = 1 − e^{−λy}    y ≥ 0    (5.133)
So much for the idle-time distribution in M/G/1.
Figure 5.11  (a) The unfinished work U(t) and the decomposition of the busy period (of duration Y, generated by C₁) into sub-busy periods; (b) the number in the system; (c) the customer history.
Now for the busy-period distribution; this is not quite so simple. The reader is referred to Figure 5.11. In part (a) of this figure we once again observe the unfinished work U(t). We assume that the system is empty just prior to the instant τ₁, at which time customer C₁ initiates a busy period of duration Y. His service time is equal to x₁. It is clear that this customer will depart from the system at time τ₁ + x₁. During his service other customers
may arrive to the system and it is they who will continue the busy period. For the function shown, three other customers (C₂, C₃, and C₄) arrive during the interval of C₁'s service. We now make use of a brilliant device due to Takács [TAKA 62a]. In particular, we choose to permute the order in which customers are served so as to create a last-come-first-served (LCFS) queueing discipline* (recall that the duration of a busy period is independent of the order in which customers are served). The motivation for the reordering of customers will soon be apparent. At the departure of C₁ we then take into service the newest customer, which in our example is C₄. In addition, since all future arrivals during this busy period must be served before (LCFS!) any customers (besides C₄) who arrived during C₁'s service (in this case C₂ and C₃), then we may as well consider them to be (temporarily) out of the system. Thus, when C₄ enters service, it is as if he initiated a new busy period, which we will refer to as a "sub-busy period"; the sub-busy period generated by C₄ will have a duration X₄ exactly as long as it takes to service C₄ and all those who enter into the system to find it busy (remember that C₂ and C₃ are not considered to be in the system at this time). Thus in Figure 5.11a we show the sub-busy period generated by C₄ during which customers C₄, C₆, and C₅ get serviced in that order. At time τ₁ + x₁ + X₄ this sub-busy period ends and we now continue the last-come-first-served order of service by bringing C₃ back into the system. It is clear that he may be considered as generating his own sub-busy period, of duration X₃, during which all of his "descendants" receive service in the last-come-first-served order (namely, C₃, C₇, C₈, and C₉). Finally, then, the system empties again, we reintroduce C₂, and permit his sub-busy period (of length X₂) to run its course (and complete the major busy period) in which customers get serviced in the order C₂, C₁₀, and finally C₁₁.

Figure 5.11a shows that the contour of any sub-busy period is identical with the contour of the main busy period over the same time interval and is merely shifted down by a constant amount; this shift, in fact, is equal to the summed service time of all those customers who arrived during C₁'s service time and who have not yet been allowed to generate their own sub-busy periods. The details of customer history are shown in Figure 5.11c and the total number in the system at any time under this discipline is shown in Figure 5.11b. Thus, as far as the queueing system is concerned, it is strictly a last-come-first-served system from start to finish. However, our analysis is simplified if we focus upon the sub-busy periods and observe that each behaves statistically in a fashion identical to the major busy period generated by C₁. This is clear since all the sub-busy periods as well as the major busy period

* This is a "push-down" stack. This is only one of many permutations that "work"; it happens that LCFS is convenient for pedagogical purposes.
are each initiated by a single customer whose service times are all drawn from the same distribution independently; each sub-busy period continues until the system catches up to the work load, in the sense that the unfinished work function U(t) drops to zero. Thus we recognize that the random variables {X_k} are each independent and identically distributed and have the same distribution as Y, the duration of the major busy period.

In Figure 5.11c the reader may follow the customer history in detail; the solid black region in this figure identifies the customer being served during that time interval. At each customer departure the server "floats up" to the top of the customer contour to engage the most recent arrival at that time; occasionally the server "floats down" to the customer directly below him, such as at the departure of C₆. The server may truly be thought of as floating up to the highest customer, there to be held by him until his departure, and so on. Occasionally, however, we see that our server "falls down" through a gap in order to pick up the most recent arrival to the system, for example, at the departure of C₅. It is at such instants that new sub-busy periods begin, and only when the server falls down to hit the horizontal axis does the major busy period terminate.

Our point of view is now clear: the duration of a busy period Y is the sum of 1 + ṽ random variables, the first of which is the service time for C₁ and the remainder of which are each random variables describing the durations of the sub-busy periods, each of which is distributed as a busy period itself. Here ṽ is a random variable equal to the number of customer arrivals during C₁'s service interval. Thus we have the important relation

    Y = x₁ + X_{ṽ+1} + X_ṽ + ··· + X₃ + X₂    (5.134)
As in Eq. (5.132) we have the busy-period distribution

    G(y) ≜ P[Y ≤ y]    (5.135)
We also know that x₁ is distributed according to B(x) and that X_k is distributed as G(y) from our earlier comments. We next derive the Laplace transform for the pdf associated with Y, which we define, as usual, by

    G*(s) ≜ ∫₀^∞ e^{−sy} dG(y)    (5.136)
Once again we remind the reader that these transforms may also be expressed as expectation operators, namely:

    G*(s) = E[e^{−sY}]

Let us now take advantage of the powerful technique of conditioning used so often in probability theory; this technique permits one to write down the probability associated with a complex event by conditioning that event on
some appropriate set of simpler events. In particular, conditioning on the service time of C₁ and the number of arrivals during that service time, we have from Eq. (5.134)

    E[e^{−sY} | x₁ = x, ṽ = k] = E[e^{−s(x + X₂ + X₃ + ··· + X_{k+1})}]

Since the sub-busy periods have durations that are independent of each other, we may write this last as

    E[e^{−sY} | x₁ = x, ṽ = k] = E[e^{−sx}] E[e^{−sX₂}] ··· E[e^{−sX_{k+1}}]

Since x is a given constant we have E[e^{−sx}] = e^{−sx}, and further, since the sub-busy periods are identically distributed with corresponding transforms G*(s), we have

    E[e^{−sY} | x₁ = x, ṽ = k] = e^{−sx}[G*(s)]^k

Removing first the condition on ṽ [the arrivals are Poisson, so that P[ṽ = k | x₁ = x] = (λx)^k e^{−λx}/k!], we obtain

    E[e^{−sY} | x₁ = x] = Σ_{k=0}^∞ e^{−sx}[G*(s)]^k (λx)^k e^{−λx}/k! = e^{−x[s + λ − λG*(s)]}

and, finally, removing the condition on x₁,

    G*(s) = ∫₀^∞ e^{−x[s + λ − λG*(s)]} dB(x)
This last we recognize as the transform of the pdf for service time evaluated at a value equal to the bracketed term in the exponent, that is,

    G*(s) = B*[s + λ − λG*(s)]    (5.137)
This major result gives the transform for the M/G/1 busy-period distribution (for any order of service) expressed as a functional equation (which is usually impossible to invert). It was obtained by identifying sub-busy periods within the busy period, all of which had the same distribution as the busy period itself.
Later in this chapter we give an explicit expression for the busy-period PDF G(y), but unfortunately it is not in closed form [see Eq. (5.169)]. We point out, however, that it is possible to solve Eq. (5.137) numerically for G*(s) at any given value of s through the following iterative equation:

    G*_{n+1}(s) = B*[s + λ − λG*_n(s)]    (5.138)
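This iteration is easy to try out. The sketch below is our own (for M/M/1 with the arbitrary rates λ = 1, μ = 2, where B*(s) = μ/(s + μ)); it iterates Eq. (5.138) from the starting guess G*₀(s) = 1 and compares the fixed point with the closed form that Eq. (5.144) gives for M/M/1:

```python
import math

lam, mu = 1.0, 2.0   # rho = 1/2

def B_star(s):
    return mu / (s + mu)

def G_star(s, iters=200):
    # fixed-point iteration of Eq. (5.138): G_{n+1}(s) = B*(s + lam - lam*G_n(s))
    G = 1.0
    for _ in range(iters):
        G = B_star(s + lam - lam * G)
    return G

def G_closed(s):
    # Eq. (5.144): the M/M/1 busy-period transform
    return (mu + lam + s - math.sqrt((mu + lam + s) ** 2 - 4 * mu * lam)) / (2 * lam)

for s in [0.5, 1.0, 2.0]:
    assert abs(G_star(s) - G_closed(s)) < 1e-10
```

The iteration converges because, for Re(s) > 0 and ρ < 1, the map is a contraction on the relevant interval.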
Moments of the busy period are easily obtained from Eq. (5.137). We denote the kth moment of the busy period by

    g_k ≜ (−1)^k G*^{(k)}(0)    (5.139)

and recall that the moments of service time may be written as

    \overline{x^k} = (−1)^k B*^{(k)}(0)    (5.140)

Differentiating Eq. (5.137) once we have

    G*^{(1)}(s) = B*^{(1)}[s + λ − λG*(s)] [1 − λG*^{(1)}(s)]

[note, for s = 0, that s + λ − λG*(s) = 0] and so

    −g₁ = −x̄(1 + λg₁)

which gives

    g₁ = x̄ / (1 − ρ)    (5.141)
1- P
If we comp are this last result with Eq , (3.26), we find th at the average length
of a busy period fo r the sys tem M IGII is equal to the average time a customer
spends in an M IMI] sys tem and depends only 0 11 ). and x
Let us now chase down the second moment of the busy period. Proceeding from Eqs. (5.140) and (5.137), a second differentiation evaluated at s = 0 gives

    g₂ = G*^{(2)}(0) = B*^{(2)}(0)[1 − λG*^{(1)}(0)]² − B*^{(1)}(0) λG*^{(2)}(0)
214
THE QU EUE
M /G/l
and so

    g₂ = \overline{x²}(1 + λg₁)² + x̄λg₂

Solving for g₂ and using Eq. (5.141), we have

    g₂ = \overline{x²}(1 + λg₁)² / (1 − λx̄) = \overline{x²}[1 + λx̄/(1 − ρ)]² / (1 − ρ)

and so finally

    g₂ = \overline{x²} / (1 − ρ)³    (5.142)
This last result gives the second moment of the busy period, and it is interesting to note the cube in the denominator; this effect does not occur when one calculates the second moment of the wait in the system, where only a square power appears [see Eq. (5.114)]. We may now easily calculate the variance of the busy period, denoted by σ_Y², as follows:

    σ_Y² = g₂ − (g₁)² = \overline{x²}/(1 − ρ)³ − (x̄)²/(1 − ρ)²

and so

    σ_Y² = [σ_b² + ρ(x̄)²] / (1 − ρ)³    (5.143)

where σ_b² is the variance of the service-time distribution.
Similar (but lengthier) calculations yield the third and fourth moments of the busy period:

    g₃ = \overline{x³}/(1 − ρ)⁴ + 3λ(\overline{x²})²/(1 − ρ)⁵

    g₄ = \overline{x⁴}/(1 − ρ)⁵ + 10λ\overline{x²}\overline{x³}/(1 − ρ)⁶ + 15λ²(\overline{x²})³/(1 − ρ)⁷
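These moment results can be cross-checked against the closed-form M/M/1 transform of Eq. (5.144) by numerical differentiation. This is our own sketch (λ = 1, μ = 2, so x̄ = 1/2 and E[x²] = 1/2, and Eqs. (5.141) and (5.142) predict g₁ = 1 and g₂ = 4):

```python
import math

lam, mu = 1.0, 2.0
rho = lam / mu

def G_star(s):
    # Eq. (5.144): M/M/1 busy-period transform
    return (mu + lam + s - math.sqrt((mu + lam + s) ** 2 - 4 * mu * lam)) / (2 * lam)

h = 1e-4
g1 = -(G_star(h) - G_star(-h)) / (2 * h)                  # g1 = -G*'(0)
g2 = (G_star(h) - 2 * G_star(0.0) + G_star(-h)) / h**2    # g2 = +G*''(0)

xbar, x2bar = 1 / mu, 2 / mu**2
assert abs(g1 - xbar / (1 - rho)) < 1e-6          # Eq. (5.141)
assert abs(g2 - x2bar / (1 - rho) ** 3) < 1e-3    # Eq. (5.142)
```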
Let us apply the functional equation to the system M/M/1. With B*(s) = μ/(s + μ), Eq. (5.137) becomes

    G*(s) = μ / [s + λ − λG*(s) + μ]

or

    λ[G*(s)]² − (μ + λ + s)G*(s) + μ = 0

Solving for G*(s) and restricting our solution to the required (stable) case for which |G*(s)| ≤ 1 for Re(s) ≥ 0 gives

    G*(s) = {μ + λ + s − [(μ + λ + s)² − 4μλ]^{1/2}} / (2λ)    (5.144)
This equation may be inverted (by referring to transform tables) to obtain the pdf for the busy period, namely,

    g(y) ≜ dG(y)/dy = (1/(y√ρ)) e^{−(λ+μ)y} I₁(2y√(λμ))    y > 0    (5.145)

where I₁ is the modified Bessel function of the first kind of order one.
Consider the limit

    lim_{s→0⁺} G*(s) = lim_{s→0⁺} ∫₀^∞ e^{−sy} dG(y) = ∫₀^∞ dG(y)    (5.146)

Examining the right side of this equation we observe that this limit is merely the probability that the busy period is finite, which is equivalent to the probability of the busy period ending. Clearly, for ρ < 1 the busy period ends with probability one, but Eq. (5.146) provides information in the case ρ > 1. We have

    P[busy period ends] = G*(0)
Let us examine this computation in the case of the system M/M/1. We have directly from Eq. (5.144)

    G*(0) = {μ + λ − [(μ + λ)² − 4μλ]^{1/2}} / (2λ)

Since (μ + λ)² − 4μλ = (μ − λ)², the square root equals |μ − λ|, and so

    P[busy period ends] = { 1      ρ < 1
                          { 1/ρ    ρ > 1    (5.147)
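A two-line numerical check of Eq. (5.147), using Eq. (5.144) directly (our own sketch):

```python
import math

def G0(lam, mu):
    # Eq. (5.144) evaluated at s = 0
    return (mu + lam - math.sqrt((mu + lam) ** 2 - 4 * mu * lam)) / (2 * lam)

assert abs(G0(1.0, 2.0) - 1.0) < 1e-12        # rho = 1/2 < 1: ends with probability 1
assert abs(G0(3.0, 2.0) - 2.0 / 3.0) < 1e-12  # rho = 3/2 > 1: ends with probability 1/rho
```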
The busy-period pdf given in Eq. (5.145) is much more complex than we would have wished for this simplest of interesting queueing systems! It is indicative of the fact that Eq. (5.137) is usually uninvertible for more general service-time distributions.
As a second example, let's see how well we can do with our M/H₂/1 example. Using the expression for B*(s) in our functional equation for the busy period we get

    G*(s) = {8λ² + 7λ[s + λ − λG*(s)]} / {4[s + λ − λG*(s) + λ][s + λ − λG*(s) + 2λ]}

which leads directly to the cubic equation

    4λ²[G*(s)]³ − 4λ(2s + 5λ)[G*(s)]² + [4(s + 2λ)(s + 3λ) + 7λ²]G*(s) − λ(15λ + 7s) = 0
This last is not easily solved and so we stall at this point in our attempt to invert G*(s). We will return to the functional equation for the busy period when we discuss priority queueing in Chapter 3, Volume II. This will lead us to the concept of a delay cycle, which is a slight generalization of the busy-period analysis we have just carried out and greatly simplifies priority queueing calculations.
5.9. THE NUMBER SERVED IN A BUSY PERIOD

In this section we discuss the distribution of the number of customers served in a busy period. The development parallels that of the previous section very closely, both in the spirit of the derivation and in the nature of the result we will obtain.

Let N_bp be the number of customers served in a busy period. We are interested in its probability distribution f_n defined as
\[ f_n = P[N_{bp} = n] \tag{5.148} \]
The best we can do is to obtain a functional equation for its z-transform defined as
\[ F(z) \triangleq \sum_{n=1}^{\infty} f_n z^n \tag{5.149} \]
The term for n = 0 is omitted from this definition since at least one customer must be served in a busy period. We recall that the random variable ṽ represents the number of arrivals during a service period and its z-transform V(z) obeys the equation derived earlier, namely,
\[ V(z) = B^*(\lambda - \lambda z) \tag{5.150} \]
Proceeding as we did for the duration of the busy period, we condition our argument on the fact that ṽ = k, that is, we assume that k customers arrive during the service of the customer who initiates the busy period. Each of these arrivals spawns its own sub-busy period; if M_i denotes the number served in the ith of these, then
\[ E[z^{N_{bp}} \mid \tilde v = k] = E[z^{1 + M_1 + M_2 + \cdots + M_k} \mid \tilde v = k] = z\prod_{i=1}^{k}E[z^{M_i}] \]
But each of the M_i is distributed exactly the same as N_bp and, therefore,
\[ E[z^{N_{bp}} \mid \tilde v = k] = z[F(z)]^k \tag{5.151} \]
Removing the condition on ṽ,
\[ F(z) = \sum_{k=0}^{\infty} E[z^{N_{bp}} \mid \tilde v = k]P[\tilde v = k] = z\sum_{k=0}^{\infty}P[\tilde v = k][F(z)]^k \]
From Eq. (5.44) we recognize this last summation as V(z) (the z-transform associated with ṽ) with transform variable F(z); thus we have
\[ F(z) = zV[F(z)] = zB^*[\lambda - \lambda F(z)] \tag{5.152} \]
This functional equation for the z-transform of the number served in a busy period is not unlike the equation given earlier in Eq. (5.137).
From this fundamental equation we may easily pick off the moments for the number served in a busy period. We define the kth moment of the number served in a busy period as h_k. We recognize then
\[ h_1 = F^{(1)}(1) = B^{*(1)}(0)[-\lambda F^{(1)}(1)] + B^*(0) \]
Thus, since B*(0) = 1 and B*^{(1)}(0) = −x̄,
\[ h_1 = \lambda\bar{x}h_1 + 1 \]
which immediately gives us
\[ h_1 = \frac{1}{1-\rho} \tag{5.153} \]
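Equation (5.153) can be checked directly from the functional equation. The sketch below (ours; the exponential service choice and parameters are illustrative assumptions) iterates F(z) = zB*(λ − λF(z)) to its fixed point and differences numerically:

```python
# Fixed-point iteration of Eq. (5.152) for M/M/1 (B*(s) = mu/(s + mu));
# parameters are our own illustrative choice (rho = 1/2).
lam, mu = 0.5, 1.0

def F(z, iters=3000):
    f = 0.0
    for _ in range(iters):
        f = z * mu / (lam - lam * f + mu)   # F = z B*(lam - lam F)
    return f

h = 1e-5
h1_est = (F(1.0) - F(1.0 - h)) / h          # numerical F'(1)
assert abs(F(1.0) - 1.0) < 1e-9             # F(1) = 1 when rho < 1
assert abs(h1_est - 1 / (1 - 0.5)) < 1e-2   # Eq. (5.153): h1 = 1/(1 - rho) = 2
```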
We further recognize
\[ F^{(2)}(1) = h_2 - h_1 \]
Carrying out this computation in the usual way, we obtain the second moment and variance of the number served in the busy period:
\[ h_2 = \frac{2\rho(1-\rho) + \lambda^2\overline{x^2}}{(1-\rho)^3} + \frac{1}{1-\rho} \tag{5.154} \]
\[ \sigma_{N_{bp}}^2 = \frac{\rho(1-\rho) + \lambda^2\overline{x^2}}{(1-\rho)^3} \tag{5.155} \]
As an example we again use the simple case of the M/M/1 system to solve for F(z) from Eq. (5.152). Carrying this out we find
\[ F(z) = z\,\frac{\mu}{\mu + \lambda - \lambda F(z)} \]
\[ \lambda F^2(z) - (\mu + \lambda)F(z) + \mu z = 0 \]
Solving,
\[ F(z) = \frac{1+\rho}{2\rho}\left[1 - \left(1 - \frac{4\rho z}{(1+\rho)^2}\right)^{1/2}\right] \tag{5.156} \]
Expanding this last in a power series in z, we may identify the coefficients f_n explicitly:
\[ f_n = \frac{1}{n}\binom{2n-2}{n-1}\frac{\rho^{n-1}}{(1+\rho)^{2n-1}} \tag{5.157} \]
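As a numerical aside (ours, not the text's), Eq. (5.157) is easy to tabulate; the check below confirms, for an illustrative ρ, that the f_n sum to one and have mean h₁ = 1/(1 − ρ):

```python
from math import lgamma, exp, log

def f_n(n, rho):
    """Eq. (5.157), computed in log space to avoid overflow for large n."""
    log_binom = lgamma(2*n - 1) - 2 * lgamma(n)      # log C(2n-2, n-1)
    return exp(log_binom + (n - 1) * log(rho) - (2*n - 1) * log(1 + rho)) / n

rho = 0.5
ns = range(1, 2000)
probs = [f_n(n, rho) for n in ns]
assert abs(sum(probs) - 1.0) < 1e-8              # a proper distribution for rho < 1
mean = sum(n * p for n, p in zip(ns, probs))
assert abs(mean - 1 / (1 - rho)) < 1e-6          # h1 = 1/(1 - rho) = 2
```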
As a second example we consider the system M/D/1. For this system we have b(x) = u₀(x − x̄) and from entry three in Table I.4 we have immediately that
\[ B^*(s) = e^{-s\bar x} \]
Using this in our functional equation we obtain
\[ F(z) = z\,e^{-\rho}e^{\rho F(z)} \tag{5.158} \]
This is of the form u = H(u)e^{−H(u)} with u = ρze^{−ρ} and H(u) = ρF(z). The solution to this equation may be obtained [RIOR 62] and then our original function may be evaluated to give
\[ F(z) = \sum_{n=1}^{\infty}\frac{(n\rho)^{n-1}}{n!}\,e^{-n\rho}z^n \]
From this power series we recognize immediately that the distribution for the number served in the M/D/1 busy period is given explicitly by
\[ f_n = \frac{(n\rho)^{n-1}}{n!}\,e^{-n\rho} \tag{5.159} \]
For the case of a constant service time we know that if the busy period serves n customers then it must be of duration nx̄, and therefore we may immediately write down the solution for the M/D/1 busy-period distribution as
\[ G(y) = \sum_{n=1}^{\lfloor y/\bar x\rfloor}\frac{(n\rho)^{n-1}}{n!}\,e^{-n\rho} \tag{5.160} \]
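Equation (5.159) is the Borel distribution, and it too can be verified numerically; this sketch (ours, with illustrative parameters) checks normalization, the mean 1/(1 − ρ), and the functional equation (5.158) itself:

```python
from math import exp, lgamma, log

def borel(n, rho):
    """Eq. (5.159): (n rho)^(n-1) e^(-n rho) / n!, in log space."""
    return exp((n - 1) * log(n * rho) - n * rho - lgamma(n + 1))

rho = 0.5
ns = range(1, 2000)
probs = [borel(n, rho) for n in ns]
assert abs(sum(probs) - 1.0) < 1e-8
mean = sum(n * p for n, p in zip(ns, probs))
assert abs(mean - 1 / (1 - rho)) < 1e-6      # mean number served = 1/(1 - rho)

# the series must also satisfy the functional equation (5.158)
z = 0.7
Fz = sum(p * z**n for n, p in zip(ns, probs))
assert abs(Fz - z * exp(-rho * (1 - Fz))) < 1e-9
```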
5.10. FROM BUSY PERIODS TO WAITING TIMES

We had mentioned in the opening paragraphs of this chapter that waiting times could be obtained from the busy-period analysis. We are now in a position to fulfill that claim. As the reader may be aware (and as we shall show in Chapter 3, Volume II), whereas the distribution of the busy-period duration is independent of the queueing discipline, the distribution of waiting time is strongly dependent upon order of service. Therefore, in this section we consider only first-come-first-served M/G/1 systems. Since we restrict ourselves to this discipline, the reordering of customers used in Section 5.8 is no longer permitted. Instead, we must now decompose the busy period into a sequence of intervals whose lengths are dependent random variables as follows. Consider Figure 5.12, in which we show a single busy period for the first-come-first-served system [in terms of the unfinished work U(t)].
Here we see that customer C₁ initiates the busy period upon his arrival at time τ₁. The first interval we consider is his service time x₁, which we denote by X₀; during this interval more customers arrive (in this case C₂ and C₃). All those customers who arrive during X₀ are served during the next interval, whose duration is X₁ and which equals the sum of the service times of all arrivals during X₀ (in this case C₂ and C₃). At the expiration of X₁, we then create a new interval of duration X₂ in which all customers arriving during X₁ are served, and so on. Thus X_i is the length of time required to service all those customers who arrive during the previous interval, whose duration is X_{i−1}. If we let n_i denote the number of customer arrivals during the interval X_i, then n_i customers are served during the interval X_{i+1}. We let n₀ equal the number of customers who arrive during X₀ (the first customer's service time).

Figure 5.12 The busy period for first-come-first-served [in terms of the unfinished work U(t)].

The busy-period duration is therefore
\[ Y = \sum_{i=0}^{\infty} X_i \]
We define the distribution of the ith interval as
\[ X_i(y) = P[X_i \le y] \]
and the corresponding Laplace transform of the associated pdf to be
\[ X_i^*(s) \triangleq \int_0^\infty e^{-sy}\,dX_i(y) \]
Conditioning on X_{i−1} = y, the number of arrivals during that interval is Poisson distributed with mean λy, and each contributes an independent service time; thus E[e^{−sX_i} | X_{i−1} = y] = e^{−λy[1−B*(s)]}, and removing the condition,
\[ X_i^*(s) = \int_0^\infty e^{-\lambda y[1-B^*(s)]}\,dX_{i-1}(y) \]
Clearly, the left-hand side is X_i*(s); this integral is recognized as the transform of the pdf for X_{i−1} evaluated at the point λ − λB*(s), namely,
\[ X_i^*(s) = X_{i-1}^*[\lambda - \lambda B^*(s)] \tag{5.161} \]
Now tag a customer who arrives during the interval X_i, say y′ sec after that interval begins, and suppose X_i = y. Under first-come-first-served order, his waiting time w̃ consists of the remainder of the interval (of duration y − y′) plus the service times of the ñ customers who arrived during X_i before he did; thus
\[ E[e^{-s\tilde w} \mid i,\,X_i = y,\,y',\,\tilde n = n] = e^{-s(y-y')}[B^*(s)]^n \]
Since ñ is Poisson distributed with mean λy′, removing the condition on n gives
\[ E[e^{-s\tilde w} \mid i,\,X_i = y,\,y'] = e^{-s(y-y')}\sum_{n=0}^{\infty}\frac{(\lambda y')^n}{n!}e^{-\lambda y'}[B^*(s)]^n = e^{-s(y-y')}e^{-\lambda y'[1-B^*(s)]} \tag{5.162} \]
Given that our customer arrives during X_i, the joint density for the pair (y, y′) is dX_i(y)dy′/E[X_i] for 0 ≤ y′ ≤ y, and so
\[ E[e^{-s\tilde w} \mid \text{arrive in }X_i] = \frac{1}{E[X_i]}\int_{y=0}^{\infty}\int_{y'=0}^{y} e^{-s(y-y')}e^{-[\lambda-\lambda B^*(s)]y'}\,dy'\,dX_i(y) \]
Carrying out the integration on y′ and applying Eq. (5.161), we obtain
\[ E[e^{-s\tilde w} \mid \text{arrive in }X_i] = \frac{X_{i+1}^*(s) - X_i^*(s)}{[s - \lambda + \lambda B^*(s)]E[X_i]} \]
Now we may remove the condition on our arrival entering during the ith interval by weighting this last expression by the probability E[X_i]/E[Y] that we have formerly expressed for the occurrence of this event (still conditioned on our arrival entering during a busy period), and so we have
\[ E[e^{-s\tilde w} \mid \text{enter in busy period}] = \sum_{i=0}^{\infty}\frac{X_{i+1}^*(s) - X_i^*(s)}{[s - \lambda + \lambda B^*(s)]E[Y]} \]
This last sum nicely collapses to yield 1 − X₀*(s), since X_i*(s) = 1 for those intervals beyond the busy period (recall X_i = 0 for i ≥ i₀); also, since X₀ = x₁, a service time, then X₀*(s) = B*(s), and so we arrive at
\[ E[e^{-s\tilde w} \mid \text{enter in busy period}] = \frac{1 - B^*(s)}{[s - \lambda + \lambda B^*(s)]E[Y]} \]
From previous considerations we know that the probability of an arrival entering during a busy period is merely ρ = λx̄ (and for sure he must wait for service in such a case); further, we may evaluate the average length of the busy period E[Y] either from our previous calculation in Eq. (5.141) or from elementary considerations* to give E[Y] = x̄/(1 − ρ). Thus, unconditioning on an arrival finding the system busy, we finally have
\[ E[e^{-s\tilde w}] = (1-\rho) + \rho\,\frac{[1 - B^*(s)](1-\rho)}{[s - \lambda + \lambda B^*(s)]\bar x} = \frac{s(1-\rho)}{s - \lambda + \lambda B^*(s)} \tag{5.163} \]
Voilà! This is exactly the P-K transform equation for waiting time, namely, W*(s) ≜ E[e^{−sw̃}] given in Eq. (5.105).
Thus we have shown how to go from a busy-period analysis to the calculation of waiting time in the system. This method is reported upon in [CONW 67] and we will have occasion to return to it in Chapter 3, Volume II.
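As a quick sanity check (ours, not the text's), Eq. (5.163) with exponential service must reduce to the familiar M/M/1 waiting-time transform (1 − ρ)(s + μ)/(s + μ − λ):

```python
# Spot check that Eq. (5.163) reduces to the classical M/M/1 waiting transform.
# Parameter values and test points are our own illustrative choices.
lam, mu = 0.75, 1.0
rho = lam / mu
max_err = 0.0
for s in [0.1, 0.5, 1.0, 3.0, 10.0]:
    B = mu / (s + mu)                               # B*(s) for exponential service
    W_pk = s * (1 - rho) / (s - lam + lam * B)      # Eq. (5.163)
    W_mm1 = (1 - rho) * (s + mu) / (s + mu - lam)   # known M/M/1 result
    max_err = max(max_err, abs(W_pk - W_mm1))
assert max_err < 1e-12
```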
5.11. COMBINATORIAL METHODS

We had mentioned in the opening remarks of this chapter that consideration of random walks and combinatorial methods was applicable to the study of the M/G/1 queue. We take this opportunity to indicate some aspects of those methods. In Figure 5.13 we have reproduced U(t) from Figure 5.10a. In addition, we have indicated the "random walk" R(t), which is the same as U(t) except that it does not saturate at zero but rather continues to decline at a rate of 1 sec/sec below the horizontal axis; of course, it too takes vertical jumps at the customer-arrival instants. We introduce this diagram in order to define what are known as ladder indices. The kth (descending) ladder index is the kth instant at which R(t) drops below all values it has previously attained.

* The following simple argument enables us to calculate E[Y]. In a long interval (say, I) the server is busy a fraction ρ of the time. Each idle period in M/G/1 is of average length 1/λ sec and therefore we expect to have (1 − ρ)I/(1/λ) idle periods. This will also be the number of busy periods, approximately; therefore, since the time spent in busy periods is ρI, the average duration of each must be ρI/[λI(1 − ρ)] = x̄/(1 − ρ). As I → ∞, this argument becomes exact.
The classical ballot theorem asserts that if candidate A scores a votes and candidate B scores b votes in a ballot, where a > b, then the probability that A leads throughout the counting of the votes is
\[ P[\text{A always leads}] = \frac{a-b}{a+b} \tag{5.164} \]
This theorem originated in 1887 (see [TAKA 67] for its history). Takács generalized this theorem and phrased it in terms of cards drawn from an urn in the following way. Consider an urn with n cards, where the cards are marked with the nonnegative integers k₁, k₂, ..., k_n and where
\[ \sum_{i=1}^{n} k_i = k \le n \]
(that is, the ith card in the set is marked with the integer k_i). Assume that all cards are drawn without replacement from the urn. Let ν_r (r = 1, ..., n) be the number on the card drawn at the rth drawing. Let
\[ N_r = \nu_1 + \nu_2 + \cdots + \nu_r \qquad r = 1, 2, \ldots, n \]
N_r is thus the sum of the numbers on all cards drawn up through the rth draw. Takács' generalization of the classical ballot theorem states that
\[ P[N_r < r \text{ for all } r = 1, 2, \ldots, n] = \frac{n-k}{n} \tag{5.165} \]
The proof of this theorem is not especially difficult but will not be reproduced here. Note the simplicity of the theorem and, in particular, that the probability expressed is independent of the particular set of integers k_i and depends only upon their sum k. We may identify ν_r as the number of customer arrivals during the service of the rth customer in a busy period of an M/G/1 queueing system. Thus N_r + 1 is the cumulative number of arrivals up to the conclusion of the rth customer's service during a busy period. We are thus involved in a race between N_r + 1 and r: As soon as r equals N_r + 1 then the busy period must terminate since, at this point, we have served exactly as many as have arrived (including the customer who initiated the busy period) and so the system empties. If we now let N_bp be the number of customers served in a busy period it is possible to apply Eq. (5.165) and obtain the following result [TAKA 67]:
\[ P[N_{bp} = n] = \frac{1}{n}P[N_n = n - 1] \tag{5.166} \]
To evaluate the probability on the right-hand side we may condition the number of arrivals on the duration of the busy period, multiply by the probability that n service intervals will, in fact, sum to this length, and then integrate over all possible lengths. Thus
\[ P[N_n = n - 1] = \int_0^\infty \frac{(\lambda y)^{n-1}}{(n-1)!}\,e^{-\lambda y}\,b_{(n)}(y)\,dy \tag{5.167} \]
where b_{(n)}(y) is the n-fold convolution of b(y) with itself [see Eq. (5.110)] and represents the pdf for the sum of n independent random variables, where each is drawn from the common density b(y). Thus we arrive at an explicit expression for the probability distribution for the number served in a busy period:
\[ P[N_{bp} = n] = \int_0^\infty \frac{(\lambda y)^{n-1}}{n!}\,e^{-\lambda y}\,b_{(n)}(y)\,dy \tag{5.168} \]
and so,
\[ G(y) = \sum_{n=1}^{\infty}\int_0^y e^{-\lambda x}\frac{(\lambda x)^{n-1}}{n!}\,b_{(n)}(x)\,dx \tag{5.169} \]
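Takács' urn result (5.165) lends itself to a brute-force check. The sketch below (ours; the particular card set is an illustrative choice, not from the text) enumerates all orderings of a small urn and compares the observed fraction with (n − k)/n:

```python
from itertools import permutations
from math import factorial

cards = [2, 0, 0, 1, 0]        # n = 5 cards, k = sum = 3; theorem predicts 2/5
n, k = len(cards), sum(cards)
good = 0
for order in permutations(cards):
    partial, ok = 0, True
    for r, v in enumerate(order, start=1):
        partial += v           # N_r after the rth draw
        if partial >= r:       # condition N_r < r violated
            ok = False
            break
    good += ok
prob = good / factorial(n)
assert abs(prob - (n - k) / n) < 1e-12
```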
5.12. THE TAKÁCS INTEGRODIFFERENTIAL EQUATION

Let us now study the unfinished work U(t) and the manner in which it changes. It is a Markov process since the entire past history of its motion is summarized in its current value as far as its future behavior is concerned. That is, its vertical discontinuities occur at instants of customer arrivals and for M/G/1 these arrivals form a Poisson process (therefore, we need not know how long it has been since the last arrival), and the current value for U(t) tells us exactly how much work remains in the system at each instant. We wish to derive the probability distribution function for U(t), given its initial value at time t = 0. Accordingly we define
\[ F(w, t; w_0) \triangleq P[U(t) \le w \mid U(0) = w_0] \tag{5.170} \]
In a small interval Δt, either no customer arrives [probability 1 − λΔt + o(Δt)] and the unfinished work drains by Δt, or one customer arrives [probability λΔt + o(Δt)] bringing a service demand distributed according to B(x); thus
\[ F(w, t + \Delta t) = (1 - \lambda\Delta t)F(w + \Delta t, t) + \lambda\Delta t\int_{x=0}^{w} B(w - x)\,d_xF(x, t) + o(\Delta t) \tag{5.171} \]
Expanding the first term on the right-hand side,
\[ F(w, t + \Delta t) = F(w, t) + \frac{\partial F(w, t)}{\partial w}\Delta t - \lambda\Delta t\left[F(w, t) + \frac{\partial F(w, t)}{\partial w}\Delta t\right] + \lambda\Delta t\int_{x=0}^{w}B(w - x)\,d_xF(x, t) + o(\Delta t) \]
Subtracting F(w, t), dividing by Δt, and passing to the limit as Δt → 0 we finally obtain the Takács integrodifferential equation for U(t):
\[ \frac{\partial F(w, t)}{\partial t} = \frac{\partial F(w, t)}{\partial w} - \lambda F(w, t) + \lambda\int_{x=0}^{w}B(w - x)\,d_xF(x, t) \tag{5.172} \]
Takács [TAKA 55] derived this equation for the more general case of a nonhomogeneous Poisson process, namely, where the arrival rate λ(t) depends upon t. He showed that this equation is good for almost all w ≥ 0 and t ≥ 0; it does not hold at those w and t for which ∂F(w, t)/∂w has an accumulation of probability (namely, an impulse). This occurs, in particular, at w = 0 and would give rise to the term F(0, t)u₀(w) in ∂F(w, t)/∂w, whereas no other term in the equation contains such an impulse.
We may gain more information from the Takács integrodifferential equation if we transform it on the variable w (and not on t); thus using the transform variable r we define
\[ W^{*\cdot}(r, t) \triangleq \int_{0^-}^{\infty} e^{-rw}\,d_wF(w, t) \tag{5.173} \]
We use the notation (*·) to denote transformation on the first, but not the second argument. The symbol W is chosen since, as we shall see, lim_{t→∞} W*·(r, t) = W*(r), which is our former transform for the waiting-time pdf [see, for example, Eq. (5.103)].
Let us examine the transform of each term in Eq. (5.172) separately. First we note that since F(w, t) = ∫_{0⁻}^{w} d_xF(x, t), then from entry 13 in Table I.3 of Appendix I (and its footnote) we must have
\[ \int_{0^-}^{\infty} F(w, t)e^{-rw}\,dw = \frac{W^{*\cdot}(r, t) + F(0^-, t)}{r} \]
and similarly
\[ \int_{0^-}^{\infty} B(w)e^{-rw}\,dw = \frac{B^*(r) + B(0^-)}{r} \]
However, since the unfinished work and the service time are both nonnegative random variables, it must be that F(0⁻, t) = B(0⁻) = 0 always. We recognize that the last term in the Takács integrodifferential equation is a convolution between B(w) and ∂F(x, t)/∂x, and therefore the transform of this convolution (including the constant multiplier λ) must be (by properties 10 and 13 in that same table) λW*·(r, t)[B*(r) − B(0⁻)]/r = λW*·(r, t)B*(r)/r. Now it is clear that the transform for the term ∂F(w, t)/∂w will be W*·(r, t); but this transform includes F(0⁺, t), the transform of the impulse located at the origin for this partial derivative, and since we know that the Takács integrodifferential equation does not contain that impulse it must be subtracted out. Thus, we have from Eq. (5.172),
\[ \frac{1}{r}\frac{\partial W^{*\cdot}(r, t)}{\partial t} = W^{*\cdot}(r, t) - F(0^+, t) - \frac{\lambda W^{*\cdot}(r, t)}{r} + \frac{\lambda W^{*\cdot}(r, t)B^*(r)}{r} \tag{5.174} \]
Multiplying by r, we obtain
\[ \frac{\partial W^{*\cdot}(r, t)}{\partial t} = [r - \lambda + \lambda B^*(r)]W^{*\cdot}(r, t) - rF(0^+, t) \tag{5.175} \]
Takács gives the solution to this equation {p. 51, Eq. (8) in [TAKA 62b]}. We may now transform on our second variable t by first defining the double transform
\[ F^{**}(r, s) \triangleq \int_0^\infty e^{-st}W^{*\cdot}(r, t)\,dt \tag{5.176} \]
\[ F_0^*(s) \triangleq \int_0^\infty e^{-st}F(0^+, t)\,dt \tag{5.177} \]
We may now transform Eq. (5.175) using the transform property given as entry 11 in Table I.3 (and its footnote) to obtain
\[ sF^{**}(r, s) - W^{*\cdot}(r, 0) = [r - \lambda + \lambda B^*(r)]F^{**}(r, s) - rF_0^*(s) \tag{5.178} \]
\[ F^{**}(r, s) = \frac{W^{*\cdot}(r, 0) - rF_0^*(s)}{s - r + \lambda - \lambda B^*(r)} \tag{5.179} \]
Now we recall that U(0) = w₀ with probability one, and so from Eq. (5.173) we have W*·(r, 0) = e^{−rw₀}. The unknown function F₀*(s) is determined by requiring that F**(r, s) remain analytic at the unique root r = η = η(s) of the denominator with Re(η) > 0; the numerator must vanish there, which gives F₀*(s) = e^{−ηw₀}/η. Thus F**(r, s) takes the final form
\[ F^{**}(r, s) = \frac{(r/\eta)e^{-\eta w_0} - e^{-rw_0}}{\lambda B^*(r) - \lambda + r - s} \tag{5.180} \]
In the limit as t → ∞ (for ρ < 1) we expect F(w, t) → F(w), in which case ∂F(w, t)/∂t → 0 and the Takács integrodifferential equation becomes
\[ \frac{dF(w)}{dw} = \lambda F(w) - \lambda\int_{x=0}^{w}B(w - x)\,dF(x) \tag{5.181} \]
Furthermore, for ρ < 1 then W*(r) ≜ lim_{t→∞} W*·(r, t) will exist and be independent of the initial distribution. Taking the transform of Eq. (5.181) we find, as we did in deriving Eq. (5.174),
\[ W^*(r) - F(0^+) = \frac{\lambda W^*(r)}{r} - \frac{\lambda B^*(r)W^*(r)}{r} \]
where F(0⁺) = lim_{t→∞} F(0⁺, t) and equals the probability that the unfinished work is zero. This last may be rewritten to give
\[ W^*(r) = \frac{rF(0^+)}{r - \lambda + \lambda B^*(r)} \]
However, we require W*(0) = 1, which requires that the unknown constant F(0⁺) = 1 − ρ. Finally we have
\[ W^*(r) = \frac{r(1-\rho)}{r - \lambda + \lambda B^*(r)} \tag{5.182} \]
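Equation (5.182) also yields moments by differentiation at r = 0; as a numerical illustration (ours, not the text's), the mean unfinished work for M/D/1 should equal the P-K value λx̄²/[2(1 − ρ)]:

```python
import math

# Numerical check of Eq. (5.182) for M/D/1, where B*(r) = e^{-r xbar}.
# The mean, -dW*/dr at r = 0, must match lam * xbar^2 / (2 (1 - rho)).
lam, xbar = 0.5, 1.0
rho = lam * xbar
W = lambda r: r * (1 - rho) / (r - lam + lam * math.exp(-r * xbar))
h = 1e-4
mean_wait = (1.0 - W(h)) / h      # one-sided difference, using W*(0) = 1
assert abs(mean_wait - lam * xbar**2 / (2 * (1 - rho))) < 1e-3
```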
REFERENCES

BENE 56  Beneš, V. E., "On Queues with Poisson Arrivals," Annals of Mathematical Statistics, 28, 670-677 (1956).
CONW 67  Conway, R. W., W. L. Maxwell, and L. W. Miller, Theory of Scheduling, Addison-Wesley (Reading, Mass.), 1967.
COX 55  Cox, D. R., "The Analysis of Non-Markovian Stochastic Processes by the Inclusion of Supplementary Variables," Proc. Camb. Phil. Soc. (Math. and Phys. Sci.), 51, 433-441 (1955).
COX 62  Cox, D. R., Renewal Theory, Methuen (London), 1962.
FELL 66  Feller, W., Probability Theory and Its Applications, Vol. II, Wiley (New York), 1966.
GAVE 59  Gaver, D. P., Jr., "Imbedded Markov Chain Analysis of a Waiting-Line Process in Continuous Time," Annals of Mathematical Statistics, 30, 698-720 (1959).
EXERCISES
5.1.
5.2.
(a) Show that the residual life Y_t at time t has distribution
\[ \tilde F_t(y) \triangleq P[Y_t \le y \mid t] = \sum_{k=1}^{\infty}\int_{t}^{t+y}[1 - F(t + y - x)]\,dP[T_k \le x] \]
(b) Observing that T_k ≤ x if and only if α(x), the number of "arrivals" in (0, x], is at least k, that is, P[T_k ≤ x] = P[α(x) ≥ k], show that
\[ \sum_{k=1}^{\infty} P[T_k \le x] = \sum_{k=1}^{\infty} kP[\alpha(x) = k] \]
(c) Show that as t → ∞ the pdf of residual life is
\[ f(y) = \frac{1 - F(y)}{m_1} \]
5.3.
5.4.
5.5. (a) From Eq. (5.86) form Q^{(1)}(1) and show that it gives the expression for q̄ in Eq. (5.63). Note that L'Hospital's rule will be required twice to remove the indeterminacies in the expression for Q^{(1)}(1).
(b) From Eq. (5.105), find the first two moments of the waiting time and compare with Eqs. (5.113) and (5.114).
5.6.
(a)
k]
5.7.
\[ -\lambda P_0(t) \]
where
\[ r(x_0) = \frac{b(x_0)}{1 - B(x_0)} \]
(b) As t → ∞:
(i) \[ \int_0^\infty p_1(x_0)r(x_0)\,dx_0 \]
(ii) \[ p_1(0) \]
(iii) \[ \int_0^\infty p_{k+1}(x_0)r(x_0)\,dx_0 \qquad k \ge 1 \]
and
\[ \frac{\partial R(z, x_0)}{\partial x_0} = [\lambda z - \lambda - r(x_0)]R(z, x_0) \]
(e) \[ zR(z, 0) = \int_0^\infty r(x_0)R(z, x_0)\,dx_0 + \lambda z(z - 1)p_0 \]
(f) Defining R(z) ≜ ∫₀^∞ R(z, x₀) dx₀,
(g) show that p₀ = 1 − ρ (ρ = λx̄), and
(h) show that Q(z) = p₀ + R(z)
bey) and T
x. Let Pk(l )
P[N (I )
k]
[11'
n~k
n!
[1 - B(x)] d x
l o
Jk[11'
-
B(x) dx
In-k
r-< ::
().X)k
-AX
5.10. Consider M/E./1.
(a) Find the polynomial for G*(s).
(b) Solve for S(y) = P[time in system ≤ y].
5.11.
5.12.
Consider the M/G/1 bulk arrival system in the previous problem. Using the method of imbedded Markov chains:
(a) Find the expected queue size. [HINT: Show that q̄ = ρ and use
\[ \overline{v^2} - \bar v = \left.\frac{d^2V(z)}{dz^2}\right|_{z=1} \]
to evaluate the required second moment.]
(b) Show that
\[ Q(z) = \frac{(1-\rho)(1-z)B^*[\lambda - \lambda G(z)]}{B^*[\lambda - \lambda G(z)] - z} \]
Using Little's result, find the ratio W/x̄ of the expected wait on queue to the average service time.
(c) Using the same method (imbedded Markov chain) find the expected number of groups in the queue (averaged over departure times). [HINTS: Show that D(z) = β*(λ − λz), where D(z) is the generating function for the number of groups arriving during the service time for an entire group and where β*(s) is the Laplace transform of the service-time density for an entire group. Also note that β*(s) = G[B*(s)], which allows us to show that \overline{r^2} = (\bar x)^2(\overline{g^2} - \bar g) + \overline{x^2}\,\bar g, where \overline{r^2} is the second moment of the group service time.]
(d) Using Little's result, find W_g, the expected wait on queue for a group (measured from the arrival time of the group until the start of service of the first member of the group) and show that
\[ \frac{W_g}{\bar x} = \frac{\rho\bar g}{2(1-\rho)}\left[1 + \frac{C_b^2}{\bar g} + C_g^2\right] \]
(e)
5.13.
={
qn + V n - m
vn
qn
<m
m-'
Q(z) =
(c)
I Pk(zm - Zk)
Zm [;:~A _ AZ)r' _
z - 1
= _m
__
zm -
+ A(I
5.14. Consider an M/G/1 system with bulk service. Whenever the server becomes free, he accepts two customers from the queue into service simultaneously, or, if only one is on queue, he accepts that one; in either case, the service time for the group (of size 1 or 2) is taken from B(x). Let q_n be the number of customers remaining after the nth service instant. Let v_n be the number of arrivals during the nth service. Define B*(s), Q(z), and V(z) as transforms associated with the random variables x̃, q̃, and ṽ as usual. Let ρ = λx̄/2.
(a) Using the method of imbedded Markov chains, find
q_n, v_n
(a)
(b)
\[ V(z) = \sum_{k=0}^{\infty} P[v_n = k]z^k \]
(c) in terms of
\[ Q_n(z) = \sum_{k=0}^{\infty} P[q_n = k]z^k \]
(d)
(e)
Consider an M/G/1 queue. Let E be the event that T sec have elapsed since the arrival of the last customer. We begin at a random time and measure the time w̃ until event E next occurs. This measurement may involve the observation of many customer arrivals before E occurs.
(a) Let A(t) be the interarrival-time distribution for those intervals during which E does not occur. Find A(t).
(b) Find A*(s) = ∫₀^∞ e^{−st} dA(t).
(c) Find W*(s | n) = ∫₀^∞ e^{−sw} dW(w | n), where W(w | n) = P[time to event E ≤ w | n arrivals occur before E].
(d) Find W*(s) = ∫₀^∞ e^{−sw} dW(w), where W(w) = P[time to event E ≤ w].
(e) Find the mean time to event E.
. 5.18.
x is
= nq sec) = K
11
P[service time
(a)
(b)
(c)
(d)
(e)
=
=
=
).q
10
so me multiple o f q sec
0 , 1,2" . ,
V( z)
L vmz m
ffl = O
(f)
).q
00
and
G(z)
L gmzm
m= O
(b)
x? T
={
<T
5.21.
5.22.
(d)
-;;
+ p.
..
~(n) X".k-;;::;;
XT
.L..
q k- 1 k
Let
co
QT(z)
= LPkTzk
k ~O
QT(z)
(e)
5,23.
Find
iV,
< q prove
that
again. Let F(z) = Σ_{j=1}^∞ f_j z^j be the z-transform for the number of customers awaiting service when the server returns from vacation to find at least one customer waiting (that is, f_j is the probability that at the initiation of a busy period the server finds j customers awaiting service).
(a) Derive an expression which gives q_{n+1} in terms of q_n, v_{n+1}, and j (the number of customer arrivals during the server's vacation).
(b) Derive an expression for Q(z), where Q(z) = lim_{n→∞} E[z^{q_n}], in terms of p₀ (equal to the probability that a departing customer leaves 0 customers behind). [HINT: condition on j.]
(c) Show that p₀ = (1 − ρ)/F^{(1)}(1) where F^{(1)}(1) = dF(z)/dz|_{z=1} and ρ = λx̄.
(d) Assume now that the service vacation will end whenever a new customer enters the empty system. For this case find F(z) and show that when we substitute it back into our answer for (b) then we arrive at the classical M/G/1 solution.
5.24.
1V*(s) = Po +
(b)
5.25.
r'" I
Jo
klCl
pk(Xo)[B*(s )k-1
L"
e f~Or( u )d u d X o
e-' Y
r(y
+ x o) e_f~+ZOr(U)dU d y
N= J;~ J.T
F ro m Eq . (5.98) esta blish the seco nd- mo men t relationship
N 2 -
(c)
= A. 2 S 2
+ I) =
;'k Sk
We have so far studied systems of the type M/M/1 and its variants (elementary queueing theory) and M/G/1 (intermediate queueing theory). The next natural system to study is G/M/1, in which we have an arbitrary interarrival time distribution A(t) and an exponentially distributed service time. It turns out that the m-server system G/M/m is almost as easy to study as is the single-server system G/M/1, and so we proceed directly to the m-server case. This study falls within intermediate queueing theory along with M/G/1, and it too may be solved using the method of the imbedded Markov chain, as elegantly presented by Kendall [KEND 51].

THE QUEUE G/M/m
Figure 6.1 The imbedded Markov points for G/M/m: q_n′ is the number of customers found in the system just prior to the arrival of C_n.

We use q_n′ for this random variable to distinguish it from q_n, the number of customers left behind by the departure of C_n in the M/G/1 system. In Figure 6.1 we show a sequence of arrival times and identify them as critical points imbedded in the time axis. It is clear that the sequence {q_n′} forms a discrete-state Markov chain. Defining
\[ v'_{n+1} \triangleq \text{number of customers served between the arrivals of } C_n \text{ and } C_{n+1} \]
we have
\[ q'_{n+1} = q'_n + 1 - v'_{n+1} \tag{6.1} \]
We must now calculate the transition probabilities associated with this Markov chain, and so we define
\[ p_{ij} = P[q'_{n+1} = j \mid q'_n = i] \tag{6.2} \]
Clearly
\[ p_{ij} = 0 \qquad \text{for } j > i + 1 \tag{6.3} \]
since there are at most i + 1 present between the arrival of C_n and C_{n+1}. The Markov state-transition-probability diagram has transitions such as shown in Figure 6.2; in this figure we show only the transitions out of state E_i.

Figure 6.2 State-transition-probability diagram for the G/M/m imbedded Markov chain.
We are concerned with steady-state results only and so we must inquire as to the conditions under which this Markov chain will be ergodic. It may easily be shown that the condition for ergodicity is, as we would expect, λ < mμ, where λ is the average arrival rate associated with our input distribution and μ is the parameter associated with our exponential service time (that is, x̄ = 1/μ). As defined in Chapter 2 and as used in Section 3.5, we define the utilization factor for this system as
\[ \rho \triangleq \frac{\lambda}{m\mu} \tag{6.4} \]
Once again this is the average rate at which work enters the system (λx̄ = λ/μ sec of work per elapsed second) divided by the maximum rate at which the system can do work (m sec of work per elapsed second). Thus our condition for ergodicity is simply ρ < 1. In the ergodic case we are assured that an equilibrium probability distribution will exist describing the number of customers present at the arrival instants; thus we define
\[ r_k = \lim_{n\to\infty} P[q'_n = k] \tag{6.5} \]
These limiting probabilities satisfy
\[ \mathbf{r} = \mathbf{r}P \tag{6.6} \]
where
\[ \mathbf{r} \triangleq [r_0, r_1, r_2, \ldots] \tag{6.7} \]
and P is the matrix whose elements are the one-step transition probabilities p_ij.
Our first task then is to find these one-step transition probabilities. We must consider four regions in the (i, j) plane as shown in Figure 6.3, which gives the case m = 6. Regarding the region labeled 1, we already know from Eq. (6.3) that p_ij = 0 for i + 1 < j. Now for region 2 let us consider the range j ≤ i + 1 ≤ m, which is the case in which no customers are waiting and all present are engaged with their own server. During the interarrival period, we
Figure 6.3 The four regions of the (i, j) plane (shown for m = 6).
must therefore have exactly i + 1 − j departures within t sec after C_n arrives (given q_n′ = i). Since each of the i + 1 customers present is served by its own memoryless exponential server, each departs by time t independently with probability 1 − e^{−μt}, and, noting that \binom{i+1}{i+1-j} = \binom{i+1}{j}, we have
\[ p_{ij} = \int_0^\infty \binom{i+1}{j}(e^{-\mu t})^{j}(1 - e^{-\mu t})^{i+1-j}\,dA(t) \qquad j \le i + 1 \le m \tag{6.9} \]
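Each row of region 2 must, together with Eq. (6.3), form a probability distribution. The following sketch (ours; the exponential interarrival choice and rate values are illustrative assumptions, not from the text) evaluates Eq. (6.9) in closed form with exact rational arithmetic and checks the row sums:

```python
from fractions import Fraction
from math import comb

def p_region2(i, j, lam, mu):
    """Closed form of Eq. (6.9) for the special case of exponential
    interarrivals A(t) = 1 - e^(-lam t); valid when j <= i+1 <= m."""
    total = Fraction(0)
    for k in range(i + 2 - j):   # binomial expansion of (1 - e^{-mu t})^{i+1-j}
        total += comb(i + 1 - j, k) * (-1)**k * Fraction(lam, lam + mu * (j + k))
    return comb(i + 1, j) * total

lam, mu = 2, 3
for i in range(5):               # each row must sum to exactly one
    assert sum(p_region2(i, j, lam, mu) for j in range(i + 2)) == 1
```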
Next consider the range m ≤ j ≤ i + 1, i ≥ m (region 3),* which corresponds to the simple case in which all m servers are busy throughout the interarrival interval. Under this assumption (that all m servers remain busy), since each service time is exponentially distributed (memoryless), the number of customers served during this interval will be Poisson distributed (in fact it is a pure Poisson death process) with parameter mμ; that is, defining "t, all m busy" as the event that t_{n+1} = t and all m servers remain busy during t_{n+1}, we have
\[ P[k \text{ customers served} \mid t, \text{ all } m \text{ busy}] = \frac{(m\mu t)^k}{k!}e^{-m\mu t} \]
or
\[ p_{ij} = \int_0^\infty \frac{(m\mu t)^{i+1-j}}{(i+1-j)!}e^{-m\mu t}\,dA(t) \qquad m \le j \le i+1 \tag{6.10} \]
Note that in Eq. (6.10) the indices i and j appear only as the difference i + 1 − j, and so it behooves us to define a new quantity with a single index:
\[ \beta_{i+1-j} \triangleq p_{ij} \qquad m \le j \le i+1 \tag{6.11} \]
that is,
\[ \beta_n = \int_0^\infty \frac{(m\mu t)^n}{n!}e^{-m\mu t}\,dA(t) \qquad 0 \le n \le i+1-m \tag{6.12} \]
The last case we must consider (region 4) is j < m < i + 1, which describes the situation where C_n arrives to find m customers in service and i − m waiting in queue (which he joins); upon the arrival of C_{n+1} there are exactly j customers, all of whom are in service. If we assume that it requires y sec until the queue empties then one may calculate p_ij in a straightforward manner to yield (see Exercise 6.1)
\[ p_{ij} = \int_{t=0}^{\infty}\int_{y=0}^{t}\binom{m}{j}(1 - e^{-\mu(t-y)})^{m-j}e^{-j\mu(t-y)}\,\frac{(m\mu y)^{i-m}}{(i-m)!}\,m\mu e^{-m\mu y}\,dy\,dA(t) \qquad j < m < i+1 \tag{6.13} \]

* The point i = m − 1, j = m can properly lie either in region 2 or region 3.
Thus Eqs. (6.3), (6.9), (6.12), and (6.13) give the complete description of the one-step transition probabilities for the G/M/m system.
Having established the form for our one-step transition probabilities we may place them in the transition matrix
\[
P = \begin{bmatrix}
p_{00} & p_{01} & 0 & 0 & & \cdots \\
p_{10} & p_{11} & p_{12} & 0 & & \cdots \\
p_{20} & p_{21} & p_{22} & p_{23} & & \cdots \\
\vdots & & & & & \\
p_{m-1,0} & p_{m-1,1} & \cdots & p_{m-1,m-1} & \beta_0 & 0 & \cdots \\
p_{m,0} & p_{m,1} & \cdots & p_{m,m-1} & \beta_1 & \beta_0 & 0 & \cdots \\
\vdots & & & & & & & \\
p_{m+n,0} & p_{m+n,1} & \cdots & p_{m+n,m-1} & \beta_{n+1} & \beta_n & \cdots & \beta_0 & 0 & \cdots
\end{bmatrix}
\]
In this matrix all terms above the upper diagonal are zero, and the terms β_n are given through Eq. (6.12). The "boundary" terms denoted in this matrix by their generic symbol p_ij are given either by Eq. (6.9) or (6.13) according to the range of subscripts i and j. Of most importance to us are the transition probabilities β_n.
6.2. CONDITIONAL DISTRIBUTION OF QUEUE SIZE
Note from Figure 6.2 that the system can move up by at most one state, but may move down by many states in any single transition. We consider this motion between states and define (for m − 1 ≤ k)
\[ u_k \triangleq E[\text{number of visits to } E_{k+1} \text{ between successive visits to } E_k] \tag{6.15} \]
We have that the probability of reaching state E_{k+1} no times between returns to state E_k is equal to 1 − β₀ (that is, given we are in state E_k the only way we can reach E_{k+1} before our next visit to state E_k is for no customers to be served, which has probability β₀, and so the probability of not getting to E_{k+1} first is 1 − β₀, the probability of serving at least one). Furthermore, let
\[ \gamma = P[\text{leave } E_{k+1} \text{ and return to } E_{k+1} \text{ before visiting } E_k] = P[\text{leave } E_{k+1} \text{ and return to } E_{k+1} \text{ before visiting any } E_j,\ j \le k] \]
This last is true since a visit to state E_j for j ≤ k must result in a visit to state E_k before next returning to state E_{k+1} (we move up only one state at a time). We note that γ is independent of k so long as k ≥ m − 1 (i.e., all m servers are busy). We have the simple calculation
\[ P[n \text{ occurrences of } E_{k+1} \text{ between returns to } E_k] = \gamma^{n-1}(1-\gamma)\beta_0 \qquad n \ge 1 \]
and so
\[ u_k = \sum_{n=1}^{\infty} n\gamma^{n-1}(1-\gamma)\beta_0 = \frac{\beta_0}{1-\gamma} \]
Thus, for k ≥ m − 1,
\[ u = u_k = \frac{\beta_0}{1-\gamma} \tag{6.16} \]
Letting N_k(t) denote the number of visits to state E_k in (0, t), our argument gives
\[ \sigma = \lim_{t\to\infty}\frac{N_{k+1}(t)}{N_k(t)} = \frac{\beta_0}{1-\gamma} \qquad k \ge m-1 \tag{6.17} \]
However, the limit is merely the ratio of the steady-state probability of finding the system in state E_{k+1} to the probability of finding it in state E_k. Consequently, we have established
\[ \frac{r_{k+1}}{r_k} = \sigma \qquad k \ge m-1 \tag{6.18} \]
\[ r_k = K\sigma^k \qquad k \ge m-1 \tag{6.19} \]
for some constant K. This is a basic result, which says that the distribution of the number of customers found at the arrival instants is geometric for the case k ≥ m − 1. It remains for us to find σ and K, as well as r_k for k < m − 1.
Our intuitive reasoning (which may easily be made rigorous by results from renewal theory) has led us to the basic equation (6.19). We could have "pulled this out of a hat" by guessing that the solution to Eq. (6.6) for the probability vector r ≜ [r₀, r₁, r₂, ...] might perhaps be of the form
\[ r_k = K\sigma^k \tag{6.20} \]
This flash of brilliance would, of course, have been correct (as our calculations have just shown); once we suspect this result we may easily verify it by considering the kth equation (k ≥ m) in the set (6.6), which reads
\[ r_k = K\sigma^k = \sum_{i=0}^{\infty} r_i p_{ik} = \sum_{i=k-1}^{\infty} r_i p_{ik} = \sum_{i=k-1}^{\infty} K\sigma^i\beta_{i+1-k} \]
Of course we know β_n from Eq. (6.12), and so, carrying out this last summation:
\[ K\sigma^k = K\sigma^{k-1}\sum_{n=0}^{\infty}\sigma^n\int_0^\infty\frac{(m\mu t)^n}{n!}e^{-m\mu t}\,dA(t) = K\sigma^{k-1}\int_0^\infty e^{-(m\mu - m\mu\sigma)t}\,dA(t) \]
Thus σ must satisfy
\[ \sigma = A^*(m\mu - m\mu\sigma) \tag{6.21} \]
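Equation (6.21) is easily solved by successive substitution. As a sketch (ours, not the text's; the Poisson-arrival choice and parameter values are illustrative), the M/M/m case A*(s) = λ/(s + λ) must return σ = ρ = λ/mμ:

```python
# Successive substitution on Eq. (6.21) for Poisson arrivals,
# where A*(s) = lam/(s + lam); the root in (0, 1) is sigma = rho.
lam, mu, m = 2.0, 1.0, 3

sigma = 0.5
for _ in range(200):
    sigma = lam / (m * mu - m * mu * sigma + lam)   # A*(m mu - m mu sigma)
assert abs(sigma - lam / (m * mu)) < 1e-10          # sigma = rho = 2/3
```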
The probability that an arriving customer must queue is the probability that he finds all m servers busy:
\[ P[\text{arrival queues}] = \sum_{k=m}^{\infty} r_k = \sum_{k=m}^{\infty} K\sigma^k = \frac{K\sigma^m}{1-\sigma} \tag{6.22} \]
and so
\[ P[\text{queue size} = n \mid \text{arrival queues}] = \frac{K\sigma^{n+m}}{K\sigma^m/(1-\sigma)} = (1-\sigma)\sigma^n \qquad n \ge 0 \tag{6.23} \]
Thus we conclude that the conditional queue length distribution (given that a queue exists) is geometric for any G/M/m system.
6.3. CONDITIONAL DISTRIBUTION OF WAITING TIME

Consider a customer who arrives to find all m servers busy and n customers waiting in queue ahead of him. He cannot enter service until n + 1 departures have occurred, and while all m servers are busy departures form a Poisson process at rate mμ; defining W*(s | n) as the transform of his conditional waiting-time pdf, we therefore have
\[ W^*(s \mid n) = \left(\frac{m\mu}{s + m\mu}\right)^{n+1} \tag{6.24} \]
But clearly, using Eq. (6.23) to remove the condition on n,
\[ W^*(s \mid \text{arrival queues}) = \sum_{n=0}^{\infty}(1-\sigma)\sigma^n\left(\frac{m\mu}{s + m\mu}\right)^{n+1} = \frac{m\mu(1-\sigma)}{s + m\mu - m\mu\sigma} \tag{6.25} \]
This is easily inverted to give the conditional waiting-time distribution
\[ W(y \mid \text{arrival queues}) = 1 - e^{-m\mu(1-\sigma)y} \qquad y \ge 0 \tag{6.26} \]
Thus the conditional waiting time (given that a customer must queue) is exponentially distributed with parameter mμ(1 − σ), where σ is the unique root in the range 0 < σ < 1 of the functional equation (6.21). We are still searching for the distribution r_k and have carried that solution to the point of Eq. (6.20); we have as yet to evaluate the constant K as well as the first m − 1 terms in that distribution. Before we proceed with these last steps let us study an important special case.
6.4. THE QUEUE G/M/1

This is perhaps the most important system and forms the "dual" to the system M/G/1. Since m = 1, Eq. (6.19) gives us the solution for r_k for all values of k, that is,
\[ r_k = K\sigma^k \qquad k = 0, 1, 2, \ldots \]
K is now easily evaluated since these probabilities must sum to unity. From this we obtain immediately
\[ r_k = (1-\sigma)\sigma^k \qquad k = 0, 1, 2, \ldots \tag{6.27} \]
where σ is the unique root in (0, 1) of
\[ \sigma = A^*(\mu - \mu\sigma) \tag{6.28} \]
For the waiting time we condition on the event A that an arrival finds the system busy (and on its complement A′):
\[ P[\text{queueing time} > y] = P[\text{queueing time} > y \mid A]P[A] + P[\text{queueing time} > y \mid A']P[A'] \tag{6.29} \]
Clearly, the last term in this equation is zero; the remaining conditional probability in this last expression may be obtained by integrating Eq. (6.26) from y to infinity for m = 1; this computation gives e^{−μ(1−σ)y}, and since σ is the
252
TH E QUEUE
G/M/m
- (6.30)
We have the remark able conclu sion that the unconditional waitin g-time
distribution is exponential (with a jump of size 1 - a at the origin) for the
system G/M /l. If we compare thi s result to (5.123) and Figure 5.9, which
gives the waitin g-time distribution for M/M /l, we see that the results agree
with p replacing a. That is, the queueing-time distribution for G/M /l is of
t he same f orm as for M/M / 1!
By straightforward calculation, we also have that the mean wait in G/M/1 is

    W = σ/[μ(1 − σ)]   (6.31)
Example

Let us now illustrate this method for the example M/M/1. Since A(t) = 1 − e^{−λt}, we have

    A*(s) = λ/(s + λ)   (6.32)

Thus Eq. (6.28) becomes

    σ = λ/(μ − μσ + λ)

or

    μσ² − (μ + λ)σ + λ = 0

which yields

    (σ − 1)(μσ − λ) = 0

Of these two solutions for σ, the case σ = 1 is unacceptable due to stability conditions (0 < σ < 1) and therefore the only acceptable solution is

    σ = λ/μ = ρ,   M/M/1   (6.33)
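These roots are also easy to obtain numerically. The following sketch (Python; not part of the original text) solves σ = A*(mμ − mμσ) by bisection — the function A*(mμ(1 − σ)) − σ is positive at σ = 0 and negative just below σ = 1 in a stable system — and, applied to M/M/1, recovers σ = ρ:

```python
def solve_sigma(A_star, m, mu, tol=1e-12):
    """Find the root sigma in (0, 1) of sigma = A*(m*mu - m*mu*sigma),
    Eq. (6.21), by bisection on g(sigma) = A*(m*mu*(1 - sigma)) - sigma."""
    lo, hi = 0.0, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if A_star(m * mu * (1.0 - mid)) - mid > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# M/M/1: A*(s) = lam/(s + lam); Eq. (6.33) predicts sigma = rho = lam/mu.
lam, mu = 3.0, 4.0
sigma = solve_sigma(lambda s: lam / (s + lam), m=1, mu=mu)
print(round(sigma, 6))  # 0.75
```

The same routine handles any interarrival transform A*(s) given as a function, and any number of servers m.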
Example

As a second (slightly more interesting) example let us consider a G/M/1 system with an interarrival-time distribution such that

    A*(s) = 2μ²/[(s + μ)(s + 2μ)]   (6.34)

Substituting this into Eq. (6.28) gives

    σ = 2μ²/[(μ − μσ + μ)(μ − μσ + 2μ)] = 2/[(2 − σ)(3 − σ)]

which reduces to the cubic

    σ³ − 5σ² + 6σ − 2 = 0   (6.35)

We know for sure that σ = 1 is always a root of Eq. (6.28), and this permits the straightforward factoring

    (σ − 1)(σ − 2 − √2)(σ − 2 + √2) = 0

The only root satisfying 0 < σ < 1 is σ = 2 − √2, and so from Eq. (6.27)

    r_k = (√2 − 1)(2 − √2)^k,   k = 0, 1, 2, …   (6.36)

Similarly we find

    W(y) = 1 − (2 − √2)e^{−(√2−1)μy},   y ≥ 0   (6.37)
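As a quick numerical sanity check (plain Python, not part of the original text), one can verify that σ = 2 − √2 is indeed a root of the cubic, that it is a fixed point of σ = A*(μ − μσ), and that the coefficient in Eq. (6.36) is √2 − 1:

```python
import math

mu = 1.0   # any mu > 0 will do; sigma does not depend on its value here
A_star = lambda s: 2 * mu**2 / ((s + mu) * (s + 2 * mu))   # Eq. (6.34)

sigma = 2 - math.sqrt(2)          # claimed root in (0, 1)

# sigma solves the cubic of Eq. (6.35) ...
assert abs(sigma**3 - 5 * sigma**2 + 6 * sigma - 2) < 1e-12
# ... and is a fixed point of Eq. (6.28)
assert abs(A_star(mu - mu * sigma) - sigma) < 1e-12
# coefficient of Eq. (6.36): 1 - sigma = sqrt(2) - 1
assert abs((1 - sigma) - (math.sqrt(2) - 1)) < 1e-12
print("sigma =", round(sigma, 6))   # 0.585786
```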
6.5. THE QUEUE G/M/m

We now return to the general system G/M/m. The solution for r_k (k ≥ m − 1) appears in the form of Eq. (6.20); we may factor out the term Kσ^{m−1} to obtain

    r_k = Jσ^{k−m+1},   k = m − 1, m, m + 1, …   (6.38)

where for the first m − 1 terms we define

    R_k ≜ r_k/J,   k = 0, 1, …, m − 2   (6.39)

and

    J ≜ Kσ^{m−1}   (6.40)
We have as yet not used the first m − 1 equations represented by the matrix equation (6.6). We now require them for the evaluation of our unknown terms (of which there are m − 1). In terms of our one-step transition probabilities p_ik we then have

    R_k = Σ_{i=k−1}^{m−2} R_i p_ik + Σ_{i=m−1}^∞ σ^{i+1−m} p_ik,   k = 0, 1, …, m − 2

Solving for R_{k−1}, we obtain the recurrence

    R_{k−1} = (1/p_{k−1,k}) [R_k − Σ_{i=k}^{m−2} R_i p_ik − Σ_{i=m−1}^∞ σ^{i+1−m} p_ik]   (6.41)

which may be solved successively downward for R_{m−2}, R_{m−3}, …, R_0 [note from Eq. (6.38) that R_{m−1} = 1]. The constant J is then fixed by the requirement that the probabilities sum to unity:

    1 = J Σ_{k=0}^{m−2} R_k + J Σ_{k=m−1}^∞ σ^{k−m+1}

or

    1/J = 1/(1 − σ) + Σ_{k=0}^{m−2} R_k   (6.42)
The probability that an arriving customer need not queue is

    W(0) = Σ_{k=0}^{m−1} r_k = J Σ_{k=0}^{m−1} R_k   (6.43)

where again R_{m−1} = 1. A customer who arrives to find k ≥ m in the system must wait for k − m + 1 service completions, each at rate mμ, and so his conditional waiting-time pdf is the (k − m + 1)-stage Erlangian

    mμ(mμx)^{k−m} e^{−mμx}/(k − m)!

If we now remove the condition on k we may write the unconditional distribution as

    W(y) = W(0) + Σ_{k=m}^∞ r_k ∫_0^y [mμ(mμx)^{k−m}/(k − m)!] e^{−mμx} dx
         = W(0) + J ∫_0^y Σ_{k=m}^∞ [(mμ)(mμx)^{k−m} σ^{k−m+1}/(k − m)!] e^{−mμx} dx
         = W(0) + Jσ ∫_0^y mμ e^{−mμx(1−σ)} dx   (6.44)
We may now use the expression for J in Eq. (6.42) and for W(0) in Eq. (6.43) and carry out the integration in Eq. (6.44) to obtain

    W(y) = 1 − [Jσ/(1 − σ)] e^{−mμ(1−σ)y},   y ≥ 0   (6.45)

This is the final solution for our waiting-time distribution and shows that in the general case G/M/m we still have the exponential distribution (with an accumulation point at the origin) for waiting time!
We may calculate the average waiting time either from Eq. (6.45) or as follows. As we saw, a customer who arrives to find k ≥ m others in the system must wait until k − m + 1 services are complete, each of which takes on the average 1/mμ sec. We now sum over all those cases where our arriving customer must queue:
    E[w] = Σ_{k=m}^∞ [(k − m + 1)/mμ] r_k = (K/mμ) Σ_{k=m}^∞ (k − m + 1)σ^k = Kσ^m/[mμ(1 − σ)²]

6.6. THE QUEUE G/M/2

For m = 2, Eq. (6.20) gives r_k = Kσ^k for k ≥ 1, and since these probabilities must sum to unity,

    Σ_{k=0}^∞ r_k = 1 = r_0 + Σ_{k=1}^∞ Kσ^k   (6.46)
Our task now is to find another relation between K and r_0. This we may do from Eq. (6.41), which (for m = 2, k = 1) states

    R_0 = (1/p_01) [R_1 − Σ_{i=1}^∞ σ^{i−1} p_i1]   (6.47)

But R_1 = 1, and

    p_01 = ∫_0^∞ e^{−μt} dA(t)

(the single customer present must remain in service throughout the interarrival time). This we recognize as

    p_01 = A*(μ)   (6.48)
Similarly,

    p_11 = ∫_0^∞ 2[1 − e^{−μt}] e^{−μt} dA(t) = 2A*(μ) − 2A*(2μ)   (6.49)

Also for i = 2, 3, 4, …, we have

    p_i1 = ∫_0^∞ ∫_0^t [2μ(2μx)^{i−1}/(i − 1)!] e^{−2μx} e^{−μ(t−x)} dx dA(t)   (6.50)

(the required i departures all occur at rate 2μ since both servers remain busy, and the one remaining customer then stays in service for the rest of the interarrival time). Substituting Eqs. (6.49) and (6.50) into Eq. (6.47) gives

    R_0 = [1/A*(μ)] [1 − 2A*(μ) + 2A*(2μ) − Σ_{i=2}^∞ σ^{i−1} p_i1]   (6.51)

The summation in this equation may be carried out within the integral signs of Eq. (6.50); recognizing from Eq. (6.21) that σ = A*(2μ − 2μσ), we obtain

    Σ_{i=2}^∞ σ^{i−1} p_i1 = 2A*(2μ) + [2σ/(2σ − 1)][1 − 2A*(μ)]   (6.52)

Using this in Eq. (6.51) we find

    R_0 = [1 − 2A*(μ)]/[(1 − 2σ)A*(μ)]

and so we may express r_0 = J R_0 = KσR_0 as

    r_0 = Kσ[1 − 2A*(μ)]/[(1 − 2σ)A*(μ)]   (6.53)

Thus Eqs. (6.46) and (6.53) give us two equations in our two unknowns K and r_0, which when solved simultaneously lead to

    r_0 = (1 − σ)[1 − 2A*(μ)]/[1 − σ − A*(μ)]
Comparing Eq. (6.56) with our results from Chapter 3 [Eqs. (3.37) and (3.39)] we find that they agree for m = 2.

This completes our study of the G/M/m queue. Some further results of interest may be found in [DESM 73]. In the next chapter, we view transforms as probabilities and gain considerable reduction in the analytic effort required to solve equilibrium and transient queueing problems.
REFERENCES

COHE 69  Cohen, J. W., The Single Server Queue, Wiley (New York), 1969.
DESM 73  De Smit, J. H. A., "On the Many Server Queue with Exponential Service Times," Advances in Applied Probability, 5, 170-182 (1973).
KEND 51  Kendall, D. G., "Some Problems in the Theory of Queues," Journal of the Royal Statistical Society, Ser. B, 13, 151-185 (1951).
EXERCISES

6.3. Consider M/M/m.
     (a) How do p_k and r_k compare?
     (b) Compare Eqs. (6.22) and (3.40).
6.4. Consider a G/M/1 system with λ₁ = 2, λ₂ = 1, μ = 2, and ρ = 5/8.
     (a) Find σ.
     (b) Find r_k.
     (c) Find w(y).
     (d) Find W.
6.7. Consider a D/M/1 system with μ = 2 and with the same ρ as in the previous exercise.
     (a) Find σ (correct to two decimal places).
     (b) Find r_k.
     (c) Find w(y).
     (d) Find W.
6.8. Consider a G/M/1 queueing system with room for at most two customers (one in service plus one waiting). Find r_k (k = 0, 1, 2) in terms of μ and A*(s).
7
The Method of Collective Marks
When one studies stochastic processes such as in queueing theory, one finds that the work divides into two parts. The first part typically requires a careful probabilistic argument in order to arrive at expressions involving the random variables of interest.* The second part is then one of analysis, in which the formal manipulation of symbols takes place either in the original domain or in some transformed domain. Whereas the probabilistic arguments typically must be made with great care, they nevertheless leave one with a comfortable feeling that the "physics" of the situation is constantly within one's understanding and grasp. On the other hand, whereas the analytic manipulations that one carries out in the second part tend to be rather straightforward (albeit difficult) formal operations, one is unfortunately left with the uneasy feeling that these manipulations relate back to the original problem in no clearly understandable fashion. This "nonphysical" aspect of problem solving typically is taken on when one moves into the domain of transforms (either Laplace or z-transforms).

In this chapter we demonstrate that one may deal with transforms and still maintain a handle on the probabilistic arguments taking place as these transforms are manipulated. There are two separate operations involved: the "marking" of customers; and the observation of "catastrophe" processes. Together these methods are referred to as the method of collective marks. Both operations need not necessarily be used simultaneously, and we study them separately below. This material is drawn principally from [RUNN 65]; these ideas were introduced by van Dantzig [VAN 48] in order to expose the probabilistic interpretation for transforms.
7.1. THE MARKING OF CUSTOMERS

Example 1: A Poisson Arrival Process

Consider a Poisson arrival process at rate λ, and let each arriving customer be marked, independently of everything else, with probability 1 − z; that is,

    P[customer is marked] = 1 − z   (7.1)
    P[customer is not marked] = z   (7.2)

We ask for the probability

    q(z, t) ≜ P[no marked customers arrive in (0, t)]   (7.3)

* As, for example, the arguments leading up to Eqs. (5.31) and (6.1).
It is clear that k customers will arrive in the interval (0, t) with probability (λt)^k e^{−λt}/k!. Moreover, with probability z^k, none of these k customers will be marked; this last is true since marking takes place independently among customers. Now summing over all values of k we have immediately that

    q(z, t) = Σ_{k=0}^∞ [(λt)^k e^{−λt}/k!] z^k = e^{λt(z−1)}   (7.4)
Going back to Eq. (2.134) we see that Eq. (7.4) is merely the generating function for a Poisson arrival process. We thus conclude that the generating function for this arrival process may also be interpreted as the probabilistic quantity expressed in Eq. (7.3). This will not be the first time we may give a probabilistic interpretation for a generating function!
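Equation (7.4) invites a direct experiment. The following Monte Carlo sketch (Python; not part of the text, with illustrative values of λ, z, and t) generates Poisson arrivals in (0, t), marks each with probability 1 − z, and estimates the probability that no marked arrival occurs, for comparison with e^{λt(z−1)}:

```python
import math
import random

def estimate_q(lam, z, t, trials=100_000, seed=7):
    """Monte Carlo estimate of q(z, t) = P[no marked arrival in (0, t)]."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        clock = rng.expovariate(lam)         # first arrival instant
        marked = False
        while clock <= t:
            if rng.random() < 1 - z:         # this arrival is marked
                marked = True
                break
            clock += rng.expovariate(lam)
        ok += not marked
    return ok / trials

lam, z, t = 2.0, 0.6, 1.5
print(round(estimate_q(lam, z, t), 3))
print(round(math.exp(lam * t * (z - 1)), 3))  # Eq. (7.4): 0.301
```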
Example 2: M/M/∞

We consider the birth-death queueing system with an infinite number of servers. We also assume at time t = 0 that there are i customers present. The parameters of our system as usual are λ and μ [i.e., A(t) = 1 − e^{−λt} and B(x) = 1 − e^{−μx}].

We are interested in the quantity

    P_k(t) ≜ P[k customers in system at time t]   (7.5)

through its z-transform

    P(z, t) = Σ_{k=0}^∞ P_k(t) z^k   (7.6)
Once again we mark customers according to Eqs. (7.1) and (7.2). In analogy with Example 1, we recognize that Eq. (7.6) may be interpreted as the probability that the system contains no marked customers at time t (where the term z^k again represents the probability that none of the k customers present is marked). Here then is our crucial observation: We may calculate P(z, t) directly by finding the probability that there are no marked customers in the system at time t, rather than calculating P_k(t) and then finding its z-transform!

We proceed as follows: We need merely find the probability that none of the customers still present in the system at time t is marked, and this we do by accounting for all customers present at time 0 as well as all customers who arrive in the interval (0, t). For any customer present at time 0 we may calculate the probability that he is still present at time t and is marked as (1 − z)[1 − B(t)], where the first factor gives the probability that our customer was marked in the first place and the second factor gives the probability that his service time is greater than t. Clearly, then, this quantity subtracted from unity is the probability that a customer originally present is not a marked customer present at time t; and so we have

    P[customer present initially is not a marked customer present at time t] = 1 − (1 − z)e^{−μt}
Now for the new customers who enter in the interval (0, t), we have as before P[k arrivals in (0, t)] = (λt)^k e^{−λt}/k!. Given that k have arrived in this interval, their arrival instants are uniformly distributed over this interval [see Eq. (2.136)]. Let us consider one such arriving customer and assume that he arrives at a time x < t. Such a customer will not be a marked customer present at time t with probability

    P[new arrival is not a marked customer present at time t | he arrived at x] = 1 − (1 − z)e^{−μ(t−x)},   0 ≤ x ≤ t   (7.7)

Averaging over the uniformly distributed arrival instant, we have

    P[new arrival still in system at t] = (1/t) ∫_0^t e^{−μ(t−x)} dx = (1 − e^{−μt})/μt   (7.8)

Unconditioning the arrival time from Eq. (7.7) as shown in Eq. (7.8), we have

    P[new arrival is not a marked customer present at t] = 1 − (1 − z)(1 − e^{−μt})/μt
Thus we may calculate the probability that there are no marked customers at time t as follows:

    P(z, t) = Σ_{k=0}^∞ [(λt)^k e^{−λt}/k!] [1 − (1 − z)(1 − e^{−μt})/μt]^k [1 − (1 − z)e^{−μt}]^i
            = [1 − (1 − z)e^{−μt}]^i exp[−(λ/μ)(1 − z)(1 − e^{−μt})]   (7.9)

It should be clear to the student that the usual method for obtaining this result would have been extremely complex.
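Equation (7.9) is easy to sanity-check numerically; the sketch below (Python, with illustrative parameters not from the text) verifies that P(1, t) = 1 and that ∂P/∂z at z = 1 reproduces the known transient mean i·e^{−μt} + (λ/μ)(1 − e^{−μt}):

```python
import math

def P(z, t, lam, mu, i):
    """Transient z-transform of M/M/inf occupancy, Eq. (7.9), with i
    customers present at t = 0."""
    decay = math.exp(-mu * t)
    initial = (1 - (1 - z) * decay) ** i
    newcomers = math.exp(-(lam / mu) * (1 - z) * (1 - decay))
    return initial * newcomers

lam, mu, i, t = 3.0, 2.0, 4, 0.7

print(P(1.0, t, lam, mu, i))                         # 1.0 (probabilities sum to 1)

h = 1e-6                                             # numerical d/dz at z = 1
mean_numeric = (P(1.0, t, lam, mu, i) - P(1.0 - h, t, lam, mu, i)) / h
mean_exact = i * math.exp(-mu * t) + (lam / mu) * (1 - math.exp(-mu * t))
print(round(mean_numeric, 3), round(mean_exact, 3))  # both near 2.116
```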
Example 3: M/G/1

In this example we consider the FCFS M/G/1 system. Recall that the random variables w_n, t_{n+1}, t_{n+2}, …, x_n, x_{n+1}, … are all independent of each other. As usual, we define B*(s) and W_n*(s) as the Laplace transforms for the service-time pdf b(x) and the waiting-time pdf w_n(y) for C_n, respectively. We define the event

    {no M in w_n} ≜ {no marked customers arrive during the waiting time w_n of C_n}   (7.10)
We wish to find the probability of this event, that is, P[no M in w_n]. Conditioning on the number of arriving customers and on the waiting time w_n, and then removing these conditions, we have

    P[no M in w_n] = Σ_{k=0}^∞ ∫_0^∞ [(λy)^k/k!] e^{−λy} z^k dW_n(y) = ∫_0^∞ e^{−λy(1−z)} dW_n(y) = W_n*(λ − λz)   (7.11)

Thus once again we have a very simple probabilistic interpretation for the (Laplace) transform of an important distribution. By identical arguments we may arrive at

    P[no M in x_n] = B*(λ − λz)   (7.12)
This last gives us another interpretation for an old expression we have seen in Chapter 5.

Now comes a startling insight! It is clear that the arrival of customers during the waiting time of C_n and the arrival of customers during the service time of C_n must be independent events, since these are nonoverlapping intervals and our arrival process is memoryless. Thus the events of no marked customers arriving in each of these two disjoint intervals of time must be independent, and so the probability that no marked customers arrive in the union of these two disjoint intervals must be the product of the probabilities that none such arrive in each of the intervals separately. Thus we may write

    P[no M in w_n + x_n] = P[no M in w_n] P[no M in x_n]   (7.13)
                         = W_n*(λ − λz) B*(λ − λz)   (7.14)

On the other hand, conditioning on whether C_{n+1} is marked, we have

    P[no M in w_n + x_n] = P[no M in w_n + x_n and C_{n+1} marked]
                         + P[no M in w_n + x_n and C_{n+1} not marked]   (7.15)

Furthermore, if C_{n+1} is marked, then no marked customer may arrive during w_n + x_n only if C_{n+1} arrives after C_n departs, that is, only if w_{n+1} = 0; whereas if C_{n+1} is not marked (probability z), the arrivals remaining in w_n + x_n are exactly those that arrive during w_{n+1}, which is positive in this case. Thus we have

    P[no M in w_n + x_n and C_{n+1} marked] = (1 − z)P[w_{n+1} = 0]   (7.16)
[Figure: Time diagram for C_n and C_{n+1}, showing C_n's wait w_n and service x_n, the interval w_{n+1} during which C_{n+1} waits in queue, and the "arrivals of interest" that must contain no marked customer.]
From these observations and the result of Eq. (7.11) we may write Eq. (7.15) as

    W_n*(λ − λz)B*(λ − λz) = (1 − z)P[w_{n+1} = 0] + z W_{n+1}*(λ − λz)   (7.17)

Passing to the limit as n → ∞ [with W*(s) ≜ lim_{n→∞} W_n*(s) and P[w = 0] = W(0) = 1 − ρ], this becomes

    W*(λ − λz)B*(λ − λz) = (1 − z)(1 − ρ) + z W*(λ − λz)

If we make the change of variable s = λ − λz and solve for W*(s), we obtain

    W*(s) = s(1 − ρ)/[s − λ + λB*(s)]   (7.18)

which, of course, is the P-K transform equation for waiting time.
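As a check on Eq. (7.18), the sketch below (Python; parameters illustrative, not from the text) evaluates the P-K transform for M/M/1, where B*(s) = μ/(s + μ), compares it with the closed form (1 − ρ)(s + μ)/(s + μ − λ) to which Eq. (7.18) reduces algebraically in this case, and recovers the mean wait from −dW*(s)/ds at s = 0:

```python
def W_star(s, lam, mu):
    """P-K transform, Eq. (7.18), specialized to M/M/1: B*(s) = mu/(s + mu)."""
    rho = lam / mu
    B = mu / (s + mu)
    return s * (1 - rho) / (s - lam + lam * B)

lam, mu = 1.0, 2.0
rho = lam / mu

# Eq. (7.18) reduces to (1 - rho)(s + mu)/(s + mu - lam) for M/M/1
for s in (0.3, 1.0, 2.5):
    closed = (1 - rho) * (s + mu) / (s + mu - lam)
    assert abs(W_star(s, lam, mu) - closed) < 1e-12

# mean wait = -dW*/ds at 0+; for M/M/1 this is lam/(mu(mu - lam)) = 0.5
h = 1e-6
mean_wait = (1.0 - W_star(h, lam, mu)) / h
print(round(mean_wait, 3))   # 0.5
```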
We have demonstrated three examples where the marking of customers has allowed us to argue purely with probabilistic reasoning to derive expressions relating transforms. What we have traded here has been straightforward but tedious analysis for deep but physical probabilistic reasoning. We now consider the catastrophe process.
7.2. THE CATASTROPHE PROCESS

Consider an event whose time of occurrence has pdf f(t), and let an independent Poisson stream of "catastrophes"† be generated at rate y. Since e^{−yt} is the probability that no catastrophe occurs in (0, t), the probability that our event occurs before the first catastrophe is

    P[event before catastrophe] = ∫_0^∞ e^{−yt} f(t) dt = F*(y)   (7.19)

where, as usual, f(t) ⇔ F*(s) are Laplace transform pairs. Thus we have a probabilistic interpretation for the Laplace transform (evaluated at the point y) of the pdf for the time of occurrence of the event, namely, it is the probability that an event with this pdf occurs before a Poisson catastrophe at rate y occurs.

† A catastrophe is merely an impressive name given to these generated times to distinguish them from the "event" of interest at time t.
We are interested in deriving an expression for the renewal function H(t), which we recall from Section 5.2 is equal to the expected number of events (renewals) in an interval of length t. We proceed by defining P_n(t) ≜ P[exactly n renewals in (0, t)], so that

    H(t) = Σ_{n=0}^∞ n P_n(t)   (7.20)

But from its definition we see that P_n(t) = F_(n)(t) − F_(n+1)(t) and so we have

    H(t) = Σ_{n=0}^∞ n [F_(n)(t) − F_(n+1)(t)] = Σ_{n=1}^∞ F_(n)(t)   (7.21)
Now generate catastrophes at rate y, and define

    N_y ≜ number of renewals occurring before the first catastrophe   (7.22)

Conditioning on the catastrophe falling in (t, t + dt], an event of probability ye^{−yt} dt, we have

    E[N_y] = ∫_0^∞ H(t) y e^{−yt} dt   (7.23)

Defining the renewal density h(t) ≜ dH(t)/dt, with transform H*(s) = ∫_0^∞ h(t)e^{−st} dt, an integration by parts [using H(0) = 0] reduces this to

    E[N_y] = H*(y)   (7.24)

On the other hand, at least n renewals occur before the catastrophe if and only if the nth renewal epoch, whose pdf is f_(n)(t),† precedes the catastrophe; by Eq. (7.19) this occurs with probability

    P[N_y ≥ n] = [F*(y)]^n   (7.25)

and so

    E[N_y] = Σ_{n=1}^∞ P[N_y ≥ n] = Σ_{n=1}^∞ [F*(y)]^n = F*(y)/[1 − F*(y)]   (7.26)

† We use the subscript (n) to remind the reader of the definition in Eq. (5.110) denoting the n-fold convolution. We see that f_(n)(t) is indeed the n-fold convolution of the lifetime density f(t).
We now have two expressions for E[N_y], and so by equating them (and making the change of variable s = y) we have the final result

    H*(s) = F*(s)/[1 − F*(s)]   (7.27)

This last we recognize as the transform expression for the integral equation of renewal theory [see Eq. (5.21)]; its integral formulation is given in Eq. (5.22).
It is fair to say that the method of collective marks is a rather elegant way to get some useful and important results in the theory of stochastic processes. On the other hand, this method has as yet yielded no results that were not previously known through the application of other methods. Thus at present its principal use lies in providing an alternative way for viewing the fundamental relationships, thereby enhancing one's insight into the probabilistic structure of these processes.

Thus ends our treatment of intermediate queueing theory. In the next part, we venture into the kingdom of the G/G/1 queue.
REFERENCES

RUNN 65  Runnenburg, J. Th., "On the Use of the Method of Collective Marks in Queueing Theory," Proc. Symposium on Congestion Theory, eds. W. L. Smith and W. E. Wilkinson, University of North Carolina Press (1965).
VAN 48   van Dantzig, D., "Sur la méthode des fonctions génératrices," Colloques internationaux du CNRS, 13, 29-45 (1948).
EXERCISES
7.1. Consider the M/G/1 system shown in the figure below with average arrival rate λ and service-time distribution B(x). Customers are served first-come-first-served from queue A until they either leave or receive a sec of service, at which time they join an entrance box as shown in the figure. Customers continue to collect in the entrance box, forming a group until queue A empties and the server becomes free. At this point, the entrance box "dumps" all it has collected as a bulk arrival to queue B. Queue B will receive service until a new arrival (to be referred to as a "starter") joins queue A, at which time the server switches from queue B to serve queue A and the customer who is preempted returns to the head of queue B. The entrance box then begins to fill and the process repeats.

[Figure: Arrivals feed queue A and the server; customers with a sec of service received enter the entrance box, which dumps into queue B; completed customers depart.]

Let

    g_n = P[entrance box delivers bulk of size n to queue B]
    G(z) = Σ_{n=0}^∞ g_n z^n

(a) Give a probabilistic interpretation for G(z) using the method of collective marks.
(b) Given that the "starter" reaches the entrance box, and using the method of collective marks, find [in terms of λ, a, B(·), and G(z)]

    P_k = P[k customers arrive to queue A during the "starter's"
7.4. Consider the G/M/m system. The root σ, which is defined in Eq. (6.21), plays a central role in the solution. Examine Eq. (6.21) from the viewpoint of collective marks and give a probabilistic interpretation for σ.
PART IV

ADVANCED MATERIAL
8

The Queue G/G/1
We have so far made effective use of the Markovian property in the queueing systems M/M/1, M/G/1, and G/M/m. We must now leave behind many (but not all) of the simplifications that derive from the Markovian property and find new methods for studying the more difficult system G/G/1.

In this chapter we solve the G/G/1 system equations by spectral methods, making use of transform and complex-variable techniques. There are, however, numerous other approaches: In Section 5.11 we introduced the ladder indices and pointed out the way in which they were related to important events in queueing systems; these ideas can be extended and applied to the general system G/G/1. Fluctuations of sums of random variables (i.e., the ladder indices) have been studied by Andersen [ANDE 53a, ANDE 53b, ANDE 54] and also by Spitzer [SPIT 56, SPIT 60], who simplified and expanded Andersen's work. This led, among other things, to Spitzer's identity, of great importance in that approach to queueing theory. Much earlier (in the 1930's) Pollaczek considered a formalism for solving these systems and his approach (summarized in 1957 [POLL 57]) is now referred to as Pollaczek's method. More recently, Kingman [KING 66] has developed an algebra for queues, which places all these methods in a common framework and exposes the underlying similarity among them; he also identifies where the problem gets difficult and why, but unfortunately he shows that this method does not extend to the multiple-server system. Keilson [KEIL 65] applies the method of Green's function. Beneš [BENE 63] studied G/G/1 through the unfinished work and its "relatives."

Let us now establish the basic equations for this system.
8.1. LINDLEY'S INTEGRAL EQUATION

The system under consideration is one in which the interarrival times between customers are independent and are given by an arbitrary distribution A(t). The service times are also independently drawn from an arbitrary distribution given by B(x). We assume there is one server available and that service is offered in first-come-first-served order. The basic relationship among the pertinent random variables is derived in this section and leads to Lindley's integral equation, whose solution is given in the following section.

We consider a sequence of arriving customers indexed by the subscript n and remind the reader of our earlier notation:
    τ_n = arrival time of C_n
    t_n = τ_n − τ_{n−1} = interarrival time between C_{n−1} and C_n
    x_n = service time of C_n
    w_n = waiting time (in queue) for C_n
We assume that the random variables {t_n} and {x_n} are independent and are given, respectively, by the distribution functions A(t) and B(x), independent of the subscript n. As always, we look for a Markov process to simplify our analysis. Recall for M/G/1 that the unfinished work U(t) is a Markov process for all t. For G/G/1, it should be clear that although U(t) is no longer Markovian, imbedded within U(t) is a crucial Markov process defined at the customer-arrival times. At these regeneration points, all of the past history that is pertinent to future behavior is completely summarized in the current value of U(t). That is, for FCFS systems, the value of the unfinished work just prior to the arrival of C_n is exactly equal to his waiting time (w_n), and this Markov process is the object of our study. In Figures 8.1 and 8.2 we use the time-diagram notation for queues (as defined in Figure 2.2) to illustrate the history of C_n in two cases: Figure 8.1 displays the case where C_{n+1} arrives to the system before C_n departs from the service facility; and Figure 8.2 shows the case in which C_{n+1} arrives to an empty system. For the conditions of Figure 8.1 it is clear that
    w_{n+1} = w_n + x_n − t_{n+1}   if   w_n + x_n − t_{n+1} ≥ 0   (8.1)

The condition expressed in Eq. (8.1) assures that C_{n+1} arrives to find a busy

[Figure 8.1 The case where C_{n+1} arrives before C_n departs: the diagram shows w_n and x_n for C_n and the resulting wait w_{n+1} for C_{n+1}.]
[Figure 8.2 The case where C_{n+1} arrives to find an idle system: the diagram shows x_n for C_n and the interarrival time t_{n+1} ending after C_n's departure.]
system. From Figure 8.2 we see immediately that

    w_{n+1} = 0   if   w_n + x_n − t_{n+1} ≤ 0   (8.2)

where the condition in Eq. (8.2) assures that C_{n+1} arrives to find an idle system. For convenience we now define a new (key) random variable u_n as

    u_n ≜ x_n − t_{n+1}   (8.3)

This random variable is merely the difference between the service time for C_n and the interarrival time between C_{n+1} and C_n (for a stable system we will require that the expectation of u_n be negative). We may thus combine Eqs. (8.1)-(8.3) to obtain the following fundamental and yet elementary relationship, first established by Lindley [LIND 52]:

    w_{n+1} = w_n + u_n   if   w_n + u_n ≥ 0
    w_{n+1} = 0            if   w_n + u_n ≤ 0     (8.4)

The term w_n + u_n is merely the sum of the unfinished work (w_n) found by C_n plus the service time (x_n), which he now adds to the unfinished work, less the time duration (t_{n+1}) until the arrival of the next customer C_{n+1}; if this quantity is nonnegative then it represents the amount of unfinished work found by C_{n+1} and therefore represents his waiting time w_{n+1}. However, if this quantity goes negative it indicates that an interval of time has elapsed since the arrival of C_n which exceeds the amount of unfinished work present in the system just after the arrival of C_n, thereby indicating that the system has gone idle by the time C_{n+1} arrives.

We may write Eq. (8.4) as

    w_{n+1} = max[0, w_n + u_n]   (8.5)
Since the random variables {t_n} and {x_n} are independent among themselves and each other, one observes that the sequence of random variables {w_0, w_1, w_2, …} forms a Markov process with stationary transition probabilities. This can be seen immediately from Eq. (8.4), since the new value w_{n+1} depends upon the previous sequence of random variables w_i (i = 0, 1, …, n) only through the most recent value w_n plus a random variable u_n, which is independent of the random variables w_i for all i ≤ n.

Let us solve Eq. (8.5) recursively beginning with w_0 as an initial condition. Writing (x)⁺ ≜ max(0, x) and defining C_0 to be our initial arrival, we have

    w_1 = (w_0 + u_0)⁺
    w_2 = (w_1 + u_1)⁺ = max[0, w_1 + u_1]
        = max[0, u_1 + max(0, w_0 + u_0)]
        = max[0, u_1, u_1 + u_0 + w_0]
    w_3 = (w_2 + u_2)⁺ = max[0, w_2 + u_2]
        = max[0, u_2 + max(0, u_1, u_1 + u_0 + w_0)]
        = max[0, u_2, u_2 + u_1, u_2 + u_1 + u_0 + w_0]
    ⋮
    w_n = (w_{n−1} + u_{n−1})⁺
        = max[0, u_{n−1}, u_{n−1} + u_{n−2}, …, u_{n−1} + ⋯ + u_1, u_{n−1} + ⋯ + u_0 + w_0]   (8.7)

Since the u_i are independent and identically distributed, we may relabel them (replacing u_{n−1−i} by u_i) without changing the distribution of the maximum; this yields a new random variable w_n', with the same distribution as w_n, given by

    w_n' = max[0, u_0, u_0 + u_1, …, u_0 + ⋯ + u_{n−2}, u_0 + ⋯ + u_{n−1} + w_0]   (8.8)

It is now convenient to define the quantities U_n as

    U_n ≜ Σ_{i=0}^{n−1} u_i   (8.9)

    U_0 ≜ 0   (8.10)

In the case w_0 = 0, Eq. (8.8) may then be written compactly as

    w_n' = max[U_0, U_1, …, U_n],   n ≥ 0   (8.11)
For stability we require that the expected value of u_n be negative, that is,

    E[u_n] < 0   (8.12)

Indeed,

    E[u_n] = x̄ − t̄ = t̄(ρ − 1)   (8.13)

where as usual we assume that the expected service time is x̄ and the expected interarrival time is t̄ (and we have ρ = x̄/t̄). From Eqs. (8.12) and (8.13) we see we have required that ρ < 1, as is our usual condition for stability. Let us denote (as usual) the stationary distribution for w_n (and therefore also for w_n') by

    lim_{n→∞} P[w_n ≤ y] = lim_{n→∞} P[w_n' ≤ y] = W(y)   (8.14)

which must exist for ρ < 1 [LIND 52]. Thus W(y) will be our assumed stationary distribution for time spent in queue; we will not dwell upon the proof of its existence but rather upon the method for its calculation. As we know for such Markov processes, this limiting distribution is independent of the initial state w_0.
Before proceeding to the formal derivation of results let us investigate the way in which Eq. (8.7) in fact produces the waiting time. This we do by example; consider Figure 8.3, which represents the unfinished work U(t). For the sequence of arrivals and departures given in this figure, one may tabulate the interarrival times t_{n+1}, the service times x_n, the random variables u_n, and the waiting times w_n as measured from the diagram, and then check that the waiting times calculated from Eq. (8.7) agree with the measured values.

[Figure 8.3: A sample path of the unfinished work U(t), showing the arrival instants of C_0, C_1, … along the time axis, the corresponding departures, and an accompanying table of t_{n+1}, x_n, u_n, and w_n.]
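The recursion (8.5) is also trivial to iterate numerically, which gives a direct way to reproduce tables like the one above. The sketch below (Python; the M/M/1 parameters λ = 1, μ = 2 are illustrative, not from the text) generates waiting times from sampled t_{n+1} and x_n and compares their long-run average with the known M/M/1 mean wait λ/[μ(μ − λ)] = 0.5:

```python
import random

def lindley_waits(sample_t, sample_x, n, seed=11):
    """Iterate Lindley's recursion, Eq. (8.5): w_{n+1} = max(0, w_n + u_n)
    with u_n = x_n - t_{n+1}; returns the sequence w_0, ..., w_{n-1}."""
    rng = random.Random(seed)
    w, waits = 0.0, []
    for _ in range(n):
        waits.append(w)
        u = sample_x(rng) - sample_t(rng)    # u_n = x_n - t_{n+1}
        w = max(0.0, w + u)
    return waits

lam, mu = 1.0, 2.0
w = lindley_waits(lambda r: r.expovariate(lam), lambda r: r.expovariate(mu), 400_000)
print(round(sum(w) / len(w), 2))             # near 0.5
```

Because only the distributions A(t) and B(x) enter through the samplers, the same few lines handle any G/G/1 system.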
We now wish to find the distribution of u_n. Since x_n and t_{n+1} are independent,

    C_n(u) ≜ P[u_n ≤ u] = P[x_n − t_{n+1} ≤ u] = ∫_{t=0}^∞ P[x_n ≤ u + t | t_{n+1} = t] dA(t)   (8.16)

Also, since this is independent of n, let

    C(u) = ∫_{t=0}^∞ B(u + t) dA(t)   (8.17)

Note that the integral given in Eq. (8.17) is very much like a convolution form for a(t) and B(x); it is not quite a straight convolution since the distribution C(u) represents the difference between x_n and t_{n+1} rather than the sum. Using our convolution notation (⊛), and defining c_n(u) ≜ dC_n(u)/du, we have

    c_n(u) = c(u) = a(−u) ⊛ b(u)   (8.18)
For y ≥ 0 we may now write

    W_{n+1}(y) ≜ P[w_{n+1} ≤ y] = ∫_{0−}^∞ P[u_n ≤ y − w | w_n = w] dW_n(w) = ∫_{0−}^∞ C_n(y − w) dW_n(w)   (8.19)

Passing to the limit n → ∞ we have, for y ≥ 0,

    W(y) = ∫_{0−}^∞ C(y − w) dW(w)   (8.20)

Further, it is clear that

    W(y) = 0   for y < 0

Combining these last two we have Lindley's integral equation [LIND 52], which is seen to be an integral equation of the Wiener-Hopf type [SPIT 57]:

    W(y) = ∫_{0−}^∞ C(y − w) dW(w),   y ≥ 0
    W(y) = 0,                          y < 0     (8.21)
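As a concrete illustration (not in the original), one can verify numerically that the known M/M/1 waiting-time distribution satisfies Lindley's equation, using the equivalent density form W(y) = ∫_0^∞ W(w)c(y − w) dw [cf. Eq. (8.22)] with c(u) computed from Eq. (8.18) for exponential a(t) and b(x):

```python
import math

lam, mu = 1.0, 2.0          # illustrative M/M/1 parameters (rho = 1/2)
rho = lam / mu

def c(u):
    """Density of u_n = x_n - t_{n+1} for exponential A and B, via Eq. (8.18)."""
    k = lam * mu / (lam + mu)
    return k * math.exp(-mu * u) if u >= 0 else k * math.exp(lam * u)

def W(y):
    """Known M/M/1 waiting-time distribution: 1 - rho e^{-mu(1-rho)y}."""
    return 1.0 - rho * math.exp(-(mu - lam) * y) if y >= 0 else 0.0

# Check W(y) = int_0^inf W(w) c(y - w) dw at a few points (midpoint rule)
h, N = 0.002, 10_000        # integrate w over [0, 20]
for y in (0.0, 0.5, 1.0, 3.0):
    rhs = h * sum(W((j + 0.5) * h) * c(y - (j + 0.5) * h) for j in range(N))
    print(y, round(W(y), 4), round(rhs, 4))
```

The two printed columns agree to quadrature accuracy at every test point, which is exactly the statement of Eq. (8.21) for this system.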
Equation (8.21) may be rewritten in at least two other useful forms, which we now proceed to derive. Integrating by parts, we have (for y ≥ 0)

    W(y) = C(y − w)W(w) |_{w=0−}^{w=∞} − ∫_{0−}^∞ W(w) d_w C(y − w)

The boundary terms vanish [since C(−∞) = 0 and W(0−) = 0], and d_w C(y − w) = −c(y − w) dw, so that

    W(y) = ∫_{0−}^∞ W(w) c(y − w) dw,   y ≥ 0
    W(y) = 0,                            y < 0     (8.22)
Let us now show a third form for this equation. By the simple variable change u = y − w for the argument of our distributions we finally arrive at

    W(y) = ∫_{−∞}^y W(y − u) dC(u),   y ≥ 0
    W(y) = 0,                          y < 0     (8.23)
Equations (8.21), (8.22), and (8.23) all describe the basic integral equation which governs the behavior of G/G/1. These integral equations, as mentioned above, are Wiener-Hopf-type integral equations and are not unfamiliar in the theory of stochastic processes.

One observes from these forms that Lindley's integral equation is almost, but not quite, a convolution integral. The important distinction between a convolution integral and that given in Lindley's equation is that the latter integral form holds only when the variable is nonnegative; the distribution function is identically zero for values of negative argument. Unfortunately, since the integral holds only for the half-line we must borrow techniques from the theory of complex variables and from contour integration in order to solve our system. We find a similar difficulty in the design of optimal linear filters in the mathematical theory of communication; there too, a Wiener-Hopf integral equation describes the optimal solution, except that for linear filters the unknown appears as one factor in the integrand, rather than, as in our case in queueing theory, where the unknown appears on both sides of the integral equation. Nevertheless, the solution techniques are amazingly similar, and the reader acquainted with the theory of optimal realizable linear filters will find the following arguments familiar.

In the next section, we give a fairly general solution to Lindley's integral equation by the use of spectral (transform) methods. In Exercise 8.6 we examine a solution approach by means of an example that does not require transforms; the example chosen is the system D/E_r/1 considered by Lindley. In that (direct) approach it is required to assume the solution form. We now consider the spectral solution to Lindley's equation, in which such assumed solution forms will not be necessary.
8.2. SPECTRAL SOLUTION TO LINDLEY'S INTEGRAL EQUATION

The right-hand side of Eq. (8.23) is almost a convolution (it would be a convolution if it held for all values of the variable y, but not so otherwise). In order to get around this difficulty we use the following ingenious device, whereby we define a "complementary" waiting time, which completes the convolution, and which takes on the value of the integral for negative y only, that is,

    W_−(y) ≜ 0,                          y ≥ 0
    W_−(y) ≜ ∫_{−∞}^y W(y − u) dC(u),    y < 0     (8.24)
Note that the left-hand side of Eq. (8.23) might consistently be written as W_+(y), in the same way in which we defined the left-hand side of Eq. (8.24). We now observe that if we add Eqs. (8.23) and (8.24), then the right-hand side takes on the integral expression for all values of the argument, that is,

    W(y) + W_−(y) = ∫_{−∞}^∞ W(y − u) c(u) du,   −∞ < y < ∞   (8.25)

We now make the further assumption that the interarrival-time pdf a(t) satisfies, for some D > 0,

    lim_{t→∞} a(t)/e^{−Dt} < ∞   (8.26)
The condition (8.26) really insists that the pdf associated with the interarrival time drops off at least as fast as an exponential for very large interarrival times. From this condition it may be seen from Eq. (8.17) that the behavior of C(u) as u → −∞ is governed by the behavior of the interarrival time; this is true since as u takes on large negative values the argument for the service-time distribution can be made positive only for large values of t, which also appears as the argument for the interarrival-time density. Thus we can show

    lim_{u→−∞} C(u)/e^{Du} < ∞

That is, C(u) is O(e^{Du}) as u → −∞.† If we now use this fact in Eq. (8.24) it is easy to establish that W_−(y) is also O(e^{Dy}) as y → −∞.

† The notation O(g(x)) as x → x_0 refers to any function that (as x → x_0) decays to zero at least as rapidly as g(x) [where g(x) > 0], that is, lim_{x→x_0} |O(g(x))/g(x)| = K < ∞.
We now introduce the transforms

    Φ_−(s) ≜ ∫_{−∞}^∞ W_−(y) e^{−sy} dy   (8.27)

    Φ_+(s) ≜ ∫_{−∞}^∞ W(y) e^{−sy} dy   (8.28)

Note that Φ_+(s) is the Laplace transform of the PDF for waiting time, whereas in previous chapters we have defined W*(s) as the Laplace transform of the pdf for waiting time; thus by entry 11 of Table I.3, we have

    sΦ_+(s) = W*(s)   (8.29)

Since there are regions for Eqs. (8.23) and (8.24) in which the functions drop to zero, we may therefore rewrite these transforms as

    Φ_−(s) = ∫_{−∞}^0 W_−(y) e^{−sy} dy   (8.30)

    Φ_+(s) = ∫_0^∞ W(y) e^{−sy} dy   (8.31)

Since W_−(y) is O(e^{Dy}) as y → −∞ and W(y) is bounded, these integrals converge and

    Φ_−(s) is analytic for Re(s) < D;   Φ_+(s) is analytic for Re(s) > 0   (8.32)
Let us now return to Eq. (8.25), which expresses the fundamental relationship among the variables of our problem and the waiting-time distribution W(y). Clearly, the time spent in queue must be a nonnegative random variable, and so we recognize the right-hand side of Eq. (8.25) as a convolution between the waiting-time PDF and the pdf for the random variable u. Transforming both sides [and noting from Eq. (8.18) that the transform of c(u) is C*(s) = A*(−s)B*(s)], we obtain

    Φ_+(s) + Φ_−(s) = Φ_+(s)C*(s) = Φ_+(s)A*(−s)B*(s)

which gives us

    Φ_−(s) = Φ_+(s)[A*(−s)B*(s) − 1]   (8.33)
We have already established that both Φ_-(s) and A*(-s) are analytic in the region Re(s) < D. Furthermore, since Φ_+(s) and B*(s) are transforms of bounded functions of nonnegative variables, both functions must be analytic in the region Re(s) > 0.

We now come to the spectrum factorization. The purpose of this factorization is to find a suitable representation for the term

    A*(-s)B*(s) - 1        (8.34)
in the form of two factors. Let us pause for a moment and recall the method of stages, whereby Erlang conceived the ingenious idea of approximating a distribution by means of a collection of series and parallel exponential stages. The Laplace transforms for the pdf's obtainable in this fashion were given generally in Eq. (4.62) and Eq. (4.64); we immediately recognize these to be rational functions of s (that is, ratios of polynomials in s). We may similarly conceive of approximating the Laplace transforms A*(-s) and B*(s) each in such form; if we so approximate, then the term given by Eq. (8.34) will also be a rational function of s. We thus choose to consider those queueing systems for which A*(s) and B*(s) may be suitably approximated with (or which are given initially as) such rational functions of s, in which case we then propose to form the following spectrum factorization:

    A*(-s)B*(s) - 1 = Ψ_+(s)/Ψ_-(s)        (8.35)

Clearly Ψ_+(s)/Ψ_-(s) will be some rational function of s, and we are now desirous of finding a particular factored form for this expression. We specifically wish to find a factorization such that:

    For Re(s) > 0:  Ψ_+(s) is analytic and free of zeroes.
    For Re(s) < D:  Ψ_-(s) is analytic and free of zeroes.        (8.36)
Furthermore, we wish to find these functions with the additional properties:

    For Re(s) > 0:  lim_{|s|→∞} Ψ_+(s)/s = 1
    For Re(s) < D:  lim_{|s|→∞} Ψ_-(s)/s = -1        (8.37)

The conditions in (8.37) are convenient and must have opposite polarity in the limit, since we observe that as s runs off to infinity along the imaginary axis, both A*(-s) and B*(s) must decay to 0 [if they are to have finite moments and if A(t) and B(x) do not contain a sequence of discontinuities, which we will not permit], leaving the left-hand side of Eq. (8.35) equal to -1, which we have suitably matched by the ratio of limits given by Conditions (8.37).
We shall find that this spectrum factorization, which requires us to find Ψ_+(s) and Ψ_-(s) with the appropriate properties, contains the difficult part of this method of solution. Nevertheless, assuming that we have found such a factorization, it is then clear that we may write Eq. (8.33) as

    Φ_-(s) = Φ_+(s) Ψ_+(s)/Ψ_-(s)        (8.38)

or

    Φ_-(s)Ψ_-(s) = Φ_+(s)Ψ_+(s)        (8.39)

Now, the left-hand side of this last equation is analytic for Re(s) < D, whereas the right-hand side is analytic for Re(s) > 0; since the two sides agree in the common strip 0 < Re(s) < D, each is the analytic continuation of the other, and together they define a function that is analytic and bounded over the entire s-plane. A bounded entire function must be a constant, by Liouville's theorem [TITC 52], which immediately establishes that this function must be a constant (say, K). We thus have

    Φ_-(s)Ψ_-(s) = Φ_+(s)Ψ_+(s) = K        (8.40)

and so

    Φ_+(s) = K/Ψ_+(s)        (8.41)
The reader should recall that what we are seeking in this development is an expression for the distribution of queueing time whose Laplace transform is exactly the function Φ_+(s), which is now given through Eq. (8.41). It remains for us to demonstrate a method for evaluating the constant K.

Since sΦ_+(s) = W*(s), we have

    sK/Ψ_+(s) = ∫_{0}^{∞} e^{-sy} dW(y)

Let us now consider the limit of this equation as s → 0; working with the right-hand side we have

    lim_{s→0} ∫_{0}^{∞} e^{-sy} dW(y) = ∫_{0}^{∞} dW(y) = 1        (8.42)

This is nothing more than the final value theorem (entry 18, Table I.3) and comes about since W(∞) = 1. From Eq. (8.41) and this last result we then have

    K = lim_{s→0} Ψ_+(s)/s        (8.43)
Equation (8.43) provides a means of calculating the constant K in our solution for Φ_+(s) as given in Eq. (8.41). If we make a Taylor expansion of the function Ψ_+(s) around s = 0 [viz., Ψ_+(s) = Ψ_+(0) + sΨ_+^{(1)}(0) + (s²/2!)Ψ_+^{(2)}(0) + ···] and note from Eqs. (8.35) and (8.36) that Ψ_+(0) = 0, we then recognize that this limit may also be written as

    K = lim_{s→0} dΨ_+(s)/ds        (8.44)
and this provides us with an alternate way of calculating the constant K.

We may further explore this constant K by examining the behavior of Φ_+(s)Ψ_+(s) anywhere in the region Re(s) > 0 [see Eq. (8.40)]; we choose to examine this behavior in the limit as s → ∞, where we know from Eq. (8.37) that Ψ_+(s) behaves as s does; that is,

    K = lim_{s→∞} sΦ_+(s) = lim_{s→∞} s ∫_{0}^{∞} e^{-sy} W(y) dy

Making the change of variable x = sy, we have

    K = lim_{s→∞} ∫_{0}^{∞} e^{-x} W(x/s) dx

As s → ∞ we may pull the constant term W(0+) outside the integral and then obtain the value of the remaining integral, which is unity. We thus obtain

    K = W(0+)        (8.45)

This establishes that the constant K is merely the probability that an arriving customer need not queue.†
In conclusion then, assuming that we can find the appropriate spectrum factorization in Eq. (8.35), we may immediately solve for the Laplace transform of the waiting-time distribution through Eq. (8.41), where the constant K is given in any of the three forms Eq. (8.43), (8.44), or (8.45). Of course it then remains to invert the transform, but the problems involved in that calculation have been faced before in numerous of our other solution forms.

It is possible to carry out the solution of this problem by concentrating on Ψ_-(s) rather than Ψ_+(s), and in some cases this simplifies the calculations. In such cases we may proceed from Eq. (8.35) to obtain

    Ψ_+(s) = Ψ_-(s)[A*(-s)B*(s) - 1]        (8.46)

so that Eq. (8.41) becomes

    Φ_+(s) = K / (Ψ_-(s)[A*(-s)B*(s) - 1])        (8.47)
† Note that W(0+) is not necessarily equal to 1 - ρ, which is the fraction of time the server is idle. (These two are equal for the system M/G/1.)
Differentiating Eq. (8.46) and evaluating at s = 0, as Eq. (8.44) requires, we have

    Ψ_+^{(1)}(0) = [A*(0)B*(0) - 1]Ψ_-^{(1)}(0) + Ψ_-(0) [d(A*(-s)B*(s))/ds]_{s=0}        (8.48)

From Eq. (8.44) we recognize the left-hand side of Eq. (8.48) as the constant K. Moreover, A*(0)B*(0) = 1, so the first term vanishes; and since [d(A*(-s)B*(s))/ds]_{s=0} = t̄ - x̄, we obtain

    K = Ψ_-(0)[-x̄ + t̄] = Ψ_-(0) t̄(1 - ρ)        (8.49)
Thus, if we wish to use Ψ_-(s) in our solution form, we obtain the transform of the waiting-time distribution from Eq. (8.47), where the unknown constant K is evaluated in terms of Ψ_-(s) through Eq. (8.49).

Summarizing then, once we have carried out the spectrum factorization as indicated in Eq. (8.35), we may proceed in one of two directions in solving for Φ_+(s), the transform of the waiting-time distribution. The first method gives us

    Φ_+(s) = K/Ψ_+(s)        (8.50)

with K given by Eq. (8.43), (8.44), or (8.45); and the second provides us with

    Φ_+(s) = Ψ_-(0) t̄(1 - ρ) / ([A*(-s)B*(s) - 1]Ψ_-(s))        (8.51)
Example 1: M/M/1

Our old friend M/M/1 is extremely straightforward and should serve to clarify the meaning of spectrum factorization. Since both the interarrival time and the service time are exponentially distributed random variables, we immediately have A*(s) = λ/(s + λ) and B*(s) = μ/(s + μ), where x̄ = 1/μ and t̄ = 1/λ. In order to solve for Φ_+(s) (the transform of the waiting-time distribution), we must first form the expression given in Eq. (8.34); that is,

    A*(-s)B*(s) - 1 = (λ/(λ - s))(μ/(s + μ)) - 1 = (s² + s(μ - λ))/((λ - s)(s + μ))

and so

    A*(-s)B*(s) - 1 = s(s + μ - λ)/((s + μ)(λ - s))        (8.52)
In Figure 8.4 we show the location of the zeroes (denoted by circles) and poles (denoted by crosses) in the complex s-plane for the function given in Eq. (8.52). Note that in this particular example the roots of the numerator (zeroes of the expression) and the roots of the denominator (poles of the expression) are especially simple to find; in general, one of the most difficult parts of this method of spectrum factorization is to solve for the roots. In order to factorize, we require that Conditions (8.36) and (8.37) hold. Inspecting the pole-zero plot in Figure 8.4 and remembering that Ψ_+(s) must be analytic and zero-free for Re(s) > 0, we may collect together the two zeroes (at s = 0 and s = -μ + λ) and one pole (at s = -μ) and still satisfy this required condition. Similarly, Ψ_-(s) must be analytic and free from zeroes for Re(s) < D for some D > 0; we can obtain such a condition if we allow this function to contain the remaining pole (at s = λ) and choose D = λ. This we show in Figure 8.5.

Thus we have

    Ψ_+(s) = s(s + μ - λ)/(s + μ)        (8.53)

    Ψ_-(s) = λ - s        (8.54)

[Figure 8.4: Pole-zero plot in the s-plane for the M/M/1 expression of Eq. (8.52).]
[Figure 8.5: (a) Pole-zero plot for Ψ_+(s); (b) pole-zero plot for Ψ_-(s).]

From Eq. (8.43), the unknown constant is

    K = lim_{s→0} Ψ_+(s)/s = lim_{s→0} (s + μ - λ)/(s + μ) = 1 - ρ        (8.55)

Our expression for the Laplace transform of the waiting-time PDF for M/M/1 is therefore, from Eq. (8.41),

    Φ_+(s) = (1 - ρ)(s + μ)/(s(s + μ - λ))        (8.56)
At this point, typically, we attempt to invert the transform to get the waiting-time distribution. However, for this M/M/1 example, we have already carried out this inversion for W*(s) = sΦ_+(s) in going from Eq. (5.120) to Eq. (5.123). The solution we obtain is the familiar form

    W(y) = 1 - ρe^{-μ(1-ρ)y},  y ≥ 0        (8.57)
Example 2: G/M/1†

In this case B*(s) = μ/(s + μ), giving us

    A*(-s)B*(s) - 1 = μA*(-s)/(s + μ) - 1

and so we have

    Ψ_+(s)/Ψ_-(s) = (μA*(-s) - s - μ)/(s + μ)        (8.58)

† This example forces us to locate roots using Rouché's theorem in a way often necessary for specific G/G/1 problems when the spectrum factorization method is used. Of course, we have already studied this system in Section 6.4 and will compare the results for both methods.
In order to factorize, we must find the roots of the numerator in this equation. We need not concern ourselves with the poles due to A*(-s), since they must lie in the region Re(s) > 0 [i.e., A(t) = 0 for t < 0] and we are attempting to find Ψ_+(s), which cannot include any such poles. Thus we study only the zeroes of the function

    s + μ - μA*(-s) = 0        (8.59)

Clearly, one root of this equation occurs at s = 0. In order to find the remaining roots, we make use of Rouché's theorem (given in Appendix I but repeated here):

Rouché's Theorem: If f(s) and g(s) are analytic functions of s inside and on a closed contour C, and also if |g(s)| < |f(s)| on C, then f(s) and f(s) + g(s) have the same number of zeroes inside C.

In solving for the roots of Eq. (8.59) we make the identification

    f(s) = s + μ
    g(s) = -μA*(-s)

We have by definition

    A*(-s) = ∫_{0}^{∞} e^{st} dA(t)

We now choose C to be the contour that runs up the imaginary axis and then forms an infinite-radius semicircle moving counterclockwise and surrounding the left half of the s-plane, as shown in Figure 8.6. We consider this contour since we are concerned about all the poles and zeroes in Re(s) ≤ 0, so that we may properly include them in Ψ_+(s) [recall that Ψ_-(s) may contain none such]; Rouché's theorem will give us information concerning the number of zeroes in Re(s) ≤ 0, which we must consider. As usual, we assume that the real and imaginary parts of the complex variable s are given by σ and ω, respectively; that is, for j = √(-1),

    s = σ + jω
[Figure 8.6: The contour C enclosing the left half of the s-plane.]

On this contour we have σ = Re(s) ≤ 0, and so |e^{st}| = e^{σt} ≤ 1 for t ≥ 0; thus

    |g(s)| = |μ ∫_{0}^{∞} e^{st} dA(t)| ≤ μ ∫_{0}^{∞} e^{σt} |e^{jωt}| dA(t) ≤ μ ∫_{0}^{∞} dA(t) = μ        (8.60)
Similarly, we have

    |f(s)| = |s + μ|        (8.61)

Now, examining the contour C as shown in Figure 8.6, we observe that for all points on the contour, except at s = 0, we have from Eqs. (8.60) and (8.61) that

    |f(s)| = |s + μ| > μ ≥ |g(s)|        (8.62)

This follows since s + μ (for s on C) is a vector whose length is the distance from the point -μ to the point on C where s is located. We are almost in a position to apply Rouché's theorem; the only troublesome point on C is s = 0 itself, where |f(s)| = |g(s)| = μ. We therefore indent the contour around s = 0 with a small semicircle of radius ε extending into the left half-plane, as shown in Figure 8.7, on which s = -ε cos θ + jε sin θ.

[Figure 8.7: The indentation of the contour C around s = 0.]

On this semicircle

    |f(s)|² = |s + μ|² = |-ε cos θ + jε sin θ + μ|² = μ² - 2με cos θ + ε²        (8.63)
Note that the smallest value for |f(s)| occurs for θ = 0. Evaluating g(s) on this same semicircular excursion, and using the power-series expansion of the exponential inside the integral, we have

    |g(s)|² = μ² |∫_{0}^{∞} [1 + (-ε cos θ + jε sin θ)t + O(ε²)] dA(t)|² = μ² - 2μ²t̄ε cos θ + O(ε²) = μ² - (2με/ρ) cos θ + O(ε²)        (8.64)

where, as usual, ρ = x̄/t̄ = 1/(μt̄). Thus, correct to first order in ε,

    μ² - 2με cos θ > μ² - (2με/ρ) cos θ        (8.65)

This last is true since ρ < 1 for our stable system. The left-hand side of Inequality (8.65) is merely the expression given in Eq. (8.63) for |f(s)|², correct up to the first order in ε, and the right-hand side is merely the expression in Eq. (8.64) for |g(s)|², again correct up to the first order in ε. Thus we have shown that in the vicinity of s = 0, |f(s)| > |g(s)|. This fact now having been established for all points on the contour C, we may apply Rouché's theorem and state that f(s) and f(s) + g(s) have the same number of zeroes inside the contour C. Since f(s) has only one zero (at s = -μ), it is clear that the expression given in Eq. (8.59) [namely, f(s) + g(s)] has only one zero for Re(s) < 0; let this zero occur at the point s = -s₁. As discussed above, the point s = 0 is also a root of Eq. (8.59).
We may therefore write Eq. (8.58) as

    Ψ_+(s)/Ψ_-(s) = [(μA*(-s) - s - μ)/(s(s + s₁))] · [s(s + s₁)/(s + μ)]        (8.66)

where the first bracketed term contains no poles and no zeroes in Re(s) ≤ 0 (we have divided out the only two zeroes, at s = 0 and s = -s₁, in this half-plane). We now wish to extend the region Re(s) ≤ 0 into the region Re(s) < D, and we choose D (> 0) such that no new zeroes or poles of Eq. (8.59) are introduced as we extend to this new region. The first bracket qualifies for [Ψ_-(s)]^{-1}, and we see immediately that the second bracket qualifies for Ψ_+(s), since none of its zeroes (s = 0, s = -s₁) or poles (s = -μ) lies in Re(s) > 0. Thus

    Ψ_+(s) = s(s + s₁)/(s + μ)        (8.67)

    Ψ_-(s) = -s(s + s₁)/(s + μ - μA*(-s))        (8.68)
We have now assured that the functions given in these last two equations satisfy Conditions (8.36) and (8.37). We evaluate the unknown constant K as follows:

    K = lim_{s→0} Ψ_+(s)/s = lim_{s→0} (s + s₁)/(s + μ) = s₁/μ = W(0+)        (8.69)

Thus we have from Eq. (8.41)

    Φ_+(s) = s₁(μ + s)/(μs(s + s₁)) = 1/s - (1 - s₁/μ)/(s + s₁)        (8.70)

Inverting this by inspection, we obtain

    W(y) = 1 - (1 - s₁/μ)e^{-s₁y},  y ≥ 0        (8.71)

The reader is urged to compare this last result with that given in Eq. (6.30), also for the system G/M/1; the comparison is clear, and in both cases there is a single constant that must be solved for. In the solution given here that constant is found as the root of Eq. (8.59) with Re(s) < 0; in Chapter 6, one must solve Eq. (6.28), which is equivalent to Eq. (8.59).
Example 3: E₂/M/1

The example for G/M/1 can be carried no further in the general case. We find it instructive, therefore, to consider a more specific G/M/1 example and finish the calculations; the example we choose is the one we used in Chapter 6, for which A*(s) is given in Eq. (6.35) and corresponds to an E₂/M/1 system, where the two arrival stages have different death rates. For that example we have
    A*(-s)B*(s) - 1 = 2μ³/((μ - s)(2μ - s)(s + μ)) - 1 = -s(s - μ - μ√2)(s - μ + μ√2)/((μ - s)(2μ - s)(s + μ))        (8.72)

To factorize, we group the terms as

    Ψ_+(s)/Ψ_-(s) = [-(s - μ - μ√2)/((μ - s)(2μ - s))] · [s(s - μ + μ√2)/(s + μ)]
In this form we recognize the first bracket as 1/Ψ_-(s) and the second bracket as Ψ_+(s). Thus we have

    Ψ_+(s) = s(s - μ + μ√2)/(s + μ)        (8.73)

from which Eq. (8.43) gives

    K = lim_{s→0} Ψ_+(s)/s = (μ√2 - μ)/μ = √2 - 1        (8.74)

and this of course corresponds to W(0+), the probability that a new arrival need not wait for service; the single root with Re(s) < 0 lies at -s₁ = -μ(√2 - 1). Finally, then, we substitute these values into Eq. (8.71) to find

    W(y) = 1 - (2 - √2)e^{-μ(√2-1)y},  y ≥ 0        (8.75)
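The only hard step in this example was locating the root s₁, and in less convenient cases one must find it numerically. The sketch below (an illustration, not from the text) applies simple bisection to Eq. (8.59) for the two-stage arrival transform assumed in this example, A*(s) = 2μ²/((s + μ)(s + 2μ)), and recovers s₁ = μ(√2 - 1).

```python
import math

mu = 1.0  # service rate; the value is arbitrary, chosen for illustration

def A_star(s):
    # Two-stage Erlangian arrival transform with stage rates mu and 2*mu
    # (the form assumed above for the E2/M/1 example).
    return 2 * mu * mu / ((s + mu) * (s + 2 * mu))

def f(s):
    # Left-hand side of Eq. (8.59): s + mu - mu * A*(-s)
    return s + mu - mu * A_star(-s)

# Eq. (8.59) has roots at s = 0 and s = -s1; bracket only the negative root.
lo, hi = -0.9 * mu, -0.1 * mu          # f(lo) < 0 < f(hi)
for _ in range(200):                    # bisection to machine precision
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
s1 = -0.5 * (lo + hi)

print(s1, mu * (math.sqrt(2) - 1))      # the two values agree
print(s1 / mu)                          # K = W(0+), cf. Eq. (8.74)
```

The same bracketing-and-bisection approach works for any rational (or even non-rational) A*(s), provided one first isolates the root of interest.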
8.3. KINGMAN'S ALGEBRA FOR QUEUES

Let us once again state the fundamental relationships underlying the G/G/1 queue. For u_n = x_n - t_{n+1} we have the basic relationship

    w_{n+1} = max[0, w_n + u_n]        (8.76)

whose solution may be unfolded into

    w_n = max[0, u_{n-1}, u_{n-1} + u_{n-2}, ..., u_{n-1} + ··· + u_1, u_{n-1} + ··· + u_0 + w_0]
We observed earlier that {w_n} is a Markov process with stationary transition probabilities; its total stochastic structure is given by P[w_{m+n} ≤ y | w_m = x], which may be calculated as an n-fold integral over the n-dimensional joint distribution of the n random variables w_{m+1}, ..., w_{m+n} over that region of the space which results in w_{m+n} ≤ y. This calculation is much too complicated, and so we look for alternative means to solve this problem. Pollaczek [POLL 57] used a spectral approach and complex integrals to carry out the solution. Lindley [LIND 52] observed that w_n has the same distribution as max[0, U_1, U_2, ..., U_n], where U_k is the kth partial sum of the u's as defined earlier.

If we have the case E[u_n] < 0, which corresponds to ρ = x̄/t̄ < 1, then a stable solution exists for the limiting random variable w̃ such that

    w̃ = sup_{n≥0} U_n        (8.77)
Kingman [KING 66] constructed an algebra for queues within which this problem, and the various methods used to attack it, may be unified; we now sketch that construction.
The elements of this algebra consist of all finite signed measures on the real line (for example, a pdf on the real line). For any two such measures, say h₁ and h₂, the sum h₁ + h₂ and also all scalar multiples of either belong to this algebra. The product operation h₁ ∘ h₂ is defined as the convolution of h₁ with h₂. It can be shown that this algebra is a real commutative algebra. There also exists an identity element, denoted by e, such that e ∘ h = h for any h in the algebra, and it is clear that e will merely be a unit impulse located at the origin. We are interested in operators that map real functions into other real functions and that are measurable. Specifically, we are interested in the operator that takes a value x and maps it into the value (x)⁺, where as usual (x)⁺ ≜ max[0, x]. Let us denote this operator by π, which is not to be confused with the matrix of transition probabilities used in Chapter 2; thus, if we let A denote some measurable event, and let h(A) = P{ω : X(ω) ∈ A} denote the measure of this event, then π is defined through

    π[h(A)] = P{ω : X⁺(ω) ∈ A}

We note the linearity of this operator; that is, π(ah) = aπ(h) and π(h₁ + h₂) = π(h₁) + π(h₂). Thus we have a commutative algebra (with identity) along with the linear operator π that maps this algebra into itself. Since [(x)⁺]⁺ = (x)⁺, we see that an important property of this operator is that

    π² = π

A linear operator satisfying such a condition is referred to as a projection. Furthermore, a projection whose range and null space are both subalgebras of the underlying algebra is called a Wendel projection; it can be shown that π has this property, and it is this that makes the solution for G/G/1 possible.
Now let us return to considerations of the queue G/G/1. Recall that the random variable u_n has pdf c(u) and that the waiting time for the nth customer w_n has pdf w_n(y). Since u_n and w_n are independent, w_n + u_n has pdf c(y) ⊛ w_n(y). Furthermore, since w_{n+1} = (w_n + u_n)⁺, we have

    w_{n+1}(y) = π[c(y) ⊛ w_n(y)],  n = 0, 1, ...        (8.78)

and this equation gives the pdf for waiting times by induction. Now if ρ < 1, the limiting pdf w(y) exists and is independent of w₀; that is, w̃ must have the same pdf as (w̃ + ũ)⁺ (a remark due to Lindley [LIND 52]). This gives us the basic equation defining the stationary pdf for waiting time in G/G/1:

    w(y) = π[c(y) ⊛ w(y)]        (8.79)

The solution of this equation is of main interest in solving G/G/1. The remaining portion of this section gives a succinct summary of some elegant results involving this algebra; only the courageous are encouraged to continue.
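Equation (8.78) also suggests a direct numerical attack: discretize the density c(u), convolve, and realize the projection π by sweeping all probability mass on the negative axis into an atom at the origin. The sketch below (an illustration with arbitrary M/M/1 parameters, not part of the text) iterates this to convergence and recovers P[w̃ = 0] ≈ 1 - ρ.

```python
import math

# M/M/1 with lambda = 0.5, mu = 1; then c(u) = k*exp(-mu*u) for u >= 0
# and k*exp(lam*u) for u < 0, where k = lam*mu/(lam+mu).
lam, mu = 0.5, 1.0
rho = lam / mu
h, M = 0.1, 120                        # grid spacing; w-grid covers [0, 12)

def c(u):
    k = lam * mu / (lam + mu)
    return k * math.exp(-mu * u) if u >= 0 else k * math.exp(lam * u)

# bin probabilities for u on [-12, 12), midpoint rule, then normalize
cu = [c((j + 0.5) * h) * h for j in range(-M, M)]
z = sum(cu)
cu = [p / z for p in cu]

w = [1.0] + [0.0] * (M - 1)            # w_0: unit mass at the origin
for _ in range(150):                    # iterate w_{n+1} = pi[c (*) w_n], Eq. (8.78)
    new = [0.0] * M
    atom = 0.0
    for i, wi in enumerate(w):
        if wi < 1e-12:
            continue
        for j, pj in enumerate(cu):
            k = i + (j - M)            # grid index of w + u
            if k <= 0:
                atom += wi * pj        # the projection pi: mass on (-inf, 0] -> origin
            elif k < M:
                new[k] += wi * pj
    new[0] += atom
    w = new

print(round(w[0], 2))                  # P[w = 0], near 1 - rho = 0.5
print(round(sum(i * h * p for i, p in enumerate(w)), 2))  # mean wait, near 1.0
```

The accuracy here is limited only by the grid spacing h and the truncation of the grid.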
The particular formalism used for constructing this algebra and carrying out the solution of Eq. (8.79) is what distinguishes the various methods we have mentioned above. In order to see the relationship among the various approaches, we now introduce Spitzer's identity. In order to state this identity, which involves the recurrence relation given in Eq. (8.78), we must introduce the following z-transform:

    X(z, y) = Σ_{n=0}^{∞} w_n(y) zⁿ        (8.80)

Addition and scalar multiplication may be defined in the obvious way for this power series, and "multiplication" will be defined as corresponding to convolution, as is the usual case for transforms; the exponential and logarithm below are likewise defined through their power series, with convolution as the product. Spitzer's identity is then given as

    X(z, y) = exp[-π(Y)]        (8.81)

where

    Y ≜ log[e - zc(y)]        (8.82)

Thus w_n(y) may be found by expanding X(z, y) as a power series in z and picking out the coefficient of zⁿ. It is not difficult to show that

    X(z, y) = w₀(y) + zπ[c(y) ⊛ X(z, y)]        (8.83)

From Spitzer's identity one may also extract the mean wait directly in terms of the partial sums U_n, namely,

    W = E[w̃] = Σ_{n=1}^{∞} (1/n) E[(U_n)⁺]        (8.84)
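The series for the mean wait can be tested by brute force: estimate each term E[(U_n)⁺]/n by Monte Carlo and sum. The sketch below (an illustration, not from the text; parameter values are arbitrary) does this for M/M/1, where the answer is known to be W = ρ/(μ(1 - ρ)).

```python
import random

# Hypothetical Monte Carlo check of the Spitzer series for the mean wait.
random.seed(3)
lam, mu = 0.5, 1.0
rho = lam / mu
WALKS, NMAX = 40_000, 120              # sample walks, truncation point of the series

total = [0.0] * (NMAX + 1)             # running sums of (U_n)^+ over the sample walks
for _ in range(WALKS):
    U = 0.0
    for n in range(1, NMAX + 1):
        U += random.expovariate(mu) - random.expovariate(lam)   # one step u = x - t
        if U > 0.0:
            total[n] += U
W_est = sum(total[n] / n / WALKS for n in range(1, NMAX + 1))   # truncated series

print(round(W_est, 2))                 # near rho/(mu*(1-rho)) = 1.0
```

Because the drift E[u] is negative, the terms die off quickly and the truncation at NMAX costs essentially nothing here.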
II
8.3.
303
il
All of this may equally well be cast in terms of Laplace transforms. Defining

    X*(z, s) = Σ_{n=0}^{∞} W_n*(s) zⁿ        (8.85)

the recursion (8.83) becomes

    X*(z, s) = W₀*(s) + (z/2πj) ∫_{-j∞}^{j∞} [C*(s₁)X*(z, s₁)/(s - s₁)] ds₁        (8.86)

since in the transform domain the projection π corresponds to extracting, by a Cauchy contour integral, that portion of a function which is analytic in Re(s) > 0. Writing

    e^{Y*(z,s)} = 1 - zC*(s)        (8.87)

the solution may then be expressed by splitting Y*(z, s) into two "halves," one analytic in each half-plane, and exponentiating the appropriate half:

    X*(z, s) = e^{-[Y*(z,s)]_+}        (8.88)

This spectrum factorization, of course, is the critical step.
This unification as an algebra for queues is elegant, but as yet it has provided little in the way of extending the theory. In particular, Kingman points out that this approach does not easily extend to the system G/G/m since, whereas the range of this algebra is a subalgebra, its null space is not; therefore we do not have a Wendel projection. Perhaps the most enlightening aspect of this discussion is the significant equation (8.79), which gives the basic condition that must be satisfied by the pdf of waiting time. We take advantage of its recurrence form, Eq. (8.78), in Chapter 2, Volume II.
8.4. THE IDLE TIME AND DUALITY

Let us return to the basic recurrence for the waiting time,

    w_{n+1} = max[0, w_n + u_n]

We now define a new random variable which is the "other half" of the waiting time, namely,

    y_n = -min[0, w_n + u_n]        (8.89)
This random variable in some sense corresponds to the random variable whose distribution is W_-(y), which we studied earlier. Note from these last two equations that when y_n > 0, then w_{n+1} = 0, in which case y_n is merely the length of the idle period, which is terminated with the arrival of C_{n+1}. Moreover, since either w_{n+1} or y_n must be 0, we have that

    w_{n+1} y_n = 0        (8.90)

We adopt the convention that in order for an idle period to exist it must have nonzero length, and so if y_n and w_{n+1} are both 0, then we say that the busy period continues (an annoying triviality).
From the definitions we observe the following to be true in all cases:

    w_{n+1} - y_n = w_n + u_n        (8.91)

From this last equation we may obtain a number of important results, and we proceed here as we did in Chapter 5, where we derived the expected queue size for the system M/G/1 using the imbedded Markov chain approach. In particular, let us take the expectation of both sides of Eq. (8.91) to give

    E[w_{n+1}] - E[y_n] = E[w_n] + E[u_n]

We assume E[u_n] < 0, which (except for D/D/1, where ≤ 0 will do) is the necessary and sufficient condition for there to be a stationary (and unique) waiting-time distribution independent of n; this is the same as requiring ρ = x̄/t̄ < 1. In this case we have*

    lim_{n→∞} E[w_{n+1}] = lim_{n→∞} E[w_n]

* Strictly speaking, this step requires justification, since these are distinct random variables. We permit that step here, but refer the interested reader to Wolff [WOLF 70] for a careful treatment.
and so

    E[ỹ] = -E[ũ]        (8.92)

where y_n → ỹ and u_n → ũ. (We note that the idle periods are independent and identically distributed, but the duration of an idle period does depend upon the duration of the previous busy period.) Now from Eq. (8.13) we have E[ũ] = t̄(ρ - 1), and so

    E[ỹ] = t̄(1 - ρ)        (8.93)
Let us now square Eq. (8.91) and then take expected values as follows:

    E[(w_{n+1} - y_n)²] = E[(w_n + u_n)²]

Using Eq. (8.90) and recognizing that the moments of the limiting distribution on w_n must be independent of the subscript, we have

    E[(ỹ)²] = 2E[w̃]E[ũ] + E[(ũ)²]

since w_n and u_n are independent random variables, so that E[w̃ũ] = E[w̃]E[ũ]. Using this and Eq. (8.92) we find

    W = E[w̃] = E[(ũ)²]/(-2E[ũ]) - E[(ỹ)²]/(2E[ỹ])        (8.94)

Moreover, since the variance of ũ is the sum of the variances σ_a² and σ_b² of the interarrival time and the service time, we have

    E[(ũ)²] = σ_a² + σ_b² + t̄²(1 - ρ)²        (8.95)

and so

    W = (σ_a² + σ_b² + t̄²(1 - ρ)²)/(2t̄(1 - ρ)) - E[(ỹ)²]/(2E[ỹ])        (8.96)
We must now calculate the first two moments of ỹ [we already know that E[ỹ] = t̄(1 - ρ), but wish to express it differently to eliminate a constant]. This we do by conditioning these moments on the occurrence of an idle period. That is, let us define

    a₀ ≜ P[ỹ > 0] = P[arrival finds the system idle]        (8.97)

Then

    P[ỹ ≤ y | ỹ > 0] = P[idle period ≤ y]        (8.98)

and this last is just the idle-period distribution earlier denoted by F(y). We denote by Ĩ the random variable representing the idle period. Now we may calculate the following:

    E[ỹ] = E[ỹ | ỹ = 0]P[ỹ = 0] + E[ỹ | ỹ > 0]P[ỹ > 0] = 0 + a₀E[ỹ | ỹ > 0]

The expectation in this last equation is merely the expected value of Ĩ, and so we have

    E[ỹ] = a₀E[Ĩ]        (8.99)

Similarly, we find

    E[(ỹ)²] = a₀E[Ĩ²]        (8.100)

Thus, in particular, E[(ỹ)²]/(2E[ỹ]) = E[Ĩ²]/(2E[Ĩ]) (the constant a₀ cancels!), and so we may rewrite the expression for W in Eq. (8.96) as

    W = (σ_a² + σ_b² + t̄²(1 - ρ)²)/(2t̄(1 - ρ)) - E[Ĩ²]/(2E[Ĩ])        (8.101)
Unfortunately, this is as far as we can go in establishing W for G/G/1. The calculation now involves the determination of the first two moments of the idle period. In general, for G/G/1 we cannot easily solve for these moments, since the idle period depends upon the particular way in which the previous busy period terminated. However, in Chapter 2, Volume II, we place bounds on the second term in this equation, thereby bounding the mean wait W.
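As a concrete instance of Eq. (8.101), M/M/1 has exponentially distributed idle periods, so both moments of Ĩ are known and the formula can be evaluated in closed form. The snippet below (parameter values assumed for illustration, not from the text) carries out that arithmetic and recovers the familiar M/M/1 mean wait ρ/(μ(1 - ρ)).

```python
lam, mu = 0.5, 1.0
rho = lam / mu
tbar = 1.0 / lam                      # mean interarrival time
var_a = 1.0 / lam**2                  # variance of the (exponential) interarrival time
var_b = 1.0 / mu**2                   # variance of the (exponential) service time
I1, I2 = 1.0 / lam, 2.0 / lam**2      # E[I] and E[I^2]: the idle period is exponential(lam)

# Eq. (8.101)
W = (var_a + var_b + tbar**2 * (1 - rho)**2) / (2 * tbar * (1 - rho)) - I2 / (2 * I1)
print(W)                              # equals rho/(mu*(1-rho)) = 1.0 for these values
```

For queues where the idle-period moments are unknown, the same expression is exactly what the bounds of Chapter 2, Volume II, attack.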
As we did for M/G/1 in Chapter 5, we now return to our basic equation (8.91) relating the important random variables and attempt to find the transform of the waiting-time density, W*(s) ≜ E[e^{-sw̃}], for G/G/1. As one might expect, this will involve the idle-time distribution as well. Forming the transform on both sides of Eq. (8.91) we have

    E[e^{-s(w_{n+1} - y_n)}] = E[e^{-s(w_n + u_n)}]        (8.102)

Since w_n and u_n are independent, the right-hand side is merely E[e^{-sw_n}]C*(s).
In order to evaluate the left-hand side of this transform expression, we take advantage of the fact that only one or the other of the random variables w_{n+1} and y_n may be nonzero. Accordingly, we have

    E[e^{-s(w_{n+1} - y_n)}] = E[e^{-sw_{n+1}} | y_n = 0]P[y_n = 0] + E[e^{sy_n} | y_n > 0]P[y_n > 0]        (8.103)

To determine the first term on the right-hand side of this last equation, we may use the following similar expansion:

    E[e^{-sw_{n+1}}] = E[e^{-sw_{n+1}} | y_n = 0]P[y_n = 0] + E[e^{-sw_{n+1}} | y_n > 0]P[y_n > 0]        (8.104)

However, since w_{n+1}y_n = 0, we have E[e^{-sw_{n+1}} | y_n > 0] = 1. Making use of the definition for a₀ in Eq. (8.97) and allowing the limit as n → ∞, we obtain the following transform expression from Eq. (8.104):

    E[e^{-sw̃} | ỹ = 0]P[ỹ = 0] = W*(s) - a₀

Using this in Eq. (8.103), we find in the limit

    E[e^{-s(w̃ - ỹ)}] = I*(-s)a₀ + W*(s) - a₀        (8.105)

where I*(s) is the Laplace transform of the idle-time pdf [see Eq. (8.98) for the definition of this distribution]. Thus, from this last and from Eq. (8.102), we obtain immediately

    W*(s)C*(s) = a₀I*(-s) + W*(s) - a₀

where, as in the past, C*(s) is the Laplace transform for the density describing the random variable ũ. This last equation finally gives us [MARS 68]

    W*(s) = a₀[1 - I*(-s)]/(1 - C*(s))        (8.106)
Example 1: M/M/1

For this system we know that the idle-period distribution is the same as the interarrival-time distribution, namely,

    F(y) = P[Ĩ ≤ y] = 1 - e^{-λy},  y ≥ 0        (8.107)
from which E[Ĩ] = 1/λ and E[Ĩ²] = 2/λ². Since σ_a² = 1/λ² and σ_b² = 1/μ², Eq. (8.101) gives

    W = (λ²(1/λ² + 1/μ²) + (1 - ρ)²)/(2λ(1 - ρ)) - 1/λ

and so

    W = (ρ/μ)/(1 - ρ)        (8.108)

Similarly, a₀ = 1 - ρ, I*(-s) = λ/(λ - s), and C*(s) = A*(-s)B*(s) = λμ/((λ - s)(s + μ)); Eq. (8.106) then gives

    W*(s) = (1 - ρ)s(s + μ)/(s(s + μ - λ)) = (1 - ρ)(s + μ)/(s + μ - λ)        (8.109)

which is the same as Eq. (5.120).
Example 2: M/G/1

Here again E[Ĩ] = 1/λ, E[Ĩ²] = 2/λ², and σ_a² = 1/λ²; Eq. (8.101) now gives

    W = λE[x̃²]/(2(1 - ρ)) = ρx̄(1 + C_b²)/(2(1 - ρ))        (8.110)

which is the P-K formula [here C_b² ≜ σ_b²/(x̄)²]. Also, C*(s) = [λ/(λ - s)]B*(s) and a₀ = 1 - ρ; Equation (8.106) then gives

    W*(s) = (1 - ρ)[1 - λ/(λ - s)] / (1 - [λ/(λ - s)]B*(s)) = s(1 - ρ)/(s - λ + λB*(s))        (8.111)
Example 3: D/D/1

In this case the length of the idle period is a constant and is given by Ĩ = t̄ - x̄ = t̄(1 - ρ); therefore E[Ĩ] = t̄(1 - ρ) and E[Ĩ²] = t̄²(1 - ρ)². Moreover, σ_a² = σ_b² = 0. Therefore Eq. (8.101) gives

    W = (0 + 0 + t̄²(1 - ρ)²)/(2t̄(1 - ρ)) - t̄(1 - ρ)/2

and so

    W = 0        (8.112)

This last is of course correct, since the equilibrium waiting time in the (stable) system D/D/1 is always zero. Since x̃, t̃, and Ĩ are all constants, we have B*(s) = e^{-sx̄}, A*(s) = e^{-st̄}, and I*(s) = e^{-st̄(1-ρ)}. Also, with probability one an arrival finds the system empty; thus a₀ = 1. Then Eq. (8.106) gives

    W*(s) = (1 - e^{st̄(1-ρ)})/(1 - e^{st̄(1-ρ)}) = 1

and so w(y) = u₀(y), an impulse at the origin, which of course checks with the result that no waiting occurs.
Considerations of the idle-time distribution naturally lead us to the study of duality in queues. This material is related to the ladder indices we defined in Section 5.11. The random walk we are interested in is the sequence of values taken on by U_n [as given in Eq. (8.9)]. Let us denote by U_{n_k} the value taken on by U_n at the kth ascending ladder index n_k (an instant at which U_n first rises above its previous maximum). Since E[ũ] < 0, it is clear that U_n → -∞ as n → ∞. Therefore, there will exist a (finite) integer K such that n_K is the largest ascending ladder index for U_n. Now from Eq. (8.11), repeated below,

    w̃ = sup_{n≥0} U_n

it is clear that

    w̃ = U_{n_K}        (8.113)
Let us now define

    ĩ_k ≜ U_{n_k} - U_{n_{k-1}}        (8.114)

for k ≤ K. That is, ĩ_k is merely the amount by which the new ascending ladder height exceeds the previous ascending ladder height. Since all of the random variables u_n are independent, the random variables ĩ_k conditioned on K are independent and identically distributed. If we now let 1 - σ = P[U_n ≤ U_{n_k} for all n > n_k], then we may easily calculate the distribution for K as

    P[K = k] = (1 - σ)σ^k,  k = 0, 1, 2, ...        (8.115)

Moreover,

    ĩ_1 + ĩ_2 + ··· + ĩ_K = U_{n_1} - U_{n_0} + U_{n_2} - U_{n_1} + ··· + U_{n_K} - U_{n_{K-1}} = U_{n_K}

where n_0 ≜ 0 and U_0 ≜ 0. Thus we see that w̃ = ĩ_1 + ··· + ĩ_K, and so we may write

    W*(s) = E[e^{-sw̃}] = E[(Ĩ*(s))^K]        (8.116)

where Ĩ*(s) is the Laplace transform for the pdf of each of the ĩ_k (each of which we now denote simply by ĩ). We may now evaluate the expectation in Eq. (8.116) by using the distribution for K in Eq. (8.115), finally to yield

    W*(s) = (1 - σ)/(1 - σĨ*(s))        (8.117)

Here, then, is yet another expression for W*(s) in the G/G/1 system.
We now wish to interpret the random variable ĩ by considering a "dual" queue (whose variables we will distinguish by the use of the symbol ^). The dual queue for the G/G/1 system considered above is the queue in which the service times x_n of the original system become the interarrival times t̂_{n+1} of the dual queue, and the interarrival times t_{n+1} of the original queue become the service times x̂_n of the dual queue.† It is clear then that the random variable û_n for the dual queue will merely be û_n = x̂_n - t̂_{n+1} = t_{n+1} - x_n = -u_n, and defining Û_n = û_0 + ··· + û_{n-1} for the dual queue, we have

    Û_n = -U_n        (8.118)

† Clearly, if the original queue is stable, the dual must be unstable, and conversely (except that both may be unstable if ρ = 1).
as the relationship between the dual and the original queues. It is then clear from our discussion in Section 5.11 that the ascending and descending ladder indices are interchanged for the original and the dual queues (the same is true of the ladder heights). Therefore the first ascending ladder index n₁ in the original queue will correspond to the first descending ladder index in the dual queue; however, we recall that descending ladder indices correspond to the arrival of a customer who terminates an idle period. We denote this customer by Ĉ_{n₁}. Clearly the length of the idle period that he terminates in the dual queue is the difference between the accumulated interarrival times and the accumulated service times for all customers up to his arrival (these services must have taken place in the first busy period); that is, for the dual queue,

    idle period = Σ_{n=0}^{n₁-1} t̂_{n+1} - Σ_{n=0}^{n₁-1} x̂_n = Σ_{n=0}^{n₁-1} x_n - Σ_{n=0}^{n₁-1} t_{n+1} = U_{n₁} = ĩ_1        (8.119)

where we have used Eq. (8.114) at the last step. Thus we see that the random variable ĩ is merely the idle period in the dual queue, and so our Eq. (8.117) relates the transform of the waiting time in the original queue to the transform of the idle-time pdf in the dual queue [contrast this with Eq. (8.106), which relates this waiting-time transform to the transform of the idle time in its own queue].
Thi s d uality observation permits some rather powerful conclusions to be
d rawn in simp le fashion (and these are discussed at length in [FELL 66],
especially Sections VI.9 and XII.5). Let us discuss two of these .
Example 4: GI/M/1

If we have a stable GI/M/1 queue (with t̄ = 1/λ and x̄ = 1/μ), then the dual is an unstable queue of the type M/G/1 (with t̄ = 1/μ and x̄ = 1/λ), and so the idle time in the dual queue will be of exponential form; therefore I*(s) = μ/(s + μ), which gives from Eq. (8.117) the following:

    W*(s) = (1 − σ)(s + μ)/(s + μ − σμ)

Inverting this and forming the PDF for waiting time we have

    W(y) = 1 − σe^{−μ(1−σ)y}        y ≥ 0        (8.120)
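The constant σ is the root in (0, 1) of σ = A*(μ − μσ). As a hedged numerical sketch (the D/M/1 special case, the parameter values, and the tolerances below are our own choices, not from the text), we can locate σ by fixed-point iteration and then check W(y) = 1 − σe^{−μ(1−σ)y} against a direct simulation of the queue:

```python
import math
import random

# GI/M/1: sigma is the root in (0, 1) of sigma = A*(mu - mu*sigma).
# For D/M/1 (an assumed special case) with interarrival time tbar,
# A*(s) = exp(-s*tbar), so sigma = exp(-mu*(1 - sigma)*tbar).
mu, tbar = 1.0, 2.0                      # rho = 1/(mu*tbar) = 0.5
sigma = 0.5
for _ in range(200):                     # fixed-point iteration
    sigma = math.exp(-mu * (1.0 - sigma) * tbar)

# Simulate waiting times via Lindley's recursion
# w_{n+1} = max(0, w_n + x_n - t_{n+1}).
random.seed(1)
N = 200_000
w, n_wait, total = 0.0, 0, 0.0
for _ in range(N):
    w = max(0.0, w + random.expovariate(mu) - tbar)
    n_wait += (w > 0.0)
    total += w

p_wait = n_wait / N        # theory: P[w > 0] = sigma
mean_wait = total / N      # theory: sigma / (mu*(1 - sigma))
```

For these values σ ≈ 0.203, so roughly one customer in five must wait even though the server is idle half the time.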
Example 5: M/G/1

As a second example let the original queue be of the form M/G/1; therefore the dual is of the form G/M/1. Since σ = P[w > 0], it must be that σ = ρ for M/G/1. Now in the dual system, since a busy period ends at a random point in time (and since the service time in this dual queue is memoryless), an idle period will have a duration equal to the residual life of an interarrival time; therefore from Eq. (5.11) we see that

    I*(s) = [1 − B*(s)]/sx̄        (8.121)

and when these calculations are applied to Eq. (8.117) we have

    W*(s) = (1 − ρ)/(1 − ρ{[1 − B*(s)]/sx̄})        (8.122)

which is the P-K transform equation for waiting time rewritten as in Eq. (5.106).
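One quick consequence worth recording: differentiating Eq. (8.122) at s = 0 recovers the P-K mean-value formula W = λE[x²]/[2(1 − ρ)], where E[x²] is the second moment of service time. The sketch below (M/D/1 is an assumed test case, and the parameter values are ours) compares this mean against a direct simulation of Lindley's recursion:

```python
import random

# P-K mean wait: W = lam * E[x^2] / (2 * (1 - rho)).  For M/D/1
# (assumed here) with constant service time x, E[x^2] = x*x.
lam, x = 0.5, 1.0
rho = lam * x                                # = 0.5
w_pk = lam * (x * x) / (2.0 * (1.0 - rho))   # = 0.5

# Lindley's recursion w_{n+1} = max(0, w_n + x_n - t_{n+1})
# with exponential interarrival times of rate lam.
random.seed(7)
N = 500_000
w, total = 0.0, 0.0
for _ in range(N):
    w = max(0.0, w + x - random.expovariate(lam))
    total += w
w_sim = total / N
```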
This concludes our study of G/G/1. Sad to say, we have been unable to give analytic expressions for the waiting-time distribution explicitly in terms of known quantities. In fact, we have not even succeeded for the mean wait W! Nevertheless, we have given a method for handling the rational case by spectrum factorization, which is quite effective. In Chapter 2, Volume II, we return to G/G/1 and succeed in extracting many of its important properties through the use of bounds, inequalities, and approximations.
REFERENCES

ANDE 53a
ANDE 53b
ANDE 54
BENE 63
FELL 66
KEIL 65
KING 66
LIND 52
MARS 68
POLL 57
RICE 62
SMIT 53
SPIT 56
SPIT 57
SPIT 60
SYSK 62
TITC 52
WEND 58
WOLF 70
EXERCISES

8.1. Show that the transform of the pdf for u_n = x_n − t_{n+1} is given by A*(−s)B*(s).
8.4. For the sequence of random variables given below, generate the figure corresponding to Figure 8.3 and complete the table.

    n                 0   1   2   3   4   5   6   7   8
    t_{n+1}           2   1   1   5   7   2   3   4   2
    x_n               3   4   2   3   3   4   9   6   3
    u_n
    w_n measured
    w_n calculated
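Completing such a table is mechanical once Lindley's recursion w_{n+1} = max(0, w_n + x_n − t_{n+1}) is in hand; a sketch of the recursion applied to an assumed data set (the particular interarrival and service values below are our own):

```python
# Lindley's recursion w_{n+1} = max(0, w_n + x_n - t_{n+1}) on assumed
# data; t[n] plays the role of t_{n+1}.
t = [2, 1, 1, 5, 7, 2, 3, 4, 2]   # interarrival times
x = [3, 4, 2, 3, 3, 4, 9, 6, 3]   # service times

w = [0.0]                         # customer C_0 finds the system empty
for n in range(len(t)):
    w.append(max(0.0, w[n] + x[n] - t[n]))
```

Each new waiting time needs only the previous one, which is exactly why the w_n row of such a table can be filled in left to right.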
8.5. Here we consider G/G/1 in heavy traffic, that is, with ρ = 1 − ε for 0 < ε ≪ 1. Let us expand

    W(y − u) = W(y) − uW^{(1)}(y) + (u²/2)W^{(2)}(y) + R(u, y)

where W^{(n)}(y) is the nth derivative of W(y) and R(u, y) is such that ∫ R(u, y) dC(u) is negligible due to the slow variation of W(y) when ρ = 1 − ε. Let ū_k denote the kth moment of ũ.
(a) Under these conditions convert Lindley's integral equation to a second-order linear differential equation involving ū₁ and ū₂.
(b) With the boundary condition W(0) = 0, solve the equation found in (a) and express the mean wait W in terms of the first two moments of t̃ and x̃.
8.6. Consider the D/E_r/1 queueing system, with a constant interarrival time (of t̄ sec) and a service-time pdf given as in Eq. (4.16).
(a) Find C(u).
(b) Show that Lindley's integral equation takes the form

    W(y − t̄) = ∫_{0−}^{y} W(y − w) dB(w)        y ≥ t̄

with W(y) = 0 for y < 0.
(c) Show that the solution is of the form

    W(y) = 1 + Σ_{i=1}^{r} a_i e^{α_i y}        y ≥ 0

where the α_i (i = 1, 2, ..., r) are those roots with negative real parts of

    e^{−α t̄}(rμ + α)^r − (rμ)^r = 0
8.7. Consider the following queueing systems in which no queue is permitted. Customers who arrive to find the system busy must leave without service.
(a) M/M/1: Solve for P_k = P[k in system].
(b) M/H₂/1: As in Figure 4.10 with α₁ = α, α₂ = 1 − α, μ₁ = 2μα, and μ₂ = 2μ(1 − α).
    (i) Find the mean service time x̄.
    (ii) Solve for P₀ (an empty system), P_α (a customer in the 2μα box), and P_{1−α} (a customer in the 2μ(1 − α) box).
(c) H₂/M/1: where A(t) is hyperexponential as in (b), but with parameters λ₁ = 2λα and λ₂ = 2λ(1 − α) instead. Draw the state-transition diagram (with labels on branches) for the following four states: E_{ij} is the state with the "arriving" customer in arrival stage i and j customers in service, i = 1, 2 and j = 0, 1.
(d)
(e)
(f)

8.8.

8.9.
Consider a G/G/1 system for which

    A*(s) = 2/[(s + 1)(s + 2)]

    B*(s) = 1/(s + 1)

(a) Find the expression for Ψ₊(s)/Ψ₋(s) and show the pole-zero plot in the s-plane.
(b) Use spectrum factorization to find Ψ₊(s) and Ψ₋(s).
(c) Find Φ₊(s).
(d) Find W(y).
(e) Find the average waiting time W.
(f) We solved for W(y) by the method of spectrum factorization. Can you describe another way to find W(y)?
8.10. Consider the system M/G/1. Using the spectral solution method for Lindley's integral equation, find
(a) Ψ₊(s). [HINT: Interpret [1 − B*(s)]/sx̄.]
(b) Ψ₋(s).
(c) sΦ₊(s).
8.11. Consider the ratio

    Ψ₊(s)/Ψ₋(s) = F(s)/[1 − F(s)]
8.12.

8.13. w(y) = lim_{n→∞} w_n(y)

8.14. (d) Show that w(y) = lim_{n→∞} w_n(y) satisfies Eq. (8.79). Compare w₂(y) with w₁(y).

8.15. By first cubing Eq. (8.91) and then forming expectations, express σ_w² (the variance of the waiting time) in terms of the first three moments of t̃, x̃, and ĩ.
8.16.

8.17.
(c) [1 − Â*(s)]/st̄
(d)
Epilogue

We have invested eight chapters (and two appendices!) in studying the theory of queueing systems. Occasionally we have been overjoyed at the beauty and generality of the results, but more often we have been overcome (with frustration) at the lack of real progress in the theory. (No, we never promised you a rose garden.) However, we did seduce you into believing that this study would provide worthwhile methods for practical application to many of today's pressing congestion problems. We confirm that belief in Volume II.
In the next volume, after a brief review of this one, we begin by taking a more relaxed view of G/G/1. In Chapter 2, we enter a new world, leaving behind the rigor (and pain) of exact solutions to exact problems. Here we are willing to accept the raw facts of life, which state that our models are not perfect pictures of the systems we wish to analyze, so we should be willing to accept approximations and bounds in our problem solution. Upper and lower bounds are found for the average delay in G/G/1, and we find that these are related to a very useful heavy-traffic approximation for such queues. This approximation, in fact, predicts that the long waiting times are exponentially distributed. A new class of models is then introduced whereby the discrete arrival and departure processes of queueing systems are replaced first by a fluid approximation (in which these stochastic processes are replaced by their mean values as a function of time), and then by a diffusion approximation (in which we permit a variation about these means). We happily find that these approximations give quite reasonable results for rather general queueing systems. In fact, they even permit us to study the transient behavior not only of stable queues but also of saturated queues, and this is the material in the final section of Chapter 2, whereby we give Newell's treatment of the rush-hour approximation, an effective method indeed.
Chapter 3 points the way to our applications in time-shared computer systems by presenting some of the principal results for priority queueing systems. We study general methods and apply them to a number of important queueing disciplines. The conservation law for priority systems is established, preventing the useless search for nonrealizable disciplines.

In the remainder, we choose applications principally from the computer field, since these applications are perhaps the most recent and successful for the theory of queues. In fact, the queueing analysis of allocation of resources and job flow through computer systems is perhaps the only tool available
to computer scientists in understanding the behavior of the complex interaction of users, programs, processes, and resources. In Chapter 4 we emphasize multi-access computer systems in isolation, handling demands of a large collection of competing users. We look for throughput and response time as well as utilization of resources. The major portion of this chapter is devoted to a particular class of algorithms known as processor-sharing algorithms, since they are singularly suited to queueing analysis and capture the essence of more difficult and more complex algorithms seen in real scheduling problems. Chapter 5 addresses itself to computers in networks, a field that is perhaps the fastest growing in the young computer industry itself (most of the references there are drawn from the last three years, a tell-tale indicator indeed). The chapter is devoted to developing methods of analysis and design for computer-communication networks and identifies many unsolved important problems. A specific existing network, the ARPANET, is used throughout as an example to guide the reader through the motivation and evaluation of the various techniques developed.

Now it remains for you, the reader, to sharpen and apply your new set of tools. The world awaits and you must serve!
APPENDIX I
Figure I.1  A general system.

We use the notation

    f(t) → g(t)        (I.1)

to denote the fact that g(t) is the output of our system when f(t) is applied as input. A system is said to be linear if, when f₁(t) → g₁(t) and f₂(t) → g₂(t), then

    af₁(t) + bf₂(t) → ag₁(t) + bg₂(t)        (I.2)

for any constants a and b, and is said to be time-invariant if

    f(t + T) → g(t + T)        (I.3)

for any T. If the above two properties both hold, then our system is said to be a linear time-invariant system, and it is these with which we concern ourselves for the moment.
Whenever one studies such systems, one finds that complex exponential functions of time appear throughout the solution. Further, as we shall see, the transforms of interest merely represent ways of decomposing functions of time into sums (or integrals) of complex exponentials. That is, complex exponentials form the building blocks of our transforms, and so we must inquire further to discover why these complex exponentials pervade our thinking with such systems. Let us now pose the fundamental question, namely, which functions of time f(t) may pass through our linear time-invariant systems with no change in form; that is, for which f(t) will g(t) = Hf(t), where H is some scalar multiplier (with respect to t)? If we can discover such functions f(t) we will then have found the "eigenfunctions," or "characteristic functions," or "invariants" of our system. Denoting these eigenfunctions by f_e(t), it will be shown that they must be of the following form (to within an arbitrary scalar multiplier):

    f_e(t) = e^{st}        (I.4)
where s is, in general, a complex variable. That is, the complex exponentials given in (I.4) form the set of eigenfunctions for all linear time-invariant systems. This result is so fundamental that it is worthwhile devoting a few lines to its derivation. Thus let us assume that when we apply f_s(t) the output is of the form g_s(t), that is,

    f_s(t) = e^{st} → g_s(t)

But, by the linearity property we have

    e^{sT} f_s(t) → e^{sT} g_s(t)

where T and therefore e^{sT} are both constants. Moreover, from the time-invariance property we must have

    f_s(t + T) = e^{s(t+T)} = e^{sT} f_s(t) → g_s(t + T)

Comparing these last two relations (which have identical inputs) we see that

    g_s(t + T) = e^{sT} g_s(t)

for all t and T; setting t = 0 shows that g_s(T) = g_s(0)e^{sT} for every T, that is, g_s(t) = He^{st} with H ≜ g_s(0). This confirms our earlier hypothesis that the complex exponentials pass through our linear time-invariant systems unchanged except for the scalar multiplier H. H is independent of t but may certainly be a function of s, and so we choose to write it as H = H(s). Therefore, we have the final conclusion that

    e^{st} → H(s)e^{st}        (I.5)

and this fundamental result exposes the eigenfunctions of our systems.
In this way the complex exponentials are seen to be the basic functions in the study of linear time-invariant systems. Moreover, if it is true that the input to such a system is a complex exponential, then it is a trivial computation to evaluate the output of that system from Eq. (I.5) if we are given the function H(s). Thus for any input f(t) we would hope to be able to decompose f(t) into a sum (or integral) of complex exponentials, each of which contributes to the overall output g(t) through a computation of the form given in Eq. (I.5). Then the overall output may be found by summing (integrating) these individual components of the output. (The fact that the sum of the individual outputs is the same as the output of the sum of the individual inputs, that is, of the complex exponential decomposition, is due to the linearity of our system.) The process of decomposing our input into sums of exponentials, computing the response to each from Eq. (I.5), and then reconstituting the output from sums of exponentials is the essence of the transform method of system analysis.
Similar statements may be made for discrete-time systems. Here the input is a sequence f_n and the output a sequence g_n, and we write

    f_n → g_n        (I.6)

A discrete system is linear and time-invariant if

    af_n^{(1)} + bf_n^{(2)} → ag_n^{(1)} + bg_n^{(2)}        (I.7)

    f_{n+m} → g_{n+m}        (I.8)

where m is some integer constant. Here Eq. (I.7) is the expression of linearity whereas Eq. (I.8) is the expression of time-invariance for our discrete systems. We may ask the same fundamental question for these discrete systems and, of course, the answer will be essentially the same, namely, that the eigenfunctions are discrete complex exponentials of the form e^{sn}. Once again the complex exponentials are the eigenfunctions. At this point it is convenient to introduce the definition

    z ≜ e^{−s}        (I.9)

in terms of which the eigenfunctions may be written as z^{−n}; for them the eigenfunction relation takes the form

    z^{−n} → H(z)z^{−n}        (I.10)

We shall also need the unit function

    u_n = 1,  n = 0
    u_n = 0,  n ≠ 0        (I.11)

When we apply u_n to our system it is common to refer to the output as the unit response, and this is usually denoted by h_n; that is,

    u_n → h_n   and, by time-invariance,   u_{n+m} → h_{n+m}        (I.12)

Furthermore, if we consider a set of inputs {f_n^{(i)}}, and if we define the output for each of these by f_n^{(i)} → g_n^{(i)}, then by the linearity of our system we must have

    Σ_i f_n^{(i)} → Σ_i g_n^{(i)}        (I.13)
Now from Eqs. (I.12) and (I.13) we may weight each input u_{n+m} by the constant z^{n+m} and sum over m, obtaining

    Σ_m z^{n+m} u_{n+m} → Σ_m z^{n+m} h_{n+m}

where the sum ranges over all integer values of m. From the definition in Eq. (I.11) it is clear that the sum on the left-hand side of this equation has only one nonzero term, namely, for m = −n, and this term is equal to unity; moreover, let us multiply both sides by the constant z^{−n} and make a change of variable k = n + m for the sum on the right-hand side of this expression, giving

    z^{−n} → z^{−n} Σ_k h_k z^k

This last equation is now in the same form as Eq. (I.10); it is obvious then that we have the relationship

    H(z) = Σ_k h_k z^k        (I.14)
This last equation relates the system function H(z) to the unit response h_k. Recall that our linear time-invariant system was completely* specified by knowledge of H(z), since we could then determine the output for any of our eigenfunctions; similarly, knowledge of the unit response also completely* determines the operation of our linear time-invariant system. Thus it is no surprise that some explicit relationship must exist between the two, and, of course, this is given in Eq. (I.14).
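The relationship can be checked numerically: pushing the eigenfunction f_n = z^{−n} through the convolution sum g_n = Σ_k h_k f_{n−k} must reproduce z^{−n} scaled by H(z) = Σ_k h_k z^k. A minimal sketch, with an arbitrarily chosen (assumed) unit response:

```python
# An LTI system is specified by its unit response h_n; its system function
# is H(z) = sum_k h_k z^k.  The eigenfunction f_n = z0**(-n) must come out
# of the system as H(z0) * z0**(-n).
h = [1.0, 0.5, 0.25]     # assumed unit response h_0, h_1, h_2
z0 = 0.8

def H(z):
    return sum(hk * z**k for k, hk in enumerate(h))

def output(n):
    # convolution g_n = sum_k h_k f_{n-k} with f_n = z0**(-n)
    return sum(hk * z0 ** (-(n - k)) for k, hk in enumerate(h))

errors = [abs(output(n) - H(z0) * z0 ** (-n)) for n in range(6)]
```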
Finally, we are in a position to answer the question: why transforms? The key lies in the expression (I.14), which is, itself, a transform (in this case a z-transform), which converts† the time function h_k into a function of a complex variable H(z). This transform arose naturally in our study of linear time-invariant systems and was not introduced into the analysis in an artificial way. We shall see later that a similar relationship exists for continuous-time systems as well, and this gives rise to the Laplace transform. Recalling that continuous-time systems may be described by constant-coefficient linear differential equations and that the use of transforms greatly simplifies the solution of these equations, we are not surprised that discrete-time systems lead to sets of constant-coefficient linear difference equations whose solution is simplified by the use of z-transforms. Lastly, we comment that the inputs f

* Completely specified in the sense that the only additional required information is the initial state of the system (e.g., the initial conditions of all the energy storage elements). Usually, the system is assumed to be in the zero-energy state, in which case we truly have a complete specification.
† Transforms not only change the form in which the information describing a given function is presented, but they also present this information in a simplified form which is convenient for mathematical manipulation.
and the outputs g are easily decomposed into weighted sums of complex exponentials by means of transforms and, of course, once this is done, then results such as (I.5) or (I.10) immediately give us the component-by-component output of our system for each of these inputs; the total output is then formed by summing the output components as in Eq. (I.13).

The fact that these transforms arise naturally in our system studies is really only a partial answer to our basic question regarding their use in analysis. The other and more pragmatic reason is that they greatly simplify the analysis itself; most often, in fact, the analysis can only proceed with the use of transforms, leading us to a partial solution from which properties of the system behavior may be derived.

The remainder of this appendix is devoted to giving examples and properties of these two principal transforms, which are so useful in queueing theory.
I.2. THE z-TRANSFORM [JURY 64, CADZ 73]
Let us consider a function of discrete time f_n, which takes on nonzero values only for the nonnegative integers, that is, for n = 0, 1, 2, ... (i.e., for convenience we assume that f_n = 0 for n < 0). We now wish to compress this semi-infinite sequence into a single function in a way such that we can expand the compressed form back into the original sequence when we so desire. In order to do this, we must place a "tag" on each of the terms in the sequence f_n. We choose to tag the term f_n by multiplying it by z^n; since n is then unique for each term in the sequence, each tag is also unique. z will be chosen as some complex variable whose permitted range of values will be discussed shortly. Once we tag each term, we may then sum over all tagged terms to form our compressed function, which represents the original sequence. Thus we define the z-transform (also known as the generating function or geometric transform) for f_n as follows:

    F(z) = Σ_{n=0}^∞ f_n z^n        (I.15)

F(z) is clearly only a function of our complex variable z since we have summed over the index n; the notation we adopt for the z-transform is to use a capital letter that corresponds to the lower-case letter describing the sequence, as in Eq. (I.15). We recognize that Eq. (I.14) is, of course, in exactly this form. The z-transform for a sequence will exist so long as the terms in that sequence grow no faster than geometrically, that is, so long as there is some a > 0 such that |f_n| ≤ a^n for all n.
If the sum over all terms in the sequence f_n is finite, then certainly the unit disk |z| ≤ 1 represents a range of analyticity for F(z).* In such a case we have

    F(1) = Σ_{n=0}^∞ f_n        (I.16)

We denote the relationship between a sequence and its transform by†

    f_n ⇔ F(z)        (I.17)

For our first example, let us consider the unit function as defined in Eq. (I.11). For this function, and from the definition given in Eq. (I.15), we see that exactly one term in the infinite summation is nonzero, and so we immediately have the transform pair

    u_n ⇔ 1        (I.18)
For a related example, let us consider the unit function shifted to the right by k units, that is,

    u_{n−k} = 1,  n = k
    u_{n−k} = 0,  n ≠ k

From Eq. (I.15) again, exactly one term will be nonzero, giving

    u_{n−k} ⇔ z^k
As a third example, let us consider the unit step function defined by

    δ_n = 1        for n = 0, 1, 2, ...

(recall that all functions are zero for n < 0). The transform is the geometric series, that is,

    δ_n ⇔ Σ_{n=0}^∞ z^n = 1/(1 − z)        (I.19)

We note in this case that we require |z| < 1 in order for the z-transform to exist. An extremely important sequence often encountered is the geometric series

    f_n = Aα^n        n = 0, 1, 2, ...

* A function of a complex variable is said to be analytic at a point in the complex plane if that function has a unique derivative at that point. The Cauchy-Riemann necessary and sufficient condition for analyticity of such functions may be found in any text on functions of a complex variable [AHLF 66].
† The double bar denotes the transform relationship whereas the double heads on the arrow indicate that the journey may be made in either direction, f ⇒ F and F ⇒ f.
For this sequence we have

    F(z) = Σ_{n=0}^∞ Aα^n z^n = A/(1 − αz)

and so

    Aα^n ⇔ A/(1 − αz)        (I.20)
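Pair (I.20) is easy to sanity-check numerically: for |αz| < 1 the partial sums of Σ_n Aα^n z^n converge to A/(1 − αz). The values of A, α, and z below are arbitrary choices:

```python
# Partial sums of the geometric sequence's transform versus the closed
# form A/(1 - alpha*z); with |alpha*z| = 0.3 the tail is negligible
# long before n = 200.
A, alpha, z = 2.0, 0.5, 0.6
partial = sum(A * alpha**n * z**n for n in range(200))
closed = A / (1.0 - alpha * z)
```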
We are interested in deriving the z-transform of the convolution of f_n and g_n, and this we do as follows:

    f_n ⊛ g_n ⇔ Σ_{n=0}^∞ (f_n ⊛ g_n) z^n = Σ_{n=0}^∞ Σ_{k=0}^{n} f_{n−k} g_k z^{n−k} z^k

However, since

    Σ_{n=0}^∞ Σ_{k=0}^{n} = Σ_{k=0}^∞ Σ_{n=k}^∞

we have

    f_n ⊛ g_n ⇔ Σ_{k=0}^∞ g_k z^k Σ_{n=k}^∞ f_{n−k} z^{n−k} = G(z)F(z)
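This convolution property can be confirmed exactly with finite sequences, for which the transforms are just polynomials; the two short sequences here are arbitrary:

```python
# The z-transform of a convolution equals the product of the z-transforms.
f = [1, 2, 3]
g = [4, 0, 1, 2]

conv = [0] * (len(f) + len(g) - 1)        # (f conv g)_n = sum_k f_{n-k} g_k
for i, fi in enumerate(f):
    for k, gk in enumerate(g):
        conv[i + k] += fi * gk

def ztrans(seq, z):
    return sum(a * z**n for n, a in enumerate(seq))

z = 0.3
lhs = ztrans(conv, z)
rhs = ztrans(f, z) * ztrans(g, z)
```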
Table I.1  Some Properties of the z-Transform

    SEQUENCE                                  z-TRANSFORM
 1. f_n,  n = 0, 1, 2, ...                    F(z) = Σ_{n=0}^∞ f_n z^n
 2. af_n + bg_n                               aF(z) + bG(z)
 3. a^n f_n                                   F(az)
 4. f_{n/k},  n = 0, k, 2k, ...  (k > 0)      F(z^k)
 5. f_{n+1}                                   (1/z)[F(z) − f_0]
 6. f_{n+k},  k > 0                           z^{−k}[F(z) − Σ_{j=0}^{k−1} f_j z^j]
 7. f_{n−1}                                   zF(z)
 8. f_{n−k},  k > 0                           z^k F(z)
 9. nf_n                                      z (d/dz)F(z)
10. n(n−1)⋯(n−m+1) f_n                        z^m (d^m/dz^m)F(z)
11. f_n ⊛ g_n                                 F(z)G(z)
12. f_n − f_{n−1}                             (1 − z)F(z)
13. Σ_{k=0}^{n} f_k                           F(z)/(1 − z)
14. ∂f_n/∂a  (a is a parameter of f_n)        ∂F(z)/∂a
15. Σ_{n=0}^∞ f_n = F(1)
16. Σ_{n=0}^∞ (−1)^n f_n = F(−1)
17. f_0 = F(0)
18. f_n = (1/n!)(d^n F(z)/dz^n)|_{z=0}
19. lim_{n→∞} f_n = lim_{z→1} (1 − z)F(z)
Table I.2  Some z-Transform Pairs

    SEQUENCE  (n = 0, 1, 2, ...)              z-TRANSFORM  F(z) = Σ_{n=0}^∞ f_n z^n
 1. u_n                                       1
 2. u_{n−1}                                   z
 3. u_{n−k}                                   z^k
 4. 1  (unit step)                            1/(1 − z)
 5. 1 for n ≥ k, else 0                       z^k/(1 − z)
 6. α^n                                       1/(1 − αz)
 7. nα^n                                      αz/(1 − αz)²
 8. n                                         z/(1 − z)²
 9. n²α^n                                     αz(1 + αz)/(1 − αz)³
10. n²                                        z(1 + z)/(1 − z)³
11. (n + 1)α^n                                1/(1 − αz)²
12. n + 1                                     1/(1 − z)²
13. [(n + 1)(n + 2)⋯(n + m)/m!] α^n           1/(1 − αz)^{m+1}
14. α^n/n!                                    e^{αz}
Some comments regarding these tables are in order. First, in the property table we note that Property 2 is a statement of linearity, and Properties 3 and 4 are statements regarding scale change in the transform and time domain, respectively. Properties 5-8 regard translation in time and are most useful. In particular, note from Property 7 that the unit delay (delay by one unit of time) results in multiplication of the transform by the factor z, whereas Property 5 states that a unit advance involves division by the factor z. Properties 9 and 10 show multiplication of the sequence by terms of the form n(n − 1)⋯(n − m + 1). Combinations of these may be used in order to find, for example, the transform of n²f_n; this may be done by recognizing that n² = n(n − 1) + n, and so the transform of n²f_n is merely z² d²F(z)/dz² + z dF(z)/dz. This shows the simple differentiation technique for obtaining more complex transforms. Perhaps the most important, however, is Property 11, showing that the convolution of two time sequences has a transform that is the product of the transforms of each time sequence separately. Properties 12 and 13 refer to the difference and summation of various terms in the sequence. Property 14 shows that if a is an independent parameter of f_n, differentiating the sequence with respect to this parameter is equivalent to differentiating the transform. Property 15 is also important and shows that the transform expression may be evaluated at z = 1 directly to give the sum of all terms in the sequence. Property 16 merely shows how to calculate the alternating sum. From the definition of the z-transform, the initial value theorem given in Property 17 is obvious and shows how to calculate the initial term of the sequence directly from the transform. Property 18, on the other hand, shows how to calculate any term in the original sequence directly from its z-transform by successive differentiation; this then corresponds to one method for calculating the sequence given its transform. It can be seen from Property 18 that the sequence f_n forms the coefficients in the Taylor-series expansion of F(z) about the point 0. Since this power-series expansion is unique, it is clear that the inversion process is also unique. Property 19 gives a direct method for calculating the final value of a sequence from its z-transform.

Table I.2 lists some useful transform pairs. This table can be extended considerably by making use of the properties listed in Table I.1; in some cases this has already been done. For example, Pair 5 is derived from Pair 4 by use of the delay theorem given as entry 8 in Table I.1. One of the more useful relationships is given in Pair 6, considered earlier.

Thus we see the effect of compressing a time sequence f_n into a single function of the complex variable z. Recall that the use of the variable z was to tag the terms in the sequence f_n so that they could be recovered from the compressed function; that is, f_n was tagged with the factor z^n.
We must also be able to go back the other way, recovering f_n from F(z). One approach is to obtain the power-series expansion of F(z), whose coefficients are precisely the terms f_n; this may be done in two ways. The first way is to form the successive derivatives of F(z) as in Property 18 of Table I.1, namely,

    f_n = (1/n!) (d^n F(z)/dz^n) |_{z=0}
(this method is useful if one is only interested in a few terms but is rather tedious if many terms are required); the second way is useful if F(z) is expressible as a rational function of z (that is, as the ratio of a polynomial in z over a polynomial in z), and in this case one may divide the denominator into the numerator to pick off the sequence of leading terms in the power series directly. The power-series expansion method is usually difficult when many terms are required.

The second and most useful method for inverting z-transforms [that is, to calculate f_n from F(z)] is the inspection method. That is, one attempts to express F(z) in a fashion such that it consists of terms that are recognizable as transform pairs, for example, from Table I.2. The standard approach for placing F(z) in this form is to carry out a partial-fraction expansion,* which we now discuss. The partial-fraction expansion is merely an algebraic technique for expressing rational functions of z as sums of simple terms, each of which is easily inverted. In particular, we will attempt to express a rational F(z) as a sum of terms, each of which looks either like a simple pole (see entry 6 in Table I.2) or like a multiple pole (see entry 13). Since the sum of the transforms equals the transform of the sum, we may apply Property 2 from Table I.1 to invert each of these now recognizable forms separately, thereby carrying out the required inversion. To carry out the partial-fraction expansion we proceed as follows. We assume that F(z) is in rational form, that is,

    F(z) = N(z)/D(z)
where both the numerator N(z) and the denominator D(z) are polynomials in z.* We assume further that the denominator is already in the factored form

    D(z) = Π_i (1 − α_i z)^{m_i}        (I.21)

where the ith distinct root 1/α_i appears with multiplicity m_i. [We note here that in most problems of interest, the difficult part of the solution is to take an arbitrary polynomial such as D(z) and to find its roots so that it may be put in the factored form given in Eq. (I.21). At this point we assume that that difficult task has been accomplished.] If F(z) is in this form then it is possible to express it as follows [GUIL 49]:

    F(z) = Σ_i Σ_{j=1}^{m_i} A_{ij}/(1 − α_i z)^j        (I.22)

* This procedure is related to the Laurent expansion of F(z) around each pole [GUIL 49].
This last form is exactly what we were looking for, since each term in this sum may be found in our table of transform pairs; in particular it is Pair 13 (and in the simplest case it is Pair 6). Thus if we succeed in carrying out the partial-fraction expansion, then by inspection we have our time sequence f_n. It remains now to describe the method for calculating the coefficients A_{ij}. The general expression for such a term is given by

    A_{ij} = [1/(m_i − j)!] (−1/α_i)^{m_i − j} (d^{m_i−j}/dz^{m_i−j}) [(1 − α_i z)^{m_i} F(z)] |_{z = 1/α_i}        (I.23)

This rather formidable procedure is, in fact, rather straightforward as long as the function F(z) is not terribly complex.*

* We note here that a partial-fraction expansion may be carried out only if the degree of the numerator polynomial is strictly less than the degree of the denominator polynomial; if this is not the case, then it is necessary to divide the denominator into the numerator until the remainder is of lower degree than the denominator. This remainder divided by the original denominator may then be expanded in partial fractions by the method shown; the terms generated from the division also may be inverted by inspection making use of transform pair 3 in Table I.2. An alternative way of satisfying the degree condition is to attempt to factor out enough powers of z from the numerator if possible.
As an example of these techniques, consider

    F(z) = 4z²(1 − 8z) / [(1 − 4z)(1 − 2z)²]        (I.24)

Factoring out the powers of z in the numerator, we write

    F(z) = z² [4(1 − 8z) / ((1 − 4z)(1 − 2z)²)] ≜ z²G(z)

and carry out the partial-fraction expansion of

    G(z) ≜ 4(1 − 8z) / [(1 − 4z)(1 − 2z)²]

in the form

    G(z) = A₁₁/(1 − 4z) + A₂₂/(1 − 2z)² + A₂₁/(1 − 2z)

Coefficients such as A₁₁ (that is, coefficients of simple poles) are easily obtained from Eq. (I.23) by multiplying the original function by the factor corresponding to the pole and then evaluating the result at the pole itself (that is, when z takes on a value that drives the factor to 0). Thus in our example we have

    A₁₁ = (1 − 4z)G(z) |_{z=1/4} = 4[1 − (8/4)] / [1 − (2/4)]² = −16

Similarly,

    A₂₂ = (1 − 2z)²G(z) |_{z=1/2} = 4[1 − (8/2)] / [1 − (4/2)] = 12
The coefficient A₂₁ requires the differentiation indicated in Eq. (I.23):

    A₂₁ = −(1/2) (d/dz)[(1 − 2z)²G(z)] |_{z=1/2}
        = −(1/2) (d/dz)[4(1 − 8z)/(1 − 4z)] |_{z=1/2}
        = −(1/2) [(1 − 4z)(−32) − 4(1 − 8z)(−4)] / (1 − 4z)² |_{z=1/2}
        = 8

Thus we conclude that

    G(z) = −16/(1 − 4z) + 12/(1 − 2z)² + 8/(1 − 2z)

This is easily shown to be equal to the original factored form of G(z) by placing these terms over a common denominator. Our next step is to invert G(z) by inspection. This we do by observing that the first and third terms are of the form given by transform pair 6 in Table I.2 and that the second term is given by transform pair 13. This, coupled with the linearity property 2 in Table I.1, gives immediately

    G(z) ⇔ g_n = 0,                                       n < 0
    G(z) ⇔ g_n = −16(4)^n + 12(n + 1)2^n + 8(2)^n,        n = 0, 1, 2, ...        (I.25)
Of course, we must now account for the factor z² to give the expression for f_n. As mentioned above, we do this by taking advantage of Property 8 in Table I.1, which gives f_n = g_{n−2} (for n = 2, 3, ...), and so

    f_n = 0,                          n < 2
    f_n = (3n − 1)2^n − 4^n,          n = 2, 3, 4, ...
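The inversion can be double-checked by generating the power-series coefficients of F(z) = 4z²(1 − 8z)/[(1 − 4z)(1 − 2z)²] directly, dividing the denominator into the numerator term by term, and comparing with the closed form f_n = (3n − 1)2^n − 4^n:

```python
# Synthetic division: if F(z) = N(z)/D(z) = sum_n f_n z^n with D(0) = 1,
# then f_n = num_n - sum_{k>=1} den_k * f_{n-k}.
num = [0, 0, 4, -32]           # 4z^2 - 32z^3
den = [1, -8, 20, -16]         # (1 - 4z)(1 - 2z)^2 expanded

N = 12
f = []
for n in range(N):
    c = num[n] if n < len(num) else 0
    c -= sum(den[k] * f[n - k] for k in range(1, min(n, 3) + 1))
    f.append(c)

closed = [0, 0] + [(3 * n - 1) * 2**n - 4**n for n in range(2, N)]
```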
The last inversion method we shall mention makes use of the inversion formula

    f_n = −(1/2πj) ∮_C F(z) z^{−1−n} dz        (I.26)
where j = √−1 and the integral is evaluated in the complex z-plane around a closed circular contour C, which is large enough* to surround all poles of F(z). This method of evaluation works properly when factors of the form z^k are removed from the expression; the reduced expression is then evaluated and the final solution is obtained by taking advantage of Property 8 in Table I.1, as we shall see below. This contour integration is most easily performed by making use of the Cauchy residue theorem [GUIL 49], which may be stated as follows: if a function F(z) is analytic† within and on the closed contour C except at a finite number of poles inside C, then

    (1/2πj) ∮_C F(z) dz = (the sum of the residues of F(z) at those poles)        (I.27)

For our example we have

    g_n = −(1/2πj) ∮_C [4z^{−1−n}(1 − 8z)] / [(1 − 4z)(1 − 2z)²] dz

* Since Jordan's lemma (see p. 353) requires that F(z) → 0 as z → ∞ if we are to let the contour grow, we require that any function F(z) that we consider have this property; thus for rational functions of z, if the numerator degree is not less than the denominator degree, then we must divide the numerator by the denominator until the remainder is of lower degree than the denominator, as we have seen earlier. The terms generated by this division are easily transformed by inspection, as discussed earlier, and it is only the remaining function which we now consider in this inversion method for the z-transform.
† A function F(z) of a complex variable z is said to be analytic in a region of the complex plane if it is single-valued and differentiable at every point in that region.
where C is a circle large enough to enclose the poles of the integrand at z = 1/4 and z = 1/2. Using the residue theorem and Eq. (I.27), we find that the residue at the simple pole z = 1/4 is given by

    r_{1/4} = (z − 1/4) [4z^{−1−n}(1 − 8z)] / [(1 − 4z)(1 − 2z)²] |_{z=1/4} = 16(4)^n

whereas the residue at the second-order pole z = 1/2 is calculated as

    r_{1/2} = (d/dz){(z − 1/2)² [4z^{−1−n}(1 − 8z)] / [(1 − 4z)(1 − 2z)²]} |_{z=1/2}
            = (d/dz)[z^{−1−n}(1 − 8z)/(1 − 4z)] |_{z=1/2}
            = {(1 − 4z)[(−1 − n)z^{−2−n}(1 − 8z) + z^{−1−n}(−8)] − z^{−1−n}(1 − 8z)(−4)} / (1 − 4z)² |_{z=1/2}
            = −12(n + 1)2^n + 16(2)^n − 24(2)^n

Now we must take 2πj times the sum of the residues and then multiply by the factor preceding the integral in Eq. (I.26) (thus we must take −1 times the sum of the residues) to yield

    g_n = −16(4)^n + 12(n + 1)2^n + 8(2)^n        n = 0, 1, 2, ...

But this last is exactly equal to the form for g_n in Eq. (I.25) found by the method of partial-fraction expansions. From here the solution proceeds as in that method, thus confirming the consistency of these two approaches.
Thus we have reviewed some of the techniques for applying and inverting the z-transform in the handling of discrete-time functions. The application of these methods in the solution of difference equations is carefully described in Section I.4 below.
1.3. THE LAPLACE TRANSFORM
convenience we are assuming that f(t) = 0 for t < 0. For the more general case, most of these techniques apply as discussed in the paragraph containing Eq. (1.38) below.] As with discrete-time functions, we wish to take our continuous-time function and transform it from a function of t to a function of a new complex variable (say, s). At the same time we would like to be able to "untransform" back into the t domain, and in order to do this it is clear we must somehow "tag" f(t) at each value of t. For reasons related to those described in Section 1.1 the tag we choose to use is e^{-st}. The complex variable s may be written in terms of its real and imaginary parts as s = σ + jω where, again, j = √-1. Having multiplied by this tag, we then integrate over all nonzero values in order to obtain our transform function defined as follows:
$$F^*(s) \triangleq \int_{-\infty}^{\infty} f(t)\,e^{-st}\,dt \qquad (1.28)$$
Again, we have adopted the notation for general Laplace transforms in which we use a capital letter for the transform of a function of time, which is described in terms of a lower-case letter. This is usually referred to as the "two-sided," or "bilateral," Laplace transform since it operates on both the negative and positive time axes. We have assumed that f(t) = 0 for t < 0, and in this case the lower limit of integration may be replaced by 0^-, which is defined as the limit of 0 - ε as ε (> 0) goes to zero; further, we often denote this lower limit merely by 0 with the understanding that it is meant as 0^- (usually this will cause no confusion). There also exists what is known as the "one-sided" Laplace transform in which the lower limit is replaced by 0^+, which is defined as the limit of 0 + ε as ε (> 0) goes to zero; this one-sided transform has application in the solution of transient problems in linear systems. It is important that the reader distinguish between these two transforms with zero as their lower limit since in the former case (the bilateral transform) any accumulation at the origin (as, for example, the unit impulse defined below) will be included in the transform, whereas in the latter case (the one-sided transform) it will be omitted.
For our assumed case in which f(t) = 0 for t < 0 we may write our transform as

$$F^*(s) = \int_{0^-}^{\infty} f(t)\,e^{-st}\,dt \qquad (1.29)$$
where, we repeat, the lower limit is to be interpreted as 0^-. This Laplace transform will exist so long as f(t) grows no faster than an exponential, that is, so long as there is some real number σ_a such that e^{-σ_a t}|f(t)| remains bounded as t → ∞.
The smallest possible value for σ_a is referred to as the abscissa of absolute convergence. Again we state that the Laplace transform F*(s) for a given function f(t) is unique.

If the integral of f(t) is finite, then certainly the right-half plane Re(s) ≥ 0 represents a region of analyticity for F*(s); the notation Re( ) reads as "the real part of the complex function within the parentheses." In such a case we have, corresponding to Eq. (1.16),

$$F^*(0) = \int_{0^-}^{\infty} f(t)\,dt \qquad (1.30)$$

From our earlier definition in Eq. (1.9) we see that properties for the z-transform when z = 1 will correspond to properties for the Laplace transform when s = 0 as, for example, in Eqs. (1.16) and (1.30).
Let us now consider some important examples of Laplace transforms. We use notation here identical to that used in Eq. (1.17) for z-transforms, namely, we use a double-barred, double-headed arrow to denote the relationship between a function and its transform; thus, Eq. (1.29) may be written as

$$f(t) \Longleftrightarrow F^*(s) \qquad (1.31)$$

Consider first the one-sided exponential function f(t) = Ae^{-at} for t ≥ 0 (and 0 for t < 0). Its transform is

$$\int_{0^-}^{\infty} A e^{-at} e^{-st}\,dt = A\int_{0^-}^{\infty} e^{-(s+a)t}\,dt = \frac{A}{s+a}$$

And so we have the fundamental relationship

$$A e^{-at}\,\delta(t) \Longleftrightarrow \frac{A}{s+a} \qquad (1.32)$$
where we have defined the unit step function in continuous time as

$$\delta(t) = \begin{cases} 1 & t \ge 0 \\ 0 & t < 0 \end{cases} \qquad (1.33)$$

In fact, we observe that the unit step function is a special case of our one-sided exponential function when A = 1, a = 0, and so we have immediately the additional pair

$$\delta(t) \Longleftrightarrow \frac{1}{s} \qquad (1.34)$$
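The pair of Eq. (1.32), and its unit-step special case Eq. (1.34), can be sketched numerically by crude quadrature of the defining integral; the values of A, a, and s below are arbitrary illustrative choices, not taken from the text.

```python
# Sketch: numerically verify A e^{-at} <=> A/(s+a) and, with A = 1, a = 0,
# the unit-step pair delta(t) <=> 1/s.
import math

def laplace(f, s, T=50.0, steps=200_000):
    # midpoint-rule approximation of the one-sided transform on [0, T]
    dt = T / steps
    return dt * sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
                    for k in range(steps))

A, a, s = 3.0, 2.0, 1.5
assert abs(laplace(lambda t: A * math.exp(-a * t), s) - A / (s + a)) < 1e-4
assert abs(laplace(lambda t: 1.0, s) - 1.0 / s) < 1e-4   # unit step -> 1/s
```

The truncation at T = 50 is harmless here because both integrands decay exponentially.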
We next consider the unit impulse function u_0(t), which may be viewed as the limit of a sequence of ever-narrower, ever-taller unit-area pulses: for a parameter α take the pulse of height α for |t| ≤ 1/2α and 0 for |t| > 1/2α, as shown in Figure 1.2 for α = 1, 2, 4, 8. In the limit we obtain a "function" that is zero everywhere except at the origin, where it is infinite in such a way as to enclose unit area:

$$u_0(t) = \begin{cases} \infty & t = 0 \\ 0 & t \neq 0 \end{cases} \qquad \int_{-\infty}^{\infty} u_0(t)\,dt = 1$$

Figure 1.2 A sequence of functions whose limit is the unit impulse function u_0(t).
[Figure 1.3 The displaced, scaled impulse Au_0(t - a), drawn as an arrow of area A located at t = a.]

The arrow representing an impulse carries a number indicating the area of the impulse; that is, A times a unit impulse function located at the point t = a is denoted as Au_0(t - a) and is depicted as in Figure 1.3.
Let us now consider the integral of the unit impulse function. It is clear that if we integrate from -∞ to a point t where t < 0, then the total integral must be 0, whereas if t ≥ 0, then we will have successfully integrated past the unit impulse and thereby will have accumulated a total area of unity. Thus we conclude

$$\int_{-\infty}^{t} u_0(x)\,dx = \begin{cases} 1 & t \ge 0 \\ 0 & t < 0 \end{cases}$$
But we note immediately that the right-hand side is the same as the definition of the unit step function given in Eq. (1.33). Therefore, we conclude that the unit step function is the integral of the unit impulse function, and so the "derivative" of the unit step function must therefore be a unit impulse function. However, we recognize that the derivative of this discontinuous function (the step function) is not properly defined; once again we appeal to the theory of distributions to place this operation on a firm mathematical foundation. We will therefore assume this is a proper operation and proceed to use the unit impulse function as if it were an ordinary function.
One of the very important properties of the unit impulse function is its sifting property; that is, for an arbitrary differentiable function g(t) we have

$$\int_{-\infty}^{\infty} u_0(t-x)\,g(x)\,dx = g(t)$$

This last equation merely says that the integral of the product of our function g(x) with an impulse located at x = t "sifts" the function g(x) to produce its value at t, g(t). We note that it is possible also to define the derivative of the unit impulse, which we denote by u_1(t) = du_0(t)/dt; this is known as the unit doublet and has the property that it is everywhere 0 except in the vicinity of the origin, where it runs off to ∞ just to the left of the origin and off to -∞
just to the right of the origin, and, in addition, has a total area equal to zero. Such functions correspond to electrostatic dipoles, for example, used in physics. In fact, an impulse function may be likened to the force placed on a piece of paper when it is laid over the edge of a knife and pressed down, whereas a unit doublet is similar to the force the paper experiences when cut with scissors. Higher-order derivatives are possible, and in general we may have u_n(t) = du_{n-1}(t)/dt. In fact, as we have seen, we may also go back down the sequence by integrating these functions as, for example, by generating the unit step function as the integral of the unit impulse function; the obvious notation for the unit step function, therefore, would be u_{-1}(t), and so we may write u_0(t) = du_{-1}(t)/dt. [Note, from Eq. (1.33), that we have also reserved the notation δ(t) to represent the unit step function.] Thus we have defined an infinite sequence of specialized functions beginning with the unit impulse and proceeding to higher-order derivatives such as the doublet, and so on, as well as integrating the unit impulse and thereby generating the unit step function, the ramp, and so on, namely,
$$u_{-2}(t) \triangleq \int_{-\infty}^{t} u_{-1}(x)\,dx = \begin{cases} t & t \ge 0 \\ 0 & t < 0 \end{cases}$$

$$u_{-3}(t) \triangleq \int_{-\infty}^{t} u_{-2}(x)\,dx = \begin{cases} t^2/2 & t \ge 0 \\ 0 & t < 0 \end{cases}$$

and in general

$$u_{-n}(t) = \begin{cases} \dfrac{t^{n-1}}{(n-1)!} & t \ge 0 \\ 0 & t < 0 \end{cases} \qquad (1.35)$$
This entire family is called the family of singularity functions, and the most important members are the unit step function and the unit impulse function.

Let us now return to our main discussion and consider the Laplace transform of u_0(t). We proceed directly from Eq. (1.28) to obtain

$$u_0(t) \Longleftrightarrow \int_{0^-}^{\infty} u_0(t)\,e^{-st}\,dt = 1$$

(Note that the lower limit is interpreted as 0^-.) Thus we see that the unit impulse has a Laplace transform equal to the constant unity.
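A numerical sketch of the sifting property: we let a narrow unit-area rectangular pulse (width w, height 1/w, mirroring the limiting sequence of Figure 1.2) stand in for u_0(t), and observe that the integral reproduces g(t). The test function g is an arbitrary choice of ours.

```python
# Sketch: sifting with a rectangular stand-in for the unit impulse.
import math

def sift(g, t, w=1e-3, steps=1000):
    # the pulse centered at t is nonzero only on [t - w/2, t + w/2]
    dx = w / steps
    x0 = t - w / 2
    return dx * sum((1.0 / w) * g(x0 + (k + 0.5) * dx) for k in range(steps))

g = lambda x: math.sin(x) + x * x
for t in (0.0, 0.7, 2.0):
    assert abs(sift(g, t) - g(t)) < 1e-5
```

Shrinking w drives the error (which is of order w^2 for smooth g) to zero, exactly the limiting behavior the distribution theory formalizes.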
Let us now consider some of the important properties of the transformation. As with the z-transform, the convolution property is the most important, and we proceed to derive it here in the continuous-time case. Thus, consider two functions of continuous time f(t) and g(t), which take on nonzero values
only for t ≥ 0. Denoting the convolution of f(t) and g(t) by f(t) ⊛ g(t), we have

$$f(t) \circledast g(t) \triangleq \int_{-\infty}^{\infty} f(t-x)\,g(x)\,dx \qquad (1.36)$$

$$= \int_{0^-}^{t^+} f(t-x)\,g(x)\,dx$$

We may then ask for the Laplace transform of this convolution. We obtain this formally by plugging into Eq. (1.28) as follows:

$$f(t) \circledast g(t) \Longleftrightarrow \int_{t=0^-}^{\infty}\big(f(t)\circledast g(t)\big)\,e^{-st}\,dt = \int_{t=0^-}^{\infty}\int_{x=0^-}^{t^+} f(t-x)\,g(x)\,dx\;e^{-st}\,dt$$

Interchanging the order of integration and making the change of variable y = t - x, the double integral separates into the product of two single integrals, and so we have

$$f(t) \circledast g(t) \Longleftrightarrow F^*(s)\,G^*(s)$$
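As an illustrative sketch of this convolution property (the functions chosen here are our own, not from the text): with f(t) = e^{-t} and g(t) = e^{-2t}, their convolution works out to e^{-t} - e^{-2t}, and its transform should equal F*(s)G*(s) = 1/[(s+1)(s+2)].

```python
# Sketch: L{f conv g} = F*(s) G*(s) checked numerically for
# f(t) = e^{-t}, g(t) = e^{-2t}, whose convolution is e^{-t} - e^{-2t}.
import math

def laplace(f, s, T=50.0, steps=200_000):
    dt = T / steps
    return dt * sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
                    for k in range(steps))

s = 1.0
conv = lambda t: math.exp(-t) - math.exp(-2.0 * t)   # f convolved with g
assert abs(laplace(conv, s) - 1.0 / ((s + 1) * (s + 2))) < 1e-4
```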
Once again we see that the transform of the convolution of two functions equals the product of the transforms of each. In Table 1.3 we list a number of important properties of the Laplace transform, and in Table 1.4 we list some of the important transforms themselves. In these tables we adopt the usual notation as follows:
$$\frac{d^n f(t)}{dt^n} \triangleq f^{(n)}(t) \qquad \underbrace{\int_{-\infty}^{t}\cdots\int}_{n\ \mathrm{times}} f(x)\,dx \triangleq f^{(-n)}(t) \qquad (1.37)$$

For example, f^{(-1)}(t) = ∫_{-∞}^{t} f(x) dx; when we deal with functions which are zero for t < 0, then f^{(-1)}(0^-) = 0. We comment here that the one-sided transform that uses 0^+ as a lower limit in its definition is quite commonly used in transient analysis, but we prefer 0^- so as to include impulses at the origin.
The table of properties permits one to compute many transform pairs from a given pair. Property 2 is the statement of linearity and Property 3 describes the effect of a scale change. Property 4 gives the effect of a translation in time,
Table 1.3 Some Properties of the Laplace Transform

    FUNCTION                                   TRANSFORM

    1.  f(t)                                   F*(s) = ∫_{0^-}^{∞} f(t) e^{-st} dt
    2.  af(t) + bg(t)                          aF*(s) + bG*(s)
    3.  f(t/a)   (a > 0)                       aF*(as)
    4.  f(t - a)                               e^{-as} F*(s)
    5.  e^{-at} f(t)                           F*(s + a)
    6.  t f(t)                                 -dF*(s)/ds
    7.  t^n f(t)                               (-1)^n d^n F*(s)/ds^n
    8.  f(t)/t                                 ∫_{s_1=s}^{∞} F*(s_1) ds_1
    9.  f(t)/t^n                               ∫_{s_1=s}^{∞} ds_1 ∫_{s_2=s_1}^{∞} ds_2 ⋯ ∫_{s_n=s_{n-1}}^{∞} ds_n F*(s_n)   (n times)
    10. f(t) ⊛ g(t)                            F*(s) G*(s)
    11.† df(t)/dt                              sF*(s)
    12.† d^n f(t)/dt^n                         s^n F*(s)
    13.† ∫_{-∞}^{t} f(t) dt                    F*(s)/s
    14.† ∫⋯∫ f(t)(dt)^n   (n times)            F*(s)/s^n
    15. ∂f(t)/∂a   [a is a parameter]          ∂F*(s)/∂a
    16. F*(0) = ∫_{0^-}^{∞} f(t) dt
    17. lim_{t→0} f(t) = lim_{s→∞} sF*(s)      (initial value theorem)
    18. lim_{t→∞} f(t) = lim_{s→0} sF*(s)      (final value theorem)

† To be complete, we wish to show the form of the transform for entries 11-14 in the case when f(t) may have nonzero values for t < 0 also:

    11. sF*(s) - f(0^-)
    12. s^n F*(s) - s^{n-1} f(0^-) - s^{n-2} f^{(1)}(0^-) - ⋯ - f^{(n-1)}(0^-)
    13. F*(s)/s + f^{(-1)}(0^-)/s
    14. F*(s)/s^n + f^{(-1)}(0^-)/s^n + f^{(-2)}(0^-)/s^{n-1} + ⋯ + f^{(-n)}(0^-)/s
Table 1.4 Some Laplace Transform Pairs

    FUNCTION (t ≥ 0)                           TRANSFORM

    1.  f(t)                                   F*(s) = ∫_{0^-}^{∞} f(t) e^{-st} dt
    2.  u_0(t)   (unit impulse)                1
    3.  u_0(t - a)                             e^{-as}
    4.  u_n(t) = (d/dt) u_{n-1}(t)             s^n
    5.  δ(t)   (unit step)                     1/s
    6.  δ(t - a)                               e^{-as}/s
    7.  u_{-n}(t) = t^{n-1}/(n-1)!             1/s^n
    8.  A e^{-at} δ(t)                         A/(s + a)
    9.  t e^{-at} δ(t)                         1/(s + a)^2
    10. (t^n/n!) e^{-at} δ(t)                  1/(s + a)^{n+1}
whereas Property 5, its dual, gives the effect of a parameter shift in the transform domain. Properties 6 and 7 show the effect of multiplication by t (to some power), which corresponds to differentiation in the transform domain; similarly, Properties 8 and 9 show the effect of division by t (to some power), which corresponds to integration. Property 10, a most important property (derived earlier), shows the effect of convolution in the time domain going over to simple multiplication in the transform domain. Properties 11 and 12 give the effect of time differentiation; it should be noted that this corresponds to multiplication by s (to a power equal to the number of differentiations in time) times the original transform. In a similar way Properties 13 and 14 show the effect of time integration going over to division by s in the transform domain. Property 15 shows that differentiation with respect to a parameter of f(t) corresponds to differentiation in the transform domain as well. Property 16, the integral property, shows the simple way in which the transform may be evaluated at the origin to give the total integral
of f(t). Properties 17 and 18, the initial and final value theorems, show how to compute the values for f(t) at t = 0 and t = ∞ directly from the transform.

In Table 1.4 we have a rather short list of important Laplace transform pairs. Much more extensive tables exist and may be found elsewhere [DOET 61]. Of course, as we said earlier, the table shown can be extended considerably by making use of the properties listed in Table 1.3. We note, for example, that transform pair 3 in Table 1.4 is obtained from transform pair 2 by application of Property 4 in Table 1.3. We point out again that this table is limited in length since we have included only those functions that find relevance to the material contained in this text.
So far in this discussion of Laplace transforms we have been considering only functions f(t) for which f(t) = 0 for t < 0. This will be satisfactory for most of the work we consider in this text. However, there is an occasional need for transforming a function of time which may be nonzero anywhere on the real-time axis. For this purpose we must once again consider the lower limit of integration to be -∞, that is,
$$F^*(s) = \int_{-\infty}^{\infty} f(t)\,e^{-st}\,dt \qquad (1.38)$$

One can easily show that this (bilateral) Laplace transform may be calculated in terms of one-sided time functions and their transforms as follows. First we define

$$f_-(t) = \begin{cases} f(t) & t < 0 \\ 0 & t \ge 0 \end{cases} \qquad f_+(t) = \begin{cases} 0 & t < 0 \\ f(t) & t \ge 0 \end{cases}$$

so that

$$f(t) = f_-(t) + f_+(t)$$

We now observe that f_-(-t) is a function that is nonzero only for positive values of t, and f_+(t) is nonzero only for nonnegative values of t. Thus we have

$$f_+(t) \Longleftrightarrow F_+^*(s) \qquad f_-(-t) \Longleftrightarrow F_-^*(s)$$

where these transforms are defined as in Eq. (1.29). However, we need the transform of f_-(t), which is easily shown to be

$$f_-(t) \Longleftrightarrow F_-^*(-s)$$

Thus, by the linearity of transforms, we may finally write the bilateral transform in terms of one-sided transforms:

$$F^*(s) = F_-^*(-s) + F_+^*(s)$$
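A sketch of this decomposition on an example of our own choosing, f(t) = e^{-|t|}: here f_-(-t) and f_+(t) both equal e^{-t} for t ≥ 0, each with one-sided transform 1/(s+1), so the bilateral transform should be 1/(1-s) + 1/(s+1) = 2/(1-s^2), valid in the strip -1 < Re(s) < 1.

```python
# Sketch: F*(s) = F_-*(-s) + F_+*(s) checked on f(t) = e^{-|t|}.
import math

def bilateral(f, s, T=50.0, steps=400_000):
    # midpoint rule over [-T, T]
    dt = 2 * T / steps
    return dt * sum(f(-T + (k + 0.5) * dt) * math.exp(-s * (-T + (k + 0.5) * dt))
                    for k in range(steps))

s = 0.3   # inside the convergence strip
assert abs(bilateral(lambda t: math.exp(-abs(t)), s) - 2.0 / (1 - s * s)) < 1e-3
```

Choosing s outside the strip makes the integral diverge, which is precisely the role of the convergence strip discussed next.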
As always, these Laplace transforms have abscissas of absolute convergence. Let us therefore define σ_+ as the convergence abscissa for F_+*(s); this implies that the region of convergence for F_+*(s) is Re(s) > σ_+. Similarly, F_-*(s) will have some abscissa of absolute convergence, which we will denote by σ_-, which implies that F_-*(s) converges for Re(s) > σ_-. It then follows directly that F_-*(-s) will have the same convergence abscissa (σ_-) but will converge for Re(s) < σ_-. Thus we have a situation where F*(s) converges for σ_+ < Re(s) < σ_-, and therefore we will have a "convergence strip" if and only if σ_+ < σ_-; if such is not the case, then it is not useful to define F*(s). Of course, a similar argument can be made in the case of z-transforms for functions that take on nonzero values for negative time indices.
So far we have seen the effect of tagging our time function f(t) with the complex exponential e^{-st} and then compressing (integrating) over all such tagged functions to form a new function, namely, the transform F*(s). The purpose of the tagging was so that we could later "untransform" or, if you will, "unwind" the transform in order to obtain f(t) once again. In principle we know this is possible since a transform and its time function are uniquely related. So far, we have specified how to go in the one direction from f(t) to F*(s). Let us now discuss the problem of inverting the Laplace transform F*(s) to recover f(t). There are basically two methods for conducting this inversion: the inspection method and the formal inversion integral method. These two methods are very similar.

First let us discuss the inspection method, which is perhaps the most useful scheme for inverting transforms. Here, as with z-transforms, the approach is to rewrite F*(s) as a sum of terms, each of which can be recognized from the table of Laplace transform pairs. Then, making use of the linearity property, we may invert the transform term by term, and then sum the result to recover f(t). Once again, the basic method for writing F*(s) as a sum of recognizable terms is that of the partial-fraction expansion. Our description of that method will be somewhat shortened here since we have discussed it at some length in the z-transform section. First, we will assume that F*(s) is a rational function of s, namely,
$$F^*(s) = \frac{N(s)}{D(s)}$$

where both the numerator N(s) and denominator D(s) are each polynomials in s. Again, we assume that the degree of N(s) is less than the degree of D(s); if this is not the case, N(s) must be divided by D(s) until the remainder is of degree less than the degree of D(s), and then the partial-fraction expansion is carried out for this remainder, whereas the terms of the quotient resulting from the division will be simple powers of s, which may be inverted by appealing to Transform 4 in Table 1.4. In addition, we will assume that the
" hard" part of the problem ha s been done , namely, that D (s) has been put in
facto red form
k
II (s + ai)m,
i :::lt l
D(s) =
.
(1.39)
Bkl
(s
+ ak)m.
Bk 2
(s
+ ak)'n.-,
+ ... +
Bk m
(s
+ ak)
(1.40)
Bi ;
= (j
I
d
[
_ I)! ds i- ' (s
N(S)]
+ ai)m, D(s)
s--a,
(1.41)
Thus we have a complete prescription for finding f(t) from F*(s) by inspection in those cases where F*(s) is rational and where D(s) has been factored as in Eq. (1.39). This method works very well in those cases where F*(s) is not overly complex.

To elucidate some of these principles let us carry out a simple example. Assume that F*(s) is given by

$$F^*(s) = \frac{8(s^2 + 3s + 1)}{(s+3)(s+1)^3} \qquad (1.42)$$

The partial-fraction expansion must take the form

$$F^*(s) = \frac{B_{11}}{s+3} + \frac{B_{21}}{(s+1)^3} + \frac{B_{22}}{(s+1)^2} + \frac{B_{23}}{(s+1)}$$
Evaluation of the coefficients B_{ij} proceeds as follows. B_{11} is especially simple since no differentiations are required, and we obtain

$$B_{11} = (s+3)F^*(s)\Big|_{s=-3} = \frac{8(9-9+1)}{(-2)^3} = -1$$

$$B_{21} = (s+1)^3 F^*(s)\Big|_{s=-1} = \frac{8(1-3+1)}{2} = -4$$

$$B_{22} = \frac{d}{ds}\left[\frac{8(s^2+3s+1)}{s+3}\right]_{s=-1} = 8\,\frac{(s+3)(2s+3) - (s^2+3s+1)(1)}{(s+3)^2}\bigg|_{s=-1} = 8\,\frac{s^2+6s+8}{(s+3)^2}\bigg|_{s=-1} = 8\,\frac{1-6+8}{(2)^2} = 6$$

Lastly, the calculation of B_{23} involves two differentiations; however, we have already carried out the first differentiation, and so we take advantage of the form we have derived in B_{22} just prior to evaluation at s = -1; furthermore, we note that since j = 3, we have for the first time an effect due to the term (j-1)! from Eq. (1.41). Thus

$$B_{23} = \frac{1}{2!}\,\frac{d^2}{ds^2}\left[\frac{8(s^2+3s+1)}{s+3}\right]_{s=-1} = \frac{1}{2}\,\frac{d}{ds}\left[8\,\frac{s^2+6s+8}{(s+3)^2}\right]_{s=-1}$$

$$= 4\,\frac{(s+3)^2(2s+6) - (s^2+6s+8)(2)(s+3)}{(s+3)^4}\bigg|_{s=-1} = 4\,\frac{(2)^2(4) - (1-6+8)(2)(2)}{(2)^4} = 1$$
This completes the evaluation of the constants B_{ij} to give the partial-fraction expansion

$$F^*(s) = \frac{-1}{s+3} + \frac{-4}{(s+1)^3} + \frac{6}{(s+1)^2} + \frac{1}{(s+1)} \qquad (1.43)$$

This last form lends itself to inversion by inspection as we had promised. In particular, we observe that the first and last terms invert directly according to transform pair 8 from Table 1.4, whereas the second and third terms invert directly from pair 10 of that table; thus we have for t ≥ 0 the following:

$$f(t) = -e^{-3t} + (1 + 6t - 2t^2)\,e^{-t} \qquad (1.44)$$

The second approach, the formal inversion integral method, recovers f(t) from F*(s) by means of the contour integral

$$f(t) = \frac{1}{2\pi j}\int_{\sigma_c - j\infty}^{\sigma_c + j\infty} F^*(s)\,e^{st}\,ds \qquad (1.45)$$
for t ≥ 0 and σ_c > σ_a. The integration in the complex s-plane is taken to be a straight-line integration parallel to the imaginary axis and lying to the right of σ_a, the abscissa of absolute convergence for F*(s). The usual means for carrying out this integration is to make use of the Cauchy residue theorem as applied to the integral in the complex domain around a closed contour. The closed contour we choose for this purpose is a semicircle of infinite radius as shown in Figure 1.4. In this figure we see the path of integration required for Eq. (1.45) is s_3-s_1, and the semicircle of infinite radius closing this contour is given as s_1-s_2-s_3. If the integral along the path s_1-s_2-s_3 is 0, then the integral along the entire closed contour will in fact give us f(t) from Eq. (1.45). To establish that this contribution is 0, we need
[Figure 1.4 The closed contour of integration in the s-plane: a vertical path at σ_c, with σ = Re(s) and ω = Im(s), closed by a semicircle of infinite radius.]

Jordan's Lemma If |F*(s)| → 0 as R → ∞ on the semicircle s_1-s_2-s_3 of radius R, then

$$\int_{s_1-s_2-s_3} F^*(s)\,e^{st}\,ds = 0 \qquad t > 0$$
Thus, in order to carry out the complex inversion integral shown in Eq. (1.45), we must first express F*(s) in a form for which Jordan's lemma applies. Having done this we may then evaluate the integral around the closed contour C by calculating residues and using Cauchy's residue theorem. This is most easily carried out if F*(s) is in rational form with a factored denominator as in Eq. (1.39). In order for Jordan's lemma to apply, we will require, as we did before, that the degree of the numerator be strictly less than the degree of the denominator, and if this is not so, we must divide the rational function until the remainder has this property. That is all there is to the method. Let us carry this out on our previous example, namely that given in Eq. (1.42). We note this is already in a form for which Jordan's lemma applies, and so we may proceed directly with Cauchy's residue theorem. Our poles are located at s = -3 and s = -1. We begin by calculating the residue at s = -3, thus

$$r_{-3} = (s+3)F^*(s)\,e^{st}\Big|_{s=-3} = \frac{8(s^2+3s+1)\,e^{st}}{(s+1)^3}\bigg|_{s=-3} = \frac{8(9-9+1)\,e^{-3t}}{(-2)^3} = -e^{-3t}$$
The residue at the third-order pole s = -1 requires two differentiations:

$$r_{-1} = \frac{1}{2!}\,\frac{d^2}{ds^2}\,(s+1)^3 F^*(s)\,e^{st}\Big|_{s=-1} = \frac{1}{2}\,\frac{d^2}{ds^2}\left[\frac{8(s^2+3s+1)\,e^{st}}{s+3}\right]_{s=-1}$$

The first differentiation gives

$$\frac{1}{2}\,\frac{d}{ds}\left\{\frac{(s+3)\big[8(2s+3) + 8(s^2+3s+1)\,t\big] - 8(s^2+3s+1)}{(s+3)^2}\,e^{st}\right\}_{s=-1}$$

and, carrying out the second differentiation and evaluating at s = -1, we obtain

$$r_{-1} = (1 + 6t - 2t^2)\,e^{-t}$$

The sum of the residues then yields

$$f(t) = -e^{-3t} + (1 + 6t - 2t^2)\,e^{-t}$$

Thus we see that our solution here is the same as in Eq. (1.44), as it must be; we have once again that f(t) = 0 for t < 0.
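Both inversion methods produced (in the notation reconstructed here) f(t) = -e^{-3t} + (1 + 6t - 2t^2)e^{-t}; as a sketch we can close the loop numerically by transforming this f(t) and comparing against Eq. (1.42) at a few real values of s.

```python
# Sketch: transform the recovered f(t) numerically and compare with
# F*(s) = 8(s^2 + 3s + 1) / [(s+3)(s+1)^3].
import math

def laplace(f, s, T=50.0, steps=200_000):
    dt = T / steps
    return dt * sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
                    for k in range(steps))

f = lambda t: -math.exp(-3 * t) + (1 + 6 * t - 2 * t * t) * math.exp(-t)
F = lambda s: 8 * (s * s + 3 * s + 1) / ((s + 3) * (s + 1) ** 3)
for s in (0.5, 1.0, 2.0):
    assert abs(laplace(f, s) - F(s)) < 1e-4
```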
In our earlier discussion of the (bilateral) Laplace transform we discussed functions of time f_-(t) and f_+(t) defined for the negative and positive real-time axis, respectively. We also observed that the transform for each of these functions was analytic in a left half-plane and a right half-plane, respectively, as measured from their appropriate abscissas of absolute convergence. Moreover, in our last inversion method [the application of Eq. (1.45)] we observed that closing the contour by a semicircle of infinite radius in a counterclockwise direction gave a result for t > 0. We comment now that had we closed the contour in a clockwise fashion to the right, we would have obtained the result that would have been applicable for t < 0, assuming that the contribution of this contour could be shown to be 0 by Jordan's lemma. In order to invert a bilateral transform, we proceed by obtaining first f(t) for positive values of t and then for negative values of t. For the first we take a path of integration within the convergence strip defined by σ_+ < σ_c < σ_- and then close the contour with a counterclockwise semicircle; for t < 0, we take the same vertical contour but close it with a semicircle to the right.
As may be anticipated from our contour integration methods, it is sometimes necessary to determine exactly how many singularities of a function exist within a closed region. A very powerful and convenient theorem which aids us in this determination is given as follows:

Rouché's Theorem [GUIL 49] If f(s) and g(s) are analytic functions of s inside and on a closed contour C, and also if |g(s)| < |f(s)| on C, then f(s) and f(s) + g(s) have the same number of zeroes inside C.
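A small numerical illustration of Rouché's theorem on a polynomial of our own choosing: on |s| = 1 we have |0.5s + 0.1| ≤ 0.6 < 1 = |s^5|, so p(s) = s^5 + 0.5s + 0.1 must have the same number of zeros inside the unit circle as s^5, namely five. We count them with the argument principle, i.e., the winding number of the image curve p(e^{jθ}) about the origin.

```python
# Sketch: count zeros inside |s| = 1 via the winding number of p on the circle.
import cmath, math

def zeros_inside_unit_circle(p, n_points=20_000):
    total = 0.0
    prev = cmath.phase(p(1.0 + 0.0j))
    for k in range(1, n_points + 1):
        cur = cmath.phase(p(cmath.exp(2j * math.pi * k / n_points)))
        d = cur - prev            # unwrap the principal-value phase jump
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return round(total / (2 * math.pi))

p = lambda s: s**5 + 0.5 * s + 0.1
assert zeros_inside_unit_circle(p) == 5
assert zeros_inside_unit_circle(lambda s: s**5) == 5   # dominant term alone
```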
1.4.

Setting the driving term e_n to zero in Eq. (1.46) gives the homogeneous difference equation

$$a_0 g_n + a_1 g_{n-1} + \cdots + a_N g_{n-N} = 0 \qquad (1.47)$$

Substituting the trial solution g_n = Aα^n into this homogeneous equation, we obtain

$$a_0 A\alpha^{n} + a_1 A\alpha^{n-1} + \cdots + a_N A\alpha^{n-N} = 0 \qquad (1.48)$$
This Nth-order polynomial clearly has N solutions, which we will denote by α_1, α_2, …, α_N, assuming for the moment that all the α_i are distinct. Associated with each such solution is an arbitrary constant A_i which will be determined from the initial conditions for the difference equation (of which there must be N). By cancelling the common term Aα^{n-N} from Eq. (1.48) we finally arrive at the characteristic equation which determines the values α_i:

$$a_0\alpha^{N} + a_1\alpha^{N-1} + \cdots + a_N = 0 \qquad (1.49)$$

Thus the search for the homogeneous solution is now reduced to finding the N roots of our characteristic equation (1.49). If all N of the α_i are distinct, then the homogeneous solution is

$$g_n^{(h)} = A_1\alpha_1^n + A_2\alpha_2^n + \cdots + A_N\alpha_N^n$$

In the case of nondistinct roots, we have a slightly different situation. In particular, let α_1 be a multiple root of order k; in this case the k equal roots will contribute to the homogeneous solution in the following form:

$$\big(A_{11}n^{k-1} + A_{12}n^{k-2} + \cdots + A_{1k}\big)\alpha_1^n$$

and similarly for any other multiple roots. As far as the particular solution g_n^{(p)} is concerned, we know that it must be found by an appropriate guess from the form of e_n.
Let us illustrate some of these principles by means of an example. Consider the second-order difference equation

$$6g_n - 5g_{n-1} + g_{n-2} = 6\left(\frac{1}{5}\right)^n \qquad n = 2, 3, 4, \ldots \qquad (1.50)$$

This equation gives the relationship among the unknown functions g_n for n = 2, 3, 4, …. Of course, we must give two initial conditions (since the order is 2) and we choose these to be g_0 = 0, g_1 = 6/5. In order to find the homogeneous solution we must form Eq. (1.49), which in this case becomes

$$6\alpha^2 - 5\alpha + 1 = 0$$

and so the two values of α are α_1 = 1/2 and α_2 = 1/3. Thus the homogeneous solution takes the form

$$g_n^{(h)} = A_1\left(\frac{1}{2}\right)^n + A_2\left(\frac{1}{3}\right)^n$$
The particular solution must be guessed at, and the correct guess in this case is

$$g_n^{(p)} = B\left(\frac{1}{5}\right)^n$$

If we plug g_n^{(p)} as given back into our basic equation, namely, Eq. (1.50), we find that B = 1 and so we are convinced that the particular solution is correct. Thus our complete solution is given by

$$g_n = A_1\left(\frac{1}{2}\right)^n + A_2\left(\frac{1}{3}\right)^n + \left(\frac{1}{5}\right)^n$$

We use the initial conditions to solve for A_1 and A_2 and find A_1 = 8 and A_2 = -9. Thus our final solution is

$$g_n = 8\left(\frac{1}{2}\right)^n - 9\left(\frac{1}{3}\right)^n + \left(\frac{1}{5}\right)^n \qquad n = 0, 1, 2, \ldots \qquad (1.51)$$
This completes the standard way for solving our difference equation.
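The solution Eq. (1.51) is easy to check by simply iterating the difference equation (1.50); a sketch using exact rational arithmetic:

```python
# Sketch: iterate 6 g_n - 5 g_{n-1} + g_{n-2} = 6(1/5)^n for n >= 2,
# with g_0 = 0 and g_1 = 6/5, and compare against the closed form (1.51).
from fractions import Fraction

g = [Fraction(0), Fraction(6, 5)]
for n in range(2, 12):
    g.append((6 * Fraction(1, 5)**n + 5 * g[n - 1] - g[n - 2]) / 6)

def closed(n):
    return 8 * Fraction(1, 2)**n - 9 * Fraction(1, 3)**n + Fraction(1, 5)**n

assert all(g[n] == closed(n) for n in range(12))
```

Using Fraction avoids any floating-point doubt: the two sequences agree exactly, term by term.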
Let us now describe the method of z-transforms for solving difference equations. Assume once again that we are given Eq. (1.46) and that it is good in the range n = k, k + 1, …. Our approach begins by defining the following z-transform:

$$G(z) = \sum_{n=0}^{\infty} g_n z^n \qquad (1.52)$$

From our earlier discussion we know that once we have found G(z) we may then apply our inversion techniques to find the desired solution g_n. Our next step is to multiply the nth equation from Eq. (1.46) by z^n and then form the sum of all such multiplied equations from k to infinity; that is, we form

$$\sum_{n=k}^{\infty}\sum_{i=0}^{N} a_i g_{n-i} z^n = \sum_{n=k}^{\infty} e_n z^n$$

We then carry out the summations and attempt to recognize G(z) in this single equation. Next we solve for G(z) algebraically and then proceed with our inversion techniques to obtain the solution. This method does not require that we guess at the particular solution, and so in that sense is simpler than the direct method; however, as we shall see, it still has the basic difficulty that we must solve the characteristic equation [Eq. (1.49)] and in general this is the difficult part of the solution. However, even if we cannot solve for the roots α_i, it is possible to obtain meaningful properties of the solution g_n from the perhaps unfactored form for G(z).
Let us solve our earlier example using the method of z-transforms. Accordingly we begin with Eq. (1.50), multiply by z^n and then sum; the sum will go from 2 to infinity since this is the applicable range for that equation. Thus

$$6\sum_{n=2}^{\infty} g_n z^n - 5\sum_{n=2}^{\infty} g_{n-1} z^n + \sum_{n=2}^{\infty} g_{n-2} z^n = \sum_{n=2}^{\infty} 6\left(\frac{1}{5}\right)^n z^n$$

We now factor out enough powers of z from each sum so that these powers match the subscript on g thusly:

$$6\sum_{n=2}^{\infty} g_n z^n - 5z\sum_{n=2}^{\infty} g_{n-1} z^{n-1} + z^2\sum_{n=2}^{\infty} g_{n-2} z^{n-2} = 6\sum_{n=2}^{\infty}\left(\frac{z}{5}\right)^n$$

Focusing on the first summation we see that it is almost of the form G(z) except that it is missing the terms for n = 0 and n = 1 [see Eq. (1.52)]; applying this observation to each of the sums on the left-hand side and carrying out the summation on the right-hand side directly, we find

$$6\big[G(z) - g_0 - g_1 z\big] - 5z\big[G(z) - g_0\big] + z^2 G(z) = \frac{6(1/5)^2 z^2}{1 - (1/5)z}$$

Observe how the first term in this last equation reflects the fact that our summation was missing the first two terms for G(z). Solving for G(z) algebraically (and using the initial conditions g_0 = 0, g_1 = 6/5) we find

$$G(z) = \left(\frac{1}{5}\right)\frac{z(6-z)}{[1-(1/3)z][1-(1/2)z][1-(1/5)z]}$$

A partial-fraction expansion then gives

$$G(z) = \frac{-9}{1-(1/3)z} + \frac{8}{1-(1/2)z} + \frac{1}{1-(1/5)z}$$

and inverting by inspection we obtain

$$g_n = -9\left(\frac{1}{3}\right)^n + 8\left(\frac{1}{2}\right)^n + \left(\frac{1}{5}\right)^n \qquad n = 0, 1, 2, \ldots$$
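As a sketch, the power-series coefficients of G(z), assuming the factored form G(z) = (1/5) z(6 - z)/{[1 - (1/3)z][1 - (1/2)z][1 - (1/5)z]} obtained in the derivation, can be compared term by term with the inverted sequence g_n:

```python
# Sketch: expand G(z) as a power series and compare its coefficients with
# g_n = -9(1/3)^n + 8(1/2)^n + (1/5)^n, using exact rational arithmetic.
from fractions import Fraction

N = 10

def geom(a):                      # series of 1/(1 - a z)
    return [a**n for n in range(N)]

def conv(u, v):                   # product of two power series
    return [sum(u[k] * v[n - k] for k in range(n + 1)) for n in range(N)]

den = conv(conv(geom(Fraction(1, 3)), geom(Fraction(1, 2))), geom(Fraction(1, 5)))
# multiply by the numerator (1/5)(6z - z^2)
g = [Fraction(1, 5) * (6 * (den[n - 1] if n >= 1 else 0)
                       - (den[n - 2] if n >= 2 else 0)) for n in range(N)]

def closed(n):
    return -9 * Fraction(1, 3)**n + 8 * Fraction(1, 2)**n + Fraction(1, 5)**n

assert g == [closed(n) for n in range(N)]
```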
Note that this is exactly the same as Eq. (1.51) and so our method checks. We comment here that even were we not able to invert the given form for G(z), we could still have found certain of its properties; for example, we could find
that the sum of all terms is given immediately by G(1), that is,

$$\sum_{n=0}^{\infty} g_n = G(1)$$

Let us now move on to linear differential equations with constant coefficients, which may be written in the general form

$$a_N\,\frac{d^N f(t)}{dt^N} + a_{N-1}\,\frac{d^{N-1} f(t)}{dt^{N-1}} + \cdots + a_1\,\frac{df(t)}{dt} + a_0 f(t) = e(t) \qquad (1.53)$$
Here the coefficients a_i are given constants, and e(t) is a given driving function. Along with this equation we must also be given N initial conditions in order to carry out a complete solution; these conditions typically are the values of the first N derivatives at some instant, usually at time zero. It is required to find the function f(t). As usual, we will have a homogeneous solution f^{(h)}(t), which solves the homogeneous equation [when e(t) = 0], as well as a particular solution f^{(p)}(t) that corresponds to the nonhomogeneous equation. The form for the homogeneous solution will be

$$f^{(h)}(t) = \sum_{i=1}^{N} A_i\,e^{\alpha_i t}$$

where the α_i are the roots of the corresponding characteristic equation. The evaluation of the coefficients A_i is carried out making use of the initial conditions. In the case of multiple roots we have the following modification. Let us assume that α_1 is a repeated root of order k; this multiple root will contribute to the homogeneous solution in the following way:

$$\big(A_{11}t^{k-1} + A_{12}t^{k-2} + \cdots + A_{1k}\big)e^{\alpha_1 t}$$
and in the case of more than one multiple root the modification is obvious. As usual, one must guess in order to find the particular solution f^{(p)}(t). The complete solution then is, of course, the sum of the homogeneous and particular solutions, namely,

$$f(t) = f^{(h)}(t) + f^{(p)}(t)$$

Let us carry out an example. Consider the second-order differential equation

$$\frac{d^2 f(t)}{dt^2} - 6\,\frac{df(t)}{dt} + 9f(t) = 2t \qquad (1.54)$$

with the initial conditions f(0^-) = 0 and df(0^-)/dt = 0. Forming the characteristic equation we have

$$\alpha^2 - 6\alpha + 9 = 0 \qquad (1.55)$$

whose roots are α_1 = α_2 = 3, and so the homogeneous solution must be of the form

$$f^{(h)}(t) = (A_{11}t + A_{12})e^{3t}$$

The appropriate guess for the particular solution is f^{(p)}(t) = B_1 + B_2 t. Substituting this back into the basic equation (1.54) we find that B_1 = 4/27 and B_2 = 2/9. Thus our complete solution takes the form

$$f(t) = (A_{11}t + A_{12})e^{3t} + \frac{4}{27} + \frac{2}{9}\,t$$

Since our initial conditions state that both f(t) and its first derivative must be zero at t = 0^-, we find that A_{11} = 2/9 and A_{12} = -4/27, which gives for our final and complete solution

$$f(t) = \left(\frac{2}{9}\,t - \frac{4}{27}\right)e^{3t} + \frac{4}{27} + \frac{2}{9}\,t \qquad t \ge 0 \qquad (1.56)$$
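A numerical sketch checking Eq. (1.56): we verify that it satisfies the differential equation (1.54), with the derivatives approximated by central differences, together with the zero initial conditions.

```python
# Sketch: f(t) = (2t/9 - 4/27) e^{3t} + 4/27 + 2t/9 should satisfy
# f'' - 6 f' + 9 f = 2t with f(0) = f'(0) = 0.
import math

f = lambda t: (2 * t / 9 - 4.0 / 27) * math.exp(3 * t) + 4.0 / 27 + 2 * t / 9
h = 1e-5
for t in (0.5, 1.0, 1.5):
    d1 = (f(t + h) - f(t - h)) / (2 * h)              # central difference f'
    d2 = (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)   # central difference f''
    assert abs(d2 - 6 * d1 + 9 * f(t) - 2 * t) < 1e-3

assert abs(f(0.0)) < 1e-12                            # f(0) = 0
assert abs((f(h) - f(-h)) / (2 * h)) < 1e-6           # f'(0) = 0
```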
The Laplace transform provides an alternative method for solving constant-coefficient linear differential equations. The method is based upon Properties 11 and 12 of Table 1.3, which relate the derivative of a time function to its Laplace transform. The approach is to make use of these properties to transform both sides of the given differential equation into an equation involving the Laplace transform of the unknown function f(t) itself, which we denote as usual by F*(s). This algebraic equation is then solved for F*(s), and is then
inverted to recover f(t). Let us apply this to our example. Transforming both sides of Eq. (1.54), and using the forms of Properties 11 and 12 that retain the initial-condition terms (see the footnote to Table 1.3), we obtain

$$s^2 F^*(s) - s f(0^-) - f^{(1)}(0^-) - 6sF^*(s) + 6f(0^-) + 9F^*(s) = \frac{2}{s^2}$$

Since the initial conditions give f(0^-) = f^{(1)}(0^-) = 0, we may solve for F*(s) algebraically:

$$F^*(s) = \frac{2/s^2}{s^2 - 6s + 9}$$

We must now factor this last equation, which is the same problem we faced in finding the roots of Eq. (1.55) in the direct method, and as usual forms the basically difficult part of all direct and indirect methods. Carrying this out we have

$$F^*(s) = \frac{2}{s^2(s-3)^2}$$

A partial-fraction expansion then yields

$$F^*(s) = \frac{4/27}{s} + \frac{2/9}{s^2} + \frac{2/9}{(s-3)^2} + \frac{-4/27}{s-3}$$

Inverting each term with the aid of Table 1.4 (and Property 5 of Table 1.3) we recover, for t ≥ 0,

$$f(t) = \frac{4}{27} + \frac{2}{9}\,t + \frac{2}{9}\,t e^{3t} - \frac{4}{27}\,e^{3t}$$

which is exactly the solution we found in Eq. (1.56) by the direct method.
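The partial-fraction expansion of F*(s) above is an algebraic identity; as a sketch it can be verified exactly at a few rational test points:

```python
# Sketch: 2/[s^2 (s-3)^2] should equal the four partial-fraction terms.
from fractions import Fraction

def F(s):
    return 2 / (s**2 * (s - 3)**2)

def expansion(s):
    return (Fraction(4, 27) / s + Fraction(2, 9) / s**2
            + Fraction(2, 9) / (s - 3)**2 - Fraction(4, 27) / (s - 3))

for s in (Fraction(1), Fraction(2), Fraction(-1), Fraction(1, 2)):
    assert F(s) == expansion(s)
```

Since both sides are rational functions of degree at most 4 in the denominator, agreement at more than four points already forces the identity everywhere.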
REFERENCES

AHLF 66  Ahlfors, L. V., Complex Analysis, 2nd Edition, McGraw-Hill (New York), 1966.
CADZ 73  Cadzow, J. A., Discrete-Time Systems, Prentice-Hall (Englewood Cliffs, N.J.), 1973.
DOET 61  Doetsch, G., Guide to the Applications of Laplace Transforms, Van Nostrand (Princeton), 1961.
GUIL 49  Guillemin, E. A., The Mathematics of Circuit Analysis, Wiley (New York), 1949.
JURY 64  Jury, E. I., Theory and Application of the z-Transform Method, Wiley (New York), 1964.
SCHW 59  Schwartz, L., Théorie des Distributions, 2nd printing, Actualités scientifiques et industrielles Nos. 1245 and 1122, Hermann et Cie. (Paris), Vol. 1 (1957), Vol. 2 (1959).
WIDD 46  Widder, D. V., The Laplace Transform, Princeton University Press (Princeton), 1946.
APPENDIX II
We now describe the rules of the game for creating a mathematical model
for probabilistic situations, which is to correspond to real-world experiments.
Typically one examines three features of such experiments:

1. The set of possible experimental outcomes; each outcome corresponds to a sample point w, and the collection of all sample points forms the sample space S.
2. Collections of sample points, which are called events (for example, A, B).
3. A probability measure P, which assigns a probability P[A] to each event A.

The probability measure is required to satisfy the following axioms:

    (a) P[A] ≥ 0        (II.1)
    (b) P[S] = 1        (II.2)
    (c) P[A ∪ B] = P[A] + P[B] whenever A and B are mutually exclusive        (II.3)
It is appropriate at this point to define some set-theoretic notation [for
example, the use of the symbol ∪ in property (c)]. Typically, we describe an
event A as follows: A = {w : w satisfies the membership property for the
event A}; this is read as "A is the set of sample points w such that w satisfies
the membership property for the event A." We further define

    A ∪ B = {w : w in A or B or both} = union of A and B

    A ∩ B = AB = {w : w in both A and B} = intersection of A and B
The first additional notion we need is that of conditional probability, defined as

    P[A | B] ≜ P[AB] / P[B]        (II.4)

whenever P[B] ≠ 0. The introduction of the conditional event B forces
us to restrict attention from the original sample space S to a new sample
space defined by the event B; since this new constrained sample space must
now have a total probability of unity, we magnify the probabilities associated
with conditional events by dividing by the term P[B] as given above.
The second additional notion we need is that of statistical independence
of events. Two events A, B are said to be statistically independent if and only
if

    P[AB] = P[A]P[B]        (II.5)
For three events A, B, C we require that each pair of events satisfies Eq. (II.5)
and in addition

    P[ABC] = P[A]P[B]P[C]

More generally, for n events we require that the joint probability factor in this way for every subset of the events,
down to all the pairwise factorings. It is easy to see for two independent
events A, B that P[A | B] = P[A], which merely says that knowledge of the
occurrence of the event B in no way affects the probability of the occurrence
of the independent event A.
The theorem of total probability is especially simple and important. It
relates the probability of an event B to a set of mutually exclusive, exhaustive
events {Aᵢ}. The theorem is

    P[B] = Σ_{i=1}^{n} P[AᵢB]

which merely says that if the event B is to occur, it must occur in conjunction
with exactly one of the mutually exclusive exhaustive events Aᵢ. However,
from the definition of conditional probability we may always write

    P[AᵢB] = P[Aᵢ | B] P[B] = P[B | Aᵢ] P[Aᵢ]

Thus we have the second important form of the theorem of total probability,
namely,

    P[B] = Σ_{i=1}^{n} P[B | Aᵢ] P[Aᵢ]
This last equation is perhaps one of the most useful for us in studying queueing theory. It suggests the following approach for finding the probability of
some complex event B: first condition the event B on some event
Aᵢ in such a way that the calculation of the probability of B given this
condition is less complex, and then multiply by the probability
of the conditioning event Aᵢ to yield the joint probability P[AᵢB]; this having
been done for a set of mutually exclusive, exhaustive events {Aᵢ}, we may then
sum these probabilities to find the probability of the event B. Of course, this
approach can be extended: we may wish to condition the event B on more
than one event, then uncondition each of these events suitably (by multiplying
by the probability of the appropriate condition), and then sum over all possible
forms of all conditions. We will use this approach many times in the text.
We now come to the well-known Bayes' theorem. Once again we consider
a set of events {Aᵢ}, which are mutually exclusive and exhaustive. The theorem
says

    P[Aᵢ | B] = P[B | Aᵢ] P[Aᵢ] / Σ_{j=1}^{n} P[B | Aⱼ] P[Aⱼ]
A simple example is in order here to illustrate some of these ideas. Consider
that you have just entered a gambling casino in Las Vegas. You approach a
dealer who is known to have an identical twin brother; the twins cannot
be distinguished. It is further known that one of the twins is an honest dealer
whereas the second twin is a cheating dealer in the sense that when you play
with the honest dealer you lose with probability one-half, whereas when you
play with the cheating dealer you lose with probability p (if p is greater than
one-half, he is cheating against you whereas if p is less than one-half he is
cheating for you). Furthermore, it is equally likely that upon entering the
casino you will find one or the other of these two dealers. Consider that you
now play one game with the particular twin whom you encounter and further
that you lose. Of course you are disappointed and you would now like to
calculate the probability that the dealer you faced was in fact the cheat, for if
you can establish that this probability is close to unity, you have a case for
suing the casino. Let D_H be the event that you play with the honest dealer
and let D_C be the event that you play with the cheating dealer; further let L
be the event that you lose. What we are then asking for is P[D_C | L]. It is not
immediately obvious how to make this calculation; however, if we apply
Bayes' theorem the calculation itself is trivial, for

    P[D_C | L] = P[L | D_C] P[D_C] / (P[L | D_C] P[D_C] + P[L | D_H] P[D_H])
               = p(1/2) / [p(1/2) + (1/2)(1/2)]
               = 2p / (2p + 1)

This is the answer we were seeking, and we find that the probability of having
faced a cheating dealer, given that we lost in one play, ranges from 0 (p = 0)
to 2/3 (p = 1). Thus, even if we know that the cheating dealer is completely
dishonest (p = 1), we can only say that with probability 2/3 we faced this
cheat, given that we lost one play.
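The twin-dealer calculation is easy to verify mechanically; the sketch below (an illustration, not part of the original text) applies Bayes' theorem exactly as above, with `p` the cheating dealer's loss probability.

```python
from fractions import Fraction

def prob_cheat_given_loss(p):
    """P[D_C | L] by Bayes' theorem; each dealer is equally likely a priori."""
    p_loss_cheat = p                     # lose w.p. p against the cheat
    p_loss_honest = Fraction(1, 2)       # lose w.p. 1/2 against the honest dealer
    prior = Fraction(1, 2)               # equally likely to meet either twin
    joint_cheat = p_loss_cheat * prior
    joint_honest = p_loss_honest * prior
    return joint_cheat / (joint_cheat + joint_honest)

# Matches the closed form 2p/(2p + 1):
for p in (Fraction(0), Fraction(1, 2), Fraction(1)):
    assert prob_cheat_given_loss(p) == 2 * p / (2 * p + 1)

print(prob_cheat_given_loss(Fraction(1)))  # 2/3: even a sure cheat is only 2/3 likely
```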
As a final word on elementary topics, let us remind the reader that the
number of permutations of N objects taken K at a time is

    N! / (N − K)! = N(N − 1)···(N − K + 1)
Similarly, the number of combinations of N objects taken K at a time is
denoted by (N over K) and is given by

    (N over K) = N! / [K!(N − K)!]
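Both counts are available directly in the Python standard library; a small check of the two formulas (the values N = 7, K = 3 are illustrative):

```python
import math

N, K = 7, 3

# Permutations: N!/(N-K)! = N(N-1)...(N-K+1)
perms = math.factorial(N) // math.factorial(N - K)
assert perms == math.perm(N, K) == 7 * 6 * 5          # 210

# Combinations: N!/(K!(N-K)!)
combs = math.factorial(N) // (math.factorial(K) * math.factorial(N - K))
assert combs == math.comb(N, K)                        # 35

print(perms, combs)  # 210 35
```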
II.2. RANDOM VARIABLES

A random variable X is a variable whose value depends upon the outcome of a random experiment; that is, to each sample point w we assign a real number X(w). As an example, consider a game of blackjack in which you win five dollars, break even, or lose five dollars, and let W, D, and L denote these three events, respectively. The number of dollars won in a play of the game is then the random variable

    X(w) = { +5    w ∈ W
           {  0    w ∈ D
           { −5    w ∈ L

We may then ask for the probability that X takes on the value x, namely,

    P[X = x] = P[{w : X(w) = x}]
which is merely the sum of the probabilities associated with each point w for
which X(w) = x. For our example we have

    P[X = −5] = 3/8
    P[X = 0] = 1/4        (II.7)
    P[X = +5] = 3/8
[Figure: the random variable X maps each sample point w to a value X(w) on the real line.]
Next we define the probability distribution function (PDF) as

    F_X(x) ≜ P[X ≤ x] = P[{w : X(w) ≤ x}]

which expresses the probability that the random variable X takes on a value
less than or equal to x. The important properties of this function are

    F_X(x) ≥ 0
    F_X(∞) = 1        (II.8)
    F_X(−∞) = 0
    F_X(b) − F_X(a) = P[a < X ≤ b]        for a < b        (II.9)
    F_X(b) ≥ F_X(a)                        for a ≤ b
[Figure: the PDF F_X(x) for the blackjack example; a staircase that jumps to 3/8 at x = −5, to 5/8 at x = 0, and to 1 at x = +5.]
This leads us to the definition of the probability density function (pdf), defined by

    f_X(x) ≜ dF_X(x)/dx        (II.10)

From this definition we have

    F_X(x) = ∫_{−∞}^{x} f_X(y) dy        (II.11)

Since F_X(∞) = 1, we have from Eq. (II.11)

    ∫_{−∞}^{∞} f_X(x) dx = 1

Thus the pdf is a function which when integrated over an interval gives the
probability that the random variable X lies in that interval; namely, for
a < b we have

    P[a < X ≤ b] = ∫_{a}^{b} f_X(x) dx

Letting a → b and recalling the axiom stated in Eq. (II.1), we see that this last equation
implies

    f_X(x) ≥ 0
As an example, let us consider an exponentially distributed random
variable with parameter λ > 0:

    F_X(x) = { 1 − e^{−λx}    0 ≤ x        (II.12)
             { 0              x < 0

The pdf is then

    f_X(x) = { λe^{−λx}    0 ≤ x        (II.13)
             { 0           x < 0
For this example, the probability that the random variable lies between the
values a (>0) and b (>a) may be calculated in either of the two following
ways:

    P[a < X ≤ b] = F_X(b) − F_X(a) = e^{−λa} − e^{−λb}

    P[a < X ≤ b] = ∫_{a}^{b} f_X(x) dx = e^{−λa} − e^{−λb}
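Both routes to this interval probability can be compared numerically; the sketch below uses illustrative values λ = 0.5, a = 1, b = 3 (not from the text) and a simple midpoint rule for the integral.

```python
import math

lam, a, b = 0.5, 1.0, 3.0   # illustrative parameter values

# Way 1: via the PDF, F_X(b) - F_X(a) = e^{-lam*a} - e^{-lam*b}
via_pdf = math.exp(-lam * a) - math.exp(-lam * b)

# Way 2: numerically integrating the pdf f_X(x) = lam*e^{-lam*x} over (a, b]
n = 100_000
h = (b - a) / n
via_integral = sum(lam * math.exp(-lam * (a + (i + 0.5) * h)) * h
                   for i in range(n))

print(via_pdf, via_integral)
```

With this many midpoints the two answers agree to well under 1e-6.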
From our blackjack example we notice that the PDF has a derivative
which is everywhere 0 except at the three critical points (x = −5, 0, +5).
In order to complete the definition of the pdf when the PDF is discontinuous,
we recognize that we must introduce a function such that when it is integrated
over the region of the discontinuity it yields a value equal to the size of the
discontinuous jump; that is, in the blackjack example the probability density
function must be such that when integrated from −5 − ε to −5 + ε (for
small ε > 0) it should yield a probability equal to 3/8. Such a function has
already been studied in Appendix I and is, of course, the impulse function
(or Dirac delta function). Recall that such a function u₀(x) is given by

    u₀(x) = { ∞    x = 0
            { 0    x ≠ 0

    ∫_{−∞}^{∞} u₀(x) dx = 1

and its integral is the unit step function:

    ∫_{−∞}^{x} u₀(y) dy = { 0    x < 0
                          { 1    x ≥ 0
Using the graphical notation in Figure I.3, we may properly describe the pdf
for our blackjack example as in Figure II.4. We note immediately that this
representation gives exactly the information we had in Eq. (II.7), and therefore the use of impulse functions is overly cumbersome for such problems.
In particular, if we define a discrete random variable as one that takes on
values over a discrete set (finite or countable), then the use of the pdf* is a bit
heavy and unnecessary, although it does fit into our general definition in the
obvious way. On the other hand, in the case of a random variable that takes
on values over a continuum, it is perfectly natural to use the pdf.

* In the discrete case, the function P[X = x_k] is often referred to as the probability mass
function. The generalization to the pdf leads one to the notion of a mass density function.
[Figure II.4: the pdf for the blackjack example; impulses of area 3/8 at x = −5, 1/4 at x = 0, and 3/8 at x = +5.]
[Figure II.5: (a) the PDF and (b) the pdf.]
We next consider two random variables X and Y and define the natural extension of the PDF for two random variables, namely,

    F_XY(x, y) ≜ P[X ≤ x, Y ≤ y]

which is merely the probability that X takes on a value less than or equal to x
at the same time Y takes on a value less than or equal to y; that is, it is the
sum of the probabilities associated with all sample points in the intersection
of the two events {w : X(w) ≤ x}, {w : Y(w) ≤ y}. F_XY(x, y) is referred to as
the joint PDF. Of course, associated with this function is a joint probability
density function defined as

    f_XY(x, y) ≜ ∂²F_XY(x, y) / ∂x ∂y

Given a joint pdf, one naturally inquires as to the "marginal" density function for one of the variables, and this is clearly given by integrating over all
possible values of the second variable, thus

    f_X(x) = ∫_{−∞}^{∞} f_XY(x, y) dy        (II.14)
In a similar way we may define the conditional pdf for X given Y as

    f_{X|Y}(x | y) ≜ (d/dx) P[X ≤ x | Y = y] = f_XY(x, y) / f_Y(y)        (II.15)
Often we must deal with a random variable Y defined as a function of another random variable X, namely,

    Y = g(X)

where g(·) is some given function of its argument. Thus, once the value for X
is determined, then the value for Y may be computed; however, the value
for X depends upon the sample point w, and therefore so does the value of
Y, which we may therefore write as Y = Y(w) = g(X(w)). Given the random
variable X and its PDF, one should be able to calculate the PDF for the
random variable Y once the function g(·) is known. In principle, the computation takes the following form:

    F_Y(y) = P[g(X) ≤ y]

An especially important function of random variables is the sum of n independent random variables Xᵢ, namely,

    Y = Σ_{i=1}^{n} Xᵢ        (II.16)

Let us derive the distribution function of the sum of two independent random
variables (n = 2). It is clear that this distribution is given by

    F_Y(y) = P[Y ≤ y] = P[X₁ + X₂ ≤ y]
that is, the probability of the event X₁ + X₂ ≤ y.
We have the situation shown in Figure II.6. Integrating over the indicated
region we have

    F_Y(y) = ∫_{−∞}^{∞} [∫_{−∞}^{y−x₂} f_{X₁}(x₁) dx₁] f_{X₂}(x₂) dx₂

           = ∫_{−∞}^{∞} F_{X₁}(y − x₂) f_{X₂}(x₂) dx₂

This last equation is merely the convolution of the density functions for X₁
and X₂ and, as in Eq. (1.36), we denote this convolution operator (which is
both associative and commutative) by an asterisk enclosed within a circle.
Thus, differentiating with respect to y,

    f_Y(y) = f_{X₁}(y) ⊛ f_{X₂}(y)
In a similar fashion, one easily shows for the case of arbitrary n that the pdf
for Y as defined in Eq. (II.16) is given by the convolution of the pdf's for the
Xᵢ's, that is,

    f_Y(y) = f_{X₁}(y) ⊛ f_{X₂}(y) ⊛ ··· ⊛ f_{Xₙ}(y)        (II.17)
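The same convolution rule holds for sums of discrete random variables, with the integrals replaced by sums over the pmfs; the sketch below (an illustrative example, not from the text) convolves the pmfs of two fair dice to obtain the distribution of their total.

```python
from fractions import Fraction

die = {k: Fraction(1, 6) for k in range(1, 7)}  # pmf of one fair die

def convolve(p, q):
    """pmf of X1 + X2 for independent X1 ~ p, X2 ~ q (discrete convolution)."""
    out = {}
    for x1, p1 in p.items():
        for x2, p2 in q.items():
            out[x1 + x2] = out.get(x1 + x2, Fraction(0)) + p1 * p2
    return out

two_dice = convolve(die, die)
print(two_dice[7])                      # 1/6, the most likely total
assert two_dice[7] == Fraction(1, 6)
assert sum(two_dice.values()) == 1      # still a proper distribution
```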
II.3. EXPECTATION
In this section we discuss certain measures associated with the PDF and the
pdf for a random variable. These measures will in general be called expectations, and they deal with integrals of the pdf. As we saw in the last section,
the pdf involves certain difficulties in its definition, and these difficulties were
handily resolved by the use of impulse functions. However, in much of the
literature on probability theory and in most of the literature on queueing
theory the use of impulses is either not accepted, not understood, or not known;
as a result, special care and notation have been built up to get around the
problem of differentiating discontinuous functions. The result is that many
of the integrals encountered are Stieltjes integrals rather than the usual
Riemann integrals with which we are most familiar. Let us take a moment to
define the Stieltjes integral. A Stieltjes integral is defined in terms of a nondecreasing function F(x) and a continuous function φ(x); in addition, two sets
of points {t_k} and {ξ_k} such that t_{k−1} < ξ_k ≤ t_k are defined, and a limit is considered where max |t_k − t_{k−1}| → 0. From these definitions, consider the sum

    Σ_k φ(ξ_k)[F(t_k) − F(t_{k−1})]

This sum tends to a limit as the intervals shrink to zero, independent of the
sets {t_k} and {ξ_k}, and the limit is referred to as the Stieltjes integral of φ
with respect to F. This Stieltjes integral is written as

    ∫ φ(x) dF(x)
Of course, we recognize that the PDF may be identified with the function F
in this definition and that dF(x) may be identified with the pdf [say, f(x)]
through

    dF(x) = f(x) dx

by definition. Without the use of impulses the pdf may not exist; however,
the Stieltjes integral will always exist and therefore it avoids the issue of
impulses. However, in this text we will feel free to incorporate impulse
functions and therefore will work with both the Riemann and Stieltjes
integrals; when impulses are permitted in the function f(x) we then have the
following identity:

    E[X] ≜ X̄ ≜ ∫_{−∞}^{∞} x dF_X(x)        (II.18)

This last is given in the form of a Stieltjes integral; in the form of a Riemann
integral we have, of course,

    E[X] = ∫_{−∞}^{∞} x f_X(x) dx

It can also be shown that the mean may be computed directly from the PDF through

    E[X] = ∫_{0}^{∞} [1 − F_X(x)] dx − ∫_{−∞}^{0} F_X(x) dx

and, for a nonnegative random variable [for which F_X(x) = 0 when x < 0], this reduces to

    E[X] = ∫_{0}^{∞} [1 − F_X(x)] dx
Now consider a random variable Y given as a function of another random variable X, namely Y = g(X). We may define the expectation E_Y[Y] for Y in terms of its PDF just as we did
for X; the subscript Y on the expectation is there to distinguish expectation
with respect to Y as opposed to any other random variable (in this case X).
Thus we have

    E_Y[Y] = ∫_{−∞}^{∞} y dF_Y(y)
This last computation requires that we find either F_Y(y) or f_Y(y), which, as
mentioned in the previous section, may be a rather complex computation.
However, the fundamental theorem of expectation gives a much more straightforward calculation for this expectation in terms of the distribution of the
underlying random variable X, namely,

    E_Y[Y] = E_X[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx
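For a discrete random variable the theorem is easy to verify by brute force: build the distribution of Y = g(X) and take its mean, then compare with Σ g(x) P[X = x]. The blackjack probabilities of Eq. (II.7) with g(x) = x² serve as an illustration.

```python
from fractions import Fraction

pmf_X = {-5: Fraction(3, 8), 0: Fraction(1, 4), 5: Fraction(3, 8)}  # Eq. (II.7)
g = lambda x: x * x

# Method 1: build the distribution of Y = g(X) first, then take E[Y]
pmf_Y = {}
for x, p in pmf_X.items():
    pmf_Y[g(x)] = pmf_Y.get(g(x), Fraction(0)) + p
e_y = sum(y * p for y, p in pmf_Y.items())

# Method 2: fundamental theorem of expectation, E[g(X)] = sum g(x) P[X = x]
e_gx = sum(g(x) * p for x, p in pmf_X.items())

print(e_y, e_gx)     # both 75/4
assert e_y == e_gx
```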
We may compute the expectation of the sum of two random variables by
the following obvious generalization of the one-dimensional case:

    E[X + Y] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x + y) f_XY(x, y) dx dy
             = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x f_XY(x, y) dx dy + ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f_XY(x, y) dx dy
             = ∫_{−∞}^{∞} x f_X(x) dx + ∫_{−∞}^{∞} y f_Y(y) dy
             = E[X] + E[Y]        (II.19)

Note that this result holds whether or not X and Y are independent.
A similar question may be asked about the product of two random variables;
when X and Y are independent, so that f_XY(x, y) = f_X(x)f_Y(y), we have

    E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y f_X(x) f_Y(y) dx dy = E[X]E[Y]        (II.20)
More generally, for independent random variables X and Y and any two functions g(·) and h(·) we have

    E[g(X)h(Y)] = E[g(X)]E[h(Y)]        (II.21)
Next we consider moments. The nth moment of the random variable X is defined as

    X̄ⁿ ≜ E[Xⁿ] = ∫_{−∞}^{∞} xⁿ f_X(x) dx

and the nth central moment as

    E[(X − X̄)ⁿ] = ∫_{−∞}^{∞} (x − X̄)ⁿ f_X(x) dx

The nth central moment may be expressed in terms of the first n moments
themselves; to show this, we first write down the following identity making
use of the binomial theorem:

    (X − X̄)ⁿ = Σ_{k=0}^{n} (n over k) Xᵏ (−X̄)^{n−k}

Taking expectations term by term, we obtain

    E[(X − X̄)ⁿ] = Σ_{k=0}^{n} (n over k) E[Xᵏ] (−X̄)^{n−k}        (II.22)
The most important central moment is the second central moment, known as the
variance and denoted by σ_X²:

    σ_X² ≜ E[(X − X̄)²] = E[X²] − (X̄)²
In the second line of this last equation we have taken advantage of Eq.
(II.22) and have expressed the variance (a central moment) in terms of the
first two moments themselves. The square root of the variance, σ_X, is referred
to as the standard deviation. The ratio of the standard deviation to the mean
of a random variable is a most important quantity in statistics and also in
queueing theory; this ratio is referred to as the coefficient of variation and is
denoted by

    C_X ≜ σ_X / X̄        (II.23)
We now introduce the characteristic function of the random variable X, defined as

    φ_X(u) ≜ E[e^{juX}] = ∫_{−∞}^{∞} e^{jux} f_X(x) dx

where j ≜ (−1)^{1/2}. Clearly,

    |φ_X(u)| ≤ ∫_{−∞}^{∞} |e^{jux}| f_X(x) dx

and since |e^{jux}| = 1, we have

    |φ_X(u)| ≤ 1
An important property of the characteristic function may be seen by expanding the exponential in the integrand in terms of its power series and then
integrating each term separately, as follows:

    φ_X(u) = ∫_{−∞}^{∞} [1 + jux + (jux)²/2! + ···] f_X(x) dx
           = 1 + juX̄ + (ju)²E[X²]/2! + ···
Thus the nth derivative of φ_X(u) evaluated at u = 0 equals jⁿE[Xⁿ]. This last important result gives a rather simple way for calculating a constant
times the nth moment of the random variable X.

Since this property is frequently used, we find it convenient to adopt the
following simplified notation [consistent with that in Eq. (1.37)] for the nth
derivative of an arbitrary function g(x), evaluated at some fixed value x = x₀:

    g^{(n)}(x₀) ≜ dⁿg(x)/dxⁿ |_{x=x₀}        (II.25)

In this notation, then, φ_X^{(n)}(0) = jⁿE[Xⁿ].
The moment generating function, denoted by M_X(v), is given below along
with the appropriate differential relationship that yields the nth moment of X
directly:

    M_X(v) ≜ E[e^{vX}] = ∫_{−∞}^{∞} e^{vx} f_X(x) dx

    M_X^{(n)}(0) = E[Xⁿ]

where v is a real variable. From this last property it is easy to see where the
name "moment generating function" comes from. The derivation of this
moment relationship is the same as that for the characteristic function.
Another important and useful function is the Laplace transform of the pdf
of a random variable X. We find it expedient to use a notation now in which
the PDF for a random variable is labeled in a way that identifies the random
variable without the use of subscripts. Thus, for example, if we have a
random variable X, which represents, say, the interarrival time between
adjacent customers to a system, then we define A(x) to be the PDF for X:

    A(x) = P[X ≤ x]

where the symbol A is keyed to the word "Arrival." Further, the pdf for this
example would be denoted a(x). Finally, then, we denote the Laplace transform of a(x) by A*(s), and it is given by the following:

    A*(s) ≜ E[e^{−sX}] = ∫_{0⁻}^{∞} e^{−sx} a(x) dx
The reader should take special note that the lower limit 0 is defined as 0⁻;
that is, the limit comes in from the left so that we specifically mean to include
any impulse functions at the origin. In a fashion identical to that for the
moment generating function and for the characteristic function, we may find
the moments of X through the following formula:

    A*^{(n)}(0) = (−1)ⁿ E[Xⁿ]        (II.26)
Moreover, we have

    |A*(s)| ≤ ∫_{0⁻}^{∞} |e^{−sx}| a(x) dx

But the complex variable s consists of a real part Re(s) = σ and an imaginary
part Im(s) = ω such that s = σ + jω, so that |e^{−sx}| = e^{−σx} ≤ 1 for σ ≥ 0 and x ≥ 0. Since ∫_{0⁻}^{∞} a(x) dx = 1, we conclude that

    |A*(s)| ≤ 1        Re(s) ≥ 0
It is clear that the three functions φ_X(u), M_X(v), and A*(s) are all close relatives of each other. In particular, we have the following relationship:

    φ_X(js) = M_X(−s) = A*(s)
Thus we are not surprised that the moment generating properties (by differentiation) are so similar for each; this property is the central property that
we will take advantage of in our studies. Thus the nth moment of X is calculable from any of the following expressions:

    E[Xⁿ] = j^{−n} φ_X^{(n)}(0) = M_X^{(n)}(0) = (−1)ⁿ A*^{(n)}(0)
As an example, consider the exponential distribution of Eqs. (II.12) and (II.13), whose pdf is λe^{−λx} for x ≥ 0 and 0 for x < 0. For this distribution we find

    φ_X(u) = λ/(λ − ju)

    M_X(v) = λ/(λ − v)

    A*(s) = λ/(λ + s)

Each of these functions must equal unity when its argument is zero [φ_X(0) = M_X(0) = A*(0) = 1, since each is then merely the integral of the pdf],
and, of course, this checks out for our example as well. Using our expression
for the first moment we find through any one of our three functions that

    X̄ = 1/λ

and we may also verify that the second moment may be calculated from any
of the three to yield

    E[X²] = 2/λ²
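The moment-generating property M_X^{(n)}(0) = E[Xⁿ] can be checked numerically for this exponential example; the sketch below (illustrative rate λ = 2) approximates the derivatives of M_X(v) = λ/(λ − v) at v = 0 by finite differences and compares them with 1/λ and 2/λ².

```python
lam = 2.0  # illustrative rate

def M(v):
    # Moment generating function of the exponential: M_X(v) = lam/(lam - v), v < lam
    return lam / (lam - v)

h = 1e-4
first = (M(h) - M(-h)) / (2 * h)               # M'(0)  ~ first moment
second = (M(h) - 2 * M(0.0) + M(-h)) / h**2    # M''(0) ~ second moment

print(first, second)   # close to 1/lam = 0.5 and 2/lam^2 = 0.5
```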
One additional transform is especially useful in the case of a discrete random variable X that takes on only nonnegative integer values. Let

    g_k ≜ P[X = k]        k = 0, 1, 2, ...
Then we define the z-transform, also known as the probability generating function, of X as

    G(z) ≜ E[z^X] = Σ_k z^k g_k        (II.27)

Clearly,

    |G(z)| ≤ Σ_k |z^k| |g_k| ≤ Σ_k g_k

and so

    |G(z)| ≤ 1        for |z| ≤ 1        (II.28)
Note that the first derivative evaluated at z = 1 yields the first moment of X:

    G^{(1)}(1) = X̄        (II.29)

and that the second derivative yields

    G^{(2)}(1) = E[X²] − X̄

in a fashion similar to that for continuous random variables.* Note that
in all cases

    G(1) = 1
Let us apply these methods to the blackjack example considered earlier in
this appendix. Working either with Eq. (II.7), which gives the probability
of the various winnings, or with the impulsive pdf given in Figure II.4, we find
that the probability generating function for the number of dollars won in
a game of blackjack is given by

    G(z) = (3/8)z^{−5} + 1/4 + (3/8)z^{5}

We note here that, of course, G(1) = 1. The mean number of dollars won
may be calculated as

    X̄ = G^{(1)}(1) = 0

and the second moment as

    E[X²] = G^{(2)}(1) + G^{(1)}(1) = 75/4

Thus we have that σ_X² = G^{(2)}(1) + G^{(1)}(1) − [G^{(1)}(1)]² = 75/4.
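These derivative formulas can be evaluated exactly from the pmf, since G^{(1)}(1) = Σ k g_k and G^{(2)}(1) = Σ k(k−1) g_k; a short check for the blackjack distribution:

```python
from fractions import Fraction

pmf = {-5: Fraction(3, 8), 0: Fraction(1, 4), 5: Fraction(3, 8)}  # Eq. (II.7)

g1 = sum(k * p for k, p in pmf.items())            # G'(1)  = mean
g2 = sum(k * (k - 1) * p for k, p in pmf.items())  # G''(1)

mean = g1
second_moment = g2 + g1            # E[X^2] = G''(1) + G'(1)
variance = second_moment - mean**2

print(mean, second_moment, variance)   # 0, 75/4, 75/4
```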
Let us now return to the sum of n independent random variables given in Eq. (II.16) and compute its characteristic function:

    φ_Y(u) ≜ E[e^{juY}] = E[e^{ju Σᵢ Xᵢ}] = E[e^{juX₁} e^{juX₂} ··· e^{juXₙ}]

Now in Eq. (II.21) we showed that the expectation of the product of functions
of independent random variables is equal to the product of the expectations
of each function separately; applying this to the above we have

    φ_Y(u) = E[e^{juX₁}] E[e^{juX₂}] ··· E[e^{juXₙ}]

Of course, the right-hand side of this equation is just a product of characteristic functions, and so

    φ_Y(u) = φ_{X₁}(u) φ_{X₂}(u) ··· φ_{Xₙ}(u)        (II.30)
The same product form holds for the moment generating function and for the Laplace transform of the pdf; in particular,

    M_Y(v) = M_{X₁}(v) M_{X₂}(v) ··· M_{Xₙ}(v)        (II.31)

Let us now compute the variance of the sum Y = X₁ + X₂ of two random variables. Squaring the sum, we have

    (X₁ + X₂)² = X₁² + 2X₁X₂ + X₂²

Forming the variance of Y and then using these last two equations we have

    σ_Y² = E[Y²] − (Ȳ)²
         = E[X₁²] − (X̄₁)² + E[X₂²] − (X̄₂)² + 2(E[X₁X₂] − X̄₁X̄₂)
         = σ_{X₁}² + σ_{X₂}² + 2(E[X₁X₂] − X̄₁X̄₂)

When X₁ and X₂ are independent, Eq. (II.20) gives E[X₁X₂] = X̄₁X̄₂, and so the last term vanishes.
In a similar fashion it is easy to show that the variance of the sum of n
independent random variables is equal to the sum of the variances of each,
that is,

    σ_Y² = Σ_{i=1}^{n} σ_{Xᵢ}²

Continuing with sums of independent random variables, let us now assume
that the number of these variables that are to be summed together is itself a
random variable; that is, we define

    Y = Σ_{i=1}^{N} Xᵢ
where N is itself a random variable, independent of the Xᵢ. Let Y*(s) denote the Laplace transform of the pdf of Y. Conditioning on the number of terms N and then removing the condition, we have

    Y*(s) = Σ_{n=0}^{∞} E[e^{−sX₁}] ··· E[e^{−sXₙ}] P[N = n]

    Y*(s) = Σ_{n=0}^{∞} [X*(s)]ⁿ P[N = n]        (II.33)

where we have denoted the Laplace transform of the pdf for each of the Xᵢ
by X*(s). The final expression given in Eq. (II.33) is immediately recognized
as the z-transform of the distribution of N evaluated at z = X*(s); denoting that z-transform by N(z), we have

    Y*(s) = N(X*(s))        (II.34)

Differentiating once and setting s = 0, we find that the mean of this random sum is Ȳ = N̄X̄, which is a perfectly reasonable result. Similarly, one can find the variance
of this random sum by differentiating twice and then subtracting off the
mean squared to obtain

    σ_Y² = N̄σ_X² + (X̄)²σ_N²

This last result perhaps is not so intuitive.
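Under the stated assumption that N is independent of the iid Xᵢ, both formulas can be checked exactly by enumerating a small example; the two distributions below are illustrative choices, not from the text.

```python
from fractions import Fraction
from itertools import product

pmf_N = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}  # random count N
pmf_X = {1: Fraction(1, 2), 2: Fraction(1, 2)}                     # each summand X_i

def mean_var(pmf):
    m = sum(k * p for k, p in pmf.items())
    return m, sum(k * k * p for k, p in pmf.items()) - m * m

# Exact distribution of Y = X_1 + ... + X_N by enumeration
pmf_Y = {}
for n, pn in pmf_N.items():
    for xs in product(pmf_X, repeat=n):
        p = pn
        for x in xs:
            p *= pmf_X[x]
        pmf_Y[sum(xs)] = pmf_Y.get(sum(xs), Fraction(0)) + p

mN, vN = mean_var(pmf_N)
mX, vX = mean_var(pmf_X)
mY, vY = mean_var(pmf_Y)

assert mY == mN * mX                  # Y-bar = N-bar * X-bar
assert vY == mN * vX + mX**2 * vN     # sigma_Y^2 = N-bar*sigma_X^2 + (X-bar)^2*sigma_N^2
print(mY, vY)
```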
II.5. INEQUALITIES AND LIMIT THEOREMS

In this section we present some of the classical inequalities and limit
theorems in probability theory.
Let us first consider bounding the probability that a random variable
exceeds some value. If we know only the mean value of the random variable,
then the following Markov inequality can be established for a nonnegative
random variable X:

    P[X ≥ x] ≤ X̄/x

Since only the mean value of the random variable is utilized, this inequality
is rather weak. The Chebyshev inequality makes use of the mean and variance
and is somewhat tighter; it states that for any x > 0,

    P[|X − X̄| ≥ x] ≤ σ_X²/x²

Other simple inequalities involve moments of two random variables, as
follows: First we have the Cauchy-Schwarz inequality, which makes a
statement about the expectation of a product of random variables in terms of
the second moments of each:

    (E[XY])² ≤ E[X²]E[Y²]
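The Markov and Chebyshev bounds are easy to see in action; the sketch below (illustrative, not from the text) compares them against the exact tail P[X ≥ x] = e^{−λx} of an exponential random variable with λ = 1, for which X̄ = 1 and σ_X² = 1.

```python
import math

lam = 1.0
mean, var = 1 / lam, 1 / lam**2

for x in (2.0, 3.0, 5.0):
    exact_tail = math.exp(-lam * x)          # P[X >= x] for the exponential
    markov = mean / x                        # Markov bound on P[X >= x]
    assert exact_tail <= markov

    # Chebyshev bounds the two-sided deviation, which dominates the upper tail
    # (valid here since x > mean):
    chebyshev = var / (x - mean) ** 2
    assert exact_tail <= chebyshev

print("bounds hold")
```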
A generalization of this last is Hölder's inequality, which states, for α > 1,
β > 1, α⁻¹ + β⁻¹ = 1, and X > 0, Y > 0, that

    E[XY] ≤ (E[X^α])^{1/α} (E[Y^β])^{1/β}

We also have the triangle inequality

    |X + Y| ≤ |X| + |Y|
A generalization of the triangle inequality, which is known as the C_r-inequality,
is

    E[|X + Y|^r] ≤ C_r (E[|X|^r] + E[|Y|^r])

where

    C_r = { 1          0 < r ≤ 1
          { 2^{r−1}    1 < r
A function g(x) is said to be convex if, for all x₁, x₂ and all 0 ≤ α ≤ 1,

    g(αx₁ + (1 − α)x₂) ≤ αg(x₁) + (1 − α)g(x₂)

For such convex functions g and random variables X, we have Jensen's
inequality as follows:

    E[g(X)] ≥ g(X̄)
When we deal with sums of random variables, we find that some very
nice limiting properties exist. Let us once again consider the sum of n
independent identically distributed random variables Xᵢ, but let us now
divide that sum by the number of terms n, thusly:

    W_n = (1/n) Σ_{i=1}^{n} Xᵢ
The mean and variance of this arithmetic average are easily found to be

    W̄_n = X̄        σ_{W_n}² = σ_X²/n
If we now apply the Chebyshev inequality to the random variable W_n and
make use of these last two observations, we may express our bound in terms
of the mean and variance of the random variable X itself, thusly:

    P[|W_n − X̄| ≥ x] ≤ σ_X²/(nx²)        (II.36)

This very important result says that the arithmetic mean of the sum of n
independent and identically distributed random variables will approach
its expected value as n increases. This is due to the decreasing value of
σ_X²/nx² as n grows (σ_X²/x² remains constant). In fact, this leads us directly to
the weak law of large numbers, namely, that for any ε > 0 we have

    lim_{n→∞} P[|W_n − X̄| ≥ ε] = 0
The celebrated central limit theorem concerns the normalized random variable

    Z_n = (Σ_{i=1}^{n} Xᵢ − nX̄) / (σ_X n^{1/2})        (II.37)
and states that the PDF for Z_n tends to the standard normal distribution as
n increases; that is, for any real number x we have

    lim_{n→∞} P[Z_n ≤ x] = Φ(x)

where

    Φ(x) ≜ ∫_{−∞}^{x} [1/(2π)^{1/2}] e^{−y²/2} dy
That is, the appropriately normalized sum of a large number of independent
random variables tends to a Gaussian, or normal, distribution. There are
many other forms of the central limit theorem that deal, for example, with
dependent random variables.
A rather sophisticated means for bounding the tail of the sum of a large
number of independent random variables is available in the form of the
Chernoff bound. It involves an inequality similar to the Markov and Chebyshev
inequalities, but makes use of the entire distribution of the random variable
itself (in particular, the moment generating function). Thus let us consider
the sum of n independent identically distributed random variables Xᵢ as
given by

    Y = Σ_{i=1}^{n} Xᵢ
From Eq. (II.31) we know that the moment generating function for Y,
M_Y(v), is related to the moment generating function for each of the random
variables Xᵢ [namely, M_X(v)] through the relationship

    M_Y(v) = [M_X(v)]ⁿ        (II.38)

We wish to bound the tail probability P[Y ≥ y], which may be written in terms of the unit step function u_{−1}(·) as

    P[Y ≥ y] = ∫_{−∞}^{∞} u_{−1}(w − y) f_Y(w) dw        (II.39)

Clearly, for v ≥ 0 we have that the unit step function [see Eq. (1.33)] is
bounded above by the following exponential:

    u_{−1}(w − y) ≤ e^{v(w−y)}

and so

    P[Y ≥ y] ≤ e^{−vy} ∫_{−∞}^{∞} e^{vw} f_Y(w) dw        for v ≥ 0

However, the integral on the right-hand side of this equation is merely the
moment generating function for Y, and so we have

    P[Y ≥ y] ≤ e^{−vy} M_Y(v)        v ≥ 0        (II.40)

Defining

    γ_X(v) ≜ log M_X(v)

and using Eq. (II.38), we may rewrite the bound as

    P[Y ≥ y] ≤ e^{nγ_X(v) − vy}        v ≥ 0
Since this last is good for any value of v (≥ 0), we should choose v to create
the tightest possible bound; this is simply carried out by differentiating the
exponent and setting it equal to zero. We thus find the optimum relationship
between v and y as

    y = nγ_X^{(1)}(v)        (II.41)

Thus the Chernoff bound for the tail of a density function takes the final
form*

    P[Y ≥ nγ_X^{(1)}(v)] ≤ e^{n[γ_X(v) − vγ_X^{(1)}(v)]}        v ≥ 0        (II.42)
It is perhaps worthwhile to carry out an example demonstrating the use of
this last bounding procedure. For this purpose, let us go back to the second
paragraph in this appendix, in which we estimated the odds that at least
490,000 heads would occur in a million tosses of a fair coin. Of course, that
calculation is the same as calculating the probability that no more than
510,000 heads will occur in the same experiment, assuming the coin is fair.
In this example the random variable X may be chosen as follows:

    X = { 1    heads
        { 0    tails

Since Y is the sum of a million trials of this experiment, we have that n = 10⁶,
and we now ask for the complementary probability that Y adds up to 510,000
or more, namely, P[Y ≥ 510,000]. The moment generating function for X is
    M_X(v) = 1/2 + (1/2)e^v

and so

    γ_X(v) = log [(1 + e^v)/2]

Similarly,

    γ_X^{(1)}(v) = e^v / (1 + e^v)

From our formula (II.41) we then must have

    nγ_X^{(1)}(v) = 10⁶ e^v/(1 + e^v) = 510,000 = y

Thus we have

    e^v/(1 + e^v) = 51/100

which gives

    e^v = 51/49

and

    v = log (51/49)

* The same derivation leads to a bound on the "lower tail," in which all three inequalities
appearing in Eq. (II.42) face the other way; for example, v ≤ 0.
Thus we see typically how v might be calculated. Plugging these values
back into Eq. (II.42) we conclude

    P[Y ≥ 510,000] ≤ e^{10⁶[log(50/49) − 0.51 log(51/49)]}

This computation shows that the probability of exceeding 510,000 heads in a
million tosses of a fair coin is less than 10⁻⁸⁶ (this is where the number in
our opening paragraphs comes from). An alternative way of carrying out
this computation would be to make use of the central limit theorem. Let
us do so as an example. For this we require the calculation of the mean and
variance of X, which are easily seen to be X̄ = 1/2, σ_X² = 1/4. Thus from
Eq. (II.37) we have

    Z_n = [Y − 10⁶(1/2)] / [(1/2)10³]

so that P[Y ≥ 510,000] = P[Z_n ≥ 20], which the central limit theorem approximates by

    1 − Φ(20) ≈ 2.8 × 10⁻⁸⁹

For comparison, the Chebyshev inequality of Eq. (II.36) gives only

    P[|Y − 500,000| ≥ 10,000] ≤ (10⁶/4)/(10,000)² = 25 × 10⁻⁴
This Chebyshev result is twice as large as it should be for our calculation, since we have
effectively calculated both tails (namely, the probability that more than
510,000 or fewer than 490,000 heads would occur); thus the appropriate answer
from the Chebyshev inequality would be that the probability of exceeding
510,000 heads is less than or equal to 12.5 × 10⁻⁴. Note what a poor result
this inequality gives compared to the central limit theorem approximation,
which in this case is comparable to the Chernoff bound.
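The numbers quoted in this comparison follow from a few lines of arithmetic; the sketch below evaluates the Chernoff exponent and the Chebyshev bound for the coin example.

```python
import math

n, y = 10**6, 510_000
v = math.log(51 / 49)                       # optimizing v, from e^v = 51/49

gamma = math.log(0.5 * (1 + math.exp(v)))   # gamma_X(v) = log[(1 + e^v)/2]
exponent = n * (gamma - 0.51 * v)           # Chernoff exponent n[gamma(v) - v*gamma'(v)]
print(exponent)                             # about -200, so the bound is e^{-200}

# Chebyshev: sigma_Y^2 = n/4, deviation 10,000, counting both tails
chebyshev = (n / 4) / 10_000**2
print(chebyshev)                            # 0.0025 = 25 x 10^-4
```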
II.6. STOCHASTIC PROCESSES

It is often said that queueing theory is part of the theory of applied stochastic processes. As such, the main portion of this text is really the proper
sequel to this section on stochastic processes; here we merely state some of
the fundamental definitions and concepts.
We begin by considering a probability system (S, ℰ, P), which consists
of a sample space S, a set of events {A, B, ...}, and a probability measure P.
In addition, we have already introduced the notion of a random variable
X(w), defined as a function on the sample space.
A stochastic process X(t) extends this notion: to each sample point w we assign a time function X(t, w). At each fixed time t, X(t) is a random variable with PDF

    F_X(x; t) = P[X(t) ≤ x]

For a set of time instants {t₁, t₂, ..., t_n} we have a joint PDF,

    F_X(x₁, ..., x_n; t₁, ..., t_n) ≜ P[X(t₁) ≤ x₁, X(t₂) ≤ x₂, ..., X(t_n) ≤ x_n]

and we use the vector notation F_X(x; t) to denote this function.

A stochastic process X(t) is said to be stationary if all F_X(x; t) are invariant to shifts in time; that is, for any given constant τ the following holds:

    F_X(x; t + τ) = F_X(x; t)

where the vector t + τ denotes (t₁ + τ, t₂ + τ, ..., t_n + τ).
The mean value of the process at time t is

    X̄(t) ≜ E[X(t)] = ∫_{−∞}^{∞} x f_X(x; t) dx
and the autocorrelation function is

    R_XX(t₁, t₂) ≜ E[X(t₁)X(t₂)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x₁x₂ f_X(x₁, x₂; t₁, t₂) dx₁ dx₂

A large theory of stochastic processes has been developed, known as second-order theory, in which these processes are classified and distinguished
only on the basis of their mean X̄(t) and autocorrelation R_XX(t₁, t₂). In the
case of stationary random processes, we have

    X̄(t) = X̄        (II.43)

and

    R_XX(t₁, t₂) = R_XX(t₂ − t₁)        (II.44)
Glossary of Notation*

(Only the notation used often in this book is included below.)

NOTATION†            DEFINITION                                        TYPICAL PAGE REFERENCE
A_n(t) = A(t)        P[t_n ≤ t] = P[t̃ ≤ t]                             13
A_n*(s) = A*(s)      Laplace transform of a(t)                         14
a_k                  kth moment of a(t)                                14
a_n(t) = a(t)        dA_n(t)/dt = dA(t)/dt                             14
B_n(x) = B(x)        P[x_n ≤ x] = P[x̃ ≤ x]                             14
B_n*(s) = B*(s)      Laplace transform of b(x)                         14
b_k                  kth moment of b(x)                                14
b_n(x) = b(x)        dB_n(x)/dx = dB(x)/dx                             14
C_b²                 Coefficient of variation for service time         187
C_n                  nth customer to enter the system                  11
C_n(u) = C(u)        P[u_n ≤ u]                                        281
C_n*(s) = C*(s)      Laplace transform of c_n(u) = c(u)                285
c_n(u) = c(u)        dC_n(u)/du = dC(u)/du                             281
D                    Denotes deterministic distribution                viii
d_k                  P[q̃ = k]                                          176
E[X] = X̄             Expectation of the random variable X              378
E_i                  System state i                                    27
E_r                  Denotes r-stage Erlangian distribution            124
FCFS                 First-come-first-served                           8
F_X(x)               P[X ≤ x]                                          370
* In those few cases where a symbol has more than one meaning, the context (or a
specific statement) resolves the ambiguity.
† The use of the notation y_n → y is meant to indicate that y = lim y_n as n → ∞, whereas
y(t) → y indicates that y = lim y(t) as t → ∞.
f_X(x)               dF_X(x)/dx                                        371
G                    Denotes general distribution                      viii
G(y)                 Busy-period PDF                                   208
G*(s)                Laplace transform of g(y)                         211
g_k                                                                    213
g(y)                 dG(y)/dy                                          215
H_R                  Denotes R-stage hyperexponential distribution     viii
Im (s)               Imaginary part of s
I_n → Ĩ              Idle-period random variable                       206
I*(s)                Laplace transform of the idle-period pdf          307, 310
K                    Size of finite storage                            viii
LCFS                 Last-come-first-served                            viii
M                    Denotes exponential distribution                  viii
N_q(t) → N_q         Number in queue at time t
N(t) → N             Number in system at time t
o(x)                 Any function for which lim_{x→0} o(x)/x = 0       48
P_k(t)               P[N(t) = k]
p_ij                 P[next state is E_j | current state is E_i]
P[A]                 Probability of the event A                        364
P[A | B]             Conditional probability of A given B              365
p_k                  P[k customers in system]                          90
P_ij(s, t)           P[X(t) = j | X(s) = i]
Q(z)                                                                   177
q_n → q̃                                                                242
r_ij
r_k                  P[q̃ = k]                                          176
S_n(y) → S(y)        P[s_n ≤ y] → P[s̃ ≤ y]                             14
S_n*(s) → S*(s)      Laplace transform of s_n(y)                       14
s_n(y) → s(y)        dS_n(y)/dy → dS(y)/dy                             14
s_n → s̃              Time in system for C_n; s̄ = T                     14
T                    Average time in system                            14
t_n → t̃              Interarrival time for C_n; t̄ = 1/λ                14
U(t)                 Unfinished work in system at time t               206
u_n → ũ              Interdeparture time                               277
V(z)
W                    Average waiting time (in queue); w̄ = W            14
w_n → w̃              Waiting time (in queue) for C_n                   14
W_n(y) → W(y)        P[w_n ≤ y] → P[w̃ ≤ y]                             14
W_n*(s) → W*(s)      Laplace transform of w_n(y)                       14
w_n(y) → w(y)        dW_n(y)/dy → dW(y)/dy                             19
X(t)                                                                   53
x_n → x̃              Service time for C_n; x̄ = 1/μ                     14
x_k
y, z
GLOSSARY OF NOTATION
(X(t)
Number of arrivals in (0 , t)
Yi
bet)
},
z,
fl
fl k
1t ( n ) -)-
Cnl
1Tk
1t
---+ 1Tk
399
15
149
16
14
53
14
54
31
29
II a,
at a2 '
Utilization factor
"
ak (Product notation)
334
i= l
18
249
305
305
12
285
285
(0, t)
X= E[X]
(y) +
max [0, y]
(:)
A/B /m /K/M
FC nl(a)
fCkl(x)
o
f ---+ g
A <c4B
=k
11
n'
( .
! n - k)!
z-Server queue with A(t) and B(x)
identified by A and B, respectively,
with storage capacity of size K, and
with a customer population of size M
(if any of the la st two descriptors are
missing the y are a ssumed to be
infinite)
d nF(y)/dyn I .~a
f(x ) 0 . .. 0 f (x)
k-fold convolution
Convolution opera tor
Input f gives output g
Statement A implies sta tement Band
conver sely
Binomial coefficient
15
378
277
368
viii
382
200
376
322
68
328
18
.18
18
17
GENERAL RESULTS

$T = \bar{x} + W$
$N = \lambda T$  (Little's result)
$N_q = \lambda W$
$N_q = N - \rho$
$p_k = r_k = d_k$
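A quick numerical check of these relations (an illustrative sketch, not from the text; the rates below are arbitrary), using the M/M/1 formulas as a concrete instance:

```python
# Verify T = xbar + W, N = lam*T (Little), Nq = lam*W, Nq = N - rho
# for an M/M/1 example with arbitrary rates.
lam, mu = 3.0, 5.0          # arrival rate, service rate (arbitrary)
rho = lam / mu              # utilization factor
xbar = 1.0 / mu             # mean service time
W = rho * xbar / (1 - rho)  # M/M/1 mean wait in queue
T = xbar + W                # T = xbar + W
N = lam * T                 # Little's result: N = lam * T
Nq = lam * W                # Nq = lam * W
print(N, Nq, N - rho)
```

With these numbers $N = \rho/(1-\rho) = 1.5$ and $N_q = N - \rho = 0.9$, as the general relations require.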
MARKOV PROCESSES

The Poisson process:
$P_k(t) = \frac{(\lambda t)^k}{k!}\, e^{-\lambda t}$
Mean number of arrivals in $(0, t)$: $\lambda t$
BIRTH-DEATH SYSTEMS

$p_k = p_0 \prod_{i=0}^{k-1} \frac{\lambda_i}{\mu_{i+1}}, \qquad k \ge 1$

$p_0 = \dfrac{1}{1 + \displaystyle\sum_{k=1}^{\infty} \prod_{i=0}^{k-1} \frac{\lambda_i}{\mu_{i+1}}}$

M/M/1

Transient solution (initially $i$ customers, $a = 2\mu\sqrt{\rho}$):
$P_k(t) = e^{-(\lambda+\mu)t}\left[\rho^{(k-i)/2} I_{k-i}(at) + \rho^{(k-i-1)/2} I_{k+i+1}(at) + (1 - \rho)\rho^k \sum_{j=k+i+2}^{\infty} \rho^{-j/2} I_j(at)\right]$

$p_k = (1 - \rho)\rho^k$
$\bar{N} = \dfrac{\rho}{1 - \rho}$
$\sigma_N^2 = \dfrac{\rho}{(1 - \rho)^2}$
$T = \dfrac{1/\mu}{1 - \rho}$
$W = \dfrac{\rho/\mu}{1 - \rho}$
$P[\ge k \text{ in system}] = \rho^k$
$s(y) = \mu(1 - \rho)e^{-\mu(1-\rho)y}, \quad y \ge 0$
$S(y) = 1 - e^{-\mu(1-\rho)y}, \quad y \ge 0$
$w(y) = (1 - \rho)u_0(y) + \lambda(1 - \rho)e^{-\mu(1-\rho)y}, \quad y \ge 0$
$W(y) = 1 - \rho e^{-\mu(1-\rho)y}, \quad y \ge 0$
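The M/M/1 equilibrium results can be checked numerically from the geometric distribution alone (a sketch; the rates are arbitrary):

```python
# Evaluate p_k = (1-rho)rho^k for M/M/1 and recover the mean, variance,
# and tail probability P[>= 3 in system] = rho^3 by direct summation.
lam, mu = 4.0, 5.0
rho = lam / mu
pk = [(1 - rho) * rho**k for k in range(2000)]      # truncated distribution
N = sum(k * p for k, p in enumerate(pk))            # mean number in system
var = sum(k * k * p for k, p in enumerate(pk)) - N * N  # variance
tail = sum(pk[3:])                                  # P[>= 3 in system]
print(N, var, tail)
```

For $\rho = 0.8$ the sums reproduce $\bar{N} = \rho/(1-\rho) = 4$, $\sigma_N^2 = \rho/(1-\rho)^2 = 20$, and $\rho^3 = 0.512$.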
MARKOV CHAINS: SUMMARY OF RELATIONSHIPS

Discrete-time, homogeneous:
One-step transition probability: $p_{ij} = P[X_{n+1} = j \mid X_n = i]$
Matrix of one-step transition probabilities: $\mathbf{P} = [p_{ij}]$
Multiple-step transition probabilities: $p_{ij}^{(m)} = P[X_{n+m} = j \mid X_n = i]$; matrix $\mathbf{P}^{(m)} = [p_{ij}^{(m)}]$
Chapman-Kolmogorov equation: $p_{ij}^{(m)} = \sum_k p_{ik}^{(m-q)}\, p_{kj}^{(q)}$
Forward equation: $\mathbf{P}^{(m)} = \mathbf{P}^{(m-1)}\mathbf{P}$
Backward equation: $\mathbf{P}^{(m)} = \mathbf{P}\,\mathbf{P}^{(m-1)}$
Solution: $\mathbf{P}^{(m)} = \mathbf{P}^m$
State probabilities: $\pi_j^{(n)} = P[X_n = j]$, vector $\boldsymbol{\pi}^{(n)} = [\pi_j^{(n)}]$
$\boldsymbol{\pi}^{(n)} = \boldsymbol{\pi}^{(n-1)}\mathbf{P} = \boldsymbol{\pi}^{(0)}\mathbf{P}^n$
Equilibrium solution: $\boldsymbol{\pi} = \boldsymbol{\pi}\mathbf{P}$
Transform relationship: $\sum_{m=0}^{\infty} \mathbf{P}^{(m)} z^m = [\mathbf{I} - z\mathbf{P}]^{-1}$

Discrete-time, nonhomogeneous:
One-step: $p_{ij}(n, n+1) = P[X_{n+1} = j \mid X_n = i]$; matrix $\mathbf{P}(n)$
Multiple-step: $p_{ij}(m, n) = P[X_n = j \mid X_m = i]$; matrix $\mathbf{H}(m, n)$
Chapman-Kolmogorov: $p_{ij}(m, n) = \sum_k p_{ik}(m, q)\, p_{kj}(q, n)$; $\mathbf{H}(m, n) = \mathbf{H}(m, q)\mathbf{H}(q, n)$
Forward equation: $\mathbf{H}(m, n) = \mathbf{H}(m, n-1)\mathbf{P}(n-1)$
Backward equation: $\mathbf{H}(m, n) = \mathbf{P}(m)\mathbf{H}(m+1, n)$
Solution: $\mathbf{H}(m, n) = \mathbf{P}(m)\mathbf{P}(m+1) \cdots \mathbf{P}(n-1)$
State probabilities: $\boldsymbol{\pi}^{(n)} = \boldsymbol{\pi}^{(0)}\mathbf{P}(0)\mathbf{P}(1) \cdots \mathbf{P}(n-1)$

Continuous-time, homogeneous:
Transition probabilities: $p_{ij}(t) = P[X(s + t) = j \mid X(s) = i]$; matrix $\mathbf{H}(t) = [p_{ij}(t)]$
Chapman-Kolmogorov: $p_{ij}(t) = \sum_k p_{ik}(t - s)\, p_{kj}(s)$; $\mathbf{H}(t) = \mathbf{H}(t - s)\mathbf{H}(s)$
Transition-rate matrix: $\mathbf{Q} = \lim_{\Delta t \to 0} \dfrac{\mathbf{P}(\Delta t) - \mathbf{I}}{\Delta t}$
Forward equation: $d\mathbf{H}(t)/dt = \mathbf{H}(t)\mathbf{Q}$
Backward equation: $d\mathbf{H}(t)/dt = \mathbf{Q}\mathbf{H}(t)$
Solution: $\mathbf{H}(t) = e^{\mathbf{Q}t}$
State probabilities: $\boldsymbol{\pi}(t) = [\pi_j(t)]$; $d\boldsymbol{\pi}(t)/dt = \boldsymbol{\pi}(t)\mathbf{Q}$; $\boldsymbol{\pi}(t) = \boldsymbol{\pi}(0)e^{\mathbf{Q}t}$
Equilibrium solution: $\mathbf{0} = \boldsymbol{\pi}\mathbf{Q}$

Continuous-time, nonhomogeneous:
Transition probabilities: $p_{ij}(t, t + \Delta t) = P[X(t + \Delta t) = j \mid X(t) = i]$; more generally $p_{ij}(s, t) = P[X(t) = j \mid X(s) = i]$, with matrix $\mathbf{H}(s, t) = [p_{ij}(s, t)]$ and rate matrix $\mathbf{Q}(t)$
Chapman-Kolmogorov: $\mathbf{H}(s, t) = \mathbf{H}(s, u)\mathbf{H}(u, t)$
Forward equation: $\partial\mathbf{H}(s, t)/\partial t = \mathbf{H}(s, t)\mathbf{Q}(t)$
Backward equation: $\partial\mathbf{H}(s, t)/\partial s = -\mathbf{Q}(s)\mathbf{H}(s, t)$
State probabilities: $d\boldsymbol{\pi}(t)/dt = \boldsymbol{\pi}(t)\mathbf{Q}(t)$
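The discrete-time equilibrium solution $\boldsymbol{\pi} = \boldsymbol{\pi}\mathbf{P}$ can be computed for a concrete chain by power iteration, that is, by applying $\boldsymbol{\pi}^{(n)} = \boldsymbol{\pi}^{(n-1)}\mathbf{P}$ until convergence (a sketch; the 3-state matrix below is an arbitrary example, not from the text):

```python
# Power iteration for the equilibrium vector of a discrete-time Markov
# chain: repeatedly multiply the state vector by P (rows of P sum to 1).
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]
pi = [1.0, 0.0, 0.0]                     # any initial distribution pi^(0)
for _ in range(200):                     # pi^(n) = pi^(0) P^n  ->  pi
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
print(pi)
```

For this chain the balance equations give $\boldsymbol{\pi} = (0.25, 0.5, 0.25)$, which the iteration reaches to machine precision.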
M/M/1 busy period

Distribution of the number served in a busy period:
$f_n = \frac{1}{n}\binom{2n - 2}{n - 1}\rho^{n-1}(1 + \rho)^{1-2n}$

M/M/1/K

$p_k = \begin{cases} \dfrac{(1 - \lambda/\mu)(\lambda/\mu)^k}{1 - (\lambda/\mu)^{K+1}}, & 0 \le k \le K \\[4pt] 0, & \text{otherwise} \end{cases}$

M/M/1//M (finite customer population $M$)

$p_k = p_0\,\dfrac{M!}{(M - k)!}\left(\dfrac{\lambda}{\mu}\right)^k, \qquad 0 \le k \le M$

$p_0 = \left[\displaystyle\sum_{k=0}^{M} \frac{M!}{(M - k)!}\left(\frac{\lambda}{\mu}\right)^k\right]^{-1}$

M/M/m (with $\rho = \lambda/m\mu$)

$p_k = \begin{cases} p_0\,\dfrac{(m\rho)^k}{k!}, & k \le m \\[4pt] p_0\,\dfrac{(m\rho)^m \rho^{\,k-m}}{m!}, & k \ge m \end{cases}$

$P[\text{queueing}] = \dfrac{\dfrac{(m\rho)^m}{m!}\left(\dfrac{1}{1 - \rho}\right)}{\displaystyle\sum_{k=0}^{m-1}\frac{(m\rho)^k}{k!} + \frac{(m\rho)^m}{m!}\left(\frac{1}{1 - \rho}\right)}$  (Erlang C formula)

M/M/m/m

$p_k = \dfrac{(\lambda/\mu)^k/k!}{\displaystyle\sum_{i=0}^{m} (\lambda/\mu)^i/i!}, \qquad 0 \le k \le m$

$p_m = \dfrac{(\lambda/\mu)^m/m!}{\displaystyle\sum_{k=0}^{m} (\lambda/\mu)^k/k!}$  (Erlang's loss formula)
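Both the loss formula and the C formula are straightforward to evaluate directly; the sketch below (function names are mine, not the text's) computes them for $m$ servers offered $a = \lambda/\mu$ erlangs:

```python
import math

def erlang_b(m, a):
    """Erlang's loss formula: blocking probability of M/M/m/m."""
    terms = [a**k / math.factorial(k) for k in range(m + 1)]
    return terms[m] / sum(terms)

def erlang_c(m, a):
    """Erlang C formula: P[queueing] in M/M/m, requires rho = a/m < 1."""
    rho = a / m
    top = (a**m / math.factorial(m)) / (1 - rho)
    return top / (sum(a**k / math.factorial(k) for k in range(m)) + top)

B = erlang_b(1, 0.5)   # single server, 0.5 erlangs offered
C = erlang_c(1, 0.5)
print(B, C)
```

As a sanity check, for $m = 1$ the C formula reduces to $P[\text{queueing}] = \rho$, and the loss formula to $\rho/(1 + \rho)$.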
M/D/1

$\bar{q} = \dfrac{\rho}{1 - \rho} - \dfrac{\rho^2}{2(1 - \rho)}$

$W = \dfrac{\rho\bar{x}}{2(1 - \rho)}$

Busy period, number served:
$f_n = \dfrac{(n\rho)^{n-1}}{n!}\, e^{-n\rho}$

$G(y) = \displaystyle\sum_{n:\, n\bar{x} \le y} \frac{(n\rho)^{n-1}}{n!}\, e^{-n\rho}$ (a step of height $f_n$ at each $y = n\bar{x}$)

$r$-stage Erlangian service:

$b(x) = \dfrac{r\mu(r\mu x)^{r-1}}{(r - 1)!}\, e^{-r\mu x}, \qquad x \ge 0$

$\dfrac{\sigma_b}{\bar{x}} = \dfrac{1}{r^{1/2}}$

For E$_r$/M/1 and M/E$_r$/1 the queue-length distributions are expressed through the roots $z_i$ (respectively, the root $z_0$) of the system's characteristic equation.

$R$-stage hyperexponential service:

$b(x) = \displaystyle\sum_{i=1}^{R} \alpha_i\,\mu_i e^{-\mu_i x}, \qquad x \ge 0$
MARKOVIAN NETWORKS

Traffic equations: $\lambda_i = \gamma_i + \displaystyle\sum_{j} \lambda_j r_{ji}$

(for closed networks, $\gamma_i = 0$)
RESIDUAL LIFE

Density of the selected interval: $f_{\hat{X}}(x) = \dfrac{x f(x)}{m_1}$

Residual-life density: $\hat{f}(y) = \dfrac{1 - F(y)}{m_1}$

$\hat{f}^*(s) = \dfrac{1 - F^*(s)}{s\, m_1}$

Failure rate: $r(x) = \dfrac{f(x)}{1 - F(x)}$
M/G/1

Number of arrivals $v$ during a service time:
$\bar{v} = \rho$, $\quad \overline{v^2} - \bar{v} = \lambda^2\overline{x^2}$, $\quad V(z) = B^*(\lambda - \lambda z)$

$\bar{q} = \rho + \rho^2\,\dfrac{1 + C_b^2}{2(1 - \rho)}$

$\dfrac{W}{\bar{x}} = \rho\,\dfrac{1 + C_b^2}{2(1 - \rho)}$  (P-K mean value formula)

$W = \dfrac{W_0}{1 - \rho}, \qquad W_0 = \dfrac{\lambda\overline{x^2}}{2}$  (P-K mean value formula)

$Q(z) = B^*(\lambda - \lambda z)\,\dfrac{(1 - \rho)(1 - z)}{B^*(\lambda - \lambda z) - z}$  (P-K transform equation)

$W^*(s) = \dfrac{s(1 - \rho)}{s - \lambda + \lambda B^*(s)}$

$S^*(s) = B^*(s)\,\dfrac{s(1 - \rho)}{s - \lambda + \lambda B^*(s)}$

Busy period:
$G^*(s) = B^*(s + \lambda - \lambda G^*(s))$

$G(y) = \displaystyle\int_0^y \sum_{n=1}^{\infty} e^{-\lambda x}\,\frac{(\lambda x)^{n-1}}{n!}\, b^{(n)}(x)\, dx$

$g_1 = \dfrac{\bar{x}}{1 - \rho}$

$g_2 = \dfrac{\overline{x^2}}{(1 - \rho)^3}$

$g_3 = \dfrac{\overline{x^3}}{(1 - \rho)^4} + \dfrac{3\lambda(\overline{x^2})^2}{(1 - \rho)^5}$

$g_4 = \dfrac{\overline{x^4}}{(1 - \rho)^5} + \dfrac{10\lambda\,\overline{x^2}\,\overline{x^3}}{(1 - \rho)^6} + \dfrac{15\lambda^2(\overline{x^2})^3}{(1 - \rho)^7}$

Number served in a busy period: mean $h_1 = \dfrac{1}{1 - \rho}$, variance $\dfrac{\rho(1 - \rho) + \lambda^2\overline{x^2}}{(1 - \rho)^3}$

Takacs integrodifferential equation (for $F(w, t) = P[U(t) \le w]$):
$\dfrac{\partial F(w, t)}{\partial t} = \dfrac{\partial F(w, t)}{\partial w} - \lambda F(w, t) + \lambda\displaystyle\int_0^w B(w - x)\, d_x F(x, t)$
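As a numerical illustration of the P-K mean value formula (a sketch with arbitrary rates): at fixed $\rho$, deterministic service ($C_b^2 = 0$) yields half the queueing delay of exponential service ($C_b^2 = 1$), since $W = \lambda\overline{x^2}/2(1-\rho)$ depends on service time only through its second moment.

```python
# W = lam * x2 / (2 * (1 - rho)), where x2 is the second moment of the
# service time; compare exponential and deterministic service.
lam, xbar = 0.8, 1.0
rho = lam * xbar
x2_exp = 2 * xbar**2                     # second moment, exponential service
x2_det = xbar**2                         # second moment, deterministic service
W_exp = lam * x2_exp / (2 * (1 - rho))   # M/M/1 mean wait
W_det = lam * x2_det / (2 * (1 - rho))   # M/D/1 mean wait
print(W_exp, W_det)
```

Here $W_{\text{exp}} = 4.0$ agrees with the M/M/1 result $\rho\bar{x}/(1-\rho)$, and $W_{\text{det}} = 2.0$ is exactly half of it.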
M/G/$\infty$

$T = \bar{x}$
$s(y) = b(y)$

G/M/1

$r_k = (1 - \sigma)\sigma^k, \qquad k = 0, 1, 2, \ldots$
$\sigma = A^*(\mu - \mu\sigma)$
$W(y) = 1 - \sigma e^{-\mu(1-\sigma)y}, \qquad y \ge 0$
$W = \dfrac{\sigma}{\mu(1 - \sigma)}$

G/M/m

Imbedded-chain transition probabilities:
$p_{ij} = 0$ for $j > i + 1$

When at most $m$ customers are present throughout the interarrival interval ($i + 1 \le m$):
$p_{ij} = \dbinom{i + 1}{j}\displaystyle\int_0^\infty e^{-j\mu t}\left[1 - e^{-\mu t}\right]^{i+1-j} dA(t)$

When all $m$ servers remain busy ($i + 1 \ge m$, $j \ge m$), with $n = i + 1 - j$ departures:
$p_{i,\,i+1-n} = \displaystyle\int_0^\infty e^{-m\mu t}\,\frac{(m\mu t)^n}{n!}\, dA(t), \qquad 0 \le n \le i + 1 - m$

$\sigma = A^*(m\mu - m\mu\sigma)$

$P[\text{queue size} = n \mid \text{arrival queues}] = (1 - \sigma)\sigma^n, \qquad n \ge 0$

The boundary probabilities $R_k$ ($k \le m - 2$) follow from $R_k = \sum_i R_i\, p_{ik}$ together with normalization. Conditioned on having to queue, the wait is exponentially distributed, so
$W(y) = 1 - P[\text{queueing}]\, e^{-m\mu(1-\sigma)y}, \qquad y \ge 0$
$W = \dfrac{P[\text{queueing}]}{m\mu(1 - \sigma)}$
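The functional equation $\sigma = A^*(\mu - \mu\sigma)$ is typically solved by direct iteration. A sketch (with arbitrary rates) for the special case of Poisson arrivals, where $A^*(s) = \lambda/(\lambda + s)$ and the root inside the unit interval is known to be $\sigma = \rho$:

```python
# Fixed-point iteration for sigma = A*(mu - mu*sigma) in G/M/1,
# specialized to exponential interarrivals (so the answer is sigma = rho).
lam, mu = 2.0, 5.0
A_star = lambda s: lam / (lam + s)   # Laplace transform of exponential a(t)
sigma = 0.5                          # any starting point in (0, 1)
for _ in range(200):
    sigma = A_star(mu - mu * sigma)
W = sigma / (mu * (1 - sigma))       # G/M/1 mean wait
print(sigma, W)
```

The iteration converges to $\sigma = \lambda/\mu = 0.4$, and the mean wait then matches the M/M/1 value, as it must for this choice of $A^*(s)$.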
G/G/1

Defining recursion:
$w_{n+1} = (w_n + u_n)^+, \qquad u_n = x_n - t_{n+1}$

$W(y) = \begin{cases} \displaystyle\int_{0^-}^{\infty} C(y - w)\, dW(w), & y \ge 0 \\[4pt] 0, & y < 0 \end{cases}$  (Lindley's integral equation)

Spectrum factorization:
$A^*(-s)B^*(s) - 1 = \dfrac{\Psi_+(s)}{\Psi_-(s)}$

The waiting-time transform then follows from $\Psi_+(s)$ alone, normalized through $\lim_{s \to 0} \Psi_+(s)/s$ (which also yields $W(0^+)$).

Exact mean wait in terms of the idle period $\tilde{I}$:
$W = \dfrac{\overline{u^2}}{-2\bar{u}} - \dfrac{\overline{I^2}}{2\bar{I}}$

$\tilde{w} = \sup_{n \ge 0} U_n$, where $U_0 = 0$ and $U_n = u_1 + u_2 + \cdots + u_n$

$W^*(s)$ may also be expressed in terms of the Laplace transform $I^*(s)$ of the idle-period density.
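The defining recursion lends itself directly to simulation. A sketch (arbitrary rates; M/M/1 is used as the test case so the exact answer is known, and agreement is only statistical):

```python
import random

# Simulate the Lindley recursion w_{n+1} = (w_n + x_n - t_{n+1})^+
# for an M/M/1 instance and compare the average wait with the exact W.
random.seed(1)
lam, mu = 3.0, 5.0
rho = lam / mu
w, total, n = 0.0, 0.0, 200_000
for _ in range(n):
    u = random.expovariate(mu) - random.expovariate(lam)  # u_n = x_n - t_{n+1}
    w = max(0.0, w + u)                                   # Lindley recursion
    total += w
W_sim = total / n
W_exact = rho / (mu * (1 - rho))
print(W_sim, W_exact)
```

With these rates $W = 0.3$ exactly; a run of this length typically lands within a few percent of that value.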
Index

state, 30
Erlang, 119, 286
  B formula, 106
  C formula, 103
  distribution, 72, 124
  loss formula, 106
E_r/M/1, 130-133, 405
E_r (r-stage Erlang distribution), 405
Event, 364
Exhaustive set of events, 365
Expectation, 13, 377-381
  fundamental theorem of, 379
Exponential distribution, 65-71
  coefficient of variation, 71
  Laplace transform, 70
  mean, 69
  memoryless property, 66-67
  variance, 70
Exponential function, 340
FCFS, 8
Fig flow example, 5
Final value theorem, 330, 346
Finite capacity, 4
Flow, 58
  conservation, 91-92
  rate, 59, 87, 91
  system, 3
  time, 12
Fluid approximation, 319
Forward Chapman-Kolmogorov equation, 42, 47, 49, 90
Foster's criteria, 30
Fourier transform, 321, 381
Function of a random variable, 375, 380
Gaussian distribution, 390
Generating function, 321, 327
  probabilistic interpretation, 262
Geometric distribution, 39
Geometric series, 328
Geometric transform, 327
G/G/1, 19, 275-312
  defining equation, 277
  mean wait, 306
  summary of results, 409
  waiting time transform, 307, 310
G/G/m, 11
Global-balance equations, 155
Glossary of Notation, 396-399
G/M/1, 251-253
  dual queue, 311
  mean wait, 252
  spectrum factorization, 292
  summary of results, 408
  waiting time distribution, 252
G/M/2, 256-259
  distribution of number of customers, 258
  distribution of waiting time, 258
G/M/m, 241-259
  conditional pdf for queueing time, 250
  conditional queue length distribution, 249
  functional equation, 249
  imbedded Markov chain, 241
  mean wait, 256
  summary of results, 408-409
  transition probabilities, 241-246
  waiting-time distribution, 255
Gremlin, 261
Group arrivals, see Bulk arrivals
Group service, see Bulk service
Heavy traffic approximation, 319
Hippie example, 26-27, 30-38
Homogeneous Markov chain, 27
Homogeneous solution, 355
H_R (R-stage hyperexponential distribution), 141, 405
Idle period, 206, 305, 311
  isolating effect, 281
Idle time, 304, 309
Imbedded Markov chain, 23, 167, 169, 241
  G/G/1, 276-279
  G/M/m, 241-246
  M/G/1, 174-177
Independence, 374
Independent process, 21
Independent random variables,
  product of functions, 386
  sums, 386
Inequality, Cauchy-Schwarz, 388
  Chebyshev, 388
  C_r, 389
  Holder, 389
  Jensen, 389
  Markov, 388
  triangle, 389
Infinitesimal generator, 48
Initial value theorem, 330, 346
M/M/m, 102-103, 259
  summary of results, 404
M/M/m/K/M, 108-109
Network flow theory, 5
Networks of Markovian queues, 147-160, 405
Non-nearest-neighbor, 116
No queue, 161-162, 315-316
Normal distribution, 390
Notation, 10-15, 396-399
Null event, 364
Number of customers, 11
Open queueing networks, 149
Paradox of residual life, 169-173
Parallel stages, 140-143
Parameter shift, 347
Partial-fraction expansion, 333-336, 349-352
Particular solution, 355
Periodic state, 28
Permutations, 367
Pineapple factory example, 4
Poisson, catastrophe, 267
  distribution, 60, 63
  process, 61-65
  mean, 62
  probabilistic interpretation, 262
  summary of results, 400
  variance, 62
  pure death process, 245
Pole, 291
  multiple, 350
  simple, 350
Pole-zero pattern, 292, 298
Pollaczek-Khinchin (P-K) mean value formula, 187, 191, 308
Pollaczek-Khinchin (P-K) transform equation, 194, 199, 200, 308
Power-iteration, 160
Priority queueing, 8, 319
Probabilistic arguments, 261
Probability density function (pdf), 13, 371, 374
Probability distribution function (PDF), 13, 369
Probability generating function, 385
Probability measure, 364
Probability system, 365
Probability theory, 10, 363-395
Processor-sharing algorithm, 320
Product notation, 334
Projection, 301
bilateral, 354
Fourier, 381
Hankel, 321
Laplace, 338-355
Mellin, 321
method of analysis, 324
two-sided, 383
z-transform, 327-338
Transient process, 94
Transient state, 28
Transition probability, G/M/m, 241-246
  M/G/1, 177-180
  matrix, 31
  m-step, 27
  one-step, 27
Transition-rate matrix, 48
Translation, 332, 345
Vacation, 239
Variance, 381
Vector transform, 35
Virtual waiting time, 206
  see also Unfinished work
Waiting time, 12
  complementary, 284
  transform, 290
Wendel projection, 301, 303
Wide-sense stationarity, 21, 395
Wiener-Hopf integral equation, 282
Work, 18
  see also Unfinished work
Zeroes, 291
z-Transform, 321, 327-338, 385
  inversion, inspection method, 333
  inversion formula, 336