
QUEUEING SYSTEMS

VOLUME I: THEORY

Leonard Kleinrock
Professor
Computer Science Department
School of Engineering and Applied Science
University of California, Los Angeles

A Wiley-Interscience Publication

John Wiley & Sons
New York · Chichester · Brisbane · Toronto

"Ah, 'All things come to those who wait.'
They come, but often come too late."

From Lady Mary M. Currie: Tout Vient
à Qui Sait Attendre (1890)


Copyright © 1975, by John Wiley & Sons, Inc.

All rights reserved. Published simultaneously in Canada.

Reproduction or translation of any part of this work beyond that permitted by Sections 107 or 108 of the 1976 United States Copyright Act without the permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the Permissions Department, John Wiley & Sons, Inc.

Library of Congress Cataloging in Publication Data:

Kleinrock, Leonard.
    Queueing systems.

    "A Wiley-Interscience publication."
    CONTENTS: v. 1. Theory.
    1. Queueing theory. I. Title.

T57.9.K6    519.8'2    74-9846

ISBN 0-471-49110-1

Preface

How much time did you waste waiting in line this week? It seems we cannot escape frequent delays, and they are getting progressively worse! In this text we study the phenomena of standing, waiting, and serving, and we call this study queueing theory.

Any system in which arrivals place demands upon a finite-capacity resource may be termed a queueing system. In particular, if the arrival times of these demands are unpredictable, or if the size of these demands is unpredictable, then conflicts for the use of the resource will arise and queues of waiting customers will form. The lengths of these queues depend upon two aspects of the flow pattern: first, they depend upon the average rate at which demands are placed upon the resource; and second, they depend upon the statistical fluctuations of this rate. Certainly, when the average rate exceeds the capacity, then the system breaks down and unbounded queues will begin to form; it is the effect of this average overload which then dominates the growth of queues. However, even if the average rate is less than the system capacity, then here, too, we have the formation of queues due to the statistical fluctuations and spurts of arrivals that may occur; the effect of these variations is greatly magnified when the average load approaches (but does not necessarily exceed) the system capacity. The simplicity of these queueing structures is deceptive, and in our studies we will often find ourselves in deep analytic waters. Fortunately, a familiar and fundamental law of science permeates our queueing investigations. This law is the conservation of flow, which states that the rate at which flow increases within a system is equal to the difference between the flow rate into and the flow rate out of that system. This observation permits us to write down the basic system equations for rather complex structures in a relatively easy fashion.

The purpose of this book, then, is to present the theory of queues at the first-year graduate level. It is assumed that the student has been exposed to a first course in probability theory; however, in Appendix II of this text we give a probability theory refresher and state the basic principles that we shall need. It is also helpful (but not necessary) if the student has had some exposure to transforms, although in this case we present a rather complete

transform theory refresher in Appendix I. The student is advised to read both appendices before proceeding with the text itself. Whereas our material is presented in the language of mathematics, we do take great pains to give as informal a presentation as possible in order to strike a balance between the abstractions usually encountered in such a study and the basic need for understanding and applying these tools to practical systems. We feel that a satisfactory middle ground has been established that will neither offend the mathematician nor confound the practitioner. At times we have relaxed the rigor in proofs of uniqueness, existence, and convergence in order not to cloud the main thrust of a presentation. At such times the reader is referred to some of the other books on the subject. We have refrained from using the dull "theorem-proof" approach; rather, we lead the reader through a natural sequence of steps and together we "discover" the result. One finds that previous presentations of this material are usually either too elementary and limited or far too elegant and precise, and almost all of them badly neglect the applications; we feel that a book such as this, which treads the boundary in between, is both necessary and useful. This book was written over a period of five years while being used as course notes for a one-year (and later a two-quarter) sequence in queueing systems at the University of California, Los Angeles. The material was developed in the Computer Science Department within the School of Engineering and Applied Science and has been tested successfully in the most critical and unforgiving of all environments, namely, that of the graduate student. This text is appropriate not only for computer science departments, but also for departments of engineering, operations research, mathematics, and many others within science, business, management, and planning schools.

In order to describe the contents of this text, we must first describe the very convenient shorthand notation that has been developed for the specification of queueing systems. It basically involves the three-part descriptor A/B/m that denotes an m-server queueing system, where A and B describe the interarrival time distribution and the service time distribution, respectively. A and B take on values from the following set of symbols, whose interpretation is given in terms of distributions within parentheses: M (exponential); E_r (r-stage Erlangian); H_R (R-stage hyperexponential); D (deterministic); G (general). Occasionally, some other specially defined symbols are used. We sometimes need to specify the system's storage capacity (which we denote by K) or perhaps the size of the customer population (which we denote by M), and in these cases we adopt the five-part descriptor A/B/m/K/M; if either of these last two descriptors is absent, then we assume it takes on the value of infinity. Thus, for example, the system D/M/2/20 is a two-server system with constant (deterministic) interarrival times, with exponentially distributed service times, and with a system storage capacity of size 20.
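The defaulting convention is easy to mechanize. The following minimal sketch (our own illustration; the class name and field layout are not from the text) records the five-part descriptor, with an omitted K or M taken to be infinite:

from dataclasses import dataclass
from math import inf

@dataclass
class Descriptor:
    """The A/B/m/K/M shorthand for a queueing system."""
    A: str          # interarrival-time distribution: M, E_r, H_R, D, G, ...
    B: str          # service-time distribution, from the same symbol set
    m: int          # number of servers
    K: float = inf  # storage capacity; infinite when the descriptor omits it
    M: float = inf  # customer population; infinite when omitted

# The example above: D/M/2/20 is a two-server system with deterministic
# interarrivals, exponential service, and storage capacity 20.
example = Descriptor(A="D", B="M", m=2, K=20)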


This is Volume I (Theory) of a two-volume series, the second of which is devoted to computer applications of this theory. The text of Volume I (which consists of four parts) begins in Chapter 1 with an introduction to queueing systems, how they fit into the general scheme of systems of flow, and a discussion of how one specifies and evaluates the performance of a queueing system. Assuming a knowledge of (or after reviewing) the material in Appendices I and II, the reader may then proceed to Chapter 2, where he is warned to take care! Section 2.1 is essential and simple. However, Sections 2.2, 2.3, and 2.4 are a bit "heavy" for a first reading in queueing systems, and it would be quite reasonable if the reader were to skip these sections at this point, proceeding directly to Section 2.5, in which the fundamental birth-death process is introduced and where we first encounter the use of z-transforms and Laplace transforms. Once these preliminaries in Part I are established one may proceed with the elementary queueing theory presented in Part II. We begin in Chapter 3 with the general equilibrium solution to birth-death processes and devote most of the chapter to providing simple yet important examples. Chapter 4 generalizes this treatment, and it is here where we discuss the method of stages and provide an introduction to networks of Markovian queues. Whereas Part II is devoted to algebraic and transform-oriented calculations, Part III returns us once again to probabilistic (as well as transform) arguments. This discussion of intermediate queueing theory begins with the important M/G/1 queue (Chapter 5) and then proceeds to the dual G/M/1 queue and its natural generalization to the system G/M/m (Chapter 6). The material on collective marks in Chapter 7 develops the probabilistic interpretation of transforms. Finally, the advanced material in Part IV leads us to the queue G/G/1 in Chapter 8; this difficult system (whose mean wait cannot even be expressed simply in terms of the system parameters) is studied through the use of the spectral solution to Lindley's integral equation. An approximation to the precedence structure among chapters in these two volumes is given below. In this diagram we have represented chapters in Volume I as numbers enclosed in circles and have used small squares for Volume II. The shading for the Volume I nodes indicates an appropriate amount of material for a relatively leisurely first course in queueing systems that can easily be accomplished in one semester or can be comfortably handled in a one-quarter course. The shading of Chapter 2 is meant to indicate that Sections 2.2-2.4 may be omitted on a first reading, and the same applies to Sections 8.3 and 8.4. A more rapid one-semester pace and a highly accelerated one-quarter pace would include all of Volume I in a single course. We close Volume I with a summary of important equations, developed throughout the book, which are grouped together according to the class of queueing system involved; this list of results then serves as a "handbook" for later use by the reader in concisely summarizing the principal results of this text.

[Precedence diagram for the chapters: Volume I chapters appear as circled numbers and Volume II chapters as small squares.]

The results are keyed to the page where they appear in order to simplify the task of locating the explanatory material associated with each result.

Each chapter contains its own list of references keyed alphabetically to the author and year; for example, [KLEI 74] would reference this book. All equations of importance have been marked with the symbol ■, and it is these which are included in the summary of important equations. Each chapter includes a set of exercises which, in some cases, extend the material in that chapter; the reader is urged to work them out.


the face of the real world's complicated models, the mathematicians proceeded to advance the field of queueing theory rapidly and elegantly. The frontiers of this research proceeded into the far reaches of deep and complex mathematics. It was soon found that the really interesting models did not yield to solution and the field quieted down considerably. It was mainly with the advent of digital computers that once again the tools of queueing theory were brought to bear on a class of practical problems, but this time with great success. The fact is that at present, one of the few tools we have for analyzing the performance of computer systems is that of queueing theory, and this explains its popularity among engineers and scientists today. A wealth of new problems are being formulated in terms of this theory and new tools and methods are being developed to meet the challenge of these problems. Moreover, the application of digital computers in solving the equations of queueing theory has spawned new interest in the field. It is hoped that this two-volume series will provide the reader with an appreciation for and competence in the methods of analysis and application as we now see them.

I take great pleasure in closing this Preface by acknowledging those individuals and institutions that made it possible for me to bring this book into being. First, I would like to thank all those who participated in creating the stimulating environment of the Computer Science Department at UCLA, which encouraged and fostered my effort in this direction. Acknowledgment is due the Advanced Research Projects Agency of the Department of Defense, which enabled me to participate in some of the most exciting and advanced computer systems and networks ever developed. Furthermore, the John Simon Guggenheim Foundation provided me with a Fellowship for the academic year 1971-1972, during which time I was able to further pursue my investigations. Hundreds of students who have passed through my queueing-systems courses have in major and minor ways contributed to the creation of this book, and I am happy to acknowledge the special help offered by Arne Nilsson, Johnny Wong, Simon Lam, Fouad Tobagi, Farouk Kamoun, Robert Rice, and Thomas Sikes. My academic and professional colleagues have all been very supportive of this endeavour. To the typists I owe all. By far the largest portion of this book was typed by Charlotte La Roche, and I will be forever in her debt. To Diana Skocypec and Cynthia Ellman I give my deepest thanks for carrying out the enormous task of proofreading and correction-making in a rapid, enthusiastic, and supportive fashion. Others who contributed in major ways are Barbara Warren, Jean Dubinsky, Jean D'Fucci, and Gloria Roy. I owe a great debt of thanks to my family (and especially to my wife, Stella) who have stood by me and supported me well beyond the call of duty or marriage contract. Lastly, I would certainly be remiss in omitting an acknowledgment to my ever-faithful dictating machine, which was constantly talking back to me.

LEONARD KLEINROCK

March, 1974

Contents

VOLUME I

PART I: PRELIMINARIES

Chapter 1  Queueing Systems  3
    1.1.  Systems of Flow  3
    1.2.  The Specification and Measure of Queueing Systems  8

Chapter 2  Some Important Random Processes  10
    2.1.  Notation and Structure for Basic Queueing Systems  10
    2.2.  Definition and Classification of Stochastic Processes  19
    2.3.  Discrete-Time Markov Chains  26
    2.4.  Continuous-Time Markov Chains  44
    2.5.  Birth-Death Processes  53

PART II: ELEMENTARY QUEUEING THEORY

Chapter 3  Birth-Death Queueing Systems in Equilibrium  89
    3.1.  General Equilibrium Solution  90
    3.2.  M/M/1: The Classical Queueing System  94
    3.3.  Discouraged Arrivals  99
    3.4.  M/M/∞: Responsive Servers (Infinite Number of Servers)  101
    3.5.  M/M/m: The m-Server Case  102
    3.6.  M/M/1/K: Finite Storage  103
    3.7.  M/M/m/m: m-Server Loss Systems  105
    3.8.  M/M/1//M: Finite Customer Population - Single Server  106
    3.9.  M/M/∞//M: Finite Customer Population - "Infinite" Number of Servers  107
    3.10. M/M/m/K/M: Finite Population, m-Server Case, Finite Storage  108

Chapter 4  Markovian Queues in Equilibrium  115
    4.1.  The Equilibrium Equations  115
    4.2.  The Method of Stages - Erlangian Distribution E_r  119
    4.3.  The Queue M/E_r/1  126
    4.4.  The Queue E_r/M/1  130
    4.5.  Bulk Arrival Systems  134
    4.6.  Bulk Service Systems  137
    4.7.  Series-Parallel Stages: Generalizations  139
    4.8.  Networks of Markovian Queues  147

PART III: INTERMEDIATE QUEUEING THEORY

Chapter 5  The Queue M/G/1  167
    5.1.  The M/G/1 System  168
    5.2.  The Paradox of Residual Life: A Bit of Renewal Theory  169
    5.3.  The Imbedded Markov Chain  174
    5.4.  The Transition Probabilities  177
    5.5.  The Mean Queue Length  180
    5.6.  Distribution of Number in System  191
    5.7.  Distribution of Waiting Time  196
    5.8.  The Busy Period and Its Duration  206
    5.9.  The Number Served in a Busy Period  216
    5.10. From Busy Periods to Waiting Times  219
    5.11. Combinatorial Methods  223
    5.12. The Takács Integrodifferential Equation  226

Chapter 6  The Queue G/M/m  241
    6.1.  Transition Probabilities for the Imbedded Markov Chain (G/M/m)  241
    6.2.  Conditional Distribution of Queue Size  246
    6.3.  Conditional Distribution of Waiting Time  250
    6.4.  The Queue G/M/1  251
    6.5.  The Queue G/M/m  253
    6.6.  The Queue G/M/2  256

Chapter 7  The Method of Collective Marks  261
    7.1.  The Marking of Customers  261
    7.2.  The Catastrophe Process  267

PART IV: ADVANCED MATERIAL

Chapter 8  The Queue G/G/1  275
    8.1.  Lindley's Integral Equation  275
    8.2.  Spectral Solution to Lindley's Integral Equation  283
    8.3.  Kingman's Algebra for Queues  299
    8.4.  The Idle Time and Duality  304

Epilogue  319

Appendix I: Transform Theory Refresher: z-Transform and Laplace Transform  321
    I.1.  Why Transforms?  321
    I.2.  The z-Transform  327
    I.3.  The Laplace Transform  338
    I.4.  Use of Transforms in the Solution of Difference and Differential Equations  355

Appendix II: Probability Theory Refresher  363
    II.1. Rules of the Game  363
    II.2. Random Variables  368
    II.3. Expectation  377
    II.4. Transforms, Generating Functions, and Characteristic Functions  381
    II.5. Inequalities and Limit Theorems  388
    II.6. Stochastic Processes  393

Glossary of Notation  396

Summary of Important Results  400

Index  411

VOLUME II

Chapter 1  A Queueing Theory Primer
    1.  Notation
    2.  General Results
    3.  Markov, Birth-Death, and Poisson Processes
    4.  The M/M/1 Queue
    5.  The M/M/m Queueing System
    6.  Markovian Queueing Networks
    7.  The M/G/1 Queue
    8.  The G/M/1 Queue
    9.  The G/M/m Queue
    10. The G/G/1 Queue

Chapter 2  Bounds, Inequalities and Approximations
    1.  The Heavy-Traffic Approximation
    2.  An Upper Bound for the Average Wait
    3.  Lower Bounds for the Average Wait
    4.  Bounds on the Tail of the Waiting Time Distribution
    5.  Some Remarks for G/G/m
    6.  A Discrete Approximation
    7.  The Fluid Approximation for Queues
    8.  Diffusion Processes
    9.  Diffusion Approximation for M/G/1
    10. The Rush-Hour Approximation

Chapter 3  Priority Queueing
    1.  The Model
    2.  An Approach for Calculating Average Waiting Times
    3.  The Delay Cycle, Generalized Busy Periods, and Waiting Time Distributions
    4.  Conservation Laws
    5.  The Last-Come-First-Serve Queueing Discipline
    6.  Head-of-the-Line Priorities
    7.  Time-Dependent Priorities
    8.  Optimal Bribing for Queue Position
    9.  Service-Time-Dependent Disciplines

Chapter 4  Computer Time-Sharing and Multiaccess Systems
    1.  Definitions and Models
    2.  Distribution of Attained Service
    3.  The Batch Processing Algorithm
    4.  The Round-Robin Scheduling Algorithm
    5.  The Last-Come-First-Serve Scheduling Algorithm
    6.  The FB Scheduling Algorithm
    7.  The Multilevel Processor Sharing Scheduling Algorithm
    8.  Selfish Scheduling Algorithms
    9.  A Conservation Law for Time-Shared Systems
    10. Tight Bounds on the Mean Response Time
    11. Finite Population Models
    12. Multiple-Resource Models
    13. Models for Multiprogramming
    14. Remote Terminal Access to Computers

Chapter 5  Computer-Communication Networks
    1.  Resource Sharing
    2.  Some Contrasts and Trade-Offs
    3.  Network Structures and Packet Switching
    4.  The ARPANET - An Operational Description of an Existing Network
    5.  Definitions, the Model, and the Problem Statements
    6.  Delay Analysis
    7.  The Capacity Assignment Problem
    8.  The Traffic Flow Assignment Problem
    9.  The Capacity and Flow Assignment Problem
    10. Some Topological Considerations - Applications to the ARPANET
    11. Satellite Packet Switching
    12. Ground Radio Packet Switching

Chapter 6  Computer-Communication Networks: Measurement, Flow Control and ARPANET Traps
    1.  Simulation and Routing
    2.  Early ARPANET Measurements
    3.  Flow Control
    4.  Lockups, Degradations and Traps
    5.  Network Throughput
    6.  One Week of ARPANET Data
    7.  Line Overhead in the ARPANET
    8.  Recent Changes to the Flow Control Procedure
    9.  The Challenge of the Future

Glossary

Summary of Results

Index

QUEUEING SYSTEMS
VOLUME I: THEORY

PART I: PRELIMINARIES

It is difficult to see the forest for the trees (especially if one is in a mob
rather than in a well-ordered queue). Likewise, it is often difficult to see the
impact of a collection of mathematical results as you try to master them; it is
only after one gains the understanding and appreciation for their application
to real-world problems that one can say with confidence that he understands
the use of a set of tools.
The two chapters contained in this preliminary part are each extreme in
opposite directions. The first chapter gives a global picture of where queueing
systems arise and why they are important. Entertaining examples are provided
as we lure the reader on. In the second chapter, on random processes, we
plunge deeply into mathematical definitions and techniques (quickly losing
sight of our long-range goals); the reader is urged not to falter under this
siege since it is perhaps the worst he will meet in passing through the text.
Specifically, Chapter 2 begins with some very useful graphical means for
displaying the dynamics of customer behavior in a queueing system. We then
introduce stochastic processes through the study of customer arrival, behavior, and backlog in a very general queueing system and carefully lead the
reader to one of the most significant results in queueing theory, namely,
Little's result, using very simple arguments. Having thus introduced the
concept of a stochastic process we then offer a rather compact treatment
which compares many well-known (but not well-distinguished) processes and
casts them in a common terminology and notation, leading finally to Figure
2.4 in which we see the basic relationships among these processes; the reader
is quickly brought to realize the central role played by the Poisson process
because of its position as the common intersection of all the stochastic
processes considered in this chapter. We then give a treatment of Markov
chains in discrete and continuous time; these sections are perhaps the toughest sledding for the novice, and it is perfectly acceptable if he passes over some
of this material on a first reading. At the conclusion of Section 2.4 we find
ourselves face to face with the important birth-death processes and it is here


where things begin to take on a relationship to physical systems once again. In fact, it is not unreasonable for the reader to begin with Section 2.5 of this chapter since the treatment following is (almost) self-contained from there throughout the rest of the text. Only occasionally do we find a need for the more detailed material in Sections 2.3 and 2.4. If the reader perseveres through Chapter 2 he will have set the stage for the balance of the textbook.

Queueing Systems

One of life's more disagreeable activities, namely, waiting in line, is the delightful subject of this book. One might reasonably ask, "What does it profit a man to study such unpleasant phenomena?" The answer, of course, is that through understanding we gain compassion, and it is exactly this which we need since people will be waiting in longer and longer queues as civilization progresses, and we must find ways to tolerate these unpleasant situations. Think for a moment how much time is spent in one's daily activities waiting in some form of a queue: waiting for breakfast; stopped at a traffic light; slowed down on the highways and freeways; delayed at the entrance to one's parking facility; queued for access to an elevator; standing in line for the morning coffee; holding the telephone as it rings, and so on. The list is endless, and too often, so are the queues.

The orderliness of queues varies from place to place around the world. For example, the English are terribly susceptible to formation of orderly queues, whereas some of the Mediterranean peoples consider the idea ludicrous (have you ever tried clearing the embarkation procedure at the Port of Brindisi?). A common slogan in the U.S. Army is, "Hurry up and wait." Such is the nature of the phenomena we wish to study.
1.1. SYSTEMS OF FLOW

Queueing systems represent an example of a much broader class of interesting dynamic systems, which, for convenience, we refer to as "systems of flow." A flow system is one in which some commodity flows, moves, or is transferred through one or more finite-capacity channels in order to go from one point to another. For example, consider the flow of automobile traffic through a road network, or the transfer of goods in a railway system, or the streaming of water through a dam, or the transmission of telephone or telegraph messages, or the passage of customers through a supermarket checkout counter, or the flow of computer programs through a time-sharing computer system. In these examples the commodities are the automobiles, the goods, the water, the telephone or telegraph messages, the customers, and the programs, respectively; the channel or channels are the road network,


the railway network, the dam, the telephone or telegraph network, the supermarket checkout counter, and the computer processing system, respectively. The "finite capacity" refers to the fact that the channel can satisfy the demands (placed upon it by the commodity) at a finite rate only. It is clear that the analyses of many of these systems require analytic tools drawn from a variety of disciplines and, as we shall see, queueing theory is just one such discipline.

When one analyzes systems of flow, they naturally break into two classes: steady and unsteady flow. The first class consists of those systems in which the flow proceeds in a predictable fashion. That is, the quantity of flow is exactly known and is constant over the interval of interest; the time when that flow appears at the channel, and how much of a demand that flow places upon the channel, is known and constant. These systems are trivial to analyze in the case of a single channel. For example, consider a pineapple factory in which empty tin cans are being transported along a conveyor belt to a point at which they must be filled with pineapple slices and must then proceed further down the conveyor belt for additional operations. In this case, assume that the cans arrive at a constant rate of one can per second and that the pineapple-filling operation takes nine-tenths of one second per can. These numbers are constant for all cans and all filling operations. Clearly this system will function in a reliable and smooth fashion as long as the assumptions stated above continue to hold. We may say that the arrival rate R is one can per second and the maximum service rate (or capacity) C is 1/0.9 = 1.111... filling operations per second. The example above is for the case R < C. However, if we have the condition R > C, we all know what happens: cans and/or pineapple slices begin to inundate and overflow in the factory! Thus we see that the mean capacity of the system must exceed the average flow requirements if chaotic congestion is to be avoided; this is true for all systems of flow. This simple observation tells most of the story. Such systems are of little interest theoretically.
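To make the arithmetic concrete: with R = 1 can per second and C = 1/0.9 cans per second, the filling station is occupied a fraction R/C = 1 × 0.9 = 0.9 of the time. This ratio of offered load to capacity is precisely the quantity that reappears at the end of Section 2.1 as the utilization factor ρ = λx̄ (here λ = 1 arrival per second and x̄ = 0.9 sec of work per arrival).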
The more interesting case of steady flow is that of a network of channels. For stable flow, we obviously require that R < C on each channel in the network. However, we now run into some serious combinatorial problems. For example, let us consider a railway network in the fictitious land of Hatafla; see Figure 1.1. The scenario here is that figs grown in the city of Abra must be transported to the destination city of Cadabra, making use of the railway network shown. The numbers on each channel (section of railway) in Figure 1.1 refer to the maximum number of bushels of figs which that channel can handle per day. We are now confronted with the following fig flow problem: How many bushels of figs per day can be sent from Abra to Cadabra, and in what fashion shall this flow of figs take place?

[Figure 1.1 Maximal flow problem: a railway network connecting Abra to Cadabra through the intermediate cities Zeus, Nonabel, Sucsamad, and Oriac, with each channel labeled by its capacity in bushels of figs per day.]


sett led by a well-kno wn result in net work flow theory referred to as the
max-flow-min-cut theorem. To state this theo rem, we first define a cut as a
set of channel s which, once removed from the network , will separate all
possible flow from the origin (Abra) to the destination (Cadabra). We define
the capacity of such a cut to be the total fig flow that can travel acro ss that cut
in th e direction from origin to destination . For exa mple, one cut con sists of
the bran ches from Ab ra to Zeus, Sucsam ad to Zeus , and Sucsamad to Oriac ;
the cap acit y of this cut is clearly 23 bushels of figs per day. The max-flowmin-cut the orem states th at the maximum flow that can pass bet ween an
origin and a destin ation is the minimum capacity of all cuts. In our example
it can be seen th at the maximum flow is therefore 21 bu shels of figs per day
(work it out). In general, one must consider all cut s that sepa rate a given
origin and destination. This computation can be enormously time consuming.
Fortunately, there exists an extremely powerful method for finding not only
what is the maximum flow, but also which flow pattern ach ieves th is maximum flow. This procedure is known as the labeling algorithm (d ue to Ford
and F ulkerson [FORD 62]) a nd is efficient in tha t th e computational requ irement grows as a small power of the number of nodes ; we present the algor ithm
in Volume II , Ch apt er 5.
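Since Figure 1.1 survives here only as a caption, a small computational sketch can stand in for "working it out." The capacities below are our own invention, chosen only so that the cut named above has capacity 9 + 6 + 8 = 23 and the maximum flow comes out to 21; max_flow is an augmenting-path routine (the breadth-first Edmonds-Karp variant of the Ford-Fulkerson labeling idea), not the labeling algorithm as the book presents it:

from collections import deque

def max_flow(cap, source, sink):
    """Repeatedly find a shortest augmenting path in the residual graph
    and push the bottleneck capacity along it; stop when none remains."""
    res = {u: dict(vs) for u, vs in cap.items()}    # residual capacities
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)  # zero reverse edges
    flow = 0
    while True:
        parent, q = {source: None}, deque([source])
        while q and sink not in parent:             # breadth-first labeling
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow                             # no path: flow is maximal
        path, v = [], sink
        while parent[v] is not None:                # walk the labels back
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:                           # update residual graph
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Hypothetical capacities (bushels of figs per day); the true numbers are in
# Figure 1.1, which is not reproduced here.
hatafla = {
    "Abra":     {"Zeus": 9, "Sucsamad": 14},
    "Sucsamad": {"Zeus": 6, "Oriac": 8},
    "Zeus":     {"Nonabel": 13},
    "Nonabel":  {"Cadabra": 15},
    "Oriac":    {"Cadabra": 8},
    "Cadabra":  {},
}
print(max_flow(hatafla, "Abra", "Cadabra"))   # -> 21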
In addition to maximal flow problems, one can pose numerous other interesting and worthwhile questions regarding flow in such networks. For example, one might inquire into the minimal cost network which will support a given flow if we assign costs to each of the channels. Also, one might ask the same questions in networks where more than one origin and destination exist. Complicating matters further, we might insist that a given network support flow of various kinds, for example, bushels of figs, cartons of cartridges, and barrels of oil. This multicommodity flow problem is an extremely difficult one, and its solution typically requires considerable computational effort. These and numerous other significant problems in network flow theory are addressed in the comprehensive text by Frank and Frisch [FRAN 71] and we shall see them again in Volume II, Chapter 5. Network flow theory itself requires methods from graph theory, combinatorial mathematics, optimization theory, mathematical programming, and heuristic programming.
The second class into which systems of flow may be divided is the class of random or stochastic flow problems. By this we mean that the times at which demands for service (use of the channel) arrive are uncertain or unpredictable, and also that the size of the demands themselves that are placed upon the channel are unpredictable. The randomness, unpredictability, or unsteady nature of this flow lends considerable complexity to the solution and understanding of such problems. Furthermore, it is clear that most real-world systems fall into this category. Again, the simplest case is that of random flow through a single channel; whereas in the case of deterministic or steady flow discussed earlier the single-channel problems were trivial, we have now a case where these single-channel problems are extremely challenging and, in fact, techniques for solution to the single-channel or single-server problem comprise much of modern queueing theory.

For example, consider the case of a computer center in which computation requests are served making use of a batch service system. In such a system, requests for computation arrive at unpredictable times, and when they do arrive, they may well find the computer busy servicing other demands. If, in fact, the computer is idle, then typically a new demand will begin service and will be run until it is completed. On the other hand, if the system is busy, then this job will wait on a queue until it is selected for service from among those that are waiting. Until that job is carried to completion, it is usually the case that neither the computation center nor the individual who has submitted the program knows the extent of the demand in terms of computational effort that this program will place upon the system; in this sense the service requirement is indeed unpredictable.
A variety of natural questions present themselves to which we would like intelligent and complete answers. How long, for example, may a job expect to wait on queue before entering service? How many jobs will be serviced before the one just submitted? For what fraction of the day will the computation center be busy? How long will the intervals of continual busy work extend? Such questions require answers regarding the probability of certain periods and numbers or perhaps merely the average values for these quantities. Additional considerations, such as machine breakdown (a not uncommon condition), complicate the issue further; in this case it is clear that some preemptive event prevents the completion of the job currently in service. Other interesting effects can take place where jobs are not serviced according to their order of arrival. Time-shared computer systems, for example, employ rather complex scheduling and servicing algorithms, which, in fact, we explore in Volume II, Chapter 4.

The tools necessary for solving single-channel random-flow problems are


contained and described within queueing theory, to which much of this text devotes itself. This requires a background in probability theory as well as an understanding of complex variables and some of the usual transform-calculus methods; this material is reviewed in Appendices I and II.

As in the case of deterministic flow, we may enlarge our scope of problems to that of networks of channels in which random flow is encountered. An example of such a system would be that of a computer network. Such a system consists of computers connected together by a set of communication lines where the capacity of these lines for carrying information is finite. Let us return to the fictitious land of Hatafla and assume that the railway network considered earlier is now in fact a computer network. Assume that users located at Abra require computational effort on the facility at Cadabra. The particular times at which these requests are made are themselves unpredictable, and the commands or instructions that describe these requests are also of unpredictable length. It is these commands which must be transmitted to Cadabra over our communication net as messages. When a message is inserted into the network at Abra, and after an appropriate decision rule (referred to as a routing procedure) is accessed, then the message proceeds through the network along some path. If a portion of this path is busy, and it may well be, then the message must queue up in front of the busy channel and wait for it to become free. Constant decisions must be made regarding the flow of messages and routing procedures. Hopefully, the message will eventually emerge at Cadabra, the computation will be performed, and the results will then be inserted into the network for delivery back at Abra.

It is clear that the problems exemplified by our computer network involve a variety of extremely complex queueing problems, as well as network flow and decision problems. In an earlier work [KLEI 64] the author addressed himself to certain aspects of these questions. We develop the analysis of these systems later in Volume II, Chapter 5.

Having thus classified* systems of flow, we hope that the reader understands where in the general scheme of things the field of queueing theory may be placed. The methods from this theory are central to analyzing most stochastic flow problems, and it is clear from an examination of the current literature that the field, and in particular its applications, are growing in a viable and purposeful fashion.

* The classification described above places queueing systems within the class of systems of flow. This approach identifies and emphasizes the fields of application for queueing theory. An alternative approach would have been to place queueing theory as belonging to the field of applied stochastic processes; this classification would have emphasized the mathematical structure of queueing theory rather than its applications. The point of view taken in this two-volume book is the former one, namely, with application of the theory as its major goal rather than extension of the mathematical formalism and results.


1.2. THE SPECIFICATION AND MEASURE OF QUEUEING SYSTEMS
In order to completely specify a queueing system, one must identify the stochastic processes that describe the arriving stream as well as the structure and discipline of the service facility. Generally, the arrival process is described in terms of the probability distribution of the interarrival times of customers and is denoted A(t), where*

A(t) = P[time between arrivals ≤ t]    (1.1)

The assumption in most of queueing theory is that these interarrival times are independent, identically distributed random variables (and, therefore, the stream of arrivals forms a stationary renewal process; see Chapter 2). Thus, only the distribution A(t), which describes the time between arrivals, is usually of significance. The second statistical quantity that must be described is the amount of demand these arrivals place upon the channel; this is usually referred to as the service time, whose probability distribution is denoted by B(x), that is,

B(x) = P[service time ≤ x]    (1.2)

Here service time refers to the length of time that a customer spends in the service facility.
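For instance (a standard special case we can state with certainty, though the text introduces it formally only later), the symbol M of the Preface corresponds to a Poisson arrival stream, whose interarrival times are exponentially distributed with mean 1/λ:

A(t) = 1 - e^{-λt},    t ≥ 0

The analogous exponential form B(x) = 1 - e^{-μx}, with mean service time 1/μ, describes the service symbol M; choosing both gives the M/M/1 system studied in Chapter 3.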
Now regarding the structure and discipline of the service facility, one must specify a variety of additional quantities. One of these is the extent of storage capacity available to hold waiting customers, and typically this quantity is described in terms of the variable K; often K is taken to be infinite. An additional specification involves the number of service stations available, and if more than one is available, then perhaps the distribution of service time will differ for each, in which case the distribution B(x) will include a subscript to indicate that fact. On the other hand, it is sometimes the case that the arriving stream consists of more than one identifiable class of customers; in such a case the interarrival distribution A(t) as well as the service distribution B(x) may each be characteristic of each class and will be identified again by use of a subscript on these distributions. Another important structural description of a queueing system is that of the queueing discipline; this describes the order in which customers are taken from the queue and allowed into service. For example, some standard queueing disciplines are first-come-first-serve (FCFS), last-come-first-serve (LCFS), and random order of service. When the arriving customers are distinguishable according to groups, then we encounter the case of priority queueing disciplines, in which priority

* The notation P[A] denotes, as usual, the "probability of the event A."


among groups may be established. A further statement regarding the availability of the service facility is also necessary in case the service facility is occasionally required to pay attention to other tasks (as, for example, its own breakdown). Beyond this, queueing systems may enjoy customer behavior in the form of defections from the queue, jockeying among the many queues, balking before entering a queue, bribing for queue position, cheating for queue position, and a variety of other interesting and not-unexpected humanlike characteristics. We will encounter these as we move through the text in an orderly fashion (first-come-first-serve according to page number).

Now that we have indicated how one must specify a queueing system, it is appropriate that we identify the measures of performance and effectiveness that we shall obtain by analysis. Basically, we are interested in the waiting time for a customer, the number of customers in the system, the length of a busy period (the continuous interval during which the server is busy), the length of an idle period, and the current work backlog expressed in units of time. All these quantities are random variables and thus we seek their complete probabilistic description (i.e., their probability distribution function). Usually, however, to give the distribution function is to give more than one can easily make use of. Consequently, we often settle for the first few moments (mean, variance, etc.).

Happily, we shall begin with simple considerations and develop the tools in a straightforward fashion, paying attention to the essential details of analysis. In the following pages we will encounter a variety of simple queueing problems, simple at least in the sense of description and usually rather sophisticated in terms of solution. However, in order to do this properly, we first devote our efforts in the following chapter to describing some of the important random processes that make up the arrival and service processes in our queueing systems.
REFERENCES

FORD 62  Ford, L. R. and D. R. Fulkerson, Flows in Networks, Princeton University Press (Princeton, N.J.), 1962.

FRAN 71  Frank, H. and I. T. Frisch, Communication, Transmission, and Transportation Networks, Addison-Wesley (Reading, Mass.), 1971.

KLEI 64  Kleinrock, L., Communication Nets; Stochastic Message Flow and Delay, McGraw-Hill (New York), 1964, out of print. Reprinted by Dover (New York), 1972.

Some Important Random Processes*

We assume that the reader is familiar with the basic elementary notions, terminology, and concepts of probability theory. The particular aspects of that theory which we require are presented in summary fashion in Appendix II to serve as a review for those readers desiring a quick refresher and reminder; it is recommended that the material therein be reviewed, especially Section II.4 on transforms, generating functions, and characteristic functions. Included in Appendix II are the following important definitions, concepts, and results:

Sample space, events, and probability.
Conditional probability, statistical independence, the law of total probability, and Bayes' theorem.
A real random variable, its probability distribution function (PDF), its probability density function (pdf), and their simple properties.
Events related to random variables and their probabilities.
Joint distribution functions.
Functions of a random variable and their density functions.
Expectation.
Laplace transforms, generating functions, and characteristic functions and their relationships and properties.†
Inequalities and limit theorems.
Definition of a stochastic process.

* Sections 2.2, 2.3, and 2.4 may be skipped on a first reading.
† Appendix I is a transform theory refresher. This material is also essential to the proper understanding of this text.

2.1. NOTATION AND STRUCTURE FOR BASIC QUEUEING SYSTEMS

Before we plunge headlong into a step-by-step development of queueing theory from its elementary notions to its intermediate and then finally to some advanced material, it is important first that we understand the basic


structure of queues. Also, we wish to provide the reader a glimpse as to where we are heading in this journey.

It is our purpose in this section to define some notation, both symbolic and graphic, and then to introduce one of the basic stochastic processes that we find in queueing systems. Further, we will derive a simple but significant result, which relates some first moments of importance in these systems. In so doing, we will be in a position to define the quantities and processes that we will spend many pages studying later in the text.

The system we consider is the very general queueing system G/G/m; recall (from the Preface) that this is a system whose interarrival time distribution A(t) is completely arbitrary and whose service time distribution B(x) is also completely arbitrary (all interarrival times and service times are assumed to be independent of each other). The system has m servers and order of service is also quite arbitrary (in particular, it need not be first-come-first-serve). We focus attention on the flow of customers as they arrive, pass through, and eventually leave this system; as such, we choose to number the customers with the subscript n and define C_n as follows:

C_n denotes the nth customer to enter the system    (2.1)

Thus, we may portray our system as in Figure 2.1, in which the box represents the queueing system and the flow of customers both in and out of the system is shown.

[Figure 2.1 A general queueing system: customers C_n flow into a box representing the queueing system and depart from it after service.]

One can immediately define some random processes of interest. For example, we are interested in N(t) where*

N(t) ≜ number of customers in the system at time t    (2.2)

Another stochastic process of interest is the unfinished work U(t) that exists in the system at time t, that is,

U(t) ≜ the unfinished work in the system at time t
     ≜ the remaining time required to empty the system of all customers present at time t    (2.3)

Whenever U(t) > 0, then the system is said to be busy, and only when U(t) = 0 is the system said to be idle. The duration and location of these busy and idle periods are also quantities of interest.

* The notation ≜ is to be read as "equals by definition."


The details of these stochastic processes may be observed first by defining the following variables and then by displaying these variables on an appropriate time diagram to be discussed below. We begin with the definitions. Recalling that the nth customer is denoted by C_n, we define his arrival time to the queueing system as

τ_n ≜ arrival time for C_n    (2.4)

and the interarrival time between C_{n-1} and C_n as

t_n ≜ τ_n - τ_{n-1}    (2.5)

These interarrival times are drawn from the distribution

P[t_n ≤ t] = A(t)    (2.6)

which is independent of n. Similarly, we define the service time for C_n as

x_n ≜ service time for C_n    (2.7)

and from our assumptions we have

P[x_n ≤ x] = B(x)    (2.8)

The sequences {t_n} and {x_n} may be thought of as input variables for our queueing system; the way in which the system handles these customers gives rise to queues and waiting times that we must now define. Thus, we define the waiting time (time spent in the queue)* as

w_n ≜ waiting time (in queue) for C_n    (2.9)

The total time spent in the system by C_n is the sum of his waiting time and service time, which we denote by

s_n ≜ system time (queue plus service) for C_n = w_n + x_n    (2.10)

Thus we have defined for the nth customer his arrival time, "his" interarrival time, his service time, his waiting time, and his system time. We find it

* The terms "waiting time" and "queueing time" have conflicting definitions within the body of queueing-theory literature. The former sometimes refers to the total time spent in system, and the latter then refers to the total time spent on queue; however, these two definitions are occasionally reversed. We attempt to remove that confusion by defining waiting and queueing time to be the same quantity, namely, the time spent waiting on queue (but not being served); a more appropriate term perhaps would be "wasted time." The total time spent in the system will be referred to as "system time" (occasionally known as "flow time").


expedient at this point to elaborate somewhat further on notation. Let us consider the interarrival time t_n once again. We will have occasion to refer to the limiting random variable t̃ defined by

t̃ ≜ lim_{n→∞} t_n    (2.11)

which we denote by t_n → t̃. (We have already required that the interarrival times t_n have a distribution independent of n, but this will not necessarily be the case with many other random variables of interest.) The typical notation for the probability distribution function (PDF) will be

P[t_n ≤ t] = A_n(t)    (2.12)

and for the limiting PDF

P[t̃ ≤ t] = A(t)    (2.13)

This we denote by A_n(t) → A(t); of course, for the interarrival time we have assumed that A_n(t) = A(t), which gives rise to Eq. (2.6). Similarly, the probability density function (pdf) for t_n and t̃ will be a_n(t) and a(t), respectively, and will be denoted as a_n(t) → a(t). Finally, the Laplace transform (see Appendix II) of these pdf's will be denoted by A_n*(s) and A*(s), respectively, with the obvious notation A_n*(s) → A*(s). The use of the letter A (and a) is meant as a cue to remind the reader that they refer to the interarrival time. Of course, the moments of the interarrival time are of interest and they will be denoted as follows*:

E[t_n] ≜ t̄_n    (2.14)

According to our usual notation, the mean interarrival time for the limiting random variable will be given† by t̄ in the sense that t̄_n → t̄. As it turns out, t̄, which is the average interarrival time between customers, is used so frequently in our equations that a special notation has been adopted as follows:

t̄ ≜ 1/λ    (2.15)

Thus λ represents the average arrival rate of customers to our queueing system. Higher moments of the interarrival time are also of interest and so we define the kth moment by

E[t̃^k] ≜ a_k,    k = 0, 1, 2, ...    (2.16)

* The notation E[ ] denotes the expectation of the quantity within square brackets. As shown, we also adopt the overbar notation to denote expectation.
† Actually, we should use the notation t with both a tilde and a bar, but this is excessive and will be simplified to t̄. The same simplification will be applied to many of our other random variables.


In this last equation we have introduced the definition of a_k as the kth moment of the interarrival time t̃; this is fairly standard notation and we note immediately from the above that

t̄ = 1/λ = a_1 = a    (2.17)

That is, three special notations exist for the mean interarrival time; in particular, the use of the symbol a is very common and various of these forms will be used throughout the text as appropriate. Summarizing the information with regard to the interarrival time, we have the following shorthand glossary:

t_n = interarrival time between C_{n-1} and C_n
t_n → t̃,    A_n(t) → A(t),    a_n(t) → a(t),    A_n*(s) → A*(s)
t̄_n → t̄ = 1/λ = a_1 = a,    E[t̃^k] = a_k    (2.18)

In a similar manner we identify the notation associated with x_n, w_n, and s_n as follows:

x_n = service time for C_n
x_n → x̃,    B_n(x) → B(x),    b_n(x) → b(x),    B_n*(s) → B*(s)
x̄_n → x̄ = 1/μ = b_1 = b,    E[x̃^k] = b_k    (2.19)

w_n = waiting time for C_n
w_n → w̃,    W_n(y) → W(y),    w_n(y) → w(y),    W_n*(s) → W*(s)
w̄_n → w̄ = W    (2.20)

s_n = system time for C_n
s_n → s̃,    S_n(y) → S(y),    s_n(y) → s(y),    S_n*(s) → S*(s)
s̄_n → s̄ = T    (2.21)

All this notation is self-evident except perhaps for the occasional special symbols used for the first moment and occasionally the higher moments of the random variables involved (that is, the use of the symbols λ, a, μ, b, W, and T). The reader is, at this point, directed to the Glossary for a complete set of notation used in this book.
With the above notation we now suggest a time-diagram notation for queues, which permits a graphical view of the dynamics of our queueing system and also provides the details of the underlying stochastic processes. This diagram is shown in Figure 2.2.

[Figure 2.2 Time-diagram notation for queues: a lower horizontal time line represents the queue and an upper one the server; the arrivals of C_n, C_{n+1}, and C_{n+2} are marked, together with the waiting time w_n and system time s_n of C_n.]

This particular figure is shown for a


first-come-first-serve order of service, but it is easy to see how the figure may also be made to represent any order of service. In this time diagram the lower horizontal time line represents the queue and the upper horizontal time line represents the service facility; moreover, the diagram shown is for the case of a single server, although this too is easily generalized. An arrow approaching the queue (or service) line from below indicates that an arrival has occurred to the queue (or service facility). Arrows emanating from the line indicate the departure of a customer from the queue (or service facility). In this figure we see that customer C_{n+1} arrives before customer C_n enters service; only when C_n departs from service may C_{n+1} enter service and, of course, these two events occur simultaneously. Notice that when C_{n+2} enters the system he finds it empty and so immediately proceeds through an empty queue directly into the service facility. In this diagram we have also shown the waiting time and the system time for C_n (note that w_{n+2} = 0). Thus, as time proceeds we can identify the number of customers in the system N(t), the unfinished work U(t), and also the idle and busy periods. We will find much use for this time-diagram notation in what follows.
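The dynamics pictured in Figure 2.2 are easy to generate mechanically for a single-server FCFS system, since each customer waits exactly until the server finishes with his predecessor. A minimal sketch (our own illustration; the function name and sample times are invented, and the underlying recurrence is the one that reappears as Lindley's equation in Chapter 8):

def fcfs_times(t, x):
    """Given interarrival times t[n] (t[0] is the arrival epoch of C_0) and
    service times x[n], return the waiting times w[n] and system times s[n]
    of each customer C_n in a single-server FCFS queue."""
    w, s = [], []
    arrival = free_at = 0.0
    for tn, xn in zip(t, x):
        arrival += tn                     # tau_n, the arrival time of C_n
        wn = max(0.0, free_at - arrival)  # wait until the server frees up
        w.append(wn)
        s.append(wn + xn)                 # s_n = w_n + x_n
        free_at = arrival + wn + xn       # departure epoch of C_n
    return w, s

# Three customers; the third arrives to an empty system and so waits zero
# time, just as w_{n+2} = 0 in Figure 2.2.
print(fcfs_times([0.0, 1.0, 5.0], [2.0, 2.0, 1.0]))
# -> ([0.0, 1.0, 0.0], [2.0, 3.0, 1.0])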
In a general queueing system one expects that when the number of customers is large then so is the waiting time. One manifestation of this is a very simple relationship between the mean number in the queueing system, the mean arrival rate of customers to that system, and the mean system time for customers. It is our purpose next to derive that relationship and thereby familiarize ourselves a bit further with the underlying behavior of these systems. Referring back to Figure 2.1, let us position ourselves at the input of the queueing system and count how many customers enter as a function of time. We denote this by α(t) where

α(t) ≜ number of arrivals in (0, t)    (2.22)


[Figure 2.3 Arrivals and departures: sample staircase functions α(t) and δ(t) plotted against time, with the number of customers on the vertical axis.]


Alternatively, we may position ourselves at the output of the queueing system and count the number of departures that leave; this we denote by

δ(t) ≜ number of departures in (0, t)    (2.23)

Sample functions for these two stochastic processes are shown in Figure 2.3. Clearly N(t), the number in the system at time t, must be given by

N(t) = α(t) - δ(t)

On the other hand, the total area between these two curves up to some point, say t, represents the total time all customers have spent in the system (measured in units of customer-seconds) during the interval (0, t); let us denote this cumulative area by γ(t). Moreover, let λ_t be defined as the average arrival rate (customers per second) during the interval (0, t); that is,

λ_t = α(t)/t    (2.24)

We may define T_t as the system time per customer averaged over all customers in the interval (0, t); since γ(t) represents the accumulated customer-seconds up to time t, we may divide by the number of arrivals up to that point to obtain

T_t = γ(t)/α(t)

Lastly, let us define N̄_t as the average number of customers in the queueing system during the interval (0, t); this may be obtained by dividing the accumulated number of customer-seconds by the total interval length t,


thusly

y(t)

N,= -

From the se last three equ ations we see

N, = A,T,
Let us now assume that our queueing system is such that the following limits
exist as t → ∞:

λ = lim_{t→∞} λ_t

T = lim_{t→∞} T_t

Note that we are using our former definitions for λ and T representing the
average customer arrival rate and the average system time, respectively. If
these last two limits exist, then so will the limit for N_t, which we denote by N,
now representing the average number of customers in the system; that is,

N = λT    (2.25)

This last is the result we were seeking and is known as Little's result. It states
that the average number of customers in a queueing system is equal to the
average arrival rate of customers to that system, times the average time spent
in that system.* The above proof does not depend upon any specific assumptions
regarding the arrival distribution A(t) or the service time distribution
B(x); nor does it depend upon the number of servers in the system or upon the
particular queueing discipline within the system. This result existed as a
"folk theorem" for many years; the first to establish its validity in a formal
way was J. D. C. Little [LITT 61] with some later simplifications by W. S.
Jewell [JEWE 67], S. Eilon [EILO 69], and S. Stidham [STID 74]. It is important
to note that we have not precisely defined the boundary around our
queueing system. For example, the box in Figure 2.1 could apply to the entire
system composed of queue and server, in which case N and T as defined refer
to quantities for the entire system; on the other hand, we could have considered
the boundary of the queueing system to contain only the queue itself, in
which case the relationship would have been

N_q = λW    (2.26)

where N_q represents the average number of customers in the queue and, as
defined earlier, W refers to the average time spent waiting in the queue. As a
third possible alternative the queueing system defined could have surrounded

* An intuitive proof of Little's result depends on the observation that an arriving customer should find the same average number, N, in the system as he leaves behind upon
his departure. This latter quantity is simply the arrival rate λ times his average time in
system, T.


only the server (or servers) itself; in this case our equation would have reduced
to

N_s = λx̄    (2.27)

where N_s refers to the average number of customers in the service facility
(or facilities) and x̄, of course, refers to the average time spent in the service
box. Note that it is always true that

T = x̄ + W    (2.28)

The queueing system could refer to a specific class of customers, perhaps
based on priority or some other attribute of this class, in which case the same
relationship would apply. In other words, the average arrival rate of customers
to a "queueing system" times the average time spent by customers in that
"system" is equal to the average number of customers in the "system,"
regardless of how we define that "system."
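As a quick numerical check of this result, the following sketch computes λ_t, T_t, and N_t from a finite trace of arrival and departure instants and confirms the identity N_t = λ_t T_t of Little's result. The Python code and the trace are ours, purely for illustration; the instants are hypothetical.

    # A sketch of Little's result on a finite trace in which every arrival
    # has departed by time t: gamma(t) is the accumulated customer-seconds.
    arrivals   = [0.0, 1.0, 1.5, 4.0, 5.5]   # jump instants of alpha(t)
    departures = [2.0, 3.0, 3.5, 6.0, 7.0]   # jump instants of delta(t)
    t = 10.0                                 # observation interval (0, t)

    gamma = sum(d - a for a, d in zip(arrivals, departures))

    lam_t = len(arrivals) / t       # average arrival rate over (0, t)
    T_t   = gamma / len(arrivals)   # average system time per customer
    N_t   = gamma / t               # time-average number in system

    print(N_t, lam_t * T_t)         # identical: 0.95 and 0.95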
We now discuss a basic parameter ρ, which is commonly referred to as the
utilization factor. The utilization factor is in a fundamental sense really the
ratio R/C, which we introduced in Chapter I. It is the ratio of the rate at
which "work" enters the system to the maximum rate (capacity) at which the
system can perform this work; the work an arriving customer brings into the
system equals the number of seconds of service he requires. So, in the case of
a single-server system, the definition for ρ becomes

ρ ≜ (average arrival rate of customers) × (average service time)
  = λx̄    (2.29)

This last is true since a single-server system has a maximum capacity for
doing work, which equals 1 sec/sec, and each arriving customer brings an
amount of work equal to x̄ sec; since, on the average, λ customers arrive per
second, then λx̄ sec of work are brought in by customers each second that
passes, on the average. In the case of multiple servers (say, m servers) the
definition remains the same when one considers the ratio R/C, where now the
work capacity of the system is m sec/sec; expressed in terms of system parameters
we then have

ρ = λx̄/m    (2.30)

Equations (2.29) and (2.30) apply in the case when the maximum service
rate is independent of the system state; if this is not the case, then a more
careful definition must be provided. The rate at which work enters the
system is sometimes referred to as the traffic intensity of the system and is
usually expressed in Erlangs; in single-server systems, the utilization factor is
equal to the traffic intensity whereas for (m) multiple servers, the traffic
intensity equals mρ. So long as 0 ≤ ρ < 1, then ρ may be interpreted as

ρ = E[fraction of busy servers]    (2.31)


[In the case of an infinite number of servers, the utilization factor ρ plays no
important part, and instead we are interested in the number of busy servers
(and its expectation).]

Indeed, for the system G/G/1 to be stable, it must be that R < C, that is,
0 ≤ ρ < 1. Occasionally, we permit the case ρ = 1 within the range of
stability (in particular for the system D/D/1). Stability here once again refers
to the fact that limiting distributions for all random variables of interest
exist, and that all customers are eventually served. In such a case we may
carry out the following simple calculation. We let τ be an arbitrarily long
time interval; during this interval we expect (by the law of large numbers)
with probability 1 that the number of arrivals will be very nearly equal to λτ.
Moreover, let us define p₀ as the probability that the server is idle at some
randomly selected time. We may, therefore, say that during the interval τ,
the server is busy for τ − τp₀ sec, and so with probability 1, the number of
customers served during the interval τ is very nearly (τ − τp₀)/x̄. We may
now equate the number of arrivals to the number served during this interval,
which gives, for large τ,

λτ ≅ (τ − τp₀)/x̄

Thus, as τ → ∞ we have λx̄ = 1 − p₀; using Definition (2.29) we finally
have the important conclusion for G/G/1

ρ = 1 − p₀    (2.32)

The interpretation here is that ρ is merely the fraction of time the server is
busy; this supports the conclusion in Eq. (2.27) in which λx̄ = ρ was shown
equal to the average number of customers in the service facility.
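Equation (2.32) is easy to observe in simulation. The sketch below, our own illustration rather than anything from the text, generates a single-server FCFS queue with Poisson arrivals and exponential service (an M/M/1 choice made only for concreteness) and measures the fraction of time the server is busy, which should approach ρ = λx̄.

    import random

    # A sketch: busy fraction of a single FCFS server versus rho = lam*xbar.
    random.seed(1)
    lam, xbar, T_END = 0.5, 1.0, 2.0e5      # rho = 0.5 (hypothetical values)

    arrival = random.expovariate(lam)       # first arrival instant
    free_at = 0.0                           # when the server next goes idle
    busy = 0.0                              # accumulated busy time
    while arrival < T_END:
        start = max(arrival, free_at)       # service begins when server free
        service = random.expovariate(1.0 / xbar)
        free_at = start + service
        busy += service
        arrival += random.expovariate(lam)

    print(busy / T_END)                     # approaches rho = 0.5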
This, then, is a rapid look at an overall queueing system in which we have
exposed some of the basic stochastic processes, as well as some of the
important definitions and notation we will encounter. Moreover, we have
established Little's result, which permits us to calculate the average number
in the system once we have calculated the average time in the system (or vice
versa). Now let us move on to a more careful study of the important stochastic
processes in our queueing systems.
2.2.* DEFINITION AND CLASSIFICATION OF STOCHASTIC PROCESSES

At the end of Appendix II a definition is given for a stochastic process,
which in essence states that it is a family of random variables X(t) where the
random variables are "indexed" by the time parameter t. For example, the
number of people sitting in a movie theater as a function of time is a
stochastic process, as is also the atmospheric pressure in that movie theater
as a function of time (at least those functions may be modeled as stochastic
processes). Often we refer to a stochastic process as a random process. A
random process may be thought of as describing the motion of a particle in
some space. The classification of a random process depends upon three
quantities: the state space; the index (time) parameter; and the statistical
dependencies among the random variables X(t) for different values of the
index parameter t. Let us discuss each of these in order to provide the general
framework for random processes.

* The reader may choose to skip Sections 2.2, 2.3, and 2.4 at this point and move directly
to Section 2.5. He may then refer to this material only as he feels he needs to in the balance
of the text.
First we consider the state space. The set of possible values (or states) that
X(t) may take on is called its state space. Referring to our analogy with regard
to the motion of a particle, if the positions that particle may occupy are
finite or countable, then we say we have a discrete-state process, often
referred to as a chain. The state space for a chain is usually the set of integers
{0, 1, 2, ...}. On the other hand, if the permitted positions of the particle
are over a finite or infinite continuous interval (or set of such intervals), then
we say that we have a continuous-state process.

Now for the index (time) parameter. If the permitted times at which changes
in position may take place are finite or countable, then we say we have a
discrete-(time) parameter process; if these changes in position may occur
anywhere within (a set of) finite or infinite intervals on the time axis, then we
say we have a continuous-parameter process. In the former case we often write
X_n rather than X(t). X_n is often referred to as a random or stochastic sequence
whereas X(t) is often referred to as a random or stochastic process.
The truly distinguishing feature of a stochastic process is the relationship
of the random variables X(t) or X_n to other members of the same family. As
defined in Appendix II, one must specify the complete joint distribution
function among the random variables (which we may think of as vectors
denoted by the use of boldface) X = [X(t₁), X(t₂), ...], namely,

F_X(x; t) ≜ P[X(t₁) ≤ x₁, ..., X(t_n) ≤ x_n]    (2.33)

for all x = (x₁, x₂, ..., x_n), t = (t₁, t₂, ..., t_n), and n. As mentioned there,
this is a formidable task; fortunately, many interesting stochastic processes
permit a simpler description. In any case, it is the function F_X(x; t) that really
describes the dependencies among the random variables of the stochastic
process. Below we describe some of the usual types of stochastic processes
that are characterized by different kinds of dependency relations among their
random variables. We provide this classification in order to give the reader a
global view of this field so that he may better understand in which particular


regions he is operating as we proceed with our study of queueing theory and
its related stochastic processes.

(a) Stationary Processes. As we discuss at the very end of Appendix II,
a stochastic process X(t) is said to be stationary if F_X(x; t) is invariant to
shifts in time for all values of its arguments; that is, given any constant τ
the following must hold:

F_X(x; t + τ) = F_X(x; t)    (2.34)

where the notation t + τ is defined as the vector (t₁ + τ, t₂ + τ, ..., t_n + τ).
An associated notion, that of wide-sense stationarity, is identified with the
random process X(t) if merely both the first and second moments are independent
of the location on the time axis, that is, if E[X(t)] is independent of t
and if E[X(t)X(t + τ)] depends only upon τ and not upon t. Observe that all
stationary processes are wide-sense stationary, but not conversely. The
theory of stationary random processes is, as one might expect, simpler than
that for nonstationary processes.
(b) Independent Processes. The simplest and most trivial stochastic
process to consider is the random sequence in which {X_n} forms a set of
independent random variables; that is, the joint pdf defined for our stochastic
process in Appendix II must factor into the product, thusly

p_{X₁, X₂, ..., X_n}(x₁, x₂, ..., x_n) = p_{X₁}(x₁) p_{X₂}(x₂) ··· p_{X_n}(x_n)    (2.35)

In this case we are stretching things somewhat by calling such a sequence a
random process since there is no structure or dependence among the random
variables. In the case of a continuous random process, such an independent
process may be defined, and it is commonly referred to as "white noise"
(an example is the time derivative of Brownian motion).
(c) Markov Processes. In 1907 A. A. Markov published a paper [MARK
07] in which he defined and investigated the properties of what are now
known as Markov processes. In fact, what he created was a simple and
highly useful form of dependency among the random variables forming a
stochastic process, which we now describe.

A Markov process with a discrete state space is referred to as a Markov
chain. The discrete-time Markov chain is the easiest to conceptualize and
understand. A set of random variables {X_n} forms a Markov chain if the
probability that the next value (state) is x_{n+1} depends only upon the current
value (state) x_n and not upon any previous values. Thus we have a random
sequence in which the dependency extends backwards one unit in time. That


is, the way in which the entire past history affects the future of the process is
completely summarized in the current value of the process.

In the case of a discrete-time Markov chain the instants when state changes
may occur are preordained to be at the integers 0, 1, 2, ..., n, .... In the
case of the continuous-time Markov chain, however, the transitions between
states may take place at any instant in time. Thus we are led to consider the
random variable that describes how long the process remains in its current
(discrete) state before making a transition to some other state. Because the
Markov property insists that the past history be completely summarized in
the specification of the current state, then we are not free to require that a
specification also be given as to how long the process has been in its current
state! This imposes a heavy constraint on the distribution of time that the
process may remain in a given state. In fact, as we shall see in Eq. (2.85),
this state time must be exponentially distributed. In a real sense, then, the
exponential distribution is a continuous distribution which is "memoryless"
(we will discuss this notion at considerable length later in this chapter).
Similarly, in the discrete-time Markov chain, the process may remain in the
given state for a time that must be geometrically distributed; this is the only
discrete probability mass function that is memoryless. This memoryless
property is required of all Markov chains and restricts the generality of the
processes one would like to consider.
Expressed analytically the Markov property may be written as

P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n, X(t_{n−1}) = x_{n−1}, ..., X(t₁) = x₁]
    = P[X(t_{n+1}) = x_{n+1} | X(t_n) = x_n]    (2.36)

where t₁ < t₂ < ··· < t_n < t_{n+1} and x_i is included in some discrete state
space.
The consideration of Markov processes is central to the study of queueing
theory and much of this text is devoted to that study. Therefore, a good
portion of this chapter deals with discrete- and continuous-time Markov
chains.
(d) Birth-death Processes. A very important special class of Markov
chains has come to be known as the birth-death process. These may be either
discrete- or continuous-time processes in which the defining condition is that
state transitions take place between neighboring states only. That is, one may
choose the set of integers as the discrete state space (with no loss of generality)
and then the birth-death process requires that if X_n = i, then X_{n+1} = i − 1,
i, or i + 1 and no other. As we shall see, birth-death processes have played a
significant role in the development of queueing theory. For the moment,
however, let us proceed with our general view of stochastic processes to see
how each fits into the general scheme of things.


(e) Semi-Markov Processes. We begin by discussing discrete-time
semi-Markov processes. The discrete-time Markov chain had the property
that at every unit interval on the time axis the process was required to make a
transition from the current state to some other state (possibly back to the
same state). The transition probabilities were completely arbitrary; however,
the requirement that a transition be made at every unit time (which really
came about because of the Markov property) leads to the fact that the time
spent in a state is geometrically distributed [as we shall see in Eq. (2.66)].
As mentioned earlier, this imposes a strong restriction on the kinds of
processes we may consider. If we wish to relax that restriction, namely, to
permit an arbitrary distribution of time the process may remain in a state,
then we are led directly into the notion of a discrete-time semi-Markov
process; specifically, we now permit the times between state transitions to
obey an arbitrary probability distribution. Note, however, that at the instants
of state transitions, the process behaves just like an ordinary Markov chain
and, in fact, at those instants we say we have an imbedded Markov chain.

Now the definition of a continuous-time semi-Markov process follows
directly. Here we permit state transitions at any instant in time. However, as
opposed to the Markov process which required an exponentially distributed
time in state, we now permit an arbitrary distribution. This then affords us
much greater generality, which we are happy to employ in our study of
queueing systems. Here, again, the imbedded Markov process is defined at
those instants of state transition. Certainly, the class of Markov processes is
contained within the class of semi-Markov processes.
(f) Random Walks. In the study of random processes one often encounters
a process referred to as a random walk. A random walk may be
thought of as a particle moving among states in some (say, discrete) state
space. What is of interest is to identify the location of the particle in that state
space. The salient feature of a random walk is that the next position the
process occupies is equal to the previous position plus a random variable
whose value is drawn independently from an arbitrary distribution; this
distribution, however, does not change with the state of the process.* That is,
a sequence of random variables {S_n} is referred to as a random walk (starting
at the origin) if

S_n = X₁ + X₂ + ··· + X_n    n = 1, 2, ...    (2.37)

where S₀ = 0 and X₁, X₂, ... is a sequence of independent random variables
with a common distribution. The index n merely counts the number of state
transitions the process goes through; of course, if the instants of these
transitions are taken from a discrete set, then we have a discrete-time random

* Except perhaps at some boundary states.


walk, whereas if they are taken from a continuum, then we have a continuous-time
random walk. In any case, we assume that the interval between these
transitions is distributed in an arbitrary way and so a random walk is a
special case of a semi-Markov process.* In the case when the common
distribution for X_n is a discrete distribution, then we have a discrete-state
random walk; in this case the transition probability p_{ij} of going from state i
to state j will depend only upon the difference in indices j − i (which we
denote by q_{j−i}).

An example of a continuous-time random walk is that of Brownian motion;
in the discrete-time case an example is the total number of heads observed in a
sequence of independent coin tosses.

A random walk is occasionally referred to as a process with "independent
increments."
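As a tiny illustration of Eq. (2.37), the following sketch generates one sample path of a discrete-state random walk; the ±1 step distribution is a hypothetical choice of ours, not one named in the text.

    import random

    # S_n = X_1 + ... + X_n with i.i.d. steps X_n drawn as +1 or -1.
    random.seed(3)
    S, path = 0, [0]                    # S_0 = 0: start at the origin
    for n in range(10):
        S += random.choice([+1, -1])    # X_n independent, common distribution
        path.append(S)
    print(path)                         # positions S_0 through S_10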
(g) Renewal Processes. A renewal process is related† to a random walk.
However, the interest is not in following a particle among many states but
rather in counting transitions that take place as a function of time. That is,
we consider the real time axis on which is laid out a sequence of points; the
distribution of time between adjacent points is an arbitrary common distribution
and each point corresponds to an instant of a state transition. We
assume that the process begins in state 0 [i.e., X(0) = 0] and increases by
unity at each transition epoch; that is, X(t) equals the number of state transitions
that have taken place by t. In this sense it is a special case of a random
walk in which q₁ = 1 and q_i = 0 for i ≠ 1. We may think of Eq. (2.37) as
describing a renewal process in which S_n is the random variable denoting the
time at which the nth transition takes place. As earlier, the sequence {X_n} is a
set of independent identically distributed random variables where X_n now
represents the time between the (n − 1)th and nth transitions. One should be
careful to distinguish the interpretation of Eq. (2.37) when it applies to
renewal processes as here and when it applies to a random walk as earlier.
The difference is that here in the renewal process the equation describes the
time of the nth renewal or transition, whereas in the random walk it describes
the state of the process and the time between state transitions is some other
random variable.

An important example of a renewal process is the set of arrival instants
to the G/G/m queue. In this case, X_n is identified with the interarrival time.

* Usually, the distribution of time between intervals is of little concern in a random walk;
emphasis is placed on the value (position) S_n after n transitions. Often, it is assumed that
this distribution of interval time is memoryless, thereby making the random walk a special
case of Markov processes; we are more generous in our definition here and permit an
arbitrary distribution.
† It may be considered to be a special case of the random walk as defined in (f) above. A
renewal process is occasionally referred to as a recurrent process.

[Figure 2.4 Relationships among the interesting random processes (a Venn diagram). SMP: Semi-Markov process (p_{ij} arbitrary, f_τ arbitrary); MP: Markov process (f_τ memoryless); RW: Random walk (p_{ij} = q_{j−i}); RP: Renewal process (q₁ = 1); BD: Birth-death process.]
So there we have it: a self-consistent classification of some interesting
stochastic processes. In order to aid the reader in understanding the relationship
among Markov processes, semi-Markov processes, and their special
cases, we have prepared the diagram of Figure 2.4, which shows this relationship
for discrete-state systems. The figure is in the form of a Venn diagram.
Moreover, the symbol p_{ij} denotes the probability of making a transition next
to state j given that the process is currently in state i. Also, f_τ denotes the
distribution of time between transitions; to say that "f_τ is memoryless"
implies that if it is a discrete-time process, then f_τ is a geometric distribution,
whereas if it is a continuous-time process, then f_τ is an exponential distribution.
Furthermore, it is implied that f_τ may be a function both of the
current and the next state for the process.
The figure shows that birth-death processes form a subset of Markov
processes, which themselves form a subset of the class of semi-Markov
processes. Similarly, renewal processes form a subset of random walk
processes, which also are a subset of semi-Markov processes. Moreover,
there are some renewal processes that may also be classified as birth-death

processes. Similarly, those Markov processes for which p_{ij} = q_{j−i} (that is,
where the transition probabilities depend only upon the difference of the
indices) overlap those random walks where f_τ is memoryless. A random walk
for which f_τ is memoryless and for which q_{j−i} = 0 when |j − i| > 1 overlaps
the class of birth-death processes. If in addition to this last requirement our
random walk has q₁ = 1, then we have a process that lies at the intersection
of all five of the processes shown in the figure. This is referred to as a "pure
birth" process; although f_τ must be memoryless, it may be a distribution
which depends upon the state itself. If f_τ is independent of the state (thus
giving a constant "birth rate") then we have a process that is figuratively and
literally at the "center" of the study of stochastic processes and enjoys the
nice properties of each! This very special case is referred to as the Poisson
process and plays a major role in queueing theory. We shall develop its
properties later in this chapter.
So much for the classification of stochastic processes at this point. Let us
now elaborate upon the definition and properties of discrete-state Markov
processes. This will lead us naturally into some of the elementary queueing
systems. Some of the required theory behind the more sophisticated continuous-state
Markov processes will be developed later in this work as the need
arises. We begin with the simpler discrete-state, discrete-time Markov
chains in the next section and follow that with a section on discrete-state,
continuous-time Markov chains.

2.3. DISCRETE-TIME MARKOV CHAINS*


As we have said, Markov processes may be used to describe the motion of
a particle in some space. We now consider discrete-time Markov chains,
which permit the particle to occupy discrete positions and permit transitions
between these positions to take place only at discrete times. We present the
elements of the theory by carrying along the following contemporary example.

Consider the hippie who hitchhikes from city to city across the country.
Let X_n denote the city in which we find our hippie at noon on day n. When
he is in some particular city i, he will accept the first ride leaving in the
evening from that city. We assume that the travel time between any two cities
is negligible. Of course, it is possible that no ride comes along, in which
case he will remain in city i until the next evening. Since vehicles heading for
various neighboring cities come along in some unpredictable fashion, the
hippie's position at some time in the future is clearly a random variable.
It turns out that this random variable may properly be described through the
use of a Markov chain.

* See footnote on p. 19.


We have the following definition:

DEFINITION: The sequence of random variables X₁, X₂, ... forms a
discrete-time Markov chain if for all n (n = 1, 2, ...) and all possible
values of the random variables we have (for i₁ < i₂ < ··· < i_n) that

P[X_n = j | X₁ = i₁, X₂ = i₂, ..., X_{n−1} = i_{n−1}]
    = P[X_n = j | X_{n−1} = i_{n−1}]    (2.38)

In terms of our example, this definition merely states that the city next to be
visited by the hippie depends only upon the city in which he is currently
located and not upon all the previous cities he has visited. In this sense the
memory of the random process, or Markov chain, goes back only to the
most recent position of the particle (hippie). When X_n = j (the hippie is in
city j on day n), then the system is said to be in state E_j at time n (or at the
nth step). To get our hippie started on day 0 we begin with some initial probability
distribution P[X₀ = j]. The expression on the right side of Eq. (2.38)
is referred to as the (one-step) transition probability and gives the conditional
probability of making a transition from state E_{i_{n−1}} at step n − 1 to state E_j
at the nth step in the process. It is clear that if we are given the initial state
probability distribution and the transition probabilities, then we can
uniquely find the probability of being in various states at time n [see Eqs.
(2.55) and (2.56) below].

If it turns out that the transition probabilities are independent of n, then
we have what is referred to as a homogeneous Markov chain and in that case
we make the further definition
p_{ij} ≜ P[X_n = j | X_{n−1} = i]    (2.39)

which gives the probability of going to state E_j on the next step, given that
we are currently at state E_i. What follows refers to homogeneous Markov
chains only. These chains are such that their transition probabilities are
stationary with time*; therefore, given the current city or state (pun) the
probability of various states m steps into the future depends only upon m and
not upon the current time; it is expedient to define the m-step transition
probabilities as

p_{ij}^(m) ≜ P[X_{n+m} = j | X_n = i]    (2.40)
From the Markov property given in Eq. (2.38) it is easy to establish the following
recursive formula for calculating p_{ij}^(m):

p_{ij}^(m) = Σ_k p_{ik}^(m−1) p_{kj}    m = 2, 3, ...    (2.41)

This equation merely says that if we are to travel from E_i to E_j in m steps,

* Note that although this is a Markov process with stationary transitions, it need not be a
stationary random process.


then we must do so by first traveling from E_i to some state E_k in m − 1
steps and then from E_k to E_j in one more step; the probability of these last
two independent events (remember this is a Markov chain) is the product of
the probability of each, and if we sum this product over all possible intermediate
states E_k, we arrive at p_{ij}^(m).
We say that a Markov chain is irreducible* if every state can be reached
from every other state; that is, for each pair of states (E_i and E_j) there exists
an integer m₀ (which may depend upon i and j) such that

p_{ij}^(m₀) > 0

Further, let A be the set of all states in a Markov chain. Then a subset of
states A₁ is said to be closed if no one-step transition is possible from any
state in A₁ to any state in A₁ᶜ (the complement of the set A₁). If A₁ consists of
a single state, say E_i, then it is called an absorbing state; a necessary and
sufficient condition for E_i to be an absorbing state is p_{ii} = 1. If A is closed
and does not contain any proper subset which is closed, then we have an
irreducible Markov chain as defined above. On the other hand, if A contains
proper subsets that are closed, then the chain is said to be reducible. If a
closed subset of a reducible Markov chain contains no closed subsets of
itself, then it is referred to as an irreducible sub-Markov chain; these subchains
may be studied independently of the other states.
It may be that our hippie prefers not to return to a previously visited city.
However, due to his mode of travel this may well happen, and it is important
for us to define this quantity. Accordingly, let

f_j^(n) ≜ P[first return to E_j occurs n steps after leaving E_j]

It is then clear that the probability of our hippie ever returning to city j is
given by

f_j = Σ_{n=1}^∞ f_j^(n) = P[ever returning to E_j]

It is now possible to classify states of a Markov chain according to the value
obtained for f_j. In particular, if f_j = 1 then state E_j is said to be recurrent;
if, on the other hand, f_j < 1, then state E_j is said to be transient. Furthermore,
if the only possible steps at which our hippie can return to state E_j are
γ, 2γ, 3γ, ... (where γ > 1 and is the largest such integer), then state E_j is
said to be periodic with period γ; if γ = 1, then E_j is aperiodic.

Considering states for which f_j = 1, we may then define the mean recurrence
time of E_j as

M_j ≜ Σ_{n=1}^∞ n f_j^(n)    (2.42)

* Many of the interesting Markov chains which one encounters in queueing theory are
irreducible.


This is merely the average time to return to E_j. With this we may then classify
states even further. In particular, if M_j = ∞, then E_j is said to be recurrent
null, whereas if M_j < ∞, then E_j is said to be recurrent nonnull. Let us define
π_j^(n) to be the probability of finding the system in state E_j at the nth step,
that is,

π_j^(n) ≜ P[X_n = j]    (2.43)
We may now state (without proof) two important theorems. The first
comments on the set of states for an irreducible Markov chain.

Theorem 1. The states of an irreducible Markov chain are either all
transient or all recurrent nonnull or all recurrent null. If periodic, then
all states have the same period γ.

Assuming that our hippie wanders forever, he will pass through the various
cities of the nation many times, and we inquire as to whether or not there
exists a stationary probability distribution {π_j} describing his probability of
being in city j at some time arbitrarily far into the future. [A probability
distribution P_j is said to be a stationary distribution if when we choose it for
our initial state distribution (that is, π_j^(0) = P_j) then for all n we will have
π_j^(n) = P_j.] Solving for {π_j} is a most important part of the analysis of
Markov chains. Our second theorem addresses itself to this question.

Theorem 2. In an irreducible and aperiodic homogeneous Markov
chain the limiting probabilities

π_j = lim_{n→∞} π_j^(n)    (2.44)

always exist and are independent of the initial state probability distribution.
Moreover, either

(a) all states are transient or all states are recurrent null, in which
case π_j = 0 for all j and there exists no stationary distribution,
or

(b) all states are recurrent nonnull and then π_j > 0 for all j, in
which case the set {π_j} is a stationary probability distribution and

π_j = 1/M_j    (2.45)

In this case the quantities π_j are uniquely determined through the
following equations

Σ_j π_j = 1    (2.46)

π_j = Σ_i π_i p_{ij}    (2.47)


We now introduce the notion of ergodicity. A state E_j is said to be ergodic
if it is aperiodic, recurrent, and nonnull; that is, if f_j = 1, M_j < ∞, and
γ = 1. If all states of a Markov chain are ergodic, then the Markov chain
itself is said to be ergodic. Moreover, a Markov chain is said to be ergodic if
the probability distribution {π_j^(n)} as a function of n always converges to a
limiting stationary distribution {π_j}, which is independent of the initial state
distribution. It is easy to show that all states of a finite* aperiodic irreducible
Markov chain are ergodic. Moreover, among Foster's criteria [FELL 66]
it can be shown that an irreducible and aperiodic Markov chain is ergodic if
the set of linear equations given in Eq. (2.47) has a nonnull solution for which
Σ_j |π_j| < ∞. The limiting probabilities {π_j} of an ergodic Markov chain are
often referred to as the equilibrium probabilities in the sense that the effect of
the initial state distribution π_j^(0) has disappeared.
By way of example, let's place the hippie in our fictitious land of Hatafla,
and let us consider the network given in Figure 1.1 of Chapter I. In order to
simplify this example we will assume that the cities of Nonabel, Cadabra,
and Oriac have been bombed out and that the resultant road network is as
given in Figure 2.5. In this figure the ordered links represent permissible
directions of road travel; the numbers on these links represent the probability
(p_{ij}) that the hippie will be picked up by a car traveling over that road, given
that he is hitchhiking from the city where the arrow emanates. Note that
from the city of Sucsamad our hippie has probability 1/2 of remaining in that
city until the next day. Such a diagram is referred to as a state-transition
diagram. The parenthetical numbers following the cities will henceforth be
used instead of the city names.
[Figure 2.5 A Markov chain: the state-transition diagram for the cities Abra (0), Zeus (1), and Sucsamad (2).]


A finite Mar kov chain is one with a finite number of states. If an irre ducible Mark ov
cha in is of type (a) in Theorem 2 (i.e., recur rent null or transient ) then it ca nno t be finite .


In order to continue our example we now define, in general, the transition
probability matrix P as consisting of elements p_{ij}, that is,

P ≜ [p_{ij}]    (2.48)

If we further define the probability vector π as

π ≜ [π₀, π₁, π₂, ...]    (2.49)

then we may rewrite the set of relations in Eq. (2.47) as

π = πP    (2.50)

For our example shown in Figure 2.5 we have

        | 0     3/4   1/4 |
P =     | 1/4   0     3/4 |
        | 1/4   1/4   1/2 |

and so we may solve Eq. (2.50) by considering the three equations derivable
from it, that is,

π₀ = (1/4)π₁ + (1/4)π₂
π₁ = (3/4)π₀ + 0·π₁ + (1/4)π₂    (2.51)
π₂ = (1/4)π₀ + (3/4)π₁ + (1/2)π₂

Note from Eq. (2.51) that the first of these three equations equals the negative
sum of the second and third, indicating that there is a linear dependence
among them. It always will be the case that one of the equations will be
linearly dependent on the others, and it is therefore necessary to introduce the
additional conservation relationship as given in Eq. (2.46) in order to solve
the system. In our example we then require

π₀ + π₁ + π₂ = 1    (2.52)

Thus the solution is obtained by simultaneously solving any two of the

equations given by Eq. (2.51) along with Eq. (2.52). Solving we obtain

π₀ = 5/25 = 0.20
π₁ = 7/25 = 0.28    (2.53)
π₂ = 13/25 = 0.52

This gives us the equilibrium (stationary) state probabilities. It is clear that
this is an ergodic Markov chain (it is finite and irreducible).
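The same equilibrium vector may be obtained mechanically by replacing one of the linearly dependent balance equations with the conservation relation, exactly as described above. A sketch (assuming the numpy package):

    import numpy as np

    # Solve pi = pi*P [Eq. (2.50)] with sum_j pi_j = 1 [Eq. (2.52)].
    P = np.array([[0,   3/4, 1/4],
                  [1/4, 0,   3/4],
                  [1/4, 1/4, 1/2]])

    A = P.T - np.eye(3)            # balance equations: (P^T - I) pi = 0
    A[-1, :] = 1.0                 # replace one row by the conservation law
    b = np.array([0.0, 0.0, 1.0])
    print(np.linalg.solve(A, b))   # -> [0.2  0.28 0.52], as in Eq. (2.53)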
Often we are interested in the transient behavior of the system. The
transient behavior involves solving for π_j^(n), the probability of finding our
hippie in city j at time n. We also define the probability vector at time n as

π^(n) ≜ [π₀^(n), π₁^(n), π₂^(n), ...]    (2.54)

Now using the definition of transition probability and making use of Definition
(2.48) we have a method for calculating π^(n) expressible in terms of P
and the initial state distribution π^(0). That is,

π^(1) = π^(0)P

Similarly, we may calculate the state probabilities at the second step by

π^(2) = π^(1)P

From this last we can then generalize to the result

π^(n) = π^(n−1)P    n = 1, 2, ...    (2.55)

which may be solved recursively to obtain

π^(n) = π^(0)Pⁿ    n = 1, 2, ...    (2.56)

Equation (2.55) gives the general method for calculating the state probabilities
n steps into a process, given a transition probability matrix P and an initial
state vector π^(0). From our earlier definitions, we have the stationary probability
vector

π = lim_{n→∞} π^(n)

assuming the limit exists. (From Theorem 2, we know that this will be the
case if we have an irreducible aperiodic homogeneous Markov chain.)


Then, from Eq. (2.55) we find

π = πP

which is Eq. (2.50) again. Note that the solution for π is independent of the
initial state vector. Applying this to our example, let us assume that our
hippie begins in the city of Abra at time 0 with probability 1, that is,

π^(0) = [1, 0, 0]    (2.57)

From this we may calculate the sequence of values π^(n), and these are given
in the chart below. The limiting value π as given in Eq. (2.53) is also entered
in this chart.

n          0      1      2      3      4      ∞
π₀^(n)     1      0      0.250  0.187  0.203  0.20
π₁^(n)     0      0.75   0.062  0.359  0.254  0.28
π₂^(n)     0      0.25   0.688  0.454  0.543  0.52

We may alternatively have chosen to assume that the hippie begins in the
city of Zeus with probability 1, which would give rise to the initial state
vector

π^(0) = [0, 1, 0]    (2.58)

and which results in the following table:

n          0      1      2      3      4      ∞
π₀^(n)     0      0.25   0.187  0.203  0.199  0.20
π₁^(n)     1      0      0.375  0.250  0.289  0.28
π₂^(n)     0      0.75   0.438  0.547  0.512  0.52

Similarly, beginning in the city of Sucsamad we find

π^(0) = [0, 0, 1]    (2.59)

n          0      1      2      3      4      ∞
π₀^(n)     0      0.25   0.187  0.203  0.199  0.20
π₁^(n)     0      0.25   0.313  0.266  0.285  0.28
π₂^(n)     1      0.50   0.500  0.531  0.516  0.52

From these calculations we may make a number of observations. First, we
see that after only four steps the quantities π_i^(n) for a given value of i are
almost identical regardless of the city in which we began. The rapidity with
which these quantities converge, as we shall soon see, depends upon the
eigenvalues of P. In all cases, however, we observe that the limiting values at
infinity are rapidly approached and, as stated earlier, are independent of the
initial position of the particle.
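The three charts above may be reproduced by iterating Eq. (2.55) directly; the following sketch (assuming the numpy package) does so from each of the initial vectors of Eqs. (2.57)-(2.59).

    import numpy as np

    # Iterate pi^(n) = pi^(n-1) P for four steps from each starting city.
    P = np.array([[0,   3/4, 1/4],
                  [1/4, 0,   3/4],
                  [1/4, 1/4, 1/2]])

    for start in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
        pi = np.array(start, dtype=float)
        for n in range(4):
            pi = pi @ P
        print(np.round(pi, 3))     # all three near [0.20 0.28 0.52]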
In order to get a better physical feel for what is occurring, it is instructive
to follow the probabilities for the various states of the Markov chain as time
evolves. To this end we introduce the notion of baricentric coordinates,
which are extremely useful in portraying probability vectors. Consider a
probability vector with N components (i.e., a Markov process with N states
in our case) and a tetrahedron in N − 1 dimensions. In our example N = 3
and so our tetrahedron becomes an equilateral triangle in two dimensions. In
general, we let the height of this tetrahedron be unity. Any probability vector
π^(n) may be represented as a point in this N − 1 space by identifying each
component of that probability vector with a distance from one face of the
tetrahedron. That is, we measure from face j a distance equal to the probability
associated with that component π_j^(n); if we do this for each face and
therefore for each component, we will specify one point within the tetrahedron
and that point correctly identifies our probability vector. Each unique
probability vector will map into a unique point in this space, and it is easy to
determine the probability measure from its location in that space. In our
example we may plot the three initial state vectors as given in Eqs. (2.57)-(2.59)
as shown in Figure 2.6. The numbers in parentheses represent which
probability components are to be measured from the face associated with
those numbers. The initial state vector corresponding to Eq. (2.59), for
[Figure 2.6 Representation of the convergence of a Markov chain: an equilateral triangle of unit height whose vertices are the probability vectors [1, 0, 0], [0, 1, 0], and [0, 0, 1].]


example, will appear at the apex of the triangle and is indicated as such. In
our earlier calculations we followed the progress of our probability vectors
beginning with three initial state probability vectors. Let us now follow these
paths simultaneously and observe, for example, that the vector [0, 0, 1],
following Eq. (2.59), moves to the position [0.25, 0.25, 0.5]; the vector
[0, 1, 0] moves to the position [0.25, 0, 0.75]; and the vector [1, 0, 0]
moves to the position [0, 0.75, 0.25]. Now it is clear that had we started
with an initial state vector anywhere within the original equilateral triangle,
that point would have been mapped into the interior of the smaller triangle,
which now joins the three points just referred to and which represents possible
positions of the original state vectors. We note from the figure that this new
triangle is a shrunken version of the original triangle. If we now continue
to map these three points into the second step of the process as given by the
three charts above, we find an even smaller triangle interior to both the first
and the second triangles, and this region represents the possible locations of
any original state vector after two steps into the process. Clearly, this shrinking
will continue until we reach a convergent point. This convergent point will
in the limit be exactly that given by Eq. (2.53)! Thus we can see the way
in which the possible positions of our probability vectors move around in
our space.
The calculation of the transient response π^(n) from Eqs. (2.55) or (2.56) is
extremely tedious if we desire more than just the first few terms. In order to
obtain the general solution, we often resort to transform methods. Below we
demonstrate this method in general and then apply it to our hippie hitchhiking
example. This will give us an opportunity to apply the z-transform
calculations that we have introduced in Appendix I.* Our point of departure
is Eq. (2.55). That equation is a difference equation among
vectors. The fact that it is a difference equation suggests the use of z-transforms
as in Appendix I, and so we naturally define the following vector
transform (the vectors in no way interfere with our transform approach
except that we must be careful when taking inverses):

Π(z) ≜ Σ_{n=0}^∞ π^(n) zⁿ    (2.60)

This transform will certainly exist in the unit disk, that is, |z| ≤ 1. We now
apply the transform method to Eq. (2.55) over its range of application
(n = 1, 2, ...); this we do by first multiplying that equation by zⁿ and then
summing from 1 to infinity, thus

Σ_{n=1}^∞ π^(n) zⁿ = Σ_{n=1}^∞ π^(n−1) P zⁿ

* The steps involved in applying this method are summarized on pp. 74-5 of this chapter.


We have now reduced our infinite set of difference equations to a single
algebraic equation. Following through with our method we must now try to
identify our vector transform Π(z). Our left-hand side contains all but the
initial term of this transform, and so we have

Π(z) − π^(0) = z(Σ_{n=1}^∞ π^(n−1) z^(n−1))P

The parenthetical term on the right-hand side of this last equation is recognized
as Π(z) simply by changing the index of summation. Thus we find

Π(z) − π^(0) = zΠ(z)P

z is merely a scalar in this vector equation and may be moved freely across
vectors and matrices. Solving this matrix equation we immediately come up
with a general solution for our vector transform:

Π(z) = π^(0)[I − zP]⁻¹    (2.61)

where I is the identity matrix and the (−1) notation implies the matrix
inverse. If we can invert this equation, we will have, by the uniqueness of
transforms, the transient solution; that is, using the double-headed, double-barred
arrow notation as in Appendix I to denote transform pairs, we have

Π(z) ⟺ π^(n) = π^(0)Pⁿ    (2.62)

In this last we have taken advantage of Eq. (2.56). Comparing Eqs. (2.61)
and (2.62) we have the obvious transform pair

[I − zP]⁻¹ ⟺ Pⁿ    (2.63)

Of course Pⁿ is precisely what we are looking for in order to obtain our
transient solution since this will directly give us π^(n) from Eq. (2.56). All that
is required, therefore, is that we form the matrix inverse indicated in Eq.
(2.63). In general this becomes a rather complex task when the number of
states in our Markov chain is at all large. Nevertheless, this is one formal
procedure for carrying out the transient analysis.
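For chains of modest size the inversion can be delegated to a computer algebra system. The sketch below (assuming the sympy package) carries out the procedure for the example treated next: it forms [I − zP]⁻¹, checks that the determinant factors as (1 − z)[1 + (1/4)z]², and expands one entry in partial fractions.

    import sympy as sp

    # Form [I - zP]^{-1} symbolically, per Eq. (2.63).
    z = sp.symbols('z')
    q = sp.Rational
    P = sp.Matrix([[0,       q(3, 4), q(1, 4)],
                   [q(1, 4), 0,       q(3, 4)],
                   [q(1, 4), q(1, 4), q(1, 2)]])

    M = (sp.eye(3) - z * P).inv()
    print(sp.factor((sp.eye(3) - z * P).det()))  # (1 - z)(1 + z/4)^2
    print(sp.apart(M[0, 0], z))                  # one entry, term by term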
Let us apply these techniques to our hippie hitchhiking example. Recall
that the transition probability matrix P was given by

        | 0     3/4   1/4 |
P =     | 1/4   0     3/4 |
        | 1/4   1/4   1/2 |


First we must form

           |  1          −(3/4)z      −(1/4)z     |
I − zP =   | −(1/4)z      1           −(3/4)z     |
           | −(1/4)z     −(1/4)z       1 − (1/2)z |

Next, in order to find the inverse of this matrix we must form its determinant,
thus:

det(I − zP) = 1 − (1/2)z − (7/16)z² − (1/16)z³

which factors nicely into

det(I − zP) = (1 − z)[1 + (1/4)z]²

It is easy to show that z = 1 is always a root of the determinant for an
irreducible Markov chain (and, as we shall see, gives rise to our equilibrium
solution). We now proceed with the calculation of the matrix inverse using
the usual methods to arrive at

[I − zP]⁻¹ = 1/{(1 − z)[1 + (1/4)z]²} ×

    | 1 − (1/2)z − (3/16)z²    (3/4)z − (5/16)z²        (1/4)z + (9/16)z² |
    | (1/4)z + (1/16)z²        1 − (1/2)z − (1/16)z²    (3/4)z + (1/16)z² |
    | (1/4)z + (1/16)z²        (1/4)z + (3/16)z²        1 − (3/16)z²      |

Having found the matrix inverse, we are now faced with finding the inverse
transform of this matrix, which will yield Pⁿ. This we do as usual by carrying
out a partial fraction expansion (see Appendix I). The fact that we have a
matrix presents no problem; we merely note that each element in the matrix
is itself a rational function of z which must be expanded in partial fractions
term by term. (This task is simplified if the matrix is written as the sum of
three matrices: a constant matrix; a constant matrix times z; and a constant
matrix times z².) Since we have three roots in the denominator of our rational
functions we expect three terms in our partial fraction expansion. Carrying


out this expansion and separating the three terms we find

[I − zP]⁻¹ = [1/25 / (1 − z)] ×
    | 5   7   13 |
    | 5   7   13 |
    | 5   7   13 |

  + [(1/5) / (1 + (1/4)z)²] ×
    | 0  −8   8 |
    | 0   2  −2 |
    | 0   2  −2 |

  + [(1/25) / (1 + (1/4)z)] ×
    |  20   33  −53 |
    |  −5    8   −3 |    (2.64)
    |  −5  −17   22 |

We observe immediately from this expansion that the matrix associated with
the root (1 − z) gives precisely the equilibrium solution we found by direct
methods [see Eq. (2.53)]; the fact that each row of this matrix is identical
reflects the fact that the equilibrium solution is independent of the initial
state. The other matrices associated with roots greater than unity in absolute
value will always be what are known as differential matrices (each of whose
rows must sum to zero). Inverting on z we finally obtain (by our tables in
Appendix I)

P"

-8
13]
1
1 n[O
13 +:5 (n + 1)(- 4) 0 2

13

;,[:

1( 1)"[ -5 338 --53]


3

+--4.
25

20

-5

-17

-~]

-2

n = 0, 1, 2, . .. (2.65)

22

This is then the complete solution since application of Eq. (2.56) directly
gives π^(n), which is the transient solution we were seeking. Note that for
n = 0 we obtain the identity matrix whereas for n = 1 we must, of course,
obtain the transition probability matrix P. Furthermore, we see that in this
case we have two transient matrices, which decay in the limit leaving only the
constant matrix representing our equilibrium solution. When we think about
the decay of the transient, we are reminded of the shrinking triangles in
Figure 2.6. Since the transients decay at a rate related to the characteristic
values (one over the zeros of the determinant) we therefore expect the
permitted positions in Figure 2.6 to decay with n in a similar fashion. In
fact, it can be shown that these triangles shrink by a constant factor each time
n increases by 1. This shrinkage factor for any Markov process can be
shown to be equal to the absolute value of the product of the characteristic
values of its transition probability matrix; in our example we have characteristic
values equal to 1, −1/4, and −1/4. Their product is 1/16 and this indeed is
the factor by which the area of our triangles decreases each time n is increased.
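The closed form of Eq. (2.65) is easily checked against direct matrix powers; the sketch below (assuming the numpy package) confirms it for the first several values of n.

    import numpy as np

    # Numerical check of Eq. (2.65) against P^n.
    P = np.array([[0, .75, .25], [.25, 0, .75], [.25, .25, .5]])
    A = np.array([[5, 7, 13]] * 3) / 25
    B = np.array([[0, -8, 8], [0, 2, -2], [0, 2, -2]]) / 5
    C = np.array([[20, 33, -53], [-5, 8, -3], [-5, -17, 22]]) / 25

    for n in range(8):
        Pn = A + (n + 1) * (-0.25) ** n * B + (-0.25) ** n * C
        assert np.allclose(Pn, np.linalg.matrix_power(P, n))
    print("Eq. (2.65) agrees with P^n")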


This method of transform analysis is extended in two excellent volumes by
Howard [HOWA 71] in which he treats such problems and discusses
additional approaches such as the flow-graph method of analysis.

Throughout this discussion of discrete-time Markov chains we have not
explicitly addressed ourselves to the memoryless property* of the time that
the system spends in a given state. Let us now prove that the number of time
units that the system spends in the same state is geometrically distributed;
the geometric distribution is the unique discrete memoryless distribution.
Let us assume the system has just entered state E_i. It will remain in this state
at the next step with probability p_{ii}; similarly, it will leave this state at the
next step with probability 1 − p_{ii}. If indeed it does remain in this state at the
next step, then the probability of its remaining for an additional step is again
p_{ii}, and similarly the conditional probability of its leaving at this second step
is given by 1 − p_{ii}. And so it goes. Furthermore, due to the Markov property
the fact that it has remained in a given state for a known number of steps in
no way affects the probability that it leaves at the next step. Since these
probabilities are independent, we may then write

P[system remains in E_i for exactly m additional steps given that it has
just entered E_i] = (1 − p_{ii}) p_{ii}^m    (2.66)

This, of course, is the geometric distribution as we claimed. A similar argument
will be given later for the continuous-time Markov chain.
So far we have concerned ourselves principally with homogeneous Markov
processes. Recall that a homogeneous Markov chain is one for which the
transition probabilities are independent of time. Among the quantities we
were able to calculate was the m-step transition probability p_{ij}^(m), which gave
the probability of passing from state E_i to state E_j in m steps; the recursive
formula for this calculation was given in Eq. (2.41). We now wish to take a
more general point of view and permit the transition probabilities to depend
upon time. We intend to derive a relationship not unlike Eq. (2.41), which
will form our point of departure for many further developments in the
application of Markov processes to queueing problems. For the time being
we continue to restrict ourselves to discrete-time, discrete-state Markov
chains.

Generalizing the homogeneous definition for the multistep transition
probabilities given in Eq. (2.40) we now define

p_{ij}(m, n) ≜ P[X_n = j | X_m = i]    (2.67)

which gives the probability that the system will be in state E_j at step n, given

* The memoryless property is discussed in some detail later.

[Figure 2.7 Sample paths of a stochastic process: the state of the process plotted against the time step.]


that it was in state E_i at step m, where n ≥ m. As discussed in the homogeneous
case, it certainly must be true that if our process goes from state E_i
at time m to state E_j at time n, then at some intermediate time q it must have
passed through some state E_k. This is depicted in Figure 2.7. In this figure we
have shown four sample paths of a stochastic process as it moves from state
E_i at time m to state E_j at time n. We have plotted the state of the process
vertically and the discrete time steps horizontally. (We take the liberty of
drawing continuous curves rather than a sequence of points for convenience.)
Note that sample paths a and b both pass through state E_k at time q, whereas
sample paths c and d pass through other intermediate states at time q. We
are certain of one thing only, namely, that we must pass through some
intermediate state at time q. We may then express p_{ij}(m, n) as the sum of
probabilities for all of these (mutually exclusive) intermediate states; that is,

p_{ij}(m, n) = Σ_k P[X_n = j, X_q = k | X_m = i]    (2.68)

for m ≤ q ≤ n. This last equation must hold for any stochastic process (not
necessarily Markovian) since we are considering all mutually exclusive and
exhaustive possibilities. From the definition of conditional probability we
may rewrite this last equation as

p_{ij}(m, n) = Σ_k P[X_q = k | X_m = i] P[X_n = j | X_m = i, X_q = k]    (2.69)
Now we invoke the Markov property and observe that

P[X_n = j | X_m = i, X_q = k] = P[X_n = j | X_q = k]

Applying this to Eq. (2.69) and making use of our definition in Eq. (2.67) we
finally arrive at

p_{ij}(m, n) = Σ_k p_{ik}(m, q) p_{kj}(q, n)    (2.70)


for m ≤ q ≤ n. Equation (2.70) is known as the Chapman-Kolmogorov
equation for discrete-time Markov processes. Were this a homogeneous
Markov chain then from the definition in Eq. (2.40) we would have the
relationship p_{ij}(m, n) = p_{ij}^(n−m), and in the case when n = q + 1 our Chapman-Kolmogorov
equation would reduce to our earlier Eq. (2.41). The Chapman-Kolmogorov
equation states that we can partition any n − m step transition
probability into the sum of products of a q − m and an n − q step transition
probability to and from the intermediate states that might have been occupied
at some time q within the interval. Indeed we are permitted to choose any
partitioning we wish, and we will take advantage of this shortly.

It is convenient at this point to write the Chapman-Kolmogorov equation
in matrix form. We have in the past defined P as the matrix containing the
elements p_{ij} in the case of a homogeneous Markov chain. Since these quantities
may now depend upon time, we define P(n) to be the one-step transition
probability matrix at time n, that is,

P(n) ≜ [p_{ij}(n, n + 1)]    (2.71)

Of course, P(n) = P if the chain is homogeneous. Also, for the homogeneous
case we found that the n-step transition probability matrix was equal to Pⁿ.
In the nonhomogeneous case we must make a new definition and for this
purpose we use the symbol H(m, n) to denote the following multistep
transition probability matrix:

H(m, n) ≜ [p_{ij}(m, n)]    (2.72)

Note that H(n, n + 1) = P(n) and that in the homogeneous case H(m, m +
n) = Pⁿ. With these definitions we may then rewrite the Chapman-Kolmogorov
equation in matrix form as

H(m, n) = H(m, q)H(q, n)    (2.73)

for m ≤ q ≤ n. To complete the definition we require that H(n, n) = I,
where I is the identity matrix. All of the matrices we are considering are
square matrices with dimensionality equal to the number of states of the
Markov chain. A solution to Eq. (2.73) will consist of expressing H(m, n)
in terms of the given matrices P(n).

As mentioned above, we are free to choose q to lie anywhere in the interval
between m and n. Let us begin by choosing q = n − 1. In this case Eq. (2.70)
becomes

p_{ij}(m, n) = Σ_k p_{ik}(m, n − 1) p_{kj}(n − 1, n)    (2.74)

which in matrix form may be written as

H(m, n) = H(m, n − 1)P(n − 1)    (2.75)


Equations (2.74) and (2.75) are known as the forward Chapman-Kolmogorov equations for discrete-time Markov chains since they are written at the forward (most recent time) end of the interval. On the other hand, we could have chosen q = m + 1, in which case we obtain

p_ij(m, n) = Σ_k p_ik(m, m + 1) p_kj(m + 1, n)        (2.76)

whose matrix form is

H(m, n) = P(m)H(m + 1, n)        (2.77)

These last two are referred to as the backward Chapman-Kolmogorov equations since they occur at the backward (oldest time) end of the interval. Since the forward and backward equations both describe the same discrete-time Markov chain, we would expect their solutions to be the same, and indeed this is the case. The general form of the solution is

H(m, n) = P(m)P(m + 1) ⋯ P(n − 1)        m ≤ n − 1        (2.78)

That this solves Eqs. (2.75) and (2.77) may be established by direct substitution. We observe in the homogeneous case that this yields H(m, n) = P^(n−m) as we have seen earlier. By similar arguments we find that the time-dependent probabilities {π_j^(n)} defined earlier may now be obtained through the following equation:

π^(n+1) = π^(n)P(n)

whose solution is

π^(n+1) = π^(0)P(0)P(1) ⋯ P(n)        (2.79)

These last two equations correspond to Eqs. (2.55) and (2.56), respectively, for the homogeneous case. The Chapman-Kolmogorov equations give us a means for describing the time-dependent probabilities of many interesting queueing systems that we develop in later chapters.*
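By way of illustration, a minimal Python sketch of Eqs. (2.73), (2.78), and (2.79) might read as follows; the state-space size and the particular stochastic matrices P(n) are arbitrary choices made only for the demonstration.

import numpy as np

rng = np.random.default_rng(0)

def random_stochastic(k):
    """Return a k-by-k matrix with nonnegative rows summing to one."""
    m = rng.random((k, k))
    return m / m.sum(axis=1, keepdims=True)

K = 3                                          # number of states (arbitrary)
P = [random_stochastic(K) for _ in range(5)]   # P(0), ..., P(4)

def H(m, n):
    """Multistep matrix H(m, n) = P(m) P(m+1) ... P(n-1), Eq. (2.78)."""
    out = np.eye(K)                            # H(n, n) = I by definition
    for step in range(m, n):
        out = out @ P[step]
    return out

# Chapman-Kolmogorov partitioning, Eq. (2.73): H(0, 5) = H(0, 2) H(2, 5)
assert np.allclose(H(0, 5), H(0, 2) @ H(2, 5))

# time-dependent state probabilities, Eq. (2.79)
pi0 = np.array([1.0, 0.0, 0.0])                # start in state E_0
pi5 = pi0 @ H(0, 5)                            # pi^(5) = pi^(0) P(0) ... P(4)
print(pi5, pi5.sum())                          # a probability vector (sums to 1)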
Before leaving discrete-time Markov chains, we wish to introduce the special case of discrete-time birth-death processes. A birth-death process is an example of a Markov process that may be thought of as modeling changes in the size of a population. In what follows we say that the system is in state E_k when the population consists of k members. We further assume that changes in population size occur by at most one; that is, a "birth" will change the population's size to one greater, whereas a "death" will lower the population size to one less. In considering birth-death processes we do not permit multiple births or bulk disasters; such possibilities will be considered

* It is clear from this development that all Markov processes must satisfy the Chapman-Kolmogorov equations. Let us note, however, that all processes that satisfy the Chapman-Kolmogorov equation are not necessarily Markov processes; see, for example, p. 203 of [PARZ 62].


later in the text and correspond to random walks. We will consider the Markov chain to be homogeneous in that the transition probabilities p_ij do not change with time; however, certainly they will be a function of the state of the system. Thus we have that for our discrete-time birth-death process

p_ij = { d_i              j = i − 1
         1 − b_i − d_i    j = i
         b_i              j = i + 1        (2.80)
         0                otherwise

Here d_i is the probability that at the next time step a single death will occur, driving the population size down to i − 1, given that the population size now is i. Similarly, b_i is the probability that a single birth will occur, given that the current size is i, thereby driving the population size to i + 1 at the next time step. 1 − b_i − d_i is the probability that neither of these events will occur and that at the next time step the population size will not change. Only these three possibilities are permitted. Clearly d_0 = 0, since we can have no deaths when there is no one in the population to die. However, contrary to intuition we do permit b_0 > 0; this corresponds to a birth when there are no members in the population. Whereas this may seem to be spontaneous generation, or perhaps divine creation, it does provide a meaningful model in terms of queueing theory. The model is as follows: The population corresponds to the customers in the queueing system; a death corresponds to a customer departure from that system; and a birth corresponds to a customer arrival to that system. Thus we see it is perfectly feasible to have an arrival (a birth) to an empty system! The stationary probability transition matrix for the general birth-death process then appears as follows:
P =  [ 1−b_0      b_0
        d_1    1−b_1−d_1      b_1
                  d_2      1−b_2−d_2      b_2
                              ⋱
                             d_i      1−b_i−d_i      b_i
                                          ⋱              ]

If we are dealing with a finite chain, then the last row of this matrix would be [0 0 ⋯ 0 d_N 1 − d_N], which illustrates the fact that no births are permitted when the population has reached its maximum size N. We see that the P


matrix has nonzero terms only along the main diagonal and along the diagonals directly above and below it. This is a highly specialized form for the transition probability matrix, and as such we might expect that it can be solved. To solve the birth-death process means to find the solution for the state probabilities π^(n). As we have seen, the general form of solution for these probabilities is given in Eqs. (2.55) and (2.56) and the equation that describes the limiting solution (as n → ∞) is given in Eq. (2.50). We also demonstrated earlier the z-transform method for finding the solution. Of course, due to this special structure of the birth-death transition matrix, we might expect a more explicit solution. We defer discussion of the solution to the material on continuous-time Markov chains, which we now investigate.
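A corresponding sketch for the finite discrete-time birth-death chain assembles the tridiagonal matrix of Eq. (2.80) and iterates π^(n+1) = π^(n)P; the values of b_i and d_i below are illustrative only.

import numpy as np

N = 4
b = [0.3, 0.3, 0.3, 0.3, 0.0]   # b_N = 0: no births at maximum size N
d = [0.0, 0.2, 0.2, 0.2, 0.2]   # d_0 = 0: no deaths in an empty population

P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i > 0:
        P[i, i - 1] = d[i]               # a single death
    if i < N:
        P[i, i + 1] = b[i]               # a single birth
    P[i, i] = 1 - b[i] - d[i]            # no change in population size

assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability vector

# iterate pi^(n+1) = pi^(n) P toward the limiting distribution
pi = np.array([1.0] + [0.0] * N)
for _ in range(2000):
    pi = pi @ P
print(pi)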

2.4.  CONTINUOUS-TIME MARKOV CHAINS*

If we allow our particle in motion to occupy positions (take on values) from a discrete set, but permit it to change positions or states at any point in time, then we say we have a continuous-time Markov chain. We may continue to use our example of the hippie hitchhiking from city to city, where now his transitions between cities may occur at any time of day or night. We let X(t) denote the city in which we find our hippie at time t. X(t) will take on values from a discrete set, which we will choose to be the ordered integers and which will be in one-to-one correspondence with the cities which our hippie may visit.
In the case of a continuous-time Markov chain, we have the following definition:

DEFINITION: The random process X(t) forms a continuous-time Markov chain if for all integers n and for any sequence t_1, t_2, …, t_{n+1} such that t_1 < t_2 < ⋯ < t_{n+1} we have

P[X(t_{n+1}) = j | X(t_1) = i_1, X(t_2) = i_2, …, X(t_n) = i_n]
        = P[X(t_{n+1}) = j | X(t_n) = i_n]        (2.81)

This definition† is the continuous-time version of that given in Eq. (2.38). The interpretation here is also the same, namely, that the future of our hippie's travels depends upon the past only through the current city in which we find him. The development of the theory for continuous time parallels that for discrete time quite directly as one might expect and, therefore, our explanations will be a bit more concise. Moreover, we will not overly concern
* See footnote on p. 19.

† An alternate definition for a discrete-state continuous-time Markov process is that the following relation must hold:

P[X(t) = j | X(τ), τ ≤ t_1] = P[X(t) = j | X(t_1)]        for t_1 < t


ourselves with some of the deeper questions of convergence of limits in passing from discrete to continuous time; for a careful treatment the reader is referred to [PARZ 62, FELL 66].
Earlier we stated for any Markov process that the time which the process spends in any state must be "memoryless"; this implies that the discrete-time Markov chains must have geometrically distributed state times [which we have already proved in Eq. (2.66)] and that continuous-time Markov chains must have exponentially distributed state times. Let us now prove this last statement. For this purpose let τ_i be a random variable that represents the time which the process spends in state E_i. Recall the Markov property which states that the way in which the past trajectory of the process influences the future development is completely specified by giving the current state of the process. In particular, we need not specify how long the process has been in its current state. This means that the remaining time in E_i must have a distribution that depends only upon i and not upon how long the process has been in E_i. We may write this in the following form:

P[τ_i > s + t | τ_i > s] = h(t)

where h(t) is a function only of the additional time t (and not of the expended time s).* We may rewrite this conditional probability as follows:

P[τ_i > s + t | τ_i > s] = P[τ_i > s + t, τ_i > s] / P[τ_i > s]
                         = P[τ_i > s + t] / P[τ_i > s]

This last step follows since the event τ_i > s + t implies the event τ_i > s.
Rewriting this last equation and introducing h(t) once again we find

P[τ_i > s + t] = P[τ_i > s]h(t)        (2.82)

Setting s = 0 and observing that P[τ_i > 0] = 1 we have immediately that

P[τ_i > t] = h(t)

Using this last equation in Eq. (2.82) we then obtain

P[τ_i > s + t] = P[τ_i > s]P[τ_i > t]        (2.83)

for s, t ≥ 0. (Setting s = t = 0 we again require P[τ_i > 0] = 1.) We now show that the only continuous distribution satisfying Eq. (2.83) is the

* The symbol s is used as a time variable in this section only and should not be confused with its use as a transform variable elsewhere.


exponential distribution. First we have, by definition, the following general relationship:

d/dt (P[τ_i > t]) = d/dt (1 − P[τ_i ≤ t]) = −f_{τ_i}(t)        (2.84)

where we use the notation f_{τ_i}(t) to denote the pdf for τ_i. Now let us differentiate Eq. (2.83) with respect to s, yielding

dP[τ_i > s + t]/ds = −f_{τ_i}(s)P[τ_i > t]

where we have taken advantage of Eq. (2.84). Dividing both sides by P[τ_i > t] and setting s = 0 we have

dP[τ_i > t] / P[τ_i > t] = −f_{τ_i}(0) dt

If we integrate this last from 0 to t we obtain

ln P[τ_i > t] = −f_{τ_i}(0)t

or

P[τ_i > t] = e^{−f_{τ_i}(0)t}

Now we use Eq. (2.84) again to obtain the pdf for τ_i as

f_{τ_i}(t) = f_{τ_i}(0)e^{−f_{τ_i}(0)t}        (2.85)

which holds for t ≥ 0. There we have it: the pdf for the time the process spends in state E_i is exponentially distributed with the parameter f_{τ_i}(0), which may depend upon the state E_i. We will have much more to say about this exponential distribution and its importance in Markov processes shortly.
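A quick numerical aside: the sketch below verifies the functional equation (2.83) for an exponential survival function and checks the mean implied by Eq. (2.85); the rate chosen is arbitrary.

import numpy as np

rate = 1.7                            # an arbitrary value of f_tau(0)
surv = lambda t: np.exp(-rate * t)    # P[tau > t] for the exponential

# Eq. (2.83): P[tau > s + t] = P[tau > s] P[tau > t]
s, t = 0.4, 1.1
assert np.isclose(surv(s + t), surv(s) * surv(t))

# empirical check of Eq. (2.85): the mean of the state time is 1/rate
rng = np.random.default_rng(1)
samples = rng.exponential(1 / rate, size=200_000)
print(samples.mean(), 1 / rate)       # these two agree closely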
In the case of a discrete-time homogeneous Markov chain we defined the transition probabilities as p_ij = P[X_n = j | X_{n−1} = i] and also the m-step transition probabilities as p_ij^(m) = P[X_{n+m} = j | X_n = i]; these quantities were independent of n due to the homogeneity of the Markov chain. In the case of the nonhomogeneous Markov chain we found it necessary to identify points along the time axis in an absolute fashion and were led to the important transition probability definition p_ij(m, n) = P[X_n = j | X_m = i]. In a completely analogous way we must now define for our continuous-time Markov chains the following time-dependent transition probability:

p_ij(s, t) ≜ P[X(t) = j | X(s) = i]        (2.86)

where X(t) is the position of the particle at time t ≥ s. Just as we considered three successive time instants m ≤ q ≤ n for the discrete case, we may


consider the following three successive time instants for our continuous-time chain: s ≤ u ≤ t. We may then refer back to Figure 2.7 and identify some sample paths for what we will now consider to be a continuous-time Markov chain; the critical observation once again is that in passing from state E_i at time s to state E_j at time t, the process must pass through some intermediate state E_k at the intermediate time u. We then proceed exactly as we did in deriving Eq. (2.70) and arrive at the following Chapman-Kolmogorov equation for continuous-time Markov chains:

p_ij(s, t) = Σ_k p_ik(s, u) p_kj(u, t)        (2.87)

where i, j = 0, 1, 2, …. We may put this equation into matrix form if we first define the matrix consisting of elements p_ij(s, t) as

H(s, t) ≜ [p_ij(s, t)]        (2.88)

Then the Chapman-Kolmogorov equation becomes

H(s, t) = H(s, u)H(u, t)        (2.89)

[We define H(t, t) = I, the identity matrix.]


In the case of a homogeneous discrete-time Markov chain we found that the matrix equation π = πP had to be investigated in order to determine if the chain was ergodic, and so on; also, the transient solution in the nonhomogeneous case could be determined from π^(n+1) = π^(0)P(0)P(1) ⋯ P(n), which was given in terms of the time-dependent transition probabilities p_ij(m, n). For the continuous-time Markov chain the one-step transition probabilities are replaced by the infinitesimal rates to be defined below; as we shall see they are given in terms of the time derivative of p_ij(s, t) as t → s.
What we wish now to do is to form the continuous-time analog of the forward and backward equations. So far we have reached Eq. (2.89), which is analogous to Eq. (2.73) in the discrete-time case. We wish to extract the analog for Eqs. (2.74)-(2.77), which show both the term-by-term and matrix form of the forward and backward equations, respectively. We choose to do this in the case of the forward equation, for example, by starting with Eq. (2.75), namely, H(m, n) = H(m, n − 1)P(n − 1), and allowing the unit time interval to shrink toward zero. To this end we use this last equation and form the following difference:

H(m, n) − H(m, n − 1) = H(m, n − 1)P(n − 1) − H(m, n − 1)
                      = H(m, n − 1)[P(n − 1) − I]        (2.90)

We must now consider some limits. Just as in the discrete case we defined P(n) = H(n, n + 1), we find it convenient in this continuous-time case to

define the following matrix:

P(t) ≜ [p_ij(t, t + Δt)]        (2.91)

Furthermore we identify the matrix H(s, t) as the limit of H(m, n) as our time interval shrinks; similarly we see that the limit of P(n) will be P(t). Returning to Eq. (2.90) we now divide both sides by the time step, which we denote by Δt, and take the limit as Δt → 0. Clearly then the left-hand side limits to the derivative, resulting in

∂H(s, t)/∂t = H(s, t)Q(t)        (2.92)

where we have defined the matrix Q(t) as the following limit:

Q(t) = lim_{Δt→0} [P(t) − I] / Δt        (2.93)
This matrix Q(t) is known as the infinitesimal generator of the transition matrix function H(s, t). Another more descriptive name for Q(t) is the transition rate matrix; we will use both names interchangeably. The elements of Q(t), which we denote by q_ij(t), are the rates that we referred to earlier. They are defined as follows:

q_jj(t) = lim_{Δt→0} [p_jj(t, t + Δt) − 1] / Δt        (2.94)

q_ij(t) = lim_{Δt→0} p_ij(t, t + Δt) / Δt        i ≠ j        (2.95)

These limits have the following interpretation. If the system at time t is in state E_i then the probability that a transition occurs (to any state other than E_i) during the interval (t, t + Δt) is given by −q_ii(t)Δt + o(Δt).* Thus we may say that −q_ii(t) is the rate at which the process departs from state E_i when it is in that state. Similarly, given that the system is in state E_i at time t, the conditional probability that it will make a transition from this state to state E_j in the time interval (t, t + Δt) is given by q_ij(t)Δt + o(Δt). Thus
* As usual, the notation o(Δt) denotes any function that goes to zero with Δt faster than Δt itself, that is,

lim_{Δt→0} o(Δt)/Δt = 0

More generally, one states that the function g(t) is o(y(t)) as t → t_1 if

lim_{t→t_1} |g(t)/y(t)| = 0

See also Chapter 8, p. 284 for a definition of o(·).


q_ij(t) is the rate at which the process moves from E_i to E_j, given that the system is currently in the state E_i. Since it is always true that Σ_j p_ij(s, t) = 1 then we see that Eqs. (2.94) and (2.95) imply that

Σ_j q_ij(t) = 0        for all i        (2.96)

Thus we have interpreted the terms in Eq. (2.92); this is nothing more than the forward Chapman-Kolmogorov equation for the continuous-time Markov chain.
In a similar fashion, beginning with Eq. (2.77) we may derive the backward Chapman-Kolmogorov equation

∂H(s, t)/∂s = −Q(s)H(s, t)        (2.97)

The forward and backward matrix equations just derived may be expressed through their individual terms as follows. The forward equation gives us [with the additional condition that the passage to the limit in Eq. (2.95) is uniform in i for fixed j]

∂p_ij(s, t)/∂t = q_jj(t)p_ij(s, t) + Σ_{k≠j} q_kj(t)p_ik(s, t)        (2.98)

The initial state E_i at the initial time s affects the solution of this set of differential equations only through the initial conditions

p_ij(s, s) = { 1    if j = i
               0    if j ≠ i

From the backward matrix equation we obtain

∂p_ij(s, t)/∂s = −q_ii(s)p_ij(s, t) − Σ_{k≠i} q_ik(s)p_kj(s, t)        (2.99)

The "initial" conditions for this equation are

p_ij(t, t) = { 1    if i = j
               0    if i ≠ j

These equations [(2.98) and (2.99)] uniquely determine the transition probabilities p_ij(s, t) and must, of course, also satisfy Eq. (2.87) as well as the initial conditions.
In matrix notation we may exhibit the solution to the forward and backward Eqs. (2.92) and (2.97), respectively, in a straightforward manner; the


result is*

H(s, t) = exp[∫_s^t Q(u) du]        (2.100)

We observe that this solution also satisfies Eq. (2.89) and is a continuous-time analog to the discrete-time solution given in Eq. (2.78).
Now for the state probabilities themselves: In analogy with π_j^(n) we now define

π_j(t) ≜ P[X(t) = j]        (2.101)

as well as the vector of these probabilities

π(t) ≜ [π_0(t), π_1(t), π_2(t), …]        (2.102)

If we are given the initial state distribution π(0) then we can solve for the time-dependent state probabilities from

π(t) = π(0)H(0, t)        (2.103)

where a general solution may be seen from Eq. (2.100) to be

π(t) = π(0) exp[∫_0^t Q(u) du]        (2.104)

This corresponds to the discrete-time solution given in Eq. (2.79). The matrix differential equation corresponding to Eq. (2.103) is easily seen to be

dπ(t)/dt = π(t)Q(t)

This last is similar in form to Eq. (2.92) and may be expressed in terms of its elements as

dπ_j(t)/dt = q_jj(t)π_j(t) + Σ_{k≠j} q_kj(t)π_k(t)        (2.105)
The similarity between Eqs. (2.105) and (2.98) is not accidental. The latter describes the probability that the process is in state E_j at time t given that it was in state E_i at time s. The former merely gives the probability that the system is in state E_j at time t; information as to where the process began is given in the initial state probability vector π(0). If indeed π_k(0) = 1 for k = i and π_k(0) = 0 for k ≠ i, then we are stating for sure that the system was in state E_i at time 0. In this case π_j(t) will be identically equal to p_ij(0, t). Both forms for this probability are often used; the form p_ij(s, t) is used when
* The expression e^{Pt} where P is a square matrix is defined as the following matrix power series:

e^{Pt} = I + Pt + P²t²/2! + P³t³/3! + ⋯


we want to specifically show the initial state; the form π_j(t) is used when we choose to neglect or imply the initial state.
We now consider the case where our continuous-time Markov chain is homogeneous. In this case we drop the dependence upon time and adopt the following notation:

p_ij(t) ≜ p_ij(s, s + t)        (2.106)

q_ij ≜ q_ij(t)        i, j = 1, 2, …        (2.107)

H(t) ≜ H(s, s + t) = [p_ij(t)]        (2.108)

Q ≜ Q(t) = [q_ij]        (2.109)
In this case we may list in rapid order the corresponding results. First, the Chapman-Kolmogorov equations become

p_ij(s + t) = Σ_k p_ik(s)p_kj(t)

and in matrix form*

H(s + t) = H(s)H(t)

The forward and backward equations become, respectively,

dp_ij(t)/dt = q_jj p_ij(t) + Σ_{k≠j} q_kj p_ik(t)        (2.110)

and

dp_ij(t)/dt = q_ii p_ij(t) + Σ_{k≠i} q_ik p_kj(t)        (2.111)

and in matrix form these become, respectively,

dH(t)/dt = H(t)Q        (2.112)

and

dH(t)/dt = QH(t)        (2.113)

with the common initial condition H(0) = I. The solution for this matrix is given by

H(t) = e^{Qt}
Now for the state probabilities themselves we have the differential equation

dπ_j(t)/dt = q_jj π_j(t) + Σ_{k≠j} q_kj π_k(t)        (2.114)

which in matrix form is

dπ(t)/dt = π(t)Q

* The corresponding discrete-time result is simply P^{m+n} = P^m P^n.


For an irreducible homogeneous Markov chain it can be shown that the following limits always exist and are independent of the initial state of the chain, namely,

lim_{t→∞} p_ij(t) = π_j

This set {π_j} will form the limiting state probability distribution. For an ergodic Markov chain we will have the further limit, which will be independent of the initial distribution, namely,

lim_{t→∞} π_j(t) = π_j

This limiting distribution is given uniquely as the solution of the following system of linear equations:

q_jj π_j + Σ_{k≠j} q_kj π_k = 0        (2.115)

In matrix form this last equation may be expressed as

πQ = 0        (2.116)

where we have used the obvious notation π = [π_0, π_1, π_2, …]. This last equation coupled with the probability conservation relation, namely,

Σ_j π_j = 1        (2.117)
uniquely gives us our limiting state probabilities. We compare Eq. (2.116) with our earlier equation for discrete-time Markov chains, namely, π = πP; there P was the matrix of transition probabilities, whereas the infinitesimal generator Q is a matrix of transition rates.
This completes our discussion of discrete-state Markov chains. In the table on pp. 402-403 we summarize the major results for the four cases considered here. For a further discussion, the reader is referred to [BHAR 60].
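A minimal sketch of this computation: one common numerical device (an arbitrary choice here, not the only one) is to replace one column of Q by the conservation constraint of Eq. (2.117) and solve the resulting linear system.

import numpy as np

Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

# solve pi Q = 0 together with sum(pi) = 1: overwrite the last column
# of Q with ones and solve pi A = [0, ..., 0, 1]
A = Q.copy()
A[:, -1] = 1.0
b = np.zeros(Q.shape[0])
b[-1] = 1.0
pi = np.linalg.solve(A.T, b)
print(pi)            # [1/3, 2/3] for this particular Q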
Having discussed discrete-state Markov chains (both in discrete and continuous time) it would seem natural that we next consider continuous-state Markov processes. This we will not do, but rather we postpone consideration of such material until we require it [viz., in Chapter 5 we consider Takács' integrodifferential equation for M/G/1, and in Chapter 2 (Volume II) we develop the Fokker-Planck equation for use in the diffusion approximation for queues]. One would further expect that following the study of Markov processes, we would then investigate renewal processes, random walks, and finally, semi-Markov processes. Here too, we choose to postpone such discussions until they are needed later in the text (e.g., the discussion in Chapter 5 of Markov chains imbedded in semi-Markov processes).


Indeed it is fair to say that much of the balance of this textbook depends upon additional material from the theory of stochastic processes and will be developed as needed. For the time being we choose to specialize the results we have obtained from the continuous-time Markov chains to the class of birth-death processes, which, as we have forewarned, play a major role in queueing systems analysis. This will lead us directly to the important Poisson process.

2.5.  BIRTH-DEATH PROCESSES

Earlier in this chapter we said that a birth-death process is the special case of a Markov process in which transitions from state E_k are permitted only to neighboring states E_{k+1}, E_k, and E_{k−1}. This restriction permits us to carry the solution much further in many cases. These processes turn out to be excellent models for all of the material we will study under elementary queueing theory in Chapter 3, and as such form our point of departure for the study of queueing systems. The discrete-time birth-death process is of less interest to us than the continuous-time case, and, therefore, discrete-time birth-death processes are not considered explicitly in the following development; needless to say, an almost parallel treatment exists for that case.
Moreover, transitions of the form from state E_k back to E_k are of direct interest only in the discrete-time Markov chains; in the continuous-time Markov chains, the rate at which the process returns to the state that it currently occupies is infinite, and the astute reader should have observed that we very carefully subtracted this term out of our definition for q_jj(t) in Eq. (2.94). Therefore, our main interest will focus on continuous-time birth-death processes with discrete state space in which transitions only to neighboring states E_{k+1} or E_{k−1} from state E_k are permitted.*
Earlier we described a birth-death process as one that is appropriate for modeling changes in the size of a population. Indeed, when the process is said to be in state E_k we will let this denote the fact that the population at that time is of size k. Moreover, a transition from E_k to E_{k+1} will signify a "birth" within the population, whereas a transition from E_k to E_{k−1} will denote a "death" in the population.
Thus we consider changes in size of a population where transitions from state E_k take place to nearest neighbors only. Regarding the nature of births and deaths, we introduce the notion of a birth rate λ_k, which describes the
* This is true in the one-dimensional case. Later, in Chapter 4, we consider multidimensional systems for which the states are described by discrete vectors, and then each state has two neighbors in each dimension. For example, in the two-dimensional case, the state descriptor is a couplet (k_1, k_2) denoted by E_{k_1,k_2} whose four neighbors are E_{k_1−1,k_2}, E_{k_1,k_2−1}, E_{k_1+1,k_2}, and E_{k_1,k_2+1}.


rate at which births occur when the population is of size k. Similarly, we define a death rate μ_k, which is the rate at which deaths occur when the population is of size k. Note that these birth and death rates are independent of time and depend only on E_k; thus we have a continuous-time homogeneous Markov chain of the birth-death type. We adopt this special notation since it leads us directly into the queueing system notation; note that, in terms of our earlier definitions, we have

λ_k = q_{k,k+1}        and        μ_k = q_{k,k−1}
The nearest-neighbor condition requires that q_kj = 0 for |k − j| > 1. Moreover, since we have previously shown in Eq. (2.96) that Σ_j q_kj = 0, then we require

q_kk = −(λ_k + μ_k)        (2.118)

Thus our infinitesimal generator for the general homogeneous birth-death process takes the form

Q =  [ −λ_0        λ_0
        μ_1    −(λ_1 + μ_1)        λ_1
                    μ_2        −(λ_2 + μ_2)        λ_2
                                    μ_3        −(λ_3 + μ_3)        λ_3
                                                    ⋱                  ]

Note that except for the main, upper, and lower diagonals, all terms are zero. To be more explicit, the assumptions we need for the birth-death process are that it is a homogeneous Markov chain X(t) on the states 0, 1, 2, …, that births and deaths are independent (this follows directly from the Markov property), and
B_1:  P[exactly 1 birth in (t, t + Δt) | k in population] = λ_k Δt + o(Δt)

D_1:  P[exactly 1 death in (t, t + Δt) | k in population] = μ_k Δt + o(Δt)

B_2:  P[exactly 0 births in (t, t + Δt) | k in population] = 1 − λ_k Δt + o(Δt)

D_2:  P[exactly 0 deaths in (t, t + Δt) | k in population] = 1 − μ_k Δt + o(Δt)


From these assumptions we see that multiple births, multiple deaths, or in fact, both a birth and a death in a small time interval are prohibited in the sense that each such multiple event is of order o(Δt).
What we wish to solve for is the probability that the population size is k at some time t; this we denote by*

P_k(t) ≜ P[X(t) = k]        (2.119)
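Before rederiving these equations from first principles, it is worth seeing how little machinery the direct route requires: the sketch below truncates the chain at an arbitrary size N, builds the tridiagonal Q of Eq. (2.118), and evaluates P_k(t) as an entry of π(0)e^{Qt}; all rate values are illustrative.

import numpy as np
from scipy.linalg import expm

N = 20                           # truncation level (states 0..N), arbitrary
lam = np.full(N + 1, 0.8)        # birth rates lambda_k, illustrative
mu = np.full(N + 1, 1.0)         # death rates mu_k, illustrative
mu[0] = 0.0                      # no deaths from an empty population

Q = np.zeros((N + 1, N + 1))
for k in range(N + 1):
    if k < N:
        Q[k, k + 1] = lam[k]     # lambda_k = q_{k,k+1}
    if k > 0:
        Q[k, k - 1] = mu[k]      # mu_k = q_{k,k-1}
    Q[k, k] = -Q[k].sum()        # Eq. (2.118): q_kk = -(lambda_k + mu_k)

pi0 = np.zeros(N + 1)
pi0[0] = 1.0                     # empty system at t = 0
t = 5.0
Pk = pi0 @ expm(Q * t)           # P_k(t) via pi(t) = pi(0) e^{Qt}
print(Pk[:5], Pk.sum())          # probabilities sum to one, Eq. (2.122)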

This calculation could be carried out directly by using our result in Eq. (2.114) for π_j(t) and our specific values for q_ij. However, since the derivation of these equations for the birth-death process is so straightforward and follows from first principles, we choose not to use the heavy machinery we developed in the previous section, which tends to camouflage the simplicity of the basic approach, but rather to rederive them below. The reader is encouraged to identify the parallel steps in this development and compare them to the more general steps taken earlier. Note in terms of our previous definition that P_k(t) = π_k(t). Moreover, we are "suppressing" the initial conditions temporarily, and will introduce them only when required.
We begin by expressing the Chapman-Kolmogorov dynamics, which are quite trivial in this case. In particular, we focus on the possible motions of our particle (that is, the number of members in our population) during an interval (t, t + Δt). We will find ourselves in state E_k at time t + Δt if one of the three following (mutually exclusive and exhaustive) eventualities occurred:

1. that we had k in the population at time t and no state changes occurred;
2. that we had k − 1 in the population at time t and we had a birth during the interval (t, t + Δt);
3. that we had k + 1 members in the population at time t and we had one death during the interval (t, t + Δt).

These three cases are portrayed in Figure 2.8. The probability for the first of these possibilities is merely the probability P_k(t) that we were in state E_k at time t times the probability p_{k,k}(Δt) that we moved from state E_k to state E_k (i.e., had neither a birth nor a death) during the next Δt seconds; this is represented by the first term on the right-hand side of Eq. (2.120) below. The second and third terms on the right-hand side of that equation correspond, respectively, to the second and third cases listed above. We need not concern ourselves specifically with transitions from states other than nearest neighbors to state E_k since we have assumed that such transitions in an interval of

* We use X(t) here to denote the number in system at time t to be consistent with the use of X(t) for our general stochastic process. Certainly we could have used N(t) as defined earlier; we use N(t) outside of this chapter.

Figure 2.8 Possible transitions into E_k.

duration Δt are of order o(Δt). Thus we may write

P_k(t + Δt) = P_k(t)p_{k,k}(Δt) + P_{k−1}(t)p_{k−1,k}(Δt)
            + P_{k+1}(t)p_{k+1,k}(Δt) + o(Δt)        k ≥ 1        (2.120)

We may add the three probabilities above since these events are clearly mutually exclusive. Of course, Eq. (2.120) only makes sense in the case k ≥ 1, since clearly we could not have had −1 members in the population. For the case k = 0 we need the special boundary equation given by

P_0(t + Δt) = P_0(t)p_{0,0}(Δt) + P_1(t)p_{1,0}(Δt) + o(Δt)        k = 0        (2.121)

Furthermore, it is also clear for all values of t that we must conserve our probability, and this is expressed in the following equation:

Σ_k P_k(t) = 1        (2.122)
To solve the system represented by Eqs. (2.120)-(2.122) we must make use of our assumptions B_1, D_1, B_2, and D_2 in order to evaluate the coefficients


in these equations. Carrying out this operation our equations convert to

P_k(t + Δt) = P_k(t)[1 − λ_k Δt + o(Δt)][1 − μ_k Δt + o(Δt)]
            + P_{k−1}(t)[λ_{k−1} Δt + o(Δt)]
            + P_{k+1}(t)[μ_{k+1} Δt + o(Δt)]
            + o(Δt)        k ≥ 1        (2.123)

P_0(t + Δt) = P_0(t)[1 − λ_0 Δt + o(Δt)]
            + P_1(t)[μ_1 Δt + o(Δt)]
            + o(Δt)        k = 0        (2.124)

In Eq. (2.124) we have used the assumption that it is impossible to have a death when the population is of size 0 (i.e., μ_0 = 0) and the assumption that one indeed can have a birth when the population size is 0 (λ_0 ≥ 0). Expanding the right-hand side of Eqs. (2.123) and (2.124) we have

P_k(t + Δt) = P_k(t) − (λ_k + μ_k)Δt P_k(t) + λ_{k−1}Δt P_{k−1}(t)
            + μ_{k+1}Δt P_{k+1}(t) + o(Δt)        k ≥ 1

P_0(t + Δt) = P_0(t) − λ_0 Δt P_0(t) + μ_1 Δt P_1(t) + o(Δt)        k = 0

If we now subtract P_k(t) from both sides of each equation and divide by Δt, we have the following:

[P_k(t + Δt) − P_k(t)]/Δt = −(λ_k + μ_k)P_k(t) + λ_{k−1}P_{k−1}(t)
                          + μ_{k+1}P_{k+1}(t) + o(Δt)/Δt        k ≥ 1        (2.125)

[P_0(t + Δt) − P_0(t)]/Δt = −λ_0 P_0(t) + μ_1 P_1(t) + o(Δt)/Δt        k = 0        (2.126)

Taking the limit as Δt approaches 0 we see that the left-hand sides of Eqs. (2.125) and (2.126) represent the formal derivative of P_k(t) with respect to t and also that the term o(Δt)/Δt goes to 0. Consequently, we have the resulting equations:

dP_k(t)/dt = −(λ_k + μ_k)P_k(t) + λ_{k−1}P_{k−1}(t) + μ_{k+1}P_{k+1}(t)        k ≥ 1
                                                                               (2.127)
dP_0(t)/dt = −λ_0 P_0(t) + μ_1 P_1(t)        k = 0

The set of equations given by (2.127) is clearly a set of differential-difference equations and represents the dynamics of our probability system; we


recognize them as Eq. (2.114) and their solution will give the behavior of P_k(t). It remains for us to solve them. (Note that this set was obtained by essentially using the Chapman-Kolmogorov equations.)
In order to solve Eqs. (2.127) for the time-dependent behavior P_k(t) we now require our initial conditions; that is, we must specify P_k(0) for k = 0, 1, 2, …. In addition, we further require that Eq. (2.122) be satisfied.
Let us pause temporarily to describe a simple inspection technique for finding the differential-difference equations given above. We begin by observing that an alternate way for displaying the information contained in the Q matrix is by means of the state-transition-rate diagram. In such a diagram the state E_k is represented by an oval surrounding the number k. Each nonzero infinitesimal rate q_ij (the elements of the Q matrix) is represented in the state-transition-rate diagram by a directed branch pointing from E_i to E_j and labeled with the value q_ij. Furthermore, since it is clear that the terms along the main diagonal of Q contain no new information [see Eqs. (2.96) and (2.118)] we do not include the "self"-loop from E_j back to E_j. Thus the state-transition-rate diagram for the general birth-death process is as shown in Figure 2.9.
In viewing this figure we may truly think of a particle in motion moving among these states; the branches identify the permitted transitions and the branch labels give the infinitesimal rates at which these transitions take place. We emphasize that the labels on the ordered links refer to birth and death rates and not to probabilities. If one wishes to convert these labels to probabilities, one must multiply each by the quantity dt to obtain the probability of such a transition occurring in the next interval of time whose duration is dt. In that case it is also necessary to put self-loops on each state indicating the probability that in the next interval of time dt the system remains in the given state. Note that the state-transition-rate diagram contains exactly the same information as does the transition-rate matrix Q.
Concentrating on state E_k we observe that one may enter it only from state E_{k−1} or from state E_{k+1} and similarly one leaves state E_k only by entering state E_{k−1} or state E_{k+1}. From this picture we see why such processes are referred to as "nearest-neighbor" birth-death processes.

Figure 2.9 State-transition-rate diagram for the birth-death process.

Since we are considering a dynamic situation it is clear that the difference between the rate at which the system enters E_k and the rate at which the system leaves E_k must be equal to the rate of change of "flow" into that state. This


notion is crucial and provides for us a simple intuitive means for writing down the equations of motion for the probabilities P_k(t). Specifically, if we focus upon state E_k we observe that the rate at which probability "flows" into this state at time t is given by

Flow rate into E_k = λ_{k−1}P_{k−1}(t) + μ_{k+1}P_{k+1}(t)

whereas the flow rate out of that state at time t is given by

Flow rate out of E_k = (λ_k + μ_k)P_k(t)

Clearly the difference between these two is the effective probability flow rate into this state, that is,

dP_k(t)/dt = λ_{k−1}P_{k−1}(t) + μ_{k+1}P_{k+1}(t) − (λ_k + μ_k)P_k(t)        (2.128)

But this is exactly Eq. (2.127)! Of course, we have not attended to the details for the boundary state E_0 but it is easy to see that the rate argument just given leads to the correct equation for k = 0. Observe that each term in Eq. (2.128) is of the form: probability of being in a particular state at time t multiplied by the infinitesimal rate of leaving that state. It is clear that what we have done is to draw an imaginary boundary surrounding state E_k and have calculated the probability flow rates crossing that boundary, where we place opposite signs on flows entering as opposed to leaving; this total computation is then set equal to the time derivative of the probability of being in that state.
Actually there is no reason for selecting a single state as the "system" for which the flow equations must hold. In fact one may enclose any number of states within a contour and then write a flow equation for all flow crossing that boundary. The only danger in dealing with such a conglomerate set is that one may write down a dependent set of equations rather than an independent set; on the other hand, if one systematically encloses each state singly and writes down a conservation law for each, then one is guaranteed to have an independent set of equations for the system with the qualification that the conservation of probability given by Eq. (2.122) must also be applied.* Thus we have a simple inspection technique for arriving at the equations of motion for the birth-death process. As we shall see later this approach is perfectly suitable for other Markov processes (including semi-Markov processes) and will be used extensively; these observations also lead us to the notion of global and local balance equations (see Chapter 4).
At this point it is important for the reader to recognize and accept the fact that the birth-death process described above is capable of providing the

* When the number of states is finite (say, K states) then any set of K − 1 single-node state equations will be independent. The additional equation needed is Eq. (2.122).


framework for discussing a large number of important and interesting problems in queueing theory. The direct solution for appropriate special cases of Eq. (2.127) provides for us the transient behavior of these queueing systems and is of less interest to this book than the equilibrium or steady-state behavior of queues.* However, for purposes of illustration and to elaborate further upon these equations, we now consider some important examples.
The simplest system to consider is a pure birth system in which we assume μ_k = 0 for all k (note that we have now entered the next-to-innermost circle in Figure 2.4!). Moreover, to simplify the problem we will assume that λ_k = λ for all k = 0, 1, 2, …. (Now we have entered the innermost circle! We therefore expect some marvelous properties to emerge.) Substituting this into our Eqs. (2.127) we have

dP_k(t)/dt = −λP_k(t) + λP_{k−1}(t)        k ≥ 1        (2.129)

dP_0(t)/dt = −λP_0(t)        k = 0

For simplicity we assume that the system begins at time 0 with 0 members, that is,

P_k(0) = { 1        k = 0
           0        k ≠ 0        (2.130)

Solving for P_0(t) we have immediately

P_0(t) = e^{−λt}

Inserting this last into Eq. (2.129) for k = 1 results in

dP_1(t)/dt = −λP_1(t) + λe^{−λt}

The solution to this differential equation is clearly

P_1(t) = λte^{−λt}

Continuing by induction, then, we finally have as a solution to Eq. (2.129)

P_k(t) = ((λt)^k / k!) e^{−λt}        k ≥ 0, t ≥ 0        (2.131)

This is the celebrated Poisson distribution. It is a pure birth process with constant birth rate λ and gives rise to a sequence of birth epochs which are

* Transient behavior is discussed elsewhere in this text, notably in Chapter 2 (Vol. II). For an excellent treatment the reader is referred to [COHE 69].


said to constitute a Poisson process. Let us study the Poisson process more carefully and show its relationship to the exponential distribution.
The Poisson process is central to much of elementary and intermediate queueing theory and is widely used in their development. The special position of this process comes about for two reasons. First, as we have seen, it is the "innermost circle" in Figure 2.4 and, therefore, enjoys a number of marvelous and simplifying analytical and probabilistic properties; this will become undeniably apparent in our subsequent development. The second reason for its great importance is that, in fact, numerous natural physical and organic processes exhibit behavior that is probably meaningfully modeled by Poisson processes. For example, as Fry [FRY 28] so graphically points out, one of the first observations of the Poisson process was that it properly represented the number of army soldiers killed due to being kicked (in the head?) by their horses. Other examples include the sequence of gamma rays emitted from a radioactive particle, and the sequence of times at which telephone calls are originated in the telephone network. In fact, it was shown by Palm [PALM 43] and Khinchin [KHIN 60] that in many cases the sum of a large number of independent stationary renewal processes (each with an arbitrary distribution of renewal time) will tend to a Poisson process. This is an important limit theorem and explains why Poisson processes appear so often in nature where the aggregate effect of a large number of individuals or particles is under observation.
Since this development is intended for our use in the study of queueing systems, let us immediately adopt queueing notation and also condition ourselves to discussing a Poisson process as the arrival of customers to some queueing facility rather than as the birth of new members in a population. Thus λ is the average rate at which these customers arrive. With the initial condition in Eq. (2.130), P_k(t) gives the probability that k arrivals occur during the time interval (0, t). It is intuitively clear, since the average arrival rate is λ per second, that the average number of arrivals in an interval of length t must be λt. Let us carry out the calculation of this last intuitive statement. Defining K as the number of arrivals in this interval of length t [previously we used α(t)] we have
E[K] = Σ_{k=0}^∞ k P_k(t)

     = e^{−λt} Σ_{k=0}^∞ k (λt)^k / k!

     = e^{−λt} λt Σ_{k=1}^∞ (λt)^{k−1} / (k − 1)!

     = e^{−λt} λt Σ_{k=0}^∞ (λt)^k / k!


By definition, we know that e^x = 1 + x + x²/2! + ⋯ and so we get

E[K] = λt        (2.132)

Thus clearly the expected number of arrivals in (0, t) is equal to λt.


We now proceed to calculate the variance of the number of arrivals. In order to do this we find it convenient to first calculate the following moment:

E[K(K − 1)] = Σ_{k=0}^∞ k(k − 1)P_k(t)

            = e^{−λt} Σ_{k=0}^∞ k(k − 1)(λt)^k / k!

            = e^{−λt}(λt)² Σ_{k=2}^∞ (λt)^{k−2} / (k − 2)!

            = e^{−λt}(λt)² Σ_{k=0}^∞ (λt)^k / k!

            = (λt)²

Now forming the variance in terms of this last quantity and in terms of E[K], we have

σ_K² = E[K(K − 1)] + E[K] − (E[K])²

     = (λt)² + λt − (λt)²

and so

σ_K² = λt        (2.133)

Thus we see that the mean and variance of the Poisson process are identical and each equal to λt.
In Figure 2.10 we plot the family of curves P_k(t) as a function of k and as a function of λt (a convenient normalizing form for t).
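A brief numerical check of Eqs. (2.131)-(2.133), with arbitrary values of λ and t:

import math

lam, t = 2.0, 3.0
lt = lam * t
# Poisson probabilities P_k(t) of Eq. (2.131), truncated at k = 59
P = [math.exp(-lt) * lt**k / math.factorial(k) for k in range(60)]

mean = sum(k * p for k, p in enumerate(P))
var = sum(k * (k - 1) * p for k, p in enumerate(P)) + mean - mean**2
print(mean, var)   # both approximately lambda*t = 6.0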
Recollect from Eq. (II.27) in Appendix II that the z-transform (probability generating function) for the probability mass distribution of a discrete random variable K where

g_k = P[K = k]

is given by

G(z) = E[z^K] = Σ_k z^k g_k

for |z| ≤ 1. Applying this to the Poisson distribution derived above we have

E[z^K] = Σ_{k=0}^∞ z^k P_k(t)

       = Σ_{k=0}^∞ e^{−λt} (λtz)^k / k!

       = e^{−λt+λtz}


Figure 2.10 The Poisson distribution.
and so

G(z) = E[z^K] = e^{λt(z−1)}        (2.134)

We shall make considerable use of this result for the z-transform of a Poisson distribution. For example, we may now easily calculate the mean and variance as given in Eqs. (2.132) and (2.133) by taking advantage of the special properties of the z-transform (see Appendix II) as follows*:

G^(1)(1) ≜ (∂/∂z)E[z^K] |_{z=1} = E[K]

Applying this to the Poisson distribution, we get

E[K] = λt e^{λt(z−1)} |_{z=1} = λt

Also

σ_K² = G^(2)(1) + G^(1)(1) − [G^(1)(1)]²

Thus, for the Poisson distribution,

σ_K² = (λt)² e^{λt(z−1)} |_{z=1} + λt − (λt)²

     = λt

This confirms our earlier calculations.


* The shorthand notation for derivatives given in Eq. (II.25) should be reviewed.
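The same moment calculations may be checked symbolically; the sketch below differentiates G(z) of Eq. (2.134) at z = 1:

import sympy as sp

z, lam, t = sp.symbols('z lam t', positive=True)
G = sp.exp(lam * t * (z - 1))          # Eq. (2.134)

G1 = sp.diff(G, z).subs(z, 1)          # G^(1)(1) = E[K]
G2 = sp.diff(G, z, 2).subs(z, 1)       # G^(2)(1) = E[K(K-1)]
mean = sp.simplify(G1)
var = sp.simplify(G2 + G1 - G1**2)
print(mean, var)                       # both equal lam*t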


We have introduced the Poisson process here as a pure birth process and we have found an expression for P_k(t), the probability distribution for the number of arrivals during a given interval of length t. Now let us consider the joint distribution of the arrival instants when it is known beforehand that exactly k arrivals have occurred during that interval. We break the interval (0, t) into 2k + 1 intervals as shown in Figure 2.11. We are interested in A_k, which is defined to be the event that exactly one arrival occurs in each of the intervals {β_i} and that no arrival occurs in any of the intervals {α_i}. We wish to calculate the probability that the event A_k occurs given that exactly k arrivals have occurred in the interval (0, t); from the definition of conditional probability we thus have
P[A_k | exactly k arrivals in (0, t)]
        = P[A_k and exactly k arrivals in (0, t)] / P[exactly k arrivals in (0, t)]        (2.135)
When we consider Poisson arrivals in nonoverlapping intervals, we are considering independent events whose joint probability may be calculated as the product of the individual probabilities (i.e., the Poisson process has independent increments). We note from Eq. (2.131), therefore, that

P[one arrival in interval of length β_i] = λβ_i e^{−λβ_i}

and

P[no arrival in interval of length α_i] = e^{−λα_i}

Using this in Eq. (2.135) we have directly
P[A_k | exactly k arrivals in (0, t)]
        = (λβ_1)(λβ_2) ⋯ (λβ_k) e^{−λβ_1} e^{−λβ_2} ⋯ e^{−λβ_k} · e^{−λα_1} e^{−λα_2} ⋯ e^{−λα_{k+1}} / {[(λt)^k / k!] e^{−λt}}

        = (β_1 β_2 ⋯ β_k / t^k) k!        (2.136)

On the other hand, let us consider a new process that selects k points in the interval (0, t) independently where each point is uniformly distributed over this interval. Let us now make the same calculation that we did for the Poisson process, namely,

P[A_k | exactly k arrivals in (0, t)] = (β_1/t)(β_2/t) ⋯ (β_k/t) k!        (2.137)


where the term k! comes about since we do not distinguish among the permutations of the k points among the k chosen intervals. We observe that the two conditional probabilities given in Eqs. (2.136) and (2.137) are the same and, therefore, conclude that if an interval of length t contains exactly k arrivals from a Poisson process, then the joint distribution of the instants when these arrivals occurred is the same as the distribution of k points uniformly distributed over the same interval.
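This equivalence is easy to observe by simulation; the sketch below conditions a simulated Poisson process on exactly k arrivals in (0, t) and compares the pooled arrival instants with the moments of the uniform distribution (all numerical choices are arbitrary).

import numpy as np

rng = np.random.default_rng(2)
lam, t, k = 1.0, 10.0, 5
instants = []
for _ in range(20000):
    arrivals = np.cumsum(rng.exponential(1 / lam, size=50))
    inside = arrivals[arrivals < t]
    if len(inside) == k:                 # condition on exactly k arrivals
        instants.extend(inside)

instants = np.array(instants)
print(instants.mean(), t / 2)            # uniform on (0, t) has mean t/2
print(instants.var(), t**2 / 12)         # ... and variance t^2/12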
Furthermore, it is easy to show from the properties of our birth process that the Poisson process is one with independent increments; that is, defining X(s, s + t) as the number of arrivals in the interval (s, s + t) then the following is true:

P[X(s, s + t) = k] = (λt)^k e^{−λt} / k!

regardless of the location of this interval.


We would now like to investigate the intimate relationship between the Poisson process and the exponential distribution. This distribution also plays a central role in queueing theory. We consider the random variable t̃, which we recall is the time between adjacent arrivals in a queueing system, and whose PDF and pdf are given by A(t) and a(t), respectively, as already agreed for the interarrival times. From its definition, then, a(t)Δt + o(Δt) is the probability that the next arrival occurs at least t sec and at most (t + Δt) sec from the time of the last arrival.
Since the definition of A(t) is merely the probability that the time between arrivals is ≤ t, it must clearly be given by

A(t) = 1 − P[t̃ > t]

But P[t̃ > t] is just the probability that no arrivals occur in (0, t), that is, P_0(t). Therefore, we have

A(t) = 1 − P_0(t)
and so from Eq. (2.131), we obtain the PDF (in the Poisson case)

A(t) = 1 − e^{−λt}        t ≥ 0        (2.138)

Differentiating, we obtain the pdf

a(t) = λe^{−λt}        t ≥ 0        (2.139)

This is the well-known exponential distribution; its pdf and PDF are shown in Figure 2.12.
What we have shown by Eqs. (2.138) and (2.139) is that for a Poisson arrival process, the time between arrivals is exponentially distributed; thus we say that the Poisson arrival process has exponential interarrival times.

Figure 2.12 Exponential distribution.


The most amazing characteristic of the exponential distribution is that it has the remarkable memoryless property, which we introduced in our discussion of Markov processes. As the name indicates, the past history of a random variable that is distributed exponentially plays no role in predicting its future; precisely, we mean the following. Consider that an arrival has just occurred at time 0. If we inquire as to what our feeling is regarding the distribution of time until the next arrival, we clearly respond with the pdf given in Eq. (2.139). Now let some time pass, say, t_0 sec, during which no arrival occurs. We may at this point in time again ask, "What is the probability that the next arrival occurs t sec from now?" This question is the same question we asked at time 0 except we now know that the time between arrivals is at least t_0 sec. To answer the second question, we carry out the following calculations:

P[t̃ ≤ t + t_0 | t̃ > t_0] = P[t_0 < t̃ ≤ t + t_0] / P[t̃ > t_0]

                           = (P[t̃ ≤ t + t_0] − P[t̃ ≤ t_0]) / P[t̃ > t_0]

Due to Eq. (2.138) we then have

P[t̃ ≤ t + t_0 | t̃ > t_0] = [(1 − e^{−λ(t+t_0)}) − (1 − e^{−λt_0})] / e^{−λt_0}

and so

P[t̃ ≤ t + t_0 | t̃ > t_0] = 1 − e^{−λt}        (2.140)

This result shows that the distribution of remaining time until the next arrival, given that t_0 sec has elapsed since the last arrival, is identically equal to the unconditional distribution of interarrival time. The impact of this statement is that our probabilistic feeling regarding the time until a future arrival occurs is independent of how long it has been since the last arrival occurred. That is, the future of an exponentially distributed random variable


Figure 2.13 The memoryless property of the exponential distribution.


is independent of the past history of that variable and this distribution remains constant in time. The exponential distribution is the only continuous distribution with this property. (In the case of a discrete random variable we have seen that the geometric distribution is the only discrete distribution with that same property.) We may further appreciate the nature of this memoryless property by considering Figure 2.13. In this figure we show the exponential density λe^{−λt}. Now given that t_0 sec has elapsed, in order to calculate the density function for the time until the next arrival, what one must do is to take that portion of the density function lying to the right of the point t_0 (shown shaded) and recognize that this region represents our probabilistic feeling regarding the future; the portion of the density function in the interval from 0 to t_0 is past history and involves no more uncertainty. In order to make the shaded region into a bona fide density function, we must magnify it in order to increase its total area to unity; the appropriate magnification takes place by dividing the function representing the tail of this distribution by the area of the shaded region (which must, of course, be P[t̃ > t_0]). This operation is identical to the operation of creating a conditional distribution by dividing a joint distribution by the probability of the condition. Thus the shaded region magnifies into the second indicated curve in Figure 2.13. This new function is an exact replica of the original density function as shown from time 0, except that it is shifted t_0 sec to the right. No other density function has the property that its tail everywhere possesses the exact same shape as the entire density function.
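A tiny empirical illustration of Eq. (2.140): among simulated exponential interarrival times exceeding t_0, the excess beyond t_0 is distributed exactly as the unconditioned variable (parameter values arbitrary).

import numpy as np

rng = np.random.default_rng(3)
lam, t0 = 1.5, 0.8
x = rng.exponential(1 / lam, size=500_000)

excess = x[x > t0] - t0          # remaining time given survival past t0
print(excess.mean(), 1 / lam)    # same mean as the unconditioned variable
print(np.mean(excess <= 1.0), 1 - np.exp(-lam * 1.0))   # Eq. (2.140) at t = 1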
We now use the memoryless property of the exponential distribution in order to close the circle regarding the relationship between the Poisson and exponential distributions. Equation (2.140) gives an expression for the PDF of the interarrival time conditioned on the fact that it is at least as large as t_0. Let us position ourselves at time t_0 and ask for the probability that the next


arrival occurs within the next Δt sec. From Eq. (2.140) we have

P[t̃ ≤ t_0 + Δt | t̃ > t_0] = 1 − e^{−λΔt}

                            = 1 − [1 − λΔt + (λΔt)²/2! − ⋯]

                            = λΔt + o(Δt)        (2.141)

Equation (2.141) tells us, given that an arrival has not yet occurred, that the probability of it occurring in the next interval of length Δt sec is λΔt + o(Δt). But this is exactly assumption B_1 from the opening paragraphs of this section. Furthermore, the probability of no arrival in the interval (t_0, t_0 + Δt) is calculated as

P[t̃ > t_0 + Δt | t̃ > t_0] = 1 − P[t̃ ≤ t_0 + Δt | t̃ > t_0]

                            = 1 − (1 − e^{−λΔt})

                            = e^{−λΔt}

                            = 1 − λΔt + (λΔt)²/2! − ⋯

                            = 1 − λΔt + o(Δt)
This corroborates assumption B_2. Furthermore,

P[2 or more arrivals in (t_0, t_0 + Δt)]
        = 1 − P[none in (t_0, t_0 + Δt)] − P[one in (t_0, t_0 + Δt)]
        = 1 − [1 − λΔt + o(Δt)] − [λΔt + o(Δt)]
        = o(Δt)
This corroborates the "multiple-birth" assumption. Our conclusion, then, is that the assumption of exponentially distributed interarrival times (which are independent one from the other) implies that we have a Poisson process, which implies we have a constant birth rate. The converse implications are also true. This relationship is shown graphically in Figure 2.14 in which the symbol ↔ denotes implication in both directions.
Let us now calculate the mean and variance for the exponential distribution as we did for the Poisson process. We shall proceed using two methods (the direct method and the transform method). We have
E[t̃] = t̄ = ∫_0^∞ t a(t) dt = ∫_0^∞ tλe^{−λt} dt

We use a trick here to evaluate the (simple) integral by recognizing that the integrand is no more than the partial derivative of the following integral,


which may be evaluated by inspection:

∫_0^∞ tλe^{−λt} dt = −λ (∂/∂λ) ∫_0^∞ e^{−λt} dt

and so

t̄ = 1/λ        (2.142)

Figure 2.14 The memoryless triangle.

Thus we have that the average interarrival time for an exponential distribution is given by 1/λ. This result is intuitively pleasing if we examine Eq. (2.141) and observe that the probability of an arrival in an interval of length Δt is given by λΔt [+ o(Δt)] and thus λ itself must be the average rate of arrivals; thus the average time between arrivals must be 1/λ. In order to evaluate the variance, we first calculate the second moment for the interarrival time as follows:

E[(t̃)²] = ∫_0^∞ t²λe^{−λt} dt = 2/λ²

Thus the variance is given by

σ_{t̃}² = E[(t̃)²] − (t̄)²

       = 2/λ² − (1/λ)²

and so

σ_{t̃}² = 1/λ²        (2.143)

As usual, these two moments could more easily have been calculated by first considering the Laplace transform of the probability density function for this random variable. The notation for the Laplace transform of the interarrival pdf is A*(s). In this special case of the exponential distribution we then have the following:

A*(s) ≜ ∫_0^∞ e^{−st} a(t) dt = ∫_0^∞ e^{−st} λe^{−λt} dt

and so

A*(s) = λ / (s + λ)        (2.144)

Equation (2.144) thus gives the Laplace transform for the exponential density function. From Appendix II we recognize that the mean of this density function is given by

t̄ = −(dA*(s)/ds)|_{s=0} = λ/(s + λ)² |_{s=0} = 1/λ

The second moment is also calculated in a similar fashion:

E[(t̃)²] = (d²A*(s)/ds²)|_{s=0} = 2λ/(s + λ)³ |_{s=0} = 2/λ²

and so

σ_t̃² = 2/λ² − (1/λ)² = 1/λ²

Thus we see the ease with which moments can be calculated by making use of transforms. Note also that the coefficient of variation [see Eq. (II.23)] for the exponential is

C_t̃ = σ_t̃ / t̄ = 1        (2.145)
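The differentiation in these moment calculations is easily mechanized. The short sketch below (ours, not the text's) recovers both moments of Eq. (2.144) symbolically with sympy:

    import sympy as sp

    s, lam = sp.symbols('s lambda', positive=True)
    A = lam / (s + lam)                     # A*(s) of Eq. (2.144)

    mean = -sp.diff(A, s).subs(s, 0)        # t-bar = -dA*/ds at s = 0  ->  1/lambda
    second = sp.diff(A, s, 2).subs(s, 0)    # second moment             ->  2/lambda**2
    print(mean, second, sp.simplify(second - mean**2))   # variance -> 1/lambda**2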
It will be of further interest to us later in the text to be able to calculate the pdf for the time interval X̃ required in order to collect k arrivals from a Poisson process. Let us define this random variable in terms of the random variables t̃_n, where t̃_n = time between the nth and (n − 1)th arrival (where the "zeroth" arrival is assumed to occur at time 0). Thus

X̃ = t̃_1 + t̃_2 + ⋯ + t̃_k

We define f_X(x) to be the pdf for this random variable. From Appendix II we should immediately recognize that the density of X̃ is given by the convolution of the densities on each of the t̃_n's, since they are independently distributed. Of course, this convolution operation is a bit lengthy to carry out, so let us use our further result in Appendix II, which tells us that the Laplace transform of the pdf for the sum of independent random variables is equal to the product of the Laplace transforms of the density for each. In our case each t̃_n has a common exponential distribution, and therefore the Laplace transform for the pdf of X̃ will merely be the kth power of A*(s), where A*(s) is given by Eq. (2.144); that is, defining

X*(s) ≜ ∫_0^∞ e^{−sx} f_X(x) dx

for the Laplace transform of the pdf of our sum, we have

X*(s) = [A*(s)]^k

thus

X*(s) = [λ/(s + λ)]^k        (2.146)

We must now invert this transform. Fortunately, we identify the needed transform pair as entry 10 in Table I.4 of Appendix I. Thus the density function we are looking for, which describes the time required to observe k arrivals, is given by

f_X(x) = [λ(λx)^{k−1}/(k − 1)!] e^{−λx},   x ≥ 0        (2.147)

This family of density functions (one for each value of k) is referred to as the family of Erlang distributions. We will have considerable use for this family later when we discuss the method of stages in Chapter 4.
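A quick simulation (our illustration, not the text's; the rate and order are arbitrary) confirms Eq. (2.147): summing k independent exponential interarrival times produces samples whose empirical density tracks the Erlang pdf:

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(1)
    lam, k = 1.5, 4
    samples = rng.exponential(1.0 / lam, size=(100_000, k)).sum(axis=1)

    # Empirical density near x versus the Erlang pdf of Eq. (2.147)
    x, h = 2.0, 0.05
    empirical = np.mean((samples >= x) & (samples < x + h)) / h
    exact = lam * (lam * x) ** (k - 1) / factorial(k - 1) * np.exp(-lam * x)
    print(empirical, exact)    # the two figures should agree closely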
So much for the Poisson arrival process and its relation to the exponential distribution. Let us now return to the birth-death equations and consider a more general pure birth process in which we permit state-dependent birth rates λ_k (for the Poisson process, we had λ_k = λ). We once again insist that the death rates μ_k = 0. From Eq. (2.127) this yields the set of equations

dP_k(t)/dt = −λ_k P_k(t) + λ_{k−1} P_{k−1}(t),   k ≥ 1        (2.148)
dP_0(t)/dt = −λ_0 P_0(t),   k = 0

Again, let us assume the initial distribution as given in Eq. (2.130), which states that (with probability one) the population begins with 0 members at time 0. Solving for P_0(t) we have

P_0(t) = e^{−λ_0 t}

The general solution* for P_k(t) is given below, with an explicit expression for the first two values of k:

P_k(t) = e^{−λ_k t} [ λ_{k−1} ∫_0^t P_{k−1}(x) e^{λ_k x} dx + P_k(0) ],   k = 0, 1, 2, ...        (2.149)

P_1(t) = λ_0 (e^{−λ_0 t} − e^{−λ_1 t}) / (λ_1 − λ_0)

P_2(t) = [λ_0 λ_1/(λ_1 − λ_0)] [ (e^{−λ_0 t} − e^{−λ_2 t})/(λ_2 − λ_0) − (e^{−λ_1 t} − e^{−λ_2 t})/(λ_2 − λ_1) ]

* The validity of this solution is easily verified by substituting Eq. (2.149) into Eq. (2.148).
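As the footnote suggests, Eq. (2.149) can be checked by substitution. The sketch below (our aside, with arbitrarily chosen rates) instead integrates Eq. (2.148) numerically and compares the result against the closed forms for P_1(t) and P_2(t); truncating at five states is harmless here because a pure birth process only moves probability upward:

    import numpy as np
    from scipy.integrate import solve_ivp

    lam = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # hypothetical rates lambda_0..lambda_4

    def rhs(t, P):
        # Eq. (2.148): dP_k/dt = -lambda_k P_k + lambda_{k-1} P_{k-1}
        dP = -lam * P
        dP[1:] += lam[:-1] * P[:-1]
        return dP

    P0 = np.zeros(len(lam)); P0[0] = 1.0           # population empty at t = 0
    t = 1.0
    P = solve_ivp(rhs, (0.0, t), P0, rtol=1e-10, atol=1e-12).y[:, -1]

    P1 = lam[0] * (np.exp(-lam[0]*t) - np.exp(-lam[1]*t)) / (lam[1] - lam[0])
    P2 = (lam[0]*lam[1] / (lam[1] - lam[0])) * (
          (np.exp(-lam[0]*t) - np.exp(-lam[2]*t)) / (lam[2] - lam[0])
        - (np.exp(-lam[1]*t) - np.exp(-lam[2]*t)) / (lam[2] - lam[1]))
    print(P[1], P1)    # numerical and closed-form P1(t) should match
    print(P[2], P2)    # likewise for P2(t)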

As a third example of the time-dependent solution to the birth-death equations, let us consider a pure death process in which a population is initiated with, say, N members and all that can happen to this population is that members die; none are born. Thus λ_k = 0 for all k, and μ_k = μ ≠ 0 for k = 1, 2, ..., N. For this constant death-rate process we have

dP_k(t)/dt = −μP_k(t) + μP_{k+1}(t),   0 < k < N
dP_N(t)/dt = −μP_N(t),   k = N
dP_0(t)/dt = μP_1(t),   k = 0

Proceeding as earlier and using induction we obtain the solution

P_k(t) = [(μt)^{N−k}/(N − k)!] e^{−μt},   0 < k ≤ N        (2.150)
dP_0(t)/dt = [μ(μt)^{N−1}/(N − 1)!] e^{−μt},   k = 0
Note the similarity of this last result to the Erlang distribution .


The last case we consider is a birth-death process in which all birth coefficients are equal to λ for k ≥ 0 and all death coefficients are equal to μ for k ≥ 1. This birth-death process with constant coefficients is of primary importance and forms perhaps the simplest interesting model of a queueing system. It is the celebrated M/M/1 queue; recall that the notation denotes a single-server queue with a Poisson arrival process and an exponential distribution for service time (from our earlier discussion we recognize that this is the memoryless system). Thus we may say

M/M/1:   λ_k = λ,  k = 0, 1, 2, ...;   μ_k = μ,  k = 1, 2, ...

A(t) = 1 − e^{−λt}
B(x) = 1 − e^{−μx}        (2.151)

It should be clear why A(t) is of exponential form from our earlier discussion relating the exponential interarrival distribution with the Poisson arrival process. In a similar fashion, since the death rate is constant (μ_k = μ, k = 1, 2, ...), the same reasoning leads to the observation that the time between deaths is also exponentially distributed (in this case with a parameter μ). However, deaths correspond in the queueing system to service completions and, therefore, the service-time distribution B(x) must be of exponential form. The interpretation of the condition μ_0 = 0, which says that the death rate is zero when the population size is zero, corresponds in our queueing system to the condition that no service may take place when no customers are present. The behavior of the system M/M/1 will be studied throughout this text as we introduce new methods and new measures of performance; we will constantly check our sophisticated advanced techniques against this example since it affords one of the simplest applications of many of these advanced methods. Moreover, much of the behavior manifested in this system is characteristic of more complex queueing system behavior, and so a careful study here will serve to familiarize the reader with some important queueing phenomena.
Now for our first exposure to the M/M/1 system behavior. From the general equation for P_k(t) given in Eq. (2.127) we find for this case that the corresponding differential-difference equations are

dP_k(t)/dt = −(λ + μ)P_k(t) + λP_{k−1}(t) + μP_{k+1}(t),   k ≥ 1        (2.152)
dP_0(t)/dt = −λP_0(t) + μP_1(t),   k = 0

Many methods are available for solving this set of equations. Here, we choose to use the method of z-transforms developed in Appendix I. We have already seen one application of this method earlier in this chapter [when we defined the transform in Eq. (2.60) and applied it to the system of equations (2.55) to obtain the algebraic equation (2.61)]. Recall that the steps involved in applying the method of z-transforms to the solution of a set of difference equations may be summarized as follows:

1. Multiply the kth equation by z^k.
2. Sum all those equations that have the same form (typically true for k = K, K + 1, ...).
3. In this single equation, attempt to identify the z-transform for the unknown function. If all but a finite set of terms for the transform are present, then add the missing terms to get the function and then explicitly subtract them out in the equation.
4. Make use of the K "boundary" equations (namely, those that were omitted in step 2 above for k = 0, 1, ..., K − 1) to eliminate unknowns in the transformed equation.
5. Solve for the desired transform in the resulting algebraic, matrix, or differential* equation. Use the conservation relationship, Eq. (2.122), to eliminate the last unknown term.†
6. Invert the solution to get an explicit solution in terms of k.
7. If step 6 cannot be carried out, then moments may be obtained by differentiating with respect to z and setting z = 1.

* We sometimes obtain a differential equation at this stage if our original set of difference equations was, in fact, a set of differential-difference equations. When this occurs, we are effectively back to step 1 of this procedure as far as the differential variable (usually time) is concerned. We then proceed through steps 1-5 a second time using the Laplace transform for this new variable; our transform multiplier becomes e^{−st}, our sums become integrals, and our "tricks" become the properties associated with Laplace transforms (see Appendix I). Similar "returns to step 1" occur whenever a function of more than one variable is transformed; for each discrete variable we require a z-transform and, for each continuous variable, we require a Laplace transform.
† When additional unknowns remain, we must appeal to the analyticity of the transform and observe that in its region of analyticity the transform must have a zero to cancel each pole (singularity) if the transform is to remain bounded. These additional conditions completely remove any remaining unknowns. This procedure will often be used and explained in the next few chapters.

Let us apply this method to Eq. (2.152). First we define the time-dependent transform

P(z, t) ≜ Σ_{k=0}^∞ P_k(t) z^k        (2.153)

Next we multiply the kth differential equation by z^k (step 1) and then sum over all permitted k (k = 1, 2, ...) (step 2) to yield a single differential equation for the z-transform of P_k(t):

Σ_{k=1}^∞ [dP_k(t)/dt] z^k = −(λ + μ) Σ_{k=1}^∞ P_k(t) z^k + λ Σ_{k=1}^∞ P_{k−1}(t) z^k + μ Σ_{k=1}^∞ P_{k+1}(t) z^k

Property 14 from Table I.1 in Appendix I permits us to move the differentiation operator outside the summation sign in this last equation. This summation then appears very much like P(z, t) as defined above, except that it is missing the term for k = 0; the same is true of the first summation on the right-hand side of this last equation. In these two cases we need merely add and subtract the term P_0(t)z⁰, which permits us to form the transform we are seeking. The second summation on the right-hand side is clearly λzP(z, t), since it contains an extra factor of z but no missing terms. The last summation is missing a factor of z as well as the first two terms of this sum. We have now

carried out step 3, which yields

∂/∂t [P(z, t) − P_0(t)] = −(λ + μ)[P(z, t) − P_0(t)] + λz P(z, t) + (μ/z)[P(z, t) − P_0(t) − P_1(t)z]        (2.154)

The equation for k = 0 has so far not been used, and we now apply it as described in step 4 (K = 1), which permits us to eliminate certain terms in Eq. (2.154):

∂P(z, t)/∂t = −λP(z, t) − μ[P(z, t) − P_0(t)] + λzP(z, t) + (μ/z)[P(z, t) − P_0(t)]

Rearranging this last equation we obtain the following linear, first-order (partial) differential equation for P(z, t):

z ∂P(z, t)/∂t = (1 − z)[(μ − λz)P(z, t) − μP_0(t)]        (2.155)

This differential equation requires further transforming, as mentioned in the first footnote to step 5. We must therefore define the Laplace transform for our function P(z, t) as follows*:

P*(z, s) ≜ ∫_{0+}^∞ e^{−st} P(z, t) dt        (2.156)

Returning to step 1, applying this transform to Eq. (2.155), and taking advantage of property 11 in Table I.3 in Appendix I, we obtain

z[sP*(z, s) − P(z, 0+)] = (1 − z)[(μ − λz)P*(z, s) − μP_0*(s)]        (2.157)

where we have defined P_0*(s) to be the Laplace transform of P_0(t), that is,

P_0*(s) ≜ ∫_{0+}^∞ e^{−st} P_0(t) dt        (2.158)

We have now transformed the set of differential-difference equations for P_k(t) both on the discrete variable k and on the continuous variable t. This has led us to Eq. (2.157), which is a simple algebraic equation in our twice-transformed function P*(z, s), and this we may write as

P*(z, s) = [z P(z, 0+) − μ(1 − z)P_0*(s)] / [sz − (1 − z)(μ − λz)]        (2.159)

* For convenience we take the lower limit of integration to be 0+ rather than our usual convention of using 0− with the nonnegative random variables we often deal with. As a consequence, we must include the initial condition P(z, 0+) in Eq. (2.157).

Let us carry this argument just a bit further. From the definition in Eq. (2.153) we see that

P(z, 0+) = Σ_{k=0}^∞ P_k(0+) z^k        (2.160)

Of course, P_k(0+) is just our initial condition; whereas earlier we took the simple point of view that the system was empty at time 0 [that is, P_0(0+) = 1 and all other terms P_k(0+) = 0 for k ≠ 0], we now generalize and permit i customers to be present at time 0, that is,

P_k(0+) = { 1,  k = i;  0,  k ≠ i }        (2.161)

When i = 0 we have our original initial condition. Substituting Eq. (2.161) into Eq. (2.160) we see immediately that

P(z, 0+) = z^i

which we may place into Eq. (2.159) to obtain

P*(z, s) = [z^{i+1} − μ(1 − z)P_0*(s)] / [sz − (1 − z)(μ − λz)]        (2.162)

We are almost finished with step 5 except for the fact that the unknown function P_0*(s) appears in our equation. The second footnote to step 5 tells us how to proceed. From here on the analysis becomes a bit complex, and it is beyond our desire at this point to continue the calculation; instead we relegate the excruciating details to the exercises below (see Exercise 2.20). It suffices to say that P_0*(s) is determined through the denominator roots of Eq. (2.162), which then leaves us with an explicit expression for our double transform. We are now at step 6 and must attempt to invert on both the transform variables; the exercises require the reader to show that the result of this inversion yields the final solution for our transient analysis, namely,

P_k(t) = e^{−(λ+μ)t} [ ρ^{(k−i)/2} I_{k−i}(at) + ρ^{(k−i−1)/2} I_{k+i+1}(at) + (1 − ρ)ρ^k Σ_{j=k+i+2}^∞ ρ^{−j/2} I_j(at) ]        (2.163)

where

ρ ≜ λ/μ        (2.164)

a ≜ 2μρ^{1/2}        (2.165)

and

I_k(x) ≜ Σ_{j=0}^∞ (x/2)^{k+2j} / [(k + j)! j!]        (2.166)

where I_k(x) is the modified Bessel function of the first kind of order k. This last expression is most disheartening. What it has to say is that an appropriate model for the simplest interesting queueing system (discussed further in the next chapter) leads to an ugly expression for the time-dependent behavior of its state probabilities. As a consequence, we can only hope for greater complexity and obscurity in attempting to find time-dependent behavior of more general queueing systems.
More will be said about time-dependent results later in the text. Our main purpose now is to focus upon the equilibrium behavior of queueing systems rather than upon their transient behavior (which is far more difficult). In the next chapter the equilibrium behavior for birth-death queueing systems will be studied, and in Chapter 4 more general Markovian queues in equilibrium will be considered. Only when we reach Chapter 5, Chapter 8, and then Chapter 2 (Volume II) will the time-dependent behavior be considered again. Let us now proceed to the simplest equilibrium behavior.
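Before doing so, note that despite its appearance Eq. (2.163) is straightforward to evaluate numerically. The sketch below (our aside, not the text's) uses SciPy's exponentially scaled Bessel function ive to keep the products e^{−(λ+μ)t} I_j(at) from overflowing; the infinite sum is truncated at jmax terms, which must grow with t:

    import numpy as np
    from scipy.special import ive   # ive(v, x) = I_v(x) * exp(-x), overflow-safe

    def mm1_transient(k, t, lam, mu, i=0, jmax=400):
        """P[k in system at time t] for M/M/1 with i customers at t = 0, Eq. (2.163)."""
        rho = lam / mu                      # Eq. (2.164)
        a = 2.0 * mu * np.sqrt(rho)         # Eq. (2.165); a = 2*sqrt(lam*mu) <= lam + mu
        scale = np.exp((a - lam - mu) * t)  # e^{-(lam+mu)t} I_j(at) = ive(j, a*t) * scale
        tail = sum(rho ** (-j / 2.0) * ive(j, a * t)
                   for j in range(k + i + 2, k + i + 2 + jmax))
        return scale * (rho ** ((k - i) / 2.0) * ive(abs(k - i), a * t)
                        + rho ** ((k - i - 1) / 2.0) * ive(k + i + 1, a * t)
                        + (1.0 - rho) * rho ** k * tail)

    # For rho < 1 the transient settles to the equilibrium value (1 - rho) * rho^k:
    print(mm1_transient(k=2, t=200.0, lam=0.5, mu=1.0))   # ~ 0.125

For ρ < 1 and large t the value settles to (1 − ρ)ρ^k, the equilibrium result derived in the next chapter.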

EXERCISES
2.1. Consider K independent sources of customers where the interarrival time between customers for each source is exponentially distributed with parameter λ_k (i.e., each source is a Poisson process). Now consider the arrival stream which is formed by merging the input from each of the K sources defined above. Prove that this merged stream is also Poisson with parameter λ = λ₁ + λ₂ + ⋯ + λ_K.

2.2. Referring back to the previous problem, consider this merged Poisson stream and now assume that we wish to break it up into several branches. Let p_i be the probability that a customer from the merged stream is assigned to the substream i. If the overall rate is λ customers per second, and if the substream probabilities p_i are chosen for each customer independently, then show that each of these substreams is a Poisson process with rate λp_i.

2.3. Let {X_j} be a sequence of identically distributed, mutually independent Bernoulli random variables (with P[X_j = 1] = p and P[X_j = 0] = 1 − p). Let S_N = X₁ + ⋯ + X_N be the sum of a random number N of these variables, where N follows a Poisson distribution with


(c) Solve for the equilibrium probability vector π.
(d) What is the mean recurrence time for state E₂?
(e) For which values of α and β will we have π₁ = π₂ = π₃? (Give a physical interpretation of this case.)

2.6. Consider the discrete-state, discrete-time Markov chain whose transition probability matrix is given by
(a) Find the stationary state probability vector π.
(b) Find [I − zP]⁻¹.
(c) Find the general form for Pⁿ.

2.7. Consider a Markov chain with states E₀, E₁, E₂, ... and with transition probabilities

p_{ij} = Σ_{n=0}^{i} (i choose n) p^n q^{i−n} e^{−λ} λ^{j−n} / (j − n)!

where p + q = 1 (0 < p < 1).
(a) Is this chain irreducible? Periodic? Explain.
(b) We wish to find π_i = equilibrium probability of E_i. Write π_i in terms of p_{ji} and π_j for j = 0, 1, 2, ....
(c) From (b) find an expression relating P(z) to P[1 + p(z − 1)], where

P(z) = Σ_{i=0}^∞ π_i z^i

(d) Recursively (i.e., repeatedly) apply the result in (c) to itself and show that the nth recursion gives

P(z) = e^{λ(z−1)[1 + p + p² + ⋯ + p^{n−1}]} P[1 + p^n(z − 1)]

(e) From (d) find P(z) and then recognize π_i.

Show that any point in or on the equilateral triangle of unit height


shown in Figure 2.6 represents a three-component probability vector
in the sense that the sum of the distances from any such point to each
of the three sides must always equal unity.


2.9. Consider a pure birth process with constant birth rate λ. Let us consider an interval of length T, which we divide up into m segments each of length T/m. Define Δt = T/m.
(a) For Δt small, find the probability that a single arrival occurs in each of exactly k of the m intervals and that no arrivals occur in the remaining m − k intervals.
(b) Consider the limit as Δt → 0, that is, as m → ∞ for fixed T, and evaluate the probability P_k(T) that exactly k arrivals occur in the interval of length T.

2.10.

Con sider a population of bacteria of size N(t) at time t for which


N (O) = I. We consider this to be a pure birth process in which any
member of the population will split into two new members in the
interval (t , t + t1t) with probability ;. t1t + o(t1t) or will remain
unchanged in this inter val with probability I - ;. t1t + o (t1t) as
t1t ->- O.
(a) Let Pk(t) = P[N(t) = k] and write down the set of differentialdifference equations that must be satisfied by these probabilities.
(b) Fr om part (a) show that the a-transform P (z, t) for N (t ) must
satisfy
P( z, t)

ze-.t'
.t .

1- z + ze: ,

(e) Find E[ N (t)] .


(d) Solve for Pk(t) .
(e) Solve for P (z, r), E[N(t)] and Pk(t) that satisfy the initial cond it ion N( O) = n ;::: I.
(f) Consider the corre sponding determ inistic problem in which each
bacterium splits into two every I I). sec and compare with the
answer in part (c).
2.11. Consider a birth-death process with coefficients

λ_k = { λ,  k = 0;  0,  k ≠ 0 }        μ_k = { μ,  k = 1;  0,  k ≠ 1 }

which corresponds to an M/M/1 queueing system where there is no room for waiting customers.
(a) Give the differential-difference equations for P_k(t) (k = 0, 1).
(b) Solve these equations and express the answers in terms of P_0(0) and P_1(0).

2.12. Consider a birth-death queueing system in which

λ_k = λ,   k ≥ 0
μ_k = kμ,   k ≥ 0

(a) For all k, find the differential-difference equations for

P_k(t) = P[k in system at time t]

(b) Define the z-transform

P(z, t) = Σ_{k=0}^∞ P_k(t) z^k

and find the partial differential equation that P(z, t) must satisfy.
(c) Show that the solution to this equation is

P(z, t) = exp[(λ/μ)(1 − e^{−μt})(z − 1)]

with the initial condition P_0(0) = 1.
(d) Comparing the solution in part (c) with Eq. (2.134), give the expression for P_k(t) by inspection.
(e) Find the limiting values for these probabilities as t → ∞.

2.13. Consider a system in which the birth rate decreases and the death rate increases as the number in the system k increases, that is,

λ_k = { (K − k)λ,  k ≤ K;  0,  k > K }        μ_k = kμ

Write down the differential-difference equations for P_k(t) = P[k in system at time t].


2.14. Consider the case of a linear birth-death process in which λ_k = kλ and μ_k = kμ.
(a) Find the partial-differential equation that must be satisfied by P(z, t) as defined in Eq. (2.153).
(b) Assuming that the population size is one at time zero, show that the function that satisfies the equation in part (a) is

P(z, t) = [μ(1 − z) − (μ − λz)e^{−(λ−μ)t}] / [λ(1 − z) − (μ − λz)e^{−(λ−μ)t}]

(c) Expanding P(z, t) in a power series show that

P_k(t) = [1 − α(t)][1 − β(t)][β(t)]^{k−1},   k = 1, 2, ...
P_0(t) = α(t)

and find α(t) and β(t).
(d) Find the mean and variance for the number in system at time t.
(e) Find the limiting probability that the population dies out by time t for t → ∞.

2.15. Consider a linear birth-death process for which λ_k = kλ + α and μ_k = kμ.
(a) Find the differential-difference equations that must be satisfied by P_k(t).
(b) From (a) find the partial-differential equation that must be satisfied by the time-dependent transform defined as

P(z, t) = Σ_{k=0}^∞ P_k(t) z^k

(c) What is the value of P(1, t)? Give a verbal interpretation for the expression

N̄(t) = lim_{z→1} ∂P(z, t)/∂z

(d) Assuming that the population size begins with i members at time 0, find an ordinary differential equation for N̄(t) and then solve for N̄(t). Consider the case λ = μ as well as λ ≠ μ.
(e) Find the limiting value for N̄(t) in the case λ < μ (as t → ∞).

2.16. Consider the equations of motion in Eq. (2.148) and define the Laplace transform

P_k*(s) ≜ ∫_0^∞ P_k(t) e^{−st} dt

For our initial condition we will assume P_0(t) = 1 for t = 0. Transform Eq. (2.148) to obtain a set of linear difference equations in {P_k*(s)}.
(a) Show that the solution to the set of equations is

P_k*(s) = [ ∏_{i=0}^{k−1} λ_i ] / [ ∏_{i=0}^{k} (s + λ_i) ]

(b) From (a) find P_k(t) for the case λ_i = λ (i = 0, 1, 2, ...).

2.17. Consider a time interval (0, t) during which a Poisson process generates arrivals at an average rate λ. Derive Eq. (2.147) by considering the two events: exactly k − 1 arrivals occur in the interval (0, t − Δt) and the event that exactly one arrival occurs in the interval (t − Δt, t). Considering the limit as Δt → 0 we immediately arrive at our desired result.

2.18.

A barber opens up for business at t = O. Customers arrive at random


in a Poisson fashion ; that is, the pdf of interarrival time is aCt) =
Ae- lt Each haircut takes X sec (where X is some random variable).
Find the probability P that the second arriving customer will not have
to wait and also find W, the average value of his waiting time for the
two following cases :
i. X = c = constant.
ii. X is exponentially distributed with pdf:
b( x)

2.19.

= pe-~'

2.19. At t = 0 customer A places a request for service and finds all m servers busy and n other customers waiting for service in an M/M/m queueing system. All customers wait as long as necessary for service, waiting customers are served in order of arrival, and no new requests for service are permitted after t = 0. Service times are assumed to be mutually independent, identical, exponentially distributed random variables, each with mean duration 1/μ.
(a) Find the expected length of time customer A spends waiting for service in the queue.
(b) Find the expected length of time from the arrival of customer A at t = 0 until the system becomes completely empty (all customers complete service).
(c) Let X be the order of completion of service of customer A; that is, X = k if A is the kth customer to complete service after t = 0. Find P[X = k] (k = 1, 2, ..., m + n + 1).
(d) Find the probability that customer A completes service before the customer immediately ahead of him in the queue.
(e) Let W be the amount of time customer A waits for service. Find P[W > x].

2.20. In this problem we wish to proceed from Eq. (2.162) to the transient solution in Eq. (2.163). Since P*(z, s) must converge in the region |z| ≤ 1 for Re(s) > 0, then, in this region, the zeros of the denominator in Eq. (2.162) must also be zeros of the numerator.
(a) Find those two values of z that give the denominator zeros, and denote them by α₁(s), α₂(s), where |α₂(s)| < |α₁(s)|.
(b) Using Rouché's theorem (see Appendix I) show that the denominator of P*(z, s) has a single zero within the unit disk |z| ≤ 1.
(c) Requiring that the numerator of P*(z, s) vanish at z = α₂(s) from our earlier considerations, find an explicit expression for P_0*(s).
(d) Write P*(z, s) in terms of α₁(s) = α₁ and α₂(s) = α₂. Then show that this equation may be reduced to

P*(z, s) = [ (z^i + α₂z^{i−1} + ⋯ + α₂^i) + α₂^{i+1}/(1 − α₂) ] / [ λα₁(1 − z/α₁) ]

(e) Using the fact that |α₂| < 1 and that α₁α₂ = μ/λ, show that the inversion on z yields an explicit expression for P_k*(s), the Laplace transform for our transient probabilities P_k(t).
(f) In what follows we take advantage of property 4 in Table I.3 and also we make use of the following transform pair:

k ρ^{k/2} t^{−1} I_k(at) ⇔ [ (s + √(s² − 4λμ)) / 2λ ]^{−k}

where ρ and a are as defined in Eqs. (2.164), (2.165) and where I_k(x) is the modified Bessel function of the first kind of order k as defined in Eq. (2.166). Using these facts and the simple relations among Bessel functions, namely,

I_{k−1}(x) − I_{k+1}(x) = (2k/x) I_k(x)

show that Eq. (2.163) is the inverse transform for the expression shown in part (e).
2.21. The random variables X₁, X₂, ..., X_i, ... are independent, identically distributed random variables each with density f_X(x) and characteristic function Φ_X(u) = E[e^{juX}]. Consider a Poisson process N(t) with parameter λ which is independent of the random variables X_i. Consider now a second random process of the form

X(t) = Σ_{i=1}^{N(t)} X_i

This second random process is clearly a family of staircase functions where the jumps occur at the discontinuities of the random process N(t); the magnitudes of such jumps are given by the random variables X_i. Show that the characteristic function of this second random process is given by

Φ_{X(t)}(u) = e^{λt[Φ_X(u) − 1]}

2.22. Passengers and taxis arrive at a service point from independent Poisson processes at rates λ, μ, respectively. Let the queue size at time t be q_t, a negative value denoting a line of taxis, a positive value denoting a queue of passengers. Show that, starting with q₀ = 0, the distribution of q_t is given by the difference between independent Poisson variables of means λt, μt. Show by using the normal approximation that if λ = μ, the probability that −k ≤ q_t ≤ k is, for large t, (2k + 1)(4πλt)^{−1/2}.

PART II

ELEMENTARY QUEUEING THEORY

Elementary here means that all the systems we consider are pure Markovian and, therefore, our state description is convenient and manageable. In Part I we developed the time-dependent equations for the behavior of birth-death processes; here in Chapter 3 we address the equilibrium solution for these systems. The key equation in this chapter is Eq. (3.11), and the balance of the material is the simple application of that formula. It, in fact, is no more than the solution to the equation π = πP derived in Chapter 2. The key tool used here is again that which we find throughout the text, namely, the calculation of flow rates across the boundaries of a closed system. In the case of equilibrium we merely ask that the rate of flow into be equal to the rate of flow out of a system. The application of these basic results is more than just an exercise, for it is here that we first obtain some equations of use in engineering and designing queueing systems. The classical M/M/1 queue is studied and some of its important performance measures are evaluated. More complex models involving finite storage, multiple servers, finite customer population, and the like, are developed in the balance of this chapter. In Chapter 4 we leave the birth-death systems and allow more general Markovian queues, once again to be studied in equilibrium. We find that the techniques here are similar to our earlier ones, but find that no general solution such as Eq. (3.11) is available; each system is a case unto itself, and so we are rapidly led into the solutions of difference equations, which force us to look carefully at the method of z-transforms for these solutions. The ingenious method of stages introduced by Erlang is considered here and its generality discussed. At the end of the chapter we introduce (for later use in Volume II) networks of Markovian queues, in which we take exquisite advantage of the memoryless properties that Markovian queues provide even in a network environment. At this point, however, we have essentially exhausted the use of the memoryless distribution and we must depart from that crutch in the following parts.

3
Birth-Death Queueing Systems
in Equilibrium

In the previous chapter we studied a variety of stochastic processes. We indicated that Markov processes play a fundamental role in the study of queueing systems, and after presenting the main results from that theory, we then considered a special form of Markov process known as the birth-death process. We also showed that birth-death processes enjoy a most convenient property, namely, that the time between births and the time between deaths (when the system is nonempty) are each exponentially distributed.* We then developed Eq. (2.127), which gives the basic equations of motion for the general birth-death process with stationary birth and death rates.† The solution of this set of equations gives the transient behavior of the queueing process, and some important special cases were discussed earlier. In this chapter we study the limiting form of these equations to obtain the equilibrium behavior of birth-death queueing systems.
The importance of elementary queueing theory comes from its historical influence as well as its ability to describe behavior that is to be found in more complex queueing systems. The methods of analysis to be used in this chapter in large part do not carry over to the more involved queueing situations; nevertheless, the obtained results do provide insight into the basic behavior of many of these other queueing systems.
It is necessary to keep in mind how the birth-death process describes queueing systems. As an example, consider a doctor's office made up of a waiting room (in which a queue is allowed to form, unfortunately) and a service facility consisting of the doctor's examination room. Each time a patient enters the waiting room from outside the office we consider this to be an arrival to the queueing system; on the other hand, this arrival may well be considered to be a birth of a new member of a population, where the population consists of all patients present. In a similar fashion, when a patient leaves

* This comes directly from the fact that they are Markov processes.
† In addition to these equations, one requires the conservation relation given in Eq. (2.122) and a set of initial conditions {P_k(0)}.

the office after being treated, he is considered to be a departure from the queueing system; in terms of a birth-death process this is considered to be a death of a member of the population.
We have considerable freedom in constructing a large number of queueing systems through the choice of the birth coefficients λ_k and death coefficients μ_k, as we shall see shortly. First, let us establish the general solution for the equilibrium behavior.

3.1.

GENERAL EQUILIBRIUM SOLUTION

As we saw in Chapter 2 the time-dependent solution of the birth-death


system quickly becomes unmanageable when we consider any sophisticated
set of birth-death coefficients. Furthermore, were we always capable of
solving for Pk(t) it is not clear how useful that set of functions would be in
aiding our understanding of the behavior of these queueing systems (too
much information is sometimes a curse!). Consequently, it is natural for us to
ask whether the probabilities Pk(t) eventually settle down as t gets large and
display no more "transient" behavior. This inquiry on our pa rt is analogous
to the questions we asked regarding the existence of π_k in the limit of π_k(t) as t → ∞. For our queueing studies here we choose to denote the limiting probability as P_k rather than π_k, purely for convenience. Accordingly, let

P_k ≜ lim_{t→∞} P_k(t)        (3.1)

where P_k is interpreted as the limiting probability that the system contains k members (or equivalently is in state E_k) at some arbitrary time in the distant future. The question regarding the existence of these limiting probabilities is of concern to us, but will be deferred at this point until we obtain the general steady-state or limiting solution. It is important to understand that whereas P_k (assuming it exists) is no longer a function of t, we are not claiming that the process does not move from state to state in this limiting case; certainly, the number of members in the population will change with time, but the long-run probability of finding the system with k members will be properly described by P_k.
Accepting the existence of the limit in Eq. (3.1), we may then set lim dP_k(t)/dt as t → ∞ equal to zero in the Kolmogorov forward equations (of motion) for the birth-death system [given in Eqs. (2.127)] and immediately obtain the result

0 = −(λ_k + μ_k)P_k + λ_{k−1}P_{k−1} + μ_{k+1}P_{k+1},   k ≥ 1        (3.2)
0 = −λ_0 P_0 + μ_1 P_1,   k = 0        (3.3)
The annoying task of providing a separate equation for k = 0 may be overcome by agreeing once and for all that the following birth and death coefficients are identically equal to 0:

λ_{−1} = λ_{−2} = λ_{−3} = ⋯ = 0
μ_0 = μ_{−1} = μ_{−2} = ⋯ = 0

Furthermore, since it is perfectly clear that we cannot have a negative number of members in our population, we will, in most cases, adopt the convention that

P_{−1} = P_{−2} = P_{−3} = ⋯ = 0

Thus, for all values of k, we may reformulate Eqs. (3.2) and (3.3) into the following set of difference equations for k = ..., −2, −1, 0, 1, 2, ...:

0 = −(λ_k + μ_k)P_k + λ_{k−1}P_{k−1} + μ_{k+1}P_{k+1}        (3.4)

We also require the conservation relation

Σ_{k=0}^∞ P_k = 1        (3.5)

Recall from the previous chapter that the limit given in Eq. (3.1) is independent of the initial conditions.
Just as we used the state-transition-rate diagram as an inspection technique for writing down the equations of motion in Chapter 2, so may we use the same concept in writing down the equilibrium equations [Eqs. (3.2) and (3.3)] directly from that diagram. In this equilibrium case it is clear that flow must be conserved in the sense that the input flow must equal the output flow from a given state. For example, if we look at Figure 2.9 once again and concentrate on state E_k in equilibrium, we observe that

flow rate into E_k = λ_{k−1}P_{k−1} + μ_{k+1}P_{k+1}

and

flow rate out of E_k = (λ_k + μ_k)P_k

In equilibrium these two must be the same, and so we have immediately

λ_{k−1}P_{k−1} + μ_{k+1}P_{k+1} = (λ_k + μ_k)P_k        (3.6)

But this last is just Eq. (3.4) again! By inspection we have established the equilibrium difference equations for our system. The same comments apply here as applied earlier regarding the conservation of flow across any closed boundary; for example, rather than surrounding each state and writing down its equation we could choose a sequence of boundaries the first of which surrounds E_0, the second of which surrounds E_0 and E_1, and so on, each time adding the next higher-numbered state to get a new "boundary." In such an example the kth boundary (which surrounds states E_0, E_1, ..., E_{k−1}) would lead to the following simple conservation of flow relationship:

λ_{k−1}P_{k−1} = μ_k P_k        (3.7)

This last set of equations is equivalent to drawing a vertical line separating adjacent states and equating flows across this boundary; this set of difference equations is equivalent to our earlier set.
The solution for P_k in Eq. (3.4) may be obtained by at least two methods. One way is first to solve for P_1 in terms of P_0 by considering the case k = 0, that is,

P_1 = (λ_0/μ_1) P_0        (3.8)

We may then consider Eq. (3.4) for the case k = 1 and, using Eq. (3.8), obtain

0 = −(λ_1 + μ_1)P_1 + λ_0 P_0 + μ_2 P_2
0 = −(λ_1 + μ_1)(λ_0/μ_1)P_0 + λ_0 P_0 + μ_2 P_2
0 = −(λ_1 λ_0/μ_1)P_0 + μ_2 P_2

and so

P_2 = (λ_0 λ_1 / μ_1 μ_2) P_0        (3.9)

If we examine Eqs. (3.8) and (3.9) we may justifiably guess that the general solution to Eq. (3.4) must be

P_k = (λ_0 λ_1 ⋯ λ_{k−1} / μ_1 μ_2 ⋯ μ_k) P_0        (3.10)

To validate this assertion we need merely use the inductive argument and apply Eq. (3.10) to Eq. (3.4), solving for P_{k+1}. Carrying out this operation we do, in fact, find that (3.10) is the solution to the general birth-death process in this steady-state or limiting case. We have thus expressed all equilibrium probabilities P_k in terms of a single unknown constant P_0:

P_k = P_0 ∏_{i=0}^{k−1} (λ_i/μ_{i+1}),   k = 0, 1, 2, ...        (3.11)

(Recall the usual convention that an empty product is unity by definition.) Equation (3.5) provides the additional condition that allows us to determine P_0; thus, summing over all k, we obtain

P_0 = 1 / [ 1 + Σ_{k=1}^∞ ∏_{i=0}^{k−1} (λ_i/μ_{i+1}) ]        (3.12)

This "product" solution for P_k (k = 0, 1, 2, ...), simply obtained, is a principal equation in elementary queueing theory and, in fact, is the point of departure for all of our further solutions in this chapter.
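As a computational aside (not part of the text), Eqs. (3.11) and (3.12) translate directly into a few lines of code; truncating the state space at some large K approximates P_0 for any ergodic choice of coefficients. The function name is ours:

    import numpy as np

    def birth_death_equilibrium(lam, mu, K=2000):
        """Equilibrium probabilities p_0..p_K from Eqs. (3.11)-(3.12).

        lam(k): birth rate in state k; mu(k): death rate in state k (k >= 1).
        The infinite sum of Eq. (3.12) is truncated at K states; the system
        is assumed ergodic so that the truncated tail is negligible.
        """
        p = np.empty(K + 1)
        p[0] = 1.0
        for k in range(1, K + 1):
            p[k] = p[k - 1] * lam(k - 1) / mu(k)   # running product of lam_i/mu_{i+1}
        return p / p.sum()                          # normalize so the sum is 1

    # Example: M/M/1 with lam = 0.5, mu = 1.0; compare with Eq. (3.23) below
    p = birth_death_equilibrium(lambda k: 0.5, lambda k: 1.0)
    print(p[:4])          # ~ [0.5, 0.25, 0.125, 0.0625] = (1 - rho) * rho^k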
A second easy way to obtain the solution to Eq. (3.4) is to rewrite that equation as follows:

λ_k P_k − μ_{k+1} P_{k+1} = λ_{k−1} P_{k−1} − μ_k P_k        (3.13)

Defining

g_k ≜ λ_k P_k − μ_{k+1} P_{k+1}        (3.14)

we have from Eq. (3.13) that

g_k = g_{k−1}        (3.15)

Clearly Eq. (3.15) implies that

g_k = constant with respect to k        (3.16)

However, since λ_{−1} = P_{−1} = 0, Eq. (3.14) gives

g_{−1} = 0

and so the constant in Eq. (3.16) must be 0. Setting g_k equal to 0, we immediately obtain from Eq. (3.14)

P_{k+1} = (λ_k/μ_{k+1}) P_k        (3.17)

Solving Eq. (3.17) successively beginning with k = 0, we obtain the earlier solution, namely, Eqs. (3.11) and (3.12).
We now address ourselves to the existence of the steady-state probabilities P_k given by Eqs. (3.11) and (3.12). Simply stated, in order for those expressions to represent a probability distribution, we usually require that P_0 > 0. This clearly places a condition upon the birth and death coefficients in those equations. Essentially, what we are requiring is that the system occasionally empties; that this is a condition for stability seems quite reasonable when one interprets it in terms of real-life situations.* More precisely, we may classify the possibilities by first defining the two sums

S₁ ≜ Σ_{k=0}^∞ ∏_{i=0}^{k−1} (λ_i/μ_{i+1})        (3.18)

S₂ ≜ Σ_{k=0}^∞ 1 / [ λ_k ∏_{i=0}^{k−1} (λ_i/μ_{i+1}) ]        (3.19)

* It is easy to construct counterexamples to this case, and so we require the precise arguments which follow.

All states E_k of our birth-death process will be ergodic if and only if

Ergodic:   S₁ < ∞,   S₂ = ∞

On the other hand, all states will be recurrent null if and only if

Recurrent null:   S₁ = ∞,   S₂ = ∞

Also, all states will be transient if and only if

Transient:   S₁ = ∞,   S₂ < ∞

It is the ergodic case that gives rise to the equilibrium probabilities {P_k} and that is of most interest to our studies. We note that the condition for ergodicity is met whenever the sequence {λ_k/μ_k} remains below unity from some k onwards, that is, if there exists some k₀ such that for all k ≥ k₀ we have

λ_k/μ_k < 1        (3.20)

We will find this to be true in most of the queueing systems we study.
We are now ready to apply our general solution as given in Eqs. (3.11) and (3.12) to some very important special cases. Before we launch headlong into that discussion, let us put at ease those readers who feel that the birth-death constraints of permitting only nearest-neighbor transitions are too confining. It is true that the solution given in Eqs. (3.11) and (3.12) applies only to nearest-neighbor birth-death processes. However, rest assured that the equilibrium methods we have described can be extended to more general than nearest-neighbor systems; these generalizations are considered in Chapter 4.
3.2. M/M/1: THE CLASSICAL QUEUEING SYSTEM

As mentioned in Chapter 2, the celebrated M/M/1 queue is the simplest nontrivial interesting system and may be described by selecting the birth-death coefficients as follows:

λ_k = λ,   k = 0, 1, 2, ...
μ_k = μ,   k = 1, 2, 3, ...

That is, we set all birth* coefficients equal to a constant λ and all death coefficients equal to a constant μ. We further assume that infinite queueing space is provided and that customers are served in a first-come-first-served fashion (although this last is not necessary for many of our results). For this important example the state-transition-rate diagram is as given in Figure 3.1.

Figure 3.1 State-transition-rate diagram for M/M/1.

Applying these coefficients to Eq. (3.11) we have

P_k = P_0 ∏_{i=0}^{k−1} (λ/μ)

or

P_k = P_0 (λ/μ)^k        (3.21)

* In this case, the average interarrival time is t̄ = 1/λ and the average service time is x̄ = 1/μ; this follows since the interarrival time and the service time are both exponentially distributed.
The result is immediate. The conditions for our system to be ergodic (and, therefore, to have an equilibrium solution P_k > 0) are that S₁ < ∞ and S₂ = ∞; in this case the first condition becomes

S₁ = Σ_{k=0}^∞ (λ/μ)^k < ∞

The series on the left-hand side of the inequality will converge if and only if λ/μ < 1. The second condition for ergodicity becomes

S₂ = Σ_{k=0}^∞ (1/λ)(μ/λ)^k = ∞

This last condition will be satisfied if λ/μ ≤ 1; thus the necessary and sufficient condition for ergodicity in the M/M/1 queue is simply λ < μ. In order to solve for P_0 we use Eq. (3.12) [or Eq. (3.5) as suits the reader] and obtain

P_0 = 1 / [ Σ_{k=0}^∞ (λ/μ)^k ]

The sum converges since λ < μ, and so

P_0 = 1 / [ 1/(1 − λ/μ) ]

Thus

P_0 = 1 − ρ        (3.22)

From Eq. (2.29) we have ρ = λ/μ. From our stability conditions, we therefore require that 0 ≤ ρ < 1; note that this insures that P_0 > 0. From Eq. (3.21) we have, finally,

P_k = (1 − ρ)ρ^k,   k = 0, 1, 2, ...        (3.23)

Equation (3.23) is indeed the solution for the steady-state probability of finding k customers in the system.* We make the important observation that P_k depends upon λ and μ only through their ratio ρ.
The solution given by Eq. (3.23) for this fundamental system is graphed in Figure 3.2 for the case of ρ = 1/2. Clearly, this is the geometric distribution (which shares the fundamental memoryless property with the exponential distribution). As we develop the behavior of the M/M/1 queue, we shall continue to see that almost all of its important probability distributions are of the memoryless type.
An important measure of a queueing system is the average number of customers in the system N̄. This is clearly given by

N̄ = Σ_{k=0}^∞ k P_k = (1 − ρ) Σ_{k=0}^∞ k ρ^k

Using the trick similar to the one used in deriving Eq. (2.142) we have

N̄ = (1 − ρ) ρ Σ_{k=0}^∞ ∂ρ^k/∂ρ = (1 − ρ) ρ (∂/∂ρ) [ 1/(1 − ρ) ]

N̄ = ρ/(1 − ρ)        (3.24)

* If we inspect the transient solution for M/M/1 given in Eq. (2.163), we see the term (1 − ρ)ρ^k; the reader may verify that, for ρ < 1, the limit of the transient solution agrees with our solution here.

Figure 3.2 The solution for P_k in the system M/M/1.

The behavior of the expected number in the system is plotted in Figure 3.3. By similar methods we find that the variance of the number in the system is given by

σ_N² = Σ_{k=0}^∞ (k − N̄)² P_k = ρ/(1 − ρ)²        (3.25)

We may now apply Little's result directly from Eq. (2.25) in order to obtain

Figure 3.3 The average number in the system M/M/1.

Figure 3.4 Average delay as a function of ρ for M/M/1.

T, the average time spent in the system, as follows:

T = N̄/λ
T = (1/λ) [ ρ/(1 − ρ) ]
T = (1/μ) / (1 − ρ)        (3.26)

This dependence of average time on the utilization factor ρ is shown in Figure 3.4. The value obtained by T when ρ = 0 is exactly the average service time expected by a customer; that is, he spends no time in queue and 1/μ sec in service on the average.
The behavior given by Eqs. (3.24) and (3.26) is rather dramatic. As ρ approaches unity, both the average number in the system and the average time in the system grow in an unbounded fashion.* Both these quantities have a simple pole at ρ = 1. This type of behavior with respect to ρ as ρ approaches 1 is characteristic of almost every queueing system one can encounter. We will see it again in M/G/1 in Chapter 5 as well as in the heavy traffic behavior of G/G/1 (and also in the tight bounds on G/G/1 behavior) in Volume II, Chapter 2.

* We observe at ρ = 1 that the system behavior is unstable; this is not surprising if one recalls that ρ < 1 was our condition for ergodicity. What is perhaps surprising is that the behavior of the average number N̄ and of the average system time T deteriorates so badly as ρ → 1 from below; we had seen for steady flow systems in Chapter 1 that so long as R < C (which corresponds to the case ρ < 1) no queue formed and smooth, rapid flow proceeded through the system. Here in the M/M/1 queue we find this is no longer true and that we pay an extreme penalty when we attempt to run the system near (but below) its capacity. The intuitive explanation here is that with random flow (e.g., M/M/1) we get occasional bursts of traffic which temporarily overwhelm the server; while it is still true that the server will be idle on the average 1 − ρ = P_0 of the time, this average idle time will not be distributed uniformly within small time intervals but will only be true in the long run. On the other hand, in the steady flow case (which corresponds to our system D/D/1) the system idle time will be distributed quite uniformly in the sense that after every service time (of exactly 1/μ sec) there will be an idle time of exactly (1/λ) − (1/μ) sec. Thus it is the variability in both the interarrival time and in the service time which gives rise to the disastrous behavior near ρ = 1; any reduction in the variation of either random variable will lead to a reduction in the average waiting time, as we shall see again and again.
Another interesting quantity to calculate is the probability of finding at least k customers in the system:

P[≥ k in system] = Σ_{i=k}^∞ P_i = Σ_{i=k}^∞ (1 − ρ)ρ^i

P[≥ k in system] = ρ^k        (3.27)

Thus we see that the probability of exceeding some limit on the number of customers in the system is a geometrically decreasing function of that number and decays very rapidly.
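The M/M/1 measures derived so far are collected in the small helper below (our illustration, not the text's; the function and variable names are ours):

    def mm1_stats(lam, mu):
        """Summary measures for M/M/1, Eqs. (3.23)-(3.27); requires lam < mu."""
        rho = lam / mu
        N = rho / (1 - rho)                 # Eq. (3.24), mean number in system
        var_N = rho / (1 - rho) ** 2        # Eq. (3.25)
        T = (1 / mu) / (1 - rho)            # Eq. (3.26), mean time in system
        p = lambda k: (1 - rho) * rho ** k  # Eq. (3.23)
        tail = lambda k: rho ** k           # Eq. (3.27), P[>= k in system]
        return N, var_N, T, p, tail

    N, var_N, T, p, tail = mm1_stats(0.9, 1.0)
    print(N, T)        # 9.0 and 10.0: the blow-up as rho -> 1 is already evident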
With the tools at hand we are now in a position to develop the probability density function for the time spent in the system. However, we defer that development until we treat the more general case of M/G/1 in Chapter 5 [see Eq. (5.118)]. Meanwhile, we proceed to discuss numerous other birth-death queues in equilibrium.
3.3. DISCOURAGED ARRIVALS

This next example considers a case where arrivals tend to get discouraged when more and more people are present in the system. One possible way to model this effect is to choose the birth and death coefficients as follows:

λ_k = α/(k + 1),   k = 0, 1, 2, ...
μ_k = μ,   k = 1, 2, 3, ...

We are here assuming an harmonic discouragement of arrivals with respect to the number present in the system. The state-transition-rate diagram in this case is as shown in Figure 3.5.

Figure 3.5 State-transition-rate diagram for discouraged arrivals.

We apply Eq. (3.11) immediately to obtain

P_k = P_0 ∏_{i=0}^{k−1} [α/(i + 1)] / μ        (3.28)

P_k = P_0 (α/μ)^k (1/k!)        (3.29)

Solving for P_0 from Eq. (3.12) we have

P_0 = 1 / [ 1 + Σ_{k=1}^∞ (α/μ)^k (1/k!) ]

P_0 = e^{−α/μ}

From Eq. (2.32) we have, therefore,

ρ = 1 − e^{−α/μ}        (3.30)

Note that the ergodic condition here is merely α/μ < ∞. Going back to Eq. (3.29) we have the final solution

P_k = [(α/μ)^k / k!] e^{−α/μ},   k = 0, 1, 2, ...        (3.31)

We thus have a Poisson distribution for the number of customers in the system of discouraged arrivals! From Eqs. (2.131) and (2.132) we have that the expected number in the system is

N̄ = α/μ

In order to calculate T, the average time spent in the system, we may use Little's result again. For this we require λ̄, which is directly calculated from ρ = λ̄ x̄ = λ̄/μ; thus from Eq. (3.30)

λ̄ = μρ = μ(1 − e^{−α/μ})

Using this* and Little's result we then obtain

T = α / [ μ²(1 − e^{−α/μ}) ]        (3.32)

* Note that this result could have been obtained from λ̄ = Σ_k λ_k P_k. The reader should verify this last calculation.
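Using the birth_death_equilibrium sketch from Section 3.1 (again our aside, not the text's), Eq. (3.31) is easy to confirm numerically:

    import math

    alpha, mu = 2.0, 1.0
    p = birth_death_equilibrium(lambda k: alpha / (k + 1), lambda k: mu, K=200)

    for k in range(4):                       # compare with Eq. (3.31)
        exact = (alpha / mu) ** k / math.factorial(k) * math.exp(-alpha / mu)
        print(k, p[k], exact)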

3.4. M/M/∞: RESPONSIVE SERVERS (INFINITE NUMBER OF SERVERS)

Here we consider the case that may be interpreted either as that of a responsive server who accelerates her service rate linearly when more customers are waiting, or as the case where there is always a new clerk or server available for each arriving customer. In particular, we set

λ_k = λ,   k = 0, 1, 2, ...
μ_k = kμ,   k = 1, 2, 3, ...

Here the state-transition-rate diagram is that shown in Figure 3.6.

Figure 3.6 State-transition-rate diagram for the infinite-server case M/M/∞.

Going directly to Eq. (3.11) for the solution we obtain

P_k = P_0 ∏_{i=0}^{k−1} λ/[(i + 1)μ]        (3.33)

Need we go any further? The reader should compare Eq. (3.33) with Eq. (3.28). These two are in fact equivalent for α = λ, and so we immediately have the solutions for P_k and N̄:

P_k = [(λ/μ)^k / k!] e^{−λ/μ},   k = 0, 1, 2, ...        (3.34)

N̄ = λ/μ

Here, too, the ergodic condition is simply λ/μ < ∞. It appears then that a system of discouraged arrivals behaves exactly the same as a system that includes a responsive server. However, Little's result provides a different (and simpler) form for T here than that given in Eq. (3.32); thus

T = 1/μ

This answer is, of course, obvious since if we use the interpretation where each arriving customer is granted his own server, then his time in system will be merely his service time, which clearly equals 1/μ on the average.


3.5. M/M/m: THE m-SERVER CASE

Here again we consider a system with an unlimited waiting room and with a constant arrival rate λ. The system provides for a maximum of m servers. This is within the reach of our birth-death formulation and leads to

λ_k = λ,   k = 0, 1, 2, ...
μ_k = min[kμ, mμ] = { kμ,  0 ≤ k ≤ m;  mμ,  m ≤ k }

From Eq. (3.20) it is easily seen that the condition for ergodicity is λ/mμ < 1. The state-transition-rate diagram is shown in Figure 3.7. When we go to solve for P_k from Eq. (3.11) we find that we must separate the solution into two parts, since the dependence of μ_k upon k is also in two parts. Accordingly, for k ≤ m,

P_k = P_0 ∏_{i=0}^{k−1} λ/[(i + 1)μ] = P_0 (λ/μ)^k (1/k!)        (3.35)

Similarly, for k ≥ m,

P_k = P_0 [ ∏_{i=0}^{m−1} λ/((i + 1)μ) ] [ ∏_{i=m}^{k−1} λ/(mμ) ] = P_0 (λ/μ)^k / (m! m^{k−m})        (3.36)

Collecting together the results from Eqs. (3.35) and (3.36) we have

P_k = { P_0 (mρ)^k / k!,   k ≤ m;   P_0 ρ^k m^m / m!,   k ≥ m }        (3.37)

where

ρ = λ/(mμ) < 1        (3.38)

Figure 3.7 State-transition-rate diagram for M/M/m.

This expression for ρ follows that in Eq. (2.30) and is consistent with our

definition in terms of the expected fraction of busy servers. We may now solve for P_0 from Eq. (3.12), which gives us

P_0 = [ 1 + Σ_{k=1}^{m−1} (mρ)^k/k! + Σ_{k=m}^∞ (mρ)^k/(m! m^{k−m}) ]^{−1}

and so

P_0 = [ Σ_{k=0}^{m−1} (mρ)^k/k! + ((mρ)^m/m!)(1/(1 − ρ)) ]^{−1}        (3.39)

The probability that an arriving customer is forced to join the queue is given by

P[queueing] = Σ_{k=m}^∞ P_k

Thus

P[queueing] = [ ((mρ)^m/m!)(1/(1 − ρ)) ] / [ Σ_{k=0}^{m−1} (mρ)^k/k! + ((mρ)^m/m!)(1/(1 − ρ)) ]        (3.40)

This probability is of wide use in telephony and gives the probability that no trunk (i.e., server) is available for an arriving call (customer) in a system of m trunks; it is referred to as Erlang's C formula and is often denoted* by C(m, λ/μ).

* Europeans use the symbol E_{2,m}(λ/μ).
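Equation (3.40) is easy to evaluate directly. The sketch below (ours, not the text's) computes Erlang's C formula from the finite sum, writing a = λ/μ = mρ for the offered load:

    from math import factorial

    def erlang_c(m, a):
        """Erlang C formula, Eq. (3.40): P[queueing] for M/M/m with a = lam/mu."""
        rho = a / m
        assert rho < 1, "ergodicity requires lam < m*mu"
        last = a ** m / factorial(m) / (1 - rho)
        denom = sum(a ** k / factorial(k) for k in range(m)) + last
        return last / denom

    print(erlang_c(1, 0.5))   # with m = 1 this reduces to rho itself: 0.5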

3.6. M/M/1/K: FINITE STORAGE

We now consider for the first time the case of a queueing system in which there is a maximum number of customers that may be stored; in particular, we assume the system can hold at most a total of K customers (including the customer in service) and that any further arriving customers will in fact be refused entry to the system and will depart immediately without service. Newly arriving customers will continue to be generated according to a Poisson process, but only those who find the system with strictly less than K customers will be allowed entry. In telephony the refused customers are considered to be "lost"; for the system in which K = 1 (i.e., no waiting room at all) this is referred to as a "blocked calls cleared" system with a single server.

Figure 3.8 State-transition-rate diagram for the case of finite storage room M/M/1/K.

It is interesting that we are capable of accommodating this seemingly complex system description with our birth-death model. In particular, we accomplish this by effectively "turning off" the Poisson input as soon as the system fills up, as follows:

λ_k = { λ,  k < K;  0,  k ≥ K }
μ_k = μ,   k = 1, 2, ..., K

From Eq. (3.20), we see that this system is always ergodic. The state-transition-rate diagram for this finite Markov chain is shown in Figure 3.8. Proceeding directly with Eq. (3.11) we obtain

P_k = P_0 ∏_{i=0}^{k−1} (λ/μ) = P_0 (λ/μ)^k,   k ≤ K        (3.41)

Of course, we also have

P_k = 0,   k > K        (3.42)

In order to solve for P_0 we use Eqs. (3.41) and (3.42) in Eq. (3.12) to obtain

P_0 = 1 / [ Σ_{k=0}^{K} (λ/μ)^k ]

and so

P_0 = (1 − λ/μ) / [ 1 − (λ/μ)^{K+1} ]

Thus, finally,

P_k = { [(1 − λ/μ) / (1 − (λ/μ)^{K+1})] (λ/μ)^k,   k ≤ K;   0,   otherwise }        (3.43)

For the case of blocked calls cleared (K = 1) we have

P_k = { 1/(1 + λ/μ),   k = 0;   (λ/μ)/(1 + λ/μ),   k = 1 = K;   0,   otherwise }        (3.44)
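A direct transcription of Eq. (3.43) in code (our sketch, not the text's; the ρ = 1 case, where the formula becomes 0/0, is handled by its uniform limiting distribution):

    def mm1k_probs(lam, mu, K):
        """Equilibrium distribution for M/M/1/K, Eq. (3.43); p[K] is the blocking probability."""
        r = lam / mu
        if r == 1.0:
            return [1.0 / (K + 1)] * (K + 1)   # limiting case: all K+1 states equally likely
        norm = (1 - r) / (1 - r ** (K + 1))
        return [norm * r ** k for k in range(K + 1)]

    p = mm1k_probs(lam=2.0, mu=1.0, K=3)       # lam > mu is fine: this system is always ergodic
    print(p, sum(p))                           # [1/15, 2/15, 4/15, 8/15], summing to 1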

3.7. M/M/m/m: m-SERVER LOSS SYSTEMS

Here we have again a blocked calls cleared situation in which there are available m servers. Each newly arriving customer is given his private server; however, if a customer arrives when all servers are occupied, that customer is lost. We create this artifact as above by choosing the following birth and death coefficients:

λ_k = { λ,  k < m;  0,  k ≥ m }
μ_k = kμ,   k = 1, 2, ..., m

Here again, ergodicity is always assured. This finite state-transition-rate diagram is shown in Figure 3.9.

Figure 3.9 State-transition-rate diagram for the m-server loss system M/M/m/m.

Applying Eq. (3.11) we obtain

P_k = P_0 ∏_{i=0}^{k−1} λ/[(i + 1)μ],   k ≤ m

or

P_k = P_0 (λ/μ)^k (1/k!),   k ≤ m        (3.45)

Solving for P_0 we have

P_0 = 1 / [ Σ_{k=0}^{m} (λ/μ)^k (1/k!) ]

This particular system is of great interest to those in telephony [so much so that a special case of Eq. (3.45) has been tabulated and graphed in many books on telephony]. Specifically, p_m describes the fraction of time that all m servers are busy. The name given to this probability expression is Erlang's loss formula and it is given by

p_m = [(λ/μ)^m / m!] / [ Σ_{k=0}^{m} (λ/μ)^k / k! ]    (3.46)

This equation is also referred to as Erlang's B formula and is commonly denoted* by B(m, λ/μ). Formula (3.46) was first derived by Erlang in 1917!
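Numerically, Eq. (3.46) is usually evaluated with the classical recurrence B(0, a) = 1, B(m, a) = aB(m − 1, a)/[m + aB(m − 1, a)], which avoids large factorials; the recurrence is standard but is not derived in this text. A Python sketch of our own:

def erlang_b(m, a):
    """Erlang's B formula, Eq. (3.46), via the standard recurrence; a = lambda/mu."""
    b = 1.0                          # B(0, a) = 1
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

print(erlang_b(10, 7.0))   # fraction of time all 10 servers are busy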

3.8. M/M/1//M†: FINITE CUSTOMER POPULATION, SINGLE SERVER


Here we consider the case where we no longer have a Poisson input process with an infinite user population, but rather have a finite population of possible users. The system structure is such that we have a total of M users; a customer is either in the system (consisting of a queue and a single server) or outside the system and in some sense "arriving." In particular, when a customer is in the "arriving" condition then the time it takes him to arrive is a random variable with an exponential distribution whose mean is 1/λ sec. All customers act independently of each other. As a result, when there are k customers in the system (queue plus service) then there are M − k customers in the arriving state and, therefore, the total average arrival rate in this state is λ(M − k). We see that this system is in a strong sense self-regulating. By this we mean that when the system gets busy, with many of these customers in the queue, then the rate at which additional customers arrive is in fact reduced, thus lowering the further congestion of the system. We model this quite appropriately with our birth-death process choosing for parameters
λ_k = { λ(M − k),  0 ≤ k ≤ M
      { 0,  otherwise

μ_k = μ,  k = 1, 2, ...

The system is ergodic. We assume that we have sufficient room to contain M customers in the system. The finite state-transition-rate diagram is shown in Figure 3.10. Using Eq. (3.11) we solve for p_k as follows:
p_k = p_0 ∏_{i=0}^{k−1} λ(M − i)/μ
* Europeans use the notation E_{1,m}(λ/μ).

† Recall that a blank entry in either of the last two optional positions in this notation means an entry of ∞; thus here we have the system M/M/1/∞/M.

Figure 3.10 State-transition-rate diagram for single-server finite population system M/M/1//M.
Thus

p_k = { p_0 (λ/μ)^k M!/(M − k)!,  0 ≤ k ≤ M    (3.47)
      { 0,  k > M

In addition, we obtain for p_0

p_0 = [ Σ_{k=0}^{M} (λ/μ)^k M!/(M − k)! ]^{−1}    (3.48)
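A brief Python sketch (our own illustration of Eqs. (3.47)-(3.48), with arbitrary parameter values):

from math import factorial

def finite_population_mm1(lam, mu, M):
    """Eqs. (3.47)-(3.48): equilibrium probabilities for M/M/1//M."""
    a = lam / mu
    w = [a ** k * factorial(M) / factorial(M - k) for k in range(M + 1)]
    p0 = 1.0 / sum(w)
    return [p0 * wk for wk in w]

pk = finite_population_mm1(lam=0.1, mu=1.0, M=5)
print(sum(pk))   # ~1.0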

3.9. M/M/∞//M: FINITE CUSTOMER POPULATION, "INFINITE" NUMBER OF SERVERS

We again consider the finite population case, but now provide a separate
server for each customer in the system. We model this as follows:

λ_k = { λ(M − k),  0 ≤ k ≤ M
      { 0,  otherwise

μ_k = kμ,  k = 1, 2, ...
Clearly, this too is an ergodic system. The finite state-transition-rate diagram is shown in Figure 3.11. Solving this system, we have from Eq. (3.11)

p_k = p_0 ∏_{i=0}^{k−1} λ(M − i)/[(i + 1)μ]
    = p_0 (λ/μ)^k \binom{M}{k},  0 ≤ k ≤ M    (3.49)

where the binomial coefficient is defined in the usual way,

\binom{M}{k} = M! / [k!(M − k)!]
Figure 3.11 State-transition-rate diagram for "infinite"-server finite population system M/M/∞//M.


Solving for p_0 we have

p_0 = [ Σ_{k=0}^{M} (λ/μ)^k \binom{M}{k} ]^{−1}

and so, by the binomial theorem,

p_0 = (1 + λ/μ)^{−M}

Thus

p_k = { \binom{M}{k} (λ/μ)^k / (1 + λ/μ)^M,  0 ≤ k ≤ M    (3.50)
      { 0,  otherwise

We may easily calculate the expected number of people in the system from

N̄ = Σ_{k=0}^{M} k (λ/μ)^k \binom{M}{k} / (1 + λ/μ)^M

Using the partial-differentiation trick such as for obtaining Eq. (3.24) we then have

N̄ = M(λ/μ) / (1 + λ/μ)
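Since Eq. (3.50) is simply a binomial distribution with parameter (λ/μ)/(1 + λ/μ), the mean follows at once; the short Python sketch below (our own, with arbitrary parameter values) checks the closed form numerically:

from math import comb

lam, mu, M = 0.5, 2.0, 8
a = lam / mu
pk = [comb(M, k) * a ** k / (1 + a) ** M for k in range(M + 1)]   # Eq. (3.50)
mean = sum(k * p for k, p in enumerate(pk))
print(mean, M * a / (1 + a))   # both give N-bar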
3.10. M/M/m/K/M: FINITE POPULATION, m-SERVER CASE,
FINITE STORAGE
This rather general system is the most complicated we have so far considered and will reduce to all of the previous cases (except the example of discouraged arrivals) as we permit the parameters of this system to vary. We assume we have a finite population of M customers, each with an "arriving" parameter λ. In addition, the system has m servers, each with parameter μ. The system also has finite storage room such that the total number of customers in the system (queueing plus those in service) is no more than K. We assume M ≥ K ≥ m; customers arriving to find K already in the system are "lost" and return immediately to the arriving state as if they had just completed service. This leads to the following set of birth-death coefficients:

i' =

{OA(M -

fl k =

k fl
{mfl

k)

0~ k ~
otherwise

K-

Figure 3.12 State-transition-rate diagram for m-server, finite storage, finite population system M/M/m/K/M.

In Figure 3.12 we see the most complicated of our finite state-transition-rate diagrams. In order to apply Eq. (3.11) we must consider two regions. First, for the range 0 ≤ k ≤ m − 1 we have

p_k = p_0 ∏_{i=0}^{k−1} λ(M − i)/[(i + 1)μ]
    = p_0 (λ/μ)^k \binom{M}{k},  0 ≤ k ≤ m − 1    (3.51)

For the region m ≤ k ≤ K we have

p_k = p_0 ∏_{i=0}^{m−1} λ(M − i)/[(i + 1)μ] ∏_{i=m}^{k−1} λ(M − i)/(mμ)
    = p_0 (λ/μ)^k \binom{M}{k} (k!/m!) m^{m−k},  m ≤ k ≤ K    (3.52)

The expression for p_0 is rather complex and will not be given here, although it may be computed in a straightforward manner. In the case of a pure loss system (i.e., M ≥ K = m), the stationary state probabilities are given by

p_k = \binom{M}{k} (λ/μ)^k / Σ_{i=0}^{m} \binom{M}{i} (λ/μ)^i,  k = 0, 1, ..., m    (3.53)

This is known as the Engset distribution.
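A small Python sketch (our own, with arbitrary parameters) evaluates Eq. (3.53); the last entry is the probability that all m servers are busy:

from math import comb

def engset(lam, mu, M, m):
    """Eq. (3.53): Engset distribution for the pure loss case M >= K = m."""
    a = lam / mu
    w = [comb(M, k) * a ** k for k in range(m + 1)]
    s = sum(w)
    return [wk / s for wk in w]

pk = engset(lam=0.2, mu=1.0, M=10, m=4)
print(pk[-1])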


We could continue these examples ad nauseam but we will instead take a
benevolent approach and terminate the set of examples here . Additi on al
examples a re given in the exercises. It should be clear to the reader by now
that a lar ge number of interestin g queueing stru ctures can be modeled with
the birth-death process. In particular, we have demonstrated the a bility to
mod el the multipl e-ser ver ca se, the finite-population case, the finite-storage
case a nd co mbina tions thereof. The common element in a ll of the se is th at
the so lution for the equilibrium probabilities {pJ is given in Eq s. (3.11) a nd
(3.12). Only systems wh ose solutions are given by the se equations have been
con sidered in thi s chapter. However, there are many other Markovian systems
that lend themselves to simp le solution and which a re important in queueing


theory. In the next chapter (4) we consider the equilibrium solution for Markovian queues; in Chapter 5 we will generalize to semi-Markov processes in which the service time distribution B(x) is permitted to be general, and in Chapter 6 we revert back to the exponential service time case, but permit the interarrival time distribution A(t) to be general; in both of these cases an imbedded Markov chain will be identified and solved. Only when both A(t) and B(x) are nonexponential do we require the methods of advanced queueing theory discussed in Chapter 8. (There are some special nonexponential distributions that may be described with the theory of Markov processes and these too are discussed in Chapter 4.)
EXERCISES
3.1. Consider a pure Markovian queueing system in which

λ_k = { λ,  0 ≤ k ≤ K
      { 2λ,  K < k

μ_k = μ,  k = 1, 2, ...

(a) Find the equilibrium probabilities p_k for the number in the system.
(b) What relationship must exist among the parameters of the problem in order that the system be stable and, therefore, that this equilibrium solution in fact be reached? Interpret this answer in terms of the possible dynamics of the system.

3.2. Consider a Markovian queueing system in which

λ_k = α^k λ,  k = 0, 1, 2, ...,  0 ≤ α < 1
μ_k = μ,  k = 1, 2, ...

(a) Find the equilibrium probability p_k of having k customers in the system. Express your answer in terms of p_0.
(b) Give an expression for p_0.

3.3.

Consider an M/M/2 queueing system where the average arrival rate is λ customers per second and the average service time is 1/μ sec, where λ < 2μ.
(a) Find the differential equations that govern the time-dependent probabilities P_k(t).
(b) Find the equilibrium probabilities

p_k = lim_{t→∞} P_k(t)


3.4. Consider an M/M/1 system with parameters λ, μ in which customers are impatient. Specifically, upon arrival, customers estimate their queueing time w and then join the queue with probability e^{−αw} (or leave with probability 1 − e^{−αw}). The estimate is w = k/μ when the new arrival finds k in the system. Assume 0 ≤ α.
(a) In terms of p_0, find the equilibrium probabilities p_k of finding k in the system. Give an expression for p_0 in terms of the system parameters.
(b) For 0 < λ, 0 < μ, under what conditions will the equilibrium solution hold?
(c) For α → ∞, find p_k explicitly and find the average number in the system.

3.5. Consider a birth-death system with the following birth and death coefficients:

λ_k = (k + 2)λ,  k = 0, 1, 2, ...
μ_k = kμ,  k = 1, 2, 3, ...

All other coefficients are zero.
(a) Solve for p_k. Be sure to express your answer explicitly in terms of λ, k, and μ only.
(b) Find the average number of customers in the system.

3.6. Consider a birth-death process with the following coefficients:

λ_k = αk(K_2 − k),  k = K_1, K_1 + 1, ..., K_2
μ_k = βk(k − K_1),  k = K_1, K_1 + 1, ..., K_2

where K_1 ≤ K_2 and where these coefficients are zero outside the range K_1 ≤ k ≤ K_2. Solve for p_k (assuming that the system initially contains K_1 ≤ k ≤ K_2 customers).
3.7. Consider an M/M/m system that is to serve the pooled sum of two Poisson arrival streams; the ith stream has an average arrival rate given by λ_i and exponentially distributed service times with mean 1/μ (i = 1, 2). The first stream is an ordinary stream whereby each arrival requires exactly one of the m servers; if all m servers are busy then any newly arriving customer of type 1 is lost. Customers from the second class each require the simultaneous use of m_0 servers (and will occupy them all simultaneously for the same exponentially distributed amount of time whose mean is 1/μ sec); if a customer from this class finds less than m_0 idle servers then he too is lost to the system. Find the fraction of type 1 customers and the fraction of type 2 customers that are lost.

3.8.

Consider a finite customer population system with a single server such as that considered in Section 3.8; let the parameters M, λ be replaced by M, λ′. It can be shown that if M → ∞ and λ′ → 0 such that lim Mλ′ = λ, then the finite population system becomes an infinite population system with exponential interarrival times (at a mean rate of λ customers per second). Now consider the case of Section 3.10; the parameters of that case are now to be denoted M, λ′, m, μ, K in the obvious way. Show what value these parameters must take on if they are to represent the earlier cases described in Sections 3.2, 3.4, 3.5, 3.6, 3.7, 3.8, or 3.9.

3.9. Using the definition for B(m, λ/μ) in Section 3.7 and the definition of C(m, λ/μ) given in Section 3.5, establish the following for λ/μ > 0, m = 1, 2, ...:

(a) B(m, λ/μ) < Σ_{k=m}^{∞} [(λ/μ)^k / k!] e^{−λ/μ} < C(m, λ/μ)

(b) C(m, λ/μ) = B(m, λ/μ) / [1 − (λ/mμ)(1 − B(m, λ/μ))]

3.10.

Here we consider an M/M/1 queue in discrete time where time is segmented into intervals of length q sec each. We assume that events can only occur at the ends of these discrete time intervals. In particular the probability of a single arrival at the end of such an interval is given by λq and the probability of no arrival at that point is 1 − λq (thus at most one arrival may occur). Similarly the departure process is such that if a customer is in service during an interval he will complete service at the end of that interval with probability 1 − σ or will require at least one more interval with probability σ.
(a) Derive the form for a(t) and b(x), the interarrival time and service time pdf's, respectively.
(b) Assuming FCFS, write down the equilibrium equations that govern the behavior of p_k = P[k customers in system at the end of a discrete time interval], where k includes any arrivals who have occurred at the end of this interval as well as any customers who are about to leave at this point.
(c) Solve for the expected value of the number of customers at these points.

3.11.

Consider an M/M/1 system with "feedback"; by this we mean that when a customer departs from service he has probability σ of rejoining the tail of the queue after a random feedback time, which is exponentially distributed (with mean 1/γ sec); on the other hand, with probability 1 − σ he will depart forever after completing service. It is clear that a customer may return many times to the tail of the queue before making his eventual final departure. Let p_{kj} be the equilibrium probability that there are k customers in the "system" (that is, in the queue and the service facility) and that there are j customers in the process of returning to the system.
(a) Write down the set of difference equations for the equilibrium probabilities p_{kj}.
(b) Defining the double z-transform

P(z_1, z_2) ≜ Σ_{k=0}^{∞} Σ_{j=0}^{∞} p_{kj} z_1^k z_2^j

show that

γ(z_2 − z_1) ∂P(z_1, z_2)/∂z_2 + {λ(1 − z_1) + μ[1 − (1 − σ + σz_2)/z_1]} P(z_1, z_2) = μ[1 − (1 − σ + σz_2)/z_1] P(0, z_2)

(c) By taking advantage of the moment-generating properties of our z-transforms, show that the mean number in the "system" (queue plus server) is given by ρ/(1 − ρ) and that the mean number returning to the tail of the queue is given by μσρ/γ, where ρ = λ/(1 − σ)μ.

3.12.

Consider a "cyclic queue" in which M customers circulate around through two queueing facilities as shown below.

[Figure: Stage 1 and Stage 2 queueing facilities connected in a closed loop]


Both servers are of the exponential type with rates μ_1 and μ_2, respectively. Let

p_k = P[k customers in stage 1 and M − k in stage 2]

(a) Draw the state-transition-rate diagram.
(b) Write down the relationship among {p_k}.
(c) Find

P(z) = Σ_{k=0}^{M} p_k z^k

(d) Find p_k.

3.13. Consider an M/M/1 queue with parameters λ and μ. A customer in the queue will defect (depart without service) with probability αΔt + o(Δt) in any interval of duration Δt.
(a) Draw the state-transition-rate diagram.
(b) Express p_{k+1} in terms of p_k.
(c) For α = μ, solve for p_k (k = 0, 1, 2, ...).

3.14. Let us elaborate on the M/M/1/K system of Section 3.6.
(a) Evaluate p_k when λ = μ.
(b) Find N̄ for λ ≠ μ and for λ = μ.
(c) Find T by carefully solving for the average arrival rate to the system.
Markovian Queues in Equilibrium

The previous chapter was devoted to the study of the birth-death product solution given in Eq. (3.11). The beauty of that solution lies not only in its simplicity but also in its broad range of application to queueing systems, as we have discussed. When we venture beyond the birth-death process into the more general Markov process, then the product solution mentioned above no longer applies; however, one seeks and often finds some other form of product solution for the pure Markovian systems. In this chapter we intend to investigate some of these Markov processes that are of direct interest to queueing systems. Most of what we say will apply to random walks of the Markovian type; we may think of these as somewhat more general birth-death processes where steps beyond nearest neighbors are permitted, but which nevertheless contain sufficient structure so as to permit explicit solutions. All of the underlying distributions are, of course, exponential.

Our concern here again is with equilibrium results. We begin by outlining a general method for finding the equilibrium equations by inspection. Then we consider the special Erlangian distribution E_r, which is applied to the queueing systems M/E_r/1 and E_r/M/1. We find that the system M/E_r/1 has an interpretation as a bulk arrival process whose general form we study further; similarly the system E_r/M/1 may be interpreted as a bulk service system, which we also investigate separately. We then consider the more general systems E_r/E_r/1 and step beyond that to mention a broad class of M/G/1 systems that are derivable from the Erlangian by "series-parallel" combinations. Finally, we consider the case of queueing networks in which all the underlying distributions once again are of the memoryless type. As we shall see, in most of these cases we obtain a product form of solution.
4.1. THE EQUILIBRIUM EQUATIONS

Our point of departure is Eq. (2.116), namely, πQ = 0, which expresses the equilibrium conditions for a general ergodic discrete-state continuous-time Markov process; recall that π is the row vector of equilibrium state probabilities and that Q is the infinitesimal generator whose elements are the

infinitesimal transition rates of our Markov process. As discussed in the previous chapter, we adopt the more standard queueing-theory notation and replace the vector π with the row vector p whose kth element is the equilibrium probability p_k of finding the system in state E_k. Our task then is to solve

pQ = 0

with the additional conservation relation given in Eq. (2.117), namely,

Σ_k p_k = 1

This vector equation describes the "equations of motion" in equilibrium. In Chapter 3 we presented a graphical inspection method for writing down equations of motion making use of the state-transition-rate diagram. For the equilibrium case that method was based on the observation that the probabilistic flow rate into a state must equal the probabilistic flow rate out of that state. It is clear that this notion of flow conservation applies more generally than only to the birth-death process, but in fact to any Markov chain. Thus we may construct "non-nearest-neighbor" systems and still expect that our flow conservation technique should work; this in fact is the case. Our approach then is to describe our Markov chain in terms of a state diagram and then apply conservation of flow to each state in turn. This graphical representation is often easier for this purpose than, in fact, is the verbal, mathematical, or matrix description of the system. Once we have this graphical representation we can, by inspection, write down the equations that govern the system dynamics. As an example, let us consider the very simple three-state Markov chain (which clearly is not a birth-death process since the transition E_0 → E_2 is permitted), as shown in Figure 4.1. Writing down the flow conservation law for each state yields

2λp_0 = μp_1    (4.1)
(λ + μ)p_1 = λp_0 + μp_2    (4.2)
μp_2 = λp_0 + λp_1    (4.3)

where Eqs. (4.1), (4.2), and (4.3) correspond to the flow conservation for states E_0, E_1, and E_2, respectively. Observe also that the last equation is exactly the sum of the first two; we always have exactly one redundant equation in these finite Markov chains. We know that the additional equation required is

p_0 + p_1 + p_2 = 1


Figure 4.1 Example of non-nearest-neighbor system.

The solution to this system of equations gives

p_0 = μ^2/[(λ + μ)(2λ + μ)],  p_1 = 2λμ/[(λ + μ)(2λ + μ)],  p_2 = λ/(λ + μ)    (4.4)

Voila! Simple as pie. In fact, it is as "simple" as inverting a set of simultaneous linear equations.
We take advantage of this inspection technique in solving a number of Markov chains in equilibrium in the balance of this chapter.*

As in the previous chapter we are here concerned with the limiting probability defined as p_k = lim_{t→∞} P[N(t) = k], assuming it exists. This probability may be interpreted as giving the proportion of time that the system spends in state E_k. One could, in fact, estimate this probability by measuring how often the system contained k customers as compared to the total measurement time. Another quantity of interest (perhaps of greater interest) in queueing systems is the probability that an arriving customer finds the system in state E_k; that is, we consider the equilibrium probability

r_k ≜ P[arriving customer finds the system in state E_k]

in the case of an ergodic system. One might intuitively feel that in all cases p_k = r_k, but it is easy to show that this is not generally true. For example, let us consider the (non-Markovian) system D/D/1 in which arrivals are uniformly spaced in time such that we get one arrival every t̄ sec exactly; the service-time requirements are identical for all customers and equal, say,

* It should also be clear that this inspection technique permits us to write down the time-dependent state probabilities P_k(t) directly as we have already seen for the case of birth-death processes; these time-dependent equations will in fact be exactly Eq. (2.114).


to x̄ sec. We recognize this single-server system as an instance of steady flow through a single channel (remember the pineapple factory). For stability we require that x̄ < t̄. Now it is clear that no arrival will ever have to wait once equilibrium is reached and, therefore, r_0 = 1 and r_k = 0 for k = 1, 2, .... Moreover, it is clear that the fraction of time that the system contains one customer (in service) is exactly equal to ρ = x̄/t̄, and the remainder of the time the system will be empty; therefore, we have p_0 = 1 − ρ, p_1 = ρ, p_k = 0 for k = 2, 3, 4, .... So we have a trivial example in which p_k ≠ r_k. However, as is often the case, one's intuition has a basis in fact, and we find that there is a large class of queueing systems for which p_k = r_k for all k. This, in fact, is the class of stable queueing systems with Poisson arrivals! Actually, we can prove more, and as we show below for any queueing system with Poisson arrivals we must have

P_k(t) = R_k(t)

where P_k(t) is, as before, the probability that the system is in state E_k at time t and where R_k(t) is the probability that a customer arriving at time t finds the system in state E_k. Specifically, for our system with Poisson arrivals we define A(t, t + Δt) to be the event that an arrival occurs in the interval (t, t + Δt); then we have

R_k(t) ≜ lim_{Δt→0} P[N(t) = k | A(t, t + Δt)]    (4.5)

[where N(t) gives the number in system at time t]. Using our definition of conditional probability we may rewrite R_k(t) as

R_k(t) = lim_{Δt→0} P[N(t) = k, A(t, t + Δt)] / P[A(t, t + Δt)]
       = lim_{Δt→0} P[A(t, t + Δt) | N(t) = k] P[N(t) = k] / P[A(t, t + Δt)]

Now for the case of Poisson arrivals we know (due to the memoryless property) that the event A(t, t + Δt) must be independent of the number in the system at time t (and also of the time t itself); consequently P[A(t, t + Δt) | N(t) = k] = P[A(t, t + Δt)], and so we have

R_k(t) = lim_{Δt→0} P[N(t) = k]

or

R_k(t) = P_k(t)    (4.6)


This is what we set out to prove, namely, that the time-dependent probability of an arrival finding the system in state E_k is exactly equal to the time-dependent probability of the system being in state E_k. Clearly this also applies to the equilibrium probability r_k that an arrival finds k customers in the system and the proportion of time p_k that the system finds itself with k customers. This equivalence does not surprise us in view of the memoryless property of the Poisson process, which as we have just shown generates a sequence of arrivals that take a really "random look" at the system.
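This property is easy to exhibit by simulation. The following Python sketch (our own, not part of the original text) runs an event-driven M/M/1 queue and estimates both p_0 (fraction of time the system is empty) and r_0 (fraction of Poisson arrivals finding it empty); by the result just proved, both estimates approach 1 − ρ:

import random

random.seed(1)
lam, mu, T = 1.0, 2.0, 200000.0
t, n = 0.0, 0
next_arrival = random.expovariate(lam)
next_departure = float('inf')
empty_time, arrivals, arrivals_finding_empty = 0.0, 0, 0
while t < T:
    t_next = min(next_arrival, next_departure)
    if n == 0:
        empty_time += t_next - t          # accumulate time spent empty
    t = t_next
    if next_arrival <= next_departure:    # arrival event
        arrivals += 1
        if n == 0:
            arrivals_finding_empty += 1
        n += 1
        if n == 1:
            next_departure = t + random.expovariate(mu)
        next_arrival = t + random.expovariate(lam)
    else:                                 # departure event
        n -= 1
        next_departure = (t + random.expovariate(mu)) if n > 0 else float('inf')

print(empty_time / t)                      # time average, about 1 - rho = 0.5
print(arrivals_finding_empty / arrivals)   # arrival average, about the same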

4.2. THE METHOD OF STAGES - ERLANGIAN DISTRIBUTION E_r

The "method of stages" permits one to study queueing systems that are more general than the birth-death systems. This ingenious method is a further testimonial to the brilliance of A. K. Erlang, who developed it early in this century long before our tools of modern probability theory were available. Erlang recognized the extreme simplicity of the exponential distribution and its great power in solving Markovian queueing systems. However, he also recognized that the exponential distribution was not always an appropriate candidate for representing the true situation with regard to service times (and interarrival times). He must also have observed that to allow a more general service distribution would have destroyed the Markovian property and then would have required some more complicated solution method.* The inherent beauty of the Markov chain was not to be given up so easily. What Erlang conceived was the notion of decomposing the service† time distribution into a collection of structured exponential distributions. The principle on which the method of stages is based is the memoryless property of the exponential distribution; again we repeat that this lack of memory is reflected by the fact that the distribution of time remaining for an exponentially distributed random variable is independent of the acquired "age" of that random variable.
Consider the diagram of Figure 4.2. In this figure we are defining a service facility with an exponentially distributed service time pdf given by

b(x) = dB(x)/dx = μe^{−μx},  x ≥ 0    (4.7)

The notation of the figure shows an oval which represents the service facility and is labeled with the symbol μ, which represents the service-rate parameter

* As we shall see in Chapter 5, a newer approach to this problem, the "method of imbedded Markov chains," was not available at the time of Erlang.
† Identical observations apply also to the interarrival time distribution.

Figure 4.2 The single-stage exponential server.

as in Eq. (4.7). The reader will recall from Chapter 2 that the exponential distribution has a mean and variance given by

E[x̃] = 1/μ,  σ_b^2 = 1/μ^2

where the subscript b on σ_b^2 identifies this as the service time variance.
iden tifies thi s as the service time va ria nce .
Now consider the system shown in Figure 4.3. In this figure the large oval represents the service facility. The internal structure of this service facility is revealed as a series or tandem connection of two smaller ovals. Each of these small ovals represents a single exponential server such as that depicted in Figure 4.2; in Figure 4.3, however, the small ovals are labeled internally with the parameter 2μ indicating that they each have a pdf given by

h(y) = 2μe^{−2μy},  y ≥ 0    (4.8)

Thus the mean and variance for h(y) are E[ỹ] = 1/2μ and σ_ỹ^2 = (1/2μ)^2.
The fashion in which this two-stage service facility functions is that upon departure of a customer from this facility a new customer is allowed to enter from the left. This new customer enters stage 1 and remains there for an amount of time randomly chosen from h(y). Upon his departure from this first stage he then proceeds immediately into the second stage and spends an amount of time there equal to a random variable drawn independently once again from h(y). After this second random interval expires he then departs from the service facility and at this point only may a new customer enter the facility from the left. We see, then, that one, and only one, customer is

Figure 4.3 The two-stage Erlangian server E_2.


allowed into the box entitled "service facility" at any time.* This implies that at least one of the two service stages must always be empty. We now inquire as to the specific distribution of total time spent in the service facility. Clearly this is a random variable, which is the sum of two independent and identically distributed random variables. Thus, as shown in Appendix II, we must form the convolution of the density function associated with each of the two summands. Alternatively, we may calculate the Laplace transform of the service time pdf as being equal to the product of the Laplace transforms of the pdf's associated with each of the summands. Since both random variables are (independent and) identically distributed we must form the product of a function with itself. First, as always, we define the appropriate transforms as

B*(s) ≜ ∫_0^∞ e^{−sx} b(x) dx    (4.9)

H*(s) ≜ ∫_0^∞ e^{−sy} h(y) dy    (4.10)

From our earlier statements we have

B*(s) = [H*(s)]^2

But, we already know the transform of the exponential from Eq. (2.144) and so

H*(s) = 2μ/(s + 2μ)

Thus

B*(s) = [2μ/(s + 2μ)]^2    (4.11)

We must now invert Eq. (4.11). However, the reader may recall that we already have seen this form in Eq. (2.146) with its inverse in Eq. (2.147). Applying that result we have

b(x) = 2μ(2μx)e^{−2μx},  x ≥ 0    (4.12)

We may now calculate the mean and variance of this two-stage system in one of three possible ways: by arguing on the basis of the structure in Figure 4.3; by using the moment-generating properties of B*(s); or by direct calculation

* As an example of a two-stage service facility in which only one stage may be active at a time, consider a courtroom in a small town. A queue of defendants forms, waiting for trial. The judge tries a case (the first service stage) and then fines the defendant. The second stage consists of paying the fine to the court clerk. However, in this small town, the judge is also the clerk and so he moves over to the clerk's desk, collects the fine, releases the defendant, goes back to his bench, and then accepts the next defendant into "service."


from the density function given in Eq. (4.12). We choose the first of these three methods since it is most straightforward (the reader may verify the other two for his own satisfaction). Since the time spent in service is the sum of two random variables, then it is clear that the expected time in service is the sum of the expectations of each. Thus we have

E[x̃] = 2E[ỹ] = 1/μ

Similarly, since the two random variables being summed are independent, we may, therefore, sum their variances to find the variance of the sum:

σ_b^2 = σ_ỹ^2 + σ_ỹ^2 = 1/2μ^2

Note that we have arranged matters such that the mean time in service in the single-stage system of Figure 4.2 and the two-stage system of Figure 4.3 is the same. We accomplished this by speeding up each of the two-stage service stations by a factor of 2. Note further that the variance of the two-stage system is one-half the variance of the one-stage system.

The previous paragraph introduced the notion of a two-stage service facility but we have yet to discuss the crucial point. Let us consider the state variable for a queueing system with Poisson arrivals and a two-stage exponential server as given in Figure 4.3. As always, as part of our state description, we must record the number of customers waiting in the queue. In addition we must supply sufficient information about the service facility so as to summarize the relevant past history. Owing to the memoryless property of the exponential distribution it is enough to indicate which of the following three possible situations may be found within the service facility: either both stages are idle (indicating an empty service facility); or the first stage is busy and the second stage is idle; or the first stage is idle and the second stage is busy. This service-facility state information may be supplied by identifying the stage of service in which the customer may be found. Our state description then becomes a two-dimensional vector that specifies the number of customers in queue and the number of stages yet to be completed by our customer in service. The time this customer has already spent in his current stage of service is irrelevant in calculating the future behavior of the system. Once again we have a Markov process with a discrete (two-dimensional) state space!

The method generalizes and so now we consider the case in which we provide an r-stage service facility, as shown in Figure 4.4. In this system, of course, when a customer departs by exiting from the right side of the oval service facility a new customer may then enter from the left side and proceed one stage at a time through the sequence of r stages. Upon his departure from

Figure 4.4 The r-stage Erlangian server E_r.


the rt h stag e a new customer again may then enter, and so on. The time that
he spends in the ith stage is drawn from the density functi on
Y

hey) = '1-1[ ' ."

(4.13)

The total time that a customer spends in thi s service facility is the sum of ,
independent identically distributed random variables, each chosen from the
distribution given in Eq . (4. 13). We have the followin g expectati on and
vari ance asso ciated with each stage :
E[Y]

= 1'flo

It should be clear to the reader that we have chosen each stage in this system to have a service rate equal to rμ in order that the mean service time remain constant:

E[x̃] = r(1/rμ) = 1/μ

Similarly, since the stage times are independent we may add the variances to obtain

σ_b^2 = r(1/rμ)^2 = 1/rμ^2

Also, we observe that the coefficient of variation [see Eq. (II.23)] is

C_b = σ_b/E[x̃] = 1/√r    (4.14)
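As a quick Monte Carlo check of these moment formulas (a Python sketch of our own; the parameter choices are arbitrary), we can sample the r-stage service time as a sum of r independent exponentials of rate rμ:

import random

random.seed(1)
r, mu, N = 4, 2.0, 200000
samples = [sum(random.expovariate(r * mu) for _ in range(r)) for _ in range(N)]
mean = sum(samples) / N
var = sum((x - mean) ** 2 for x in samples) / N
print(mean, 1 / mu)                        # mean service time, 1/mu = 0.5
print(var, 1 / (r * mu ** 2))              # variance, 1/(r mu^2) = 0.0625
print(var ** 0.5 / mean, 1 / r ** 0.5)     # coefficient of variation, 1/sqrt(r)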

Once again we wish to solve for the pdf of the service time. This we do by generalizing the notions leading up to Eq. (4.11) to obtain

B*(s) = [rμ/(s + rμ)]^r    (4.15)

Figure 4.5 The family of r-stage Erlangian distributions E_r.


Equation (4.15) is easily inverted as earlier to give

b(x) = rμ(rμx)^{r−1} e^{−rμx} / (r − 1)!,  x ≥ 0    (4.16)

This we recognize as the Erlangian distribution given in Eq. (2.147). We have carefully adjusted the mean of this density function to be independent of r. In order to obtain an indication of its width we must examine the standard deviation as given by

σ_b = (1/√r)(1/μ)

Thus we see that the standard deviation for the r-stage Erlangian distribution is 1/√r times the standard deviation for the single stage. It should be clear to the sophisticated reader that as r increases, the density function given by Eq. (4.16) must approach that of the normal or Gaussian distribution due to the central limit theorem. This is indeed true but we give more in Eq. (4.16) by specifying the actual sequence of distributions as r increases to show the fashion in which the limit is approached. In Figure 4.5 we show the family of r-stage Erlangian distributions (compare with Figure 2.10). From this figure we observe that the mean holds constant as the width or standard deviation of the density shrinks by 1/√r. Below, we show that the limit (as r goes to infinity) for this density function must, in fact, be a unit impulse function


(see Appendix I) at the point x = 1/μ; this implies that the time spent in an infinite-stage Erlangian service facility approaches a constant with probability 1 (this constant, of course, equals the mean 1/μ). We see further that the peak of the family shown moves to the right in a regular fashion. To calculate the location of the peak, we differentiate the density function as given in Eq. (4.16) and set this derivative equal to zero to obtain

db(x)/dx = (rμ)^2 (rμx)^{r−2} e^{−rμx} [(r − 1) − rμx] / (r − 1)! = 0

or

(r − 1) = rμx

and so we have

x_peak = [(r − 1)/r](1/μ)    (4.17)

Thus we see that the location of the peak moves rather quickly toward its final location at 1/μ.
We now show that the limiting distribution is, in fact, a unit impulse by considering the limit of the Laplace transform given in Eq. (4.15):

lim_{r→∞} B*(s) = lim_{r→∞} [rμ/(s + rμ)]^r = lim_{r→∞} [1/(1 + s/rμ)]^r = e^{−s/μ}    (4.18)

We recognize the inverse transform of this limiting distribution from entry 3 in Table I.4 of Appendix I; it is merely a unit impulse located at x = 1/μ. Thus the family of Erlangian distributions varies over a fairly broad range; as such, it is extremely useful for approximating empirical (and even theoretical) distributions. For example, if one had measured a service-time operation and had sufficient data to give acceptable estimates of its mean and variance only, then one could select one member of this two-parameter family such that 1/μ matched the mean and 1/rμ^2 matched the variance; this would then be a method for approximating B(x) in a way that permits solution of the queueing system (as we shall see below). If the measured coefficient of variation exceeds unity, we see from Eq. (4.14) that this procedure fails, and we must use the hyperexponential distribution described later or some other distribution.

It is clear for each member of this family of density functions that we may describe the state of the service facility by merely giving the number of stages yet to be completed by a customer in service. We denote the r-stage Erlangian

distribution by the symbol E_r (not to be confused with the notation for the state of a random process). Since our state variable is discrete, we are in a position to analyze the queueing system* M/E_r/1. This we do in the following section. Moreover, we will use the same technique in Section 4.4 to decompose the interarrival time distribution A(t) into an r-stage Erlangian distribution. Note in these next two sections that we neurotically require at least one of our distributions to be a pure exponential (this is also true for Chapters 5 and 6).
4.3. THE QUEUE M/E_r/1

Here we consider the system for which

a(t) = λe^{−λt},  t ≥ 0
b(x) = rμ(rμx)^{r−1} e^{−rμx} / (r − 1)!,  x ≥ 0

Since in addition to specifying the number of customers in the system (as in Chapter 3), we must also specify the number of stages remaining in the service facility for the man in service, it behooves us to represent each customer in the queue as possessing r stages of service yet to be completed for him. Thus we agree to take the state variable as the total number of service stages yet to be completed by all customers in the system at the time the state is described.† In particular, if we consider the state at a time when the system contains k customers and when the ith stage of service contains the customer in service we then have that the number of stages contained in the total system is

j = number of stages left in total system = (k − 1)r + (r − i + 1)

Thus

j = rk − i + 1    (4.19)

As usual, p_k is defined as the equilibrium probability for the number of customers in the system; we further define

P_j ≜ P[j stages in system]    (4.20)

The relationship between customers and stages allows us to write

p_k = Σ_{j=(k−1)r+1}^{kr} P_j,  k = 1, 2, 3, ...

* Clearly this is a special case of the system M/G/1 which we will analyze in Chapter 5 using the imbedded Markov chain approach.
† Note that this converts our proposed two-dimensional state vector into a one-dimensional description.


Figure 4.6 State-transition-rate diagram for number of stages: M/E_r/1.


And now for the beauty of Erlang's approach: We may represent the state-transition-rate diagram for stages in our system as shown in Figure 4.6. Focusing on state E_j we see that it is entered from below by a state which is r positions to its left and also entered from above by state E_{j+1}; the former transition is due to the arrival of r new stages when a new customer enters, and the latter is due to the completion of one stage within the r-stage service facility. Furthermore we may leave state E_j at a rate λ due to an arrival and at a rate rμ due to a service completion. Of course, we have special boundary conditions for states E_0, E_1, ..., E_{r−1}. In order to handle the boundary situation simply let us agree, as in Chapter 3, that state probabilities with negative subscripts are in fact zero. We thus define

P_j ≜ 0,  j < 0    (4.21)

We may now write down the system state equations immediately by using our flow conservation inspection method. (Note that we are writing the forward equations in equilibrium.) Thus we have

λP_0 = rμP_1    (4.22)
(λ + rμ)P_j = λP_{j−r} + rμP_{j+1},  j = 1, 2, ...    (4.23)

Let us now use our "familiar" method of solving difference equations, namely the z-transform. Thus we define

P(z) = Σ_{j=0}^∞ P_j z^j

As usual, we multiply the jth equation given in Eq. (4.23) by z^j and then sum over all applicable j. This yields

(λ + rμ) Σ_{j=1}^∞ P_j z^j = λ Σ_{j=1}^∞ P_{j−r} z^j + rμ Σ_{j=1}^∞ P_{j+1} z^j

Rewriting we have

(λ + rμ) Σ_{j=1}^∞ P_j z^j = λz^r Σ_{j=1}^∞ P_{j−r} z^{j−r} + (rμ/z) Σ_{j=1}^∞ P_{j+1} z^{j+1}


Recognizing P(z), we then have

(λ + rμ)[P(z) − P_0] = λz^r P(z) + (rμ/z)[P(z) − P_0 − P_1 z]

The first term on the right-hand side of this last equation is obtained by taking special note of Eq. (4.21). Simplifying we have

P(z) = [P_0(λ + rμ − rμ/z) − rμP_1] / [λ + rμ − λz^r − rμ/z]

We may now use Eq. (4.22) to simplify this last further:

P(z) = rμP_0[1 − (1/z)] / [λ + rμ − λz^r − rμ/z]

yielding finally

P(z) = rμP_0(1 − z) / [rμ + λz^{r+1} − (λ + rμ)z]    (4.24)
We may evaluate the constant P_0 by recognizing that P(1) = 1 and using L'Hospital's rule, thus

P(1) = 1 = rμP_0 / (rμ − λr)

giving (observe that P_0 = p_0)

P_0 = 1 − λ/μ

In this system the arrival rate is λ and the average service time is held fixed at 1/μ independent of r. Thus we recognize that our utilization factor is

ρ ≜ λx̄ = λ/μ    (4.25)

Substituting back into Eq. (4.24) we find

P(z) = rμ(1 − ρ)(1 − z) / [rμ + λz^{r+1} − (λ + rμ)z]    (4.26)
We must now invert this z-transform to find the distribution of the number of stages in the system.

The case r = 1, which is clearly the system M/M/1, presents no difficulties; this case yields

P(z) = μ(1 − ρ)(1 − z) / [μ + λz^2 − (λ + μ)z]
     = (1 − ρ)(1 − z) / [1 + ρz^2 − (1 + ρ)z]

The denominator factors into (1 − z)(1 − ρz) and so canceling the common term (1 − z) we obtain

P(z) = (1 − ρ) / (1 − ρz)

We recognize this function as entry 6 in Table I.2 of Appendix I, and so we have immediately

P_k = (1 − ρ)ρ^k,  k = 0, 1, 2, ...    (4.27)

Now in the case r = 1 it is clear that P_k = p_k and so Eq. (4.27) gives us the distribution of the number of customers in the system M/M/1, as we had seen previously in Eq. (3.23).
For arbitrary values of r things are a bit more complex. The usual approach to inverting a z-transform such as that given in Eq. (4.26) is to make a partial-fraction expansion and then to invert each term by inspection; let us follow this approach. Before we can carry out this expansion we must identify the r + 1 zeroes of the denominator polynomial. Unity is easily seen to be one such. The denominator may therefore be written as (1 − z)[rμ − λ(z + z^2 + ... + z^r)], where the remaining r zeroes (which we choose to denote by z_1, z_2, ..., z_r) are the roots of the bracketed expression. Once we have found these roots* (which are unique) we may then write the denominator as rμ(1 − z)(1 − z/z_1) ... (1 − z/z_r). Substituting this back into Eq. (4.26) we find

P(z) = (1 − ρ) / [(1 − z/z_1)(1 − z/z_2) ... (1 − z/z_r)]

Our partial-fraction expansion now yields

P(z) = (1 − ρ) Σ_{i=1}^{r} A_i / (1 − z/z_i)    (4.28)

where

A_i = ∏_{k≠i} 1/(1 − z_i/z_k)

We may now invert Eq. (4.28) by inspection (from entry 6 in Table I.2) to obtain the final solution for the distribution of the number of stages in the system, namely,

P_j = (1 − ρ) Σ_{i=1}^{r} A_i (z_i)^{−j},  j = 1, 2, ...    (4.29)

* Many of the analytic problems in queueing theory reduce to the (difficult) task of locating the roots of a function.


and where as before P_0 = 1 − ρ. Thus we see for the system M/E_r/1 that the distribution of the number of stages in the system is a weighted sum of geometric distributions. The waiting-time distribution may be calculated using the methods developed later in Chapter 5.
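The root-finding recipe above is easy to carry out numerically. The following Python sketch (our own construction, using numpy; parameters arbitrary) locates the r roots z_1, ..., z_r, forms the partial-fraction constants A_i of Eq. (4.28), and tabulates P_j from Eq. (4.29):

import numpy as np

lam, mu, r = 1.0, 2.0, 3
rho = lam / mu
# roots of the bracketed expression  r*mu - lam*(z + z**2 + ... + z**r) = 0
coeffs = [-lam] * r + [r * mu]          # highest degree first, constant last
z = np.roots(coeffs)                    # all r roots lie outside |z| = 1 when rho < 1
A = [np.prod([1.0 / (1.0 - z[i] / z[k]) for k in range(r) if k != i])
     for i in range(r)]

def P(j):
    """Probability of j service stages in the system, Eq. (4.29)."""
    return float(np.real((1 - rho) * sum(A[i] * z[i] ** (-j) for i in range(r))))

print(P(0), 1 - rho)                    # the two agree
print(sum(P(j) for j in range(300)))    # distribution sums to ~1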

4.4. THE QUEUE E_r/M/1

Let us now consider the queueing system E_r/M/1 for which

a(t) = rλ(rλt)^{r−1} e^{−rλt} / (r − 1)!,  t ≥ 0    (4.30)

b(x) = μe^{−μx},  x ≥ 0    (4.31)

Here the roles of interarrival time and service time are interchanged from those of the previous section; in many ways these two systems are duals of each other. The system operates as follows: Given that an arrival has just occurred, then one immediately introduces a new "arriving" customer into an r-stage Erlangian facility much like that in Figure 4.4; however, rather than consider this to be a service facility we consider it to be an "arriving" facility. When this arriving customer is inserted from the left side he must then pass through r exponential stages each with parameter rλ. It is clear that the pdf of the time spent in the arriving facility will be given by Eq. (4.30). When he exits from the right side of the arriving facility he is then said to "arrive" to the queueing system E_r/M/1. Immediately upon his arrival, a new customer (taken from an infinite pool of available customers) is inserted into the left side of the arriving box and the process is repeated. Once having arrived, the customer joins the queue, waits for service, and is then served according to the distribution given in Eq. (4.31). It is clear that an appropriate state description for this system is to specify not only the number of customers in the system, but also to identify which stage in the arriving facility the arriving customer now occupies. We will consider that each customer who has already arrived (but not yet departed) is contributing r stages of "arrival"; in addition we will count the number of stages so far completed by the arriving customer as a further contribution to the number of arrival stages in the system. Thus our state description will consist of the total number of stages of arrival currently in the system; when we find k customers in the system and when our arriving customer is in the ith stage of arrival (1 ≤ i ≤ r) then the total number of stages of arrival in the system is given by

j = rk + i − 1

Once again let us use the definition given in Eq. (4.20) so that P_j is defined to be the probability of j arrival stages in the system; as always p_k will be the
Figure 4.7 State-transition-rate diagram for number of stages: E_r/M/1.
equilibrium probability for the number of customers in the system, and clearly they are related through

p_k = Σ_{j=rk}^{r(k+1)−1} P_j

The system we have defined is an irreducible ergodic Markov chain with its state-transition-rate diagram for stages given in Figure 4.7. Note that when a customer departs from service, he "removes" r stages of "arrival" from the system. Using our inspection method, we may write down the equilibrium equations as

rλP_0 = μP_r    (4.32)
rλP_j = rλP_{j−1} + μP_{j+r},  1 ≤ j ≤ r − 1    (4.33)
(rλ + μ)P_j = rλP_{j−1} + μP_{j+r},  r ≤ j    (4.34)

Again we define the z-transform for these probabilities as

P(z) = Σ_{j=0}^∞ P_j z^j

Let us now apply our transform method to the equilibrium equations. Equations (4.33) and (4.34) are almost identical except that the former is missing the term μP_j; consequently let us operate upon the equations in the range j ≥ 1, adding and subtracting the missing terms as appropriate. Thus we obtain

Σ_{j=1}^∞ (μ + rλ)P_j z^j − Σ_{j=1}^{r−1} μP_j z^j = Σ_{j=1}^∞ rλP_{j−1} z^j + Σ_{j=1}^∞ μP_{j+r} z^j

Identifying the transform in this last equation we have

(μ + rλ)[P(z) − P_0] − Σ_{j=1}^{r−1} μP_j z^j = rλzP(z) + (μ/z^r)[P(z) − Σ_{j=0}^{r} P_j z^j]

We may now use Eq. (4.32) to eliminate the term P_r and then finally solve for our transform to obtain

P(z) = (1 − z^r) Σ_{j=0}^{r−1} P_j z^j / [rρz^{r+1} − (1 + rρ)z^r + 1]    (4.35)


where as always we have defined ρ = λx̄ = λ/μ. We must now study the poles (zeroes of the denominator) of this function. The denominator polynomial has r + 1 zeroes of which unity is one such [the factor (1 − z) is almost always present in the denominator]. Of the remaining r zeroes it can be shown (see Exercise 4.10) that exactly r − 1 of them lie in the range |z| < 1 and the last, which we shall denote by z_0, is such that |z_0| > 1. We are still faced with the numerator summation that contains the unknown probabilities P_j; we must now appeal to the second footnote in step 5 of our z-transform procedure (see Chapter 2, pp. 74-75), which takes advantage of the observation that the z-transform of a probability distribution must be analytic in the range |z| < 1, in the following way. Since P(z) must be bounded in the range |z| < 1 [see Eq. (II.28)] and since the denominator has r − 1 zeroes in this range, then certainly the numerator must also have zeroes at the same r − 1 points. The numerator consists of two factors: the first of the form (1 − z^r), all of whose zeroes have absolute value equal to unity; and the second in the form of a summation. Consequently, the "compensating" zeroes in the numerator must come from the summation itself (the summation is a polynomial of degree r − 1 and therefore has exactly r − 1 zeroes). These observations, therefore, permit us to equate the numerator sum to the denominator (after its two roots at z = 1 and z = z_0 are factored out) as follows:

[rρz^{r+1} − (1 + rρ)z^r + 1] / [(1 − z)(1 − z/z_0)] = K Σ_{j=0}^{r−1} P_j z^j

where K is a constant to be evaluated below. This computation permits us to rewrite Eq. (4.35) as

P(z) = (1 − z^r) / [K(1 − z)(1 − z/z_0)]

But since P(1) = 1 we find that

K = r/(1 − 1/z_0)

and so we have

P(z) = (1 − z^r)(1 − 1/z_0) / [r(1 − z)(1 − z/z_0)]    (4.36)

We now know all there is to know about the poles and zeroes of P(z); we are, therefore, in a position to make a partial-fraction expansion so that we may invert on z. Unfortunately P(z) as expressed in Eq. (4.36) is not in the proper form for the partial-fraction expansion, since the numerator degree is not less than the denominator degree. However, we will take advantage of property 8 in Table I.1 of Appendix I, which states that if F(z) ⇔ f_n then z^r F(z) ⇔ f_{n−r}, where we recall that the notation ⇔ indicates a transform pair. With this observation, then, we carry out the following partial-fraction expansion

P(z) = (1 − z^r) [ (1/r)/(1 − z) − (1/rz_0)/(1 − z/z_0) ]

If we denote the inverse transform of the quantity in square brackets by f_j, then it is clear that the inverse transform for P(z) must be

P_j = f_j − f_{j−r}    (4.37)

By inspection we see that

f_j = { (1/r)(1 − z_0^{−j−1}),  j ≥ 0    (4.38)
      { 0,  j < 0

First we solve for P_j in the range j ≥ r; from Eqs. (4.37) and (4.38) we, therefore, have

P_j = (1/r) z_0^{r−j−1} (1 − z_0^{−r})    (4.39)

We may simplify this last expression by recognizing that the denominator of Eq. (4.35) must equal zero for z = z_0; this observation leads to the equality rρ(z_0 − 1) = 1 − z_0^{−r}, and so Eq. (4.39) becomes

P_j = ρ(z_0 − 1) z_0^{r−j−1},  j ≥ r    (4.40)

On the other hand, in the range 0 ≤ j < r we have that f_{j−r} = 0, and so P_j is easily found for the rest of our range. Combining this and Eq. (4.40) we finally obtain the distribution for the number of arrival stages in our system:

P_j = { (1/r)(1 − z_0^{−j−1}),  0 ≤ j < r    (4.41)
      { ρ(z_0 − 1) z_0^{r−j−1},  j ≥ r

Using our earlier relationship between p_k and P_j we find (the reader should check this algebra for himself) that the distribution of the number of customers in the system is given by

p_k = { 1 − ρ,  k = 0    (4.42)
      { ρ(z_0^r − 1) z_0^{−rk},  k > 0

We note that this distribution for the number of customers is geometric with a slightly modified first term. We could at this point calculate the waiting-time distribution, but we will postpone that until we study the system G/M/1 in Chapter 6.
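Numerically, everything hinges on locating the single root z_0 outside the unit circle. A minimal Python sketch (our own, using numpy; parameters arbitrary) finds z_0 and tabulates Eq. (4.42):

import numpy as np

lam, mu, r = 1.0, 2.0, 3
rho = lam / mu
# roots of  r*rho*z**(r+1) - (1 + r*rho)*z**r + 1 = 0; one root is z = 1 and
# exactly one root z0 lies outside the unit circle (see the text)
coeffs = [r * rho, -(1 + r * rho)] + [0.0] * (r - 1) + [1.0]
z0 = float(np.real(max(np.roots(coeffs), key=abs)))   # z0 is real here

def p(k):
    """Probability of k customers in E_r/M/1, Eq. (4.42)."""
    return 1 - rho if k == 0 else rho * (z0 ** r - 1.0) * z0 ** (-r * k)

print(z0)                              # the root z0 > 1
print(sum(p(k) for k in range(200)))   # distribution sums to ~1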


4.5. BULK ARRIVAL SYSTEMS

In Section 4.3 we studied the system M/E_r/1 in which each customer had to pass through r stages of service to complete his total service. The key to the solution of that system was to count the number of service stages remaining in the system, each customer contributing r stages to that number upon his arrival into the system. We may look at the system from another point of view in which we consider each "customer" arrival to be in reality the arrival of r customers. Each of these r customers will require only a single stage of service (that is, the service time distribution is an exponential*). Clearly, these two points of view define identical systems: The former is the system M/E_r/1 and the latter is an M/M/1 system with "bulk" arrivals of size r. In fact, if we were to draw the state-transition-rate diagram for the number of customers in the system, then the bulk arrival system would lead to the diagram given in Figure 4.6; of course, that diagram was for the number of stages in the system M/E_r/1. As a consequence, we see that the generating function for the number of customers in the bulk arrival system must be given by Eq. (4.26) and that the distribution of the number of customers in the system is given by Eq. (4.29), since we are equating stages in the original system to customers in the current system.
Since we are considering bulk arrival systems, we may as well be more generous and permit other than a fixed-size bulk to arrive at each (Poisson) arrival instant. What we have in mind is to permit a bulk (or group) at each arrival instant to be of random size where

g_i ≜ P[bulk size is i]    (4.43)

(As an example, one may think of random-size families arriving at the doctor's office for individual vaccinations.) As usual, we will assume that the arrival rate (of bulks) is λ. Taking the number of customers in the system as our state variable, we have the state-transition-rate diagram of Figure 4.8. In this figure we have shown details only for state E_k for clarity. Thus we find that we can enter E_k from any state below it (since we permit bulks of any size to arrive); similarly, we can move from state E_k to any state above it, the net rate at which we leave E_k being λg_1 + λg_2 + ... = λ Σ_{i=1}^∞ g_i = λ. If, as usual, we define p_k to be the equilibrium probability for the number of customers in the system, then we may write down the following equilibrium

* To make the correspondence complete, the parameter for this exponential distribution should indeed be rμ. However, in the following development, we will choose the parameter merely to be μ and recall this fact whenever we compare the bulk arrival system to the system M/E_r/1.


Figure 4.8 The bulk arrival state-transition-rate diagram.


equations using our inspection method:

(λ + μ)p_k = μp_{k+1} + Σ_{i=0}^{k−1} p_i λg_{k−i},  k ≥ 1    (4.44)

λp_0 = μp_1    (4.45)
Equation (4.44) has equated the rate out of state E_k (the left-hand side) to the rate into that state, where the first term refers to a service completion and the second term (the sum) refers to all possible ways that arrivals may occur and drive us into state E_k from below. Equation (4.45) is the single boundary equation for the state E_0. As usual, we shall solve these equations using the method of z-transforms; thus we have

(λ + μ) Σ_{k=1}^∞ p_k z^k = (μ/z) Σ_{k=1}^∞ p_{k+1} z^{k+1} + Σ_{k=1}^∞ Σ_{i=0}^{k−1} p_i λg_{k−i} z^k    (4.46)

We may interchange the order of summation for the double sum such that

Σ_{k=1}^∞ Σ_{i=0}^{k−1} p_i g_{k−i} z^k = Σ_{i=0}^∞ Σ_{k=i+1}^∞ p_i g_{k−i} z^k

and regrouping the terms, we have

Σ_{k=1}^∞ Σ_{i=0}^{k−1} p_i g_{k−i} z^k = [ Σ_{i=0}^∞ p_i z^i ][ Σ_{m=1}^∞ g_m z^m ]    (4.47)
The z-transform we are seeking is

P(z) = Σ_{k=0}^∞ p_k z^k

and we see from Eq. (4.47) that we should define the z-transform for the distribution of bulk size as*

G(z) ≜ Σ_{k=1}^∞ g_k z^k    (4.48)

* We could just as well have permitted g_0 > 0, which would then have allowed zero-size bulks to arrive, and this would have put self-loops in our state-transition diagram corresponding to null arrivals. Had we done so, then the definition for G(z) would have ranged from zero to infinity, and everything we say below applies for this case as well.


Extra cting these transform s from Eq. (4.46) we have

(), + fl )[P(z) -

Po] =

~ [P(z)

- Po - P1z]

+ AP(Z)G(z)

Note that the product $P(z)G(z)$ is a manifestation of property II in Table I.1
of Appendix I, since we have in effect formed the transform of the convolution
of the sequence $\{p_k\}$ with that of $\{g_k\}$ in Eq. (4.44). Applying the boundary
equation (4.45) and simplifying, we have

$$P(z) = \frac{\mu p_0(1 - z)}{\mu(1 - z) - \lambda z[1 - G(z)]}$$

To eliminate $p_0$ we use $P(1) = 1$; direct application yields the indeterminate
form $0/0$, and so we must use L'Hospital's rule, which gives $p_0 = 1 - \rho$.
We obtain

$$P(z) = \frac{\mu(1 - \rho)(1 - z)}{\mu(1 - z) - \lambda z[1 - G(z)]} \qquad (4.49)$$

This is the final solution for the transform of the number of customers in the bulk
arrival M/M/1 system. Once the sequence $\{g_k\}$ is given, we may then face the
problem of inverting this transform. One may calculate the mean and variance
of the number of customers in the system in terms of the system parameters
directly from $P(z)$ (see Exercise 4.8). Let us note that the utilization factor $\rho$ must be carefully defined here. Recall that $\rho$
is the average arrival rate of customers times the average service time. In
our case, the average arrival rate of customers is the product of the average
arrival rate of bulks and the average bulk size. From Eq. (II.29) we have
immediately that the average bulk size must be $G'(1)$. Thus we naturally
conclude that the appropriate definition for $\rho$ in this system is

$$\rho = \frac{\lambda G'(1)}{\mu} \qquad (4.50)$$
It is instructive to consider the special case where all bulk sizes are the same,
namely,

$$g_k = \begin{cases} 1 & k = r \\ 0 & k \neq r \end{cases}$$

Clearly, this is the simplified bulk system discussed in the beginning of this
section; it corresponds exactly to the system M/Er/1 (where we must make the
minor modification, as indicated in our earlier footnote, that $\mu$ must now be
replaced by $r\mu$). We find immediately that $G(z) = z^r$, and after substituting
this into our solution Eq. (4.49) we find that it corresponds exactly to our
earlier solution Eq. (4.26) as, of course, it must.
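Since inverting Eq. (4.49) in closed form is often impractical, a numerical sketch may help. The hypothetical helper below (with assumed values for $\lambda$, $\mu$, and the bulk-size distribution) recovers the $p_k$ by matching power-series coefficients in Eq. (4.49); the printed values can be checked against $p_0 = 1 - \rho$ and the boundary equation (4.45).

```python
import numpy as np

def bulk_mm1_pk(g, lam, mu, n_terms=50):
    """Recover p_0, ..., p_{n_terms-1} for the bulk-arrival M/M/1 by matching
    power-series coefficients in Eq. (4.49):
        P(z) * [mu(1-z) - lam*z*(1 - G(z))] = mu(1 - rho)(1 - z),
    where g[i] = P[bulk size = i+1] and rho = lam*G'(1)/mu."""
    mean_bulk = sum((i + 1) * gi for i, gi in enumerate(g))   # G'(1)
    rho = lam * mean_bulk / mu
    assert rho < 1, "system must be stable"
    # Coefficients of the denominator D(z) = mu - (mu + lam)z + lam*z*G(z).
    D = np.zeros(n_terms)
    D[0] = mu
    D[1] = -(mu + lam)
    for i, gi in enumerate(g):        # lam*z*G(z) contributes lam*g_{i+1} z^{i+2}
        if i + 2 < n_terms:
            D[i + 2] += lam * gi
    # Numerator N(z) = mu(1 - rho)(1 - z).
    N = np.zeros(n_terms)
    N[0], N[1] = mu * (1 - rho), -mu * (1 - rho)
    # P(z)D(z) = N(z): solve coefficient by coefficient (D[0] = mu != 0).
    p = np.zeros(n_terms)
    for k in range(n_terms):
        p[k] = (N[k] - sum(p[j] * D[k - j] for j in range(k))) / D[0]
    return p

p = bulk_mm1_pk([0.0, 1.0], lam=0.3, mu=1.0)   # fixed bulks of size r = 2
print(p[:3])   # p_0 = 1 - rho = 0.4, and lam*p_0 = mu*p_1 gives p_1 = 0.12
```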


4.6. BULK SERVICE SYSTEMS


In Section 4.4 we studied the system Er/M/1, in which arrivals were considered to have passed through $r$ stages of "arrival." We found it expedient
in that case to take as our state variable the number of "arrival stages" that
were in the system (where each fully arrived customer still in the system
contributed $r$ stages to that count). As we found an analogy between bulk
arrival systems and the Erlangian service systems of Section 4.3, here also we
find an analogy between bulk service systems and the Erlangian arrival systems
studied in Section 4.4. Thus let us consider an M/M/1 system that provides
service to groups of size $r$. That is, when the server becomes free he will
accept a "bulk" of exactly $r$ customers from the queue and administer service
to them collectively; the service time for this group is drawn from an exponential distribution with parameter $\mu$. If, upon becoming free, the server finds
fewer than $r$ customers in the queue, he then waits until a total of $r$ accumulate
and then accepts them for bulk service, and so on.* Customers arrive from a
simple Poisson process, at rate $\lambda$, one at a time. It should be clear to the
reader that this bulk service system and the Er/M/1 system are identical. Were we to
draw the state-transition-rate diagram for the number of customers in the
bulk service system, we would find exactly the diagram of Figure 4.7
(with the parameter $r\lambda$ replaced by $\lambda$; we must account for this parameter
change, however, whenever we compare our bulk service system with the
system Er/M/1). Since the two systems are equivalent, the solution for
the distribution of the number of customers in the bulk service system must be
given by Eq. (4.41) (since stages in the original system correspond to customers
in the current system).
It certainly seems a waste for our server to remain idle when fewer than $r$
customers are available for bulk service. Therefore let us now consider a
system in which the server will, upon becoming free, accept $r$ customers for
bulk service if they are available, or, if not, will accept fewer than $r$ if any are
available. We take the number of customers in the system as our state
variable and find Figure 4.9 to be the state-transition-rate diagram. In this
figure we see that all states (except for state $E_0$) behave in the same way, in
that they are entered from their left-hand neighbor by an arrival and from
their neighbor $r$ units to the right by a group departure, and they are exited
by either an arrival or a group departure; on the other hand, state $E_0$ can be
entered from any one of the $r$ states immediately to its right and can be
exited only by an arrival. These considerations lead directly to the following
set of equations for the equilibrium probability $p_k$ of finding $k$ customers in
* For example, the shared taxis in Israel do not (usually) depart until they have collected a
full load of customers, all of whom receive service simultaneously.


Figure 4.9 The bulk service state-transition-rate diagram.


the system:

$$(\lambda + \mu)p_k = \mu p_{k+r} + \lambda p_{k-1} \qquad k \geq 1$$

$$\lambda p_0 = \mu(p_1 + p_2 + \cdots + p_r) \qquad (4.51)$$

Let us now apply our z-transform method; as usual we define

$$P(z) = \sum_{k=0}^{\infty} p_k z^k$$

We then multiply by $z^k$, sum, and then identify $P(z)$ to obtain in the usual way

$$(\lambda + \mu)[P(z) - p_0] = \frac{\mu}{z^r}\Big[P(z) - \sum_{k=0}^{r} p_k z^k\Big] + \lambda z P(z)$$

Solving for $P(z)$ we have

$$P(z) = \frac{\mu\sum_{k=0}^{r} p_k z^k - (\lambda + \mu)p_0 z^r}{\lambda z^{r+1} - (\lambda + \mu)z^r + \mu}$$

From our boundary Eq. (4.51) we see that the negative term in the numerator
of this last equation may be written as

$$-z^r(\lambda p_0 + \mu p_0) = -\mu z^r \sum_{k=0}^{r} p_k$$

and so we have

$$P(z) = \frac{\sum_{k=0}^{r-1} p_k (z^k - z^r)}{r\rho z^{r+1} - (1 + r\rho)z^r + 1} \qquad (4.52)$$

where we have defined $\rho = \lambda/\mu r$ since, for this system, up to $r$ customers may
be served simultaneously in an interval whose average length is $1/\mu$ sec. We


immediately observe that the denominator of this last equation is precisely
the same as in Eq. (4.35) from our study of the system Er/M/1. Thus we may
give the same arguments regarding the location of the denominator roots; in
particular, of the $r + 1$ denominator zeroes, exactly one will occur at the
point $z = 1$, exactly $r - 1$ will be such that $|z| < 1$, and only one will be
found, which we will denote by $z_0$, such that $|z_0| > 1$. Now let us study the
numerator of Eq. (4.52). We note that this is a polynomial in $z$ of degree $r$.
Clearly one root occurs at $z = 1$. By arguments now familiar to us, $P(z)$
must remain bounded in the region $|z| < 1$, and so the $r - 1$ remaining
zeroes of the numerator must exactly match the $r - 1$ zeroes of the denominator for which $|z| < 1$; as a consequence of this, the two polynomials of degree
$r - 1$ must be proportional, that is,
$$\frac{\sum_{k=0}^{r-1} p_k (z^k - z^r)}{1 - z} \;\propto\; \frac{r\rho z^{r+1} - (1 + r\rho)z^r + 1}{(1 - z)(1 - z/z_0)}$$

Taking advantage of this last equation, we may then cancel common factors
in the numerator and denominator of Eq. (4.52) to obtain

$$P(z) = \frac{1}{K(1 - z/z_0)}$$

The constant $K$ may be evaluated in the usual way by requiring that $P(1) = 1$,
which provides the following simple form for our generating function:

$$P(z) = \frac{1 - 1/z_0}{1 - z/z_0} \qquad (4.53)$$

This last we may invert by inspection to obtain, finally, the distribution for the
number of customers in our bulk service system:

$$p_k = \left(1 - \frac{1}{z_0}\right)\left(\frac{1}{z_0}\right)^k \qquad k = 0, 1, 2, \ldots \qquad (4.54)$$

Once again we see the familiar geometric distribution appear in the solution
of our Markovian queueing systems!
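As a quick illustration (with assumed parameter values), one may compute $z_0$ numerically as the single root of the denominator of Eq. (4.52) lying outside the unit disk and then read off the geometric distribution (4.54):

```python
import numpy as np

def bulk_service_pk(lam, mu, r, kmax=8):
    """Evaluate Eq. (4.54): find z0, the unique root of
    r*rho*z^(r+1) - (1 + r*rho)*z^r + 1 with |z0| > 1, rho = lam/(r*mu)."""
    rho = lam / (r * mu)
    assert rho < 1
    coeffs = np.zeros(r + 2)          # numpy lists coefficients highest power first
    coeffs[0] = r * rho               # z^(r+1)
    coeffs[1] = -(1 + r * rho)        # z^r
    coeffs[-1] = 1.0                  # constant term
    roots = np.roots(coeffs)
    z0 = max(roots, key=abs).real     # the single root outside |z| = 1 (it is real)
    return np.array([(1 - 1 / z0) / z0**k for k in range(kmax)])

print(bulk_service_pk(lam=0.5, mu=1.0, r=2))   # geometric in 1/z0, as in Eq. (4.54)
```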

4.7. SERIES-PARALLEL STAGES: GENERALIZATIONS


How general is the method of stages studied in Section 4.3 for the system
M/Er/1 and in Section 4.4 for the system Er/M/1? The Erlangian
distribution is shown in Figure 4.5; recall that we may select its mean by
appropriate choice of $\mu$ and may select a range of standard deviations by
adjusting $r$. Note, however, that we are restricted to a coefficient of
variation that is less than that of the exponential distribution [from Eq.
(4.14) we see that $C_b = 1/\sqrt{r}$, whereas for $r = 1$ the exponential gives
$C_b = 1$], and so in some sense Erlangian random variables are "more regular"
than exponential variables. This situation is certainly less than completely
general.
One direction for generalization would be to remove the restriction that
one of our two basic queueing distributions must be exponential; that is, we
certainly could consider the system $E_{r_a}/E_{r_b}/1$, in which we have an $r_a$-stage
Erlangian distribution for the interarrival times and an $r_b$-stage Erlangian
distribution for the service times.* On the other hand, we could attempt to
generalize by broadening the class of distributions we consider beyond that
of the Erlangian. This we do next.
We wish to find a stage-type arrangement that gives larger coefficients of
variation than the exponential. One might consider a generalization of the
$r$-stage Erlangian in which we permit each stage to have a different service
rate (say, the $i$th stage has rate $\mu_i$). Perhaps this will extend the range of $C_b$
above unity. In this case we will have, instead of Eq. (4.15), a Laplace transform for the service-time pdf given by

$$B^*(s) = \left(\frac{\mu_1}{s + \mu_1}\right)\left(\frac{\mu_2}{s + \mu_2}\right)\cdots\left(\frac{\mu_r}{s + \mu_r}\right) \qquad (4.55)$$

The service-time density $b(x)$ will merely be the convolution of $r$ exponential
densities, each with its own parameter $\mu_i$. The squared coefficient of variation
in this case is easily shown [see Eq. (II.26), Appendix II] to be

$$C_b^2 = \frac{\sum_{i=1}^{r} 1/\mu_i^2}{\left(\sum_{i=1}^{r} 1/\mu_i\right)^2}$$

But for real $a_i \geq 0$ it is always true that $\sum_i a_i^2 \leq \left(\sum_i a_i\right)^2$, since the right-hand side contains the left-hand side plus the sum of all the nonnegative
cross terms. Choosing $a_i = 1/\mu_i$, we find that $C_b^2 \leq 1$. Thus, unfortunately,
no generalization to larger coefficients of variation is obtained this way.
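A quick numerical check of this claim, with arbitrarily assumed stage rates:

```python
import numpy as np

# For series stages with distinct rates, C_b^2 = sum(1/mu_i^2) / (sum(1/mu_i))^2,
# which can never exceed 1 (assumed rates below).
mu = np.array([1.0, 3.0, 0.5])
a = 1.0 / mu
print((a ** 2).sum() / a.sum() ** 2)   # 0.46 <= 1
```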
We previously found that sending a customer through an increasing
sequence of faster exponential stages in series tended to reduce the variability
of the service time, and so one might expect that sending him through a
parallel arrangement would increase the variability. This in fact is true. Let
us therefore consider the two-stage parallel service system shown in Figure
4.10. The situation may be contrasted to the service structure shown in
Figure 4.3. In Figure 4.10 an entering customer approaches the large oval
(which represents the service facility) from the left. Upon entry into the

* We consider this shortly.


Figure 4.10 A two-stage parallel server $H_2$.


facility he will proceed to service stage 1 with probability $\alpha_1$ or will proceed
to service stage 2 with probability $\alpha_2$, where $\alpha_1 + \alpha_2 = 1$. He will then spend
an exponentially distributed interval of time in the $i$th such stage, whose mean
is $1/\mu_i$ sec. After that interval the customer departs, and only then is a new
customer allowed into the service facility. It is clear from this description
that the service-time pdf will be given by

$$b(x) = \alpha_1\mu_1 e^{-\mu_1 x} + \alpha_2\mu_2 e^{-\mu_2 x} \qquad x \geq 0$$

and also we have

$$B^*(s) = \alpha_1\frac{\mu_1}{s + \mu_1} + \alpha_2\frac{\mu_2}{s + \mu_2}$$

Of course, the more general case with $R$ parallel exponential stages, the hyperexponential server $H_R$ shown in Figure 4.11, may be handled in the same fashion. Here an entering customer proceeds to stage $i$ with probability $\alpha_i$, where

$$\sum_{i=1}^{R} \alpha_i = 1 \qquad (4.56)$$

and the service-time transform becomes

$$B^*(s) = \sum_{i=1}^{R} \alpha_i\frac{\mu_i}{s + \mu_i} \qquad (4.57)$$

We now claim that the coefficient of variation for such a server is at least


Figure 4.11 The R-stage parallel server $H_R$.


that of the exponential. Let us prove this. From Eq. (II.26) we find immediately that

$$\bar{x} = \sum_{i=1}^{R} \frac{\alpha_i}{\mu_i}$$

Forming the square of the coefficient of variation, we then have

$$C_b^2 = \frac{2\sum_{i=1}^{R} \alpha_i/\mu_i^2}{\left(\sum_{i=1}^{R} \alpha_i/\mu_i\right)^2} - 1 \qquad (4.58)$$

Now, Eq. (II.35), the Cauchy-Schwarz inequality, may also be expressed as
follows (for $a_i$, $b_i$ real):

$$\left(\sum_i a_i b_i\right)^2 \leq \left(\sum_i a_i^2\right)\left(\sum_i b_i^2\right) \qquad (4.59)$$


Figure 4.12 State-transition-rate diagram for M/H$_2$/1.

(This is often referred to as the Cauchy inequality.) If we make the association
$a_i = \sqrt{\alpha_i}$, $b_i = \sqrt{\alpha_i}/\mu_i$, then Eq. (4.59) shows

$$\left(\sum_i \frac{\alpha_i}{\mu_i}\right)^2 \leq \left(\sum_i \alpha_i\right)\left(\sum_i \frac{\alpha_i}{\mu_i^2}\right)$$

But from Eq. (4.56) the first factor on the right-hand side of this inequality
is just unity; this result along with Eq. (4.58) permits us to write

$$C_b^2 \geq 1 \qquad (4.60)$$

which proves the desired result.
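A companion numerical check for the parallel case (again with assumed values for the $\alpha_i$ and $\mu_i$), using Eq. (4.58):

```python
import numpy as np

# Hyperexponential server: C_b^2 from Eq. (4.58) is always >= 1.
alpha = np.array([0.4, 0.6])
mu = np.array([2.0, 0.5])
xbar = (alpha / mu).sum()                 # mean service time
x2bar = (2 * alpha / mu ** 2).sum()       # second moment
print(x2bar / xbar ** 2 - 1)              # C_b^2 = 1.55 >= 1
```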
One might expect that an analysis by the method of stages exists for the
systems M/H$_R$/1, H$_R$/M/1, and $H_{R_a}/H_{R_b}/1$, and this is indeed true. The reason
that the analysis can proceed is that we may take account of the nonexponential character of the service (or arrival) facility merely by specifying which
stage within the service (or arrival) facility the customer currently occupies.
This information, along with a statement regarding the number of customers
in the system, creates a Markov chain, which may then be studied much as
was done earlier in this chapter.

For example, the system M/H$_2$/1 would have the state-transition-rate
diagram shown in Figure 4.12. In this figure the designation $k_i$ implies that
the system contains $k$ customers and that the customer in service is located
in stage $i$ ($i = 1, 2$). The transitions for higher-numbered states are identical
to the transitions between states $1_i$ and $2_i$.
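To make this concrete, here is a minimal sketch (with assumed values for $\lambda$, $\mu_1$, $\mu_2$, $\alpha_1$, and a truncation level $K$ that makes the infinite chain finite) that builds the generator for the chain of Figure 4.12 and solves $\boldsymbol{\pi}\mathbf{Q} = \mathbf{0}$ numerically; the empty-system probability should approach $1 - \lambda\bar{x}$ with $\bar{x} = \sum_i \alpha_i/\mu_i$.

```python
import numpy as np

lam, mu, alpha, K = 0.5, (2.0, 0.5), (0.4, 0.6), 60   # assumed parameters

def ind(k, i):                  # state (k customers, service stage i) -> index
    return 1 + 2 * (k - 1) + i  # index 0 is the empty system

n = 1 + 2 * K
Q = np.zeros((n, n))
for i in range(2):                            # arrival to an empty system
    Q[0, ind(1, i)] += lam * alpha[i]
for k in range(1, K + 1):
    for i in range(2):
        s = ind(k, i)
        if k < K:                             # arrival; stage in service unchanged
            Q[s, ind(k + 1, i)] += lam
        if k == 1:                            # departure empties the system
            Q[s, 0] += mu[i]
        else:                                 # departure; next customer picks a stage
            for j in range(2):
                Q[s, ind(k - 1, j)] += mu[i] * alpha[j]
np.fill_diagonal(Q, -Q.sum(axis=1))

A = np.vstack([Q.T, np.ones(n)])              # pi Q = 0 plus normalization
b = np.zeros(n + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi[0])                                   # ~ 1 - lam*(a1/mu1 + a2/mu2) = 0.3
```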
We are now led directly into the following generalization of series stages
and parallel stages; specifically, we are free to combine series and parallel



Figure 4.13 Series-parallel server.


stages into arbitrarily complex structures such as that shown in Figure 4.13.
This diagram shows $R$ parallel "stages," the $i$th "stage" consisting of an
$r_i$-stage series system ($i = 1, 2, \ldots, R$); each stage in the $i$th series branch
is an exponential service facility with parameter $r_i\mu_i$. It is clear that great
generality can be built into such series-parallel systems. Within the service
facility one and only one of the multitude of stages may be occupied by a
customer, and no new customer may enter the large oval (representing the
service facility) until the previous customer departs. In all cases, however,
we note that the state of the service facility is completely contained in the
specification of the particular single stage of service in which the customer
may currently be found. Clearly the pdf for the service time is calculable
directly as above to give

$$b(x) = \sum_{i=1}^{R} \alpha_i\, \frac{r_i\mu_i (r_i\mu_i x)^{r_i - 1}}{(r_i - 1)!}\, e^{-r_i\mu_i x} \qquad x \geq 0 \qquad (4.61)$$

and has a transform given by

$$B^*(s) = \sum_{i=1}^{R} \alpha_i \left(\frac{r_i\mu_i}{s + r_i\mu_i}\right)^{r_i} \qquad (4.62)$$
One further way in which we may generalize our series-parallel server is to
remove the restriction that each stage within the same series branch has the
same service rate ($r_i\mu_i$); if indeed we permit the $j$th series stage in the $i$th

Figure 4.14 Another stage-type server.


parallel branch to have a service rate given by $\mu_{ij}$, then we find that the Laplace transform of the service-time density is generalized to

$$B^*(s) = \sum_{i=1}^{R} \alpha_i \prod_{j=1}^{r_i} \frac{\mu_{ij}}{s + \mu_{ij}} \qquad (4.63)$$

These generalities lead to rather complex system equations.

Another way to create the series-parallel effect is as follows. Consider the
service facility shown in Figure 4.14. In this system there are $r$ service stages,
only one of which may be occupied at a given time. Customers enter from the
left and depart to the right. Before entering the $i$th stage an independent
choice is made such that with probability $\alpha_i$ the customer proceeds into the
$i$th stage, whereas with probability $1 - \alpha_i$ he bypasses the remaining stages
and departs from the service facility. With stage-type arrangements such as these, the transform of the


service time may have poles located anywhere in the negative half $s$-plane
[that is, for $\mathrm{Re}(s) < 0$]. Cox [COX 55] has studied this problem and suggests
that complex values for the exponential parameters $r_i\mu_i$ be permitted; the
argument is that whereas this corresponds to no physically realizable exponential stage, so long as we provide poles in complex conjugate pairs, the
entire service facility will have a real pdf, which corresponds to the feasible
cases. If we permit complex-conjugate pairs of poles, then we have complete
generality in synthesizing any rational function of $s$ for our service-time
transform $B^*(s)$. In addition, we have in effect outlined a method of solving
these systems by keeping track of the state of the service facility. Moreover,
we can similarly construct an interarrival time distribution from series-parallel stages, and thereby we are capable of considering any G/G/1 system
where the distributions have transforms that are rational functions of $s$.
It is further true that any nonrational function of $s$ may be approximated
arbitrarily closely with rational functions.* Thus in principle we have solved
a very general problem. Let us discuss this method of solution. The state
description clearly will be the number of customers in the system, the stage
in which the arriving customer finds himself within the (stage-type) arrival
box, and the stage in which the customer finds himself in service. From this
we may draw a (horribly complicated) state-transition diagram. Once we
have this diagram we may (by inspection) write down the equilibrium
equations in a rather straightforward manner; this large set of equations
will typically have many boundary conditions. However, these equations
will all be linear in the unknowns, and so the solution method is straightforward (albeit extremely tedious). What more natural setup for a computer
solution could one ask for? Indeed, a digital computer is extremely adept at
solving large sets of linear equations (such a task is much easier for a digital
computer to handle than is a small set of nonlinear equations). In carrying
out the digital solution of this (typically infinite) set of linear equations, we
must reduce it to a finite set; this can only be done in an approximate way by
first deciding at what point we are satisfied in truncating the sequence
$p_0, p_1, p_2, \ldots$. Then we may solve the finite set and perhaps extrapolate the
* In a real sense, then, we are faced with an approximation problem: how may we "best"
approximate a given distribution by one that has a rational transform? If we are given a
pdf in numerical form, then Prony's method [WHIT 44] is one acceptable procedure. On
the other hand, if the pdf is given analytically, it is difficult to describe a general procedure
for suitable approximation. Of course, one would like to make these approximations with
the fewest number of stages possible. We comment that if one wishes to fit the first and
second moments of a given distribution by the method of stages, then the number of stages
cannot be significantly less than $1/C_b^2$; unfortunately, this implies that when the distribution tends to concentrate around a fixed value, the number of stages required
grows rather quickly.


solution to the infinite set; all this is in the way of approximation, and hopefully
we are able to carry out the computation far enough so that the neglected
terms are indeed negligible.

One must not overemphasize the usefulness of this procedure; this solution
method is not as yet automated, but it does at least in principle provide a method
of approach. Other analytic methods for handling the more complex queueing
situations are discussed in the balance of this book.

4.8. NETWORKS OF MARKOVIAN QUEUES


We have so far considered Markovian systems in which each customer
was demanding a single service operation from the system. We may refer
to this as a "single-node" system. In this section we are concerned with
multiple-node systems in which a customer requires service at more than one
station (node). Thus we may think of a network of nodes, each of which is a
service center (perhaps with multiple servers at some of the nodes) and each
with storage room for queues to form. Customers enter the system at various
points, queue for service, and upon departure from a given node proceed
to some other node, there to receive additional service. We are now
describing the last category of flow system discussed in Chapter 1, namely,
stochastic flow in a network.

A number of new considerations emerge when one considers networks.
For example, the topological structure of the network is important since it
describes the permissible transitions between nodes. Also the paths taken by
individual customers must somehow be described. Of great significance is the
nature of the stochastic flow in terms of the basic stochastic processes
describing that flow; for example, in the case of a tandem queue where
customers departing from node $i$ immediately enter node $i + 1$, we see that
the interdeparture times from the former generate the interarrival times to
the latter. Let us for the moment consider the simple two-node tandem
network shown in Figure 4.15. Each oval in that figure describes a queueing
system consisting of a queue and server(s); within each oval is given the
node number. (It is important not to confuse these physical network diagrams
with the abstract state-transition-rate diagrams we have seen earlier.) For the
moment let us assume that a Poisson process generates the arrivals to the
system at a rate $\lambda$, all of which enter node one; further assume that node one
consists of a single exponential server at rate $\mu$. Thus node one is exactly an
M/M/1 queueing system. Also we will assume that node two has a single


Figure 4.15 A two-node tandem network.


exponential server, also of rate $\mu$. The basic question is to solve for the interarrival time distribution feeding node two; this certainly will be equivalent to
the interdeparture time distribution from node one. Let $d(t)$ be the pdf
describing the interdeparture process from node one and, as usual, let its
Laplace transform be denoted by $D^*(s)$. Let us now calculate $D^*(s)$. When a
customer departs from node one, either a second customer is available in the
queue and ready to be taken into service immediately, or the queue is empty.
In the first case, the time until this next customer departs from node one will
be distributed exactly as a service time, and in that case we will have

$$D^*(s)\big|_{\text{node one nonempty}} = B^*(s)$$

On the other hand, if the node is empty upon this first customer's departure,
then we must wait for the sum of two intervals, the first being the time until
the second customer arrives and the next being his service time; since these
two intervals are independently distributed, the pdf of the sum must be
the convolution of the pdf's for each. Certainly, then, the transform of the sum
pdf will be the product of the transforms of the individual pdf's, and so we
have

$$D^*(s)\big|_{\text{node one empty}} = \frac{\lambda}{s + \lambda}\, B^*(s)$$

where we have given the explicit expression for the transform of the interarrival time density. Since we have an exponential server, we may also write
$B^*(s) = \mu/(s + \mu)$; furthermore, as we shall discuss in Chapter 5, the probability of a departure leaving behind an empty system is the same as the
probability of an arrival finding an empty system, namely, $1 - \rho$. This
permits us to write down the unconditional transform for the interdeparture
time density as

$$D^*(s) = (1 - \rho)\, D^*(s)\big|_{\text{node one empty}} + \rho\, D^*(s)\big|_{\text{node one nonempty}}$$

Using our above calculations we then have

$$D^*(s) = (1 - \rho)\left(\frac{\lambda}{s + \lambda}\right)\left(\frac{\mu}{s + \mu}\right) + \rho\left(\frac{\mu}{s + \mu}\right)$$

A little algebra gives

$$D^*(s) = \frac{\lambda}{s + \lambda} \qquad (4.65)$$

and so the interdeparture time distribution is given by

$$D(t) = 1 - e^{-\lambda t} \qquad t \geq 0$$


Thus we find the remarkable conclusion that the interdeparture times are
exponentially distributed with the same parameter as the interarrival times!
In other words (in the case of a stable stationary queueing system), a Poisson
process driving an exponential server generates a Poisson process for departures. This startling result is usually referred to as Burke's theorem
[BURK 56]; a number of others also studied the problem (see, for example,
the discussion in [SAAT 65]). In fact, Burke's theorem says more, namely,
that the steady-state output of a stable M/M/m queue with input parameter
$\lambda$ and service-time parameter $\mu$ for each of the $m$ channels is in fact a Poisson
process at the same rate $\lambda$. Burke also established that the output process is
independent of the other processes in the system. It has also been shown that
the M/M/m system is the only such FCFS system with this property. Returning
now to Figure 4.15, we see therefore that node two is driven by an independent
Poisson arrival process and therefore it too behaves like an M/M/1 system
and so may be analyzed independently of node one. In fact, Burke's theorem
tells us that we may connect many multiple-server nodes (each server with an
exponential pdf) together in a feedforward* network fashion and still
preserve this node-by-node decomposition.
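Burke's theorem is easy to test by simulation. The sketch below (with assumed rates) simulates a stable M/M/1 queue via the standard departure recursion $d_n = \max(d_{n-1}, a_n) + x_n$ and checks that the interdeparture times look exponential with parameter $\lambda$ (mean and standard deviation both near $1/\lambda$):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, n = 0.6, 1.0, 200_000                 # assumed rates, rho = 0.6

a = np.cumsum(rng.exponential(1 / lam, n))     # Poisson arrival times
x = rng.exponential(1 / mu, n)                 # exponential service times
d = np.empty(n)
last = 0.0
for i in range(n):                             # d_n = max(d_{n-1}, a_n) + x_n
    last = max(last, a[i]) + x[i]
    d[i] = last

gaps = np.diff(d[n // 2:])                     # discard the transient half
print(gaps.mean(), gaps.std())                 # both close to 1/lam = 1.667
```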
Jackson [JACK 57] addressed himself to this question by considering an
arbitrary network of queues. The system he studied consists of $N$ nodes,
where the $i$th node consists of $m_i$ exponential servers, each with parameter $\mu_i$;
further, the $i$th node receives arrivals from outside the system in the form of a
Poisson process at rate $\gamma_i$. Thus if $N = 1$ we have an M/M/m system.
Upon leaving the $i$th node a customer proceeds to the $j$th node with
probability $r_{ij}$; this formulation permits the case where $r_{ii} > 0$. On the other
hand, after completing service in the $i$th node, the probability that the customer departs from the network (never to return again) is given by $1 - \sum_{j=1}^{N} r_{ij}$.
We must calculate the total average arrival rate of customers to a given node.
To do so, we must sum the (Poisson) arrivals from outside the system plus the
arrivals (not necessarily Poisson) from all internal nodes; that is, denoting
the total average arrival rate to node $i$ by $\lambda_i$, we easily find that this set of
parameters must satisfy the following equations:

$$\lambda_i = \gamma_i + \sum_{j=1}^{N} \lambda_j r_{ji} \qquad i = 1, 2, \ldots, N \qquad (4.66)$$

In order for all nodes in this system to represent ergodic Markov chains, we
require that $\lambda_i < m_i\mu_i$ for all $i$; again we caution the reader not to confuse
the nodes in this discussion with the system states of each node from our
* Specifically, we do not permit feedback paths, since these may destroy the Poisson nature of
the feedback departure stream. In spite of this, the following discussion of Jackson's work
points out that even networks with feedback are such that the individual nodes behave
as if they were fed totally by Poisson arrivals, when in fact they are not.


previous discussions. What is amazing is that Jackson was able to show that
each node (say the $i$th) in the network behaves as if it were an independent
M/M/m system with a Poisson input rate $\lambda_i$. In general, the total input will
not be a Poisson process. The state variable for this $N$-node system consists
of the vector $(k_1, k_2, \ldots, k_N)$, where $k_i$ is the number of customers in the $i$th
node [including the customer(s) in service]. Let the equilibrium probability
associated with this state be denoted by $p(k_1, k_2, \ldots, k_N)$. Similarly, we
denote the marginal distribution of finding $k_i$ customers in the $i$th node by
$p_i(k_i)$. Jackson was able to show that the joint distribution for all nodes
factors into the product of each of the marginal distributions, that is,

$$p(k_1, k_2, \ldots, k_N) = p_1(k_1)\,p_2(k_2)\cdots p_N(k_N) \qquad (4.67)$$

and $p_i(k_i)$ is given as the solution to the classical M/M/m system [see, for
example, Eqs. (3.37)-(3.39) with the obvious change in notation]! This last
result is commonly referred to as Jackson's theorem. Once again we see the
"product" form of solution for Markovian queues in equilibrium.
A modification of Jackson's network of queues was considered by Gordon
and Newell [GORD 67]. The modification they investigated was that of a
closed Markovian network in the sense that a fixed and finite number of
customers, say $K$, are considered to be in the system and are trapped in that
system in the sense that no others may enter and none of these may leave; this
corresponds to Jackson's case in which $\sum_{j=1}^{N} r_{ij} = 1$ and $\gamma_i = 0$ for all $i$.
(An interesting example of this class of systems, known as cyclic queues, had
been considered earlier by Koenigsberg [KOEN 58]; a cyclic queue is a
tandem queue in which the last stage is connected back to the first.) In the
general case considered by Gordon and Newell we do not quite expect a
product solution, since there is a dependency among the elements of the state
vector $(k_1, k_2, \ldots, k_N)$ as follows:

$$\sum_{i=1}^{N} k_i = K \qquad (4.68)$$

As is the case for Jackson's model, we assume that this discrete-state Markov
process is irreducible and therefore a unique equilibrium probability
distribution exists for $p(k_1, k_2, \ldots, k_N)$. In this model, however, there is a
finite number of states; in particular, it is easy to see that the number of
distinguishable states of the system is equal to the number of ways in which
one can place $K$ customers among the $N$ nodes, and this is equal to the binomial
coefficient

$$\binom{N + K - 1}{N - 1}$$


The following equations describe the behavior of the equilibrium distribution
of customers in this closed system and may be written by inspection as

$$p(k_1, k_2, \ldots, k_N)\sum_{i=1}^{N} \delta_{k_i - 1}\,\alpha_i(k_i)\,\mu_i = \sum_{i=1}^{N}\sum_{j=1}^{N} \delta_{k_j - 1}\,\alpha_i(k_i + 1)\,\mu_i\, r_{ij}\; p(k_1, k_2, \ldots, k_i + 1, \ldots, k_j - 1, \ldots, k_N) \qquad (4.69)$$

where the discrete unit step-function defined in Appendix I takes the form

$$\delta_k = \begin{cases} 1 & k = 0, 1, 2, \ldots \\ 0 & k < 0 \end{cases} \qquad (4.70)$$

and is included in the equilibrium equations to indicate the fact that the
service rate must be zero when a given node is empty; furthermore, we define

$$\alpha_i(k_i) = \begin{cases} k_i & k_i \leq m_i \\ m_i & k_i \geq m_i \end{cases}$$

which merely gives the number of customers in service in the $i$th node when
there are $k_i$ customers at that node. As usual, the left-hand side of Eq. (4.69)
describes the flow of probability out of state $(k_1, k_2, \ldots, k_N)$, whereas the
right-hand side accounts for the flow of probability into that state from
neighboring states. Let us proceed to write down the solution to these equations. We define the function $\beta_i(k_i)$ as follows:

$$\beta_i(k_i) = \begin{cases} k_i! & k_i \leq m_i \\ m_i!\, m_i^{k_i - m_i} & k_i \geq m_i \end{cases}$$
Consider a set of numbers $\{x_i\}$ that are solutions to the following set of
linear equations:

$$\mu_i x_i = \sum_{j=1}^{N} \mu_j x_j r_{ji} \qquad i = 1, 2, \ldots, N \qquad (4.71)$$

Note that this set of equations is in the same form as $\boldsymbol{\pi} = \boldsymbol{\pi}\mathbf{P}$, where now
the vector $\boldsymbol{\pi}$ may be considered to be $(\mu_1 x_1, \ldots, \mu_N x_N)$ and the elements of
the matrix $\mathbf{P}$ are considered to be the elements $r_{ij}$.* Since we assume that the
* Again the reader is cautioned that, on the one hand, we have been considering Markov
chains in which the quantities $p_{ij}$ refer to the transition probabilities among the possible
states that the system may take on, whereas, on the other hand, we have in this section in
addition been considering a network of queueing systems in which the probabilities $r_{ij}$
refer to transitions that customers make between nodes in that network.


matrix of transition probabilities (whose elements are $r_{ij}$) is irreducible, then
by our previous studies we know that there must be a solution to Eqs. (4.71),
all of whose components are positive; of course, they will only be determined
to within a multiplicative constant, since there are only $N - 1$ independent
equations there. With these definitions the solution to Eq. (4.69) can be
shown to equal

$$p(k_1, k_2, \ldots, k_N) = \frac{1}{G(K)} \prod_{i=1}^{N} \frac{x_i^{k_i}}{\beta_i(k_i)} \qquad (4.72)$$

where the normalization constant is given by

$$G(K) = \sum_{\mathbf{k} \in A}\, \prod_{i=1}^{N} \frac{x_i^{k_i}}{\beta_i(k_i)} \qquad (4.73)$$

Here we imply that the summation is taken over all state vectors $\mathbf{k} \triangleq (k_1, \ldots, k_N)$ that lie in the set $A$, and this is the set of all state vectors for which Eq.
(4.68) holds. This, then, is the solution to the closed finite queueing network
problem, and we observe once again that it has the product form.
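For a small closed network one can carry out Eqs. (4.71)-(4.73) by brute force. The sketch below uses assumed rates and routing, with single-server nodes so that $\beta_i(k_i) = 1$; it finds $\{x_i\}$ as a left eigenvector, enumerates the set $A$, and normalizes by $G(K)$.

```python
import numpy as np

mu = np.array([1.0, 2.0, 1.5])          # assumed service rates, m_i = 1
r = np.array([[0.0, 0.7, 0.3],          # closed network: rows sum to 1
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
K = 4

# Eq. (4.71): mu_i x_i = sum_j mu_j x_j r_ji, i.e. y = y r with y_i = mu_i x_i.
w, v = np.linalg.eig(r.T)
y = np.real(v[:, np.argmin(np.abs(w - 1))])
x = y / mu
x = x / x[0]                            # fix the multiplicative constant

def states(n, k):                       # all nonnegative (k_1,...,k_n) summing to k
    if n == 1:
        yield (k,)
        return
    for k1 in range(k + 1):
        for rest in states(n - 1, k - k1):
            yield (k1,) + rest

G = sum(np.prod(x ** np.array(s)) for s in states(3, K))        # Eq. (4.73)
p = {s: np.prod(x ** np.array(s)) / G for s in states(3, K)}    # Eq. (4.72)
print(len(p), sum(p.values()))          # 15 states = C(6,2); probabilities sum to 1
```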
We may expose the product formulation somewhat further by considering
the case where $K \to \infty$. As it turns out, the quantities $x_i/m_i$ are critical in this
calculation; we will assume that there exists a unique such ratio that is
largest, and we will renumber the nodes such that $x_1/m_1 > x_i/m_i$ ($i \neq 1$).
It can then be shown that $p(k_1, k_2, \ldots, k_N) \to 0$ for any state in which $k_1 <
\infty$. This implies that an infinite number of customers will form in node one,
and this node is often referred to as the "bottleneck" for the given network.
On the other hand, the marginal distribution $p(k_2, \ldots, k_N)$ is
well defined in the limit and takes the form

$$p(k_2, \ldots, k_N) = \prod_{i=2}^{N} p_i(k_i) \qquad (4.74)$$

Thus we see the product solution directly for this marginal distribution and,
of course, it is similar to Jackson's theorem in Eq. (4.67); note that in one
case we have an open system (one that permits external arrivals) and in the
other case we have a closed system. As we shall see in Chapter 4, Volume II,
this model has significant applications in time-shared and multi-access
computer systems.
Jackson [JACK 63] later considered an even more general open queueing
system, which includes the closed system just considered as a special case.
The new wrinkles introduced by Jackson are, first, that the customer arrival
process is permitted to depend upon the total number of customers in the
system (using this, he easily creates closed networks) and, second, that the
service rate at any node may be a function of the number of customers in that
node. Thus, defining

$$S(\mathbf{k}) \triangleq k_1 + k_2 + \cdots + k_N$$


we then permit the total arrival rate to be a function of $S(\mathbf{k})$ when the system
state is given by the vector $\mathbf{k}$. Similarly, we define the exponential service
rate at node $i$ to be $\mu_{i,k_i}$ when there are $k_i$ customers at that node (including
those in service). As earlier, we have the node transition probabilities $r_{ij}$
($i, j = 1, 2, \ldots, N$) with the following additional definitions: $r_{0i}$ is the
probability that the next externally generated arrival will enter the network
at node $i$; $r_{i,N+1}$ is the probability that a customer leaving node $i$ departs
from the system; and $r_{0,N+1}$ is the probability that the next arrival will
require no service from the system and leave immediately upon arrival. Thus
we see that in this case $\gamma_i = r_{0i}\,\gamma(S(\mathbf{k}))$, where $\gamma(S(\mathbf{k}))$ is the total external
arrival rate to the system [conditioned on the number of customers $S(\mathbf{k})$ at
the moment] from our external Poisson process. It can be seen that the
probability of a customer arriving at node $i_1$ and then passing through the
node sequence $i_2, i_3, \ldots, i_n$ and then departing is given by $r_{0i_1} r_{i_1 i_2} r_{i_2 i_3}\cdots
r_{i_{n-1} i_n} r_{i_n, N+1}$. Rather than seek the solution of Eq. (4.66) for the traffic
rates, since they are functions of the total number of customers in the system,
we instead seek the solution for the following equivalent set:

$$e_i = r_{0i} + \sum_{j=1}^{N} e_j r_{ji} \qquad i = 1, 2, \ldots, N \qquad (4.75)$$

[In the case where the arrival rates are independent of the number in the
system, Eqs. (4.66) and (4.75) differ by a multiplicative factor equal to
the total arrival rate of customers to the system.] We assume that the solution
to Eq. (4.75) exists, is unique, and is such that $e_i \geq 0$ for all $i$; this is equivalent to assuming that with probability 1 a customer's journey through the
network is of finite length. $e_i$ is, in fact, the expected number of times a
customer will visit node $i$ in passing through the network.
Let us define the time-dependent state probabilities as

$$P_{\mathbf{k}}(t) = P[\text{system (vector) state at time } t \text{ is } \mathbf{k}] \qquad (4.76)$$

By our usual methods we may write down the differential-difference equations governing these probabilities as follows:

$$\frac{dP_{\mathbf{k}}(t)}{dt} = -\Big[\gamma(S(\mathbf{k})) + \sum_{i=1}^{N}\mu_{i,k_i}\Big]P_{\mathbf{k}}(t) + \sum_{i=1}^{N}\gamma(S(\mathbf{k}) - 1)\,r_{0i}\,P_{\mathbf{k}(i-)}(t) + \sum_{i=1}^{N}\mu_{i,k_i+1}\,r_{i,N+1}\,P_{\mathbf{k}(i+)}(t) + \sum_{i=1}^{N}\sum_{\substack{j=1\\ j \neq i}}^{N}\mu_{j,k_j+1}\,r_{ji}\,P_{\mathbf{k}(i,j)}(t) \qquad (4.77)$$

where terms are omitted when any component of the vector argument goes
negative (and where $\mu_{i,0} \triangleq 0$); $\mathbf{k}(i-) = \mathbf{k}$ except for its $i$th component, which takes on the value


$k_i - 1$; $\mathbf{k}(i+) = \mathbf{k}$ except for its $i$th component, which takes on the value
$k_i + 1$; and $\mathbf{k}(i,j) = \mathbf{k}$ except that its $i$th component is $k_i - 1$ and its $j$th
component is $k_j + 1$, where $i \neq j$. Complex as this notation appears, its
interpretation should be rather straightforward for the reader. Jackson shows
that the equilibrium distribution is unique (if it exists) and defines it in our
earlier notation to be $\lim_{t\to\infty} P_{\mathbf{k}}(t) \triangleq p_{\mathbf{k}} \triangleq p(k_1, k_2, \ldots, k_N)$. In
order to give the equilibrium solution for $p_{\mathbf{k}}$ we must unfortunately define
the following further notation:

$$F(K) \triangleq \prod_{S(\mathbf{k})=0}^{K-1} \gamma(S(\mathbf{k})) \qquad K = 0, 1, 2, \ldots \qquad (4.78)$$

$$f(\mathbf{k}) \triangleq \prod_{i=1}^{N}\prod_{j=1}^{k_i} \frac{e_i}{\mu_{i,j}} \qquad (4.79)$$

$$H(K) \triangleq \sum_{\mathbf{k} \in A} f(\mathbf{k}) \qquad (4.80)$$

$$G \triangleq \begin{cases} \displaystyle\sum_{K=0}^{\infty} F(K)H(K) & \text{if the sum converges} \\ \infty & \text{otherwise} \end{cases} \qquad (4.81)$$

where the set $A$ shown in Eq. (4.80) is the same as that defined for Eq. (4.73).
In terms of these definitions, then, Jackson's more general theorem states that
if $G < \infty$ then a unique equilibrium-state probability distribution exists for
the general state-dependent networks and is given by

$$p_{\mathbf{k}} = \frac{f(\mathbf{k})\,F(S(\mathbf{k}))}{G} \qquad (4.82)$$
Again we detect the product form of solution. It is also possible to show that
in the case when arrivals are independent of the total number in the system
[that is, $\gamma \triangleq \gamma(S(\mathbf{k}))$], then even in the case of state-dependent service rates
Jackson's first theorem applies, namely, that the joint pdf factors into the
product of the individual pdf's given in Eq. (4.67). In fact, $p_i(k_i)$ turns out to
be the same as the probability distribution for the number of customers in a
single-node system where arrivals come from a Poisson process at rate $\gamma e_i$
and with the state-dependent service rates $\mu_{i,k_i}$, such as we have derived for our
general birth-death process in Chapter 3. Thus one impact of Jackson's
second theorem is that for the constant-arrival-rate case, the equilibrium
probability distributions of the number of customers in the system at individual

centers are independent of other centers; in addition, each of these distributions is identical to that of the well-known single-node service center with the
same parameters.* A remarkable result!

This last theorem is perhaps as far as one can go with simple Markovian
networks, since it seems to extend Burke's theorem in its most general sense.†
When one relaxes the Markovian assumption on arrivals and/or service
times, then extreme complexity in the interdeparture process arises, not only
from its marginal distribution but also from its lack of independence from
other state variables.
These Markovian queueing networks lead to rather depressing sets of
(linear) system equations; this is due to the enormous (yet finite) state
description. It is indeed remarkable that such systems do possess reasonably
straightforward solutions. The key to solution lies in the observation that
these systems may be represented as Markovian population processes, as
neatly described by Kingman [KING 69] and as recently pursued by Chandy
[CHAN 72]. In particular, a Markov population process is a continuous-time
Markov chain over the set of finite-dimensional state vectors $\mathbf{k} = (k_1, k_2, \ldots,
k_N)$ for which transitions are permitted only between states‡: $\mathbf{k}$ and $\mathbf{k}(i+)$
(an external arrival at node $i$); $\mathbf{k}$ and $\mathbf{k}(i-)$ (an external departure from node
$i$); and $\mathbf{k}$ and $\mathbf{k}(i,j)$ (an internal transfer from node $i$ to node $j$). Kingman
gives an elegant discussion of the interesting classes and properties of these
processes (using the notion and properties of reversible Markov chains).
Chandy discusses some of these issues by observing that the equilibrium
probabilities for the system states obey not only the global-balance equations
that we have so far seen (and which typically lead to product-form solutions)
but also that this system of equations may be decomposed into many sets of
smaller systems of equations, each of which is simpler to solve. This transformed set is referred to as the set of "local"-balance equations, which we
now proceed to discuss.

The concept of local balance is most valuable when one deals with a network of queues. However, the concept does apply to single-node Markovian
queues, and in fact we have already seen an example of local balance at play.
* This model also permits one to handle the closed queueing systems studied by Gordon
and Newell. In order to create the constant total number of customers one need merely set
$\gamma(k) = 0$ for $k \geq K$ and $\gamma(K - 1) = \infty$, where $K$ is the fixed number one wishes to contain
within the system. In order to keep the node transition probabilities identical in the open and
closed systems, let us denote the former as earlier by $r_{ij}$ and the latter now by $r_{ij}'$; to make
the limit of Jackson's general system equivalent to the closed system of Gordon and
Newell we then require $r_{ij}' = r_{ij} + (r_{i,N+1})(r_{0j})$.
† In Chapter 4, Volume II, we describe some recent results that do in fact extend the model
to handle different customer classes and different service disciplines at each node (permitting, in some cases, more general service-time distributions).
‡ See the definitions following Eq. (4.77).


Figure 4.16 A simple cyclic network example: N = 3, K = 2.


Let us recall the global-balance equations (the flow-conservation equations)
for the general birth-death process as exemplified in Eq. (3.6). This equation
was obtained by balancing flow into and out of state $E_k$ in Figure 2.9. We
also commented at that time that a different boundary could be considered
across which flow must be conserved, and this led to the set of equations
(3.7). These latter equations are in fact local-balance equations and have the
extremely interesting property that they match terms from the left-hand side
of Eq. (3.6) with corresponding terms on the right-hand side; for example, the
term $\lambda_{k-1}p_{k-1}$ on the left-hand side of Eq. (3.6) is seen to be equal to $\mu_k p_k$
on the right-hand side of that equation directly from Eq. (3.7), and by a
second application of Eq. (3.7) we see that the two remaining terms in Eq.
(3.6) must be equal. This is precisely the way in which local balance operates,
namely, to observe that certain sets of terms in the global-balance equation
must balance by themselves, giving rise to a number of "local"-balance
equations.

The significant observation is that, if we are dealing with an ergodic
Markov process, then we know for sure that there is a unique solution for the
equilibrium probabilities as defined by the generic equation $\boldsymbol{\pi} = \boldsymbol{\pi}\mathbf{P}$. Second,
if we decompose the global-balance equations for such a process by matching
terms of the large global-balance equations into sets of smaller local-balance
equations (and of course account for all the terms in the global balance), then
any solution satisfying this large set of local-balance equations must also
satisfy the global-balance equations; the converse is not generally true. Thus
any solution for the local-balance equations will yield the unique solution
for our Markov process.

In the interesting case of a network of queues we define a local-balance
equation (with respect to a given network state and a network node $i$) as one
that equates the rate of flow out of that network state due to the departure of
a customer from node $i$ to the rate of flow into that network state due to the
arrival of a customer to node $i$.* This notion in the case of networks is best
illustrated by the simple example shown in Figure 4.16. Here we show the
case of a three-node network where the service rate in the $i$th node is given as

* When service is nonexponential but rather is given in terms of a stage-type service distribution, then one equates arrivals to and departures from a given stage of service (rather
than to and from the node itself).


Figure 4.17 State-transition-rate diagram for example in Figure 4.16.


$\mu_i$ and is independent of the number of customers at that node; we assume
there are exactly $K = 2$ customers circulating in this closed cyclic network.
Clearly we have $r_{13} = r_{32} = r_{21} = 1$ and $r_{ij} = 0$ otherwise. Our state description is merely the triplet $(k_1, k_2, k_3)$, where as usual $k_i$ gives the number of
customers in node $i$ and where we require, of course, that $k_1 + k_2 + k_3 = 2$.
For this network we will therefore have exactly

$$\binom{N + K - 1}{N - 1} = 6$$

states, with state-transition rates as shown in Figure 4.17.


For this system we have six global-balance equations (one of which will be
redundant as usual; the extra condition comes from the conservation of
probability); these are

$$\mu_1 p(2, 0, 0) = \mu_2 p(1, 1, 0) \qquad (4.83)$$

$$\mu_2 p(0, 2, 0) = \mu_3 p(0, 1, 1) \qquad (4.84)$$

$$\mu_3 p(0, 0, 2) = \mu_1 p(1, 0, 1) \qquad (4.85)$$

$$\mu_1 p(1, 1, 0) + \mu_2 p(1, 1, 0) = \mu_2 p(0, 2, 0) + \mu_3 p(1, 0, 1) \qquad (4.86)$$

$$\mu_2 p(0, 1, 1) + \mu_3 p(0, 1, 1) = \mu_3 p(0, 0, 2) + \mu_1 p(1, 1, 0) \qquad (4.87)$$

$$\mu_1 p(1, 0, 1) + \mu_3 p(1, 0, 1) = \mu_2 p(0, 1, 1) + \mu_1 p(2, 0, 0) \qquad (4.88)$$


Each of these global-balance equations is of the form whereby the left-hand
side represents the flow out of a state and the right-hand side represents the
flow into that state. Equations (4.83)-(4.85) are already local-balance
equations, as we shall see; Eqs. (4.86)-(4.88) have been written so that the
first term on the left-hand side of each equation balances the first term on the
right-hand side of the equation, and likewise for the second terms. Thus
Eq. (4.86) gives rise to the following local-balance equations:

$$\mu_1 p(1, 1, 0) = \mu_2 p(0, 2, 0) \qquad (4.89)$$

$$\mu_2 p(1, 1, 0) = \mu_3 p(1, 0, 1) \qquad (4.90)$$

Note, for example, that Eq. (4.89) takes the rate out of state $(1, 1, 0)$ due to
a departure from node 1 and equates it to the rate into that state due to
arrivals at node 1; similarly, Eq. (4.90) does likewise for departures and
arrivals at node 2. This is the principle of local balance, and we see therefore
that Eqs. (4.83)-(4.85) are already of this form. Thus we generate nine
local-balance equations* (four of which must therefore be redundant when
we consider the conservation of probability), each of which is extremely
simple and therefore permits a straightforward solution to be found. If this
set of equations does indeed have a solution, then they certainly guarantee
that the global equations are satisfied and therefore that the solution we have
found is the unique solution to the original global equations. The reader may
easily verify the following solution:
$$p(1, 0, 1) = \frac{\mu_1}{\mu_3}\, p(2, 0, 0)$$

$$p(1, 1, 0) = \frac{\mu_1}{\mu_2}\, p(2, 0, 0)$$

$$p(0, 1, 1) = \frac{\mu_1^2}{\mu_2\mu_3}\, p(2, 0, 0)$$

$$p(0, 0, 2) = \left(\frac{\mu_1}{\mu_3}\right)^2 p(2, 0, 0)$$

$$p(0, 2, 0) = \left(\frac{\mu_1}{\mu_2}\right)^2 p(2, 0, 0)$$

$$p(2, 0, 0) = \left[1 + \frac{\mu_1}{\mu_2} + \frac{\mu_1}{\mu_3} + \left(\frac{\mu_1}{\mu_2}\right)^2 + \left(\frac{\mu_1}{\mu_3}\right)^2 + \frac{\mu_1^2}{\mu_2\mu_3}\right]^{-1} \qquad (4.91)$$
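One can confirm both the local- and global-balance equations numerically; the sketch below (with assumed rates) evaluates the solution (4.91) and checks, for instance, Eqs. (4.89), (4.90), and (4.86):

```python
import numpy as np

m1, m2, m3 = 1.0, 2.0, 0.5              # assumed service rates
w = {(2, 0, 0): 1.0,
     (1, 1, 0): m1 / m2,
     (1, 0, 1): m1 / m3,
     (0, 2, 0): (m1 / m2) ** 2,
     (0, 0, 2): (m1 / m3) ** 2,
     (0, 1, 1): m1 ** 2 / (m2 * m3)}
Z = sum(w.values())
p = {s: ws / Z for s, ws in w.items()}  # Eq. (4.91)

print(np.isclose(m1 * p[1, 1, 0], m2 * p[0, 2, 0]),        # Eq. (4.89)
      np.isclose(m2 * p[1, 1, 0], m3 * p[1, 0, 1]),        # Eq. (4.90)
      np.isclose((m1 + m2) * p[1, 1, 0],
                 m2 * p[0, 2, 0] + m3 * p[1, 0, 1]))        # Eq. (4.86)
```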

Had we allowed all possible transitions among nodes (rather than the cyclic
behavior in this example), then the state-transition-rate diagram would have

* The reader should write them out directly from Figure 4.17.


Figure 4.18 State-transition-rate diagram showing local balance (N = 3, K = 4).

permitted transitions in both directions where now only unidirectional
transitions are permitted; however, it will always be true that only transitions
to nearest-neighbor states (in this two-dimensional diagram) are permitted,
so that such a diagram can always be drawn in a planar fashion. For example,
had we allowed four customers in an arbitrarily connected three-node
network, then the state-transition-rate diagram would have been as shown in
Figure 4.18. In this diagram we represent possible transitions between nodes
by an undirected branch (representing two one-way branches in opposite
directions). Also, we have collected together sets of branches by joining them
with a heavy line, and these are meant to represent branches whose contributions appear in the same local-balance equation. These diagrams can be
extended to higher dimensions when there are more than three nodes in the
system. In particular, with four nodes we get a tetrahedron (that is, a three-dimensional simplex). In general, with $N$ nodes we will get an $(N - 1)$-dimensional simplex with $K + 1$ nodes along each edge (where $K$ = number
of customers in the closed system). We note in these diagrams that all nodes
lying in a given straight line (parallel to any base of the simplex) maintain one
component of the state vector at a constant value and that this value increases
or decreases by unity as one moves to a parallel set of nodes. The local-balance equations are identified as balancing flow in that set of branches that
connects a given node on one of these constant lines to all other nodes on
the constant line adjacent and parallel to it, the line that decreases by
unity the component that had been held constant. In summary, then, the


local-balance equations are trivial to write down, and if one can succeed in
finding a solution that satisfies them, then one has found the solution to the
global-balance equations as well!

As we see, most of these Markovian networks lead to rather complex
systems of linear equations. Wallace and Rosenberg [WALL 66] propose a
numerical solution method for a large class of these equations which is
computationally efficient. They discuss a computer program designed
to evaluate the equilibrium probability distributions of state variables in
very large finite Markovian queueing networks. Specifically, it is designed to
solve the equilibrium equations of the form given in Eqs. (2.50) and (2.116),
namely, $\boldsymbol{\pi} = \boldsymbol{\pi}\mathbf{P}$ and $\boldsymbol{\pi}\mathbf{Q} = \mathbf{0}$. The procedure is of the "power-iteration
type," such that if $\boldsymbol{\pi}(i)$ is the $i$th iterate then $\boldsymbol{\pi}(i + 1) = \boldsymbol{\pi}(i)\mathbf{R}$ is the $(i + 1)$th
iterate; the matrix $\mathbf{R}$ is either equal to the matrix $\alpha\mathbf{P} + (1 - \alpha)\mathbf{I}$ (where $\alpha$ is a
scalar) or equal to the matrix $\beta\mathbf{Q} + \mathbf{I}$ (where $\beta$ is a scalar and $\mathbf{I}$ is the identity
matrix), depending upon which of the two above equations is to be solved.
The scalars $\alpha$ and $\beta$ are chosen carefully so as to give an efficient convergence
to the solution of these equations. The speed of solution is quite remarkable,
and the reader is referred to [WALL 66] and its references for further details.
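The essence of the power-iteration idea is easily sketched (this is an illustration of the idea, not the Wallace-Rosenberg program itself): choose $\beta$ small enough that $\mathbf{R} = \beta\mathbf{Q} + \mathbf{I}$ is a stochastic matrix, and iterate $\boldsymbol{\pi}\mathbf{R}$ until $\boldsymbol{\pi}\mathbf{Q} = \mathbf{0}$ is reached.

```python
import numpy as np

def equilibrium(Q, tol=1e-12, max_iter=200_000):
    """Power iteration pi <- pi(beta*Q + I) for a transition-rate matrix Q."""
    n = Q.shape[0]
    beta = 0.5 / np.abs(np.diag(Q)).max()   # keeps R nonnegative (and aperiodic)
    R = beta * Q + np.eye(n)                # rows of R sum to 1, since rows of Q sum to 0
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        nxt = pi @ R
        if np.abs(nxt - pi).max() < tol:
            break
        pi = nxt
    return pi

Q = np.array([[-1.0,  1.0,  0.0],           # an assumed 3-state rate matrix
              [ 0.5, -1.5,  1.0],
              [ 1.0,  0.0, -1.0]])
print(equilibrium(Q))   # (3/7, 2/7, 2/7): satisfies pi*Q = 0 with sum 1
```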
Thus ends our study of purely Markovian systems in equilibrium. The
unifying feature throughout Chapters 3 and 4 has been that these systems
give rise to product-type solutions; one is therefore urged to look for
solutions of this form whenever Markovian queueing systems are encountered. In the next chapter we permit either $A(t)$ or $B(x)$ (but not both) to be
of arbitrary form, requiring the other to remain in exponential form.

REFERENCES

BURK 56  Burke, P. J., "The Output of a Queueing System," Operations Research, 4, 699-704 (1956).

CHAN 72  Chandy, K. M., "The Analysis and Solutions for General Queueing Networks," Proc. Sixth Annual Princeton Conference on Information Sciences and Systems, Princeton University, March 1972.

COX 55   Cox, D. R., "A Use of Complex Probabilities in the Theory of Stochastic Processes," Proceedings of the Cambridge Philosophical Society, 51, 313-319 (1955).

GORD 67  Gordon, W. J. and G. F. Newell, "Closed Queueing Systems with Exponential Servers," Operations Research, 15, 254-265 (1967).

JACK 57  Jackson, J. R., "Networks of Waiting Lines," Operations Research, 5, 518-521 (1957).

JACK 63  Jackson, J. R., "Jobshop-Like Queueing Systems," Management Science, 10, 131-142 (1963).

KING 69  Kingman, J. F. C., "Markov Population Processes," Journal of Applied Probability, 6, 1-18 (1969).

KOEN 58  Koenigsberg, E., "Cyclic Queues," Operations Research Quarterly, 9, 22-35 (1958).

SAAT 65  Saaty, T. L., "Stochastic Network Flows: Advances in Networks of Queues," Proc. Symposium on Congestion Theory, University of North Carolina Press (Chapel Hill), 86-107 (1965).

WALL 66  Wallace, V. L. and R. S. Rosenberg, "Markovian Models and Numerical Analysis of Computer System Behavior," AFIPS Spring Joint Computer Conference Proc., 141-148 (1966).

WHIT 44  Whittaker, E. and G. Robinson, The Calculus of Observations, 4th ed., Blackie (London), (1944).

EXERCISES
4.1. Consider the Markovian queueing system shown below. Branch
labels are birth and death rates. Node labels give the number of
customers in the system.
(a) Solve for $p_k$.
(b) Find the average number in the system.
(c) For $\lambda = \mu$, what values do we get for parts (a) and (b)? Try to
interpret these results.
(d) Write down the transition-rate matrix $\mathbf{Q}$ for this problem and
give the matrix equation relating $\mathbf{Q}$ to the probabilities found in
part (a).

4.2. Consider an $E_k$/$E_n$/1 queueing system where no queue is permitted to
form. A customer who arrives to find the service facility busy is "lost"
(he departs with no service). Let $E_{ij}$ be the system state in which the
"arriving" customer is in the $i$th arrival stage and the customer in
service is in the $j$th service stage (note that there is always some
customer in the arrival mechanism and that if there is no customer in
the service facility, then we let $j = 0$). Let $1/k\lambda$ be the average time
spent in any arrival stage and $1/n\mu$ be the average time spent in any
service stage.
(a) Draw the state-transition diagram showing all the transition
rates.
(b) Write down the equilibrium equation for $E_{ij}$ where $1 < i < k$,
$0 < j < n$.


4.3. Consider an M/Er/1 system in which no queue is allowed to form.
Let $j$ = the number of stages of service left in the system and let $P_j$
be the equilibrium probability of being in state $E_j$.
(a) Find $P_j$, $j = 0, 1, \ldots, r$.
(b) Find the probability of a busy system.

4.4. Consider an M/H$_2$/1 system in which no queue is allowed to form.
Service is of the hyperexponential type as shown in Figure 4.10 with
$\mu_1 = 2\mu\alpha_1$ and $\mu_2 = 2\mu(1 - \alpha_1)$.
(a) Solve for the equilibrium probability of an empty system.
(b) Find the probability that server 1 is occupied.
(c) Find the probability of a busy system.

4.5. Consider an M/M/1 system with parameters $\lambda$ and $\mu$ in which exactly
two customers arrive at each arrival instant.
(a) Draw the state-transition-rate diagram.
(b) By inspection, write down the equilibrium equations for $p_k$
($k = 0, 1, 2, \ldots$).
(c) Let $\rho = 2\lambda/\mu$. Express $P(z)$ in terms of $\rho$ and $z$.
(d) Find $P(z)$ by using the bulk arrival results from Section 4.5.
(e) Find the mean and variance of the number of customers in the
system from $P(z)$.
(f) Repeat parts (a)-(e) with exactly $r$ customers arriving at each
arrival instant (and $\rho = r\lambda/\mu$).

4.6. Consider an M/M/1 queueing system with parameters $\lambda$ and $\mu$. At
each of the arrival instants one new customer will enter the system
with probability 1/2 or two new customers will enter simultaneously
with probability 1/2.
(a) Draw the state-transition-rate diagram for this system.
(b) Using the method of non-nearest-neighbor systems, write down
the equilibrium equations for $p_k$.
(c) Find $P(z)$ and also evaluate any constants in this expression so
that $P(z)$ is given in terms only of $\lambda$ and $\mu$. If possible eliminate
any common factors in the numerator and denominator of this
expression [this makes life simpler for you in part (d)].
(d) From part (c) find the expected number of customers in the
system.
(e) Repeat part (c) using the results obtained in Section 4.5 directly.

4.7. For the bulk arrival system of Section 4.5, assume (for $0 < \alpha < 1$)
that

$$g_i = (1 - \alpha)\alpha^i \qquad i = 0, 1, 2, \ldots$$

Find $p_k$ = equilibrium probability of finding $k$ in the system.


F or the bulk arrival system studied in Section 4.5, find the mean N
and variance aN" for the number of customers in the system. Express
your answers in terms of the moments of the bulk arrival distribution.
4.9. Consider an M/M/1 system with the following variation: Whenever
the server becomes free, he accepts two customers (if at least two are
available) from the queue into service simultaneously. Of these two
customers, only one receives service; when the service for this one is
completed, both customers depart (and so the other customer got a
"free ride").
If only one customer is available in the queue when the server
becomes free, then that customer is accepted alone and is serviced;
if a new customer happens to arrive when this single customer is being
served, then the new customer joins the old one in service and this
new customer receives a "free ride."
In all cases, the service time is exponentially distributed with mean
1/μ sec and the average (Poisson) arrival rate is λ customers per
second.
(a) Draw the appropriate state diagram.
(b) Write down the appropriate difference equations for P_k =
equilibrium probability of finding k customers in the system.
(c) Solve for P(z) in terms of P_0 and P_1.
(d) Express P_1 in terms of P_0.
4.10. We consider the denominator polynomial in Eq. (4.35) for the system
E_r/M/1. Of the r + 1 roots, we know that one occurs at z = 1. Use
Rouché's theorem (see Appendix I) to show that exactly r − 1 of the
remaining r roots lie in the unit disk |z| ≤ 1 and therefore exactly
one root, say z_0, lies in the region |z_0| > 1.
4.11. Show that the solution to Eq. (4.71) gives a set of variables {x_i} which
guarantee that Eq. (4.72) is indeed the solution to Eq. (4.69).
4.12. (a) Draw the state-transition-rate diagram showing local balance
for the case (N = 3, K = 5) with the following structure:

(b) Solve for p(k_1, k_2, k_3).


4.13. Consider a two-node Markovian queueing network (of the more
general type considered by Jackson) for which N = 2, m_1 = m_2 = 1,
μ_{k_i} = μ_i (constant service rate), and which has transition probabilities
(r_{ij}) as described in the following matrix:

[matrix (r_{ij}), i, j = 0, 1, 2, 3, with entries given in terms of α]

where 0 < α < 1 and nodes 0 and N + 1 are the "source" and "sink"
nodes, respectively. We also have (for some integer K)

k_1 + k_2 = K

and assume the system initially contains K customers.


(a) Find e_i (i = 1, 2) as given in Eq. (4.75).
(b) Since N = 2, let us denote p(k_1, k_2) = p(k_1, K − k_1) by h_{k_1}.
Find the balance equations for h_{k_1}.
(c) Solve these equations for h_{k_1} explicitly.
(d) By considering the fraction of time the first node is busy, find
the time between customer departures from the network (via
node 1, of course).

PART III

INTERMEDIATE QUEUEING THEORY

We are here concerned with those queueing systems for which we can still
apply certain simplifications due to their Markovian nature. We encounter
those systems that are representable as imbedded Markov chains, namely,
the M/G/1 and the G/M/m queues. In Chapter 5 we rapidly develop the basic
equilibrium equations for M/G/1, giving the notorious Pollaczek-Khinchin
equations for queue length and waiting time. We next discuss the busy period
and, finally, introduce some moderately advanced techniques for studying
these systems, even commenting a bit on the time-dependent solutions.
Similarly for the queue G/M/m in Chapter 6, we find that we can make some
very specific statements about the equilibrium system behavior and, in fact,
find that the conditional distribution of waiting time will always be exponential regardless of the interarrival time distribution! Similarly, the conditional
queue-length distribution is shown to be geometric. We note in this part that
the methods of solution are quite different from those studied in Part II, but
that much of the underlying behavior is similar; in particular the mean
queue size, the mean waiting time, and the mean busy period duration
all are inversely proportional to 1 − ρ as earlier. In Chapter 7 we briefly
investigate a rather pleasing interpretation of transforms in terms of
probabilities.
The techniques we had used in Chapter 3 [the explicit product solution
of Eq. (3.11)] and in Chapter 4 (flow conservation) are replaced by an
indirect z-transform approach in Chapter 5. However, in Chapter 6, we return
once again to the flow conservation inherent in the π = πP solution.


5

The Queue M/G/1


That which makes elementary queueing theory elementary is the simplicity
of the state description.* In particular, all that is required in order to
summarize the entire past history of the queueing system is a specification of
the number of customers currently present.† All other historical information is
irrelevant to the future behavior of pure Markovian systems. Thus the state
description is not only one dimensional but also countable (and in some
cases finite). It is this latter property (the countability) that simplifies our
calculations.
In this chapter and the next we study queueing systems that are driven by
non-Markovian stochastic processes. As a consequence we are faced with
new problems for which we must find new methods of solution.
In spite of the non-Markovian nature of these two systems there exists an
abundance of techniques for handling them. Our approach in this chapter
will be the method of the imbedded Markov chain due to Palm [PALM 43]
and Kendall [KEND 51]. However, we have in reality already seen a second
approach to this class of problems, namely, the method of stages, in which it
was shown that so long as the interarrival time and service time pdf's have
Laplace transforms that are rational, then the stage method can be applied
(see Section 4.7); the disadvantage of that approach is that it merely gives a
procedure for carrying out the solution but does not show the solution as an
explicit expression, and therefore properties of the solution cannot be studied
for a class of systems. The third approach, to be studied in Chapter 8, is to
solve Lindley's integral equation [LIND 52]; this approach is suitable for the
system G/G/1 and so obviously may be specialized to some of the systems we
consider in this chapter. A fourth approach, the method of supplementary
* Usually a state description is given in terms of a vector which describes the system's
state at time t. A vector v(t) is a state vector if, given v(t) and all inputs to this system during
the interval (t, t_1) (where t < t_1), then we are capable of solving for the state vector v(t_1).
Clearly it behooves us to choose a state vector containing that information that permits us
to calculate quantities of importance for understanding system behavior.
† We saw in Chapter 4 that occasionally we record the number of stages in the system rather
than the number of customers.


variables, is discussed in the exercises at the end of this chapter; more will be
said about this method in the next section. We also discuss the busy period
analysis [GAVE 59], which leads to the waiting-time distribution (see
Section 5.10). Beyond these there exist other approaches to non-Markovian
queueing systems, among which are the random-walk and combinatorial
approaches [TAKA 67] and the method of Green's function [KEIL 65].

5.1. THE M/G/1 SYSTEM

The M/G/1 queue is a single-server system with Poisson arrivals and
arbitrary service-time distribution denoted by B(x) [and a service-time pdf
denoted by b(x)]. That is, the interarrival time distribution is given by

A(t) = 1 − e^{−λt}    t ≥ 0

with an average arrival rate of λ customers per second, a mean interarrival
time of 1/λ sec, and a variance σ_a² = 1/λ². As defined in Chapter 2 we denote
the kth moment of service time by

\overline{x^k} = \int_0^\infty x^k b(x) dx

and we sometimes express these service-time moments by b_k ≜ \overline{x^k}.


Let us discuss the state description (vector) for the M/G/1 system. If at
some time t we hope to summarize the complete past history of this system,
then it is clear that we must certainly specify N(t), the number of customers
present at time t. Moreover, we must specify X_0(t), the service time already
received by the customer in service at time t; this is necessary since the
service-time distribution is not necessarily of the memoryless type. (Clearly,
we need not specify how long it has been since the last arrival entered the
system, since the arrival process is of the memoryless type.) Thus we see that
the random process N(t) is a non-Markovian process. However, the vector
[N(t), X_0(t)] is a Markov process and is an appropriate state vector for the
M/G/1 system, since it completely summarizes all past history relevant to the
future system development.
We have thus gone from a single-component description of state in
elementary queueing theory to what appears to be a two-component description here in intermediate queueing theory. Let us examine the inherent
difference between these two state descriptions. In elementary queueing
theory, it is sufficient to provide N(t), the number in the system at time t, and
we then have a Markov process with a discrete-state space, where the states
themselves are either finite or countable in number. When we proceed to the
current situation where we need a two-dimensional state description, we find
that the number in the system N(t) is still denumerable, but now we must also


provide X_0(t), the expended service time, which is continuous. We have thus
evolved from a discrete-state description to a continuous-state description,
and this essential difference complicates the analysis.
It is possible to proceed with a general theory based upon the couplet
[N(t), X_0(t)] as a state vector and such a method of solution is referred to as
the method of supplementary variables. For a treatment of this sort the reader
is referred to Cox [COX 55] and Kendall [KEND 53]; Henderson [HEND
72] also discusses this method, but chooses the remaining service time instead
of the expended service time as the supplementary variable. In this text we
choose to use the method of the imbedded Markov chain as discussed below.
However, before we proceed with the method itself, it is clear that we should
understand some properties of the expended service time; this we do in the
following section.
5.2. THE PARADOX OF RESIDUAL LIFE: A BIT OF RENEWAL THEORY

We are concerned here with the case where an arriving customer finds a
partially served customer in the service facility. Problems of this sort occur
repeatedly in our studies, and so we wish to place this situation in a more
general context. We begin with an apparent paradox illustrated through the
following example. Assume that our hippie from Chapter 2 arrives at a
roadside cafe at an arbitrary instant in time and begins hitchhiking. Assume
further that automobiles arrive at this cafe according to a Poisson process at
an average rate of λ cars per minute. How long must the hippie wait, on the
average, until the next car comes along?
There are two apparently logical answers to this question. First, we might
argue that since the average time between automobile arrivals is 1/λ min, and
since the hippie arrives at a random point in time, then "obviously" the
hippie will wait on the average 1/2λ min. On the other hand, we observe that
since the Poisson process is memoryless, the time until the next arrival is
independent of how long it has been since the previous arrival and therefore
the hippie will wait on the average 1/λ min; this second argument can be
extended to show that the average time from the last arrival until the hippie
begins hitchhiking is also 1/λ min. The second solution therefore implies
that the average time between the last car and the next car to arrive will be
2/λ min! It appears that this interval is twice as long as it should be for a
Poisson process! Nevertheless, the second solution is the correct one, and
so we are faced with an apparent paradox!
Let us discuss the solution to this problem in the case of an arbitrary
interarrival time distribution. This study properly belongs to renewal theory,
and we quote results freely from that field; most of these results can be found

Figure 5.1 Life, age, and residual life.

in the excellent monograph by Cox [COX 62] or in the fine expository
article by Smith [SMIT 58]; the reader is also encouraged to see Feller
[FELL 66]. The basic diagram is that given in Figure 5.1. In this figure we let
A_k denote the kth automobile, which we assume arrives at time τ_k. We assume
that the intervals τ_{k+1} − τ_k are independent and identically distributed
random variables with distribution given by

F(x) ≜ P[τ_{k+1} − τ_k ≤ x]    (5.1)

We further define the common pdf for these intervals as

f(x) ≜ dF(x)/dx    (5.2)

Let us now choose a random point in time, say t, when our hippie arrives at
the roadside cafe. In this figure, A_{n−1} is the last automobile to arrive prior to t
and A_n will be the first automobile to arrive after t. We let X denote this
"special" interarrival time and we let Y denote the time that our hippie must
wait until the next arrival. Clearly, the sequence of arrival points {τ_k} forms
a renewal process; renewal theory discusses the instantaneous replacement
of components. In this case, {τ_k} forms the sequence of instants when the
old component fails and is replaced by a new component. In the language
of renewal theory X is said to be the lifetime of the component under consideration, Y is said to be the residual life of that component at time t, and
X_0 = X − Y is referred to as the age of that component at time t. Let us adopt
that terminology and proceed to find the pdf for X and Y, the lifetime and
residual life of our selected component. We assume that the renewal process
has been operating for an arbitrarily long time since we are interested only
in limiting distributions.
The amazing result we will find is that X is not distributed according to
F(x). In terms of our earlier example this means that the interval which the
hippie happens to select by his arrival at the cafe is not a typical interval.
In fact, herein lies the solution to our paradox: A long interval is more likely


to be "intercepted" by our hippie than a sh ort one . In the case of a Poisson


process we shall see that this bias causes the selected interval to be on the
average twice as lon g as a typical interval.
Let the residual life have a distribution

F(x)

xl

pry ~

(5.3)

with den sity


j( x)

dF( x)
dx

(5.4)

Similarly, let the selected lifetime X have a pdfI x(x) and PDF Fx (x) where
~

Fx (x) = P[X

xl

(5.5)

In Exercise 5.2 we direct the reader through a rigorous derivation for the
residual lifetime density f̂(·). Rather than proceed through those details, let
us give an intuitive derivation for the density that takes advantage of our
physical intuition regarding this problem. Our basic observation is that long
intervals between renewal points occupy larger segments of the time axis
than do shorter intervals, and therefore it is more likely that our random point
t will fall in a long interval. If we accept this, then we recognize that the
probability that an interval of length x is chosen should be proportional to
the length (x) as well as to the relative occurrence of such intervals [which is
given by f(x) dx]. Thus, for the selected interval, we may write

f_X(x) dx = K x f(x) dx    (5.6)

where the left-hand side is P[x < X ≤ x + dx] and the right-hand side
expresses the linear weighting with respect to interval length and includes a
constant K, which must be evaluated so as to properly normalize this density.
Integrating both sides of Eq. (5.6) we find that K = 1/m_1, where

m_1 ≜ E[τ_k − τ_{k−1}]    (5.7)

and is the common average time between renewals (between arrivals of
automobiles). Thus we have shown that the density associated with the
selected interval is given in terms of the density of typical intervals by

f_X(x) = x f(x) / m_1    (5.8)

This is our first result. Let us proceed now to find the density of residual life
f̂(y). If we are told that X = x, then the probability that the residual life Y
does not exceed the value y is given by

P[Y ≤ y | X = x] = y/x


for 0 ≤ y ≤ x; this last is true since we have randomly chosen a point within
this selected interval, and therefore this point must be uniformly distributed
within that interval. Thus we may write down the joint density of X and Y as

P[y < Y ≤ y + dy, x < X ≤ x + dx] = (dy/x)(x f(x) dx / m_1)
                                  = f(x) dy dx / m_1    (5.9)

for 0 ≤ y ≤ x. Integrating over x we obtain f̂(y), which is the unconditional
density for Y, namely,

f̂(y) dy = \int_{x=y}^{\infty} f(x) dy dx / m_1

This immediately gives the final result:

f̂(y) = (1 − F(y)) / m_1    (5.10)

This is our second result. It gives the density of residual life in terms of the
common distribution of interval length and its mean.*
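To make this bias concrete, the following small simulation is a sketch of our
own (not from the text; all names are our choosing). It generates a renewal
process with Uniform(0, 2) lifetimes (so m_1 = 1 and m_2 = 4/3), drops random
observation points onto the time axis, and records the selected lifetime X and
the residual life Y for each point. Eq. (5.8) predicts E[X] = m_2/m_1 = 4/3
rather than m_1 = 1.

    # Sketch: the inspection paradox for Uniform(0, 2) lifetimes.
    import bisect
    import random

    random.seed(0)
    T_END = 100_000.0

    starts, lengths = [], []
    t = 0.0
    while t < T_END:
        x = random.uniform(0.0, 2.0)    # a typical lifetime, f(x) = 1/2 on (0, 2)
        starts.append(t)
        lengths.append(x)
        t += x

    sel_life, resid = [], []
    for _ in range(100_000):
        u = random.uniform(0.0, T_END)
        i = bisect.bisect_right(starts, u) - 1     # interval containing point u
        sel_life.append(lengths[i])                # X, the selected lifetime
        resid.append(starts[i] + lengths[i] - u)   # Y, the residual life

    print(sum(sel_life) / len(sel_life))   # ~ 4/3 = m2/m1, the length-biased mean
    print(sum(resid) / len(resid))         # ~ 2/3 = m2/(2 m1); see Eq. (5.15) below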
Let us express this last result in terms of transforms. Using our usual
transform notation we have the following correspondences:

f(x) ⟺ F*(s)
f̂(x) ⟺ F̂*(s)

Clearly, all the random variables we have been discussing in this section are
nonnegative, and so the relationship in Eq. (5.10) may be transformed directly
by use of entry 5 in Table I.4 and entry 13 in Table I.3 to give

F̂*(s) = (1 − F*(s)) / (s m_1)    (5.11)

It is now a trivial matter to find the moments of residual life in terms of the
moments of the lifetimes themselves. We denote the nth moment of the lifetime by m_n and the nth moment of the residual life by r_n, that is,

m_n ≜ E[(τ_k − τ_{k−1})^n]    (5.12)
r_n ≜ E[Y^n]    (5.13)

Using our moment formula Eq. (II.26), we may differentiate Eq. (5.11) to
obtain the moments of residual life. As s → 0 we obtain indeterminate forms

* It may also be shown that the limiting pdf for age (X_0) is the same as for residual life (Y)
given in Eq. (5.10).


which may be evaluated by means of L'Hospital's rule; this computation
gives the moments of residual life as

r_n = m_{n+1} / ((n + 1) m_1)    (5.14)

This important formula is most often used to evaluate r_1, the mean residual
life, which is found equal to

r_1 = m_2 / (2 m_1)    (5.15)

and may also be expressed in terms of the lifetime variance (denoted by
σ² ≜ m_2 − m_1²) to give

r_1 = m_1/2 + σ²/(2 m_1)    (5.16)

This last form shows that the correct answer to the hippie paradox is m_1/2,
half the mean interarrival time, only if the variance is zero (regularly spaced
arrivals); however, for the Poisson arrivals, m_1 = 1/λ and σ² = 1/λ², giving
r_1 = 1/λ = m_1, which confirms our earlier solution to the hippie paradox of
residual life. Note that m_1/2 ≤ r_1 and r_1 will grow without bound as σ² → ∞.
The result for the mean residual life (r_1) is a rather counterintuitive result; we
will see it appear again and again.
Before leaving renewal theory we take this opportunity to quote some other
useful results. In the language of renewal theory the age-dependent failure
rate r(x) is defined as the instantaneous rate at which a component will fail
given that it has already attained an age of x; that is, r(x) dx ≜ P[x <
lifetime of component ≤ x + dx | lifetime > x]. From first principles, we see
that this conditional density is

r(x) = f(x) / (1 − F(x))    (5.17)

where once again f(x) and F(x) refer to the common distribution of component lifetime. The renewal function H(x) is defined to be

H(x) ≜ E[number of renewals in an interval of length x]    (5.18)

and the renewal density h(x) is merely the renewal rate at time x defined by

h(x) ≜ dH(x)/dx    (5.19)


Renewal theory seems to be obsessed with limit theorems, and one of the
important results is the renewal theorem, which states that

lim_{x→∞} h(x) = 1/m_1    (5.20)

This merely says that in the limit one cannot identify when the renewal
process began, and so the rate at which components are renewed is equal to
the inverse of the average time between renewals (m_1). We note that h(x) is
not a pdf; in fact, its integral diverges in the typical case. Nevertheless, it
does possess a Laplace transform which we denote by H*(s). It is easy to
show that the following relationship exists between this transform and the
transform of the underlying pdf for renewals, namely:

H*(s) = F*(s) / (1 − F*(s))    (5.21)

This last is merely the transform expression of the integral equation of renewal
theory, which may be written as

h(x) = f(x) + \int_0^x h(x − t) f(t) dt    (5.22)
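As a quick sanity check on Eq. (5.21), consider exponential lifetimes with
rate λ, for which the renewal density should be the constant h(x) = λ. The
following is a sketch of our own (the symbol names are assumptions, not the
text's):

    # Verify Eq. (5.21) symbolically for f(x) = lam * exp(-lam * x).
    import sympy as sp

    s, lam, x = sp.symbols('s lam x', positive=True)
    F_star = lam / (s + lam)          # Laplace transform of the exponential pdf
    H_star = F_star / (1 - F_star)    # Eq. (5.21)
    print(sp.simplify(H_star))        # -> lam/s
    print(sp.inverse_laplace_transform(H_star, s, x))   # -> lam*Heaviside(x)

A constant renewal rate h(x) = λ is just the Poisson process again, in
agreement with the renewal theorem of Eq. (5.20).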

More will not be said about renewal theory at this point. Again the reader
is urged to consult the references mentioned above.

5.3. THE IMBEDDED MARKOV CHAIN

We now consider the method of the imbedded Markov chain and apply it
to the M/G/1 queue. The fundamental idea behind this method is that we
wish to simplify the description of state from the two-dimensional description
[N(t), X_0(t)] into a one-dimensional description N(t). If indeed we are to be
successful in calculating future values for our state variable we must also
implicitly give, along with this one-dimensional description of the number in
system, the time expended on service for the customer in service. Furthermore
(and here is the crucial point), we agree that we may gain this simplification
by looking not at all points in time but rather at a select set of points in time.
Clearly, these special epochs must have the property that, if we specify the
number in the system at one such point and also provide future inputs to the
system, then at the next suitable point in time we can again calculate the
number in system; thus somehow we must implicitly be specifying the expended
service for the man in service. How are we to identify a set of points with
this property? There are many such sets. An extremely convenient set of
points with this property is the set of departure instants from service. It is


clear if we specify the number of customers left behind by a departing customer that we can calculate this same quantity at some point in the future
given only the additional inputs to the system. Certainly, we have specified the
expended service time at these instants: it is in fact zero for the customer (if
any) currently in service since he has just at that instant entered service!*
(There are other sets of points with this property, for example, the set of
points that occur exactly 1 sec after customers enter service; if we specify the
number in the system at these instants, then we are capable of solving for the
number of customers in the system at such future instants of time. Such a set
as just described is not as useful as the departure instants since we must
worry about the case where a customer in service does not remain for a
duration exceeding 1 sec.)
The reader should recognize that what we are describing is, in fact, a semi-Markov process in which the state transitions occur at customer departure
instants. At these instants we define the imbedded Markov chain to be the
number of customers present in the system immediately following the
departure. The transitions take place only at the imbedded points and form
a discrete-state space. The distribution of time between state transitions is
equal to the service-time distribution B(x) whenever a departure leaves behind
at least one customer, whereas it equals the convolution of the interarrival-time distribution (exponentially distributed) with b(x) in the case that the
departure leaves behind an empty system. In any case, the behavior of the
chain at these imbedded points is completely describable as a Markov process,
and the results we have discussed in Chapter 2 are applicable.
Our approach then is to focus attention upon departure instants from
service and to specify as our state variable the number of customers left behind
by such a departing customer. We will proceed to solve for the system
behavior at these instants in time. Fortunately, the solution at these imbedded
Markov points happens also to provide the solution for all points in time.†
In Exercise 5.7 the reader is asked to rederive some M/G/1 results using the
method of supplementary variables; this method is good at all points in time
and (as it must) turns out to be identical to the results we get here by using the
imbedded Markov chain approach. This proves once again that our solution
* Moreover, we assume that no service has been expended on any other customer in the
queue.
† This happy circumstance is due to the fact that we have a Poisson input and therefore
(as shown in Section 4.1) an arriving customer takes what amounts to a "random" look
at the system. Furthermore, in Exercise 5.6 we assist the reader in proving that the limiting
distribution for the number of customers left behind by a departure is the same as the
limiting distribution of customers found by a new arrival for any system that changes state
by unit step values (positive or negative); this result is true for arbitrary arrival- and
arbitrary service-time distributions! Thus, for M/G/1, arrivals, departures, and random
observers all see the same distribution of number in the system.


is good for all time. In the following pages we establish results for the queue-length distribution, the waiting-time distribution, and the busy-period
distribution (all in terms of transforms); the waiting-time and busy-period
duration results are in no way restricted by the imbedding we have described.
So even if the other methods were not available, these results would still hold
and would be unconstrained due to the imbedding process. As a final reassurance to the reader we now offer an intuitive justification for the equivalence between the limiting distributions seen by departures and arrivals.
Taking the state of the system as the number of customers therein, we may
observe the changes in system state as time evolves; if we follow the system
state in continuous time, then we observe that these changes are of the
nearest-neighbor type. In particular, if we let E_k be the system state when k
customers are in the system, then we see that the only transitions from this
state are E_k → E_{k+1} and E_k → E_{k−1} (where this last can only occur if k > 0).
This is denoted in Figure 5.2. We now make the observation that the number
of transitions of the type E_k → E_{k+1} can differ by at most one from the
number of transitions of the type E_{k+1} → E_k. The former correspond to
customer arrivals and occur at the arrival instants; the latter refer to customer
departures and occur at the departure instants. After the system has been in
operation for an arbitrarily long time, the number of such transitions upward
must essentially equal the number of transitions downward. Since this up-and-down motion with respect to E_k occurs with essentially the same
frequency, we may therefore conclude that the system states found by arrivals
must have the same limiting distribution (r_k) as the system states left behind
by departures (which we denote by d_k). Thus, if we let N(t) be the number
in the system at time t, we may summarize our two conclusions as follows:
1. For Poisson arrivals, it is always true that [see Eq. (4.6)]

P[N(t) = k] = P[arrival at time t finds k in system]

that is,

p_k(t) = r_k(t)    (5.23)

2. If in any (perhaps non-Markovian) system N(t) makes only discontinuous changes of size (plus or minus) one, then if either one of the
following limiting distributions exists, so does the other and they are
equal (see Exercise 5.6):

r_k ≜ lim_{t→∞} P[arrival at t finds k customers in system]

d_k ≜ lim_{t→∞} P[departure at t leaves k customers behind]

r_k = d_k    (5.24)

Thus, for M/G/1,

p_k = r_k = d_k


Figure 5.2 State transitions for unit step-change systems.


Our approach for the balance of this chapter is first to find the mean
number in system, a result referred to as the Pollaczek-Khinchin mean-value
formula.* Following that we obtain the generating function for the distribution of number of customers in the system and then the transform for both
the waiting-time and total system-time distributions. These last transform
results we shall refer to as Pollaczek-Khinchin transform equations.*
Furthermore, we solve for the transform of the busy-period duration and
for the number served in the busy period; we then show how to derive
waiting-time results from the busy-period analysis. Lastly, we derive the
Takács integrodifferential equation for the unfinished work in the system. We
begin by defining some notation and identifying the transition probabilities
associated with our imbedded Markov chain.

5.4. THE TRANSITION PROBABILITIES

We have already discussed the use of customer departure instants as a set
of imbedded points in the time axis; at these instants we define the imbedded
Markov chain as the number of customers left behind by these departures.
It should be clear to the reader that this is a complete state description since
we know for sure that zero service has so far been expended on the customer
in service and that the time since the last arrival is irrelevant to the future
development of the process, since the interarrival-time distribution is
memoryless. Early in Chapter 2 we introduced some symbolic and graphical
notation; we ask that the reader refresh his understanding of Figure 2.2 and
that he recall the following definitions:

C_n represents the nth customer to enter the system
τ_n = arrival time of C_n
t_n = τ_n − τ_{n−1} = interarrival time between C_{n−1} and C_n
x_n = service time for C_n

In addition, we introduce two new random variables of considerable interest:

q_n = number of customers left behind by departure of C_n from service
v_n = number of customers arriving during the service of C_n
* There is considerable disagreement within the queueing theory literature regarding the
names for the mean-value and transform equations. Some authors refer to the mean-value
expression as the Pollaczek-Khinchin formula, whereas others reserve that term for the
transform equations. We attempt to relieve that confusion by adding the appropriate
adjectives to these names.


We are interested in solving for the distribution of q_n, namely, P[q_n = k],
which is, in fact, a time-dependent probability; its limiting distribution
(as n → ∞) corresponds to d_k, which we know is equal to p_k, the basic
distribution discussed in Chapters 3 and 4 previously. In carrying out that
solution we will find that the number of arriving customers v_n plays a crucial
role.
As in Chapter 2, we find that the transition probabilities describe our
Markov chain; thus we define the one-step transition probabilities

p_{ij} ≜ P[q_{n+1} = j | q_n = i]    (5.25)

Since these transitions are observed only at departures, it is clear that
q_{n+1} < q_n − 1 is an impossible situation; on the other hand, q_{n+1} ≥ q_n − 1
is possible for all values due to the arrivals v_{n+1}. It is easy to see that the
matrix of transition probabilities P = [p_{ij}] (i, j = 0, 1, 2, ...) takes the
following form:

      ⎡ α_0  α_1  α_2  α_3  ⋯ ⎤
      ⎢ α_0  α_1  α_2  α_3  ⋯ ⎥
P =   ⎢  0   α_0  α_1  α_2  ⋯ ⎥
      ⎢  0    0   α_0  α_1  ⋯ ⎥
      ⎣  ⋮    ⋮    ⋮    ⋮      ⎦

where

α_k ≜ P[v_{n+1} = k]    (5.26)

For example, the jth component of the first row of this matrix gives the
probability that the previous customer left behind an empty system and that
during the service of C_{n+1} exactly j customers arrived (all of whom were
left behind by the departure of C_{n+1}); similarly, for other than the first row,
the entry p_{ij} for j ≥ i − 1 gives the probability that exactly j − i + 1
customers arrived during the service period for C_{n+1}, given that C_n left behind
exactly i customers; of these i customers one was indeed C_{n+1} and this
accounts for the +1 term in this last computation. The state-transition-probability diagram for this Markov chain is shown in Figure 5.3, in which
we show only transitions out of E_i.
Let us now calculate α_k. We observe first of all that the arrival process (a
Poisson process at a rate of λ customers per second) is independent of the state
of the queueing system. Similarly, x_n, the service time for C_n, is independent

Figure 5.3 State-transition-probability diagram for the M/G/1 imbedded Markov
chain.

of n and is distributed according to B(x). Therefore, v_n, the number of
arrivals during the service time x_n, depends only upon the duration of x_n and
not upon n at all. We may therefore dispense with the subscripts on v_n and x_n,
replacing them with the random variables v and x̃ so that we may write
P[x_n ≤ x] = P[x̃ ≤ x] = B(x) and P[v_n = k] = P[v = k] = α_k. We may
now proceed with the calculation of α_k. We have by the law of total probability

α_k = P[v = k] = \int_0^\infty P[v = k, x < x̃ ≤ x + dx]

By conditional probabilities we further have

α_k = \int_0^\infty P[v = k | x̃ = x] b(x) dx    (5.27)

where again b(x) = dB(x)/dx is the pdf for service time. Since we have a
Poisson arrival process, we may replace the probability beneath the integral
by the expression given in Eq. (2.131), that is,

α_k = \int_0^\infty \frac{(λx)^k}{k!} e^{−λx} b(x) dx    (5.28)

This then completely specifies the transition probability matrix P.
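As a numerical illustration (a sketch of our own; the rates and names are
assumed for the example), we may evaluate Eq. (5.28) for exponential service
and confirm that the α_k sum to one and have mean ρ [a fact we establish as
Eq. (5.41) below]:

    # Compute alpha_k of Eq. (5.28) for b(x) = mu*exp(-mu*x) by numerical integration.
    import math
    from scipy.integrate import quad

    lam, mu = 0.5, 1.0    # assumed arrival and service rates, rho = 0.5

    def alpha(k):
        f = lambda x: (lam * x) ** k / math.factorial(k) \
                      * math.exp(-lam * x) * mu * math.exp(-mu * x)
        val, _ = quad(f, 0.0, math.inf)
        return val

    alphas = [alpha(k) for k in range(40)]
    print(sum(alphas))                                # ~ 1.0
    print(sum(k * a for k, a in enumerate(alphas)))   # ~ 0.5 = rho

(For exponential service the integral can of course be done in closed form,
giving the geometric distribution α_k = (μ/(λ + μ))(λ/(λ + μ))^k.)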


We note that since (f.k > 0 for all k ~ D it is possible to reach all o ther
sta tes from a ny given state ; thu s o ur Markov cha in is irreducible (a nd
a period ic). More over , let us make ou r usual definition :
p

AX

a nd point out th at thi s Markov chain is ergodi c if p < 1 (unless specified


otherwise, we sha ll assume p < I below) .
T he stationary pro ba bilities may be obtained from the vector equ ati on
p = pP where p = [Po, p" P2' . . .] whose kth component Pk ( = .-ik ) is


merely the limiting probability that a departing customer will leave behind k
customers, namely,

p_k = P[q = k]    (5.29)

In the following section we find the mean value E[q] and in the section
following that we find the z-transform for p_k.

5.5. THE MEAN QUEUE LENGTH

In this section we derive the Pollaczek-Khinchin formula for the mean
value of the limiting queue length. In particular, we define

q = lim_{n→∞} q_n    (5.30)

which certainly will exist in the case where our imbedded chain is ergodic.
Our first step is to find an equation relating the random variable q_{n+1} to
the random variable q_n by considering two cases. The first is shown in Figure
5.4 (using our time-diagram notation) and corresponds to the case where C_n
leaves behind a nonempty system (i.e., q_n > 0). Note that we are assuming a
first-come-first-served queueing discipline, although this assumption only
affects waiting times and not queue lengths or busy periods. We see from
Figure 5.4 that q_n is clearly greater than zero since C_{n+1} is already in the
system when C_n departs. We purposely do not show when customer C_{n+2}
arrives since that is unimportant to our developing argument. We wish now
to find an expression for q_{n+1}, the number of customers left behind when C_{n+1}
departs. This is clearly given as equal to q_n, the number of customers present
when C_n departed, less 1 (since customer C_{n+1} departs himself) plus the
number of customers that arrive during the service interval x_{n+1}. This last
term is clearly equal to v_{n+1} by definition and is shown as a "set" of arrivals

Figure 5.4 Case where q_n > 0.

Figure 5.5 Case where q_n = 0.

in the diagram. Thus we have

q_{n+1} = q_n − 1 + v_{n+1}    q_n > 0    (5.31)

Now consider the second case where q_n = 0, that is, our departing customer leaves behind an empty system; this is illustrated in Figure 5.5. In this
case we see that q_n is clearly zero since C_{n+1} has not yet arrived by the time
C_n departs. Thus q_{n+1}, the number of customers left behind by the departure
of C_{n+1}, is merely equal to the number of arrivals during his service time.
Thus

q_{n+1} = v_{n+1}    q_n = 0    (5.32)

Collecting together Eq. (5.31) and Eq. (5.32) we have

q_{n+1} = q_n − 1 + v_{n+1}    q_n > 0
q_{n+1} = v_{n+1}              q_n = 0    (5.33)

It is convenient at this point to introduce Δ_k, the shifted discrete step function

Δ_k = 1    k = 1, 2, ...
Δ_k = 0    k ≤ 0    (5.34)

which is related to the discrete step function δ_k [defined in Eq. (4.70)] through
Δ_k = δ_{k−1}. Applying this definition to Eq. (5.33) we may now write the single
defining equation for q_{n+1} as

q_{n+1} = q_n − Δ_{q_n} + v_{n+1}    (5.35)
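Before extracting moments from Eq. (5.35) analytically, the reader may find
it reassuring to see the recursion in action. The brief simulation below is a
sketch of our own (names and rates are assumed), driving Eq. (5.35) with
exponential service so that the long-run average of q_n can be compared with
the known M/M/1 value ρ/(1 − ρ):

    # Simulate q_{n+1} = q_n - Delta_{q_n} + v_{n+1}, Eq. (5.35), for M/M/1.
    import random

    lam, mu = 0.5, 1.0    # assumed rates, rho = 0.5
    random.seed(1)

    q, total, N = 0, 0, 200_000
    for _ in range(N):
        x = random.expovariate(mu)    # service time of the departing customer
        v, t = 0, random.expovariate(lam)
        while t < x:                  # count Poisson arrivals during this service
            v += 1
            t += random.expovariate(lam)
        q = q - (1 if q > 0 else 0) + v    # Eq. (5.35)
        total += q

    print(total / N)    # ~ 1.0 = rho/(1 - rho) at rho = 0.5; cf. Eq. (5.66)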


Equation (5.35) is the key equation for the study of M/G/1 systems. It
remains for us to extract from Eq. (5.35) the mean value* for q_n. As usual, we
concern ourselves not with the time-dependent behavior (which is inferred
by the subscript n) but rather with the limiting distribution for the random
variable q_n, which we denote by q. Accordingly we assume that the jth
moment of q_n exists in the limit as n goes to infinity independent of n,
namely,

lim_{n→∞} E[q_n^j] = E[q^j]    (5.36)

(We are in fact requiring ergodicity here.)


As a first attempt let us hope that forming the expectation of both sides of
Eq. (5.35) and then takin g the limit as II - . cc will yield the average value we
are seekin g. Proceeding as described we have

Using Eq . (5.36) we have, in the limit as II ---+ co,


E[g] = E[g] - E[Ll q ] + E[ v]
Alas, the expectation we were seeking drops out of this equation, which
yield s instead .
(5.37)
E[6 .] = E[ v]
What insight does this last equ at ion provide us ? (No te that since v is the
number of arrivals during a customer's service time , which is independent of
II , the ind ex on u; could have been dropped even before we went to the limit.)
We have by definiti on that
E[ v]

average number of arri vals in a service time

Let us now interpret the left-hand side of Eq. (5.37). By definition we may
calculate this directly as

E[Δ_q] = \sum_{k=0}^\infty Δ_k P[q = k]
       = Δ_0 P[q = 0] + Δ_1 P[q = 1] + ⋯

* We could at this point proceed to the next section to obtain the (z-transform of the)
limiting distribution for number in system and from that expression evaluate the average
number in system. Instead, let us calculate the average number in system directly from
Eq. (5.35) following the method of Kendall [KEND 51]; we choose to carry out this extra
work to demonstrate to the student the simplicity of the argument.


But, from the definition in Eq. (5.34) we may rewrite this as

E[Δ_q] = 0·(P[q = 0]) + 1·(P[q > 0])

or

E[Δ_q] = P[q > 0]    (5.38)

Since we are dealing with a single-server system, Eq. (5.38) may also be
written as

E[Δ_q] = P[busy system]    (5.39)

And from our definition of the utilization factor we further have

P[busy system] = ρ    (5.40)

as we had observed* in Eq. (2.32). Thus from Eqs. (5.37), (5.39), and (5.40)
we conclude that

E[v] = ρ    (5.41)

We thus have the perfectly reasonable conclusion that the expected number
of arrivals per service interval is equal to ρ (= λx̄). For stability we of course
require ρ < 1, and so Eq. (5.41) indicates that customers must arrive more
slowly than they can be served (on the average).
We now return to the task of solving for the expected value of q. Forming
the first moment of Eq. (5.35) yielded interesting results but failed to give the
desired expectation. Let us now attempt to find this average value by first
squaring Eq. (5.35) and then taking expectations as follows:

q_{n+1}² = q_n² + (Δ_{q_n})² + v_{n+1}² − 2q_nΔ_{q_n} + 2q_nv_{n+1} − 2Δ_{q_n}v_{n+1}    (5.42)

From our definition in Eq. (5.34) we have (Δ_{q_n})² = Δ_{q_n} and also
q_nΔ_{q_n} = q_n. Applying this to Eq. (5.42) and taking expectations, we have

E[q_{n+1}²] = E[q_n²] + E[Δ_{q_n}] + E[v_{n+1}²] − 2E[q_n] + 2E[q_nv_{n+1}] − 2E[Δ_{q_n}v_{n+1}]

In this equation, we have the expectation of the product of two random
variables in the last two terms. However, we observe that v_{n+1} [the number of
arrivals during the (n+1)th service interval] is independent of q_n (the
number of customers left behind by C_n). Consequently, the last two expectations may each be written as a product of the expectations. Taking the limit
as n goes to infinity, and using our limit assumptions in Eq. (5.36), we have

0 = E[Δ_q] + E[v²] − 2E[q] + 2E[q]E[v] − 2E[Δ_q]E[v]

* For any M/G/1 system, we see that P[q = 0] = 1 − P[q > 0] = 1 − ρ and so P[new
customer need not queue] = 1 − ρ. This agrees with our earlier observation for G/G/1.



We now make use of Eqs. (5.37) and (5.41) to obtain, as an intermediate
result for the expectation of q,

E[q] = ρ + \frac{E[v²] − E[v]}{2(1 − ρ)}    (5.43)

The only unknown here is E[v²].


Let us solve not only for the second moment of 0 but, in fact , let us describe
a meth od for obta ining all the moments, Equati on (5.28) gives an expression
for (Xk = P[ o = k]. From this exp ression we should be able to calculate the
moments. However, we find it expedient first to define the z-tra nsform for the
random variable 0 as
-

.:l

.6.

V(z) = E[z"] =

00

P[o =

k] Zk

(5.44)

k= O

Forming V(z) from Eqs. (5.28) and (5.44) we have

V(z) = \sum_{k=0}^\infty \int_0^\infty \frac{(λx)^k}{k!} e^{−λx} b(x) dx z^k

Our summation and integral are well behaved, and we may interchange the
order of these two operations to obtain

V(z) = \int_0^\infty e^{−λx} \left( \sum_{k=0}^\infty \frac{(λxz)^k}{k!} \right) b(x) dx
     = \int_0^\infty e^{−λx} e^{λxz} b(x) dx
     = \int_0^\infty e^{−(λ−λz)x} b(x) dx    (5.45)

At this point we define (as usual) the Laplace transform B*(s) for the service
time pdf as

B*(s) ≜ \int_0^\infty e^{−sx} b(x) dx

We note that Eq. (5.45) is of this form, with the complex variable s replaced
by λ − λz, and so we recognize the important result that

V(z) = B*(λ − λz)    (5.46)

This last equation is extremely useful and represents a relationship between
the z-transform of the probability distribution of the random variable v and
the Laplace transform of the pdf of the random variable x̃ when the Laplace
transform is evaluated at the critical point λ − λz. These two random variables are such that v represents the number of arrivals occurring during the
interval x̃ where the arrival process is Poisson at an average rate of λ arrivals
per second. We will shortly have occasion to incorporate this interpretation
of Eq. (5.46) in our further results.
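For instance (an illustration of ours, not an equation of the text), with
exponential service the correspondence in Eq. (5.46) may be carried out in
closed form:

B^*(s) = \frac{\mu}{s + \mu}  \;\Longrightarrow\;  V(z) = \frac{\mu}{\mu + \lambda - \lambda z} = \frac{1}{1+\rho} \cdot \frac{1}{1 - \frac{\rho}{1+\rho} z}

which we recognize as the z-transform of the geometric distribution
P[v = k] = (1/(1 + ρ))(ρ/(1 + ρ))^k; its mean is ρ, in agreement with Eq. (5.41).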
From Appendix II we note that various derivatives of z-transforms
evaluated for z = 1 give the various moments of the random variable under
consideration. Similarly, the appropriate derivative of the Laplace transform
evaluated at its argument s = 0 also gives rise to moments. In particular,
from that appendix we recall that

B*^{(k)}(0) ≜ \frac{d^k B*(s)}{ds^k}\Big|_{s=0} = (−1)^k E[x̃^k]    (5.47)

V^{(1)}(1) ≜ \frac{dV(z)}{dz}\Big|_{z=1} = E[v]    (5.48)

V^{(2)}(1) ≜ \frac{d²V(z)}{dz²}\Big|_{z=1} = E[v²] − E[v]    (5.49)

In order to simplify the nota tion for the se limitin g derivat ive opera tions, we
have used the more usual superscript notation with the argument replaced
by its limit. Furthermore, we now resort to the overb ar notat ion to denote
expected value of the random variable below that bar. t Thus Eqs. (5.47)(5.49) become
B*Ckl(O)

( - I )kx"

V(ll(1) = iJ

V(2l( l)

(5.50)
(5.51)

= v' -

iJ

(5.52)

Of course, we must also have the con servati on of probability given by


B*(O)

V(1)

(5.53)

We now wish to exploit the relationship given in Eq. (5.46) so as to be
able to obtain the moments of the random variable v from the expressions
given in Eqs. (5.50)-(5.53). Thus from Eq. (5.46) we have

\frac{dV(z)}{dz} = \frac{dB*(λ − λz)}{dz}    (5.54)

† Recall from Eq. (2.19) that E[x̃^k] ≜ \overline{x^k} = b_k (rather than the more cumbersome
notation (x̄)^k which one might expect). We take the same liberties with v and q, namely,
E[v^k] = \overline{v^k} and E[q^k] = \overline{q^k}.


This last may be calculated as

\frac{dB*(λ − λz)}{dz} = \left(\frac{dB*(λ − λz)}{d(λ − λz)}\right)\left(\frac{d(λ − λz)}{dz}\right) = −λ\frac{dB*(y)}{dy}    (5.55)

where

y = λ − λz    (5.56)

Setting z = 1 in Eq. (5.54) we have

V^{(1)}(1) = −λ \frac{dB*(y)}{dy}\Big|_{z=1}

But from Eq. (5.56) the case z = 1 is the case y = 0, and so we have

V^{(1)}(1) = −λB*^{(1)}(0)    (5.57)

From Eqs. (5.50), (5.51), and (5.57), we finally have

v̄ = λx̄    (5.58)

But λx̄ is just ρ and we have once again established that which we knew from
Eq. (5.41), namely, v̄ = ρ. (This certainly is encouraging.) We may continue
to pick up higher moments by differentiating Eq. (5.54) once again to
obtain

\frac{d²V(z)}{dz²} = \frac{d²B*(λ − λz)}{dz²}    (5.59)

Using the first derivative of B*(y) we now form its second derivative as
follows:

\frac{d²B*(λ − λz)}{dz²} = \frac{d}{dz}\left[−λ\frac{dB*(y)}{dy}\right] = −λ\left(\frac{d²B*(y)}{dy²}\right)\left(\frac{dy}{dz}\right)

or

\frac{d²B*(λ − λz)}{dz²} = λ²\frac{d²B*(y)}{dy²}    (5.60)

Setting z equal to 1 in Eq. (5.59) and using Eq. (5.60) we have

V^{(2)}(1) = λ²B*^{(2)}(0)


Thus, from earlier results in Eqs. (5.50) and (5.52), we obtain

\overline{v²} = λ²\overline{x²} + ρ    (5.61)

We have thus finally solved for \overline{v²}. This clearly is the quantity required in
order to evaluate Eq. (5.43). If we so desired (and with suitable energy) we
could continue this differentiation game and extract additional moments of v
in terms of the moments of x̃; we prefer not to yield to that temptation here.
Returning to Eq. (5.43) we apply Eq. (5.61) to obtain

q̄ = ρ + \frac{λ²\overline{x²}}{2(1 − ρ)}    (5.62)

This is the result we were after! It expresses the average queue size at customer
departure instants in terms of known quantities, namely, the utilization
factor (ρ = λx̄), λ, and \overline{x²} (the second moment of the service-time
distribution). Let us rewrite this result in terms of C_b² = σ_b²/(x̄)², the squared
coefficient of variation for service time:

q̄ = ρ + ρ²\frac{(1 + C_b²)}{2(1 − ρ)}    (5.63)

This last is the extremely well-known formula for the average number of
customers in an M/G/1 system and is commonly* referred to as the Pollaczek-Khinchin (P-K) mean-value formula. Note with emphasis that this average
depends only upon the first two moments (x̄ and \overline{x²}) of the service-time
distribution. Moreover, observe that q̄ grows linearly with the variance of the
service-time distribution (or, if you will, linearly with its squared coefficient
of variation).
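Eq. (5.63) is trivial to compute; the following helper is a sketch of our own
(the function name is our choosing, not the text's):

    # P-K mean-value formula, Eq. (5.63): q_bar = rho + rho^2 (1 + Cb2)/(2(1 - rho)).
    def pk_mean_queue(rho: float, cb2: float) -> float:
        """Average number in an M/G/1 system (requires rho < 1); cb2 is the
        squared coefficient of variation of the service-time distribution."""
        assert 0.0 <= rho < 1.0
        return rho + rho * rho * (1.0 + cb2) / (2.0 * (1.0 - rho))

    print(pk_mean_queue(0.5, 1.0))   # M/M/1 at rho = 0.5 -> 1.0
    print(pk_mean_queue(0.5, 0.0))   # M/D/1 at rho = 0.5 -> 0.75

These two sample values anticipate the M/M/1 and M/D/1 special cases
derived below.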
The P-K mean-value formula provides an expression for q̄ that represents
the average number of customers in the system at departure instants; however, we already know that this also represents the average number at the
arrival instants and, in fact, at all points in time. We already have a notation
for the average number of customers in the system, namely N̄, which we
introduced in Chapter 2 and have used in previous chapters; we will continue
to use the N̄ notation outside of this chapter. Furthermore, we have defined
N̄_q to be the average number of customers in the queue (not counting the
customer in service). Let us take a moment to develop a relationship between
these two quantities. By definition we have

N̄ = \sum_{k=0}^\infty k P[q = k]    (5.64)

* See footnote on p. 177.


Similarly we may calculate the average queue size by subtracting unity from
this previous calculation so long as there is at least one customer in the
system, that is (note the lower limit),

N̄_q = \sum_{k=1}^\infty (k − 1) P[q = k]

This easily gives us

N̄_q = \sum_{k=1}^\infty k P[q = k] − \sum_{k=1}^\infty P[q = k]

But the second sum is merely ρ and so we have the result

N̄_q = N̄ − ρ    (5.65)

This simple formula gives the general relationship we were seeking.


As an example of the P- K mean-val ue for mula , in the case of an MIMfI
system, we have that the coefficient of va riati on for the exponential distributi on is uni ty [see Eq . (2. 145). Thus for this system we have

__ +
q -

2
(2)
P 2(1 - p)

or
q=
-pI - P

MIMII

(5.66)

Equati on (5.66) gives the expected number of cust omers left behind by a
departi ng custome r. Compare thi s to t he expression for the average number of
customers in a n MIMfI system a s give n in Eq . (3.24). They a re identical and
lend va lidit y to our ea rlier statemen ts that th e meth od of the imbedded
Markov cha in in the MIGfI case gives rise to a so lution that is good a t all
points in time. As a second example , let us con sider the service-time distributi on in which service time is a con stant a nd equ al to x. Such systems are
de scribed by the notation MIDII , as we ment ioned earlier. In th is case
clea rly C b 2 = 0 a nd so we have

__ +

q-

P 2(1 -

p)

ij = - p- - --,P_1- P
2( 1 - p)

- (5.67)

MIDII

Thus the MIDfI system has p 2 /2(1 - p) fewer customers o n the a verage than
the MIMI I system, demonstrating the earlier sta tement th at ij increases with
the vari ance of the service-time distribution .


Figure 5.6 The M/H_2/1 example.


F or a th ird example, we consider an M /H 2/l system in wh ich
x ~ O

(5.6S)

That is, the service facility consists of two parallel service stages, as shown in
Fi gure 5.6. N ot e that A is also the arrival rate, as usual. We may immediately
ca lculate x = 5/(S).) a nd (Jb 2 = 31 /(64.12) , which yield s C/ = 31/25. Thus

--

q -

P"( 2.24)
+..:........:_-

2(1 - p)

O.12p 2

I- p

I -p

= --+-Thus we see t he (small) increase in ij for the (sma ll) increase in C;2 over th e
va lue of un ity for M/M / 1. We note in this example th at p is fixed a t p =
i.x = 5/S; th erefore, ij = 1.79, whereas for M /M/l a t thi s va lue of p we get
ij = 1.66. We have introduced thi s M /H 2/l example here since we intend to
carry it (a nd the M/M/ I exa mple) thr ou gh our MIG/l discussion.
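Using the helper sketched earlier (our own names again), both figures are
easy to reproduce:

    print(pk_mean_queue(5/8, 31/25))   # M/H2/1 example -> ~1.79
    print(pk_mean_queue(5/8, 1.0))     # M/M/1 at rho = 5/8 -> ~1.67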
The main result of this section is the Pollaczek-Khinchin formula for
the mean number in system, as given in Eq. (5.63). This result becomes a
special case of our results in the next section, but we feel that its development
has been useful as a pedagogical device. Moreover, in obtaining this result
we established the basic equation for M/G/1 given in Eq. (5.35). We also
obtained the general relationship between V(z) and B*(s), as given in Eq.
(5.46); from this we are able to obtain the moments for the number of arrivals
during a service interval.


We have not as yet derived any results regarding time spent in the system;
we are now in a position to do so. We recall Little's result:

N̄ = λT

This result relates the expected number of customers N̄ in a system to λ, the
arrival rate of customers, and to T, their average time in the system. For
M/G/1 we have derived Eq. (5.63), which is the expected number in the
system at customer departure instants. We may therefore apply Little's result
to this expected number in order to obtain the average time spent in the
system (queue + service). We know that q̄ also represents the average
number of customers found at random, and so we may equate q̄ = N̄. Thus
we have

N̄ = ρ + ρ²\frac{(1 + C_b²)}{2(1 − ρ)} = λT
Solving for T we have

T = x̄ + \frac{ρx̄(1 + C_b²)}{2(1 − ρ)}    (5.69)

This last is easily interpreted. The average total time spent in system is clearly
the average time spent in service plus the average time spent in the queue.
The first term above is merely the average service time and thus the second
term must represent the average queueing time (which we denote by W).
Thus we have that the average queueing time is

W = \frac{ρx̄(1 + C_b²)}{2(1 − ρ)}

or

W = \frac{W_0}{1 − ρ}    (5.70)

where W_0 ≜ λ\overline{x²}/2; W_0 is the average remaining service time for the customer
(if any) found in service by a new arrival (work it out using the mean residual
life formula). A particularly nice normalization factor is now apparent.
Consider T, the average time spent in system. It is natural to compare this
time to x̄, the average service time required of the system by a customer.
Thus the ratio T/x̄ expresses the ratio of time spent in system to time required
of the system and represents the factor by which the system inconveniences


customers due to the fact that they are sharing the system with other customers. If we use this normalization in Eqs. (5.69) and (5.70), we arrive at the
following, where now time is expressed in units of average service intervals:

\frac{T}{x̄} = 1 + \frac{ρ(1 + C_b²)}{2(1 − ρ)}    (5.71)

\frac{W}{x̄} = \frac{ρ(1 + C_b²)}{2(1 − ρ)}    (5.72)

Each of these last two equations is also referred to as the P-K mean-value
formula [along with Eq. (5.63)]. Here we see the linear fashion in which the
statistical fluctuations of the input processes create delays (i.e., 1 + C_b² is
the sum of the squared interarrival-time and service-time coefficients of
variation). Further, we see the highly nonlinear dependence of delays upon
the average load ρ.
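To see that nonlinearity numerically, the short sketch below (again with
assumed names of our own) tabulates the normalized wait of Eq. (5.72) for
exponential service as ρ approaches unity:

    # Normalized queueing time W/x_bar = rho (1 + Cb2)/(2 (1 - rho)), Eq. (5.72).
    def pk_norm_wait(rho: float, cb2: float) -> float:
        return rho * (1.0 + cb2) / (2.0 * (1.0 - rho))

    for rho in (0.5, 0.8, 0.9, 0.95):
        print(rho, pk_norm_wait(rho, 1.0))   # M/M/1: 1.0, 4.0, 9.0, 19.0

The wait in units of service times explodes as ρ → 1, while the dependence on
the service-time fluctuations (through 1 + C_b²) remains linear.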
Let us now compare the mean normalized queueing time for the systems*
M/M/1 and M/D/1; these have a squared coefficient of variation C_b² equal to
1 and 0, respectively. Applying this to Eq. (5.72) we have

\frac{W}{x̄} = \frac{ρ}{1 − ρ}    M/M/1    (5.73)

\frac{W}{x̄} = \frac{ρ}{2(1 − ρ)}    M/D/1    (5.74)

Note that the system with constant service time (M/D/1) has half the average
waiting time of the system with exponentially distributed service time
(M/M/1). Thus, as we commented earlier, the time in the system and the
number in the system both grow in proportion to the variance of the
service-time distribution.

* Of less interest is our highly specialized M/H_2/1 example for which we obtain
W/x̄ = 1.12ρ/(1 − ρ).
Let us now proceed to find the distribution of the number in the system.
5.6. DISTRIBUTION OF NUMBER IN SYSTEM

In the previous sections we characterized the M/G/1 queueing system as an
imbedded Markov chain and then established the fundamental equation
(5.35), repeated here:

q_{n+1} = q_n − Δ_{q_n} + v_{n+1}    (5.75)

By forming the average of this last equation we obtained a result regarding
the utilization factor ρ [see Eq. (5.41)]. By first squaring Eq. (5.75) and then


taking expectations we were able to obtain P-K formulas that gave the
expected number in the system [Eq. (5.63)] and the normalized expected
time in the system [Eq. (5.71)]. If we were now to seek the second moment of
the number in the system we could obtain this quantity by first cubing Eq.
(5.75) and then taking expectations. In this operation it is clear that the
expectation E[q³] would cancel on both sides of the equation once the limit
on n was taken; this would then leave an expression for the second moment of
q. Similarly, all higher moments can be obtained by raising Eq. (5.75) to
successively higher powers and then forming expectations.* In this section,
however, we choose to go after the distribution for q_n itself (actually we
consider the limiting random variable q). As it turns out, we will obtain a
result which gives the z-transform for this distribution rather than the distribution itself. In principle, these last two are completely equivalent; in
practice, we sometimes face great difficulty in inverting from the z-transform
back to the distribution. Nevertheless, we can pick off the moments of the
distribution of q from the z-transform in extremely simple fashion by making
use of the usual properties of transforms and their derivatives.
Let us now proceed to calculate the z-transform for the probability of
finding k customers in the system immediately following the departure of a
customer. We begin by defining the z-transform for the random variable q_n
as

Q_n(z) ≜ \sum_{k=0}^\infty P[q_n = k] z^k    (5.76)

From Appendix II (and from the definition of expected value) we have that
this z-transform (or probability generating function) is also given by

Q_n(z) ≜ E[z^{q_n}]    (5.77)

Of interest is the z-transform for our limiting random variable q:

Q(z) = lim_{n→∞} Q_n(z) = \sum_{k=0}^\infty P[q = k] z^k = E[z^q]    (5.78)

As is usual in these definitions for transforms, the sum on the right-hand side
of Eq. (5.76) converges to Eq. (5.77) only within some circle of convergence
in the z-plane which defines a maximum value for |z| (certainly |z| ≤ 1
is allowed).
The system M/G/1 is characterized by Eq. (5.75). We therefore use both
sides of this equation as an exponent for z as follows:

z^{q_{n+1}} = z^{q_n − Δ_{q_n} + v_{n+1}}

* Specifically, the kth power leads to an expression for E[q^{k−1}] that involves the first k
moments of service time.


Let us now take expectations:

E[z^{q_{n+1}}] = E[z^{q_n − Δ_{q_n} + v_{n+1}}]

Using Eq. (5.77) we recognize the left-hand side of this last as Q_{n+1}(z).
Similarly, we may write the right-hand side of this equation as the expectation of the product of two factors, giving us

Q_{n+1}(z) = E[z^{q_n − Δ_{q_n}} z^{v_{n+1}}]    (5.79)

We now observe, as earlier, that the random variable v_{n+1} (which represents
the number of arrivals during the service of C_{n+1}) is independent of the
random variable q_n (which is the number of customers left behind upon the
departure of C_n). Since this is true, then the two factors within the expectation
on the right-hand side of Eq. (5.79) must themselves be independent (since
functions of independent random variables are also independent). We may
thus write the expectation of the product in that equation as the product of
the expectations:

Q_{n+1}(z) = E[z^{q_n − Δ_{q_n}}]E[z^{v_{n+1}}]    (5.80)

The second of these two expectations we again recognize as being independent
of the subscript n + 1; we thus remove the subscript and consider the random
variable v again. From Eq. (5.44) we then recognize that the second expectation on the right-hand side of Eq. (5.80) is merely

E[z^{v_{n+1}}] = E[z^v] = V(z)

We thus have

Q_{n+1}(z) = V(z)E[z^{q_n − Δ_{q_n}}]    (5.81)

The only complicating factor in this last equation is the expectation. Let us examine this term separately; from the definition of expectation we have

E[z^{q_n - Δ_{q_n}}] = \sum_{k=0}^{∞} P[q_n = k] z^{k - Δ_k}

The difficult part of this summation is that the exponent on z contains Δ_k, which takes on one of two values according to the value of k. In order to simplify this special behavior we write the summation by exposing the first term separately:

E[z^{q_n - Δ_{q_n}}] = P[q_n = 0] z^{0-0} + \sum_{k=1}^{∞} P[q_n = k] z^{k-1}        (5.82)

Regarding the sum in this last equation we see that it is almost of the form given in Eq. (5.76); the differences are that we have one fewer power of z and also that we are missing the first term in the sum. Both these deficiencies may be corrected as follows:

\sum_{k=1}^{∞} P[q_n = k] z^{k-1} = \frac{1}{z} \sum_{k=0}^{∞} P[q_n = k] z^k - \frac{1}{z} P[q_n = 0] z^0        (5.83)


Applying this to Eq. (5.82) and recognizing that the sum on the right-hand side of Eq. (5.83) is merely Q_n(z), we have

E[z^{q_n - Δ_{q_n}}] = P[q_n = 0] + \frac{Q_n(z)}{z} - \frac{P[q_n = 0]}{z}

We may now substitute this last in Eq. (5.81) to obtain

Q_{n+1}(z) = V(z) ( P[q_n = 0] + \frac{Q_n(z) - P[q_n = 0]}{z} )

We now take the limit as n goes to infinity and recognize the limiting value expressed in Eq. (5.36). We thus have

Q(z) = V(z) ( P[q̃ = 0] + \frac{Q(z) - P[q̃ = 0]}{z} )        (5.84)

Using P[q̃ = 0] = 1 − ρ, and solving Eq. (5.84) for Q(z), we find

Q(z) = \frac{V(z)(1 - ρ)(1 - 1/z)}{1 - V(z)/z}        (5.85)

Finally we multiply numerator and denominator of this last by (−z) and use our result in Eq. (5.46) to arrive at the well-known equation that gives the z-transform for the number of customers in the system,

Q(z) = B*(λ - λz) \frac{(1 - ρ)(1 - z)}{B*(λ - λz) - z}        (5.86)

We shall refer to this as one form of the Pollaczek-Khinchin (P-K) transform equation.†

The P-K transform equation readily yields the moments for the distribution of the number of customers in the system. Using the moment-generating properties of our transform expressed in Eqs. (5.50)-(5.52) we see that certainly Q(1) = 1; when we attempt to set z = 1 in Eq. (5.86), we obtain an indeterminate form‡ and so we are required to use L'Hospital's rule. In carrying out this operation we find that we must evaluate lim dB*(λ - λz)/dz as z → 1, which was carried out in the previous section and shown to be equal to ρ. This computation verifies that Q(1) = 1. In Exercise 5.5, the reader is asked to show that Q^{(1)}(1) = q̄.

† This formula was found in 1932 by A. Y. Khinchin [KHIN 32]. Shortly we will derive two other equations (each of which follows from and implies this equation), which we also refer to as P-K transform equations; these were studied by F. Pollaczek [POLL 30] in 1930 and Khinchin in 1932. See also the footnote on p. 177.
‡ We note that the denominator of the P-K transform equation must always contain the factor (1 − z) since B*(0) = 1.
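The moment-picking procedure just described is easy to carry out numerically. The sketch below (ours, not the book's; the function names are our own) evaluates the P-K transform (5.86) near z = 1 for exponential service, B*(s) = μ/(s + μ) (the M/M/1 case treated next), where the mean number in system is known to be ρ/(1 − ρ); the indeterminacy at z = 1 is avoided by evaluating only at z = 1 ± h. Parameter values are arbitrary.

```python
# A minimal numerical check, assuming M/M/1 service so B*(s) = mu/(s + mu).
lam, mu = 0.6, 1.0
rho = lam / mu

def B_star(s):
    return mu / (s + mu)

def Q(z):
    """P-K transform equation (5.86); indeterminate at z = 1 exactly."""
    b = B_star(lam - lam * z)
    return b * (1 - rho) * (1 - z) / (b - z)

h = 1e-6
print((Q(1 - h) + Q(1 + h)) / 2)        # ~1.0, i.e., Q(1) = 1
print((Q(1 + h) - Q(1 - h)) / (2 * h))  # ~1.5, the mean q-bar
print(rho / (1 - rho))                  # exact value 1.5 for comparison
```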


Usually, the inversion of the P-K transform equation is difficult, and therefore one settles for moments. However, the system M/M/1 yields very nicely to inversion (and to almost everything else). Thus, by way of example, we shall find its distribution. We have

B*(s) = \frac{μ}{s + μ}        M/M/1        (5.87)

Clearly, the region of convergence for this last form is Re(s) > −μ. Applying this to the P-K transform equation we find

Q(z) = \frac{[μ/(λ - λz + μ)] (1 - ρ)(1 - z)}{[μ/(λ - λz + μ)] - z}

Noting that ρ = λ/μ, we have

Q(z) = \frac{1 - ρ}{1 - ρz}        (5.88)

Equation (5.88) is the solution for the z-transform of the distribution of the number of people in the system. We can reach a point such as this with many service-time distributions B(x); for the exponential distribution we can evaluate the inverse transform (by inspection!). We find immediately that

P[q̃ = k] = (1 - ρ)ρ^k        M/M/1        (5.89)

This then is the familiar solution for M/M/1. If the reader refers back to Eq. (3.23), he will find the same function for the probability of k customers in the M/M/1 system. However, Eq. (3.23) gives the solution for all points in time whereas Eq. (5.89) gives the solution only at the imbedded Markov points (namely, at the departure instants for customers). The fact that these two answers are identical is no surprise for two reasons: first, because we told you so (we said that the imbedded Markov points give solutions that are good at all points); and second, because we recognize that the M/M/1 system forms a continuous-time Markov chain.
As a second example, we consider the system M/H₂/1 whose pdf for service time was given in Eq. (5.68). By inspection we may find B*(s), which gives

B*(s) = \frac{1}{4}\,\frac{λ}{s + λ} + \frac{3}{4}\,\frac{2λ}{s + 2λ} = \frac{8λ² + 7λs}{4(s + λ)(s + 2λ)}        (5.90)


where the plane of convergence is Re(s) > −λ. From the P-K transform equation we then have

Q(z) = \frac{(1 - ρ)(1 - z)[8 + 7(1 - z)]}{[8 + 7(1 - z)] - 4z(2 - z)(3 - z)}

Factoring the denominator and canceling the common term (1 − z) we have

Q(z) = \frac{(1 - ρ)[1 - (7/15)z]}{[1 - (2/5)z][1 - (2/3)z]}

We now expand Q(z) in partial fractions, which gives

Q(z) = (1 - ρ) ( \frac{1/4}{1 - (2/5)z} + \frac{3/4}{1 - (2/3)z} )

This last may be inverted by inspection (by now the reader should recognize the sixth entry in Table I.2) to give

P[q̃ = k] = (1 - ρ) [ \frac{1}{4}(\frac{2}{5})^k + \frac{3}{4}(\frac{2}{3})^k ]        (5.91)

Lastly, we note that the value for ρ has already been calculated at 5/8, and so for a final solution we have

P[q̃ = k] = \frac{3}{8} [ \frac{1}{4}(\frac{2}{5})^k + \frac{3}{4}(\frac{2}{3})^k ]        k = 0, 1, 2, ...        (5.92)

It should not surprise us to find this sum of geometric terms for our solution. Further examples will be found in the exercises. For now we terminate the discussion of how many customers are in the system and proceed with the calculation of how long a customer spends in the system.
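Before leaving this example, here is a quick check of Eq. (5.92) (ours, not the book's): using exact rational arithmetic, we expand the factored form of Q(z) above by power-series long division and compare every coefficient against the closed form. All names in the snippet are our own.

```python
from fractions import Fraction as Fr

# Expand Q(z) = (1-rho)(1 - (7/15)z) / [(1-(2/5)z)(1-(2/3)z)] with rho = 5/8.
rho = Fr(5, 8)
num = [1 - rho, -(1 - rho) * Fr(7, 15)]      # numerator coefficients
den = [Fr(1), -Fr(16, 15), Fr(4, 15)]        # (1 - 2z/5)(1 - 2z/3) expanded

coef = []
for k in range(10):
    c = num[k] if k < len(num) else Fr(0)
    for j in range(1, min(k, 2) + 1):        # long-division recurrence
        c -= den[j] * coef[k - j]
    coef.append(c / den[0])

for k, c in enumerate(coef):
    closed = (1 - rho) * (Fr(1, 4) * Fr(2, 5)**k + Fr(3, 4) * Fr(2, 3)**k)
    assert c == closed                        # matches Eq. (5.92) exactly

print([float(c) for c in coef[:5]])           # 0.375, 0.225, ...
```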

5.7. DISTRIBUTION OF WAITING TIME

Let us now set out to find the distribution of time spent in the system and in the queue. These particular quantities are rather easy to obtain from our earlier principal result, namely, the P-K transform equation (and, as we have said, lead to expressions which share that name). Note that the order in which customers receive service has so far not affected our results. Now, however, we must use our assumption that the order of service is first-come-first-served.

In order to proceed in the simplest possible fashion, let us re-examine the derivation of the following equation:

V(z) = B*(λ - λz)        (5.93)

Figure 5.7 Derivation of V(z) = B*(λ - λz). [The v_n arrivals occur during the service interval x_n.]

In Figure 5.7, the reader is reminded of the structure from which we obtained this equation. Recall that V(z) is the z-transform of the number of customer arrivals in a particular interval, where the arrival process is Poisson at a rate of λ customers per second. The particular time interval involved happens to be the service interval for C_n; this interval has distribution B(x) with Laplace transform B*(s). The derived relation between V(z) and B*(s) is given in Eq. (5.93). The important observation to make now is that a relationship of this form must exist between any two random variables where the one identifies the number of customer arrivals from a Poisson process and the other describes the time interval over which we are counting these customer arrivals. It clearly makes no difference what the interpretation of this time interval is, only that we give the distribution of its length; in Eq. (5.93) it just so happens that the interval involved is a service interval. Let us now direct our attention to Figure 5.8, which concentrates on the time spent in the system for C_n. In this figure we have traced the history of C_n. The interval labeled w_n identifies the time from when C_n enters the queue until that customer leaves the queue and enters service; it is clearly the waiting time in queue for C_n. We have also identified the service time x_n for C_n.

Figure 5.8 Derivation of Q(z) = S*(λ - λz). [The q_n arrivals occur during C_n's total time in system s_n.]


We may thus identify the total time spent in system, s_n, for C_n:

s_n = w_n + x_n        (5.94)

We have earlier defined q_n as the number of customers left behind upon the departure of C_n. In considering a first-come-first-served system it is clear that all those customers present upon the arrival of C_n must depart before he does; consequently, those customers that C_n leaves behind him (a total of q_n) must be precisely those who arrive during his stay in the system. Thus, referring to Figure 5.8, we may identify those customers who arrive during the time interval s_n as being our previously defined random variable q_n.

The reader is now asked to compare Figures 5.7 and 5.8. In both cases we have a Poisson arrival process at rate λ customers per second. In Figure 5.7 we inquire into the number of arrivals (v_n) during the interval whose duration is given by x_n; in Figure 5.8 we inquire into the number of arrivals (q_n) during an interval whose duration is given by s_n. We now define the distribution for the total time spent in system for C_n as

S_n(y) ≜ P[s_n ≤ y]        (5.95)

Since we are assuming ergodicity, we recognize immediately that the limit of this distribution (as n goes to infinity) must be independent of n. We denote this limit by S(y) and the limiting random variable by s̃ [i.e., S_n(y) → S(y) and s_n → s̃]. Thus

S(y) ≜ P[s̃ ≤ y]        (5.96)

Finally, we define the Laplace transform of the pdf for total time in system as

S*(s) ≜ \int_0^∞ e^{-sy} dS(y) = E[e^{-s s̃}]        (5.97)

With these definitions we go back to the analogy between Figures 5.7 and 5.8. Clearly, since v_n is analogous to q_n, then V(z) must be analogous to Q(z), since each describes the generating function for the respective number distribution. Similarly, since x_n is analogous to s_n, then B*(s) must be analogous to S*(s). We have therefore, by direct analogy from Eq. (5.93), that†

Q(z) = S*(λ - λz)        (5.98)

Since we already have an explicit expression for Q(z) as given in the P-K transform equation, we may therefore use that with Eq. (5.98) to give an explicit expression for S*(s) as

S*(λ - λz) = B*(λ - λz) \frac{(1 - ρ)(1 - z)}{B*(λ - λz) - z}        (5.99)

† This can be derived directly by the unconvinced reader in a fashion similar to that which led to Eqs. (5.28) and (5.46).


This last equation is just crying for the obvious change of variable s = λ − λz, which gives

z = 1 - \frac{s}{λ}

Making this change of variable in Eq. (5.99) we then have

S*(s) = B*(s) \frac{s(1 - ρ)}{s - λ + λB*(s)}        (5.100)

Equation (5.100) is the desired explicit expression for the Laplace transform of the distribution of total time spent in the M/G/1 system. It is given in terms of known quantities derivable from the initial statement of the problem [namely, the specification of the service-time distribution B(x) and the parameters λ and x̄]. This is the second of the three equations that we refer to as the P-K transform equation.
From Eq. (5.100) it is trivial to derive the Laplace transform of the distribution of waiting time, which we shall denote by W*(s). We define the PDF for C_n's waiting time (in queue) to be W_n(y), that is,

W_n(y) ≜ P[w_n ≤ y]        (5.101)

Furthermore, we define the limiting quantities (as n → ∞), W_n(y) → W(y) and w_n → w̃, so that

W(y) ≜ P[w̃ ≤ y]        (5.102)

The corresponding Laplace transform is

W*(s) ≜ \int_0^∞ e^{-sy} dW(y) = E[e^{-s w̃}]        (5.103)

From Eq. (5.94) we may derive the distribution of w̃ from the distributions of s̃ and x̃ (we drop subscript notation now since we are considering equilibrium behavior). Since a customer's service time is independent of his queueing time, we have that s̃, the time spent in system for some customer, is the sum of two independent random variables: w̃ (his queueing time) and x̃ (his service time). That is, Eq. (5.94) has the limiting form

s̃ = w̃ + x̃        (5.104)

As derived in Appendix II, the Laplace transform of the pdf of a random variable that is itself the sum of two independent random variables is equal to the product of the Laplace transforms for the pdf of each. Consequently, we have

S*(s) = W*(s) B*(s)


Thus from Eq. (5.100) we obtain immediately that

W*(s) = \frac{s(1 - ρ)}{s - λ + λB*(s)}        (5.105)

This is the desired expression for the Laplace transform of the queueing (waiting)-time distribution. Here we have the third equation that will be referred to as the P-K transform equation.

Let us rewrite the P-K transform equation for waiting time as follows:

W*(s) = \frac{1 - ρ}{1 - ρ [ \frac{1 - B*(s)}{s x̄} ]}        (5.106)

We recognize the bracketed term in the denominator of this equation to be exactly the Laplace transform associated with the density of residual service time from Eq. (5.11). Using our special notation for residual densities and their transforms, we define

B̂*(s) ≜ \frac{1 - B*(s)}{s x̄}        (5.107)

and are therefore permitted to write

W*(s) = \frac{1 - ρ}{1 - ρ B̂*(s)}        (5.108)

This observation is truly amazing since we recognized at the outset that the problem with the M/G/1 analysis was to take account of the expended service time for the man in service. From that investigation we found that the residual service time remaining for the customer in service had a pdf given by b̂(x), whose Laplace transform is given in Eq. (5.107). In a sense there is a poetic justice in its appearance at this point in the final solution. Let us follow Beneš [BENE 56] in inverting this transform in terms of these residual service-time densities. Equation (5.108) may be expanded as the following power series:

W*(s) = (1 - ρ) \sum_{k=0}^{∞} ρ^k [B̂*(s)]^k        (5.109)

From Appendix I we know that the kth power of a Laplace transform corresponds to the k-fold convolution of the inverse transform with itself. As in Appendix I the symbol ⊛ is used to denote the convolution operator, and we now choose to denote the k-fold convolution of a function f(x) with itself by the use of a parenthetical subscript as follows:

f_{(k)}(x) ≜ f(x) ⊛ f(x) ⊛ ··· ⊛ f(x)        (k-fold convolution)        (5.110)

Using this notation we may by inspection invert Eq. (5.109) to obtain the waiting-time pdf, which we denote by w(y) ≜ dW(y)/dy; it is given by

w(y) = \sum_{k=0}^{∞} (1 - ρ) ρ^k b̂_{(k)}(y)        (5.111)

This is a most intriguing result! It states that the waiting-time pdf is given by a weighted sum of convolved residual service-time pdf's. The interesting observation is that the weighting factor is simply (1 − ρ)ρ^k, which we now recognize to be the probability distribution for the number of customers in an M/M/1 system. Tempting as it is to try to give a physical explanation for the simplicity of this result and its relation to M/M/1, no satisfactory, intuitive explanation has been found to explain this dramatic form. We note that the contribution to the waiting-time density decreases geometrically with ρ in this series. Thus, for ρ not especially close to unity, we expect the high-order terms to be of less and less significance, and one practical application of this equation is to provide a rapidly converging approximation to the density of waiting time.
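The sketch below (ours, not the book's) illustrates how quickly this series converges for M/M/1. There the residual service-time density is again exponential, b̂(y) = μe^{-μy}, so b̂_{(k)}(y) is the Erlang-k density, and the truncated series can be compared against λ(1 − ρ)e^{-μ(1-ρ)y}, the continuous part of the M/M/1 waiting-time pdf that will appear in Eq. (5.122) below. Names and parameters are our own choices.

```python
import math

lam, mu = 0.5, 1.0
rho = lam / mu

def erlang(k, y):
    """k-fold convolution of mu*exp(-mu*y) with itself (k >= 1)."""
    return mu * (mu * y) ** (k - 1) * math.exp(-mu * y) / math.factorial(k - 1)

def w_series(y, K):
    """Continuous part of Eq. (5.111): sum over k >= 1, truncated at K."""
    return sum((1 - rho) * rho ** k * erlang(k, y) for k in range(1, K + 1))

y = 2.0
exact = lam * (1 - rho) * math.exp(-mu * (1 - rho) * y)
for K in (1, 3, 5, 10):
    print(K, w_series(y, K), exact)   # rapid convergence to the exact value
```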
So far in this section we have established two principal results, namely, the P-K transform equations for time in system and time in queue given in Eqs. (5.100) and (5.105), respectively. In the previous section we have already given the first moments of these two random variables [see Eqs. (5.69) and (5.70)]. We wish now to give a recurrence formula for the moments of the waiting time. We denote the kth moment of the waiting time, E[w̃^k], as usual, by w̄^k, and we adopt our slightly simplified notation for the ith moment of service time: b_i ≜ E[x̃^i]. Takács [TAKA 62b] has shown that if b_{k+1} is finite, then so also are the moments w̄^1, w̄^2, ..., w̄^k. The Takács recurrence formula is

w̄^k = \frac{λ}{1 - ρ} \sum_{i=1}^{k} \binom{k}{i} \frac{b_{i+1}}{i+1} w̄^{k-i}        (5.112)

where w̄^0 ≜ 1. From this formula we may write down the first couple of moments for waiting time (and note that the first moment of waiting time agrees with the P-K formula):

w̄ (= W) = \frac{λ b_2}{2(1 - ρ)}        (5.113)

w̄^2 = 2(w̄)^2 + \frac{λ b_3}{3(1 - ρ)}        (5.114)

In order to obtain similar moments for the total time in system, that is, E[s̃^k], which we denote by s̄^k, we need merely take advantage of Eq. (5.104); from this equation we find

s̄^k = E[(w̃ + x̃)^k]        (5.115)

Using the binomial expansion and the independence between waiting time and service time for a given customer, we find

s̄^k = \sum_{i=0}^{k} \binom{k}{i} w̄^{k-i} b_i        (5.116)

Thus calculating the moments of the waiting time from Eq. (5.112) also permits us to calculate the moments of time in system from this last equation. In Exercise 5.25, we derive a relationship between s̄^k and the moments of the number in system; the simplest of these is Little's result, and the others are useful generalizations.
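The recurrence (5.112) is easily transcribed into code. The sketch below (ours, not the book's; function and variable names are our own) computes the first two waiting-time moments from the service-time moments b_i and checks them against Eqs. (5.113) and (5.114) for the exponential case, where b_i = i!/μ^i; all parameter values are arbitrary.

```python
from math import comb, factorial

def waiting_moments(lam, b, K):
    """Takacs recurrence (5.112): b[i] holds E[x^i] for i = 1..K+1."""
    rho = lam * b[1]
    w = [1.0]                                # w[0] = E[w^0] = 1
    for k in range(1, K + 1):
        s = sum(comb(k, i) * b[i + 1] / (i + 1) * w[k - i]
                for i in range(1, k + 1))
        w.append(lam / (1 - rho) * s)
    return w[1:]

lam, mu = 0.5, 1.0
rho = lam / mu
b = {i: factorial(i) / mu ** i for i in range(1, 4)}   # exponential moments
w1, w2 = waiting_moments(lam, b, 2)
print(w1, lam * b[2] / (2 * (1 - rho)))                # Eq. (5.113): both 1.0
print(w2, 2 * w1 ** 2 + lam * b[3] / (3 * (1 - rho)))  # Eq. (5.114): both 4.0
```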
At the end of Section 3.2, we promised the reader that we would develop the pdf for the time spent in the system for an M/M/1 queueing system. We are now in a position to fulfill that promise. Let us in fact find both the distribution of waiting time and the distribution of system time for customers in M/M/1. Using Eq. (5.87) for the system M/M/1, we may calculate S*(s) from Eq. (5.100) as follows:

S*(s) = \frac{μ}{s + μ} \cdot \frac{s(1 - ρ)}{s - λ + λμ/(s + μ)}

or

S*(s) = \frac{μ(1 - ρ)}{s + μ(1 - ρ)}        M/M/1        (5.117)

This equation gives the Laplace transform of the pdf for time in the system, which we denote, as usual, by s(y) ≜ dS(y)/dy. Fortunately (as is usual with the case M/M/1), we recognize the inverse of this transform by inspection. Thus we have immediately that

s(y) = μ(1 - ρ) e^{-μ(1-ρ)y}        y ≥ 0        M/M/1        (5.118)

The corresponding PDF is given by

S(y) = 1 - e^{-μ(1-ρ)y}        y ≥ 0        M/M/1        (5.119)

Similarly, from Eq. (5.105) we may obtain W*(s) as

W*(s) = \frac{s(1 - ρ)}{s - λ + λμ/(s + μ)} = \frac{(s + μ)(1 - ρ)}{s + (μ - λ)}        (5.120)

Before we can invert this we must place the right-hand side in proper form, namely, where the numerator polynomial is of lower degree than the denominator. We do this by dividing out the constant term and obtain

W*(s) = (1 - ρ) + \frac{λ(1 - ρ)}{s + μ(1 - ρ)}        (5.121)


I" (y )

Figure 5.9 Waiting-time distribution for MIMI !.


This expression gives the Laplace transform for the pdf of waiting time, which we denote, as usual, by w(y) ≜ dW(y)/dy. From entry 2 in Table I.4 of Appendix I, we recognize that the inverse transform of (1 − ρ) must be an impulse at the origin; thus by inspection we have

w(y) = (1 - ρ) u_0(y) + λ(1 - ρ) e^{-μ(1-ρ)y}        y ≥ 0        M/M/1        (5.122)

From this we find the PDF of waiting time simply as

W(y) = 1 - ρ e^{-μ(1-ρ)y}        y ≥ 0        M/M/1        (5.123)

This distribution is shown in Figure 5.9.

Observe that the probability of not queueing is merely 1 − ρ; compare this to Eq. (5.89) for the probability that q̃ = 0. Clearly, they are the same; both represent the probability of not queueing. This also was found in Eq. (5.40). Recall further that the mean normalized queueing time was given in Eq. (5.73); we obtain the same answer, of course, if we calculate this mean value from (5.123). It is interesting to note for M/M/1 that all of the interesting distributions are memoryless: this applies not only to the given interarrival-time and service-time processes, but also to the distribution of the number in the system given by Eq. (5.89), the pdf of time in the system given by Eq. (5.119), and the pdf of waiting time* given by Eq. (5.122).

It turns out that it is possible to find the density given in Eq. (5.118) by a more direct calculation, and we display this method here to indicate its simplicity. Our point of departure is our early result given in Eq. (3.23) for the probability of finding k customers in system upon arrival, namely,

p_k = (1 - ρ)ρ^k        (5.124)

* A simple exponential form for the tail of the waiting-time distribution (that is, the probabilities associated with long waits) can be derived for the system M/G/1. We postpone a discussion of this asymptotic result until Chapter 2, Volume II, in which we establish this result for the more general system G/G/1.


We repeat again that this is the same expression we found in Eq. (5.89), and we know by now that this result applies for all points in time. We wish to form the Laplace transform of the pdf of total time in the system by considering this Laplace transform conditioned on the number of customers found in the system upon arrival of a new customer. We begin as generally as possible and first consider the system M/G/1. In particular, we define the conditional distribution

S(y | k) ≜ P[customer's total time in system ≤ y | he finds k in system upon his arrival]

We now define the Laplace transform of this conditional density

S*(s | k) ≜ \int_0^∞ e^{-sy} dS(y | k)        (5.125)

Now it is clear that if a customer finds no one in system upon his arrival, then he must spend an amount of time in the system exactly equal to his own service time, and so we have

S*(s | 0) = B*(s)

On the other hand, if our arriving customer finds exactly one customer ahead of him, then he remains in the system for a time equal to the time to finish the man in service, plus his own service time; since these two intervals are independent, then the Laplace transform of the density of this sum must be the product of the Laplace transforms of each density, giving

S*(s | 1) = B̂*(s) B*(s)

where B̂*(s) is, again, the transform for the pdf for residual service time. Similarly, if our arriving customer finds k in front of him, then his total system time is the sum of the k service times associated with each of these customers plus his own service time. These k + 1 random variables are all independent, and k of them are drawn from the same distribution B(x). Thus we have the k-fold product of B*(s) with B̂*(s), giving

S*(s | k) = [B*(s)]^k B̂*(s)        (5.126)

Equation (5.126) holds for M/G/1. Now for our M/M/1 problem, we have that B*(s) = μ/(s + μ) and, similarly, for B̂*(s) (memoryless); thus we have

S*(s | k) = ( \frac{μ}{s + μ} )^{k+1}        (5.127)


In order to obtain S*(s) we need merely weight the transform S*(s | k) with the probability p_k of our customer finding k in the system upon his arrival, namely,

S*(s) = \sum_{k=0}^{∞} S*(s | k) p_k

Substituting Eqs. (5.127) and (5.124) into this last we have

S*(s) = \sum_{k=0}^{∞} ( \frac{μ}{s + μ} )^{k+1} (1 - ρ)ρ^k = \frac{μ(1 - ρ)}{s + μ(1 - ρ)}        (5.128)

We recognize that Eq. (5.128) is identical to Eq. (5.117), and so the remaining steps leading to Eq. (5.118) follow immediately. This demonstration of a simpler method for calculating the distribution of system time in the M/M/1 queue exposes the following important fact: In the development of Eq. (5.128) we were required to consider a sum of random variables, each distributed by the same exponential distribution; the number of terms in that sum was itself a random variable distributed geometrically. What we found was that this geometric weighting on a sum of identically distributed exponential random variables was itself exponential [see Eq. (5.118)]. This result is true in general, namely, that a geometric sum of exponential random variables is itself exponentially distributed.
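This last fact invites a quick Monte Carlo confirmation (ours, not the book's): draw a geometrically distributed count with parameter ρ, add that many independent exponential service times plus one more (the k customers found plus the arrival itself), and verify that the total is exponential with parameter μ(1 − ρ) by checking its mean and one tail probability. Parameter values and names are arbitrary.

```python
import random, math

random.seed(1)
lam, mu = 0.5, 1.0
rho = lam / mu

def geometric_sum():
    n = 0
    while random.random() < rho:          # P[N = k] = (1 - rho) * rho**k
        n += 1
    return sum(random.expovariate(mu) for _ in range(n + 1))

samples = [geometric_sum() for _ in range(200_000)]
mean = sum(samples) / len(samples)
print(mean, 1 / (mu * (1 - rho)))         # both ~2.0 here
tail = sum(x > 4.0 for x in samples) / len(samples)
print(tail, math.exp(-mu * (1 - rho) * 4.0))   # both ~e^-2 = 0.135
```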
Let us now carry out the calculations for our M/H₂/1 example. Using the expression for B*(s) given in Eq. (5.90), and applying this to the P-K transform equation for the waiting-time density, we have

W*(s) = \frac{4s(1 - ρ)(s + λ)(s + 2λ)}{4(s - λ)(s + λ)(s + 2λ) + 8λ³ + 7λ²s}

This simplifies, upon factoring the denominator, to give

W*(s) = \frac{(1 - ρ)(s + λ)(s + 2λ)}{[s + (3/2)λ][s + (1/2)λ]}

Once again, we must divide numerator by denominator to reduce the degree of the numerator by one, giving

W*(s) = (1 - ρ) + \frac{λ(1 - ρ)[s + (5/4)λ]}{[s + (3/2)λ][s + (1/2)λ]}

We may now carry out our partial-fraction expansion:

W*(s) = (1 - ρ) [ 1 + \frac{λ/4}{s + (3/2)λ} + \frac{3λ/4}{s + (1/2)λ} ]


This we may now invert by inspection to obtain the pdf for waiting time (recalling that ρ = 5/8):

w(y) = \frac{3}{8} u_0(y) + \frac{3λ}{32} e^{-(3/2)λy} + \frac{9λ}{32} e^{-(1/2)λy}        y ≥ 0        (5.129)

This completes our discussion of the waiting-time and system-time distributions for M/G/1. We now introduce the busy period, an important stochastic process in queueing systems.

5.8. THE BUSY PERIOD AND ITS DURATION


We now choose to study queueing systems from a different point of view. We make the observation that the system passes through alternating cycles of busy period, idle period, busy period, idle period, and so on. Our purpose in this section is to derive the distribution for the length of the idle period and the length of the busy period for the M/G/1 queue.

As we already understand, the pertinent sequences of random variables that drive a queueing system are the instants of arrival and the sequence of service times. As usual let

C_n = the nth customer
τ_n = arrival time of C_n
t_n = τ_n − τ_{n−1} = interarrival time between C_{n−1} and C_n
x_n = service time for C_n

We now recall the important stochastic process U(t) as defined in Eq. (2.3):

U(t) ≜ the unfinished work in the system at time t
     ≜ the remaining time required to empty the system of all customers present at time t

This function U(t) is appropriately referred to as the unfinished work at time t since it represents the interval of time that is required to empty the system completely if no new customers are allowed to enter after the instant t. This function is sometimes referred to as the "virtual" waiting time at time t since, for a first-come-first-served system, it represents how long a (virtual) customer would wait in queue if he entered at time t; however, this waiting-time interpretation is good only for first-come-first-served disciplines, whereas the unfinished-work interpretation applies for all disciplines. The behavior of this function is extremely important in understanding queueing systems when one studies them from the point of view of the busy period.

Let us refer to Figure 5.10a, which shows the fashion in which busy periods alternate with idle periods. The busy-period durations are denoted by Y₁, Y₂, Y₃, ..., and the idle-period durations by I₁, I₂, ....


Figure 5.10 (a) The unfinished work, the busy period, and (b) the customer history. [Panel (a) shows U(t) through alternating busy and idle periods of durations Y₁, I₁, Y₂, I₂, Y₃, ...; panel (b) shows the corresponding server and queue activity for customers C₁, C₂, C₃, ....]

Customer C₁ enters the system at time τ₁ and brings with him an amount of work (that is, a required service time) of size x₁. This customer finds the system idle and therefore his arrival terminates the previous idle period and initiates a new busy period. Prior to his arrival we assumed the system to be empty and therefore the unfinished work was clearly zero. At the instant of the arrival of C₁ the system backlog or unfinished work jumps to the size x₁, since it would take this long to empty the system if we allowed no further entries beyond this instant. As time progresses from τ₁ and the server works on C₁, this unfinished work reduces at the rate of 1 sec/sec, and so U(t) decreases with slope equal to −1. t₂ sec later, at time τ₂, we observe that C₂ enters the system and forces the unfinished work U(t) to make another vertical jump, of magnitude x₂, equal to the service time for C₂. The function then decreases again at a rate of 1 sec/sec until customer C₃ enters at time τ₃, forcing a vertical jump again, of size x₃. U(t) continues to decrease as the server works on the customers in the system until it reaches the instant τ₁ + Y₁, at which time he has successfully emptied the system of all customers and of all work. This then terminates the busy period and initiates a new idle period. The idle period is terminated at time τ₄ when C₄ enters. This second busy period serves only one customer before the system goes idle again. The third busy period serves two customers. And so it continues. For reference we show in Figure 5.10b our usual double-time-axis representation for the same sequence of customer arrivals and service times, drawn to the same scale as Figure 5.10a and under an assumed first-come-first-served discipline. Thus we can say that U(t) is a function which has vertical jumps at the customer-arrival instants (these jumps equaling the service times for those customers) and decreases at a rate of 1 sec/sec so long as it is positive; when it reaches a value of zero, it remains there until the next customer arrival. This stochastic process is a continuous-state Markov process subject to discontinuous jumps; we have not seen such as this before.

Observe from Figure 5.10a that the departure instants may be obtained by extrapolating the linearly decreasing portion of U(t) down to the horizontal axis; at these intercepts, a customer departure occurs and a new customer service begins. Again we emphasize that this last observation is good only for the first-come-first-served system. What is important, however, is to observe that the function U(t) itself is independent of the order of service! The only requirement for this last statement to hold is that the server remain busy as long as some customer is in the system and that no customers depart before they are completely served; such a system is said to be "work conserving" (see Chapter 3, Volume II). The truth of this independence is evident when one considers the definition of U(t).
Now for the idle-period and busy-period distributions. Recall

A(t) = P[t_n ≤ t] = 1 - e^{-λt}        t ≥ 0        (5.130)

B(x) = P[x_n ≤ x]

where A(t) and B(x) are each independent of n. Our interest lies in the two following distributions:

F(y) ≜ P[I ≤ y] ≜ idle-period distribution        (5.131)

G(y) ≜ P[Y ≤ y] ≜ busy-period distribution        (5.132)

The calculation of the idle-period distribution is trivial for the system M/G/1. Observe that when the system terminates a busy period, a new idle period must begin, and this idle period will terminate immediately upon the arrival of the next customer. Since we have a memoryless distribution, the time until the next customer arrival is distributed according to Eq. (5.130), and therefore we have

F(y) = 1 - e^{-λy}        y ≥ 0        (5.133)

So much for the idle-time distribution in M/G/1.


Figure 5.11 The busy period: last-come-first-served. [Panel (a) shows the decomposition of the busy period Y generated by C₁ into the service time for C₁ followed by the sub-busy periods generated by C₄, C₃, and C₂ (of durations X₄, X₃, X₂); panel (b) shows the number in the system; panel (c) shows the customer history.]

Now for the busy-period distribution; this is not quite so simple. The reader is referred to Figure 5.11. In part (a) of this figure we once again observe the unfinished work U(t). We assume that the system is empty just prior to the instant τ₁, at which time customer C₁ initiates a busy period of duration Y. His service time is equal to x₁. It is clear that this customer will depart from the system at time τ₁ + x₁. During his service other customers

may arrive to the system, and it is they who will continue the busy period. For the function shown, three other customers (C₂, C₃, and C₄) arrive during the interval of C₁'s service. We now make use of a brilliant device due to Takács [TAKA 62a]. In particular, we choose to permute the order in which customers are served so as to create a last-come-first-served (LCFS) queueing discipline* (recall that the duration of a busy period is independent of the order in which customers are served). The motivation for the reordering of customers will soon be apparent. At the departure of C₁ we then take into service the newest customer, which in our example is C₄. In addition, since all future arrivals during this busy period must be served before (LCFS!) any customers (besides C₄) who arrived during C₁'s service (in this case C₂ and C₃), then we may as well consider them to be (temporarily) out of the system. Thus, when C₄ enters service, it is as if he initiated a new busy period, which we will refer to as a "sub-busy period"; the sub-busy period generated by C₄ will have a duration X₄ exactly as long as it takes to service C₄ and all those who enter into the system to find it busy (remember that C₂ and C₃ are not considered to be in the system at this time). Thus in Figure 5.11a we show the sub-busy period generated by C₄, during which customers C₄, C₆, and C₅ get serviced in that order. At time τ₁ + x₁ + X₄ this sub-busy period ends and we now continue the last-come-first-served order of service by bringing C₃ back into the system. It is clear that he may be considered as generating his own sub-busy period, of duration X₃, during which all of his "descendants" receive service in the last-come-first-served order (namely, C₃, C₇, C₈, and C₉). Finally, then, the system empties again, we reintroduce C₂, and permit his sub-busy period (of length X₂) to run its course (and complete the major busy period), in which customers get serviced in the order C₂, C₁₀, and finally C₁₁.

* This is a "push-down" stack. This is only one of many permutations that "work"; it happens that LCFS is convenient for pedagogical purposes.

Figure 5.11a shows that the contour of any sub-busy period is identical with the contour of the main busy period over the same time interval and is merely shifted down by a constant amount; this shift, in fact, is equal to the summed service time of all those customers who arrived during C₁'s service time and who have not yet been allowed to generate their own sub-busy periods. The details of customer history are shown in Figure 5.11c, and the total number in the system at any time under this discipline is shown in Figure 5.11b. Thus, as far as the queueing system is concerned, it is strictly a last-come-first-served system from start to finish. However, our analysis is simplified if we focus upon the sub-busy periods and observe that each behaves statistically in a fashion identical to the major busy period generated by C₁. This is clear since all the sub-busy periods as well as the major busy period


are each initiated by a single customer whose service times are all drawn independently from the same distribution; each sub-busy period continues until the system catches up to the work load, in the sense that the unfinished-work function U(t) drops to zero. Thus we recognize that the random variables {X_k} are each independent and identically distributed and have the same distribution as Y, the duration of the major busy period.

In Figure 5.11c the reader may follow the customer history in detail; the solid black region in this figure identifies the customer being served during that time interval. At each customer departure the server "floats up" to the top of the customer contour to engage the most recent arrival at that time; occasionally the server "floats down" to the customer directly below him, such as at the departure of C₆. The server may truly be thought of as floating up to the highest customer, there to be held by him until his departure, and so on. Occasionally, however, we see that our server "falls down" through a gap in order to pick up the most recent arrival to the system, for example, at the departure of C₅. It is at such instants that new sub-busy periods begin, and only when the server falls down to hit the horizontal axis does the major busy period terminate.

Our point of view is now clear: the duration of a busy period Y is the sum of 1 + ṽ random variables, the first of which is the service time for C₁ and the remainder of which are each random variables describing the durations of the sub-busy periods, each of which is distributed as a busy period itself. ṽ is a random variable equal to the number of customer arrivals during C₁'s service interval. Thus we have the important relation

Y = x₁ + X_{ṽ+1} + X_{ṽ} + ··· + X₃ + X₂        (5.134)

We define the busy-period distribution as G(y):

G(y) = P[Y ≤ y]        (5.135)

We also know that x₁ is distributed according to B(x) and that X_k is distributed as G(y) from our earlier comments. We next derive the Laplace transform for the pdf associated with Y, which we define, as usual, by

G*(s) ≜ \int_0^∞ e^{-sy} dG(y)        (5.136)

Once again we remind the reader that these transforms may also be expressed as expectation operators, namely:

G*(s) = E[e^{-sY}]

Let us now take advantage of the powerful technique of conditioning used so often in probability theory; this technique permits one to write down the probability associated with a complex event by conditioning that event on

enough given conditions, so that the conditional probability may be written down by inspection. The unconditional probability is then obtained by multiplying by the probability of each condition and summing over all mutually exclusive and exhaustive conditions. In our case we choose to condition Y on two events: the duration of C₁'s service and the number of customer arrivals during his service. With this point of view we then calculate the following conditional transform:

E[e^{-sY} | x₁ = x, ṽ = k] = E[e^{-s(x + X_{k+1} + ··· + X₂)}] = E[e^{-sx} e^{-sX_{k+1}} ··· e^{-sX₂}]

Since the sub-busy periods have durations that are independent of each other, we may write this last as

E[e^{-sY} | x₁ = x, ṽ = k] = E[e^{-sx}] E[e^{-sX_{k+1}}] ··· E[e^{-sX₂}]

Since x is a given constant we have E[e^{-sx}] = e^{-sx}, and further, since the sub-busy periods are identically distributed with corresponding transforms G*(s), we have

E[e^{-sY} | x₁ = x, ṽ = k] = e^{-sx} [G*(s)]^k

Since ṽ represents the number of arrivals during an interval of length x, then ṽ must have a Poisson distribution whose mean is λx. We may therefore remove the condition on ṽ as follows:

E[e^{-sY} | x₁ = x] = \sum_{k=0}^{∞} E[e^{-sY} | x₁ = x, ṽ = k] P[ṽ = k]
                    = \sum_{k=0}^{∞} e^{-sx} [G*(s)]^k \frac{(λx)^k}{k!} e^{-λx}
                    = e^{-x[s + λ - λG*(s)]}

Similarly, we may remove the condition on x₁ by integrating with respect to B(x), finally to obtain G*(s) thusly:

G*(s) = \int_0^∞ e^{-x[s + λ - λG*(s)]} dB(x)

This last we recognize as the transform of the pdf for service time evaluated
at a value equal to the bracketed te rm in the exponent, that is,

G*(s)

B*[s

+ ;. -

AG*(S)}

- (5.137)

This maj or result gives the transform for the M IGII busy-peri od distributi on
(for an y o rder of service) expressed as a functi onal equati on (which is usually
impossible to invert). It was ob tained by identifying sub-busy periods with in
the busy period all of which. had the same distributio n as the busy peri od
itself.


Later in this chapter we give an explicit expression for the busy-period PDF G(y), but unfortunately it is not in closed form [see Eq. (5.169)]. We point out, however, that it is possible to solve Eq. (5.137) numerically for G*(s) at any given value of s through the following iterative equation:

G*_{n+1}(s) = B*[s + λ - λG*_n(s)]        (5.138)

in which we choose 0 ≤ G₀*(s) ≤ 1; for ρ = λx̄ < 1 the limit of this iterative scheme will converge to G*(s), and so one may attempt a numerical inversion of these calculated values if so desired.
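As a minimal sketch of this iteration (ours, not the book's; function names and tolerances are our own), the code below solves Eq. (5.138) by direct fixed-point iteration for M/M/1, where the answer can be compared against the closed form that will be obtained in Eq. (5.144) below.

```python
lam, mu = 0.5, 1.0

def B_star(s):
    return mu / (s + mu)          # M/M/1 service-time transform

def G_star(s, tol=1e-12):
    g = 1.0                       # any starting point in [0, 1] will do
    while True:
        g_next = B_star(s + lam - lam * g)   # iteration (5.138)
        if abs(g_next - g) < tol:
            return g_next
        g = g_next

s = 0.3
closed = (mu + lam + s - ((mu + lam + s) ** 2 - 4 * mu * lam) ** 0.5) / (2 * lam)
print(G_star(s), closed)          # agree to ~1e-12
```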
In view of these inversion difficulties we obtain what we can from our functional equation, and one calculation we can make is for the moments of the busy period. We define

g_k ≜ E[Y^k]        (5.139)

as the kth moment of the busy-period distribution, and we intend to express the first few moments in terms of the moments of the service-time distribution, namely, \overline{x^k}. As usual we have

g_k = (-1)^k G*^{(k)}(0)        (5.140)

\overline{x^k} = (-1)^k B*^{(k)}(0)

From Eq. (5.137) we then obtain directly

g₁ = -G*^{(1)}(0) = -B*^{(1)}(0) \frac{d}{ds}[s + λ - λG*(s)] |_{s=0} = -B*^{(1)}(0)[1 - λG*^{(1)}(0)]

[note, for s = 0, that s + λ − λG*(s) = 0] and so

g₁ = x̄(1 + λg₁)

Solving for g₁ and recalling that ρ = λx̄, we then have

g₁ = \frac{x̄}{1 - ρ}        (5.141)

If we compare this last result with Eq. (3.26), we find that the average length of a busy period for the system M/G/1 is equal to the average time a customer spends in an M/M/1 system and depends only on λ and x̄.

Let us now chase down the second moment of the busy period. Proceeding from Eqs. (5.140) and (5.137) we obtain

g₂ = G*^{(2)}(0) = \frac{d}{ds} [ B*^{(1)}(s + λ - λG*(s))[1 - λG*^{(1)}(s)] ] |_{s=0}
   = B*^{(2)}(0)[1 - λG*^{(1)}(0)]² + B*^{(1)}(0)[-λG*^{(2)}(0)]


and so

g₂ = \overline{x^2}(1 + λg₁)² + x̄λg₂

Solving for g₂ and using our result for g₁, we have

g₂ = \frac{\overline{x^2}(1 + λg₁)²}{1 - λx̄} = \frac{\overline{x^2}[1 + λx̄/(1 - ρ)]²}{1 - ρ}

and so finally

g₂ = \frac{\overline{x^2}}{(1 - ρ)³}        (5.142)

This last result gives the second moment of the busy period, and it is interesting to note the cube in the denominator; this effect does not occur when one calculates the second moment of the wait in the system, where only a square power appears [see Eq. (5.114)]. We may now easily calculate the variance of the busy period, denoted by σ_Y², as follows:

σ_Y² = g₂ - g₁² = \frac{\overline{x^2}}{(1 - ρ)³} - \frac{(x̄)²}{(1 - ρ)²}

and so

σ_Y² = \frac{σ_b² + ρ(x̄)²}{(1 - ρ)³}        (5.143)

where σ_b² is the variance of the service-time distribution.


Proceeding as above we find that

g₃ = \frac{\overline{x^3}}{(1 - ρ)⁴} + \frac{3λ(\overline{x^2})²}{(1 - ρ)⁵}

g₄ = \frac{\overline{x^4}}{(1 - ρ)⁵} + \frac{10λ\overline{x^2}\,\overline{x^3}}{(1 - ρ)⁶} + \frac{15λ²(\overline{x^2})³}{(1 - ρ)⁷}

We observe that the factor (1 − ρ) goes up in powers of 2 for the dominant term of each succeeding moment of the busy period, and this determines the behavior as ρ → 1.
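The sub-busy-period argument also gives a direct way to simulate Y: a busy period is the initiator's service time plus one independent, identically distributed sub-busy period for each Poisson arrival occurring during that service. The sketch below (ours, not the book's; all names are our own) exploits this branching view for M/M/1 and checks the sample mean and variance against Eqs. (5.141) and (5.143).

```python
import random

random.seed(7)
lam, mu = 0.5, 1.0
rho, xbar = lam / mu, 1 / mu

def poisson(mean):
    # Count unit-rate exponential gaps falling in [0, mean): Poisson(mean).
    n, t = 0, random.expovariate(1.0)
    while t < mean:
        n += 1
        t += random.expovariate(1.0)
    return n

def busy_period():
    y = random.expovariate(mu)         # service time of the initiator
    for _ in range(poisson(lam * y)):  # arrivals during that service
        y += busy_period()             # each spawns a sub-busy period
    return y

ys = [busy_period() for _ in range(100_000)]
m = sum(ys) / len(ys)
v = sum((y - m) ** 2 for y in ys) / len(ys)
sigma_b2 = 1 / mu ** 2                 # service-time variance for M/M/1
print(m, xbar / (1 - rho))             # Eq. (5.141): both ~2.0
print(v, (sigma_b2 + rho * xbar ** 2) / (1 - rho) ** 3)   # Eq. (5.143): ~12.0
```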
We now consider some examples of inverting Eq. (5.137). We begin with the M/M/1 queueing system. We have

B*(s) = \frac{μ}{s + μ}


which we apply to Eq. (5.137) to obtain

G*(s) = \frac{μ}{s + λ - λG*(s) + μ}

or

λ[G*(s)]² - (μ + λ + s)G*(s) + μ = 0

Solving for G*(s) and restricting our solution to the required (stable) case, for which |G*(s)| ≤ 1 for Re(s) ≥ 0, gives

G*(s) = \frac{μ + λ + s - [(μ + λ + s)² - 4μλ]^{1/2}}{2λ}        (5.144)

This equation may be inverted (by referring to transform tables) to obtain the pdf for the busy period, namely,

g(y) ≜ \frac{dG(y)}{dy} = \frac{1}{y ρ^{1/2}} e^{-(λ+μ)y} I₁[2y(λμ)^{1/2}]        (5.145)

where I₁ is the modified Bessel function of the first kind of order one.
Consider the limit

\lim_{s→0+} G*(s) = \lim_{s→0+} \int_0^∞ e^{-sy} dG(y)        (5.146)

Examining the right side of this equation we observe that this limit is merely the probability that the busy period is finite, which is equivalent to the probability of the busy period ending. Clearly, for ρ < 1 the busy period ends with probability one, but Eq. (5.146) provides information in the case ρ > 1. We have

P[busy period ends] = G*(0)

Let us examine this computation in the case of the system M/M/1. We have directly from Eq. (5.144)

G*(0) = \frac{μ + λ - [(μ + λ)² - 4μλ]^{1/2}}{2λ} = \frac{μ + λ - |μ - λ|}{2λ}

Thus

P[busy period ends in M/M/1] = 1 for ρ < 1, and = 1/ρ for ρ > 1        (5.147)


The busy-period pdf given in Eq. (5.145) is much more complex than we would have wished for this simplest of interesting queueing systems! It is indicative of the fact that Eq. (5.137) is usually uninvertible for more general service-time distributions.

As a second example, let's see how well we can do with our M/H₂/1 example. Using the expression for B*(s) in our functional equation for the busy period, we get

G*(s) = \frac{8λ² + 7λ[s + λ - λG*(s)]}{4[s + λ - λG*(s) + λ][s + λ - λG*(s) + 2λ]}

which leads directly to the cubic equation

4λ²[G*(s)]³ - 4λ(2s + 5λ)[G*(s)]² + (4s² + 20λs + 31λ²)G*(s) - (15λ² + 7λs) = 0

This last is not easily solved, and so we stall at this point in our attempt to invert G*(s). We will return to the functional equation for the busy period when we discuss priority queueing in Chapter 3, Volume II. This will lead us to the concept of a delay cycle, which is a slight generalization of the busy-period analysis we have just carried out and greatly simplifies priority queueing calculations.
5.9. THE NUMBER SERVED IN A BUSY PERIOD

In this section we discuss the distribution of the number of customers served in a busy period. The development parallels that of the previous section very closely, both in the spirit of the derivation and in the nature of the result we will obtain.

Let N_bp be the number of customers served in a busy period. We are interested in its probability distribution f_n defined as

f_n = P[N_bp = n]        (5.148)

The best we can do is to obtain a functional equation for its z-transform, defined as

F(z) ≜ \sum_{n=1}^{∞} f_n z^n        (5.149)

The term for n = 0 is omitted from this definition since at least one customer must be served in a busy period. We recall that the random variable ṽ represents the number of arrivals during a service period, and its z-transform V(z) obeys the equation derived earlier, namely,

V(z) = B*(λ - λz)        (5.150)

Proceeding as we did for the duration of the busy period, we condition our argument on the fact that ṽ = k; that is, we assume that k customers arrive

during the service of C₁. Moreover, we recognize immediately that each of these arrivals will generate a sub-busy period, and the number of customers served in each of these sub-busy periods will have a distribution given by f_n. Let the random variable M_i denote the number of customers served in the ith sub-busy period. We may then write down immediately

E[z^{N_bp} | ṽ = k] = E[z^{1 + M₁ + M₂ + ··· + M_k}]

and since the M_i are independent and identically distributed we have

E[z^{N_bp} | ṽ = k] = z \prod_{i=1}^{k} E[z^{M_i}]

But each of the M_i is distributed exactly the same as N_bp and, therefore,

E[z^{N_bp} | ṽ = k] = z[F(z)]^k

Removing the condition on the number of arrivals we have

F(z) = \sum_{k=0}^{∞} E[z^{N_bp} | ṽ = k] P[ṽ = k] = z \sum_{k=0}^{∞} P[ṽ = k][F(z)]^k

From Eq. (5.44) we recognize this last summation as V(z) (the z-transform associated with ṽ) with transform variable F(z); thus we have

F(z) = z V[F(z)]        (5.151)

But from Eq. (5.150) we may finally write

F(z) = z B*[λ - λF(z)]        (5.152)

This functional equation for the z-transform of the number served in a busy period is not unlike the equation given earlier in Eq. (5.137).

From this fundamental equation we may easily pick off the moments for the number served in a busy period. We define the kth moment of the number served in a busy period as h_k. We recognize then h₁ = F^{(1)}(1); differentiating Eq. (5.152) and setting z = 1 (recall F(1) = 1), we find

F^{(1)}(1) = B*(0) + B*^{(1)}(0)[-λF^{(1)}(1)]

Thus

h₁ = 1 + ρh₁

which immediately gives us

h₁ = \frac{1}{1 - ρ}        (5.153)


We further recognize

F^{(2)}(1) = h₂ - h₁

Carrying out this computation in the usual way, we obtain the second moment and the variance of the number served in the busy period:

h₂ = \frac{2ρ(1 - ρ) + λ²\overline{x^2}}{(1 - ρ)³} + \frac{1}{1 - ρ}        (5.154)

σ²_{N_bp} = \frac{ρ(1 - ρ) + λ²\overline{x^2}}{(1 - ρ)³}        (5.155)

As an example we again use the simple case of the M/M/1 system to solve for F(z) from Eq. (5.152). Carrying this out we find

F(z) = z \frac{μ}{μ + λ - λF(z)}

or

λF²(z) - (μ + λ)F(z) + μz = 0

Solving,

F(z) = \frac{1 + ρ}{2ρ} [ 1 - ( 1 - \frac{4ρz}{(1 + ρ)²} )^{1/2} ]        (5.156)

Fortunately, it turns out that Eq. (5.156) can be inverted to obtain f_n, the probability of having n served in the busy period:

f_n = \frac{1}{n} \binom{2n-2}{n-1} \frac{ρ^{n-1}}{(1 + ρ)^{2n-1}}        (5.157)
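A coefficient-level check of Eq. (5.157) (ours, not the book's): matching powers of z in the quadratic λF²(z) − (μ + λ)F(z) + μz = 0 gives a simple convolution recurrence for the f_n, which the sketch below compares term by term against the closed form; the truncated sums should approach 1 and h₁ = 1/(1 − ρ) as more terms are kept. Names and parameters are our own.

```python
from math import comb

lam, mu = 0.5, 1.0
rho = lam / mu

# Recurrence from the quadratic: f_1 = mu/(mu+lam),
# f_n = (lam/(mu+lam)) * sum_{j=1}^{n-1} f_j * f_{n-j}.
N = 12
f = [0.0] * (N + 1)
f[1] = mu / (mu + lam)
for n in range(2, N + 1):
    f[n] = lam / (mu + lam) * sum(f[j] * f[n - j] for j in range(1, n))

for n in range(1, N + 1):
    closed = comb(2*n - 2, n - 1) * rho**(n - 1) / (n * (1 + rho)**(2*n - 1))
    assert abs(f[n] - closed) < 1e-12     # matches Eq. (5.157)

print(sum(f[1:]))                          # approaches 1
print(sum(n * f[n] for n in range(1, N + 1)), 1 / (1 - rho))   # ~h_1 = 2
```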
As a second example we consider the system M/D/1. For this system we have b(x) = u₀(x − x̄), and from entry three in Table I.4 we have immediately that

B*(s) = e^{-s x̄}

Using this in our functional equation we obtain

F(z) = z e^{-ρ} e^{ρF(z)}        (5.158)

where as usual ρ = λx̄. It is convenient to make the substitution u = zρe^{-ρ} and H(u) = ρF(z), which then permits us to rewrite Eq. (5.158) as

u = H(u) e^{-H(u)}

The solution to this equation may be obtained [RIOR 62], and then our original function may be evaluated to give

F(z) = \sum_{n=1}^{∞} \frac{(nρ)^{n-1}}{n!} e^{-nρ} z^n


From this power series we recognize immediately that the distribution for the number served in the M/D/1 busy period is given explicitly by

f_n = \frac{(nρ)^{n-1}}{n!} e^{-nρ}        (5.159)

For the case of a constant service time we know that if the busy period serves n customers, then it must be of duration nx̄, and therefore we may immediately write down the solution for the M/D/1 busy-period distribution as

G(y) = \sum_{n=1}^{⌊y/x̄⌋} \frac{(nρ)^{n-1}}{n!} e^{-nρ}        (5.160)

where ⌊y/x̄⌋ is the largest integer not exceeding y/x̄.
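Equations (5.159) and (5.160) are immediate to evaluate numerically; the fragment below (ours, not the book's) verifies that the f_n sum to 1 for ρ < 1 and computes G(y) by the finite sum of Eq. (5.160). Parameters are arbitrary.

```python
import math

lam, xbar = 0.5, 1.0
rho = lam * xbar

def f(n):
    """Eq. (5.159): number served in an M/D/1 busy period."""
    return (n * rho) ** (n - 1) / math.factorial(n) * math.exp(-n * rho)

print(sum(f(n) for n in range(1, 200)))   # ~1.0 for rho < 1

def G(y):
    """Eq. (5.160): busy period no longer than y lasts floor(y/xbar) services."""
    return sum(f(n) for n in range(1, int(y // xbar) + 1))

print(G(3.5))   # P[Y <= 3.5] counts busy periods of 1, 2, or 3 customers
```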


5.10. FROM BUSY PERIODS TO WAITING TIMES

We had mentioned in the opening paragraphs of this chapter that waiting times could be obtained from the busy-period analysis. We are now in a position to fulfill that claim. As the reader may be aware (and as we shall show in Chapter 3, Volume II), whereas the distribution of the busy-period duration is independent of the queueing discipline, the distribution of waiting time is strongly dependent upon order of service. Therefore, in this section we consider only first-come-first-served M/G/1 systems. Since we restrict ourselves to this discipline, the reordering of customers used in Section 5.8 is no longer permitted. Instead, we must now decompose the busy period into a sequence of intervals whose lengths are dependent random variables, as follows. Consider Figure 5.12, in which we show a single busy period for the first-come-first-served system [in terms of the unfinished work U(t)]. Here we see that customer C₁ initiates the busy period upon his arrival at time τ₁. The first interval we consider is his service time x₁, which we denote by X₀; during this interval more customers arrive (in this case C₂ and C₃). All those customers who arrive during X₀ are served during the next interval, whose duration is X₁ and which equals the sum of the service times of all arrivals during X₀ (in this case C₂ and C₃). At the expiration of X₁, we then create a new interval of duration X₂ in which all customers arriving during X₁ are served, and so on. Thus X_i is the length of time required to service all those customers who arrive during the previous interval, whose duration is X_{i-1}. If we let n_i denote the number of customer arrivals during the interval X_i, then n_i customers are served during the interval X_{i+1}. We let n₀ equal the number of customers who arrive during X₀ (the first customer's service time).


Figure 5.12 The busy period: first-come-first-served. [U(t) over a single busy period initiated by C₁, marked off into the successive intervals X₀, X₁, X₂, ....]
Thus we see that Y, the duration of the total busy period, is given by

Y = \sum_{i=0}^{∞} X_i

where we permit the possibility of an infinite sequence of such intervals. Clearly, we define X_i = 0 for those intervals that fall beyond the termination of this busy period; for ρ < 1 we know that with probability 1 there will be a finite i₀ for which X_{i₀} (and all its successors) will be 0. Furthermore, we know that X_{i+1} will be the sum of n_i service intervals [each of which is distributed as B(x)].

We now define X_i(y) to be the PDF for X_i, that is,

X_i(y) = P[X_i ≤ y]

and the corresponding Laplace transform of the associated pdf to be

X_i*(s) ≜ \int_0^∞ e^{-sy} dX_i(y) = E[e^{-sX_i}]


We wish to derive a recurrence relation among the X_i*(s). This derivation is much like that in Section 5.8, which led up to Eq. (5.137). That is, we first condition our transform sufficiently so that we may write it down by inspection; the conditions are on the interval length X_{i-1} and on the number of arrivals n_{i-1} during that interval. That is, we may write

E[e^{-sX_i} | X_{i-1} = y, n_{i-1} = n] = [B*(s)]^n


This last follows from our convolution property, leading to the multiplication of transforms in the case when the variables are independent; here we have n independent service times, all with identical distributions. We may uncondition first on n:

E[e^{-sX_i} | X_{i-1} = y] = \sum_{n=0}^{∞} \frac{(λy)^n}{n!} e^{-λy} [B*(s)]^n

and next on y:

X_i*(s) = \int_0^∞ \sum_{n=0}^{∞} \frac{(λy)^n}{n!} e^{-λy} [B*(s)]^n dX_{i-1}(y)

Clearly, the left-hand side is X_i*(s); evaluating the sum on the right-hand side leads us to

X_i*(s) = \int_0^∞ e^{-[λ - λB*(s)]y} dX_{i-1}(y)

This integral is recognized as the transform of the pdf for X_{i-1}, namely,

X_i*(s) = X*_{i-1}[λ - λB*(s)]        (5.161)

This is the first step.


We now condition our calculations on the event that a new ("tagged") arrival occurs during the busy period and, in particular, while the busy period is in its ith interval (of duration X_i). From our observations in Section 4.1, we know that Poisson arrivals find the system in a given state with a probability equal to the equilibrium probability of the system being in that state. Now we know that if the system is in a busy period, then the fraction of time it spends in the interval of duration X_i is given by E[X_i]/E[Y] (this can be made rigorous by renewal theory arguments). Consider a customer who arrives during an interval of duration X_i. Let his waiting time in system be denoted by w̃; it is clear that this waiting time will equal the sum of the remaining time (residual life) of the ith interval plus the sum of the service times of all jobs who arrived before he did during the ith interval. We wish to calculate E[e^{-sw̃} | i], which is the transform of the waiting-time pdf for an arrival during the ith interval; again, we perform this calculation by conditioning on the three variables X_i, Y_i (defined to be the residual life of this ith interval), and N_i (defined to be the number of arrivals during the ith interval but prior to our customer's arrival, that is, in the interval X_i − Y_i). Thus, using our convolution property as before, we may write

E[e^{-sw̃} | i, X_i = y, Y_i = y', N_i = n] = e^{-sy'} [B*(s)]^n


Now since we assume that n customers have arrived during an interval of duration y − y', we uncondition on N_i as follows:

E[e^{-sw̃} | i, X_i = y, Y_i = y'] = e^{-sy'} \sum_{n=0}^{∞} \frac{[λ(y - y')]^n}{n!} e^{-λ(y - y')} [B*(s)]^n
                                  = e^{-sy' - λ(y - y') + λ(y - y')B*(s)}        (5.162)

We ha ve a lready observed that Y i is the residual life of the lifetime X i'


Equation (5.9) gives the joint density for the residual life Yand lifetime X;
in that equation Yand X play the roles of Y i and X; in our problem. Therefore, replacing/ ex) dx in Eq. (5.9) by dXi(y) a nd noting that y and y' ha ve
replaced x and y in that development , we see that the j oint density for X i
and Y i is given by dXJy) dy'/E[Xi] for 0 ::s; y' ::s; y ::s; 00 . By means of this
joint density we may remove the condition on Xi and Y i in Eq . (5.162) to
ob ta in

E[e- ' ;;; I i]

=
=

r'" r'

e-['-HW(, ).~-P-AB( ')l> dX / y) dy' /E[ X ;]

Jy=o J JI'= O

'"
1._0

[e- " - e-[ ,t-,tB'(,).]

[- s

+ A-

AB* (5)]E [X ;]

dX(y)

These last integrals we recognize as transforms, and so

    E[e^{-s\tilde{w}} \mid i] = \frac{X_i^*[\lambda - \lambda B^*(s)] - X_i^*(s)}{[s - \lambda + \lambda B^*(s)]\, E[X_i]}

But now Eq. (5.161) permits us to rewrite the first of these transforms, since X_i^*[\lambda - \lambda B^*(s)] = X_{i+1}^*(s), to obtain

    E[e^{-s\tilde{w}} \mid i] = \frac{X_{i+1}^*(s) - X_i^*(s)}{[s - \lambda + \lambda B^*(s)]\, E[X_i]}

Now we may remove the condition on our arrival entering during the ith interval by weighting this last expression by the probability that we have formerly expressed for the occurrence of this event (still conditioned on our arrival entering during a busy period), and so we have

    E[e^{-s\tilde{w}} \mid \text{enter in busy period}] = \sum_{i=0}^\infty E[e^{-s\tilde{w}} \mid i]\, \frac{E[X_i]}{E[Y]}
                                                        = \frac{1}{[s - \lambda + \lambda B^*(s)]\, E[Y]} \sum_{i=0}^\infty [X_{i+1}^*(s) - X_i^*(s)]


This last sum nicely collapses to yield 1 - X_0^*(s), since X_i^*(s) = 1 for those intervals beyond the busy period (recall X_i = 0 for i \ge i_0); also, since X_0 = x_1, a service time, we have X_0^*(s) = B^*(s), and so we arrive at

    E[e^{-s\tilde{w}} \mid \text{enter in busy period}] = \frac{1 - B^*(s)}{[s - \lambda + \lambda B^*(s)]\, E[Y]}

From previous considerations we know that the probability of an arrival entering during a busy period is merely \rho = \lambda \bar{x} (and for sure he must wait for service in such a case); further, we may evaluate the average length of the busy period E[Y] either from our previous calculation in Eq. (5.141) or from elementary considerations* to give E[Y] = \bar{x}/(1 - \rho). Thus, unconditioning on an arrival finding the system busy, we finally have

    E[e^{-s\tilde{w}}] = (1 - \rho)\, E[e^{-s\tilde{w}} \mid \text{enter in idle period}] + \rho\, E[e^{-s\tilde{w}} \mid \text{enter in busy period}]
                       = (1 - \rho) + \rho\, \frac{[1 - B^*(s)](1 - \rho)}{[s - \lambda + \lambda B^*(s)]\, \bar{x}}
                       = \frac{s(1 - \rho)}{s - \lambda + \lambda B^*(s)}        (5.163)

Voila! This is exactly the P-K transform equation for waiting time, namely, W^*(s) \triangleq E[e^{-s\tilde{w}}], given in Eq. (5.105).
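As a modern aside (not part of the original development), Eq. (5.163) is easy to check with a computer algebra system. The sketch below, in Python/SymPy, substitutes the exponential service transform B*(s) = mu/(s + mu), a test case of our own choosing for which the answer is known in closed form, and also recovers the mean wait; all symbol names here are ours.

```python
# Checking Eq. (5.163) for exponential service, B*(s) = mu/(s + mu).
import sympy as sp

s, lam, mu = sp.symbols('s lam mu', positive=True)
rho = lam / mu
B_star = mu / (s + mu)                         # B*(s) for exponential service

W_star = s * (1 - rho) / (s - lam + lam * B_star)   # Eq. (5.163)
# Should simplify to (1 - rho)*(s + mu)/(s + mu - lam), the M/M/1 result.
print(sp.simplify(W_star))

# Mean wait is -dW*(s)/ds at s = 0; should reduce to lam/(mu*(mu - lam)).
print(sp.simplify(-sp.diff(W_star, s).subs(s, 0)))
```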
Thus we have shown how to go from a busy-period analysis to the calculation of waiting time in the system. This method is reported upon in [CONW 67] and we will have occasion to return to it in Chapter 3, Volume II.

5.11. COMBINATORIAL METHODS

We had mentioned in the opening remarks of this chapter that consideration of random walks and combinatorial methods is applicable to the study of the M/G/1 queue. We take this opportunity to indicate some aspects of those methods. In Figure 5.13 we have reproduced U(t) from Figure 5.10a. In addition, we have indicated the "random walk" R(t), which is the same as U(t) except that it does not saturate at zero but rather continues to decline at a rate of 1 sec/sec below the horizontal axis; of course, it too takes vertical jumps at the customer-arrival instants. We introduce this diagram in order to define what are known as ladder indices.

* The following simple argument enables us to calculate E[Y]. In a long interval (say, t) the server is busy a fraction \rho of the time. Each idle period in M/G/1 is of average length 1/\lambda sec, and therefore we expect to have (1 - \rho)t/(1/\lambda) idle periods. This will also be the number of busy periods, approximately; therefore, since the time spent in busy periods is \rho t, the average duration of each must be \rho t/[\lambda t(1 - \rho)] = \bar{x}/(1 - \rho). As t \to \infty, this argument becomes exact.


Figure 5.13 The descending ladder indices.


is defined as the instant when the random walk R (t) rises from its kth new
minimum (and the value of this minimum is referred to as the ladder height).
In Figure 5.13 the first three ladder indices are indicated by heavy dots.
Fluctuation theory concerns itself with the distribution of such ladder
indices and is amply discussed both in Feller [F ELL 66] and in Prabhu
[PRAB 65] in which they consider the applications of that theory to queueing
proce sses. Here we merely make the obse rvation that each ladder index
identifies the arrival instants for those customers who begin new busy p eriods
a nd it is th is observation that makes them interesting for queuein g theory.
More over, whenever R (t ) drops below its previous ladder height then a busy
peri od terminates as shown in Figu re 5.I3. Thu s, between the occurrence of a
ladder index and the first time R (t) drops below the corresponding ladder
height, a busy period ensues and both R (t) and U(t ) have exactly the same
shape, where the former is shifted down from the latter by an am ount exactly
equal to the accumulated idle time since the end of the first busy peri od . One
sees that we a re quickly led into meth ods from combinatorial theory when
we deal with such indices.
In a similar vein, Tak acs has successfully applied combinatorial theory to
the study of th e busy period. He consider s this subject in depth in his book
[TAKA 67] on combinatorial methods as applied to queuein g theory and
develops, as his cornerstone , a generali zati on of the classical ballot theorem.
The classical ballot theorem concerns itself with the counting of votes in a.
two-way conte st involving candidate A and candidate B. Ifwe assume th at A
scores a votes and B scores b votes and that a ;:::: mb , where m is a nonnegati ve integer and if we let P be the probability that through ou t the


if all possible sequences of voting records are equally likely, then the classical ballot theorem states that

    P = \frac{a - mb}{a + b}        (5.164)

This theorem originated in 1887 (see [TAKA 67] for its history). Takacs generalized this theorem and phrased it in terms of cards drawn from an urn in the following way. Consider an urn with n cards, where the cards are marked with the nonnegative integers k_1, k_2, \ldots, k_n and where

    \sum_{i=1}^{n} k_i = k \le n

(that is, the ith card in the set is marked with the integer k_i). Assume that all cards are drawn without replacement from the urn. Let \nu_r (r = 1, \ldots, n) be the number on the card drawn at the rth drawing. Let

    N_r = \nu_1 + \nu_2 + \cdots + \nu_r        r = 1, 2, \ldots, n

N_r is thus the sum of the numbers on all cards drawn up through the rth draw. Takacs' generalization of the classical ballot theorem states that

    P[N_r < r \text{ for all } r = 1, 2, \ldots, n] = \frac{n - k}{n}        (5.165)
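The theorem is also easy to test empirically. The following Monte Carlo sketch (ours, with an arbitrarily chosen test set of card values) shuffles the cards repeatedly and counts how often N_r < r holds for every r; the observed frequency should approach (n - k)/n.

```python
# A direct Monte Carlo check of Takacs' ballot theorem, Eq. (5.165).
import random

cards = [0, 0, 3, 0, 1, 0, 2, 0, 0, 1]   # n = 10 cards, k = sum = 7 (our choice)
n, k = len(cards), sum(cards)
trials, hits = 100_000, 0

for _ in range(trials):
    random.shuffle(cards)                 # all drawing orders equally likely
    partial = 0
    for r, v in enumerate(cards, start=1):
        partial += v                      # N_r, the sum through the r-th draw
        if partial >= r:                  # violates N_r < r
            break
    else:
        hits += 1

print(hits / trials, (n - k) / n)         # estimate vs. exact (n - k)/n = 0.3
```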

The proof of this theorem is not especially difficult but will not be reproduced here. Note the simplicity of the theorem and, in particular, that the probability expressed is independent of the particular set of integers k_i and depends only upon their sum k. We may identify \nu_r as the number of customer arrivals during the service of the rth customer in a busy period of an M/G/1 queueing system. Thus N_r + 1 is the cumulative number of arrivals up to the conclusion of the rth customer's service during a busy period. We are thus involved in a race between N_r + 1 and r: As soon as r equals N_r + 1 the busy period must terminate since, at this point, we have served exactly as many as have arrived (including the customer who initiated the busy period) and so the system empties. If we now let N_bp be the number of customers served in a busy period, it is possible to apply Eq. (5.165) and obtain the following result [TAKA 67]:

    P[N_{bp} = n] = \frac{1}{n} P[N_n = n - 1]        (5.166)

It is easy to calculate the probability on the right-hand side of this equation since we have Poisson arrivals: All we need do is condition this number of arrivals on the duration of the busy period, multiply by the probability that n service intervals will, in fact, sum to this length, and then integrate over all possible lengths. Thus

    P[N_n = n - 1] = \int_0^\infty \frac{(\lambda y)^{n-1}}{(n-1)!} e^{-\lambda y}\, b_{(n)}(y)\, dy        (5.167)

where b_{(n)}(y) is the n-fold convolution of b(y) with itself [see Eq. (5.110)] and represents the pdf for the sum of n independent random variables, where each is drawn from the common density b(y). Thus we arrive at an explicit expression for the probability distribution of the number served in a busy period:

    P[N_{bp} = n] = \int_0^\infty \frac{(\lambda y)^{n-1}}{n!} e^{-\lambda y}\, b_{(n)}(y)\, dy        (5.168)

We may go further and calculate G(y), the distribution of the busy period, by integrating in Eq. (5.168) only up to some point y (rather than \infty) and then summing over all possible numbers served in the busy period, that is,

    G(y) = \int_0^y \sum_{n=1}^\infty e^{-\lambda x}\, \frac{(\lambda x)^{n-1}}{n!}\, b_{(n)}(x)\, dx        (5.169)

Thus Eq. (5.169) is an explicit expression in terms of known quantities for the distribution of the busy period and in fact may be used in place of the expression given in Eq. (5.137), the Laplace transform of dG(y)/dy. This is the expression we had promised earlier; although we have expressed it as an infinite summation, it nevertheless provides the ability to approximate the busy-period distribution numerically in any given situation. Similarly, Eq. (5.168) gives an explicit expression for the number served in the busy period.
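As one illustration of such a numerical approximation (ours, not from the text), consider exponential service, for which b_{(n)}(y) is the n-stage Erlang density and Eq. (5.168) can be checked against a known closed form for the M/M/1 busy period, P[N_bp = n] = (1/n) C(2n-2, n-1) rho^{n-1}/(1+rho)^{2n-1}. A sketch:

```python
# Numerical evaluation of Eq. (5.168), assuming exponential service
# b(y) = mu*exp(-mu*y), so that b_(n) is the n-stage Erlang density.
from math import comb, exp, factorial
from scipy.integrate import quad

lam, mu = 0.5, 1.0                        # rho = 0.5 (our test values)
rho = lam / mu

def integrand(y, n):
    # (lam*y)^(n-1)/n! * e^(-lam*y) times the Erlang-n density at y
    erlang_n = mu * (mu * y) ** (n - 1) / factorial(n - 1) * exp(-mu * y)
    return (lam * y) ** (n - 1) / factorial(n) * exp(-lam * y) * erlang_n

for n in range(1, 6):
    numeric, _ = quad(integrand, 0, 50, args=(n,))
    closed = comb(2 * n - 2, n - 1) * rho ** (n - 1) / (n * (1 + rho) ** (2 * n - 1))
    print(n, numeric, closed)             # the two columns should agree
```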
The reader may have observed that our study of the busy period has really been the study of a transient phenomenon, and this is one of the reasons that the development bogged down. In the next section we consider certain aspects of the transient solution for M/G/1 a bit further.

5.12. THE TAKACS INTEGRODIFFERENTIAL EQUATION

In this section we take a closer look at the unfinished work U(t) and derive the forward Kolmogorov equation for its time-dependent behavior. A moment's reflection will reveal the fact that the unfinished work U(t) is a continuous-time, continuous-state Markov process that is subject to discontinuous changes. It is a Markov process since the entire past history of its motion is summarized in its current value as far as its future behavior is concerned. That is, its vertical discontinuities occur at instants of customer arrivals, and for M/G/1 these arrivals form a Poisson process (therefore, we need not know how long it has been since the last arrival); moreover, the current value of U(t) tells us exactly how much work remains in the system at each instant.
We wish to derive the probability distribution function for U(t), given its initial value at time t = 0. Accordingly we define

    F(w, t; w_0) \triangleq P[U(t) \le w \mid U(0) = w_0]        (5.170)

This notation is a bit cumbersome, and so we choose to suppress the initial value of the unfinished work and use the shorthand notation F(w, t) \triangleq F(w, t; w_0) with the understanding that the initial value is w_0. We wish to relate the probability F(w, t + \Delta t) to its possible values at time t. We observe that we can reach this state from t if, on the one hand, there had been no arrivals during this increment in time [which occurs with probability 1 - \lambda \Delta t + o(\Delta t)] and the unfinished work was no larger than w + \Delta t at time t; or if, on the other hand, there had been an arrival in this interval [with probability \lambda \Delta t + o(\Delta t)] such that the unfinished work at time t, plus the new increment of work brought in by this customer, together do not exceed w. These observations lead us to the following equation:

    F(w, t + \Delta t) = (1 - \lambda \Delta t) F(w + \Delta t, t) + \lambda \Delta t \int_{x=0}^{w} B(w - x)\, \frac{\partial F(x, t)}{\partial x}\, dx + o(\Delta t)        (5.171)

Clearly, (\partial F(x,t)/\partial x)\, dx \triangleq d_x F(x,t) is the probability that at time t we have x < U(t) \le x + dx. Expanding our distribution function in its first variable we have

    F(w + \Delta t, t) = F(w, t) + \frac{\partial F(w,t)}{\partial w} \Delta t + o(\Delta t)

Using this expansion for the first term on the right-hand side of Eq. (5.171) we obtain

    F(w, t + \Delta t) = F(w, t) + \frac{\partial F(w,t)}{\partial w} \Delta t - \lambda \Delta t \left[ F(w,t) + \frac{\partial F(w,t)}{\partial w} \Delta t \right] + \lambda \Delta t \int_{x=0}^{w} B(w - x)\, d_x F(x,t) + o(\Delta t)

Subtracting F(w, t), dividing by \Delta t, and passing to the limit as \Delta t \to 0, we finally obtain the Takacs integrodifferential equation for U(t):

    \frac{\partial F(w,t)}{\partial t} = \frac{\partial F(w,t)}{\partial w} - \lambda F(w,t) + \lambda \int_{x=0}^{w} B(w - x)\, d_x F(x,t)        (5.172)


Takacs [TAKA 55] derived this equation for the more general case of a nonhomogeneous Poisson process, namely, where the arrival rate \lambda(t) depends upon t. He showed that this equation is good for almost all w \ge 0 and t \ge 0; it does not hold at those w and t for which \partial F(w,t)/\partial w has an accumulation of probability (namely, an impulse). This occurs, in particular, at w = 0 and would give rise to the term F(0, t)u_0(w) in \partial F(w,t)/\partial w, whereas no other term in the equation contains such an impulse.
We may gain more information from the Takacs integrodifferential equation if we transform it on the variable w (and not on t); thus, using the transform variable r, we define

    W^{*\cdot}(r, t) \triangleq \int_{0^-}^\infty e^{-rw}\, d_w F(w, t)        (5.173)

We use the notation (*\cdot) to denote transformation on the first, but not the second, argument. The symbol W is chosen since, as we shall see, \lim_{t \to \infty} W^{*\cdot}(r, t) = W^*(r), which is our former transform for the waiting-time pdf [see, for example, Eq. (5.103)].

Let us examine the transform of each term in Eq. (5.172) separately. First we note that since F(w, t) = \int_{0^-}^{w} d_x F(x, t), then from entry 13 in Table I.3 of Appendix I (and its footnote) we must have

    \int_{0^-}^\infty F(w, t) e^{-rw}\, dw = \frac{W^{*\cdot}(r, t) + F(0^-, t)}{r}

and, similarly, we have

    \int_{0^-}^\infty B(w) e^{-rw}\, dw = \frac{B^*(r) + B(0^-)}{r}

However, since the unfinished work and the service time are both nonnegative random variables, it must be that F(0^-, t) = B(0^-) = 0 always. We recognize that the last term in the Takacs integrodifferential equation is a convolution between B(w) and \partial F(w,t)/\partial w, and therefore the transform of this convolution (including the constant multiplier \lambda) must be, by properties 10 and 13 in that same table, \lambda W^{*\cdot}(r,t)[B^*(r) - B(0^-)]/r = \lambda W^{*\cdot}(r,t)B^*(r)/r. Now it is clear that the transform for the term \partial F(w,t)/\partial w will be W^{*\cdot}(r,t); but this transform includes F(0^+, t), the transform of the impulse located at the origin for this partial derivative, and since we know that the Takacs integrodifferential equation does not contain that impulse, it must be subtracted out. Thus we have, from Eq. (5.172),

    \frac{1}{r} \frac{\partial W^{*\cdot}(r,t)}{\partial t} = W^{*\cdot}(r,t) - F(0^+, t) - \frac{\lambda W^{*\cdot}(r,t)}{r} + \frac{\lambda W^{*\cdot}(r,t) B^*(r)}{r}        (5.174)


which may be rewritten as

    \frac{\partial W^{*\cdot}(r,t)}{\partial t} = [r - \lambda + \lambda B^*(r)]\, W^{*\cdot}(r,t) - rF(0^+, t)        (5.175)

Takacs gives the solution to this equation {p. 51, Eq. (8) in [TAKA 62b]}. We may now transform on our second variable t by first defining the double transform

    F^{**}(r, s) \triangleq \int_0^\infty e^{-st}\, W^{*\cdot}(r, t)\, dt        (5.176)

We also need the definition

    F_0^*(s) \triangleq \int_0^\infty e^{-st}\, F(0^+, t)\, dt        (5.177)

We may now transform Eq. (5.175), using the transform property given as entry 11 in Table I.3 (and its footnote), to obtain

    sF^{**}(r, s) - W^{*\cdot}(r, 0^-) = [r - \lambda + \lambda B^*(r)]\, F^{**}(r, s) - rF_0^*(s)

From this we obtain

    F^{**}(r, s) = \frac{W^{*\cdot}(r, 0^-) - rF_0^*(s)}{s - r + \lambda - \lambda B^*(r)}        (5.178)

The unknown function F_0^*(s) may be determined by insisting that the transform F^{**}(r, s) be analytic in the region Re(s) > 0, Re(r) > 0. This implies that the zeroes of the numerator and denominator must coincide in this region; Benes [BENE 56] has shown that in this region r = \eta(s) is the unique root of the denominator in Eq. (5.178). Thus W^{*\cdot}(\eta, 0^-) = \eta F_0^*(s), and so (writing 0^- as 0) we have

    F^{**}(r, s) = \frac{W^{*\cdot}(r, 0) - (r/\eta) W^{*\cdot}(\eta, 0)}{s - r + \lambda - \lambda B^*(r)}        (5.179)

Now we recall that U(0) = w_0 with probability one, and so from Eq. (5.173) we have W^{*\cdot}(r, 0) = e^{-rw_0}. Thus F^{**}(r, s) takes the final form

    F^{**}(r, s) = \frac{e^{-rw_0} - (r/\eta) e^{-\eta w_0}}{s - r + \lambda - \lambda B^*(r)}        (5.180)

We will return to this equation later, in Chapter 2, Volume II, when we discuss the diffusion approximation. For now it behooves us to investigate the steady-state value of these functions; in particular, it can be shown that F(w, t) has a limit as t \to \infty so long as \rho < 1, and this limit will be independent of the initial condition


F(w, 0); we denote this limit by F(w) = \lim_{t \to \infty} F(w, t), and from Eq. (5.172) we find that it must satisfy the following equation:

    \frac{dF(w)}{dw} = \lambda F(w) - \lambda \int_{x=0}^{w} B(w - x)\, dF(x)        (5.181)

Furthermore, for \rho < 1, W^*(r) \triangleq \lim_{t \to \infty} W^{*\cdot}(r, t) will exist and be independent of the initial distribution. Taking the transform of Eq. (5.181), we find, as we did in deriving Eq. (5.174),

    W^*(r) - F(0^+) = \frac{\lambda W^*(r)}{r} - \frac{\lambda B^*(r) W^*(r)}{r}

where F(0^+) = \lim_{t \to \infty} F(0^+, t) and equals the probability that the unfinished work is zero. This last may be rewritten to give

    W^*(r) = \frac{rF(0^+)}{r - \lambda + \lambda B^*(r)}

However, we require W^*(0) = 1, which requires that the unknown constant F(0^+) have the value F(0^+) = 1 - \rho. Finally we have

    W^*(r) = \frac{r(1 - \rho)}{r - \lambda + \lambda B^*(r)}        (5.182)

which is exactly the Pollaczek-Khinchin transform equation for waiting time, as we promised!
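The steady-state result can also be checked by brute force. The sketch below (our illustration, not part of the text) simulates successive waiting times with the standard Lindley recursion w_{n+1} = max(0, w_n + x_n - t_{n+1}) [cf. LIND 52], using deterministic (M/D/1) service for simplicity; the long-run mean should approach the P-K value \lambda \overline{x^2}/(2(1 - \rho)).

```python
# Simulation cross-check of the P-K mean wait for M/D/1 (our example).
import random

lam, xbar = 0.5, 1.0                  # rho = 0.5
rho, x2bar = lam * xbar, xbar ** 2    # deterministic service: x2bar = xbar^2

random.seed(1)
w, total, N = 0.0, 0.0, 200_000
for _ in range(N):
    t = random.expovariate(lam)       # Poisson arrivals: exponential gaps
    w = max(0.0, w + xbar - t)        # Lindley recursion (constant service)
    total += w

print(total / N)                      # simulation estimate
print(lam * x2bar / (2 * (1 - rho)))  # P-K mean wait = 0.5
```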
This completes our discussion of the system M/G/1 (for the time being). Next we consider the "companion" system, G/M/m.
REFERENCES

BENE 56  Benes, V. E., "On Queues with Poisson Arrivals," Annals of Mathematical Statistics, 28, 670-677 (1956).
CONW 67  Conway, R. W., W. L. Maxwell, and L. W. Miller, Theory of Scheduling, Addison-Wesley (Reading, Mass.), 1967.
COX 55   Cox, D. R., "The Analysis of Non-Markovian Stochastic Processes by the Inclusion of Supplementary Variables," Proc. Camb. Phil. Soc. (Math. and Phys. Sci.), 51, 433-441 (1955).
COX 62   Cox, D. R., Renewal Theory, Methuen (London), 1962.
FELL 66  Feller, W., Probability Theory and Its Applications, Vol. II, Wiley (New York), 1966.
GAVE 59  Gaver, D. P., Jr., "Imbedded Markov Chain Analysis of a Waiting-Line Process in Continuous Time," Annals of Mathematical Statistics, 30, 698-720 (1959).


HEND 72  Henderson, W., "Alternative Approaches to the Analysis of the M/G/1 and G/M/1 Queues," Operations Research, 15, 92-101 (1972).
KEIL 65  Keilson, J., "The Role of Green's Functions in Congestion Theory," Proc. Symposium on Congestion Theory, Univ. of North Carolina Press, 43-71 (1965).
KEND 51  Kendall, D. G., "Some Problems in the Theory of Queues," Journal of the Royal Statistical Society, Ser. B, 13, 151-185 (1951).
KEND 53  Kendall, D. G., "Stochastic Processes Occurring in the Theory of Queues and their Analysis by the Method of the Imbedded Markov Chain," Annals of Mathematical Statistics, 24, 338-354 (1953).
KHIN 32  Khinchin, A. Y., "Mathematical Theory of Stationary Queues," Mat. Sbornik, 39, 73-84 (1932).
LIND 52  Lindley, D. V., "The Theory of Queues with a Single Server," Proc. Cambridge Philosophical Society, 48, 277-289 (1952).
PALM 43  Palm, C., "Intensitatsschwankungen im Fernsprechverkehr," Ericsson Technics, 6, 1-189 (1943).
POLL 30  Pollaczek, F., "Uber eine Aufgabe der Wahrscheinlichkeitstheorie," I-II, Math. Zeitschrift, 32, 64-100, 729-750 (1930).
PRAB 65  Prabhu, N. U., Queues and Inventories, Wiley (New York), 1965.
RIOR 62  Riordan, J., Stochastic Service Systems, Wiley (New York), 1962.
SMIT 58  Smith, W. L., "Renewal Theory and its Ramifications," Journal of the Royal Statistical Society, Ser. B, 20, 243-302 (1958).
TAKA 55  Takacs, L., "Investigation of Waiting Time Problems by Reduction to Markov Processes," Acta Math. Acad. Sci. Hung., 6, 101-129 (1955).
TAKA 62a Takacs, L., Introduction to the Theory of Queues, Oxford University Press (New York), 1962.
TAKA 62b Takacs, L., "A Single-Server Queue with Poisson Input," Operations Research, 10, 388-397 (1962).
TAKA 67  Takacs, L., Combinatorial Methods in the Theory of Stochastic Processes, Wiley (New York), 1967.

EXERCISES

5.1. Prove Eq. (5.14) from Eq. (5.11).

5.2. Here we derive the residual lifetime density \hat{f}(x) discussed in Section 5.2. We use the notation of Figure 5.1.
(a) Observing that the event \{Y \le y\} can occur if and only if t < T_k \le t + y < T_{k+1} for some k, show that

    F_t(y) \triangleq P[Y \le y \mid t] = \sum_{k=1}^\infty \int_t^{t+y} [1 - F(t + y - x)]\, dP[T_k \le x]

(b) Observing that T_k \le x if and only if \alpha(x), the number of "arrivals" in (0, x), is at least k, that is, P[T_k \le x] = P[\alpha(x) \ge k], show that

    \sum_{k=1}^\infty P[T_k \le x] = \sum_{k=1}^\infty k\, P[\alpha(x) = k]

(c) For large x, the mean-value expression in (b) is x/m_1. Let \hat{F}(y) = \lim_{t \to \infty} F_t(y) with corresponding pdf \hat{f}(y). Show that we now have

    \hat{f}(y) = \frac{1 - F(y)}{m_1}

5.3. Let us rederive the P-K mean-value formula (5.72).
(a) Recognizing that a new arrival is delayed by one service time for each queued customer plus the residual service time of the customer in service, write an expression for W in terms of \bar{N}_q, \rho, \bar{x}, \sigma_b^2, and P[\tilde{w} > 0].
(b) Use Little's result in (a) to obtain Eq. (5.72).

5.4. Replace 1 - \rho in Eq. (5.85) by an unknown constant and show that the condition Q(1) = 1 easily gives us the correct value of 1 - \rho for this constant.

5.5. (a) From Eq. (5.86) form Q^{(1)}(1) and show that it gives the expression for \bar{q} in Eq. (5.63). Note that L'Hospital's rule will be required twice to remove the indeterminacies in the expression for Q^{(1)}(1).
(b) From Eq. (5.105), find the first two moments of the waiting time and compare with Eqs. (5.113) and (5.114).

5.6. We wish to prove that the limiting probability r_k for the number of customers found by an arrival is equal to the limiting probability d_k for the number of customers left behind by a departure, in any queueing system in which the state changes by unit step values only (positive or negative). Beginning at t = 0, let x_n be those instants when N(t) (the number in system) increases by one and y_n be those instants when N(t) decreases by unity, n = 1, 2, .... Let N(x_n^-) be denoted by \alpha_n and N(y_n^+) by \beta_n. Let N(0) = i.
(a) Show that if \beta_{n+i} \le k, then \alpha_{n+k+1} \le k.
(b) Show that if \alpha_{n+k+1} \le k, then \beta_{n+i} \le k.
(c) Show that (a) and (b) must therefore give, for any k,

    \lim_{n \to \infty} P[\beta_n \le k] = \lim_{n \to \infty} P[\alpha_n \le k]

which establishes that r_k = d_k.


5.7. In this exercise we explore the method of supplementary variables as applied to the M/G/1 queue. As usual, let P_k(t) = P[N(t) = k]. Moreover, let p_k(t, x_0)\, dx_0 = P[N(t) = k,\; x_0 < X_0(t) \le x_0 + dx_0], where X_0(t) is the service already received by the customer in service at time t.
(a) Show that

    \frac{\partial P_0(t)}{\partial t} = -\lambda P_0(t) + \int_0^\infty p_1(t, x_0)\, r(x_0)\, dx_0

where

    r(x_0) = \frac{b(x_0)}{1 - B(x_0)}

(b) Let p_k = \lim_{t \to \infty} P_k(t) and p_k(x_0) = \lim_{t \to \infty} p_k(t, x_0). From (a) we have the equilibrium result

    \lambda p_0 = \int_0^\infty p_1(x_0)\, r(x_0)\, dx_0

Show the following equilibrium results [where p_0(x_0) \triangleq 0]:

    (i)   \frac{dp_k(x_0)}{dx_0} = -[\lambda + r(x_0)]\, p_k(x_0) + \lambda p_{k-1}(x_0)
    (ii)  p_k(0) = \int_0^\infty p_{k+1}(x_0)\, r(x_0)\, dx_0        k > 1
    (iii) p_1(0) = \int_0^\infty p_2(x_0)\, r(x_0)\, dx_0 + \lambda p_0

(c) The four equations in (b) determine the equilibrium probabilities when combined with an appropriate normalization equation. In terms of p_0 and p_k(x_0) (k = 1, 2, ...), give this normalization equation.
(d) Let R(z, x_0) = \sum_{k=1}^\infty p_k(x_0) z^k. Show that

    \frac{\partial R(z, x_0)}{\partial x_0} = [\lambda z - \lambda - r(x_0)]\, R(z, x_0)

and

    zR(z, 0) = \int_0^\infty r(x_0)\, R(z, x_0)\, dx_0 + \lambda z(z - 1)p_0

(e) Show that the solution for R(z, x_0) from (d) must be

    R(z, x_0) = R(z, 0)\, e^{-\lambda x_0 (1 - z) - \int_0^{x_0} r(u)\, du}
    R(z, 0) = \frac{\lambda z(z - 1) p_0}{z - B^*(\lambda - \lambda z)}

(f) Defining R(z) \triangleq \int_0^\infty R(z, x_0)\, dx_0, show that

    R(z) = R(z, 0)\, \frac{1 - B^*(\lambda - \lambda z)}{\lambda(1 - z)}

(g) From the normalization equation of (c), now show that

    p_0 = 1 - \rho        (\rho = \lambda \bar{x})

(h) Consistent with Eq. (5.78) we now define

    Q(z) = p_0 + R(z)

Show that Q(z) expressed this way is identical to the P-K transform equation (5.86). (See [COX 55] for additional details of this method.)
5.8. Consider the M/G/\infty queue in which each customer always finds a free server; thus s(y) = b(y) and T = \bar{x}. Let P_k(t) = P[N(t) = k] and assume P_0(0) = 1.
(a) Show that

    P_k(t) = \sum_{n=k}^\infty \frac{(\lambda t)^n}{n!} e^{-\lambda t} \binom{n}{k} \left[ \frac{1}{t} \int_0^t [1 - B(x)]\, dx \right]^k \left[ \frac{1}{t} \int_0^t B(x)\, dx \right]^{n-k}

[HINT: (1/t) \int_0^t B(x)\, dx is the probability that a customer's service terminates by time t, given that his arrival time was uniformly distributed over the interval (0, t). See Eq. (2.137) also.]
(b) Show that p_k \triangleq \lim_{t \to \infty} P_k(t) is

    p_k = \frac{(\lambda \bar{x})^k}{k!} e^{-\lambda \bar{x}}

regardless of the form of B(x)!

5.9. Consider M/E_2/1.
(a) Find the polynomial for G*(s).
(b) Solve for S(y) = P[time in system \le y].

5.10. Consider an M/D/1 system for which \bar{x} = 2 sec.
(a) Show that the residual service time pdf \hat{f}(x) is a rectangular distribution.
(b) For \rho = 0.25, show that the result of Eq. (5.111) with four terms may be used as a good approximation to the distribution of queueing time.

5.11. Consider an M/G/1 queue in which bulk arrivals occur at rate \lambda, with probability g_r that r customers arrive together at an arrival instant.
(a) Show that the z-transform of the number of customers arriving in an interval of length t is e^{-\lambda t[1 - G(z)]}, where G(z) = \sum_r g_r z^r.
(b) Show that the z-transform of the random variable \tilde{v}_n, the number of arrivals during the service of a customer, is B^*[\lambda - \lambda G(z)].

5.12. Consider the M/G/1 bulk arrival system in the previous problem. Using the method of imbedded Markov chains:
(a) Find the expected queue size. [HINT: Show that \bar{v} = \rho and

    \overline{v^2} - \bar{v} = \left. \frac{d^2 V(z)}{dz^2} \right|_{z=1} = \rho^2 (C_b^2 + 1) + \rho \bar{g} \left( C_g^2 + 1 - \frac{1}{\bar{g}} \right)

where C_g is the coefficient of variation of the bulk group size and \bar{g} is the mean group size.]
(b) Show that the generating function for queue size is

    Q(z) = \frac{(1 - \rho)(1 - z) B^*[\lambda - \lambda G(z)]}{B^*[\lambda - \lambda G(z)] - z}

Using Little's result, find the ratio W/\bar{x} of the expected wait on queue to the average service time.
(c) Using the same method (imbedded Markov chain), find the expected number of groups in the queue (averaged over departure times). [HINTS: Show that D(z) = \beta^*(\lambda - \lambda z), where D(z) is the generating function for the number of groups arriving during the service time for an entire group and where \beta^*(s) is the Laplace transform of the service-time density for an entire group. Also note that \beta^*(s) = G[B^*(s)], which allows us to show that \overline{r^2} = (\bar{x})^2 (\overline{g^2} - \bar{g}) + \overline{x^2}\, \bar{g}, where \overline{r^2} is the second moment of the group service time.]
(d) Using Little's result, find W_g, the expected wait on queue for a group (measured from the arrival time of the group until the start of service of the first member of the group), and show that

    \frac{W_g}{\bar{x}} = \frac{\rho \bar{g}}{2(1 - \rho)} \left[ 1 + C_g^2 + \frac{C_b^2}{\bar{g}} \right]

(e) If the customers within a group arriving together are served in random order, show that the ratio of the mean waiting time for a single customer to the average service time for a single customer is W_g/\bar{x} from (d) increased by (1/2)\bar{g}(1 + C_g^2) - 1/2.

5.13. Consider an M/G/1 system in which service is instantaneous but is only available at "service instants," the intervals between successive service instants being independently distributed with PDF F(x). The maximum number of customers that can be served at any service instant is m. Note that this is a bulk service system.
(a) Show that if q_n is the number of customers in the system just before the nth service instant, then

    q_{n+1} = q_n + v_n - m        q_n \ge m
    q_{n+1} = v_n                  q_n < m

where v_n is the number of arrivals in the interval between the nth and (n + 1)th service instants.
(b) Prove that the probability generating function of v_n is F^*(\lambda - \lambda z). Hence show that Q(z) is

    Q(z) = \frac{\sum_{k=0}^{m-1} p_k (z^m - z^k)}{z^m [F^*(\lambda - \lambda z)]^{-1} - 1}

where p_k = P[\tilde{q} = k] (k = 0, ..., m - 1).
(c) The \{p_k\} can be determined from the condition that within the unit disk of the z-plane, the numerator must vanish when the denominator does. Hence show that if F(x) = 1 - e^{-\mu x}, then

    Q(z) = \frac{z_m - 1}{z_m - z}

where z_m is the zero of z^m [1 + \lambda(1 - z)/\mu] - 1 outside the unit disk.

5.14. Consider an M/G/1 system with bulk service. Whenever the server becomes free, he accepts two customers from the queue into service simultaneously, or, if only one is in queue, he accepts that one; in either case, the service time for the group (of size 1 or 2) is drawn from B(x). Let q_n be the number of customers remaining after the nth service instant. Let v_n be the number of arrivals during the nth service. Define B*(s), Q(z), and V(z) as transforms associated with the random variables \tilde{x}, \tilde{q}, and \tilde{v} as usual. Let \rho = \lambda \bar{x}/2.
(a) Using the method of imbedded Markov chains, find

    E[\tilde{q}] = \lim_{n \to \infty} E[q_n]

in terms of \rho, C_b^2, and P[\tilde{q} = 0] \triangleq p_0.
(b) Find Q(z) in terms of B^*(\cdot), p_0, and p_1 \triangleq P[\tilde{q} = 1].
(c) Express p_1 in terms of p_0.

5.15. Consider an M/G/1 queueing system with the following variation. The server refuses to serve any customers unless at least two customers are ready for service, at which time both are "taken into" service. These two customers are served individually and independently, one after the other. The instant at which the second of these two is finished is called a "critical" time, and we shall use these critical times as the points in an imbedded Markov chain. Immediately following a critical time, if there are two more ready for service, they are both "taken into" service as above. If one or none is ready, then the server waits until a pair is ready, and so on. Let

    q_n = number of customers left behind in the system immediately following the nth critical time
    v_n = number of customers arriving during the combined service time of the nth pair of customers

(a) Derive a relationship between q_{n+1}, q_n, and v_{n+1}.
(b) Find

    V(z) = \sum_{k=0}^\infty P[v_n = k]\, z^k

(c) Derive an expression for Q(z) = \lim_{n \to \infty} Q_n(z) in terms of p_0 = P[\tilde{q} = 0], where

    Q_n(z) = \sum_{k=0}^\infty P[q_n = k]\, z^k

(d) How would you solve for p_0?
(e) Describe (do not calculate) two methods for finding \bar{q}.

5.16. Consider an M/G/1 queueing system in which service is given as follows. Upon entry into service, a coin is tossed, which has probability p of giving Heads. If the result is Heads, then the service time for that customer is zero seconds. If Tails, his service time is drawn from the exponential distribution

    b(x) = \mu e^{-\mu x}        x \ge 0

(a) Find the average service time \bar{x}.
(b) Find the variance of the service time, \sigma_b^2.
(c) Find the expected waiting time W.
(d) Find W*(s).
(e) From (d), find the expected waiting time W.
(f) From (d), find W(t) = P[waiting time \le t].
5.17. Consider an M/G/1 queue. Let E be the event that T sec have elapsed since the arrival of the last customer. We begin at a random time and measure the time \tilde{w} until event E next occurs. This measurement may involve the observation of many customer arrivals before E occurs.
(a) Let \hat{A}(t) be the interarrival-time distribution for those intervals during which E does not occur. Find \hat{A}(t).
(b) Find \hat{A}^*(s) = \int_0^\infty e^{-st}\, d\hat{A}(t).
(c) Find W^*(s \mid n) = \int_0^\infty e^{-sw}\, dW(w \mid n), where W(w \mid n) = P[time to event E \le w \mid n arrivals occur before E].
(d) Find W^*(s) = \int_0^\infty e^{-sw}\, dW(w), where W(w) = P[time to event E \le w].
(e) Find the mean time to event E.
5.18. Consider an M/G/1 system in which time is divided into intervals of length q sec each. Assume that arrivals are Bernoulli, that is,

    P[1 arrival in any interval] = \lambda q
    P[0 arrivals in any interval] = 1 - \lambda q
    P[more than 1 arrival in any interval] = 0

Assume that a customer's service time \tilde{x} is some multiple of q sec, such that

    P[service time = nq sec] = g_n        n = 0, 1, 2, ...

(a) Find E[number of arrivals in an interval].
(b) Find the average arrival rate.
(c) Express E[\tilde{x}] \triangleq \bar{x} and E[\tilde{x}(\tilde{x} - q)] \triangleq \overline{x^2} - \bar{x}q in terms of the moments of the g_n distribution (i.e., let g_k \triangleq \sum_{n=0}^\infty n^k g_n).
(d) Find \gamma_{mn} = P[m customers arrive in nq sec].
(e) Let v_m = P[m customers arrive during the service of a customer] and let

    V(z) = \sum_{m=0}^\infty v_m z^m        and        G(z) = \sum_{m=0}^\infty g_m z^m

Express V(z) in terms of G(z) and the system parameters \lambda and q.
(f) Find the mean number of arrivals during a customer service time from (e).

5.19. Suppose that in an M/G/1 queueing system the cost of making a customer wait t sec is c(t) dollars, where c(t) = \alpha e^{\beta t}. Find the average cost of queueing for a customer. Also determine the conditions necessary to keep the average cost finite.
5.20. We wish to find the interdeparture time probability density function d(t) for an M/G/1 queueing system.
(a) Find the Laplace transform D*(s) of this density, conditioned first on a nonempty queue left behind, and second on an empty queue left behind by a departing customer. Combine these results to get the Laplace transform of the interdeparture time density, and from this find the density itself.
(b) Give an explicit form for the probability distribution D(t), or density d(t) = dD(t)/dt, of the interdeparture time when we have a constant service time, that is,

    B(x) = 1        x \ge T
    B(x) = 0        x < T

5.21. Consider the following modified order of service for M/G/1. Instead of LCFS as in Figure 5.11, assume that after the interval X_1 the sub-busy period generated by C_2 occurs, which is followed by the sub-busy period generated by C_3, and so on, until the busy period terminates. Using the sequence of arrivals and service times shown in the upper contour of Figure 5.11a, redraw parts a, b, and c to correspond to the above order of service.

5.22. Consider an M/G/1 system in which a departing customer immediately joins the queue again with probability p, or departs forever with probability q = 1 - p. Service is FCFS, and the service time for a returning customer is independent of his previous service times. Let B*(s) be the transform for the service-time pdf and let B_T^*(s) be the transform for a customer's total service-time pdf.
(a) Find B_T^*(s) in terms of B*(s), p, and q.
(b) Let \overline{x_T^n} be the nth moment of the total service time. Find \bar{x}_T and \overline{x_T^2} in terms of \bar{x}, \overline{x^2}, p, and q.
(c) Show that the following recurrence formula holds:

    \overline{x_T^n} = \frac{1}{q} \left[ \overline{x^n} + p \sum_{k=1}^{n-1} \binom{n}{k} \overline{x^{n-k}}\, \overline{x_T^k} \right]

(d) Let

    Q_T(z) = \sum_{k=0}^\infty p_k^T z^k

where p_k^T = P[number in system = k]. For \lambda \bar{x} < q prove that

    Q_T(z) = \left( 1 - \frac{\lambda \bar{x}}{q} \right) \frac{q(1 - z) B^*[\lambda(1 - z)]}{(q + pz) B^*[\lambda(1 - z)] - z}

(e) Find \bar{N}, the average number of customers in the system.

5.23. Consider a first-come-first-served M/G/1 queue with the following changes. The server serves the queue as long as someone is in the system. Whenever the system empties, the server goes away on vacation for a certain length of time, which may be a random variable. At the end of his vacation the server returns and begins to serve customers again; if he returns to an empty system, then he goes away on vacation again. Let F(z) = \sum_{j=1}^\infty f_j z^j be the z-transform for the number of customers awaiting service when the server returns from vacation to find at least one customer waiting (that is, f_j is the probability that at the initiation of a busy period the server finds j customers awaiting service).
(a) Derive an expression which gives q_{n+1} in terms of q_n, v_{n+1}, and j (the number of customer arrivals during the server's vacation).
(b) Derive an expression for Q(z) = \lim_{n \to \infty} E[z^{q_n}] in terms of p_0 (equal to the probability that a departing customer leaves 0 customers behind). [HINT: Condition on j.]
(c) Show that p_0 = (1 - \rho)/F^{(1)}(1), where F^{(1)}(1) = dF(z)/dz |_{z=1} and \rho = \lambda \bar{x}.
(d) Assume now that the server's vacation will end whenever a new customer enters the empty system. For this case find F(z), and show that when we substitute it back into our answer for (b), we arrive at the classical M/G/1 solution.
5.24. We recognize that an arriving customer who finds k others in the system is delayed by the remaining service time for the customer in service plus the sum of (k - 1) complete service times.
(a) Using the notation and approach of Exercise 5.7, show that we may express the transform of the waiting-time pdf as

    W^*(s) = p_0 + \int_0^\infty \sum_{k=1}^\infty p_k(x_0) [B^*(s)]^{k-1}\, e^{\int_0^{x_0} r(u)\, du} \int_0^\infty e^{-sy}\, r(y + x_0)\, e^{-\int_0^{y + x_0} r(u)\, du}\, dy\, dx_0

(b) Show that the expression in (a) reduces to W*(s) as given in Eq. (5.106).

5.25. Let us relate \overline{s^k}, the kth moment of the time in system, to the moments of the number in system.
(a) Show that Eq. (5.98) leads directly to Little's result, namely

    \bar{N} = \lambda T

(b) From Eq. (5.98) establish the second-moment relationship

    \overline{N^2} - \bar{N} = \lambda^2\, \overline{s^2}

(c) Prove that the general relationship is

    \overline{N(N-1)(N-2) \cdots (N-k+1)} = \lambda^k\, \overline{s^k}
The Queue G/M/m

We have so far studied systems of the type M/M/1 and its variants (elementary queueing theory) and M/G/1 (intermediate queueing theory). The next natural system to study is G/M/1, in which we have an arbitrary interarrival time distribution A(t) and an exponentially distributed service time. It turns out that the m-server system G/M/m is almost as easy to study as is the single-server system G/M/1, and so we proceed directly to the m-server case. This study falls within intermediate queueing theory along with M/G/1, and it too may be solved using the method of the imbedded Markov chain, as elegantly presented by Kendall [KEND 51].

6.1. TRANSITION PROBABILITIES FOR THE IMBEDDED MARKOV CHAIN (G/M/m)

The system under consideration contains m servers, who render service in order of arrival. Customers arrive singly with interarrival times identically and independently distributed according to A(t) and with a mean time between arrivals equal to 1/\lambda. Service times are distributed exponentially with mean 1/\mu, the same distribution applying to each server independently. We consider steady-state results only (see discussion below).

As was the case in M/G/1, where the state variable became a continuous variable, so too in the system G/M/m we have a continuous-state variable in which we are required to keep track of the elapsed time since the last arrival, as well as the number in system. This is true since the probability of an arrival in any particular time interval depends upon the elapsed time (the "age") since the last arrival. It is possible to proceed with the analysis by considering the two-dimensional state description consisting of the age since the last arrival and the number in system; such a procedure is again referred to as the method of supplementary variables. A second approach, very much like that which we used for M/G/1, is the method of the imbedded Markov chain, which we pursue below. We have already seen a third approach, namely, the method of stages from Chapter 4.

Figure 6.1 The imbedded Markov points.


If we are to use the imbedded Markov chain approach then it must be
that the points we select as the regeneration points implicity inform us of the
elapsed time since the last arrival in an analogo us way as for th e expended
service time in th e case M/G/I . The natural set of points to choose for th is
purpose is the set of arrival instants. It is cert ainl y clear th at at these epo chs
the elapsed time since the last arrival is zero. Let us therefore define

q;

= number of customers found in the system immed iately prior

to the arrival of C;
We use qn' for th is random variable to distinguish it from qn, th e number of
customers left behind by the departure of C n in the M/Gfl system. In Figure
6.1 we show a sequence of arrival time s and identify them as critical points
imbedded in the time axis. It is clear th at the sequence {q,: } forms a discretestat e Markov chain . Defining
V~+l =

the number of customers serve d between the arrival of C n and C n+!,

we see immediatel y th at the followin g fund amental relatio n must hold:


- (6.1)

We mu st now calculate the tr an sition pro ba bilities asso ciate d with this
Mar kov chain , and so we define

p;; = P[q ~+!

= j I: =

i]

(6.2)

It is clear tha t Pu is merely th e pr ob abil ity th at i + I - j custo mers are


served during an interarrival time . It is furth er clear that
p;; = 0

for

>i+

- (6.3)

since there ar e at most i + I pre sent between the arriva l of C; and C n +!.
Th e Markov state-transition-pro bability diagram has tran sition s such as
shown in Figure 6.2; in thi s figure we sho w only the transition s out of sta te E,.

Figure 6.2 State-transition-probability diagram for the G/M/m imbedded Markov chain.
We are concerned with steady-state results only, and so we must inquire as to the conditions under which this Markov chain will be ergodic. It may easily be shown that the condition for ergodicity is, as we would expect, \lambda < m\mu, where \lambda is the average arrival rate associated with our input distribution and \mu is the parameter associated with our exponential service time (that is, \bar{x} = 1/\mu). As defined in Chapter 2 and as used in Section 3.5, we define the utilization factor for this system as

    \rho \triangleq \frac{\lambda}{m\mu}        (6.4)

Once again this is the average rate at which work enters the system (\lambda \bar{x} = \lambda/\mu sec of work per elapsed second) divided by the maximum rate at which the system can do work (m sec of work per elapsed second). Thus our condition for ergodicity is simply \rho < 1. In the ergodic case we are assured that an equilibrium probability distribution will exist describing the number of customers present at the arrival instants; thus we define

    r_k \triangleq \lim_{n \to \infty} P[q'_n = k]        (6.5)

and it is this probability distribution we seek for the system G/M/m. As we know from Chapter 2, the direct method of solution for this equilibrium distribution requires that we solve the following system of linear equations:

    r = rP        (6.6)

where

    r \triangleq [r_0, r_1, r_2, \ldots]        (6.7)

and P is the matrix whose elements are the one-step transition probabilities p_{ij}.
Our first task, then, is to find these one-step transition probabilities. We must consider four regions in the (i, j) plane, as shown in Figure 6.3, which gives the case m = 6. Regarding the region labeled 1, we already know from Eq. (6.3) that p_{ij} = 0 for i + 1 < j. Now for region 2 let us consider the range j \le i + 1 \le m, which is the case in which no customers are waiting and all present are engaged with their own server.
Figure 6.3 Range of validity for the p_{ij} equations (equation numbers are also given in parentheses).
During the interarrival period we see that i + 1 - j customers will complete their service. Since service times are exponentially distributed, the probability that any given customer will depart within t sec after the arrival of C_n is given by 1 - e^{-\mu t}; similarly, the probability that a given customer will not depart by this time is e^{-\mu t}. Therefore, in this region we have

    P[i + 1 - j \text{ departures within } t \text{ sec after } C_n \text{ arrives} \mid q_n' = i] = \binom{i+1}{i+1-j} [1 - e^{-\mu t}]^{i+1-j} [e^{-\mu t}]^j        (6.8)

where the binomial coefficient

    \binom{i+1}{i+1-j} = \binom{i+1}{j}

merely counts the number of ways in which we can choose the i + 1 - j customers to depart out of the i + 1 that are available in the system. With t_{n+1} as the interarrival time between C_n and C_{n+1}, Eq. (6.8) gives P[q'_{n+1} = j \mid q'_n = i, t_{n+1} = t]. Removing the condition on t_{n+1} we then have the one-step transition probability in this range, namely,

    p_{ij} = \int_0^\infty \binom{i+1}{j} (1 - e^{-\mu t})^{i+1-j} e^{-j\mu t}\, dA(t)        j \le i + 1 \le m        (6.9)

Next consider the range m \le j \le i + 1, m \le i (region 3),* which corresponds to the simple case in which all m servers are busy throughout the

* The point i = m - 1, j = m can properly lie either in region 2 or region 3.

interarrival interval. Under this assumption (that all m servers remain busy), since each service time is exponentially distributed (memoryless), the number of customers served during this interval will be Poisson distributed (in fact, it is a pure Poisson death process) with parameter m\mu; that is, defining "t, all m busy" as the event that t_{n+1} = t and all m servers remain busy during t_{n+1}, we have

    P[k \text{ customers served} \mid t, \text{all } m \text{ busy}] = \frac{(m\mu t)^k}{k!} e^{-m\mu t}

As pointed out earlier, if we are to go from state i to state j, then exactly i + 1 - j customers must have been served during the interarrival time; taking account of this and removing the condition on t_{n+1} we have

    p_{ij} = \int_{t=0}^\infty P[i + 1 - j \text{ served} \mid t, \text{all } m \text{ busy}]\, dA(t)

or

    p_{ij} = \int_{t=0}^\infty \frac{(m\mu t)^{i+1-j}}{(i+1-j)!} e^{-m\mu t}\, dA(t)        m \le j \le i + 1        (6.10)

Note that in Eq. (6.10) the indices i and j appear only as the difference i + 1 - j, and so it behooves us to define a new quantity with a single index:

    \beta_{i+1-j} \triangleq p_{ij}        m \le j \le i + 1, \; m \le i        (6.11)

where \beta_n is the probability of serving n customers during an interarrival time, given that all m servers remain busy during this interval; thus, with n = i + 1 - j, we have

    \beta_n = p_{i, i+1-n} = \int_{t=0}^\infty \frac{(m\mu t)^n}{n!} e^{-m\mu t}\, dA(t)        0 \le n \le i + 1 - m, \; m \le i        (6.12)

The last case we must consider (region 4) is j < m < i + 1, which describes the situation where C_n arrives to find m customers in service and i - m waiting in queue (which he joins); upon the arrival of C_{n+1} there are exactly j customers, all of whom are in service. If we assume that it requires y sec until the queue empties, then one may calculate p_{ij} in a straightforward manner to yield (see Exercise 6.1)

    p_{ij} = \int_{t=0}^\infty \int_{y=0}^{t} \binom{m}{j} (e^{-\mu(t-y)})^j (1 - e^{-\mu(t-y)})^{m-j}\, \frac{(m\mu y)^{i-m}}{(i-m)!}\, m\mu\, e^{-m\mu y}\, dy\, dA(t)        j < m < i + 1        (6.13)

Thus Eqs. (6.3), (6.9), (6.12), and (6.13) give the complete description of the one-step transition probabilities for the G/M/m system.
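In practice the \beta_n of Eq. (6.12) must usually be computed numerically. The sketch below (ours, with parameter values of our own choosing) does so for a two-stage Erlang interarrival distribution with mean 1/lam, whose pdf is a(t) = (2 lam)^2 t e^{-2 lam t}; since \sum_n \beta_n = 1 when the sum runs over all n, the partial sums give a useful sanity check.

```python
# Numerical evaluation of beta_n from Eq. (6.12) for Erlang-2 arrivals.
from math import exp, factorial
from scipy.integrate import quad

lam, mu, m = 0.8, 0.5, 3                 # rho = lam/(m*mu) < 1

def a_pdf(t):
    return (2 * lam) ** 2 * t * exp(-2 * lam * t)

def beta(n):
    f = lambda t: (m * mu * t) ** n / factorial(n) * exp(-m * mu * t) * a_pdf(t)
    return quad(f, 0, 200)[0]

bs = [beta(n) for n in range(25)]
print(bs[:5])
print(sum(bs))      # should be very close to 1
```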
Having established the form for our one-step transition probabilities, we may place them in the transition matrix

    P =
    | p_{00}      p_{01}      0           0           0        ...                              |
    | p_{10}      p_{11}      p_{12}      0           0        ...                              |
    | p_{20}      p_{21}      p_{22}      p_{23}      0        ...                              |
    |   .            .                                                                          |
    | p_{m-2,0}   p_{m-2,1}   ...   p_{m-2,m-1}    0        ...                                 |
    | p_{m-1,0}   p_{m-1,1}   ...   p_{m-1,m-1}    \beta_0    0     ...                         |
    | p_{m,0}     p_{m,1}     ...   p_{m,m-1}      \beta_1    \beta_0    0    ...               |
    |   .            .                                                                          |
    | p_{m+n,0}   p_{m+n,1}   ...   p_{m+n,m-1}    \beta_{n+1}   \beta_n   ...   \beta_0   0 ...|

In this matrix all terms above the upper diagonal are zero, and the terms \beta_n are given through Eq. (6.12). The "boundary" terms, denoted in this matrix by their generic symbol p_{ij}, are given either by Eq. (6.9) or Eq. (6.13) according to the range of the subscripts i and j. Of most importance to us are the transition probabilities \beta_n.
6.2. CONDITIONAL DISTRIBUTION OF QUEUE SIZE

Now we are in a position to find the equilibrium probabilities r_k, which must satisfy the system of linear equations given in Eq. (6.6). At this point we perhaps could guess at the form for r_k that satisfies these equations, but rather than that we choose to motivate the results that we obtain by the following intuitive arguments. In order to do this we define

    N_k(t) = number of arrival instants in the interval (0, t) at which the arriving customer finds the system in state E_k, given 0 customers at t = 0        (6.14)

Note from Figure 6.2 that the system can move up by at most one state, but may move down by many states in any single transition. We consider this motion between states and define (for m - 1 \le k)

    u_k = E[\text{number of times state } E_{k+1} \text{ is reached between two successive visits to state } E_k]        (6.15)

We note that the probability of reaching state E_{k+1} no times between returns to state E_k is equal to 1 - \beta_0 (that is, given we are in state E_k, the only way we can reach state E_{k+1} before our next visit to state E_k is for no customers to be served, which has probability \beta_0, and so the probability of not getting to E_{k+1} first is 1 - \beta_0, the probability of serving at least one). Furthermore, let

    \gamma = P[\text{leave state } E_{k+1} \text{ and return to it some time later without passing through state } E_j, \text{ where } j \le k]
           = P[\text{leave state } E_{k+1} \text{ and return to it later without passing through state } E_k]

This last is true since a visit to state E_j for j \le k must result in a visit to state E_k before next returning to state E_{k+1} (we move up only one state at a time). We note that \gamma is independent of k so long as k \ge m - 1 (i.e., all m servers are busy). We have the simple calculation

    P[n \text{ occurrences of state } E_{k+1} \text{ between two successive visits to state } E_k] = \gamma^{n-1}(1 - \gamma)\beta_0
This last equation is calculated as the probability (\beta_0) of reaching state E_{k+1} at all, times the probability (\gamma^{n-1}) of returning to E_{k+1} a total of n - 1 times without first touching state E_k, times the probability (1 - \gamma) of then visiting state E_k without first returning to state E_{k+1}. From this we may calculate

    u_k = \sum_{n=1}^\infty n \gamma^{n-1} (1 - \gamma) \beta_0

as the average number of visits to E_{k+1} between successive visits to state E_k. Thus

    u_k = \frac{\beta_0}{1 - \gamma}        for k \ge m - 1

Note that u_k is independent of k, and so we may drop the subscript, in which case we have

    u \triangleq u_k = \frac{\beta_0}{1 - \gamma}        for k \ge m - 1        (6.16)

From the definition in Eq. (6.15), u must be the limit of the ratio of the number of times we find ourselves in state E_{k+1} to the number of times we find ourselves in state E_k; thus we may write

    u = \lim_{t \to \infty} \frac{N_{k+1}(t)}{N_k(t)} = \frac{\beta_0}{1 - \gamma}        k \ge m - 1        (6.17)

However, this limit is merely the ratio of the steady-state probability of finding the system in state E_{k+1} to the probability of finding it in state E_k. Consequently, we have established

    \frac{r_{k+1}}{r_k} = u \triangleq \sigma        k \ge m - 1        (6.18)

The solution to this last set of equations is clearly

    r_k = K\sigma^k        k \ge m - 1        (6.19)

for some constant K. This is a basic result, which says that the distribution of the number of customers found at the arrival instants is geometric for the case k \ge m - 1. It remains for us to find \sigma and K, as well as r_k for k < m - 1.
Our intuitive reasoning (which may easily be made rigorous by results from renewal theory) has led us to the basic equation (6.19). We could have "pulled this out of a hat" by guessing that the solution to Eq. (6.6) for the probability vector r \triangleq [r_0, r_1, r_2, \ldots] might perhaps be of the form

    r_k = K\sigma^k        (6.20)

This flash of brilliance would, of course, have been correct (as our calculations have just shown); once we suspect this result we may easily verify it by considering the kth equation (k \ge m) in the set (6.6), which reads

    r_k = K\sigma^k = \sum_{i=0}^\infty r_i p_{ik} = \sum_{i=k-1}^\infty r_i p_{ik} = \sum_{i=k-1}^\infty K\sigma^i \beta_{i+1-k}

Canceling the constant K as well as common factors of \sigma, we have

    \sigma = \sum_{i=k-1}^\infty \sigma^{i+1-k} \beta_{i+1-k}

Changing the index of summation we finally have

    \sigma = \sum_{n=0}^\infty \sigma^n \beta_n

Of course we know \beta_n from Eq. (6.12), which permits the following calculation:

    \sigma = \sum_{n=0}^\infty \sigma^n \int_{t=0}^\infty \frac{(m\mu t)^n}{n!} e^{-m\mu t}\, dA(t) = \int_0^\infty e^{-(m\mu - m\mu\sigma)t}\, dA(t)

This equation must be satisfied if our assumed ("calculated") guess is to be correct. However, we recognize this last integral as the Laplace transform for the pdf of interarrival times evaluated at a special point; thus we have

    \sigma = A^*(m\mu - m\mu\sigma)        (6.21)

This functional equation for \sigma must be satisfied if our assumed solution is to be acceptable. It can be shown [TAKA 62] that so long as \rho < 1 there is a unique real solution for \sigma in the range 0 < \sigma < 1, and it is this solution which we seek; note that \sigma = 1 must always be a solution of the functional equation since A*(0) = 1.
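In practice the root of Eq. (6.21) is easy to find by simple fixed-point iteration, \sigma \leftarrow A^*(m\mu - m\mu\sigma). The sketch below (ours, with test parameters of our own choosing) exercises it on D/M/1, where deterministic interarrival times give A*(s) = e^{-s/\lambda}, and on M/M/1, where the answer must be \sigma = \rho.

```python
# Fixed-point solution of sigma = A*(m*mu - m*mu*sigma), Eq. (6.21).
from math import exp

def solve_sigma(A_star, m, mu, tol=1e-12):
    sigma = 0.5                         # any starting point in (0, 1)
    while True:
        nxt = A_star(m * mu - m * mu * sigma)
        if abs(nxt - sigma) < tol:
            return nxt
        sigma = nxt

lam, mu, m = 0.5, 1.0, 1                # rho = 0.5

print(solve_sigma(lambda s: exp(-s / lam), m, mu))    # D/M/1: about 0.2032
print(solve_sigma(lambda s: lam / (s + lam), m, mu))  # M/M/1: sigma = rho = 0.5
```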
We now have the defining equation for \sigma, and it remains for us to find the unknown constant K as well as r_k for k = 0, 1, 2, \ldots, m - 2. Before we settle these questions, however, let us establish some additional important results for the G/M/m system using Eq. (6.19), our basic result so far. This basic result establishes that the distribution for number in system is geometric in the range k \ge m - 1. Working from there, let us now calculate the probability that an arriving customer must wait for service. Clearly

    P[\text{arrival queues}] = \sum_{k=m}^\infty r_k = \sum_{k=m}^\infty K\sigma^k = \frac{K\sigma^m}{1 - \sigma}        (6.22)

1- a

(T his operation is permissible since 0 < a < I as discussed above.) The


conditional probability of finding a queue length of size n, given that a
customer must queue, is

n arri val queues]

P[queue size

n I arrival queues]

= --="---m

and so

m+n

P[arnval queue s]

Ka n+ m

Ka /(1 - a)

= (1 -

P[queue size

a)a n

0 - (6.23)

Thus we conclude that the conditional queue length distribution (given that a
queue ex ists) is geometric for any G/Mlm system.


6.3. CONDITIONAL DISTRIBUTION OF WAITING TIME

Let us now seek the distribution of queueing time, given that a customer must queue. From Eq. (6.23), a customer who queues will find m + n in the system with probability (1 - \sigma)\sigma^n. Under such conditions our arriving customer must wait until n + 1 customers depart from the system before he is allowed into service, and this interval will constitute his waiting time. Thus we are asking for the distribution of an interval whose length is made up of the sum of n + 1 independently and exponentially distributed random variables (each with parameter m\mu). The resulting convolution is most easily expressed as a transform, which gives rise to the usual product of transforms. Thus, defining W*(s) to be the Laplace transform of the queueing-time pdf as in Eq. (5.103) (i.e., as E[e^{-s\tilde{w}}]), and defining

    W^*(s \mid n) \triangleq E[e^{-s\tilde{w}} \mid \text{arrival queues and queue size} = n]        (6.24)

we have

    W^*(s \mid n) = \left( \frac{m\mu}{s + m\mu} \right)^{n+1}        (6.25)

But clearly

    W^*(s \mid \text{arrival queues}) = \sum_{n=0}^\infty W^*(s \mid n)\, P[\text{queue size} = n \mid \text{arrival queues}]

and so from Eqs. (6.25) and (6.23) we have

    W^*(s \mid \text{arrival queues}) = \sum_{n=0}^\infty (1 - \sigma)\sigma^n \left( \frac{m\mu}{s + m\mu} \right)^{n+1} = \frac{(1 - \sigma)\, m\mu}{s + m\mu - m\mu\sigma}

Luckily, we recognize the inverse of this Laplace transform by inspection, thereby yielding the following conditional pdf for queueing time:

    w(y \mid \text{arrival queues}) = (1 - \sigma)\, m\mu\, e^{-m\mu(1 - \sigma)y}        y \ge 0        (6.26)
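As a quick check (ours, with arbitrary test values), the collapse of the geometric mixture above can be verified numerically at any test point s:

```python
# Numerical check of the geometric-mixture collapse leading to Eq. (6.26).
m_mu, sigma, s = 1.5, 0.4, 0.7           # arbitrary test values (ours)

mixture = sum((1 - sigma) * sigma**n * (m_mu / (s + m_mu))**(n + 1)
              for n in range(200))
closed = (1 - sigma) * m_mu / (s + m_mu - m_mu * sigma)
print(mixture, closed)                   # the two values agree
```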

Quite a surprise! The conditional pdf for queueing time is exponentially distributed for the system G/M/m!

Thus far we have two principal results: first, the conditional queue size is geometrically distributed with parameter \sigma, as given in Eq. (6.23); and second, the conditional pdf for queueing time is exponentially distributed with parameter m\mu(1 - \sigma), as given in Eq. (6.26). The parameter \sigma is found as the unique root in the range 0 < \sigma < 1 of the functional equation (6.21). We are still searching for the distribution r_k and have carried that solution to the point of Eq. (6.20); we have as yet to evaluate the constant K as well as the first m - 1 terms in that distribution. Before we proceed with these last steps, let us study an important special case.
6.4. THE QUEUE G/M/1

This is perhaps the most important system and forms the "dual" to the system M/G/1. Since m = 1, Eq. (6.19) gives us the solution for r_k for all values of k, that is,

    r_k = K\sigma^k        k = 0, 1, 2, \ldots

K is now easily evaluated since these probabilities must sum to unity. From this we obtain immediately

    r_k = (1 - \sigma)\sigma^k        k = 0, 1, 2, \ldots        (6.27)

where, of course, \sigma is the unique root of

    \sigma = A^*(\mu - \mu\sigma)        (6.28)

in the range 0 < \sigma < 1. Thus the system G/M/1 gives rise to a geometric distribution for the number of customers found in the system by an arrival; this applies as an unconditional statement, regardless of the form of the interarrival-time distribution. We have already seen an example of this in Eq. (4.42) for the system E_r/M/1. We comment that the state probabilities p_k = P[k in system] differ from Eq. (6.27) in that p_0 = 1 - \rho whereas r_0 = 1 - \sigma, and p_k = \rho(1 - \sigma)\sigma^{k-1} = \rho r_{k-1} for k = 1, 2, \ldots {see Eq. (3.24), p. 209 of [COHE 69]}; in the M/G/1 queue we found p_k = r_k.

A customer will be forced to wait for service with probability 1 - r_0 = \sigma, and so we may use Eq. (6.26) to obtain the unconditional distribution of waiting time as follows (where we define A to be the event "arrival queues" and A' the complementary event):

    W(y) = P[\text{queueing time} \le y]
         = 1 - P[\text{queueing time} > y \mid A]P[A] - P[\text{queueing time} > y \mid A']P[A']        (6.29)

Clearly, the last term in this equation is zero; the remaining conditional probability in this last expression may be obtained by integrating Eq. (6.26) from y to infinity for m = 1; this computation gives e^{-\mu(1-\sigma)y}, and since \sigma is the


probability of queueing we have immediately from Eq. (6.29) that

W(y) = 1 − σe^{−μ(1−σ)y}        y ≥ 0        (6.30)

We have the remarkable conclusion that the unconditional waiting-time
distribution is exponential (with a jump of size 1 − σ at the origin) for the
system G/M/1. If we compare this result to Eq. (5.123) and Figure 5.9, which
gives the waiting-time distribution for M/M/1, we see that the results agree
with σ replacing ρ. That is, the queueing-time distribution for G/M/1 is of
the same form as for M/M/1!
By straightforward calculation, we also have that the mean wait in G/M/1 is

W = σ / [μ(1 − σ)]        (6.31)

Example

Let us now illustrate this method for the example M/M/1. Since A(t) =
1 − e^{−λt} (t ≥ 0) we have immediately

A*(s) = λ / (s + λ)        (6.32)

Using Eq. (6.28) we find that σ must satisfy

σ = λ / (μ − μσ + λ)

or

μσ² − (μ + λ)σ + λ = 0

which yields

(σ − 1)(μσ − λ) = 0

Of these two solutions for σ, the case σ = 1 is unacceptable due to stability
conditions (0 < σ < 1) and therefore the only acceptable solution is

σ = λ/μ = ρ        M/M/1        (6.33)

which yields from Eq. (6.27)

r_k = (1 − ρ)ρ^k        (6.34)

This, of course, is our usual solution for M/M/1. Further, using σ = ρ as the
value for σ in our waiting-time distribution [Eq. (6.30)] we come up
immediately with the known solution given in Eq. (5.123).


Example

As a second (slightly more interesting) example let us consider a G/M/1
system with an interarrival-time distribution such that

A*(s) = 2μ² / [(s + μ)(s + 2μ)]        (6.35)

Note that this corresponds to an E₂/M/1 system in which the two arrival
stages have different death rates; we choose these rates to be multiples
of the service rate μ. As always our first step is to evaluate σ from Eq. (6.28)
and so we have

σ = 2μ² / [(μ − μσ + μ)(μ − μσ + 2μ)]

This leads directly to the cubic equation

σ³ − 5σ² + 6σ − 2 = 0

We know for sure that σ = 1 is always a root of Eq. (6.28), and this permits
the straightforward factoring

(σ − 1)(σ − 2 − √2)(σ − 2 + √2) = 0

Of these three roots it is clear that only σ = 2 − √2 is acceptable (since
0 < σ < 1 is required). Therefore Eq. (6.27) immediately gives the
distribution for number in system (seen by arrivals)

r_k = (√2 − 1)(2 − √2)^k        k = 0, 1, 2, ...        (6.36)

Similarly we find

W(y) = 1 − (2 − √2)e^{−μ(√2−1)y}        y ≥ 0        (6.37)

for the waiting-time distribution.
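Numerically, the root σ of Eq. (6.28) is easy to obtain. The following sketch
(an illustration of ours, not part of the original text; the function name
sigma_root is our own) iterates the map σ ← A*(μ − μσ) from σ = 0, which
converges to the root in (0, 1) when ρ < 1; it reproduces σ = ρ for M/M/1 and
σ = 2 − √2 for the example above.

```python
import math

def sigma_root(A_star, mu, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration for sigma = A*(mu - mu*sigma), Eq. (6.28).

    A_star is the Laplace transform A*(s) of the interarrival-time pdf,
    supplied here as an ordinary Python callable.
    """
    sigma = 0.0
    for _ in range(max_iter):
        nxt = A_star(mu * (1.0 - sigma))
        if abs(nxt - sigma) < tol:
            return nxt
        sigma = nxt
    raise RuntimeError("no convergence; check that rho < 1")

mu = 1.0

# M/M/1 with lambda = 0.5: A*(s) = lambda/(s + lambda), so sigma = rho = 0.5
lam = 0.5
print(sigma_root(lambda s: lam / (s + lam), mu))

# The example of Eq. (6.35): A*(s) = 2 mu^2 / ((s + mu)(s + 2 mu))
print(sigma_root(lambda s: 2 * mu**2 / ((s + mu) * (s + 2 * mu)), mu))
print(2 - math.sqrt(2))   # exact value from the factoring above
```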


Let us now return to the more genera l system G/M /m .

6.5. THE QUEUE G/M/m


At the end of Section 6.3 we pointed out that the only remaining unknowns
for the general G/M/m solution were: K, an unknown constant, and the
m − 1 "boundary" probabilities r_0, r_1, ..., r_{m−2}. That is, our solution


appears in the form of Eq. (6.20); we may factor out the term Kσ^{m−1} to obtain

r_k = { J R_k            k = 0, 1, ..., m − 2
      { J σ^{k−m+1}      k ≥ m − 1        (6.38)

where

R_k ≜ r_k / (Kσ^{m−1})        k = 0, 1, ..., m − 2        (6.39)

Furthermore, for convenience we define

J ≜ Kσ^{m−1}        (6.40)

We have as yet not used the first m − 1 equations represented by the matrix
equation (6.6). We now require them for the evaluation of our unknown
terms (of which there are m − 1). In terms of our one-step transition
probabilities p_{ik} we then have

R_k = Σ_{i=k−1}^{∞} R_i p_{ik}        k = 0, 1, ..., m − 2

where we may extend the definition for R_k in Eq. (6.39) beyond k = m − 2
by use of Eq. (6.19), that is, R_i = σ^{i−m+1} for i ≥ m − 1. The tail of the
sum above may be evaluated to give

R_k = Σ_{i=k−1}^{m−2} R_i p_{ik} + Σ_{i=m−1}^{∞} σ^{i+1−m} p_{ik}

Solving for R_{k−1}, the lowest-order term present, we have

R_{k−1} = [ R_k − Σ_{i=k}^{m−2} R_i p_{ik} − Σ_{i=m−1}^{∞} σ^{i+1−m} p_{ik} ] / p_{k−1,k}        (6.41)

for k = 1, 2, ..., m − 1. The set of equations (6.41) is a triangular set in the
unknowns R_k; in particular we may start with the fact that R_{m−1} = 1 [see
Eq. (6.38)] and then solve recursively for R_{m−2}, R_{m−3}, ..., R_1, R_0 in
order. Finally we may use the conservation of probability to evaluate
the constant J (this being equivalent to evaluating K) as

1 = J Σ_{k=0}^{m−2} R_k + J Σ_{k=m−1}^{∞} σ^{k−m+1}

or

J = 1 / [ 1/(1 − σ) + Σ_{k=0}^{m−2} R_k ]        (6.42)


This then provides a complete prescription for evaluating the distribution of
the number of customers in the system. We point out that Takács [TAKA 62]
gives an explicit (albeit complex) expression for these boundary probabilities.
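As a concrete rendering of this prescription (a sketch of ours, not from the
original text), the code below carries out the triangular recursion of Eq.
(6.41) and the normalization of Eq. (6.42). The function name, the callable
p supplying the one-step transition probabilities of Eqs. (6.9) and (6.13),
and the truncation parameter tail_terms are all our own devices.

```python
def gmm_arrival_distribution(m, sigma, p, tail_terms=500):
    """Distribution r_k seen by arrivals in G/M/m.

    m          : number of servers
    sigma      : root of Eq. (6.21), 0 < sigma < 1
    p          : callable p(i, k) giving one-step transition probabilities
    tail_terms : truncation point for the infinite sums in Eq. (6.41)
    """
    R = {m - 1: 1.0}                            # R_{m-1} = 1 by Eq. (6.38)
    for k in range(m - 1, 0, -1):               # triangular recursion, Eq. (6.41)
        mid = sum(R[i] * p(i, k) for i in range(k, m - 1))
        tail = sum(sigma ** (i + 1 - m) * p(i, k)
                   for i in range(m - 1, tail_terms))
        R[k - 1] = (R[k] - mid - tail) / p(k - 1, k)

    # conservation of probability, Eq. (6.42)
    J = 1.0 / (1.0 / (1.0 - sigma) + sum(R[k] for k in range(m - 1)))

    def r(k):                                   # Eq. (6.38)
        return J * R[k] if k <= m - 2 else J * sigma ** (k - m + 1)
    return r
```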
Let us now determine the distribution of waiting time in this system [we
already have seen the conditional distribution in Eq. (6.26)]. First we have the
probability that an arriving customer need not queue, given by

W(0) = Σ_{k=0}^{m−1} r_k = J Σ_{k=0}^{m−1} R_k        (6.43)

On the other hand, if a customer arrives to find k ≥ m others in the system
he must wait until exactly k − m + 1 customers depart before he may enter
service. Since there are m servers working continuously during his wait, the
interdeparture times must be exponentially distributed with parameter
mμ, and so his waiting time must be of the form of a (k − m + 1)-stage
Erlangian distribution as given in Eq. (2.147). Thus for this case (k ≥ m) we
may write

P[w̃ ≤ y | customer finds k in system] = ∫₀^y [ mμ(mμx)^{k−m} / (k − m)! ] e^{−mμx} dx

If we now remove the condition on k we may write the unconditional
distribution as

W(y) = W(0) + J Σ_{k=m}^{∞} ∫₀^y [ mμ(mμx)^{k−m} σ^{k−m+1} / (k − m)! ] e^{−mμx} dx
     = W(0) + Jσ ∫₀^y mμ e^{−mμx(1−σ)} dx        (6.44)

We may now use the expression for J in Eq. (6.42) and for W(0) in Eq. (6.43)
and carry out the integration in Eq. (6.44) to obtain

W(y) = 1 − [Jσ/(1 − σ)] e^{−mμ(1−σ)y}        y ≥ 0        (6.45)

This is the final solution for our waiting-time distribution and shows that in
the general case G/M/m we still have the exponential distribution (with an
accumulation point at the origin) for waiting time!
We may calculate the average waiting time either from Eq. (6.45) or as
follows. As we saw, a customer who arrives to find k ≥ m others in the
system must wait until k − m + 1 services are complete, each of which
takes on the average 1/(mμ) sec. We now sum over all those cases where our


customer must wait to obtain

W = Σ_{k=m}^{∞} [(k − m + 1)/(mμ)] r_k

But in this range we know that r_k = Kσ^k = Jσ^{k−m+1} and so

W = [J/(mμ)] Σ_{k=m}^{∞} (k − m + 1)σ^{k−m+1}

and this is easily calculated to yield

W = Jσ / [mμ(1 − σ)²]

6.6. THE QUEUE G/M/2

Let us see how far we can get with the system G/M/2. From Eq. (6.19)
we have immediately

r_k = Kσ^k        k = 1, 2, ...

Conserving probability we find

Σ_{k=0}^{∞} r_k = 1 = r_0 + Σ_{k=1}^{∞} Kσ^k

This yields the following relationship between K and r_0:

K = (1 − r_0)(1 − σ) / σ        (6.46)

Our task now is to find another relation between K and r_0. This we may do
from Eq. (6.41), which states

R_0 = [ R_1 − Σ_{i=1}^{∞} σ^{i−1} p_{i1} ] / p_{01}        (6.47)

But R_1 = 1. The denominator is given by Eq. (6.9), namely,

p_{01} = ∫₀^∞ [1 − e^{−μt}]⁰ e^{−μt} dA(t)

This we recognize as

p_{01} = A*(μ)        (6.48)

Regarding the one-step transition probabilities in the numerator sum of
Eq. (6.47), we find they break into two regions: the term p_{11} must be
calculated from Eq. (6.9) and the terms p_{i1} for i = 2, 3, 4, ... must be
calculated from Eq. (6.13). Proceeding we have

p_{11} = ∫₀^∞ 2[1 − e^{−μt}] e^{−μt} dA(t)

Again we recognize this as the transform

p_{11} = 2A*(μ) − 2A*(2μ)        (6.49)

Also for i = 2, 3, 4, ..., we have

p_{i1} = ∫₀^∞ [ ∫₀^t (2μ(2μy)^{i−2}/(i − 2)!) e^{−2μy} · 2e^{−μ(t−y)}(1 − e^{−μ(t−y)}) dy ] dA(t)        (6.50)

Substituting these last equations into Eq. (6.47) we then have


R_0 = (1/A*(μ)) [ 1 − 2A*(μ) + 2A*(2μ) − Σ_{i=2}^{∞} σ^{i−1} p_{i1} ]        (6.51)

The summation in this equation may be carried out within the integral signs
of Eq. (6.50) to give

Σ_{i=2}^{∞} σ^{i−1} p_{i1} = 2A*(2μ) + [ 2A*(2μ − 2μσ) − 4σA*(μ) ] / (2σ − 1)        (6.52)

But from Eq. (6.21) we recognize that σ = A*(2μ − 2μσ) and so we have

Σ_{i=2}^{∞} σ^{i−1} p_{i1} = 2A*(2μ) + [2σ/(2σ − 1)] [ 1 − 2A*(μ) ]

Substituting back into Eq. (6.51) we find

R_0 = [ 2A*(μ) − 1 ] / [ (2σ − 1)A*(μ) ]

However from Eq. (6.39) we know that

R_0 = r_0 / (Kσ)

and so we may express r_0 as

r_0 = Kσ[1 − 2A*(μ)] / [ (1 − 2σ)A*(μ) ]        (6.53)

Thus Eqs. (6.46) and (6.53) give us two equations in our two unknowns K
and r_0, which when solved simultaneously lead to

r_0 = (1 − σ)[1 − 2A*(μ)] / [ 1 − σ − A*(μ) ]

K = A*(μ)(1 − σ)(1 − 2σ) / ( σ[1 − σ − A*(μ)] )
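These two expressions are easy to exercise numerically. The sketch below
(ours, not in the original; the function name is hypothetical) solves
σ = A*(2μ − 2μσ) and evaluates r_0 and K. For Poisson arrivals with
A*(s) = λ/(s + λ), the value of σ reduces to ρ = λ/2μ, and r_0 should agree
with the M/M/2 state probability p_0 = (1 − ρ)/(1 + ρ), since for Poisson
arrivals r_k = p_k.

```python
def gm2_constants(A_star, mu, tol=1e-12, max_iter=10_000):
    """sigma, r0, K for G/M/2 from Eq. (6.21) and the two formulas above."""
    sigma = 0.0
    for _ in range(max_iter):                  # sigma = A*(2mu - 2mu*sigma)
        nxt = A_star(2.0 * mu * (1.0 - sigma))
        done = abs(nxt - sigma) < tol
        sigma = nxt
        if done:
            break
    Amu = A_star(mu)
    r0 = (1 - sigma) * (1 - 2 * Amu) / (1 - sigma - Amu)
    K = Amu * (1 - sigma) * (1 - 2 * sigma) / (sigma * (1 - sigma - Amu))
    return sigma, r0, K

lam, mu = 1.0, 2.0                             # Poisson arrivals, rho = 1/4
sigma, r0, K = gm2_constants(lambda s: lam / (s + lam), mu)
print(sigma, r0, K)                            # -> 0.25, 0.6, 1.2
print((1 - 0.25) / (1 + 0.25))                 # M/M/2 value of p0 = 0.6
```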


Comparing Eq. (6.56) with our results from Chapter 3 [Eqs. (3.37) and (3.39)]
we find that they agree for m = 2.
This completes our study of the G/M/m queue. Some further results of
interest may be found in [DESM 73]. In the next chapter, we view transforms
as probabilities and gain considerable reduction in the analytic effort required
to solve equilibrium and transient queueing problems.
REFERENCES

COHE 69  Cohen, J. W., The Single Server Queue, Wiley (New York), 1969.
DESM 73  De Smit, J. H. A., "On the Many Server Queue with Exponential
         Service Times," Advances in Applied Probability, 5, 170–182 (1973).
KEND 51  Kendall, D. G., "Some Problems in the Theory of Queues," Journal
         of the Royal Statistical Society, Ser. B, 13, 151–185 (1951).
TAKA 62  Takács, L., Introduction to the Theory of Queues, Oxford University
         Press (New York), 1962.
EXERCISES

6.1. Prove Eq. (6.13). [HINT: Condition on an interarrival time of duration
     t and then further condition on the time (≤ t) it will take to empty the
     queue.]

6.2. Consider E₂/M/1 (with infinite queueing room).
     (a) Solve for r_k in terms of σ.
     (b) Evaluate σ explicitly.

6.3. Consider M/M/m.
     (a) How do p_k and r_k compare?
     (b) Compare Eqs. (6.22) and (3.40).

6.4. Prove Eq. (6.31).

6.5. Show that Eq. (6.52) follows from Eq. (6.50).


6.6. Consider an H₂/M/1 system in which λ₁ = 2, λ₂ = 1, μ = 2, and
     α₁ = 5/8.
     (a) Find σ.
     (b) Find r_k.
     (c) Find w(y).
     (d) Find W.

6.7. Consider a D/M/1 system with μ = 2 and with the same ρ as in the
     previous exercise.
     (a) Find σ (correct to two decimal places).

     (b) Find r_k.
     (c) Find w(y).
     (d) Find W.

6.8. Consider a G/M/1 queueing system with room for at most two customers
     (one in service plus one waiting). Find r_k (k = 0, 1, 2) in terms of μ
     and A*(s).

6.9. Consider a G/M/1 system in which the cost of making a customer wait
     y sec is

     c(y) = ae^{by}

     (a) Find the average cost of queueing for a customer.
     (b) Under what conditions will the average cost be finite?

7
The Method of Collective Marks

When one studies stochastic processes such as in queueing theory, one
finds that the work divides into two parts. The first part typically requires a
careful probabilistic argument in order to arrive at expressions involving the
random variables of interest.* The second part is then one of analysis in which
the formal manipulation of symbols takes place either in the original domain
or in some transformed domain. Whereas the probabilistic arguments
typically must be made with great care, they nevertheless leave one with a
comfortable feeling that the "physics" of the situation is constantly within
one's understanding and grasp. On the other hand, whereas the analytic
manipulations that one carries out in the second part tend to be rather
straightforward (albeit difficult) formal operations, one is unfortunately left
with the uneasy feeling that these manipulations relate back to the original
problem in no clearly understandable fashion. This "nonphysical" aspect to
problem solving typically is taken on when one moves into the domain of
transforms (either Laplace or z-transforms).
In this chapter we demonstrate that one may deal with transforms and still
maintain a handle on the probabilistic arguments taking place as these
transforms are manipulated. There are two separate operations involved: the
"marking" of customers; and the observation of "catastrophe" processes.
Together these methods are referred to as the method of collective marks.
Both operations need not necessarily be used simultaneously, and we study
them separately below. This material is drawn principally from [RUNN 65];
these ideas were introduced by van Dantzig [VAN 48] in order to expose the
probabilistic interpretation for transforms.
7.1. THE MARKING OF CUSTOMERS

Assume that, at the entrance to a queueing system, there is a gremlin who
marks (i.e., tags) arriving customers with the following probabilities:

P[customer is marked] = 1 − z        (7.1)
P[customer is not marked] = z        (7.2)

* As, for example, the arguments leading up to Eqs. (5.31) and (6.1).

where 0 ≤ z ≤ 1. We assume that the gremlin marks customers with these
probabilities independent of all other aspects of the queueing process. As we
shall see below, this marking process allows us to create generating functions
in a very natural way.
It is most instructive if we illustrate the use of this marking process by
examples:

Example 1: Poisson Arrivals

We first consider a Poisson arrival process with a mean arrival rate of λ
customers per second. Assume that customers are marked as above. Let us
consider the probability

q(z, t) ≜ P[no marked customers arrive in (0, t)]        (7.3)

It is clear that k customers will arrive in the interval (0, t) with probability
(λt)^k e^{−λt}/k!. Moreover, with probability z^k, none of these k customers will
be marked; this last is true since marking takes place independently among
customers. Now summing over all values of k we have immediately that

q(z, t) = Σ_{k=0}^{∞} [(λt)^k e^{−λt} / k!] z^k = e^{λt(z−1)}        (7.4)

Going back to Eq. (2.134) we see that Eq. (7.4) is merely the generating
function for a Poisson arrival process. We thus conclude that the generating
function for this arrival process may also be interpreted as the probabilistic
quantity expressed in Eq. (7.3). This will not be the first time we may give a
probabilistic interpretation for a generating function!
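Equation (7.4) also invites a direct simulation check. The sketch below (an
illustration of ours, not part of the text) generates Poisson arrivals in (0, t),
marks each one independently with probability 1 − z, and estimates the
probability that none is marked.

```python
import math
import random

def q_estimate(lam, z, t, trials=200_000):
    """Monte Carlo estimate of q(z, t) = P[no marked customers in (0, t)]."""
    hits = 0
    for _ in range(trials):
        unmarked = True
        s = random.expovariate(lam)            # successive arrival instants
        while s < t:
            if random.random() > z:            # marked with probability 1 - z
                unmarked = False
                break
            s += random.expovariate(lam)
        hits += unmarked
    return hits / trials

lam, z, t = 1.5, 0.7, 2.0
print(q_estimate(lam, z, t))                   # simulation
print(math.exp(lam * t * (z - 1)))             # Eq. (7.4)
```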

Example 2: M/M/∞

We consider the birth-death queueing system with an infinite number of
servers. We also assume at time t = 0 that there are i customers present. The
parameters of our system as usual are λ and μ [i.e., A(t) = 1 − e^{−λt} and
B(x) = 1 − e^{−μx}].
We are interested in the quantity

P_k(t) = P[k customers in the system at time t]        (7.5)

and we define its generating function as we did in Eq. (2.153) to be

P(z, t) = Σ_{k=0}^{∞} P_k(t) z^k        (7.6)


Once again we mark customers according to Eqs. (7.1) and (7.2). In analogy
with Example 1, we recognize that Eq. (7.6) may be interpreted as the
probability that the system contains no marked customers at time t (where the
term z^k again represents the probability that none of the k customers present
is marked). Here then is our crucial observation: We may calculate P(z, t)
directly by finding the probability that there are no marked customers in the
system at time t, rather than calculating P_k(t) and then finding its z-transform!
We proceed as follows: We need merely find the probability that none of
the customers still present in the system at time t is marked, and this we do by
accounting for all customers present at time 0 as well as all customers who
arrive in the interval (0, t). For any customer present at time 0 we may
calculate the probability that he is still present at time t and is marked as
(1 − z)[1 − B(t)], where the first factor gives the probability that our
customer was marked in the first place and the second factor gives the
probability that his service time is greater than t. Clearly, then, this quantity
subtracted from unity is the probability that a customer originally present is
not a marked customer present at time t; and so we have

P[customer present initially is not a marked customer present at time t]
    = 1 − (1 − z)e^{−μt}

Now for the new customers who enter in the interval (0, t), we have as
before P[k arrivals in (0, t)] = (λt)^k e^{−λt}/k!. Given that k have arrived in this
interval, their arrival instants are uniformly distributed over this interval
[see Eq. (2.136)]. Let us consider one such arriving customer and assume that
he arrives at a time τ ≤ t. Such a customer will not be a marked customer
present at time t with probability

P[new arrival is not a marked customer present at time t | given he
arrived at τ] = 1 − (1 − z)[1 − B(t − τ)]        (7.7)

However, we have that

P[arrival time ≤ τ] = τ/t        for 0 ≤ τ ≤ t

and so

P[new arrival still in system at t] = ∫_{τ=0}^{t} e^{−μ(t−τ)} dτ/t = (1 − e^{−μt})/(μt)        (7.8)

Unconditioning the arrival time from Eq. (7.7) as shown in Eq. (7.8) we have

P[new arrival is not a marked customer present at t] = 1 − (1 − z)(1 − e^{−μt})/(μt)


Thus we may calculate the probability that there are no marked customers
at time t as follows:

P(z, t) = Σ_{k=0}^{∞} P[k arrive in (0, t)]
          × {P[new arrival is not a marked customer present at t]}^k
          × {P[initial customer is not a marked customer present at t]}^i

Using our established relationships we arrive at

P(z, t) = Σ_{k=0}^{∞} [(λt)^k e^{−λt} / k!] [1 − (1 − z)(1 − e^{−μt})/(μt)]^k [1 − (1 − z)e^{−μt}]^i

which then gives the known result

P(z, t) = [1 − (1 − z)e^{−μt}]^i e^{−(λ/μ)(1−z)[1−e^{−μt}]}        (7.9)

It should be clear to the student that the usual method for obtaining this
result would have been extremely complex.
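The collective-marks shortcut can be tested against brute force; here is a
small simulation of ours (not in the original) that estimates P(z, t) = E[z^N(t)]
for M/M/∞ with i initial customers and compares it with Eq. (7.9).

```python
import math
import random

def P_estimate(lam, mu, z, t, i, trials=100_000):
    """Monte Carlo estimate of P(z, t) for M/M/inf, started with i customers."""
    total = 0.0
    for _ in range(trials):
        # initial customers: each still present iff its service outlasts t
        n = sum(1 for _ in range(i) if random.expovariate(mu) > t)
        s = random.expovariate(lam)            # arrival instants in (0, t)
        while s < t:
            if random.expovariate(mu) > t - s:  # still in service at time t
                n += 1
            s += random.expovariate(lam)
        total += z ** n
    return total / trials

lam, mu, z, t, i = 1.0, 0.8, 0.6, 1.5, 3
exact = (1 - (1 - z) * math.exp(-mu * t)) ** i \
        * math.exp(-(lam / mu) * (1 - z) * (1 - math.exp(-mu * t)))
print(P_estimate(lam, mu, z, t, i), exact)     # the two should nearly agree
```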
Example 3: M/G/1

In this example we consider the FCFS M/G/1 system. Recall that the
random variables w_n, t_{n+1}, t_{n+2}, ..., x_n, x_{n+1}, ... are all independent of
each other. As usual, we define B*(s) and W_n*(s) as the Laplace transforms
for the service-time pdf b(x) and the waiting-time pdf w_n(y) for C_n,
respectively. We define the event

{no M in w_n} ≜ {no customers who arrive during the waiting
                 time w_n of C_n are marked}        (7.10)

We wish to find the probability of this event, that is, P[no M in w_n].
Conditioning on the number of arriving customers and on the waiting time w_n,
and then removing these conditions, we have

P[no M in w_n] = ∫₀^∞ Σ_{k=0}^{∞} [(λy)^k / k!] e^{−λy} z^k dW_n(y) = ∫₀^∞ e^{−λy(1−z)} dW_n(y)

We recognize the integral as W_n*(λ − λz) and so

P[no M in w_n] = W_n*(λ − λz)        (7.11)

Thus once again we have a very simple probabilistic interpretation for the
(Laplace) transform of an important distribution. By identical arguments
we may arrive at

P[no M in x_n] = B*(λ − λz)        (7.12)


This last gives us another interpretation for an old expression we have seen
in Chapter 5.
Now comes a startling insight! It is clear that the arrival of customers
during the waiting time of C_n and the arrival of customers during the service
time of C_n must be independent events since these are nonoverlapping
intervals and our arrival process is memoryless. Thus the events of no marked
customers arriving in each of these two disjoint intervals of time must be
independent, and so the probability that no marked customers arrive in the
union of these two disjoint intervals must be the product of the probabilities
that none such arrive in each of the intervals separately. Thus we may write

P[no M in w_n + x_n] = P[no M in w_n]P[no M in x_n]        (7.13)
                     = W_n*(λ − λz)B*(λ − λz)        (7.14)

This last result is pleasing in two ways. First, because it says that the
probability of two independent joint events is equal to the product of the
probabilities of the individual events [Eq. (7.13)]. Second, because it says
that the transform of the pdf of the sum of two independent random variables
is equal to the product of the transforms of the pdfs of the individual random
variables [Eq. (7.14)]. Thus two familiar results (regarding disjoint events
and regarding sums of independent random variables) have led to a
meaningful new insight, namely, that multiplication of transforms implies not
only the sum of two independent random variables, but also implies the product
of the probabilities of two independent events! Not often are we privileged to
see such fundamental principles related.
Let us now continue with the argument. At the moment we have Eq. (7.14)
as one means for expressing the probability that no marked customers arrive
during the interval w_n + x_n. We now proceed to calculate this probability by
a second argument. Of course we have

P[no M in w_n + x_n] = P[no M in w_n + x_n and C_{n+1} marked]
                     + P[no M in w_n + x_n and C_{n+1} not marked]        (7.15)

Furthermore, we have

P[no M in w_n + x_n and C_{n+1} marked] = 0        if w_{n+1} > 0

since if C_{n+1} must wait, then he must arrive in the interval w_n + x_n and it is
impossible for him to be marked and still to have the event {no M in w_n +
x_n}. Thus the first term on the right-hand side of Eq. (7.15) must be
P[w_{n+1} = 0](1 − z) where this second factor is merely the probability that
C_{n+1} is marked. Now consider the second term on the right-hand side of Eq.
(7.15); as shown in Figure 7.1 it is clear that no customers arrive between C_n and
C_{n+1}, and therefore the customers of interest (namely, those arriving after

Figure 7.1 Arrivals of interest during w_n + x_n.


C_{n+1} does, but yet in the interval w_n + x_n) must arrive in the interval w_{n+1}
since this interval will end when w_n + x_n ends. Thus this second term must be

P[no M in w_n + x_n and C_{n+1} not marked] = P[no M in w_{n+1}]
                                              × P[C_{n+1} not marked]
                                            = P[no M in w_{n+1}] z

From these observations and the result of Eq. (7.11) we may write Eq. (7.15)
as

P[no M in w_n + x_n] = (1 − z)P[C_{n+1} arrives after w_n + x_n] + z W*_{n+1}(λ − λz)

Now if we think of a second separate marking process in which all the
customers are marked (with an additional tag) with probability one, and ask
that no such marked customers arrive during the interval w_n + x_n, then we are
asking that no customers at all arrive during this interval (which is the same
as asking that C_{n+1} arrive after w_n + x_n); we may calculate this using Eq.
(7.14) with z = 0 (since this guarantees that all customers be marked) and
obtain W_n*(λ)B*(λ) for this probability. Thus we arrive at

P[no M in w_n + x_n] = (1 − z)W_n*(λ)B*(λ) + z W*_{n+1}(λ − λz)        (7.16)


We now have two expressions for P[no M in w_n + x_n], which may be
equated to obtain

W_n*(λ − λz)B*(λ − λz) = (1 − z)W_n*(λ)B*(λ) + z W*_{n+1}(λ − λz)        (7.17)

The interesting part is over. The use of the method of collective marks has
brought us to Eq. (7.17), which is not easily obtained by other methods, but
which in fact checks with the result due to other methods. Rather than dwell
on the techniques required to carry this equation further, we refer the reader
to Runnenburg [RUNN 65] for additional details of the time-dependent
solution.

Now, for ρ < 1, we have an ergodic process with W*(s) = lim_{n→∞} W_n*(s).
Equation (7.17) then reduces to

W*(λ − λz)B*(λ − λz) = (1 − z)W*(λ)B*(λ) + z W*(λ − λz)

If we make the change of variable s = λ − λz and solve for W*(s), we obtain

W*(s) = s W*(λ)B*(λ) / [s − λ + λB*(s)]

Since W*(0) = 1, we evaluate W*(λ)B*(λ) = 1 − ρ and arrive at

W*(s) = s(1 − ρ) / [s − λ + λB*(s)]        (7.18)

which, of course, is the P-K transform equation for waiting time.
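As a quick consistency check (ours, not in the original), substituting the
exponential service transform B*(s) = μ/(s + μ) into Eq. (7.18) should
collapse to the familiar M/M/1 waiting-time transform (1 − ρ)(s + μ)/(s + μ − λ):

```python
import sympy as sp

s, lam, mu = sp.symbols('s lam mu', positive=True)
rho = lam / mu
B = mu / (s + mu)                              # B*(s) for exponential service

W = s * (1 - rho) / (s - lam + lam * B)        # Eq. (7.18)
print(sp.simplify(W - (1 - rho) * (s + mu) / (s + mu - lam)))   # -> 0
```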
We have demonstrated three examples where the marking of customers
has allowed us to argue purely with probabilistic reasoning to derive
expressions relating transforms. What we have here traded has been
straightforward but tedious analysis for deep but physical probabilistic
reasoning. We now consider the catastrophe process.

7.2. THE CATASTROPHE PROCESS

Let us pursue the method of collective marks a bit further by observing
"catastrophe" processes. Measuring from time 0 let us consider that some
event occurs at time t (t ≥ 0), where the pdf associated with the time of
occurrence of this event is given by f(t). Furthermore, let there be an
independent "catastrophe" process taking place simultaneously which
generates catastrophes† at a rate γ according to a Poisson process.
We wish to calculate the probability that the event at time t takes place
before the first catastrophe (measuring from time 0). Conditioning on t
and integrating over all t, we get

P[event occurs before catastrophe] = ∫₀^∞ e^{−γt} f(t) dt = F*(γ)        (7.19)

where, as usual, f(t) ⇔ F*(s) are Laplace transform pairs. Thus we have a
probabilistic interpretation for the Laplace transform (evaluated at the point
γ) of the pdf for the time of occurrence of the event, namely, it is the
probability that an event with this pdf occurs before a Poisson catastrophe
at rate γ occurs.

† A catastrophe is merely an impressive name given to these generated times to distinguish
them from the "event" of interest at time t.

As a second illustration using catastrophe processes, consider a sequence
of events (that is, a point process) on the interval (0, ∞). Measuring from
time 0 we would like to calculate the pdf of the time until the nth event,
which we denote‡ by f_{(n)}(t), and with distribution F_{(n)}(t), where the time
between events is given as before with density f(t). That is,

F_{(n)}(t) = P[nth event has occurred by time t]

We are interested in deriving an expression for the renewal function H(t),
which we recall from Section 5.2 is equal to the expected number of events
(renewals) in an interval of length t. We proceed by defining

P_n(t) ≜ P[exactly n events occur in (0, t)]        (7.20)

The renewal function may therefore be calculated as

H(t) = E[number of events in (0, t)] = Σ_{n=0}^{∞} n P_n(t)

But from its definition we see that P_n(t) = F_{(n)}(t) − F_{(n+1)}(t) and so we have

H(t) = Σ_{n=0}^{∞} n [F_{(n)}(t) − F_{(n+1)}(t)]
     = Σ_{n=1}^{∞} F_{(n)}(t)        (7.21)

If we now permit a Poisson catastrophe process (at rate γ) to develop we may
ask for the expectation of the following random variable:

N_c ≜ number of events occurring before the first catastrophe        (7.22)

With probability γe^{−γt} dt the first catastrophe will occur in the interval
(t, t + dt) and then H(t) will give the expected number of events occurring
before this first catastrophe, that is,

H(t) = E[N_c | first catastrophe occurs in (t, t + dt)]

Summing over all possibilities we may then write

E[N_c] = ∫₀^∞ H(t)γe^{−γt} dt        (7.23)

In Section 5.2 we had defined H*(s) to be the Laplace transform of the
renewal density h(t) defined as h(t) ≜ dH(t)/dt, that is,

H*(s) = ∫₀^∞ h(t)e^{−st} dt        (7.24)

‡ We use the subscript (n) to remind the reader of the definition in Eq. (5.110) denoting the
n-fold convolution. We see that f_{(n)}(t) is indeed the n-fold convolution of the lifetime
density f(t).


If we integrate this last equation by parts, we see that the right-hand side of
Eq. (7.24) is merely ∫₀^∞ sH(t)e^{−st} dt and so from Eq. (7.23) we have
(making the substitution s = γ)

E[N_c] = H*(γ)        (7.25)

Let us now calculate E[N_c] by an alternate means. From Eq. (7.19) we see
that the catastrophe will occur before the first event with probability
1 − F*(γ) and in this case N_c = 0. On the other hand, with probability F*(γ)
we will get at least one event occurring before the catastrophe. Let N_c' be
the random variable N_c − 1 conditioned on at least one event; then we have
N_c = 1 + N_c'. Because of the memoryless property of the Poisson process
as well as the fact that the event occurrences generate an imbedded Markov
process we see that N_c' must have the same distribution as N_c itself. Forming
expectations on N_c we may therefore write

E[N_c] = 0·[1 − F*(γ)] + {1 + E[N_c]}F*(γ)

This gives immediately

E[N_c] = F*(γ) / [1 − F*(γ)]        (7.26)

We now have two expressions for E[N_c] and so by equating them (and making
the change of variable s = γ) we have the final result

H*(s) = F*(s) / [1 − F*(s)]        (7.27)

This last we recognize as the transform expression for the integral equation
of renewal theory [see Eq. (5.21)]; its integral formulation is given in Eq.
(5.22).
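Equation (7.26), and hence (7.27), admits the same style of check (again our
own sketch, with Erlang-2 lifetimes): count the renewals that occur before
the first catastrophe and average over many trials.

```python
import random

def mean_events_before_catastrophe(eta, gamma, trials=100_000):
    """Monte Carlo estimate of E[N_c] for Erlang-2(eta) renewal lifetimes."""
    total = 0
    for _ in range(trials):
        deadline = random.expovariate(gamma)   # time of first catastrophe
        clock, n = 0.0, 0
        while True:
            clock += random.expovariate(eta) + random.expovariate(eta)
            if clock > deadline:
                break
            n += 1
        total += n
    return total / trials

eta, gamma = 2.0, 0.5
F = (eta / (eta + gamma)) ** 2                 # F*(gamma)
print(mean_events_before_catastrophe(eta, gamma))
print(F / (1 - F))                             # Eq. (7.26): about 1.778
```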
It is fair to say that the method of collective marks is a rather elegant way
to get some useful and important results in the theory of stochastic processes.
On the other hand, this method has as yet yielded no results that were not
previously known through the application of other methods. Thus at present
its principal use lies in providing an alternative way for viewing the
fundamental relationships, thereby enhancing one's insight into the
probabilistic structure of these processes.
Thus ends our treatment of intermediate queueing theory. In the next
part, we venture into the kingdom of the G/G/1 queue.

REFERENCES

RUNN 65  Runnenburg, J. Th., "On the Use of the Method of Collective Marks
         in Queueing Theory," Proc. Symposium on Congestion Theory, eds.
         W. L. Smith and W. E. Wilkinson, University of North Carolina
         Press (1965).
VAN 48   van Dantzig, D., "Sur la méthode des fonctions génératrices,"
         Colloques internationaux du CNRS, 13, 29–45 (1948).


EXERCISES

7.1. Consider the M/G/1 system shown in the figure below with average
     arrival rate λ and service-time distribution B(x). Customers are
     served first-come-first-served from queue A until they either leave or
     receive a sec of service, at which time they join an entrance box as shown
     in the figure. Customers continue to collect in the entrance box, forming

     [Figure: queues A and B share a single server; a customer departs
     queue A with service completed, or after a sec of service received joins
     the entrance box, which feeds queue B.]

     a group until queue A empties and the server becomes free. At this point,
     the entrance box "dumps" all it has collected as a bulk arrival to queue
     B. Queue B will receive service until a new arrival (to be referred to as a
     "starter") joins queue A, at which time the server switches from queue B
     to serve queue A and the customer who is preempted returns to the head
     of queue B. The entrance box then begins to fill and the process repeats.
     Let

     g_n = P[entrance box delivers bulk of size n to queue B]

     G(z) = Σ_{n=0}^{∞} g_n z^n

     (a) Give a probabilistic interpretation for G(z) using the method of
         collective marks.
     (b) Given that the "starter" reaches the entrance box, and using the
         method of collective marks, find [in terms of λ, a, B(·), and G(z)]

         P_k = P[k customers arrive to queue A during the "starter's"
                service time and no marked customers arrive to the
                entrance box from the k sub-busy periods created in
                queue A by each of these customers]

     (c) Given that the "starter" does not reach the entrance box, find P_k
         as defined above.
     (d) From (b) and (c), give an expression (involving an integral) for
         G(z) in terms of λ, a, B(·), and itself.
     (e) From (d) find the average bulk size n̄ = Σ_{n=0}^{∞} n g_n.


7.2. Consider the M/G/∞ system. We wish to find P(z, t) as defined in
     Eq. (7.6). Assume the system contains i = 0 customers at t = 0. Let
     p(t) be the probability that a customer who arrived in the interval
     (0, t) is still present at t. Proceed as in Example 2 of Section 7.1.
     (a) Express p(t) in terms of B(x).
     (b) Find P(z, t) in terms of λ, t, z, and p(t).
     (c) From (b) find P_k(t) defined in Eq. (7.5).
     (d) From (c), find lim_{t→∞} P_k(t) = p_k.

7.3. Consider an M/G/1 queue, which is idle at time 0. Let p = P[no
     catastrophe occurs during the time the server is busy with those
     customers who arrived during (0, t)] and let q = P[no catastrophe
     occurs during (0, t + U(t))] where U(t) is the unfinished work at time t.
     Catastrophes occur at a rate γ.
     (a) Find p.
     (b) Find q.
     (c) Interpret p − q as a probability and find an independent
         expression for it. We may then use (a) and (b) to relate the
         distribution of unfinished work to B*(s).

7.4. Consider the G/M/m system. The root σ, which is defined in Eq. (6.21),
     plays a central role in the solution. Examine Eq. (6.21) from the
     viewpoint of collective marks and give a probabilistic interpretation
     for σ.

PART IV
ADVANCED MATERIAL

We find ourselves in difficult terrain as we enter the foothills of G/G/1. Not
even the average waiting time is known for this queue! In Chapter 8, we
nevertheless develop a "spectral" method for handling these systems, which
often leads to useful results. The difficult part of this method reduces to
locating the roots of a function, as we have so often seen before. The spectral
method suffers from the disadvantage of not providing one with the general
behavior pattern of the system; each new queue must be studied by itself.
However, we do discuss Kingman's algebra for queues, which so nicely
exposes the common framework for all of the various methods so far used to
attack the G/G/1 queue. Finally, we introduce the concept of a dual queue,
and express some of our principal results in terms of idle times and dual
queues.

8
The Queue G/G/1

We have so far made effective use of the Markovian property in the
queueing systems M/M/1, M/G/1, and G/M/m. We must now leave behind
many (but not all) of the simplifications that derive from the Markovian
property and find new methods for studying the more difficult system G/G/1.
In this chapter we solve the G/G/1 system equations by spectral methods,
making use of transform and complex-variable techniques. There are,
however, numerous other approaches: In Section 5.11 we introduced the
ladder indices and pointed out the way in which they were related to important
events in queueing systems; these ideas can be extended and applied to the
general system G/G/1. Fluctuations of sums of random variables (i.e., the
ladder indices) have been studied by Andersen [ANDE 53a, ANDE 53b,
ANDE 54] and also by Spitzer [SPIT 56, SPIT 60], who simplified and
expanded Andersen's work. This led, among other things, to Spitzer's identity,
of great importance in that approach to queueing theory. Much earlier (in
the 1930's) Pollaczek considered a formalism for solving these systems and his
approach (summarized in 1957 [POLL 57]) is now referred to as Pollaczek's
method. More recently, Kingman [KING 66] has developed an algebra for
queues, which places all these methods in a common framework and exposes
the underlying similarity among them; he also identifies where the problem
gets difficult and why, but unfortunately he shows that this method does not
extend to the multiple-server system. Keilson [KEIL 65] applies the method
of Green's function. Beneš [BENE 63] studied G/G/1 through the unfinished
work and its "relatives."
Let us now establish the basic equations for this system.
8.1. LINDLEY'S INTEGRAL EQUATION

The system under consideration is one in which the interarrival times
between customers are independent and are given by an arbitrary
distribution A(t). The service times are also independently drawn from an
arbitrary distribution given by B(x). We assume there is one server available
and that service is offered in a first-come-first-served order. The basic
relationship


among the pertinent random variables is derived in this section and leads to
Lindley's integral equation, whose solution is given in the following section.
We consider a sequence of arriving customers indexed by the subscript n,
and remind the reader of our earlier notation:

C_n = the nth customer arriving to the system
t_n = τ_n − τ_{n−1} = interarrival time between C_{n−1} and C_n
x_n = service time for C_n
w_n = waiting time (in queue) for C_n

We assume that the random variables {t_n} and {x_n} are independent and are
given, respectively, by the distribution functions A(t) and B(x), independent
of the subscript n. As always, we look for a Markov process to simplify our
analysis. Recall for M/G/1 that the unfinished work U(t) is a Markov process
for all t. For G/G/1, it should be clear that although U(t) is no longer
Markovian, imbedded within U(t) is a crucial Markov process defined at the
customer-arrival times. At these regeneration points, all of the past history
that is pertinent to future behavior is completely summarized in the current
value of U(t). That is, for FCFS systems, the value of the unfinished work
just prior to the arrival of C_n is exactly equal to his waiting time (w_n) and
this Markov process is the object of our study. In Figures 8.1 and 8.2 we use
the time-diagram notation for queues (as defined in Figure 2.2) to illustrate
the history of C_n in two cases: Figure 8.1 displays the case where C_{n+1}
arrives to the system before C_n departs from the service facility; and Figure
8.2 shows the case in which C_{n+1} arrives to an empty system. For the
conditions of Figure 8.1 it is clear that

w_{n+1} = w_n + x_n − t_{n+1}        if w_n + x_n − t_{n+1} ≥ 0        (8.1)

The condition expressed in Eq. (8.1) assures that C_{n+1} arrives to find a busy
Figure 8.1 The case where C_{n+1} arrives to find a busy system.

Figure 8.2 The case where C_{n+1} arrives to find an idle system.
system. From Figure 8.2 we see immediately that

w_{n+1} = 0        if w_n + x_n − t_{n+1} ≤ 0        (8.2)

where the condition in Eq. (8.2) assures that C_{n+1} arrives to find an idle
system. For convenience we now define a new (key) random variable u_n as

u_n ≜ x_n − t_{n+1}        (8.3)

This random variable is merely the difference between the service time for C_n
and the interarrival time between C_{n+1} and C_n (for a stable system we will
require that the expectation of u_n be negative). We may thus combine Eqs.
(8.1)–(8.3) to obtain the following fundamental and yet elementary
relationship, first established by Lindley [LIND 52]:

w_{n+1} = { w_n + u_n        if w_n + u_n ≥ 0
         { 0                if w_n + u_n ≤ 0        (8.4)

The term w_n + u_n is merely the sum of the unfinished work (w_n) found by
C_n plus the service time (x_n), which he now adds to the unfinished work,
less the time duration (t_{n+1}) until the arrival of the next customer C_{n+1};
if this quantity is nonnegative then it represents the amount of unfinished
work found by C_{n+1} and therefore represents his waiting time w_{n+1}.
However, if this quantity goes negative it indicates that an interval of time has
elapsed since the arrival of C_n which exceeds the amount of unfinished work
present in the system just after the arrival of C_n, thereby indicating that the
system has gone idle by the time C_{n+1} arrives.
We may write Eq. (8.4) as

w_{n+1} = max [0, w_n + u_n]        (8.5)

We introduce the notation (x)⁺ ≜ max [0, x]; we then have

w_{n+1} = (w_n + u_n)⁺        (8.6)


Since the random variables {t_n} and {x_n} are independent among themselves
and each other, one observes that the sequence of random variables
{w_0, w_1, w_2, ...} forms a Markov process with stationary transition
probabilities. This can be seen immediately from Eq. (8.4) since the new value
w_{n+1} depends upon the previous sequence of random variables w_i (i =
0, 1, ..., n) only through the most recent value w_n plus a random variable
u_n, which is independent of the random variables w_i for all i ≤ n.
Let us solve Eq. (8.5) recursively beginning with w_0 as an initial condition.
We have (defining C_0 to be our initial arrival)

w_1 = (w_0 + u_0)⁺
w_2 = (w_1 + u_1)⁺ = max [0, w_1 + u_1]
    = max [0, u_1 + max (0, w_0 + u_0)]
    = max [0, u_1, u_1 + u_0 + w_0]
w_3 = (w_2 + u_2)⁺ = max [0, w_2 + u_2]
    = max [0, u_2 + max (0, u_1, u_1 + u_0 + w_0)]
    = max [0, u_2, u_2 + u_1, u_2 + u_1 + u_0 + w_0]
    ⋮
w_n = (w_{n−1} + u_{n−1})⁺ = max [0, w_{n−1} + u_{n−1}]
    = max [0, u_{n−1}, u_{n−1} + u_{n−2}, ..., u_{n−1} + ⋯ + u_1,
           u_{n−1} + ⋯ + u_1 + u_0 + w_0]        (8.7)

However, since the sequence of random variables {u_i} is a sequence of
independent and identically distributed random variables, they are
"interchangeable" and we may consider a new random variable w_n' with the
same distribution as w_n, where

w_n' ≜ max [0, u_0, u_0 + u_1, u_0 + u_1 + u_2, ..., u_0 + u_1 + ⋯ + u_{n−2},
            u_0 + u_1 + ⋯ + u_{n−2} + u_{n−1} + w_0]        (8.8)

Equation (8.8) is obtained from Eq. (8.7) by relabeling the random variables
u_i. It is now convenient to define the quantities U_n as

U_n ≜ Σ_{i=0}^{n−1} u_i        U_0 ≜ 0        (8.9)

We thus have from Eq. (8.8)

w_n' = max [U_0, U_1, U_2, ..., U_{n−1}, U_n + w_0]        (8.10)


From this last form we see for w_0 = 0 that w_n' can only increase with n.
Therefore the limiting random variable lim w_n' as n → ∞ must converge to
the (possibly infinite) random variable w̃:

w̃ ≜ sup_{n≥0} U_n        (8.11)

Our imbedded Markov chain is ergodic if, with probability one, w̃ is
finite, and if so, then the distributions of w_n' and of w_n both converge to the
distribution of w̃; in this case, the distribution of w̃ is the waiting-time
distribution. Lindley [LIND 52] has shown that for 0 < E[|u_n|] < ∞ the
system is stable if and only if E[u_n] < 0. Therefore, we will henceforth
assume

E[u_n] < 0        (8.12)

Equation (8.12) is our usual condition for stability, as may be seen from the
following:

E[u_n] = E[x_n − t_{n+1}] = E[x_n] − E[t_{n+1}] = x̄ − t̄ = t̄(ρ − 1)        (8.13)

where as usual we assume that the expected service time is x̄ and the expected
interarrival time is t̄ (and we have ρ = x̄/t̄). From Eqs. (8.12) and (8.13) we
see we have required that ρ < 1, as is our usual condition for stability. Let us
denote (as usual) the stationary distribution for w_n (and therefore also for
w_n') by

lim_{n→∞} P[w_n ≤ y] = lim_{n→∞} P[w_n' ≤ y] = W(y)        (8.14)

which must exist for ρ < 1 [LIND 52]. Thus W(y) will be our assumed
stationary distribution for time spent in queue; we will not dwell upon the
proof of its existence but rather upon the method for its calculation. As we
know for such Markov processes, this limiting distribution is independent of
the initial state w_0.
Before proceeding to the formal derivation of results let us investigate the
way in which Eq. (8.7) in fact produces the waiting time. This we do by
example; consider Figure 8.3, which represents the unfinished work U(t).
For the sequence of arrivals and departures given in this figure, we present the
table below showing the random variables u_n and the waiting times w_n as
measured from the diagram; in the last row of this table we give the waiting
times w_n as calculated from Eq. (8.7) as follows.

Figure 8.3 Unfinished work U(t) showing sequence of arrivals and departures.

Table of values from Figure 8.3.

n                            0    1    2    3    4    5    6    7    8    9
u_n = x_n − t_{n+1}          1    1   −1    1   −4    2   −1   −6    1
w_n measured from Fig. 8.3   0    1    2    1    2    0    2    1    0    1
w_n calculated from Eq. 8.7  0    1    2    1    2    0    2    1    0    1

w_0 = 0
w_1 = max (0, w_0 + u_0) = max (0, 1) = 1
w_2 = max (0, u_1, u_1 + u_0 + w_0) = max (0, 1, 2) = 2
w_3 = max (0, u_2, u_2 + u_1, u_2 + u_1 + u_0 + w_0) = max (0, −1, 0, 1) = 1
w_4 = max (0, u_3, u_3 + u_2, u_3 + u_2 + u_1, u_3 + u_2 + u_1 + u_0 + w_0)
    = max (0, 1, 0, 1, 2) = 2
w_5 = max (0, u_4, u_4 + u_3, ..., u_4 + u_3 + u_2 + u_1 + u_0 + w_0)
    = max (0, −4, −3, −4, −3, −2) = 0
w_6 = max (0, 2, −2, −1, −2, −1, 0) = 2
w_7 = max (0, −1, 1, −3, −2, −3, −2, −1) = 1
w_8 = max (0, −6, −7, −5, −9, −8, −9, −8, −7) = 0
w_9 = max (0, 1, −5, −6, −4, −8, −7, −8, −7, −6) = 1


These calculations are quite revealing. For example, whenever we find an m
for which w_m = 0, then the m rightmost calculations in Eq. (8.7) need be
made no more in calculating w_n for all n > m; this is due to the fact that a
busy period has ended and the service times and interarrival times from that
busy period cannot affect the calculations in future busy periods. Thus we
see the isolating effect of idle periods which ensue between busy periods.
Furthermore, when w_m = 0, then the rightmost term (U_m + w_0) gives the
(negative of the) total accumulated idle time of the system during the interval
(0, τ_m).
Let us now proceed with the theory for calculating W(y). We define
C_n(u) as the PDF for the random variable u_n, that is,

C_n(u) ≜ P[u_n ≤ u]        (8.15)

and we note that u_n is not restricted to a half line. We now derive the
expression for C_n(u) in terms of A(t) and B(x):

C_n(u) = P[x_n − t_{n+1} ≤ u] = ∫_{t=0}^{∞} P[x_n ≤ u + t | t_{n+1} = t] dA(t)

However, the service time for C_n is independent of t_{n+1} and therefore

C_n(u) = ∫_{t=0}^{∞} B(u + t) dA(t)        (8.16)

Thus, as we expected, C_n(u) is independent of n and we therefore write

C_n(u) = C(u) = ∫_{t=0}^{∞} B(u + t) dA(t)        (8.17)

Also, let ũ denote a random variable with this distribution C(u).
Note that the integral given in Eq. (8.17) is very much like a convolution
form for a(t) and B(x); it is not quite a straight convolution since the
distribution C(u) represents the difference between x_n and t_{n+1} rather than
the sum. Using our convolution notation (⊛), and defining c_n(u) ≜ dC_n(u)/du
we have

c_n(u) = c(u) = a(−u) ⊛ b(u)        (8.18)

It is (again) convenient to define the waiting-time distribution for customer
C_n as

W_n(y) ≜ P[w_n ≤ y]        (8.19)

For y ≥ 0 we have from Eq. (8.4)

W_{n+1}(y) = P[w_n + u_n ≤ y] = ∫_{w=0^−}^{∞} P[u_n ≤ y − w | w_n = w] dW_n(w)

And now once again, since u_n is independent of w_n, we have

W_{n+1}(y) = ∫_{w=0^−}^{∞} C_n(y − w) dW_n(w)        for y ≥ 0        (8.20)

However, as postulated in Eq. (8.14) this distribution has a limit W(y) and
therefore we have the following integral equation, which defines the limiting
distribution of waiting time for customers in the system G/G/1:

W(y) = ∫_{0^−}^{∞} C(y − w) dW(w)        for y ≥ 0

Further, it is clear that

W(y) = 0        for y < 0

Combining these last two we have Lindley's integral equation [LIND 52],
which is seen to be an integral equation of the Wiener-Hopf type [SPIT 57]:

W(y) = { ∫_{0^−}^{∞} C(y − w) dW(w)        y ≥ 0
       { 0                                  y < 0        (8.21)

Equation (8.21) may be rewritten in at least two other useful forms, which
we now proceed to derive. Integrating by parts, we have (for y ≥ 0)

W(y) = C(y − w)W(w)|_{w=0^−}^{∞} − ∫_{0^−}^{∞} W(w) dC(y − w)
     = lim_{w→∞} C(y − w)W(w) − C(y)W(0^−) − ∫_{0^−}^{∞} W(w) dC(y − w)

We see that lim C(y − w) = 0 as w → ∞ since the limit of C(u) as u → −∞
is the probability that an interarrival time approaches infinity, which clearly
must go to zero if the interarrival time is to have finite moments. Similarly,
we have W(0^−) = 0 and so our form for Lindley's integral equation may be
rewritten as

W(y) = { −∫_{0^−}^{∞} W(w) dC(y − w)        y ≥ 0
       { 0                                    y < 0        (8.22)


Let us now show a third form for this equation. By the simple variable change
u = y − w for the argument of our distributions we finally arrive at

W(y) = { ∫_{u=−∞}^{y} W(y − u) dC(u)        y ≥ 0
       { 0                                   y < 0        (8.23)

Equations (8.21), (8.22), and (8.23) all describe the basic integral equation
which governs the behavior of G/G/1. These integral equations, as mentioned
above, are Wiener-Hopf-type integral equations and are not unfamiliar in
the theory of stochastic processes.
One observes from these forms that Lindley's integral equation is almost,
but not quite, a convolution integral. The important distinction between a
convolution integral and that given in Lindley's equation is that the latter
integral form holds only when the variable is nonnegative; the distribution
function is identically zero for values of negative argument. Unfortunately,
since the integral holds only for the half-line we must borrow techniques from
the theory of complex variables and from contour integration in order to
solve our system. We find a similar difficulty in the design of optimal linear
filters in the mathematical theory of communication; there too, a Wiener-Hopf
integral equation describes the optimal solution, except that for linear
filters, the unknown appears as one factor in the integrand rather than as in
our case in queueing theory, where the unknown appears on both sides of the
integral equation. Nevertheless, the solution techniques are amazingly
similar and the reader acquainted with the theory of optimal realizable linear
filters will find the following arguments familiar.
In the next section, we give a fairly general solution to Lindley's integral
equation by the use of spectral (transform) methods. In Exercise 8.6 we
examine a solution approach by means of an example that does not require
transforms; the example chosen is the system D/E_r/1 considered by Lindley.
In that (direct) approach it is required to assume the solution form. We now
consider the spectral solution to Lindley's equation in which such assumed
solution forms will not be necessary.

8.2. SPECTRAL SOLUTION TO LINDLEY'S INTEGRAL EQUATION

In this section we describe a method for solving Lindley's integral equation
by means of spectrum factorization [SMIT 53]. Our point of departure is the
form for this equation given by (8.23). As mentioned earlier it would be
rather straightforward to solve this equation if the right-hand side were a true
convolution (it is, in fact, a convolution for the nonnegative half-line on the
variable y but not so otherwise). In order to get around this difficulty we use
the following ingenious device whereby we define a "complementary"
waiting time, which completes the convolution, and which takes on the value
of the integral for negative y only, that is,

W₋(y) ≜ { 0                                   y ≥ 0
        { ∫_{u=−∞}^{y} W(y − u) dC(u)         y < 0        (8.24)

Note that the left-hand side of Eq. (8.23) might consistently be written as
W₊(y) in the same way in which we defined the left-hand side of Eq. (8.24).
We now observe that if we add Eqs. (8.23) and (8.24) then the right-hand
side takes on the integral expression for all values of the argument, that is,

W(y) + W₋(y) = ∫_{−∞}^{∞} W(y − u)c(u) du        for all real y        (8.25)

where we have denoted the pdf for ũ by c(u) [≜ dC(u)/du].


To pr oceed , we assume that the pdf of the interarrival time is* OCe- Dt) as
t ..... 00 (where D is any real number greater than zero), that is,
.

aCt)

hm

-Dt

t - oo e

< 00

(8.26)

The condition (8.26) really insists that the pdf associated with the interarrival time dr ops off a t least as fast as an exponen tial for very large interarrival times. From this cond ition it may be seen from Eq. (8.17) that the
behavior of C (u) as u ..... - 00 is governed by the behavior of the interarri val
time ; this is true since as u takes on large negative values the argument for the
service-time distribution can be made positive only for lar ge values of t ,
which also appears as the argument for the interarrival time density. T hus
we can show
.

C(u )

---v;: < 00
u - - oo e
hm

That is, C( u) is O (~U) as u ---+ - 00 . If we now use this fact in Eq. (8.24) it is
easy to establish that W_(y) is also O(eD") as y ---+ - 00
The notat ion O(g (x as x _ X o refers to a ny function that (as x - xo> decays to zero
at least as rapidl y asg(x) [where g(x) > OJ , that is,
lim
x-xo

10(g(X1

sv:

= K

<

00


Let us now define some (bilateral) transforms for various of our functions.
For the Laplace transform of W₋(y) we define

Φ₋(s) ≜ ∫_{−∞}^{∞} W₋(y)e^{−sy} dy        (8.27)

Due to the condition we have established regarding the asymptotic property
of W₋(y), it is clear that Φ₋(s) is analytic in the region Re(s) < D. Similarly,
for the distribution of our waiting time W(y) we define

Φ₊(s) ≜ ∫_{−∞}^{∞} W(y)e^{−sy} dy        (8.28)

Note that Φ₊(s) is the Laplace transform of the PDF for waiting time,
whereas in previous chapters we have defined W*(s) as the Laplace transform
of the pdf for waiting time; thus by entry 11 of Table I.3, we have

sΦ₊(s) = W*(s)        (8.29)

Since there are regions for Eqs. (8.23) and (8.24) in which the functions drop
to zero, we may therefore rewrite these transforms as

Φ₋(s) = ∫_{−∞}^{0} W₋(y)e^{−sy} dy        (8.30)

Φ₊(s) = ∫_{0}^{∞} W(y)e^{−sy} dy        (8.31)

Since W(y) is a true distribution function (and therefore it remains bounded
as y → ∞), Φ₊(s) is analytic for Re(s) > 0. As usual, we define the transforms
for the pdf of the interarrival time and for the pdf of the service time as
A*(s) and B*(s), respectively. Note for the condition (8.26) that
A*(−s) is analytic in the region Re(s) < D just as was Φ₋(s).
From Appendix I we recall that the Laplace transform for the convolution
of two functions is the product of the transforms of each. Equation (8.18)
is almost the convolution of the service-time density with the interarrival-time
density; the only difficulty is the negative argument for the interarrival-time
density. Nevertheless, the above-mentioned fact regarding products of
transforms goes through merely with the negative argument (this is Exercise
8.1). Thus for the Laplace transform of c(u) we find

C*(s) = A*(−s)B*(s)        (8.32)

Let us now return to Eq. (8.25), which expresses the fundamental relationship
among the variables of our problem and the waiting-time distribution
W(y). Clearly, the time spent in queue must be a nonnegative random variable,
and so we recognize the right-hand side of Eq. (8.25) as a convolution between
the waiting-time PDF and the pdf for the random variable ũ. The Laplace
transform of this convolution must therefore give the product of the
Laplace transform Φ₊(s) (for the waiting-time distribution) and C*(s) (for
the density on ũ). The transform of the left-hand side we recognize from Eqs.
(8.30) and (8.31) as being Φ₊(s) + Φ₋(s), thus

Φ₊(s) + Φ₋(s) = Φ₊(s)C*(s)

From Eq. (8.32) we therefore obtain

Φ₊(s) + Φ₋(s) = Φ₊(s)A*(−s)B*(s)

which gives us

Φ₋(s) = Φ₊(s)[A*(−s)B*(s) − 1]        (8.33)

We have already established that both Φ₋(s) and A*(−s) are analytic in the region Re(s) < D. Furthermore, since Φ₊(s) and B*(s) are transforms of bounded functions of nonnegative variables, both functions must be analytic in the region Re(s) > 0.

We now come to the spectrum factorization. The purpose of this factorization is to find a suitable representation for the term

A^*(-s)B^*(s) - 1    (8.34)

in the form of two factors. Let us pause for a moment and recall the method of stages whereby Erlang conceived the ingenious idea of approximating a distribution by means of a collection of series and parallel exponential stages. The Laplace transform for the pdf's obtainable in this fashion was generally given in Eq. (4.62) or Eq. (4.64); we immediately recognize these to be rational functions of s (that is, a ratio of a polynomial in s divided by a polynomial in s). We may similarly conceive of approximating the Laplace transforms A*(−s) and B*(s) each in such forms; if we so approximate, then the term given by Eq. (8.34) will also be a rational function of s. We thus choose to consider those queueing systems for which A*(s) and B*(s) may be suitably approximated with (or which are given initially as) such rational functions of s, in which case we then propose to form the following spectrum factorization

A^*(-s)B^*(s) - 1 = \frac{\Psi_+(s)}{\Psi_-(s)}    (8.35)

Clearly Ψ₊(s)/Ψ₋(s) will be some rational function of s, and we are now desirous of finding a particular factored form for this expression. We specifically wish to find a factorization such that:

For Re(s) > 0, Ψ₊(s) is an analytic function of s with no zeroes in this half-plane.

For Re(s) < D, Ψ₋(s) is an analytic function of s with no zeroes in this half-plane.    (8.36)

Furthermore, we wish to find these functions with the additional properties:

For Re(s) > 0, \lim_{|s| \to \infty} \frac{\Psi_+(s)}{s} = 1.

For Re(s) < D, \lim_{|s| \to \infty} \frac{\Psi_-(s)}{s} = -1.    (8.37)

The conditions in (8.37) are convenient and must have opposite polarity in the limit since we observe that as s runs off to infinity along the imaginary axis, both A*(−s) and B*(s) must decay to 0 [if they are to have finite moments and if A(t) and B(x) do not contain a sequence of discontinuities, which we will not permit], leaving the left-hand side of Eq. (8.35) equal to −1, which we have suitably matched by the ratio of limits given by Conditions (8.37).
We shall find that this spectrum factorization, which requires us to find Ψ₊(s) and Ψ₋(s) with the appropriate properties, contains the difficult part of this method of solution. Nevertheless, assuming that we have found such a factorization, it is then clear that we may write Eq. (8.33) as

\Phi_-(s) = \Phi_+(s)\,\frac{\Psi_+(s)}{\Psi_-(s)}

or

\Phi_-(s)\Psi_-(s) = \Phi_+(s)\Psi_+(s)    (8.38)

where the common region of analyticity for both sides of Eq. (8.38) is within the strip

0 < \mathrm{Re}(s) < D    (8.39)
That this last is true may be seen as follows. We have already assumed that Ψ₊(s) is analytic for Re(s) > 0, and it is further true that Φ₊(s) is analytic in this same region since it is the Laplace transform of a function that is identically zero for negative arguments; the product of these two must therefore be analytic for Re(s) > 0. Similarly, Ψ₋(s) has been given to be analytic for Re(s) < D, and we have that Φ₋(s) is analytic here, as explained earlier following Eq. (8.27); thus the product of these two will be analytic in Re(s) < D. Thus the common region is as stated in Eq. (8.39). Now, Eq. (8.38) establishes that these two functions are equal in the common strip, and so they must represent functions which, when continued in the region Re(s) < 0, are analytic, and when continued in the region Re(s) > D, are also analytic; therefore their analytic continuation contains no singularities in the entire finite s-plane. Since we have established the behavior of the function Φ₊(s)Ψ₊(s) = Φ₋(s)Ψ₋(s) to be analytic and bounded in the finite s-plane, and since we assume Condition (8.37), we may then apply Liouville's theorem* [TITC 52], which immediately establishes that this function must be a constant (say, K). We thus have

\Phi_-(s)\Psi_-(s) = \Phi_+(s)\Psi_+(s) = K    (8.40)

This immediately yields

\Phi_+(s) = \frac{K}{\Psi_+(s)}    (8.41)

* Liouville's theorem states, "If f(z) is analytic and bounded for all finite values of z, then f(z) is a constant."

The reader should recall that what we are seeking in this development is an expression for the distribution of queueing time whose Laplace transform is exactly the function Φ₊(s), which is now given through Eq. (8.41). It remains for us to demonstrate a method for evaluating the constant K. Since sΦ₊(s) = W*(s), we have

s\Phi_+(s) = \int_{0^-}^{\infty} e^{-sy}\, dW(y)

Let us now consider the limit of this equation as s → 0; working with the right-hand side we have

\lim_{s \to 0} \int_{0^-}^{\infty} e^{-sy}\, dW(y) = \int_{0^-}^{\infty} dW(y) = 1

We have thus established

\lim_{s \to 0} s\Phi_+(s) = 1    (8.42)

This is nothing more than the final value theorem (entry 18, Table I.3) and comes about since W(∞) = 1. From Eq. (8.41) and this last result we then have

\lim_{s \to 0} \frac{sK}{\Psi_+(s)} = 1

and so we may write

K = \lim_{s \to 0} \frac{\Psi_+(s)}{s}    (8.43)

Equation (8.43) provides a means of calculating the constant K in our solution for Φ₊(s) as given in Eq. (8.41). If we make a Taylor expansion of the function Ψ₊(s) around s = 0 [viz., Ψ₊(s) = Ψ₊(0) + sΨ₊⁽¹⁾(0) + (s²/2!)Ψ₊⁽²⁾(0) + ⋯] and note from Eqs. (8.35) and (8.36) that Ψ₊(0) = 0, we then recognize that this limit may also be written as

K = \lim_{s \to 0} \frac{d\Psi_+(s)}{ds}    (8.44)

and this provides us with an alternate way for calculating the constant K.

We may further explore this constant K by examining the behavior of Φ₊(s)Ψ₊(s) anywhere in the region Re(s) > 0 [i.e., see Eq. (8.40)]; we choose to examine this behavior in the limit as s → ∞, where we know from Eq. (8.37) that Ψ₊(s) behaves as s does; that is,

K = \lim_{s \to \infty} s\Phi_+(s) = \lim_{s \to \infty} s \int_{0^-}^{\infty} e^{-sy}\, W(y)\, dy

Making the change of variable sy = x we have

K = \lim_{s \to \infty} \int_{0^-}^{\infty} e^{-x}\, W\!\left(\frac{x}{s}\right) dx

As s → ∞ we may pull the constant term W(0⁺) outside the integral and then obtain the value of the remaining integral, which is unity. We thus obtain

K = W(0^+)    (8.45)

This establishes that the constant K is merely the probability that an arriving customer need not queue.†
In conclusion then, assuming that we can find the appropriate spectrum factorization in Eq. (8.35), we may immediately solve for the Laplace transform of the waiting-time distribution through Eq. (8.41), where the constant K is given in any of the three forms Eq. (8.43), (8.44), or (8.45). Of course it then remains to invert the transform, but the problems involved in that calculation have been faced before in numerous of our other solution forms.

It is possible to carry out the solution of this problem by concentrating on Ψ₋(s) rather than Ψ₊(s), and in some cases this simplifies the calculations. In such cases we may proceed from Eq. (8.35) to obtain

\Psi_+(s) = \Psi_-(s)\left[A^*(-s)B^*(s) - 1\right]    (8.46)

From Eq. (8.41) we then have

\Phi_+(s) = \frac{K}{\left[A^*(-s)B^*(s) - 1\right]\Psi_-(s)}    (8.47)

† Note that W(0⁺) is not necessarily equal to 1 − ρ, which is the fraction of time the server is idle. (These two are equal for the system M/G/1.)

In order to evaluate the constant K in this case we differentiate Eq. (8.46) at s = 0, that is,

\Psi_+^{(1)}(0) = \left[A^*(0)B^*(0) - 1\right]\Psi_-^{(1)}(0) + \Psi_-(0)\left[A^*(0)B^{*(1)}(0) - A^{*(1)}(0)B^*(0)\right]    (8.48)

From Eq. (8.44) we recognize the left-hand side of Eq. (8.48) as the constant K, and we may now evaluate the right-hand side to obtain

K = \Psi_-(0)\left[-\bar{x} + \bar{t}\right]

giving

K = \Psi_-(0)(1 - \rho)\bar{t}    (8.49)

Thus, if we wish to use Ψ₋(s) in our solution form, we obtain the transform of the waiting-time distribution from Eq. (8.47), where the unknown constant K is evaluated in terms of Ψ₋(s) through Eq. (8.49).

Summarizing then, once we have carried out the spectrum factorization as indicated in Eq. (8.35), we may proceed in one of two directions in solving for Φ₊(s), the transform of the waiting-time distribution. The first method gives us

\Phi_+(s) = \frac{K}{\Psi_+(s)}, \qquad K = \lim_{s \to 0} \frac{\Psi_+(s)}{s}    (8.50)

and the second provides us with

\Phi_+(s) = \frac{\Psi_-(0)(1 - \rho)\bar{t}}{\left[A^*(-s)B^*(s) - 1\right]\Psi_-(s)}    (8.51)

We now proceed to demonstrate the use of these results in some examples.

Example 1: M/M/1

Our old friend M/M/1 is extremely straightforward and should serve to clarify the meaning of spectrum factorization. Since both the interarrival time and the service time are exponentially distributed random variables, we immediately have A*(s) = λ/(s + λ) and B*(s) = μ/(s + μ), where x̄ = 1/μ and t̄ = 1/λ. In order to solve for Φ₊(s) (the transform of the waiting-time distribution), we must first form the expression given in Eq. (8.34), that is,

A^*(-s)B^*(s) - 1 = \left(\frac{\lambda}{\lambda - s}\right)\left(\frac{\mu}{s + \mu}\right) - 1 = \frac{s^2 + s(\mu - \lambda)}{(\lambda - s)(s + \mu)}

Thus, from Eq. (8.35), we obtain

\frac{\Psi_+(s)}{\Psi_-(s)} = A^*(-s)B^*(s) - 1 = \frac{s(s + \mu - \lambda)}{(s + \mu)(\lambda - s)}    (8.52)

In Figure 8.4 we show the location of the zeroes (denoted by a circle) and poles (denoted by a cross) in the complex s-plane for the function given in Eq. (8.52). Note that in this particular example the roots of the numerator (zeroes of the expression) and the roots of the denominator (poles of the expression) are especially simple to find; in general, one of the most difficult parts of this method of spectrum factorization is to solve for the roots. In order to factorize we require that Conditions (8.36) and (8.37) be maintained. Inspecting the pole-zero plot in Figure 8.4 and remembering that Ψ₊(s) must be analytic and zero-free for Re(s) > 0, we may collect together the two zeroes (at s = 0 and s = −μ + λ) and one pole (at s = −μ) and still satisfy this required condition. Similarly, Ψ₋(s) must be analytic and free from zeroes for Re(s) < D for some D > 0; we can obtain such a condition if we allow this function to contain the remaining pole (at s = λ) and choose D = λ. This we show in Figure 8.5. Thus we have

\Psi_+(s) = \frac{s(s + \mu - \lambda)}{s + \mu}    (8.53)

\Psi_-(s) = \lambda - s    (8.54)

Note that Condition (8.37) is satisfied in the limit as s → ∞.

Figure 8.4 Zeroes (○) and poles (×) of Ψ₊(s)/Ψ₋(s) for M/M/1, shown in the s-plane.

Figure 8.5 Factorization into Ψ₊(s) and 1/Ψ₋(s) for M/M/1.


We are now faced with find ing K. F ro m Eq . (8.43) we ha ve
K

lim o/+(s)
.-0

s +/l-A

hm -----'-.- 0

+ /l

(8.55)

=I - p

Our expression for the La place tran sform of the waiting time PDF for M IMI !
is therefore fro m Eq. (8.41),
<I>+(s)

(I - p)(s
s(s

+ /l

/l)
- A)

(8.56)

A t this poi nt, typically, we attempt to invert the transform to get the waitingtime dist ribution. H owever , for thi s M/M /I example, we have already
carr ied out th is inversion for W *(s) = s<I>+(s) in going from Eq. (5.120)
to Eq . (5.123). T he solutio n we o btai n is th e familiar form,
y~O

(8.57)
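Because every step of this example is mechanical once the transforms are rational, the whole calculation can be retraced symbolically. The following is a minimal sketch (an illustration, not from the text; it assumes the Python sympy library) that verifies Eqs. (8.52)-(8.56):

import sympy as sp

s, lam, mu = sp.symbols('s lambda mu', positive=True)
rho = lam / mu

A = lam / (s + lam)                        # A*(s): exponential interarrivals
B = mu / (s + mu)                          # B*(s): exponential service

lhs = sp.cancel(A.subs(s, -s) * B - 1)     # A*(-s)B*(s) - 1
assert sp.simplify(lhs - s*(s + mu - lam)/((s + mu)*(lam - s))) == 0   # Eq. (8.52)

psi_plus = s * (s + mu - lam) / (s + mu)   # Eq. (8.53)
psi_minus = lam - s                        # Eq. (8.54)
assert sp.simplify(lhs - psi_plus/psi_minus) == 0

K = sp.limit(psi_plus / s, s, 0)           # Eq. (8.43)
assert sp.simplify(K - (1 - rho)) == 0     # Eq. (8.55): K = 1 - rho

phi_plus = sp.cancel(K / psi_plus)         # Eq. (8.41), giving Eq. (8.56)
print(sp.apart(phi_plus, s))               # partial fractions, ready to invert

Inverting the two partial-fraction terms by inspection then reproduces Eq. (8.57).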

Example 2: G/M/1†

In this case B*(s) = μ/(s + μ), but now A*(s) is completely arbitrary, giving us

A^*(-s)B^*(s) - 1 = \frac{A^*(-s)\,\mu}{s + \mu} - 1

† This example forces us to locate roots using Rouché's theorem in a way often necessary for specific G/G/1 problems when the spectrum factorization method is used. Of course, we have already studied this system in Section 6.4 and will compare the results for both methods.

and so we have

\frac{\Psi_+(s)}{\Psi_-(s)} = \frac{\mu A^*(-s) - s - \mu}{s + \mu}    (8.58)

In order to factorize we must find the roots of the numerator in this equation. We need not concern ourselves with the poles due to A*(−s) since they must lie in the region Re(s) > 0 [i.e., A(t) = 0 for t < 0] and we are attempting to find Ψ₊(s), which cannot include any such poles. Thus we only study the zeroes of the function

s + \mu - \mu A^*(-s) = 0    (8.59)

Clearly, one root of this equation occurs at s = 0. In order to find the remaining roots, we make use of Rouché's theorem (given in Appendix I but which we repeat here):

Rouché's Theorem: If f(s) and g(s) are analytic functions of s inside and on a closed contour C, and also if |g(s)| < |f(s)| on C, then f(s) and f(s) + g(s) have the same number of zeroes inside C.
In solving for the roots of Eq. (8.59) we make the identification

f(s) = s + \mu

g(s) = -\mu A^*(-s)

We have by definition

A^*(-s) = \int_0^{\infty} e^{st}\, dA(t)

We now choose C to be the contour that runs up the imaginary axis and then forms an infinite-radius semicircle moving counterclockwise and surrounding the left half of the s-plane, as shown in Figure 8.6. We consider this contour since we are concerned about all the poles and zeroes in Re(s) ≤ 0 so that we may properly include them in Ψ₊(s) [recall that Ψ₋(s) may contain none such]; Rouché's theorem will give us information concerning the number of zeroes in Re(s) ≤ 0, which we must consider. As usual, we assume that the real and imaginary parts of the complex variable s are given by σ and ω, respectively; that is, for j = √−1,

s = \sigma + j\omega

Figure 8.6 The contour C for G/M/1.


Now for Re(s) = σ ≤ 0 we have |e^{st}| = e^{σt} ≤ 1 (t ≥ 0) and so

|g(s)| = \left|\mu \int_0^\infty e^{st}\, dA(t)\right| = \left|\mu \int_0^\infty e^{\sigma t} e^{j\omega t}\, dA(t)\right| \leq \mu \int_0^\infty e^{\sigma t}\, dA(t) \leq \mu \int_0^\infty dA(t) = \mu    (8.60)

Similarly we have

|f(s)| = |s + \mu|    (8.61)

Now, examining the contour C as shown in Figure 8.6, we observe that for all points on the contour, except at s = 0, we have from Eqs. (8.60) and (8.61) that

|f(s)| = |s + \mu| > \mu \geq |g(s)|    (8.62)

This follows since s + μ (for s on C) is a vector whose length is the distance from the point −μ to the point on C where s is located. We are almost in a

position to apply Rouché's theorem; the only remaining consideration is to show that |f(s)| > |g(s)| in the vicinity of s = 0. For this purpose we allow the contour C to make a small semicircular excursion to the left of the origin, as shown in Figure 8.7.

Figure 8.7 The excursion around the origin.

We note at s = 0 that |g(0)| = |f(0)| = μ, which does not satisfy the conditions for Rouché's theorem. The small semicircular excursion of radius ε (> 0) that we take to the left of the origin overcomes this difficulty as follows. Considering an arbitrary point s on this semicircle (see the figure), which lies at an angle θ with the σ-axis, we may write s = σ + jω = −ε cos θ + jε sin θ and so we have

|f(s)|^2 = |s + \mu|^2 = |-\varepsilon\cos\theta + j\varepsilon\sin\theta + \mu|^2

Forming the product of (s + μ) and its complex conjugate, we get

|f(s)|^2 = (\mu - \varepsilon\cos\theta)^2 + \varepsilon^2\sin^2\theta = \mu^2 - 2\mu\varepsilon\cos\theta + \varepsilon^2    (8.63)

Note that the smallest value for |f(s)| occurs for θ = 0. Evaluating g(s) on this same semicircular excursion we have

|g(s)|^2 = \mu^2 \left| \int_0^\infty e^{(-\varepsilon\cos\theta + j\varepsilon\sin\theta)t}\, dA(t) \right|^2

From the power-series expansion of the exponential inside the integral we have

|g(s)|^2 = \mu^2 \left| \int_0^\infty \left[1 + (-\varepsilon\cos\theta + j\varepsilon\sin\theta)t + \cdots\right] dA(t) \right|^2

We recognize the integrals in this series as proportional to the moments of the interarrival time, and so

|g(s)|^2 = \mu^2 \left| 1 - \varepsilon\bar{t}\cos\theta + j\varepsilon\bar{t}\sin\theta + O(\varepsilon^2) \right|^2

Forming |g(s)|² by multiplying g(s) by its complex conjugate, we have

|g(s)|^2 = \mu^2 \left( 1 - 2\varepsilon\bar{t}\cos\theta + O(\varepsilon^2) \right)    (8.64)

where, as usual, ρ = x̄/t̄ = 1/μt̄. Now since θ lies in the range −π/2 ≤ θ ≤ π/2, which gives cos θ ≥ 0, we have as ε → 0 that on the shrinking semicircle surrounding the origin

\mu^2 - 2\mu\varepsilon\cos\theta > \mu^2 - \frac{2\mu\varepsilon}{\rho}\cos\theta    (8.65)

This last is true since ρ < 1 for our stable system. The left-hand side of Inequality (8.65) is merely the expression given in Eq. (8.63) for |f(s)|², correct up to the first order in ε, and the right-hand side is merely the expression in Eq. (8.64) for |g(s)|², again correct up to the first order in ε. Thus we have shown that in the vicinity of s = 0, |f(s)| > |g(s)|. This fact now having been established for all points on the contour C, we may apply Rouché's theorem and state that f(s) and f(s) + g(s) have the same number of zeroes inside the contour C. Since f(s) has only one zero (at s = −μ), it is clear that the expression given in Eq. (8.59) [f(s) + g(s)] has only one zero for Re(s) < 0; let this zero occur at the point s = −s₁. As discussed above, the point s = 0 is also a root of Eq. (8.59).
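The Rouché inequality |f(s)| > |g(s)| on the imaginary-axis portion of C is also easy to confirm numerically for any specific interarrival transform. The sketch below is an illustration, not from the text; the Erlang-2 transform and the parameter values are assumptions chosen for concreteness:

import numpy as np

lam, mu = 0.5, 1.0                      # rho = lam/mu < 1 for stability
omega = np.linspace(1e-3, 50.0, 2000)   # avoid omega = 0, where |f| = |g| = mu
s = 1j * omega

A_minus_s = (2*lam / (2*lam - s))**2    # A*(-s) for Erlang-2 interarrivals
f = np.abs(s + mu)                      # |f(s)| = |s + mu|
g = mu * np.abs(A_minus_s)              # |g(s)| = mu |A*(-s)|

assert np.all(f > g)                    # Eq. (8.62) holds away from the origin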
We may therefore write Eq. (8.58) as

\frac{\Psi_+(s)}{\Psi_-(s)} = \left[\frac{\mu A^*(-s) - s - \mu}{s(s + s_1)}\right]\left[\frac{s(s + s_1)}{s + \mu}\right]    (8.66)

where the first bracketed term contains no poles and no zeroes in Re(s) ≤ 0 (we have divided out the only two zeroes at s = 0 and s = −s₁ in this half-plane). We now wish to extend the region Re(s) ≤ 0 into the region Re(s) < D, and we choose D (> 0) such that no new zeroes or poles of Eq. (8.59) are introduced as we extend to this new region. The first bracket qualifies for [Ψ₋(s)]⁻¹, and we see immediately that the second bracket qualifies for Ψ₊(s) since none of its zeroes (s = 0, s = −s₁) or poles (s = −μ) are in Re(s) > 0. We may then factorize Eq. (8.66) in the following form:

\Psi_+(s) = \frac{s(s + s_1)}{s + \mu}    (8.67)

\Psi_-(s) = \frac{-s(s + s_1)}{s + \mu - \mu A^*(-s)}    (8.68)

We have now assured that the functions given in these last two equations satisfy Conditions (8.36) and (8.37). We evaluate the unknown constant K as follows:

K = \lim_{s \to 0} \frac{\Psi_+(s)}{s} = \lim_{s \to 0} \frac{s + s_1}{s + \mu} = \frac{s_1}{\mu} = W(0^+)    (8.69)

Thus we have from Eq. (8.41)

\Phi_+(s) = \frac{s_1(\mu + s)}{\mu s(s + s_1)}

The partial-fraction expansion for this last function gives us

\Phi_+(s) = \frac{1}{s} - \frac{1 - s_1/\mu}{s + s_1}    (8.70)

Inverting by inspection we obtain the final solution for G/M/1:

W(y) = 1 - \left(1 - \frac{s_1}{\mu}\right) e^{-s_1 y}, \qquad y \geq 0    (8.71)

The reader is urged to compare this last result with that given in Eq. (6.30), also for the system G/M/1; the correspondence is clear, and in both cases there is a single constant that must be solved for. In the solution given here that constant is found as the root of Eq. (8.59) with Re(s) < 0; in the solution given in Chapter 6, one must solve Eq. (6.28), which is equivalent to Eq. (8.59).
Example 3: E₂/M/1

The example for G/M/1 can be carried no further in the general case. We find it instructive therefore to consider a more specific G/M/1 example and finish the calculations; the example we choose is the one we used in Chapter 6, for which A*(s) is given in Eq. (6.35) and corresponds to an E₂/M/1 system, where the two arrival stages have different death rates. For that example we note that the poles of A*(−s) occur at the points s = μ, s = 2μ, which as promised lie in the region Re(s) > 0. As our first step in factorizing we form

\frac{\Psi_+(s)}{\Psi_-(s)} = A^*(-s)B^*(s) - 1 = \left[\frac{2\mu^2}{(\mu - s)(2\mu - s)}\right]\left(\frac{\mu}{s + \mu}\right) - 1 = \frac{-s(s - \mu + \mu\sqrt{2})(s - \mu - \mu\sqrt{2})}{(s + \mu)(\mu - s)(2\mu - s)}    (8.72)

Figure 8.8 Pole-zero pattern for the E₂/M/1 example.

The spectrum factorization is considerably simplified if we plot these poles and zeroes in the complex plane as shown in Figure 8.8. It is clear that the two poles and one zero in the right half-plane must be associated with Ψ₋(s). Furthermore, since the strip 0 < Re(s) < μ contains no zeroes and no poles, we choose D = μ and identify the remaining two zeroes and the single pole in the region Re(s) < D as being associated with Ψ₊(s). Note well that the zero located at s = (1 − √2)μ is in fact the single root of the expression μA*(−s) − s − μ located in the left half-plane, as discussed above, and therefore s₁ = −(1 − √2)μ. Of course, we need go no further to solve our problem since the solution is now given through Eq. (8.71); however, let us continue identifying various forms in our solution to clarify the remaining steps. With this factorization we may rewrite Eq. (8.72) as

\frac{\Psi_+(s)}{\Psi_-(s)} = \left[\frac{-(s - \mu - \mu\sqrt{2})}{(\mu - s)(2\mu - s)}\right]\left[\frac{s(s - \mu + \mu\sqrt{2})}{s + \mu}\right]

In this form we recognize the first bracket as 1/Ψ₋(s) and the second bracket as Ψ₊(s). Thus we have

\Psi_+(s) = \frac{s(s - \mu + \mu\sqrt{2})}{s + \mu}    (8.73)

We may evaluate the constant K from Eq. (8.69) to find

K = \frac{s_1}{\mu} = -1 + \sqrt{2}    (8.74)

and this of course corresponds to W(0⁺), which is the probability that a new arrival need not wait for service. Finally then we substitute these values into Eq. (8.71) to find

W(y) = 1 - (2 - \sqrt{2})\, e^{-\mu(\sqrt{2} - 1)y}, \qquad y \geq 0    (8.75)

which as expected corresponds exactly to Eq. (6.37).
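When the root of Eq. (8.59) cannot be exhibited in closed form, a one-dimensional numerical search serves just as well. The sketch below is an illustration, not from the text (it assumes the scipy library); it locates the left-half-plane root for this E₂/M/1 example, confirming s₁ = (√2 − 1)μ, and then evaluates Eq. (8.71):

import numpy as np
from scipy.optimize import brentq

mu = 1.0
A_star = lambda s: 2*mu**2 / ((s + mu)*(s + 2*mu))   # E2 transform, stage rates mu, 2mu

h = lambda s: s + mu - mu*A_star(-s)     # left side of Eq. (8.59)
root = brentq(h, -0.9*mu, -0.1*mu)       # bracket excludes the known root at s = 0
s1 = -root

print(s1, (np.sqrt(2) - 1)*mu)           # both print ~0.414214
W = lambda y: 1 - (1 - s1/mu)*np.exp(-s1*y)   # Eq. (8.71), i.e. Eq. (8.75)
print(W(0.0), np.sqrt(2) - 1)            # W(0+) = s1/mu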


This method of spectrum factorization has been used successfully by Rice [RICE 62], who considers the busy period for the G/G/1 system. Among the interesting results available, there is one corresponding to the limiting distribution of long waiting times in the heavy-traffic case (which we develop in Section 2.1 of Volume II); Rice gives a similar approximation for the duration of a busy period in the heavy-traffic case.

8.3. KINGMAN'S ALGEBRA FOR QUEUES

Let us once again state the fundamental relationships underlying the G/G/1 queue. For uₙ = xₙ − tₙ₊₁ we have the basic relationship

w_{n+1} = (w_n + u_n)^+    (8.76)

and we have also seen that

w_n = \max[0,\, u_{n-1},\, u_{n-1} + u_{n-2},\, \ldots,\, u_{n-1} + \cdots + u_1,\, u_{n-1} + \cdots + u_0 + w_0]

We observed earlier that {wₙ} is a Markov process with stationary transition probabilities; its total stochastic structure is given by P[wₘ₊ₙ ≤ y | wₘ = x], which may be calculated as an n-fold integral over the n-dimensional joint distribution of the n random variables wₘ₊₁, …, wₘ₊ₙ over that region of the space which results in wₘ₊ₙ ≤ y. This calculation is much too complicated, and so we look for alternative means to solve this problem. Pollaczek [POLL 57] used a spectral approach and complex integrals to carry out the solution. Lindley [LIND 52] observed that wₙ has the same distribution as w′ₙ, defined earlier as

w'_n = \max[0,\, U_1,\, U_2,\, \ldots,\, U_n]

If we have the case E[uₙ] < 0, which corresponds to ρ = x̄/t̄ < 1, then a stable solution exists for the limiting random variable w̃ such that

\tilde{w} = \sup_{n \geq 0} U_n    (8.77)

independent of w₀. The method of spectrum factorization given in the previous section is Smith's [SMIT 53] approach to the solution of Lindley's Wiener-Hopf integral equation. Another approach due to Spitzer using combinatorial methods leads to Spitzer's identity [SPIT 57]. Many proofs for this identity exist, and Wendel [WEND 58] carried it out by exposing the underlying algebraic structure of the problem. Keilson [KEIL 65] demonstrated the application of Green's functions to the solution of G/G/1. Beneš [BENE 63] also considered the G/G/1 system by investigating the unfinished work and its variants. These many approaches, each of which is rather complicated, force one to inquire whether or not there is a larger underlying structure which places these solution methods in a common framework. In 1966 Kingman [KING 66] addressed this problem and introduced his algebra for queues to expose the common structure; we study this algebra briefly in this section.
From Eq. (8.76) we clearly could solve for the pdf of wₙ₊₁ iteratively, starting with n = 0 and with a given pdf for w₀; recall that the pdf for uₙ [i.e., c(u)] is independent of n. Our iterative procedure would proceed as follows. Suppose we had already calculated the pdf for wₙ, which we denote by wₙ(y) ≜ dWₙ(y)/dy, where Wₙ(y) = P[wₙ ≤ y]. To find wₙ₊₁(y) we follow the prescription given in Eq. (8.76) and begin by forming the pdf for the sum wₙ + uₙ, which, due to the independence of these two random variables, is clearly the convolution wₙ(y) ⊛ c(y). This convolution will result in a density function that has nonnegative values for negative as well as positive values of its argument. However, Eq. (8.76) requires that our next step in the calculation of wₙ₊₁(y) is to calculate the pdf associated with (wₙ + uₙ)⁺; this requires that we take the total probability associated with all negative arguments for the density just found [i.e., for wₙ(y) ⊛ c(y)] and collect it together as an impulse of probability located at the origin for wₙ₊₁(y). The value of this impulse will just be the integral of our former density on the negative half line. We say in this case that "we sweep the probability in the negative half line up to the origin." The values found from the convolution on the positive half line are correct for wₙ₊₁ in that region. The algebra that describes this operation is that which Kingman introduces for studying the system G/G/1. Our iterative procedure continues by next forming the convolution of wₙ₊₁(y) with c(y), sweeping the probability in the negative half line up to the origin to form wₙ₊₂(y), and then proceeds to form wₙ₊₃(y) in a like fashion, and so on.
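This convolve-and-sweep procedure is straightforward to carry out on a discretized grid. The following sketch is an illustration, not from the text; the M/M/1 parameters and the grid sizes are assumptions. The atom of probability at the origin converges to P[w̃ = 0] = 1 − ρ:

import numpy as np

# Grid: index k represents the value k*dy; index 0 carries the impulse at the origin.
dy, N = 0.05, 800
lam, mu = 0.5, 1.0                                 # rho = 0.5

u = np.arange(-N, N + 1) * dy                      # support of u = x - t
c = np.where(u >= 0, np.exp(-mu*u), np.exp(lam*u)) # shape of c(u) for M/M/1
c /= c.sum()                                       # normalize to a pmf on the grid

w = np.zeros(N + 1)
w[0] = 1.0                                         # w_0 = 0 with probability 1
for _ in range(100):
    full = np.convolve(w, c)                       # pmf of w_n + u_n; index N is the origin
    nxt = full[N:2*N + 1].copy()                   # keep the nonnegative half line
    nxt[0] += full[:N].sum()                       # sweep the negative mass to the origin
    w = nxt / nxt.sum()                            # renormalize (grid truncation)

print(w[0])                                        # approaches 1 - rho = 0.5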


The elements of this algebra consist of all finite signed measures on the real line (for example, a pdf on the real line). For any two such measures, say h₁ and h₂, the sum h₁ + h₂ and also all scalar multiples of either belong to this algebra. The product operation h₁ ⊛ h₂ is defined as the convolution of h₁ with h₂. It can be shown that this algebra is a real commutative algebra. There also exists an identity element denoted by e such that e ⊛ h = h for any h in the algebra, and it is clear that e will merely be a unit impulse located at the origin. We are interested in operators that map real functions into other real functions and that are measurable. Specifically we are interested in the operator that takes a value x and maps it into the value (x)⁺, where as usual we have (x)⁺ ≜ max[0, x]. Let us denote this operator by π, which is not to be confused with the matrix of transition probabilities used in Chapter 2; thus, if we let A denote some event which is measurable, and let h(A) = P{ω: X(ω) ∈ A} denote the measure of this event, then π is defined through

\pi[h(A)] = P\{\omega : X^+(\omega) \in A\}

We note the linearity of this operator, that is, π(ah) = aπ(h) and π(h₁ + h₂) = π(h₁) + π(h₂). Thus we have a commutative algebra (with identity) along with the linear operator π that maps this algebra into itself. Since [(x)⁺]⁺ = (x)⁺ we see that an important property of this operator π is that

\pi^2 = \pi

A linear operator satisfying such a condition is referred to as a projection. Furthermore, a projection whose range and null space are both subalgebras of the underlying algebra is called a Wendel projection; it can be shown that π has this property, and it is this that makes the solution for G/G/1 possible.
Now let us return to considerations of the queue G/G/1. Recall that the random variable uₙ has pdf c(u) and that the waiting time for the nth customer wₙ has pdf wₙ(y). Again since uₙ and wₙ are independent, wₙ + uₙ has pdf c(y) ⊛ wₙ(y). Furthermore, since wₙ₊₁ = (wₙ + uₙ)⁺ we have therefore

w_{n+1}(y) = \pi(c(y) \circledast w_n(y)), \qquad n = 0, 1, \ldots    (8.78)

and this equation gives the pdf for waiting times by induction. Now if ρ < 1 the limiting pdf w(y) exists and is independent of w₀. That is, w̃ must have the same pdf as (w̃ + ũ)⁺ (a remark due to Lindley [LIND 52]). This gives us the basic equation defining the stationary pdf for waiting time in G/G/1:

w(y) = \pi(c(y) \circledast w(y))    (8.79)

The solution of this equation is of main interest in solving G/G/1. The remaining portion of this section gives a succinct summary of some elegant results involving this algebra; only the courageous are encouraged to continue.


The particular formalism used for constructing this algebra and carrying out the solution of Eq. (8.79) is what distinguishes the various methods we have mentioned above. In order to see the relationship among the various approaches we now introduce Spitzer's identity. In order to state this identity, which involves the recurrence relation given in Eq. (8.78), we must introduce the following z-transform:

X(z, y) = \sum_{n=0}^{\infty} w_n(y)\, z^n    (8.80)

Addition and scalar multiplication may be defined in the obvious way for this power series, and "multiplication" will be defined as corresponding to convolution, as is the usual case for transforms. Spitzer's identity is then given as

X(z, y) = e^{-\pi(Y)}    (8.81)

where

Y \triangleq \log\left[e - zc(y)\right]    (8.82)

Thus wₙ(y) may be found by expanding X(z, y) as a power series in z and picking out the coefficient of zⁿ. It is not difficult to show that

X(z, y) = w_0(y) + z\pi(c(y) \circledast X(z, y))    (8.83)

We may also form a generating function on the sequence E[e^{−swₙ}] ≜ Wₙ*(s), which permits us to find the transform of the limiting waiting time; that is,

\lim_{n \to \infty} W_n^*(s) = W^*(s) \triangleq E[e^{-s\tilde{w}}]

so long as ρ < 1. This leads us to the following equation, which is also referred to as Spitzer's identity and is directly applicable to our queueing problem†:

W^*(s) = \exp\left(-\sum_{n=1}^{\infty} \frac{1}{n} E\left[1 - e^{-s(U_n)^+}\right]\right)    (8.84)

We never claimed it would be simple!

If we deal with Wₙ*(s) it is possible to define another real commutative algebra (in which the product is defined as multiplication rather than convolution, as one might expect). The algebraic solution to our basic equation (8.79) may be carried out in either of these two algebras; in the transformed case one deals with the power series

X^*(z, s) = \sum_{n=0}^{\infty} W_n^*(s)\, z^n    (8.85)

rather than with the series given in Eq. (8.80).

† From this identity we easily find that

E[\tilde{w}] = W = \sum_{n=1}^{\infty} \frac{1}{n} E[(U_n)^+]
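The mean-value by-product of Spitzer's identity quoted in the footnote lends itself to a direct Monte Carlo check. The sketch below is an illustration, not from the text; the M/M/1 parameters and the truncation point of the series are assumptions. It estimates W = Σₙ (1/n)E[(Uₙ)⁺] and compares it with the known M/M/1 mean wait ρ/[μ(1 − ρ)]:

import numpy as np

rng = np.random.default_rng(1)
lam, mu = 0.5, 1.0                        # rho = 0.5
reps, nmax = 10_000, 200                  # replications; series truncated at nmax

x = rng.exponential(1/mu, (reps, nmax))   # service times
t = rng.exponential(1/lam, (reps, nmax))  # interarrival times
U = np.cumsum(x - t, axis=1)              # U_n = u_0 + ... + u_{n-1}

terms = np.maximum(U, 0.0).mean(axis=0) / np.arange(1, nmax + 1)
rho = lam / mu
print(terms.sum(), rho / (mu*(1 - rho)))  # both are ~1.0 for these parameters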


Pollaczek considers this latter case and for G/G/1 obtains the following equation, which serves to define the system behavior:

X^*(z, s) = W_0^*(s) + \frac{zs}{2\pi j} \int_{-j\infty}^{+j\infty} \frac{C^*(s')\, X^*(z, s')}{s'(s' - s)}\, ds'    (8.86)

and he then shows after considerable complexity that this solution must be of the form

X^*(z, s) = e^{-T(Y(s))}    (8.87)

where

Y(s) \triangleq \log\left(1 - zC^*(s)\right)

and

T(Y(s)) \triangleq \frac{s}{2\pi j} \int_{-j\infty}^{+j\infty} \frac{Y(s')}{s'(s' - s)}\, ds'

When C*(s) is simple enough, these expressions can be evaluated by contour integrals.
On the other hand, the method we have described in the previous section using spectrum factorization may be phrased in terms of this algebra as follows. If we replace sΦ₊(s) by W*(s) and sΦ₋(s) by W₋*(s), then our basic equation reads

W^*(s) + W_-^*(s) = C^*(s)\, W^*(s)

Corresponding to Eq. (8.83) the transformed version becomes

T\left(X^*(z, s) - W_0^*(s) - zC^*(s)X^*(z, s)\right) = 0

and the spectrum factorization takes the form

1 - zC^*(s) = e^{T(Y(s))}\, e^{Y(s) - T(Y(s))}    (8.88)

This spectrum factorization, of course, is the critical step.
This unification as an algebra for queues is elegant but as yet has provided little in the way of extending the theory. In particular, Kingman points out that this approach does not easily extend to the system G/G/m since, whereas the range of this algebra is a subalgebra, its null space is not; therefore, we do not have a Wendel projection. Perhaps the most enlightening aspect of this discussion is the significant equation (8.79), which gives the basic condition that must be satisfied by the pdf of waiting time. We take advantage of its recurrence form, Eq. (8.78), in Chapter 2, Volume II.


8.4. THE IDLE TIME AND DUALITY


Here we obtain an expression for W*(s) in terms of the transform of the idle-time pdf and interpret this result in terms of duality in queues.

Let us return to the basic equation given in (8.5), that is,

w_{n+1} = \max[0,\, w_n + u_n]

We now define a new random variable, which is the "other half" of the waiting time, namely,

y_n = -\min[0,\, w_n + u_n]    (8.89)

This random variable in some sense corresponds to the random variable whose distribution is W₋(y), which we studied earlier. Note from these last two equations that when yₙ > 0 then wₙ₊₁ = 0, in which case yₙ is merely the length of the idle period, which is terminated with the arrival of Cₙ₊₁. Moreover, since either wₙ₊₁ or yₙ must be 0, we have that

w_{n+1}\, y_n = 0    (8.90)

We adopt the convention that in order for an idle period to exist, it must have nonzero length, and so if yₙ and wₙ₊₁ are both 0, then we say that the busy period continues (an annoying triviality).

From the definitions we observe the following to be true in all cases:

w_{n+1} - y_n = w_n + u_n    (8.91)
From this last equation we may obtain a number of important results, and we proceed here as we did in Chapter 5, where we derived the expected queue size for the system M/G/1 using the imbedded Markov chain approach. In particular, let us take the expectation of both sides of Eq. (8.91) to give

E[w_{n+1}] - E[y_n] = E[w_n] + E[u_n]

We assume E[uₙ] < 0, which (except for D/D/1, where E[uₙ] ≤ 0 will do) is the necessary and sufficient condition for there to be a stationary (and unique) waiting-time distribution independent of n; this is the same as requiring ρ = x̄/t̄ < 1. In this case we have*

\lim_{n \to \infty} E[w_{n+1}] = \lim_{n \to \infty} E[w_n]
* One must be cautious in claiming that

\lim_{n \to \infty} E[w_n^k] = \lim_{n \to \infty} E[w_{n+1}^k]

since these are distinct random variables. We permit that step here, but refer the interested reader to Wolff [WOLF 70] for a careful treatment.

and so our earlier equation gives

E[\tilde{y}] = -E[\tilde{u}]    (8.92)

where yₙ → ỹ and uₙ → ũ. (We note that the idle periods are independent and identically distributed, but the duration of an idle period does depend upon the duration of the previous busy period.) Now from Eq. (8.13) we have E[ũ] = t̄(ρ − 1) and so

E[\tilde{y}] = \bar{t}(1 - \rho)    (8.93)
Let us now square Eq. (8.91) and then take expected values as follows:

E[(w_{n+1} - y_n)^2] = E[(w_n + u_n)^2]

Using Eq. (8.90) and recognizing that the moments of the limiting distribution on wₙ must be independent of the subscript, we have

E[(\tilde{y})^2] = 2E[\tilde{w}\tilde{u}] + E[(\tilde{u})^2]

We now revert to the simpler notation for moments, \overline{w^k} ≜ E[(w̃)ᵏ], etc. Since wₙ and uₙ are independent random variables we have E[w̃ũ] = w̄ū; using this and Eq. (8.92) we find

W = \bar{w} = \frac{\overline{u^2}}{-2\bar{u}} - \frac{\overline{y^2}}{2\bar{y}}    (8.94)

Recalling that the mean residual life of a random variable X is given by \overline{X^2}/2\bar{X}, we observe that W is merely the mean residual life of −ũ less the mean residual life of ỹ! We must now evaluate the second moment of ũ. Since ũ = x̃ − t̃, we have \overline{u^2} = \overline{(x - t)^2}, which gives

\overline{u^2} = \sigma_a^2 + \sigma_b^2 + \bar{t}^2(1 - \rho)^2    (8.95)

where σₐ² and σᵦ² are the variances of the interarrival-time and service-time densities, respectively. Using this expression and our previous result for ū, we may thus convert Eq. (8.94) to

W = \frac{\sigma_a^2 + \sigma_b^2 + \bar{t}^2(1 - \rho)^2}{2\bar{t}(1 - \rho)} - \frac{\overline{y^2}}{2\bar{y}}    (8.96)

We must now calculate the first two moments of ỹ [we already know that ȳ = t̄(1 − ρ) but wish to express it differently to eliminate a constant]. This we do by conditioning these moments with respect to the occurrence of an idle period. That is, let us define

a_0 = P[\tilde{y} > 0] = P[\text{arrival finds the system idle}]    (8.97)

It is clear that we have a stable system when a₀ > 0. Furthermore, since we have defined an idle period to occur only when the system remains idle for a nonzero interval of time, we have that

P[\tilde{y} \leq y \mid \tilde{y} > 0] = P[\text{idle period} \leq y]    (8.98)

and this last is just the idle-period distribution earlier denoted by F(y). We denote by Ĩ the random variable representing the idle period. Now we may calculate the following:

\bar{y} = E[\tilde{y} \mid \tilde{y} = 0]P[\tilde{y} = 0] + E[\tilde{y} \mid \tilde{y} > 0]P[\tilde{y} > 0] = 0 + a_0 E[\tilde{y} \mid \tilde{y} > 0]

The expectation in this last equation is merely the expected value of Ĩ, and so we have

\bar{y} = a_0 \bar{I}    (8.99)
Similarly, we find

\overline{y^2} = a_0 \overline{I^2}    (8.100)

Thus, in particular, \overline{y^2}/2\bar{y} = \overline{I^2}/2\bar{I} (a₀ cancels!) and so we may rewrite the expression for W in Eq. (8.96) as

W = \frac{\sigma_a^2 + \sigma_b^2 + \bar{t}^2(1 - \rho)^2}{2\bar{t}(1 - \rho)} - \frac{\overline{I^2}}{2\bar{I}}    (8.101)

Unfortunately, this is as far as we can go in establishing W for G/G/1. The calculation now involves the determination of the first two moments of the idle period. In general, for G/G/1 we cannot easily solve for these moments since the idle period depends upon the particular way in which the previous busy period terminated. However, in Chapter 2, Volume II, we place bounds on the second term in this equation, thereby bounding the mean wait W.
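Although the idle-period moments resist analysis, they are trivial to measure by simulation, after which Eq. (8.101) can be checked directly. The following sketch is an illustration, not from the text; the M/M/1 parameters are assumptions chosen so that the answer is known. It runs the recursion wₙ₊₁ = max[0, wₙ + uₙ], collects the idle periods yₙ > 0, and compares the two sides:

import numpy as np

rng = np.random.default_rng(2)
lam, mu, n = 0.5, 1.0, 200_000
u = rng.exponential(1/mu, n) - rng.exponential(1/lam, n)   # u_n = x_n - t_{n+1}

w, waits, idles = 0.0, [], []
for un in u:
    z = w + un
    w = max(z, 0.0)                      # Lindley's recursion, Eq. (8.5)
    if z < 0.0:
        idles.append(-z)                 # y_n > 0: an idle period of length -z
    waits.append(w)

tbar, rho = 1/lam, lam/mu
sa2, sb2 = 1/lam**2, 1/mu**2             # interarrival and service variances
I = np.array(idles)
W_101 = (sa2 + sb2 + tbar**2*(1 - rho)**2) / (2*tbar*(1 - rho)) \
        - (I**2).mean() / (2*I.mean())   # right side of Eq. (8.101)
print(W_101, np.mean(waits))             # both ~1.0 for these parameters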
As we did for M/G/1 in Chapter 5, we now return to our basic equation (8.91) relating the important random variables and attempt to find the transform of the waiting-time density W*(s) ≜ E[e^{−sw̃}] for G/G/1. As one might expect, this will involve the idle-time distribution as well. Forming the transform on both sides of Eq. (8.91) we have

E[e^{-s(w_{n+1} - y_n)}] = E[e^{-s(w_n + u_n)}]

However, since wₙ and uₙ are independent, we find

E[e^{-s(w_{n+1} - y_n)}] = W_n^*(s)\, C^*(s)    (8.102)

In order to evaluate the left-hand side of this transform expression we take advantage of the fact that only one or the other of the random variables wₙ₊₁ and yₙ may be nonzero. Accordingly, we have

E[e^{-s(w_{n+1} - y_n)}] = E[e^{sy_n} \mid y_n > 0]P[y_n > 0] + E[e^{-sw_{n+1}} \mid y_n = 0]P[y_n = 0]    (8.103)

To determine the right-hand side of this last equation we may use the following similar expansion:

E[e^{-sw_{n+1}}] = E[e^{-sw_{n+1}} \mid y_n = 0]P[y_n = 0] + E[e^{-sw_{n+1}} \mid y_n > 0]P[y_n > 0]    (8.104)

However, since wₙ₊₁yₙ = 0, we have E[e^{−swₙ₊₁} | yₙ > 0] = 1. Making use of the definition for a₀ in Eq. (8.97) and allowing the limit as n → ∞, we obtain the following transform expression from Eq. (8.104):

E[e^{-s\tilde{w}} \mid \tilde{y} = 0]P[\tilde{y} = 0] = W^*(s) - a_0

We may then write the limiting form of Eq. (8.103) as

E[e^{-s(\tilde{w} - \tilde{y})}] = I^*(-s)\, a_0 + W^*(s) - a_0    (8.105)
We may then write the limiting form of Eq . (8. 103) as


E [e- sl ;;;- '

)]

= 1* ( -

s)oo + W*(s) -

00

(8.105)

wher e 1*(s) is the Laplace transform of the idle-time pdf [see Eq. (8.98)
for the defin ition of th is d istribution ]. Thus, fro m thi s last a nd from Eq.
(8.102), we ob tai n immediatel y

W*(s)C*(s) =

0 01*( -

s) + W*(s) -

00

where as in the past C*(s) is the Laplace tr an sform for the den sity describing
the random varia ble ii . Thi s last equation fina lly gives us [M ARS 68]

*
0 0 [1 - 1*(- s)]
W (s) =~ 1 - C*(s)

- (8.106)

which represents the genera lization o f the Pollaczek-Khinchin tran sform


equ at ion given in Chap ter 5 and which now appl ies to the system G/G/ 1.
Clearly this eq ua tio n hold s a t least alo ng the imagi nary ax is of the complex splane , since in that case it become s the characteristic func tion of the various
distr ibution s which a re kn own to exist .
Let us now co nsider some examples.
Example 1: M/M/1

For this system we know that the idle-period distribution is the same as the interarrival-time distribution, namely,

F(y) = P[\tilde{I} \leq y] = 1 - e^{-\lambda y}, \qquad y \geq 0    (8.107)

And so we have the first two moments \bar{I} = 1/λ and \overline{I^2} = 2/λ²; we also have σₐ² = 1/λ² and σᵦ² = 1/μ². Using these values in Eq. (8.101) we find

W = \frac{\lambda^2(1/\lambda^2 + 1/\mu^2) + (1 - \rho)^2}{2\lambda(1 - \rho)} - \frac{1}{\lambda}

and so

W = \frac{\rho/\mu}{1 - \rho}    (8.108)

which of course checks with our earlier results for M/M/1.


We kn ow th at I *(s) = A/ (s + A) and C*(s) = ).fl/ (A - s)(s + fl). Moreove r , since the prob ability that a Poisson a rrival finds the system empty is the
sa me as the lon g-run proportion of time the system is empty, we have that
Go = I - P and so Eq . (8 . 106) yields

W (s) =

( I - p)[1 - A/(A - s)]

"------'-'~-'-"---'-=

I - ).fl/(A - s)(s

- (I - p)s(s

+ fl)

+ fl) p)(s + fl)

(A - s)(s

(1 -

+ fl)
Afl

(8.109)

S + fl - A
which is the sa me as Eq . (5. I20).

Example 2: M/G/1

In this case the idle-time distribution is as in M/M/1; however, we must leave the variance of the service-time distribution as an unknown. We obtain

W = \frac{\lambda^2\left[(1/\lambda^2) + \sigma_b^2\right] + (1 - \rho)^2}{2\lambda(1 - \rho)} - \frac{1}{\lambda} = \frac{\rho(1 + C_b^2)}{2\mu(1 - \rho)}    (8.110)

which is the P-K formula. Also, C*(s) = B*(s)λ/(λ − s) and again a₀ = 1 − ρ. Equation (8.106) then gives

W^*(s) = \frac{(1 - \rho)\left[1 - \lambda/(\lambda - s)\right]}{1 - \left[\lambda/(\lambda - s)\right]B^*(s)} = \frac{s(1 - \rho)}{s - \lambda + \lambda B^*(s)}    (8.111)

which is the P-K transform equation for waiting time!
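The algebra that carries Eq. (8.106) into Eq. (8.111) can also be confirmed symbolically, with B*(s) left completely unspecified. A minimal sketch (an illustration, not from the text; it assumes the sympy library):

import sympy as sp

s, lam, rho = sp.symbols('s lambda rho', positive=True)
B = sp.Function('B')(s)                  # B*(s) left as an unknown function

I_neg = lam / (lam - s)                  # I*(-s): exponential idle period
C = lam * B / (lam - s)                  # C*(s) = A*(-s)B*(s) for Poisson arrivals

W = (1 - rho)*(1 - I_neg) / (1 - C)      # Eq. (8.106) with a0 = 1 - rho
PK = s*(1 - rho) / (s - lam + lam*B)     # Eq. (8.111)
print(sp.simplify(W - PK))               # prints 0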

Example 3: D/D/1

In this case we have that the length of the idle period is a constant and is given by Ĩ = t̄ − x̄ = t̄(1 − ρ); therefore \bar{I} = t̄(1 − ρ) and \overline{I^2} = [t̄(1 − ρ)]². Moreover, σₐ² = σᵦ² = 0. Therefore Eq. (8.101) gives

W = \frac{0 + 0 + \bar{t}^2(1 - \rho)^2}{2\bar{t}(1 - \rho)} - \frac{\left[\bar{t}(1 - \rho)\right]^2}{2\bar{t}(1 - \rho)}

and so

W = 0    (8.112)

This last is of course correct since the equilibrium waiting time in the (stable) system D/D/1 is always zero.

Since x̃, t̃, and Ĩ are all constants, we have B*(s) = e^{−sx̄}, A*(s) = e^{−st̄}, and I*(s) = e^{−sĪ} = e^{−st̄(1−ρ)}. Also, with probability one an arrival finds the system empty; thus a₀ = 1. Then Eq. (8.106) gives

W^*(s) = \frac{1 \cdot \left[1 - e^{s\bar{t}(1 - \rho)}\right]}{1 - e^{s\bar{t}}\, e^{-s\bar{x}}} = 1    (8.113)

and so w(y) = u₀(y), an impulse at the origin, which of course checks with the result that no waiting occurs.
Considerations of the idle-time distribution naturally lead us to the study of duality in queues. This material is related to the ladder indices we defined in Section 5.11. The random walk we are interested in is the sequence of values taken on by Uₙ [as given in Eq. (8.9)]. Let us denote by U_{n_k} the value taken on by Uₙ at the kth ascending ladder index (an instant at which the function first exceeds its previous maximum). Since ū < 0 it is clear that lim Uₙ = −∞ as n → ∞. Therefore, there will exist a (finite) integer K such that n_K is the last ascending ladder index for Uₙ. Now from Eq. (8.11), repeated below,

\tilde{w} = \sup_{n \geq 0} U_n

it is clear that

\tilde{w} = U_{n_K}

Now let us define the random variable Ĩₖ (which as we shall see is related to an idle time) as

\tilde{I}_k = U_{n_k} - U_{n_{k-1}}    (8.114)

for k ≤ K. That is, Ĩₖ is merely the amount by which the new ascending ladder height exceeds the previous ascending ladder height. Since all of the random variables uₙ are independent, the random variables Ĩₖ conditioned on K are independent and identically distributed. If we now let 1 − σ = P[Uₙ ≤ U_{n_k} for all n > n_k], then we may easily calculate the distribution for K as

P[K = k] = (1 - \sigma)\sigma^k    (8.115)

In Exercise 8.16 we show that (1 − σ) = P[w̃ = 0]. Also it is clear that

\tilde{I}_1 + \tilde{I}_2 + \cdots + \tilde{I}_K = U_{n_1} - U_{n_0} + U_{n_2} - U_{n_1} + \cdots + U_{n_K} - U_{n_{K-1}} = U_{n_K}

where n₀ ≜ 0 and U₀ ≜ 0. Thus we see that w̃ has the same distribution as Ĩ₁ + ⋯ + Ĩ_K, and so we may write

E[e^{-s\tilde{w}}] = E\left[E[e^{-s(\tilde{I}_1 + \cdots + \tilde{I}_K)} \mid K]\right] = E[(I^*(s))^K]    (8.116)

where I*(s) is the Laplace transform for the pdf of each of the Ĩₖ (each of which we now denote simply by Ĩ). We may now evaluate the expectation in Eq. (8.116) by using the distribution for K in Eq. (8.115) finally to yield

W^*(s) = \frac{1 - \sigma}{1 - \sigma I^*(s)}    (8.117)
Here, then, is yet another expression for W*(s) in the G/G/1 system.
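The geometric structure in Eqs. (8.115)-(8.117) can be seen directly in simulation. The sketch below is an illustration, not from the text; the M/M/1 parameters, the truncation horizon, and the replication count are assumptions. It counts the ascending ladder epochs K of the walk Uₙ and compares the empirical distribution of K with (1 − σ)σᵏ, using σ = ρ for M/M/1 (see Example 5 below):

import numpy as np

rng = np.random.default_rng(3)
lam, mu = 0.5, 1.0
reps, nmax = 20_000, 400                 # truncation is safe: E[u] = -1 per step

K = np.empty(reps, dtype=int)
for r in range(reps):
    u = rng.exponential(1/mu, nmax) - rng.exponential(1/lam, nmax)
    U = np.cumsum(u)                     # U_1, U_2, ..., U_nmax
    prev_max = np.maximum.accumulate(np.concatenate(([0.0], U[:-1])))
    K[r] = np.count_nonzero(U > prev_max)    # strict new maxima = ladder epochs

sigma = lam / mu
for k in range(3):
    print(np.mean(K == k), (1 - sigma)*sigma**k)   # Eq. (8.115)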
We now wish to interpret the random variable Ĩ by considering a "dual" queue (whose variables we will distinguish by the use of the symbol ^). The dual queue for the G/G/1 system considered above is the queue in which the service times xₙ in the original system become the interarrival times t̂ₙ₊₁ in the dual queue, and also the interarrival times tₙ₊₁ from the original queue become the service times x̂ₙ in the dual queue.† It is clear then that the random variable ûₙ for the dual queue will merely be ûₙ = x̂ₙ − t̂ₙ₊₁ = tₙ₊₁ − xₙ = −uₙ, and defining Ûₙ = û₀ + ⋯ + ûₙ₋₁ for the dual queue we have

\hat{U}_n = -U_n    (8.118)

† Clearly, if the original queue is stable, the dual must be unstable, and conversely (except that both may be unstable if ρ = 1).

as the relationship between the dual and the original queues. It is then clear from our discussion in Section 5.11 that the ascending and descending ladder indices are interchanged for the original and the dual queue (the same is true of the ladder heights). Therefore the first ascending ladder index n₁ in the original queue will correspond to the first descending ladder index in the dual queue; however, we recall that descending ladder indices correspond to the arrival of a customer who terminates an idle period. We denote this customer by C_{n₁}. Clearly the length of the idle period that he terminates in the dual queue is the difference between the accumulated interarrival times and the accumulated service times for all customers up to his arrival (these services must have taken place in the first busy period); that is, for the dual queue,

\text{Length of first idle period following first busy period} = \sum_{n=0}^{n_1 - 1} \hat{t}_{n+1} - \sum_{n=0}^{n_1 - 1} \hat{x}_n = \sum_{n=0}^{n_1 - 1} x_n - \sum_{n=0}^{n_1 - 1} t_{n+1} = U_{n_1} = \tilde{I}_1    (8.119)

where we have used Eq. (8.114) at the last step. Thus we see that the random variable Ĩ is merely the idle period in the dual queue, and so our Eq. (8.117) relates the transform of the waiting time in the original queue to the transform of the idle-time pdf in the dual queue [contrast this with Eq. (8.106), which relates this waiting-time transform to the transform of the idle time in its own queue].
This duality observation permits some rather powerful conclusions to be drawn in simple fashion (and these are discussed at length in [FELL 66], especially Sections VI.9 and XII.5). Let us discuss two of these.
Example 4: G/M/1

If we have a stable G/M/1 queue (with t̄ = 1/λ and x̄ = 1/μ), then the dual is an unstable queue of the type M/G/1 (with mean interarrival time 1/μ and mean service time 1/λ), and so Ĩ (the idle time in the dual queue) will be of exponential form; therefore I*(s) = μ/(s + μ), which gives from Eq. (8.117) the following result for the original G/M/1 queue:

W^*(s) = \frac{(1 - \sigma)(s + \mu)}{s + \mu - \sigma\mu}

Inverting this and forming the PDF for waiting time we have

W(y) = 1 - \sigma e^{-\mu(1 - \sigma)y}, \qquad y \geq 0    (8.120)

which corresponds exactly to Eq. (6.30).
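Since Eq. (8.59) evaluated at s = −μ(1 − σ) reduces to σ = A*(μ − μσ), the single unknown in Eq. (8.120) can be computed by a simple fixed-point iteration. A sketch (an illustration, not from the text) for the E₂/M/1 example of Section 8.2:

import numpy as np

mu = 1.0
A_star = lambda s: 2*mu**2 / ((s + mu)*(s + 2*mu))   # E2 interarrival transform

sigma = 0.5                               # any starting guess in (0, 1)
for _ in range(100):
    sigma = A_star(mu*(1 - sigma))        # iterate sigma = A*(mu(1 - sigma))

print(sigma, 2 - np.sqrt(2))              # both ~0.585786
# Eq. (8.120) gives W(y) = 1 - sigma*exp(-mu*(1 - sigma)*y), which matches
# Eq. (8.75) since s1 = mu*(1 - sigma).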

Example 5: M/G/1

As a second example, let the original queue be of the form M/G/1, so that the dual is of the form G/M/1. Since σ = P[w̃ > 0], it must be that σ = ρ for M/G/1. Now in the dual system, since a busy period ends at a random point in time (and since the service time in this dual queue is memoryless), an idle period will have a duration equal to the residual life of an interarrival time; therefore from Eq. (5.11) we see that

I^*(s) = \frac{1 - B^*(s)}{s\bar{x}}    (8.121)

and when these calculations are applied to Eq. (8.117) we have

W^*(s) = \frac{1 - \rho}{1 - \rho\left\{\left[1 - B^*(s)\right]/s\bar{x}\right\}}    (8.122)

which is the P-K transform equation for waiting time rewritten as in Eq. (5.106).
This concludes our study of G/G/1. Sad to say, we have been unable to give analytic expressions for the waiting-time distribution explicitly in terms of known quantities. In fact, we have not even succeeded for the mean wait W! Nevertheless, we have given a method for handling the rational case by spectrum factorization, which is quite effective. In Chapter 2, Volume II, we return to G/G/1 and succeed in extracting many of its important properties through the use of bounds, inequalities, and approximations.


REFERENCES

ANDE 53a  Andersen, S. E., "On Sums of Symmetrically Dependent Random Variables," Skand. Aktuar., 36, 123-138 (1953).
ANDE 53b  Andersen, S. E., "On the Fluctuations of Sums of Random Variables I," Math. Scand., 1, 263-285 (1953).
ANDE 54   Andersen, S. E., "On the Fluctuations of Sums of Random Variables II," Math. Scand., 2, 195-223 (1954).
BENE 63   Beneš, V. E., General Stochastic Processes in the Theory of Queues, Addison-Wesley (Reading, Mass.), 1963.
FELL 66   Feller, W., Probability Theory and its Applications, Vol. II, Wiley (New York), 1966.
KEIL 65   Keilson, J., "The Role of Green's Functions in Congestion Theory," Proc. Symp. on Congestion Theory (edited by W. L. Smith and W. E. Wilkinson), Univ. of North Carolina Press (Chapel Hill), 43-71 (1965).
KING 66   Kingman, J. F. C., "On the Algebra of Queues," Journal of Applied Probability, 3, 285-326 (1966).
LIND 52   Lindley, D. V., "The Theory of Queues with a Single Server," Proc. Cambridge Philosophical Society, 48, 277-289 (1952).
MARS 68   Marshall, K. T., "Some Relationships between the Distributions of Waiting Time, Idle Time, and Interoutput Time in the GI/G/1 Queue," SIAM Journal on Applied Mathematics, 16, 324-327 (1968).
POLL 57   Pollaczek, F., Problèmes Stochastiques Posés par le Phénomène de Formation d'une Queue d'Attente à un Guichet et par des Phénomènes Apparentés, Gauthier-Villars (Paris), 1957.
RICE 62   Rice, S. O., "Single Server Systems," Bell System Technical Journal, 41, Part I: "Relations Between Some Averages," 269-278, and Part II: "Busy Periods," 279-310 (1962).
SMIT 53   Smith, W. L., "On the Distribution of Queueing Times," Proc. Cambridge Philosophical Society, 49, 449-461 (1953).
SPIT 56   Spitzer, F., "A Combinatorial Lemma and its Application to Probability Theory," Transactions of the American Mathematical Society, 82, 323-339 (1956).
SPIT 57   Spitzer, F., "The Wiener-Hopf Equation whose Kernel is a Probability Density," Duke Mathematics Journal, 24, 327-344 (1957).
SPIT 60   Spitzer, F., "A Tauberian Theorem and its Probability Interpretation," Transactions of the American Mathematical Society, 94, 150-160 (1960).
SYSK 62   Syski, R., Introduction to Congestion Theory in Telephone Systems, Oliver and Boyd (London), 1962.
TITC 52   Titchmarsh, E. C., Theory of Functions, Oxford Univ. Press (London), 1952.
WEND 58   Wendel, J. G., "Spitzer's Formula: a Short Proof," Proc. American Mathematical Society, 9, 905-908 (1958).
WOLF 70   Wolff, R. W., "Bounds and Inequalities in Queueing," unpublished notes, Department of Industrial Engineering and Operations Research, University of California (Berkeley), 1970.

EXERCISES

8.1. From Eq. (8.18) show that C*(s) = A*(−s)B*(s).

8.2. Find c(u) for M/M/1.


8.3. Consider the system M/D/1 with a fixed service time of x̄ sec.
(a) Find C(u) = P[uₙ ≤ u] and sketch its shape.
(b) Find E[uₙ].

8.4. For the sequence of random variables given below, generate the figure corresponding to Figure 8.3 and complete the table.

    n              0   1   2   3   4   5
    t_{n+1}        2   1   1   5   7   2
    x_n            3   4   2   3   3   4
    u_n
    w_n measured
    w_n calculated

8.5. Consider the case where ρ = 1 − ε for 0 < ε ≪ 1. Let us expand W(y − u) in Eq. (8.23) as

W(y - u) = W(y) - uW^{(1)}(y) + \frac{u^2}{2} W^{(2)}(y) + R(u, y)

where W⁽ⁿ⁾(y) is the nth derivative of W(y) and R(u, y) is such that ∫ R(u, y) dC(u) is negligible due to the slow variation of W(y) when ρ = 1 − ε. Let \overline{u^k} denote the kth moment of ũ.
(a) Under these conditions convert Lindley's integral equation to a second-order linear differential equation involving \overline{u^2} and ū.
(b) With the boundary condition W(0) = 0, solve the equation found in (a) and express the mean wait W in terms of the first two moments of t̃ and x̃.
8.6. Consider the D/Eᵣ/1 queueing system, with a constant interarrival time (of t̄ sec) and a service-time pdf given as in Eq. (4.16).
(a) Find c(u).
(b) Show that Lindley's integral equation yields W(y − t̄) = 0 for y < t̄ and

W(y - \bar{t}) = \int_{0^-}^{y} W(y - w)\, dB(w), \qquad y \geq \bar{t}

(c) Assume the following solution for W(y):

W(y) = 1 + \sum_{i=1}^{r} a_i e^{\alpha_i y}, \qquad y \geq 0

where aᵢ and αᵢ may both be complex, but where Re(αᵢ) < 0 for i = 1, 2, …, r. Using this assumed solution, show that the following equations must hold:

e^{-\alpha_i \bar{t}} = \left(\frac{r\mu}{r\mu + \alpha_i}\right)^r, \qquad i = 1, 2, \ldots, r

\sum_{i=0}^{r} \frac{a_i}{(r\mu + \alpha_i)^{j+1}} = 0, \qquad j = 0, 1, \ldots, r - 1

where a₀ = 1 and α₀ = 0. Note that {αᵢ} may be found from the first set of (transcendental) equations, and then the second set gives {aᵢ}. It can be shown that the αᵢ are distinct. See [SYSK 62].
8.7. Consider the following queueing systems in which no queue is permitted. Customers who arrive to find the system busy must leave without service.
(a) M/M/1: Solve for Pₖ = P[k in system].
(b) M/H₂/1: As in Figure 4.10 with α₁ = α, α₂ = 1 − α, μ₁ = 2μα, and μ₂ = 2μ(1 − α).
(i) Find the mean service time x̄.
(ii) Solve for P₀ (an empty system), P_α (a customer in the 2μα box), and P_{1−α} (a customer in the 2μ(1 − α) box).
(c) H₂/M/1: Where A(t) is hyperexponential as in (b), but with parameters λ₁ = 2λα and λ₂ = 2λ(1 − α) instead. Draw the state-transition diagram (with labels on branches) for the following four states: Eᵢⱼ is the state with the "arriving" customer in arrival stage i and j customers in service, i = 1, 2 and j = 0, 1.

(d) M/Eᵣ/1: Solve for Pⱼ = P[j stages of service left to go].
(e) M/D/1: With all service times equal to x̄:
(i) Find the probability of an empty system.
(ii) Find the fraction of lost customers.
(f) E₂/M/1: Define the four states as Eᵢⱼ, where i is the number of "arrival" stages left to go and j is the number of customers in service. Draw the labeled state-transition diagram.

8.8. Consider a single-server queueing system in which the interarrival time is chosen with probability α from an exponential distribution of mean 1/λ and with probability 1 − α from an exponential distribution with mean 1/μ. Service is exponential with mean 1/μ.
(a) Find A*(s) and B*(s).
(b) Find the expression for Ψ₊(s)/Ψ₋(s) and show the pole-zero plot in the s-plane.
(c) Find Ψ₊(s) and Ψ₋(s).
(d) Find Φ₊(s) and W(y).

8.9. Consider a G/G/1 system in which

A^*(s) = \frac{2}{(s + 1)(s + 2)}, \qquad B^*(s) = \frac{1}{s + 1}

(a) Find the expression for Ψ₊(s)/Ψ₋(s) and show the pole-zero plot in the s-plane.
(b) Use spectrum factorization to find Ψ₊(s) and Ψ₋(s).
(c) Find Φ₊(s).
(d) Find W(y).
(e) Find the average waiting time W.
(f) We solved for W(y) by the method of spectrum factorization. Can you describe another way to find W(y)?

8.10. Consider the system M/G/1. Using the spectral solution method for Lindley's integral equation, find
(a) Ψ₊(s). [HINT: Interpret [1 − B*(s)]/sx̄.]
(b) Ψ₋(s).
(c) sΦ₊(s).

8.11. Consider the queue E_q/E_r/1.
(a) Show that

\frac{\Psi_+(s)}{\Psi_-(s)} = \frac{F(s)}{1 - F(s)}

where F(s) = 1 − (1 − s/λq)^q (1 + s/μr)^r.
(b) For ρ < 1, show that F(s) has one zero at the origin, zeroes s₁, s₂, …, s_r in Re(s) < 0, and zeroes s_{r+1}, s_{r+2}, …, s_{r+q−1} in Re(s) > 0.
(c) Express Ψ₊(s) and Ψ₋(s) in terms of the sᵢ.
(d) Express W*(s) in terms of sᵢ (i = 1, 2, …, r + q − 1).

8.12. Show that Eq. (8.71) is equivalent to Eq. (6.30).

8.13. Consider a D/D/1 queue with ρ < 1. Assume w₀ = 4t̄(1 − ρ).
(a) Calculate wₙ(y) using the procedure defined by Eq. (8.78) for n = 0, 1, 2, ….
(b) Show that the known solution for

w(y) = \lim_{n \to \infty} w_n(y)

satisfies Eq. (8.79).

8.14. Consider an M/M/1 queue with ρ < 1. Assume w₀ = 0.
(a) Calculate w₁(y) using the procedure defined by Eq. (8.78).
(b) Repeat for w₂(y).
(c) Show that our known solution for

w(y) = \lim_{n \to \infty} w_n(y)

satisfies Eq. (8.79).
(d) Compare w₂(y) with w₁(y).

8.15. By first cubing Eq. (8.91) and then forming expectations, express σ_w² (the variance of the waiting time) in terms of the first three moments of t̃, x̃, and Ĩ.

8.16. Show that P[w̃ = 0] = 1 − σ from Eq. (8.117) by finding the constant term in a power-series expansion of W*(s).

8.17. Consider a G/G/1 system.
(a) Express Î*(s) in terms of the transform of the pdf of idle time in the given system.
(b) Using (a), find Î*(s) when the original system is the ordinary M/M/1.
(c) Using (a), show that the transform of the idle-time pdf in a G/M/1 queue is given by

I^*(s) = \frac{1 - A^*(s)}{s\bar{t}}

thereby reaffirming Eq. (8.121).
(d) Since either the original or the dual queue must be unstable (except for D/D/1), discuss the existence of the transform of the idle-time pdf for the unstable queue.

Epilogue

We have invested eight chapters (and two appendices!) in studying the theory of queueing systems. Occasionally we have been overjoyed at the beauty and generality of the results, but more often we have been overcome (with frustration) at the lack of real progress in the theory. (No, we never promised you a rose garden.) However, we did seduce you into believing that this study would provide worthwhile methods for practical application to many of today's pressing congestion problems. We confirm that belief in Volume II.
In the next volume, after a brief review of this one, we begin by taking a more relaxed view of G/G/1. In Chapter 2, we enter a new world, leaving behind the rigor (and pain) of exact solutions to exact problems. Here we are willing to accept the raw facts of life, which state that our models are not perfect pictures of the systems we wish to analyze, so we should be willing to accept approximations and bounds in our problem solution. Upper and lower bounds are found for the average delay in G/G/1, and we find that these are related to a very useful heavy-traffic approximation for such queues. This approximation, in fact, predicts that the long waiting times are exponentially distributed. A new class of models is then introduced whereby the discrete arrival and departure processes of queueing systems are replaced first by a fluid approximation (in which these stochastic processes are replaced by their mean values as a function of time), and then secondly by a diffusion approximation (in which we permit a variation about these means). We happily find that these approximations give quite reasonable results for rather general queueing systems. In fact, they even permit us to study the transient behavior not only of stable queues but also of saturated queues, and this is the material in the final section of Chapter 2, whereby we give Newell's treatment of the rush-hour approximation, an effective method indeed.

Chapter 3 points the way to our applications in time-shared computer systems by presenting some of the principal results for priority queueing systems. We study general methods and apply them to a number of important queueing disciplines. The conservation law for priority systems is established, preventing the useless search for nonrealizable disciplines.
In the remainder , we cho ose applications pr incipally from the computer
field, since these application s are perh aps the most recent and successful for
the theory of queue s. In fact, the queueing an alysis of allocation of resources
and job flow through computer systems is perh aps the only tool available
to computer scientists in understanding the behavior of the complex interaction of users, programs, processes, and resources. In Chapter 4 we emphasize multi-access computer systems in isolation, handling demands of a large collection of competing users. We look for throughput and response time as well as utilization of resources. The major portion of this chapter is devoted to a particular class of algorithms known as processor-sharing algorithms, since they are singularly suited to queueing analysis and capture the essence of more difficult and more complex algorithms seen in real scheduling problems. Chapter 5 addresses itself to computers in networks, a field that is perhaps the fastest growing in the young computer industry itself (most of the references there are drawn from the last three years, a tell-tale indicator indeed). The chapter is devoted to developing methods of analysis and design for computer-communication networks and identifies many unsolved important problems. A specific existing network, the ARPANET, is used throughout as an example to guide the reader through the motivation and evaluation of the various techniques developed.

Now it remains for you, the reader, to sharpen and apply your new set of tools. The world awaits and you must serve!

APPENDIX I

Transform Theory Refresher: z-Transform and Laplace Transform

In this appendix we develop some of the properties and expressions for the z-transform and the Laplace transform as they apply to our studies in queueing theory. We begin with the z-transform since it is easier to visualize its operation. The forms and properties of both transforms are very similar, and we compare them later under the discussion of Laplace transforms.
I.1. WHY TRANSFORMS?

So often as we progress through the study of interesting physical systems we find that transforms appear in one form or another. These transforms occur in many varieties (e.g., z-transform, Laplace transform, Fourier transform, Mellin transform, Hankel transform, Abel transform) and with a variety of names (e.g., transform, characteristic function, generating function). Why is it that they appear so often? The answer has two parts: first, because they arise naturally in the formulation and the solution of systems problems; and second, because when we observe or introduce them into our solution method, they greatly simplify the calculations. Moreover, oftentimes they are the only tools we have available for proceeding with the solution at all.
Since transforms do appear naturally, we should inquire as to what gives rise to their appearance. The answer lies in the consideration of linear time-invariant systems. A system, in the sense that we use it here, is merely a transformation, or mapping, or input-output relationship between two functions. Let us represent a general system as a "black" box with an input f and an output g, as shown in Figure I.1. Thus the system operates on the function f to produce the function g. In what follows we will assume that these functions depend upon an independent time parameter t; this arbitrary choice results in no loss of generality but is convenient so that we may discuss certain notions more explicitly. Thus we assume that f = f(t). In order to represent the input-output relationship between the functions f(t) and g(t)

[Figure I.1 A general system.]

we use the notation
$$f(t) \to g(t) \tag{I.1}$$

to denote the fact that g(t) is the output of our system when f(t) is applied as input. A system is said to be linear if, when
$$f_1(t) \to g_1(t) \qquad \text{and} \qquad f_2(t) \to g_2(t)$$
then also the following is true:
$$af_1(t) + bf_2(t) \to ag_1(t) + bg_2(t) \tag{I.2}$$
where a and b are independent of the time variable t. Further, a system is said to be time-invariant if, when Eq. (I.1) holds, then the following is also true:
$$f(t + T) \to g(t + T) \tag{I.3}$$
for any T. If the above two properties both hold, then our system is said to be a linear time-invariant system, and it is these with which we concern ourselves for the moment.

Whenever one studies such systems, one finds that complex exponential functions of time appear throughout the solution. Further, as we shall see, the transforms of interest merely represent ways of decomposing functions of time into sums (or integrals) of complex exponentials. That is, complex exponentials form the building blocks of our transforms, and so, we must inquire further to discover why these complex exponentials pervade our thinking with such systems. Let us now pose the fundamental question, namely, which functions of time f(t) may pass through our linear time-invariant systems with no change in form; that is, for which f(t) will g(t) = Hf(t), where H is some scalar multiplier (with respect to t)? If we can discover such functions f(t) we will then have found the "eigenfunctions," or "characteristic functions," or "invariants" of our system. Denoting these eigenfunctions by $f_e(t)$ it will be shown that they must be of the following form (to within an arbitrary scalar multiplier):
$$f_e(t) = e^{st} \tag{I.4}$$


where s is, in general, a complex variable. That is, the complex exponentials given in (I.4) form the set of eigenfunctions for all linear time-invariant systems. This result is so fundamental that it is worthwhile devoting a few lines to its derivation. Thus let us assume when we apply $f_e(t)$ that the output is of the form $g_e(t)$, that is,
$$f_e(t) = e^{st} \to g_e(t)$$
But, by the linearity property we have
$$e^{sT}f_e(t) = e^{s(t+T)} \to e^{sT}g_e(t)$$
where T and therefore $e^{sT}$ are both constants. Moreover, from the time-invariance property we must have
$$f_e(t + T) = e^{s(t+T)} \to g_e(t + T)$$
From these last two, it must be that
$$e^{sT}g_e(t) = g_e(t + T)$$
The unique solution to this equation is given by
$$g_e(t) = He^{st}$$
which confirms our earlier hypothesis that the complex exponentials pass through our linear time-invariant systems unchanged except for the scalar multiplier H. H is independent of t but may certainly be a function of s and so we choose to write it as H = H(s). Therefore, we have the final conclusion that
$$e^{st} \to H(s)e^{st} \tag{I.5}$$
and this fundamental result exposes the eigenfunctions of our systems.
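As a quick numerical illustration of Eq. (I.5) (a supplementary sketch, not part of the original text: the causal unit response $h(t) = e^{-t}$, for which $H(s) = 1/(s+1)$, and the test values below are assumptions chosen for the example), we can push the complex exponential $e^{st}$ through such a system by direct integration and compare the output with $H(s)e^{st}$:

```python
import numpy as np

# Hypothetical causal LTI system: unit (impulse) response h(t) = e^(-t), t >= 0,
# whose system function is H(s) = 1/(s + 1) for Re(s) > -1.
s = -0.3 + 2.0j                     # an arbitrary complex s with Re(s) > -1
dt = 1e-4
tau = np.arange(0.0, 40.0, dt)      # integration grid; e^(-40) is negligible
h = np.exp(-tau)

def output(t):
    # g(t) = integral over tau >= 0 of h(tau) e^(s (t - tau))
    return np.sum(h * np.exp(s * (t - tau))) * dt

for t in [0.0, 1.0, 2.5]:
    print(abs(output(t) - np.exp(s * t) / (s + 1.0)))  # ~1e-4: the input scaled by H(s)
```

The agreement (to the accuracy of the crude Riemann sum) holds for every t, which is precisely the eigenfunction property.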
In this way the complex exponentials are seen to be the basic functions in the study of linear time-invariant systems. Moreover, if it is true that the input to such a system is a complex exponential, then it is a trivial computation to evaluate the output of that system from Eq. (I.5) if we are given the function H(s). Thus it is natural to ask that for any input f(t) we would hope to be able to decompose f(t) into a sum (or integral) of complex exponentials, each of which contributes to the overall output g(t) through a computation of the form given in Eq. (I.5). Then the overall output may be found by summing (integrating) these individual components of the output. (The fact that the sum of the individual outputs is the same as the output of the sum of the individual inputs, that is, the complex exponential decomposition, is due to the linearity of our system.) The process of decomposing our input into sums of exponentials, computing the response to each from Eq. (I.5), and then reconstituting the output from sums of exponentials is

referred to as the transform method of analysis. This approach, as we can see, arises very naturally from our foregoing statements. In this sense, transforms arise in a perfectly natural way. Moreover, we know that such systems are described by constant-coefficient linear differential equations, and so the common use of transforms in the solution of such equations is not surprising.

We still have not given a precise definition of the transform itself; be patient, for we are attempting to answer the question "why transforms?" If we were to pursue the line of reasoning that follows from Eq. (I.5), we would quickly encounter Laplace transforms. However, it is convenient at this point to consider only functions of discrete time rather than functions of continuous time, as we have so far been discussing. This change in direction brings us to a consideration of z-transforms and we will return to Laplace transforms later in this appendix. The reason for this switch is that it is easier to visualize operations on a discrete-time axis as compared to a continuous-time axis (and it also delays the introduction of the unit impulse function temporarily).

Thus we consider functions f that are defined only at discrete instants in time, which, let us say, are multiples of some basic time unit T. That is, $f(t) = f(t = nT)$, where $n = \ldots, -2, -1, 0, 1, 2, \ldots$. In order to incorporate this discrete-time axis into our notation we will denote the function $f(t = nT)$ by $f_n$. We assume further that our systems are also discrete in time. Thus we are led to consider linear time-invariant systems with inputs f and outputs g (also functions of discrete time) for which we obtain the following three equations corresponding to Eqs. (I.1)-(I.3):
$$f_n \to g_n \tag{I.6}$$
$$af_n^{(1)} + bf_n^{(2)} \to ag_n^{(1)} + bg_n^{(2)} \tag{I.7}$$
$$f_{n+m} \to g_{n+m} \tag{I.8}$$

where m is some integer constant. Here Eq. (I.7) is the expression of linearity whereas Eq. (I.8) is the expression of time-invariance for our discrete systems. We may ask the same fundamental question for these discrete systems and, of course, the answer will be essentially the same, namely, that the eigenfunctions are given by
$$f_n^{(e)} = e^{snT}$$
Once again the complex exponentials are the eigenfunctions. At this point it is convenient to introduce the definition
$$z \triangleq e^{-sT} \tag{I.9}$$
and so the eigenfunctions $f_n^{(z)}$ take the form
$$f_n^{(z)} = z^{-n}$$


Since s is a complex variable, so, too, is z. Following through steps essentially identical to those which led from Eq. (I.4) to Eq. (I.5) we find that
$$z^{-n} \to H(z)z^{-n} \tag{I.10}$$
where H(z) is a function independent of n. This merely expresses the fact that the set of functions $\{z^{-n}\}$ for any value of z form the set of eigenfunctions for discrete linear time-invariant systems. Moreover, the function (constant) H either in Eq. (I.5) or (I.10) tells us precisely how much of a given complex exponential we get out of our linear system when we insert a unit amount of that exponential at the input. That is, H really describes the effect of the system on these exponentials; for this reason it is usually referred to as the system (or transfer) function.


Let us pursue this line of reasoning somewhat further. As we all know, a common way to discover what is inside a system is to kick it, hard and quickly. For our systems this corresponds to providing an input only at time t = 0 and then observing the subsequent output. Thus let us define the Kronecker delta function (also known as the unit function) as
$$u_n = \begin{cases} 1 & n = 0 \\ 0 & n \neq 0 \end{cases} \tag{I.11}$$
When we apply $u_n$ to our system it is common to refer to the output as the unit response, and this is usually denoted by $h_n$. That is,
$$u_n \to h_n$$
From Eq. (I.8) we may therefore also write
$$u_{n+m} \to h_{n+m}$$
From the linearity property in Eq. (I.7) we have therefore
$$z^m u_{n+m} \to z^m h_{n+m}$$
Certainly we may multiply both expressions by unity, and so
$$z^{-n}z^n z^m u_{n+m} \to z^{-n}z^n z^m h_{n+m} \tag{I.12}$$
Furthermore, if we consider a set of inputs $\{f_n^{(i)}\}$, and if we define the output for each of these by
$$f_n^{(i)} \to g_n^{(i)}$$
then by the linearity of our system we must have
$$\sum_i f_n^{(i)} \to \sum_i g_n^{(i)} \tag{I.13}$$


If we now apply this last observation to Eq. (I.12) we have
$$\sum_m z^{-n} z^{n+m} u_{n+m} \to z^{-n} \sum_m h_{n+m} z^{n+m}$$
where the sum ranges over all integer values of m. From the definition in Eq. (I.11) it is clear that the sum on the left-hand side of this equation has only one nonzero term, namely, for m = -n, and this term is equal to unity; moreover, let us make a change of variable for the sum on the right-hand side of this expression, giving
$$z^{-n} \to z^{-n} \sum_k h_k z^k$$
This last equation is now in the same form as Eq. (I.10); it is obvious then that we have the relationship
$$H(z) = \sum_k h_k z^k \tag{I.14}$$

This last equation relates the system function H(z) to the unit response $h_k$. Recall that our linear time-invariant system was completely* specified by knowledge of H(z), since we could then determine the output for any of our eigenfunctions; similarly, knowledge of the unit response also completely* determines the operation of our linear time-invariant system. Thus it is no surprise that some explicit relationship must exist between the two, and, of course, this is given in Eq. (I.14).
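The discrete version of this relationship is just as easy to check numerically. The following sketch (not part of the original text; the unit-response values and the test point z are arbitrary illustrative choices) convolves the eigenfunction $z^{-n}$ with a finite unit response and compares the result against $H(z)z^{-n}$, with H(z) computed from Eq. (I.14):

```python
import numpy as np

# Hypothetical discrete LTI system given by a finite unit response h_k.
h = np.array([1.0, 0.5, 0.25, 0.125])

def respond(x, n):
    # y_n = sum_k h_k x_{n-k}   (discrete convolution)
    return sum(h[k] * x(n - k) for k in range(len(h)))

z = 0.8 * np.exp(0.9j)                          # an arbitrary complex z
H = sum(h[k] * z**k for k in range(len(h)))     # H(z) from Eq. (I.14)

x = lambda n: z**(-n)                           # the eigenfunction z^{-n}
for n in [0, 3, 7]:
    print(abs(respond(x, n) - H * x(n)))        # 0 up to roundoff, as Eq. (I.10) predicts
```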
Finally, we are in a position to answer the question: why transforms? The key lies in the expression (I.14), which is, itself, a transform (in this case a z-transform), which converts† the time function $h_k$ into a function of a complex variable H(z). This transform arose naturally in our study of linear time-invariant systems and was not introduced into the analysis in an artificial way. We shall see later that a similar relationship exists for continuous-time systems, as well, and this gives rise to the Laplace transform. Recalling that continuous-time systems may be described by constant-coefficient linear differential equations and that the use of transforms greatly simplifies the solution of these equations, we are not surprised that discrete-time systems lead to sets of constant-coefficient linear difference equations whose solution is simplified by the use of z-transforms. Lastly, we comment that the inputs f

* Completely specified in the sense that the only additional required information is the initial state of the system (e.g., the initial conditions of all the energy storage elements). Usually, the system is assumed to be in the zero-energy state, in which case we truly have a complete specification.
† Transforms not only change the form in which the information describing a given function is presented, but they also present this information in a simplified form which is convenient for mathematical manipulation.


and the outputs g are easily decomposed into weighted sums of complex exponentials by means of transforms, and of course, once this is done, then results such as (I.5) or (I.10) immediately give us the component-by-component output of our system for each of these inputs; the total output is then formed by summing the output components as in Eq. (I.13).

The fact that these transforms arise naturally in our system studies is really only a partial answer to our basic question regarding their use in analysis. The other and more pragmatic reason is that they greatly simplify the analysis itself; most often, in fact, the analysis can only proceed with the use of transforms, leading us to a partial solution from which properties of the system behavior may be derived.

The remainder of this appendix is devoted to giving examples and properties of these two principal transforms which are so useful in queueing theory.
I.2. THE z-TRANSFORM [JURY 64, CADZ 73]
Let us consider a function of discrete time $f_n$, which takes on nonzero values only for the nonnegative integers, that is, for n = 0, 1, 2, ... (i.e., for convenience we assume that $f_n = 0$ for n < 0). We now wish to compress this semi-infinite sequence into a single function in a way such that we can expand the compressed form back into the original sequence when we so desire. In order to do this, we must place a "tag" on each of the terms in the sequence $f_n$. We choose to tag the term $f_n$ by multiplying it by $z^n$; since n is then unique for each term in the sequence, each tag is also unique. z will be chosen as some complex variable whose permitted range of values will be discussed shortly. Once we tag each term, we may then sum over all tagged terms to form our compressed function, which represents the original sequence. Thus we define the z-transform (also known as the generating function or geometric transform) for $f_n$ as follows:
$$F(z) = \sum_{n=0}^{\infty} f_n z^n \tag{I.15}$$
F(z) is clearly only a function of our complex variable z since we have summed over the index n; the notation we adopt for the z-transform is to use a capital letter that corresponds to the lower-case letter describing the sequence, as in Eq. (I.15). We recognize that Eq. (I.14) is, of course, in exactly this form. The z-transform for a sequence will exist so long as the terms in that sequence grow no faster than geometrically, that is, so long as there is some a > 0 such that
$$|f_n| \leq a^n$$
Furthermore, given a sequence $f_n$ its z-transform F(z) is unique.


If the sum over all terms in the sequence $f_n$ is finite, then certainly the unit disk $|z| \leq 1$ represents a range of analyticity for F(z).* In such a case we have
$$F(1) = \sum_{n=0}^{\infty} f_n \tag{I.16}$$

We now consider some important examples of z-transforms. It is convenient to denote the relationship between a sequence and its transform by means of a double-barred, double-headed arrow†; thus Eq. (I.15) may be written as
$$f_n \Leftrightarrow F(z) \tag{I.17}$$
For our first example, let us consider the unit function as defined in Eq. (I.11). For this function and from the definition given in Eq. (I.15) we see that exactly one term in the infinite summation is nonzero, and so we immediately have the transform pair
$$u_n \Leftrightarrow 1 \tag{I.18}$$
For a related example, let us consider the unit function shifted to the right by k units, that is,
$$u_{n-k} = \begin{cases} 1 & n = k \\ 0 & n \neq k \end{cases}$$
From Eq. (I.15) again, exactly one term will be nonzero, giving
$$u_{n-k} \Leftrightarrow z^k$$
As a third example, let us consider the unit step function defined by
$$\delta_n = 1 \qquad \text{for } n = 0, 1, 2, \ldots$$
(recall that all functions are zero for n < 0). In this case we have a geometric series, that is,
$$\delta_n \Leftrightarrow \sum_{n=0}^{\infty} z^n = \frac{1}{1 - z} \tag{I.19}$$
We note in this case that we require $|z| < 1$ in order for the z-transform to exist. An extremely important sequence often encountered is the geometric series
$$f_n = A\alpha^n \qquad n = 0, 1, 2, \ldots$$

* A function of a complex variable is said to be analytic at a point in the complex plane if that function has a unique derivative at that point. The Cauchy-Riemann necessary and sufficient condition for analyticity of such functions may be found in any text on functions of a complex variable [AHLF 66].
† The double bar denotes the transform relationship whereas the double heads on the arrow indicate that the journey may be made in either direction, $f \Rightarrow F$ and $F \Rightarrow f$.


Its z-transform may be calculated as
$$F(z) = \sum_{n=0}^{\infty} A\alpha^n z^n = \frac{A}{1 - \alpha z}$$
And so
$$A\alpha^n \Leftrightarrow \frac{A}{1 - \alpha z} \tag{I.20}$$
where, of course, the region of analyticity for this function is $|z| < 1/|\alpha|$; note that α may be greater or less than unity.
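As a small numerical sanity check of Eq. (I.20) (a supplementary sketch, not part of the original text; A, α, and z below are arbitrary choices satisfying $|z| < 1/|\alpha|$), one can sum the defining series directly and compare it with the closed form:

```python
import numpy as np

A, alpha = 2.0, 1.5
z = 0.4 * np.exp(1.3j)              # |alpha z| = 0.6 < 1, so the series converges

n = np.arange(200)                  # 200 terms: the tail is of order 0.6^200
series = np.sum(A * alpha**n * z**n)
closed_form = A / (1 - alpha * z)
print(abs(series - closed_form))    # ~1e-16
```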
Linear transformations such as the z-transform enjoy a number of important properties. Many of these are listed in Table I.1. However, it is instructive for us to derive the convolution property, which is most important in queueing systems. Let us consider two functions of discrete time $f_n$ and $g_n$, which may take on nonzero values only for the nonnegative integers. Their respective z-transforms are, of course, F(z) and G(z). Let ⊗ denote the convolution operator, which is defined for $f_n$ and $g_n$ as follows:
$$f_n \otimes g_n \triangleq \sum_{k=0}^{n} f_{n-k} g_k$$
We are interested in deriving the z-transform of the convolution of $f_n$ and $g_n$, and this we do as follows:
$$f_n \otimes g_n \Leftrightarrow \sum_{n=0}^{\infty} (f_n \otimes g_n) z^n = \sum_{n=0}^{\infty} \sum_{k=0}^{n} f_{n-k} g_k z^{n-k} z^k$$
However, since
$$\sum_{n=0}^{\infty} \sum_{k=0}^{n} = \sum_{k=0}^{\infty} \sum_{n=k}^{\infty}$$
we have
$$f_n \otimes g_n \Leftrightarrow \sum_{k=0}^{\infty} g_k z^k \sum_{n=k}^{\infty} f_{n-k} z^{n-k} = G(z)F(z)$$
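Before summarizing, here is a quick machine check of this result (a supplementary sketch, not part of the original text; the two short sequences are arbitrary):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])       # f_n, zero beyond the entries shown
g = np.array([4.0, 5.0])            # g_n
conv = np.convolve(f, g)            # (f ⊗ g)_n = sum_k f_{n-k} g_k

def F(seq, z):
    # z-transform of a finite sequence, per Eq. (I.15)
    return sum(c * z**n for n, c in enumerate(seq))

z = 0.3 + 0.2j                      # an arbitrary test point
print(abs(F(conv, z) - F(f, z) * F(g, z)))   # ~1e-16: transform of conv = product
```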

Then, we have that the z-transform of the convolution of two sequences is equal to the product of the z-transform of each of the sequences. In Table I.1 we list a number of important properties of the z-transform, noting that in Table I.2 we provide a list of important common transform pairs.
Table I.1 Some Properties of the z-Transform

1. $f_n$, $n = 0, 1, 2, \ldots \Leftrightarrow F(z) = \sum_{n=0}^{\infty} f_n z^n$
2. $af_n + bg_n \Leftrightarrow aF(z) + bG(z)$
3. $a^n f_n \Leftrightarrow F(az)$
4. $f_{n/k}$ (for $n = 0, k, 2k, \ldots$; $k > 0$) $\Leftrightarrow F(z^k)$
5. $f_{n+1} \Leftrightarrow \dfrac{1}{z}[F(z) - f_0]$
6. $f_{n+k} \Leftrightarrow \dfrac{F(z)}{z^k} - \displaystyle\sum_{j=0}^{k-1} z^{j-k} f_j$
7. $f_{n-1} \Leftrightarrow zF(z)$
8. $f_{n-k}$ $(k > 0) \Leftrightarrow z^k F(z)$
9. $nf_n \Leftrightarrow z\dfrac{dF(z)}{dz}$
10. $n(n-1)(n-2)\cdots(n-m+1)f_n \Leftrightarrow z^m \dfrac{d^m F(z)}{dz^m}$
11. $f_n \otimes g_n \Leftrightarrow F(z)G(z)$
12. $f_n - f_{n-1} \Leftrightarrow (1 - z)F(z)$
13. $\displaystyle\sum_{k=0}^{n} f_k \Leftrightarrow \dfrac{F(z)}{1 - z}$
14. $\dfrac{\partial}{\partial a} f_n \Leftrightarrow \dfrac{\partial}{\partial a} F(z)$ ($a$ is a parameter of $f_n$)
15. Series sum property: $F(1) = \sum_{n=0}^{\infty} f_n$
16. Alternating sum property: $F(-1) = \sum_{n=0}^{\infty} (-1)^n f_n$
17. Initial value theorem: $F(0) = f_0$
18. Intermediate value theorem: $\dfrac{1}{n!} \dfrac{d^n F(z)}{dz^n}\bigg|_{z=0} = f_n$
19. Final value theorem: $\lim_{z \to 1} (1 - z)F(z) = f_\infty$

"he

2
Transform Pairs
SEQUENCE

z-TRA NSFORM

F (z) =

co

(~

333

0, 1, 2, . . .

2: I nzn
n= O

n = 0

rm
"he
ion
the
itly
vay
sed

1/ ~ 0

zk
, 1

1/ = 0 ,1 , 2 , .. .

1 - z
Zk

her
) is

1 - z

~i a l

uo r

1 - z

wer
hen

ctZ

(I - ctZ)2
Z

(I - z )'
ctZ(I + ctZ)
(I - ctZ )3
z(1

+ z)

(I - z)" .
1

1) ,,_

(I - ctz)2
I)

+ m)(1/ + m

(I - z )"-

I) . . . (1/

+ 1)ctn

(I _ ctz)m+l

' to
ress
In S-

.ing
lOW

for
h is
IS a
5 in
rms
I to
out
.eed

eZ

len , we have that the a-transform of the convolutio n of two


equal to the produ ct of the z-transforrn of eac h o f the sequences
.1 we list a number of important pro perties of the z-tra nsfor m,
,g t hat in Table 1.2 we provide a list of importa nt common

.ach
UIL


Some comments regarding these tables are in order. First, in the property table we note that Property 2 is a statement of linearity, and Properties 3 and 4 are statements regarding scale change in the transform and time domain, respectively. Properties 5-8 regard translation in time and are most useful. In particular, note from Property 7 that the unit delay (delay by one unit of time) results in multiplication of the transform by the factor z, whereas Property 5 states that a unit advance involves division by the factor z. Properties 9 and 10 show multiplication of the sequence by terms of the form $n(n-1)\cdots(n-m)$. Combinations of these may be used in order to find, for example, the transform of $n^2 f_n$; this may be done by recognizing that $n^2 = n(n-1) + n$, and so the transform of $n^2 f_n$ is merely $z^2\, d^2F(z)/dz^2 + z\, dF(z)/dz$. This shows the simple differentiation technique of obtaining more complex transforms. Perhaps the most important, however, is Property 11, showing that the convolution of two time sequences has a transform that is the product of the transform of each time sequence separately. Properties 12 and 13 refer to the difference and summation of various terms in the sequence. Property 14 shows that if a is an independent parameter of $f_n$, differentiating the sequence with respect to this parameter is equivalent to differentiating the transform. Property 15 is also important and shows that the transform expression may be evaluated at z = 1 directly to give the sum of all terms in the sequence. Property 16 merely shows how to calculate the alternating sum. From the definition of the z-transform, the initial value theorem given in Property 17 is obvious and shows how to calculate the initial term of the sequence directly from the transform. Property 18, on the other hand, shows how to calculate any term in the original sequence directly from its z-transform by successive differentiation; this then corresponds to one method for calculating the sequence given its transform. It can be seen from Property 18 that the sequence $f_n$ forms the coefficients in the Taylor-series expansion of F(z) about the point 0. Since this power-series expansion is unique, then it is clear that the inversion process is also unique. Property 19 gives a direct method for calculating the final value of a sequence from its z-transform.
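To see these properties in action, the following sketch (not part of the original text) verifies the $n^2 f_n$ computation mentioned above for the unit step $f_n = 1$, whose transform is $1/(1-z)$; the result should match entry 10 of Table I.2:

```python
import sympy as sp

z = sp.symbols('z')
F = 1 / (1 - z)                      # transform of the unit step

# n^2 f_n  <=>  z^2 F''(z) + z F'(z), by Properties 9 and 10
via_properties = z**2 * sp.diff(F, z, 2) + z * sp.diff(F, z)
table_entry = z * (1 + z) / (1 - z)**3
print(sp.simplify(via_properties - table_entry))   # 0
```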
Table I.2 lists some useful transform pairs. This table can be extended considerably by making use of the properties listed in Table I.1; in some cases this has already been done. For example, Pair 5 is derived from Pair 4 by use of the delay theorem given as entry 8 in Table I.1. One of the more useful relationships is given in Pair 6, considered earlier.

Thus we see the effect of compressing a time sequence $f_n$ into a single function of the complex variable z. Recall that the use of the variable z was to tag the terms in the sequence $f_n$ so that they could be recovered from the compressed function; that is, $f_n$ was tagged with the factor $z^n$. We have


seen how to form the z-transform of the sequence [through Eq. (I.15)]. The problem confronting us now is to find the sequence $f_n$ given the z-transform F(z). There are basically three methods for carrying out this inversion. The first is the power-series method, which attempts to take the given function F(z) and express it as a power series in z; once this is done the terms in the sequence $f_n$ may be picked off by inspection since the tagging is now explicitly exposed. The power series may be obtained in one of two ways: the first way we have already seen through our intermediate value theorem expressed as Item 18 in Table I.1, that is,
$$f_n = \frac{1}{n!} \left. \frac{d^n F(z)}{dz^n} \right|_{z=0}$$
(this method is useful if one is only interested in a few terms but is rather tedious if many terms are required); the second way is useful if F(z) is expressible as a rational function of z (that is, as the ratio of a polynomial in z over a polynomial in z) and in this case one may divide the denominator into the numerator to pick off the sequence of leading terms in the power series directly. The power-series expansion method is usually difficult when many terms are required.
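The power-series method is easy to mechanize; the sketch below (not part of the original text; the transform chosen is Pair 6 of Table I.2 with the arbitrary values A = 2, α = 3) expands F(z) about z = 0 so that the sequence can be read off the coefficients:

```python
import sympy as sp

z = sp.symbols('z')
F = 2 / (1 - 3 * z)           # expect f_n = 2 * 3^n
print(sp.series(F, z, 0, 5))  # 2 + 6*z + 18*z**2 + 54*z**3 + 162*z**4 + O(z**5)
```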
The second and most useful method for inverting z-transforms [that is, to calculate $f_n$ from F(z)] is the inspection method. That is, one attempts to express F(z) in a fashion such that it consists of terms that are recognizable as transform pairs, for example, from Table I.2. The standard approach for placing F(z) in this form is to carry out a partial-fraction expansion,* which we now discuss. The partial-fraction expansion is merely an algebraic technique for expressing rational functions of z as sums of simple terms, each of which is easily inverted. In particular, we will attempt to express a rational F(z) as a sum of terms, each of which looks either like a simple pole (see entry 6 in Table I.2) or as a multiple pole (see entry 13). Since the sum of the transforms equals the transform of the sum, we may apply Property 2 from Table I.1 to invert each of these now recognizable forms separately, thereby carrying out the required inversion. To carry out the partial-fraction expansion we proceed as follows. We assume that F(z) is in rational form, that is,
$$F(z) = \frac{N(z)}{D(z)}$$
where both the numerator N(z) and the denominator D(z) are each

* This procedure is related to the Laurent expansion of F(z) around each pole [GUIL 49].


polynomials in z.* Furthermore we will assume that D(z) is already in factored form, that is,
$$D(z) = \prod_{i=1}^{k} (1 - \alpha_i z)^{m_i} \tag{I.21}$$
The product notation used in this last equation is defined as
$$\prod_{i=1}^{k} x_i \triangleq x_1 x_2 \cdots x_k$$
Equation (I.21) implies that the $i$th root at $z = 1/\alpha_i$ occurs with multiplicity $m_i$. [We note here that in most problems of interest, the difficult part of the solution is to take an arbitrary polynomial such as D(z) and to find its roots so that it may be put in the factored form given in Eq. (I.21). At this point we assume that that difficult task has been accomplished.] If F(z) is in this form then it is possible to express it as follows [GUIL 49]:
$$F(z) = \sum_{i=1}^{k} \sum_{j=1}^{m_i} \frac{A_{ij}}{(1 - \alpha_i z)^{m_i - j + 1}} \tag{I.22}$$
This last form is exactly what we were looking for, since each term in this sum may be found in our table of transform pairs; in particular it is Pair 13 (and in the simplest case it is Pair 6). Thus if we succeed in carrying out the partial-fraction expansion, then by inspection we have our time sequence $f_n$. It remains now to describe the method for calculating the coefficients $A_{ij}$. The general expression for such a term is given by
$$A_{ij} = \frac{1}{(j-1)!} \left( \frac{-1}{\alpha_i} \right)^{j-1} \left. \frac{d^{j-1}}{dz^{j-1}} \left[ (1 - \alpha_i z)^{m_i} \frac{N(z)}{D(z)} \right] \right|_{z = 1/\alpha_i} \tag{I.23}$$
This rather formidable procedure is, in fact, rather straightforward as long as the function F(z) is not terribly complex.

* We note here that a partial-fraction expansion may be carried out only if the degree of the numerator polynomial is strictly less than the degree of the denominator polynomial; if this is not the case, then it is necessary to divide the denominator into the numerator until the remainder is of lower degree than the denominator. This remainder divided by the original denominator may then be expanded in partial fractions by the method shown; the terms generated from the division also may be inverted by inspection making use of transform pair 3 in Table I.2. An alternative way of satisfying the degree condition is to attempt to factor out enough powers of z from the numerator if possible.


It is worthwhile at this point to carry out an example in order to demonstrate the method. Let us assume F(z) is given by
$$F(z) = \frac{4z^2(1 - 8z)}{(1 - 4z)(1 - 2z)^2} \tag{I.24}$$
In this example the numerator and denominator both have the same degree and so it is necessary to bring the expression into proper form (numerator degree less than denominator degree). In this case our task is simple since we may factor out two powers of z (we are required to factor out only one power of z in order to bring the numerator degree below that of the denominator, but obviously in this case we may as well factor out both and simplify the calculations). Thus we have
$$F(z) = z^2 \left[ \frac{4(1 - 8z)}{(1 - 4z)(1 - 2z)^2} \right]$$
Let us define the term in square brackets as G(z). We note in this example that the denominator has three poles: one at z = 1/4, and two (that is, a double pole) at z = 1/2. Thus in terms of the variables defined in Eq. (I.21) we have $k = 2$, $\alpha_1 = 4$, $m_1 = 1$, $\alpha_2 = 2$, $m_2 = 2$. From Eq. (I.22) we are therefore seeking the following expansion:
$$G(z) \triangleq \frac{4(1 - 8z)}{(1 - 4z)(1 - 2z)^2} = \frac{A_{11}}{1 - 4z} + \frac{A_{21}}{(1 - 2z)^2} + \frac{A_{22}}{1 - 2z}$$
Terms such as $A_{11}$ (that is, coefficients of simple poles) are easily obtained from Eq. (I.23) by multiplying the original function by the factor corresponding to the pole and then evaluating the result at the pole itself (that is, when z takes on a value that drives the factor to 0). Thus in our example we have
$$A_{11} = (1 - 4z)G(z)\big|_{z=1/4} = \frac{4[1 - (8/4)]}{[1 - (2/4)]^2} = -16$$
$A_{21}$ may be evaluated in a similar way from Eq. (I.23) as follows:
$$A_{21} = (1 - 2z)^2 G(z)\big|_{z=1/2} = \frac{4[1 - (8/2)]}{[1 - (4/2)]} = 12$$

Finally, in order to evaluate $A_{22}$ we must apply the differentiation formula given in Eq. (I.23) once, that is,
$$A_{22} = -\frac{1}{2}\frac{d}{dz}\left[(1 - 2z)^2 G(z)\right]_{z=1/2} = -\frac{1}{2}\frac{d}{dz}\left[\frac{4(1 - 8z)}{1 - 4z}\right]_{z=1/2} = -\frac{1}{2}\left.\frac{(1 - 4z)(-32) - 4(1 - 8z)(-4)}{(1 - 4z)^2}\right|_{z=1/2} = 8$$
Thus we conclude that
$$G(z) = \frac{-16}{1 - 4z} + \frac{12}{(1 - 2z)^2} + \frac{8}{1 - 2z}$$
This is easily shown to be equal to the original factored form of G(z) by placing these terms over a common denominator. Our next step is to invert G(z) by inspection. This we do by observing that the first and third terms are of the form given by transform pair 6 in Table I.2 and that the second term is given by transform pair 13. This, coupled with the linearity property 2 in Table I.1, gives immediately that
$$G(z) \Leftrightarrow g_n = \begin{cases} 0 & n < 0 \\ -16(4)^n + 12(n+1)(2)^n + 8(2)^n & n = 0, 1, 2, \ldots \end{cases} \tag{I.25}$$
Of course, we must now account for the factor $z^2$ to give the expression for $f_n$. As mentioned above we do this by taking advantage of Property 8 in Table I.1, and so we have (for n = 2, 3, ...)
$$f_n = -16(4)^{n-2} + 12(n-1)(2)^{n-2} + 8(2)^{n-2}$$
and so
$$f_n = \begin{cases} 0 & n < 2 \\ (3n - 1)2^n - 4^n & n = 2, 3, 4, \ldots \end{cases}$$
This completes our example.
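As a supplementary machine check of this example (not part of the original text), we can expand Eq. (I.24) in a power series and compare the coefficients with the closed form just obtained:

```python
import sympy as sp

z = sp.symbols('z')
F = 4 * z**2 * (1 - 8 * z) / ((1 - 4 * z) * (1 - 2 * z)**2)   # Eq. (I.24)
poly = sp.series(F, z, 0, 8).removeO()

for n in range(8):
    f_n = poly.coeff(z, n)
    formula = (3 * n - 1) * 2**n - 4**n if n >= 2 else 0
    print(n, f_n, formula)    # the two columns agree
```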


The third method for carrying out the inver sion process is to use the
inversion f ormula. This involves evaluating the following integral :

-Ii

I n = -.
F(z)z-l- n dz
27TJ C

(1.26)

1.2.

THE Z-T RA NSFORM

337

whe rej = ,j~ a nd the int egral is evaluated in th e complex z-pla ne a round a
closed circular contou r C, whic h is large en ou gh * to surround a ll poles of
F(z) . T his method of eva lua tion works properly whe n facto rs of the for m Zk
are removed fro m th e express io n ; the reduced expre ssion is th en evalu at ed
a nd the final solutio n is obtained by taking ad vantage of Property 9 in Table
I.l as we sha ll see below. This contour integrati on is most ea sily performed by
making use of the Cauchy residue theorem [G UlL 49]. This th eorem may be
sta ted as follows:

Cauchy Residue Theorem The integral of g(z) over a closed contour C


containing within it only isolated singular points of g(z) is equal to 27Tj
times the sum of the residues at these points , wheneoer g(z) is analytict
0 11 and within the closed contour C.
An iso lat ed singula r point of an analytic functi on is a singula r point whose
neighborhood contains no other singula r points ; a simple pole (i.e., a pole of
order one-see below) is th e classical example. lf z = a is a n isolated singula r
point of g( z) a nd if y(z) = (z - a)rng(z) is a na lytic at z = a an d y (a) ,e 0,
th en g(z) is sa id to have a pole of order m a t z = a with the residue ra given by
(1.27)
We note th at th e residue given in this last equ at ion is a lmos t the sa me as A i;
given in Eq . (1.23), the main difference bein g the form in which we write the
pole. Thus we have now defined a ll that we need to apply the Ca uchy
residue theo rem in order to eva luate th e inte gral in Eq. (1.26) an d thereby to
recover the time fun ction f n fro m our z-tra nsfo rm . By way of illustration we
ca rry o ut the calculat ion of our pre vious example given in Eq . (1.24). Ma king
use of Eq . (1.26) we have

1.
27Tj 'J;;
- I

gn =

4z- 1 - n (1 - 8z) d
(1 _ 4z)(1 _ 2zl z

Since Jo rdan 's lemma (see p. 353) req uires that F(z) ~ 0 as z ~ 00 if we a re to let the
. con tour grow , then we require tha t any function F(z) that we consider have this property ;
thu s for rat ional functi ons of z if the numerator degree is not less than the denominat or
degree , then we must divide the numerator by the denominator un til the remainder is of
lower degree than the denominator, as we ha ve seen ear lier. The terms generate d by this
division are easily transformed by inspection , as discussed ~1 rlier . a nd it is only the remaining function which we now consider in this inversion meth od for the z-transfo rm.
t A function F(z) of a complex variable z is said to be analytic in a region of the complex
plane if it is single-valued an d differentiab le at every point in that region .


where C is a circle large enough to enclose the poles of F(z) at z = 1/4 and z = 1/2. Using the residue theorem and Eq. (I.27) we find that the residue at z = 1/4 is given by
$$r_{1/4} = \left(z - \frac{1}{4}\right) \frac{4z^{-1-n}(1 - 8z)}{(1 - 4z)(1 - 2z)^2}\Bigg|_{z=1/4} = -\frac{(1/4)^{-1-n}[1 - (8/4)]}{[1 - (2/4)]^2} = 16(4)^n$$
whereas the residue at z = 1/2 is calculated as
$$r_{1/2} = \frac{d}{dz}\left[\left(z - \frac{1}{2}\right)^2 \frac{4z^{-1-n}(1 - 8z)}{(1 - 4z)(1 - 2z)^2}\right]_{z=1/2} = \frac{d}{dz}\left[\frac{z^{-1-n}(1 - 8z)}{1 - 4z}\right]_{z=1/2}$$
$$= \left.\frac{(1 - 4z)[(-1-n)z^{-2-n}(1 - 8z) + z^{-1-n}(-8)] - z^{-1-n}(1 - 8z)(-4)}{(1 - 4z)^2}\right|_{z=1/2}$$
$$= (-1)\left[(-1-n)\left(\frac{1}{2}\right)^{-2-n}(-3) + \left(\frac{1}{2}\right)^{-1-n}(-8)\right] - \left(\frac{1}{2}\right)^{-1-n}(-3)(-4) = -12(n+1)2^n + 16(2)^n - 24(2)^n$$
Now we must take $2\pi j$ times the sum of the residues and then multiply by the factor preceding the integral in Eq. (I.26) (thus we must take -1 times the sum of the residues) to yield
$$g_n = -16(4)^n + 12(n+1)2^n + 8(2)^n \qquad n = 0, 1, 2, \ldots$$
But this last is exactly equal to the form for $g_n$ in Eq. (I.25) found by the method of partial-fraction expansions. From here the solution proceeds as in that method, thus confirming the consistency of these two approaches.
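The inversion integral itself can also be evaluated numerically (a supplementary sketch, not part of the original text). By contour deformation, the computation above is equivalent to integrating $G(z)z^{-1-n}$ counterclockwise, with the factor $+1/2\pi j$, around a small circle inside $|z| < 1/4$ where G(z) is analytic; doing so recovers $g_n$ directly:

```python
import numpy as np

def G(z):
    return 4 * (1 - 8 * z) / ((1 - 4 * z) * (1 - 2 * z)**2)

N = 20000
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
z = 0.1 * np.exp(1j * theta)        # circle of radius 0.1, inside |z| < 1/4
dz = 1j * z * (2 * np.pi / N)       # dz = j z dtheta

for n in range(5):
    g_n = np.sum(G(z) * z**(-1 - n) * dz) / (2j * np.pi)
    print(n, round(g_n.real, 6), -16 * 4**n + 12 * (n + 1) * 2**n + 8 * 2**n)
```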
Thus we have reviewed some of the techniques for applying and inverting the z-transform in the handling of discrete-time functions. The application of these methods in the solution of difference equations is carefully described in Sect. I.4 below.

I.3. THE LAPLACE TRANSFORM [WIDD 46]


The Laplace tran sform , defined below , enjoys man y of the sa me pr operti es
as the z-tra nsfo rm. As a result, the followin g discu ssion very closely paralle ls
that given in the prev ious section .
We now con sider funct ion s of continuou s time f (t) , whic h take o n non zer o
values only for nonnegative values of t he continuous parameter t. [Again for

1.3.

TH E LAPLA CE TRANSFO R..\ I

339

con ven ience we are assuming t ha t f( l) = 0 for I < O. For the more general
case , mo st of the se techniques apply as discussed in the paragraph containing
Eq . (1.38) below .] As with d iscrete-time functi on s, we wish to tak e our
continuous-time func tion a nd transform it from a functi on of t to a function
of a new complex variable (say, s). At the same time we would like to be able
to " untransform" back int o the t domain, a nd in order to do this it is clear
we must so mehow " tag"f(t) a t each va lue of t. For reason s related to th ose
de scribed in Secti on 1.1 the tag we ch oose to use is e: , The complex va riable
s may be wri tten in ter ms of its real a nd co mplex parts as s = o + j w where ,
again , j = J~ . Having multiplied by thi s tag , we then integrate over a ll
no nzero values in order to obtain our transform function defined as follows :

F*(s)

~ L:!(t)e- Sl dt

(1.28)

Agai n, we have ado pted the notation fo r genera l Laplace tr an sforms in which
we use a capital letter for the tran sform of a function of tim e, which is
described in terms of a .lo wer case letter. This is usually referred to as the
"two-sided, " or "bilateral" Laplace tran sform since it opera tes on both
the negative and positive time axes. We have assumed thatf(t ) = 0 for t < 0,
and in th is case the lower limit of integration may be replaced by 0- , which is
defined as the limit of 0 - as (> 0) go es to zero; further , we often denote
this lower limit merel y by 0 with the understanding th at it is meant as 0(usually thi s will cau se no confusion). There also exists what is known as the
"one-sided" Lapl ace transform in which the lower limit is repl aced by 0+,
which is defined as th e limit of 0 + e as (> 0) goes to zero; th is o ne-sided
tr an sform has application in the so lution of tran sient problems in linear
systems . It is impo rtant th at th e reader distingu ish bet ween th ese two transfo rms with zero as th eir lower limit since in th e former case (the bilat eral
tr ansform) an y accumulation at the origin (as, for example , the unit impulse
defined below) will be included in the tr an sform , wherea s in the la tte r case
(t he o ne-sided transform) it will be o mitted .
For o ur assumed case in which f(l) = 0 for t < 0 we may write o ur
t ran sform as

F*(s)

= f '! (t)e- dt
S

'

(1.29)

where, we repeat, the lower limit is to be interpreted as $0^-$. This Laplace transform will exist so long as f(t) grows no faster than an exponential, that is, so long as there is some real number $\sigma_a$ such that
$$|f(t)| \leq e^{\sigma_a t}$$
The smallest possible value for $\sigma_a$ is referred to as the abscissa of absolute convergence. Again we state that the Laplace transform $F^*(s)$ for a given function f(t) is unique.

If the integral of f(t) is finite, then certainly the right-half plane $\operatorname{Re}(s) \geq 0$ represents a region of analyticity for $F^*(s)$; the notation Re( ) reads as "the real part of the complex function within the parentheses." In such a case we have, corresponding to Eq. (I.16),
$$F^*(0) = \int_{0}^{\infty} f(t)\, dt \tag{I.30}$$
From our earlier definition in Eq. (I.9) we see that properties for the z-transform when z = 1 will correspond to properties for the Laplace transform when s = 0 as, for example, in Eqs. (I.16) and (I.30).
Let us now consider some important examples of Laplace transforms. We use notation here identical to that used in Eq. (I.17) for z-transforms, namely, we use a double-barred, double-headed arrow to denote the relationship between a function and its transform; thus, Eq. (I.29) may be written as
$$f(t) \Leftrightarrow F^*(s) \tag{I.31}$$
The use of the double arrow is a statement of the uniqueness of the transform as earlier.

As in the case of z-transforms, the most useful method for finding the inverse [that is, calculating f(t) from $F^*(s)$] is the inspection method, namely, looking up the inverse in a table. Let us, therefore, concentrate on the calculation of some Laplace transform pairs. By far the most important Laplace transform pair to consider is for the one-sided exponential function, namely,
$$f(t) = \begin{cases} Ae^{-at} & t \geq 0 \\ 0 & t < 0 \end{cases}$$

Let us carry out the computation of this transform, as follows:
$$f(t) \Leftrightarrow F^*(s) = \int_{0}^{\infty} A e^{-at} e^{-st}\, dt = A \int_{0}^{\infty} e^{-(s+a)t}\, dt = \frac{A}{s + a}$$
And so we have the fundamental relationship
$$A e^{-at}\,\delta(t) \Leftrightarrow \frac{A}{s + a} \tag{I.32}$$
where we have defined the unit step function in continuous time as
$$\delta(t) = \begin{cases} 1 & t \geq 0 \\ 0 & t < 0 \end{cases} \tag{I.33}$$
In fact, we observe that the unit step function is a special case of our one-sided exponential function when A = 1, a = 0, and so we have immediately the additional pair
$$\delta(t) \Leftrightarrow \frac{1}{s} \tag{I.34}$$
We note that the transform in Eq. (I.32) has an abscissa of convergence $\sigma_a = -a$.
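For readers who want to confirm such pairs mechanically, sympy can reproduce Eq. (I.32) symbolically (a supplementary sketch, not part of the original text):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
A, a = sp.symbols('A a', positive=True)

F, abscissa, _ = sp.laplace_transform(A * sp.exp(-a * t), t, s)
print(F)          # A/(a + s), as in Eq. (I.32)
print(abscissa)   # the reported convergence abscissa, -a
```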

Thus we have calculated analogous z-transform and Laplace-transform pairs: the geometric series given in Eq. (I.20) with the exponential function given in Eq. (I.32) and also the unit step function in Eqs. (I.19) and (I.34), respectively. It remains to find the continuous analog of the unit function defined in Eq. (I.11) and whose z-transform is given in Eq. (I.18). This brings us face to face with the unit impulse function. The unit impulse function plays an important part in transform theory, linear system theory, as well as in probability and queueing theory. It therefore behooves us to learn to work with this function. Let us adopt the following notation:
$$u_0(t) \triangleq \text{unit impulse function occurring at } t = 0$$
$u_0(t)$ corresponds to highly concentrated unit-area pulses that are of such short duration that they cannot be distinguished by available measurement instruments from other perhaps briefer pulses. Therefore, as one might expect, the exact shape of the pulse is unimportant; rather, only its time of occurrence and its area matter. This function has been studied and utilized by scientists for many years [GUIL 49], among them Dirac, and so the unit impulse function is often referred to as the Dirac delta function. For a long time pure mathematicians have refrained from using $u_0(t)$ since it is a highly improper function, but years ago Schwartz's theory of distributions [SCHW 59] put the concept of a unit impulse function on firm mathematical ground. Part of the difficulty lies with the fact that the unit impulse function is not a function at all, but merely provides a notational way for handling discontinuities and their derivatives. In this regard we will introduce the unit impulse as the limit of a sequence without appealing to the more sophisticated generalized functions that place much of what we do in a more rigorous framework.

As we mentioned earlier, the exact shape of the pulse is unimportant. Let us therefore choose the following representative pulse shape for our discussion of impulses:
$$f_\alpha(t) = \begin{cases} \alpha & |t| \leq \dfrac{1}{2\alpha} \\ 0 & |t| > \dfrac{1}{2\alpha} \end{cases}$$
This rectangular waveform has a height and width dependent upon the parameter α as shown in Figure I.2. Note that this function has a constant area of unity (hence the name unit impulse function). As α increases, we note that the pulse gets taller and narrower. The limit of this sequence as α → ∞ (or the limit of any one of an infinite number of other sequences with similar properties, i.e., increasing height, decreasing width, unit area) is what we mean by the unit impulse "function." Thus we are led to the following description of the unit impulse function:
$$u_0(t) = \begin{cases} \infty & t = 0 \\ 0 & t \neq 0 \end{cases} \qquad\qquad \int_{-\infty}^{\infty} u_0(t)\, dt = 1$$

This function is represented graphically by a vertical arrow located at the instant of the impulse and with a number adjacent to the head of the arrow indicating the area of the impulse; that is, A times a unit impulse function located at the point t = a is denoted as $Au_0(t - a)$ and is depicted as in Figure I.3.

[Figure I.2 A sequence of functions $f_\alpha(t)$ whose limit is the unit impulse function $u_0(t)$.]

[Figure I.3 Graphical representation of $Au_0(t - a)$.]
Let us now consider the integral of the unit impulse function. It is clear that if we integrate from $-\infty$ to a point t where t < 0 then the total integral must be 0, whereas if t > 0 then we will have successfully integrated past the unit impulse and thereby will have accumulated a total area of unity. Thus we conclude
$$\int_{-\infty}^{t} u_0(x)\, dx = \begin{cases} 1 & t \geq 0 \\ 0 & t < 0 \end{cases}$$
But we note immediately that the right-hand side is the same as the definition of the unit step function given in Eq. (I.33). Therefore, we conclude that the unit step function is the integral of the unit impulse function, and so the "derivative" of the unit step function must therefore be a unit impulse function. However, we recognize that the derivative of this discontinuous function (the step function) is not properly defined; once again we appeal to the theory of distributions to place this operation on a firm mathematical foundation. We will therefore assume this is a proper operation and proceed to use the unit impulse function as if it were an ordinary function.
One of the very important properties of the unit impulse function is its sifting property; that is, for an arbitrary differentiable function g(t) we have
$$\int_{-\infty}^{\infty} u_0(t - x) g(x)\, dx = g(t)$$
This last equation merely says that the integral of the product of our function g(x) with an impulse located at x = t "sifts" the function g(x) to produce its value at t, g(t). We note that it is possible also to define the derivative of the unit impulse, which we denote by $u_1(t) = du_0(t)/dt$; this is known as the unit doublet and has the property that it is everywhere 0 except in the vicinity of the origin, where it runs off to ∞ just to the left of the origin and off to $-\infty$

just to the right of the origin, and, in addition, has a total area equal to zero. Such functions correspond to electrostatic dipoles, for example, used in physics. In fact, an impulse function may be likened to the force placed on a piece of paper when it is laid over the edge of a knife and pressed down, whereas a unit doublet is similar to the force the paper experiences when cut with scissors. Higher-order derivatives are possible and in general we may have $u_n(t) = du_{n-1}(t)/dt$. In fact, as we have seen, we may also go back down the sequence by integrating these functions as, for example, by generating the unit step function as the integral of the unit impulse function; the obvious notation for the unit step function, therefore, would be $u_{-1}(t)$ and so we may write $u_0(t) = du_{-1}(t)/dt$. [Note, from Eq. (I.33), that we have also reserved the notation $\delta(t)$ to represent the unit step function.] Thus we have defined an infinite sequence of specialized functions beginning with the unit impulse and proceeding to higher-order derivatives such as the doublet, and so on, as well as integrating the unit impulse and thereby generating the unit step function, the ramp, namely,
$$u_{-2}(t) \triangleq \int_{-\infty}^{t} u_{-1}(x)\, dx = \begin{cases} t & t \geq 0 \\ 0 & t < 0 \end{cases}$$
the parabola, namely,
$$u_{-3}(t) \triangleq \int_{-\infty}^{t} u_{-2}(x)\, dx = \begin{cases} \dfrac{t^2}{2} & t \geq 0 \\ 0 & t < 0 \end{cases}$$
and in general
$$u_{-n}(t) = \begin{cases} \dfrac{t^{n-1}}{(n-1)!} & t \geq 0 \\ 0 & t < 0 \end{cases} \tag{I.35}$$
This entire family is called the family of singularity functions, and the most important members are the unit step function and the unit impulse function.
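A numerical look at the sifting property makes the limiting process concrete (a supplementary sketch, not part of the original text; the test function cos and the pulse widths are arbitrary choices). Replacing $u_0$ by the rectangular pulse $f_\alpha$ of Figure I.2 and integrating, the result approaches g(t) as α grows:

```python
import numpy as np

def pulse(x, alpha):
    # the rectangular approximation f_alpha: height alpha, width 1/alpha
    return np.where(np.abs(x) <= 1.0 / (2 * alpha), alpha, 0.0)

g = np.cos                           # an arbitrary smooth test function
t = 0.6
x = np.linspace(-5.0, 5.0, 2000001)
dx = x[1] - x[0]

for alpha in [10.0, 100.0, 1000.0]:
    integral = np.sum(pulse(t - x, alpha) * g(x)) * dx
    print(alpha, integral, g(t))     # the integral tends to cos(0.6)
```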
Let us now return to our main discussion and consider the Laplace transform of $u_0(t)$. We proceed directly from Eq. (I.28) to obtain
$$u_0(t) \Leftrightarrow \int_{0^-}^{\infty} u_0(t) e^{-st}\, dt = 1$$
(Note that the lower limit is interpreted as $0^-$.) Thus we see that the unit impulse has a Laplace transform equal to the constant unity.
Let us now consider some of the important properties of the transformation. As with the z-transform, the convolution property is the most important and we proceed to derive it here in the continuous-time case. Thus, consider two functions of continuous time f(t) and g(t), which take on nonzero values only for $t \geq 0$; we denote their Laplace transforms by $F^*(s)$ and $G^*(s)$, respectively. Defining ⊗ once again as the convolution operator, that is,
$$f(t) \otimes g(t) \triangleq \int_{-\infty}^{\infty} f(t - x) g(x)\, dx \tag{I.36}$$
which in our case reduces to
$$f(t) \otimes g(t) = \int_{0}^{t} f(t - x) g(x)\, dx$$
we may then ask for the Laplace transform of this convolution. We obtain this formally by plugging into Eq. (I.28) as follows:
$$f(t) \otimes g(t) \Leftrightarrow \int_{t=0}^{\infty} \left[f(t) \otimes g(t)\right] e^{-st}\, dt = \int_{t=0}^{\infty} \int_{x=0}^{t} f(t - x) g(x)\, dx\, e^{-st}\, dt$$
$$= \int_{x=0}^{\infty} \int_{t=x}^{\infty} f(t - x) e^{-s(t-x)}\, dt\, g(x) e^{-sx}\, dx = \int_{x=0}^{\infty} g(x) e^{-sx}\, dx \int_{y=0}^{\infty} f(y) e^{-sy}\, dy$$
And so we have
$$f(t) \otimes g(t) \Leftrightarrow F^*(s)G^*(s)$$

Once again we see that the transform of the convolution of two functions equals the product of the transforms of each.
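A symbolic check of this property (a supplementary sketch, not part of the original text; the two exponentials are arbitrary choices): convolve $e^{-t}$ with $e^{-2t}$ on $t \geq 0$ and compare the transform of the result with the product of the individual transforms:

```python
import sympy as sp

t, x, s = sp.symbols('t x s', positive=True)

conv = sp.integrate(sp.exp(-(t - x)) * sp.exp(-2 * x), (x, 0, t))
print(sp.simplify(conv))                       # exp(-t) - exp(-2*t)

lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = 1 / ((s + 1) * (s + 2))                  # product of 1/(s+1) and 1/(s+2)
print(sp.simplify(lhs - rhs))                  # 0
```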
In Table I.3 we list a number of important properties of the Laplace transform, and in Table I.4 we list some of the important transforms themselves. In these tables we adopt the usual notation as follows:

$$f(t) \Leftrightarrow F^*(s), \qquad \frac{d^n f(t)}{dt^n} \triangleq f^{(n)}(t), \qquad \underbrace{\int_{-\infty}^{t} \cdots \int}_{n\ \text{times}} f(x)\,(dx)^n \triangleq f^{(-n)}(t) \tag{I.37}$$
For example, $f^{(-1)}(t) = \int_{-\infty}^{t} f(x)\, dx$; when we deal with functions which are zero for t < 0, then $f^{(-1)}(0^-) = 0$. We comment here that the one-sided transform that uses $0^+$ as a lower limit in its definition is quite commonly used in transient analysis, but we prefer $0^-$ so as to include impulses at the origin.
Table I.3 Some Properties of the Laplace Transform

1. $f(t) \Leftrightarrow F^*(s) = \int_{-\infty}^{\infty} f(t) e^{-st}\, dt$
2. $af(t) + bg(t) \Leftrightarrow aF^*(s) + bG^*(s)$
3. $f(t/a)$ $(a > 0) \Leftrightarrow aF^*(as)$
4. $f(t - a) \Leftrightarrow e^{-as}F^*(s)$
5. $e^{-at}f(t) \Leftrightarrow F^*(s + a)$
6. $tf(t) \Leftrightarrow -\dfrac{dF^*(s)}{ds}$
7. $t^n f(t) \Leftrightarrow (-1)^n \dfrac{d^n F^*(s)}{ds^n}$
8. $\dfrac{f(t)}{t} \Leftrightarrow \displaystyle\int_{s_1=s}^{\infty} F^*(s_1)\, ds_1$
9. $\dfrac{f(t)}{t^n} \Leftrightarrow \displaystyle\int_{s_1=s}^{\infty} ds_1 \int_{s_2=s_1}^{\infty} ds_2 \cdots \int_{s_n=s_{n-1}}^{\infty} F^*(s_n)\, ds_n$
10. $f(t) \otimes g(t) \Leftrightarrow F^*(s)G^*(s)$
11.† $\dfrac{df(t)}{dt} \Leftrightarrow sF^*(s)$
12.† $\dfrac{d^n f(t)}{dt^n} \Leftrightarrow s^n F^*(s)$
13.† $\displaystyle\int_{-\infty}^{t} f(t)\, dt \Leftrightarrow \dfrac{F^*(s)}{s}$
14.† $\underbrace{\displaystyle\int_{-\infty}^{t} \cdots \int}_{n\ \text{times}} f(t)\,(dt)^n \Leftrightarrow \dfrac{F^*(s)}{s^n}$
15. $\dfrac{\partial}{\partial a} f(t) \Leftrightarrow \dfrac{\partial}{\partial a} F^*(s)$ ($a$ is a parameter)
16. Integral property: $F^*(0) = \int_{0}^{\infty} f(t)\, dt$
17. Initial value theorem: $\lim_{s \to \infty} sF^*(s) = \lim_{t \to 0} f(t)$
18. Final value theorem: $\lim_{s \to 0} sF^*(s) = \lim_{t \to \infty} f(t)$, if $sF^*(s)$ is analytic for $\operatorname{Re}(s) \geq 0$

† To be complete, we wish to show the form of the transform for entries 11-14 in the case when f(t) may have nonzero values for t < 0 also:
$$\frac{d^n f(t)}{dt^n} \Leftrightarrow s^n F^*(s) - s^{n-1}f(0^-) - s^{n-2}f^{(1)}(0^-) - \cdots - f^{(n-1)}(0^-)$$
$$\underbrace{\int_{-\infty}^{t} \cdots \int}_{n\ \text{times}} f(t)\,(dt)^n \Leftrightarrow \frac{F^*(s)}{s^n} + \frac{f^{(-1)}(0^-)}{s^n} + \frac{f^{(-2)}(0^-)}{s^{n-1}} + \cdots + \frac{f^{(-n)}(0^-)}{s}$$


Table I.4 Some Laplace Transform Pairs

(All functions are for $t \geq 0$.)

1. $f(t) \Leftrightarrow F^*(s) = \int_{0}^{\infty} f(t) e^{-st}\, dt$
2. $u_0(t)$ (unit impulse) $\Leftrightarrow 1$
3. $u_0(t - a) \Leftrightarrow e^{-as}$
4. $u_n(t) = \dfrac{d}{dt}u_{n-1}(t) \Leftrightarrow s^n$
5. $\delta(t)$ (unit step) $\Leftrightarrow \dfrac{1}{s}$
6. $\delta(t - a) \Leftrightarrow \dfrac{e^{-as}}{s}$
7. $u_{-n}(t) = \dfrac{t^{n-1}}{(n-1)!} \Leftrightarrow \dfrac{1}{s^n}$
8. $Ae^{-at}\,\delta(t) \Leftrightarrow \dfrac{A}{s + a}$
9. $te^{-at}\,\delta(t) \Leftrightarrow \dfrac{1}{(s + a)^2}$
10. $\dfrac{t^n}{n!}\,e^{-at}\,\delta(t) \Leftrightarrow \dfrac{1}{(s + a)^{n+1}}$

The table of properties permits one to compute many transform pairs from a given pair. Property 2 is the statement of linearity and Property 3 describes the effect of a scale change. Property 4 gives the effect of a translation in time, whereas Property 5, its dual, gives the effect of a parameter shift in the transform domain. Properties 6 and 7 show the effect of multiplication by t (to some power), which corresponds to differentiation in the transform domain; similarly, Properties 8 and 9 show the effect of division by t (to some power), which corresponds to integration. Property 10, a most important property (derived earlier), shows the effect of convolution in the time domain going over to simple multiplication in the transform domain. Properties 11 and 12 give the effect of time differentiation; it should be noted that this corresponds to multiplication by s (to a power equal to the number of differentiations in time) times the original transform. In a similar way Properties 13 and 14 show the effect of time integration going over to division by s in the transform domain. Property 15 shows that differentiation with respect to a parameter of f(t) corresponds to differentiation in the transform domain as well. Property 16, the integral property, shows the simple way in which the transform may be evaluated at the origin to give the total integral


of f(t). Properties 17 and 18, the initial and final value theorems, show how to compute the values for f(t) at t = 0 and t = ∞ directly from the transform.

In Table I.4 we have a rather short list of important Laplace transform pairs. Much more extensive tables exist and may be found elsewhere [DOET 61]. Of course, as we said earlier, the table shown can be extended considerably by making use of the properties listed in Table I.3. We note, for example, that transform pair 3 in Table I.4 is obtained from transform pair 2 by application of Property 4 in Table I.3. We point out again that this table is limited in length since we have included only those functions that find relevance to the material contained in this text.
So far in this discussion of Laplace transforms we have been considering only functions f(t) for which f(t) = 0 for t < 0. This will be satisfactory for most of the work we consider in this text. However, there is an occasional need for transforming a function of time which may be nonzero anywhere on the real-time axis. For this purpose we must once again consider the lower limit of integration to be $-\infty$, that is,
$$F^*(s) = \int_{-\infty}^{\infty} f(t) e^{-st}\, dt \tag{I.38}$$

One can easily show that this (bilateral) Laplace transform may be calculated in terms of one-sided time functions and their transforms as follows. First we define
$$f_-(t) = \begin{cases} f(t) & t < 0 \\ 0 & t \geq 0 \end{cases} \qquad\qquad f_+(t) = \begin{cases} 0 & t < 0 \\ f(t) & t \geq 0 \end{cases}$$
and so it immediately follows that
$$f(t) = f_-(t) + f_+(t)$$
We now observe that $f_-(-t)$ is a function that is nonzero only for positive values of t, and $f_+(t)$ is nonzero only for nonnegative values of t. Thus we have
$$f_+(t) \Leftrightarrow F_+^*(s) \qquad\qquad f_-(-t) \Leftrightarrow F_-^*(s)$$
where these transforms are defined as in Eq. (I.29). However, we need the transform of $f_-(t)$, which is easily shown to be
$$f_-(t) \Leftrightarrow F_-^*(-s)$$
Thus, by the linearity of transforms, we may finally write the bilateral transform in terms of one-sided transforms:
$$F^*(s) = F_-^*(-s) + F_+^*(s)$$


As always, these Laplace transforms have abscissas of absolute convergence. Let us therefore define σ₊ as the convergence abscissa for F₊*(s); this implies that the region of convergence for F₊*(s) is Re(s) > σ₊. Similarly, F₋*(s) will have some abscissa of absolute convergence, which we will denote by σ₋, which implies that F₋*(s) converges for Re(s) > σ₋. It then follows directly that F₋*(-s) will have the same convergence abscissa (σ₋) but will converge for Re(s) < σ₋. Thus we have a situation where F*(s) converges for σ₊ < Re(s) < σ₋, and therefore we will have a "convergence strip" if and only if σ₊ < σ₋; if such is not the case, then it is not useful to define F*(s). Of course, a similar argument can be made in the case of z-transforms for functions that take on nonzero values for negative time indices.
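As a concrete illustration of this decomposition (an example of ours, not from the text): take f(t) = e^{-|t|}. Then f₊(t) = e^{-t} for t ≥ 0 and f₋(-t) = e^{-t} for t > 0, so both one-sided transforms equal 1/(s+1), and the bilateral transform is F*(s) = 1/(1-s) + 1/(s+1) = 2/(1-s²), with convergence strip -1 < Re(s) < 1. The sketch below (assuming the sympy package is available) checks this symbolically.

    import sympy as sp

    t, s = sp.symbols('t s')

    # One-sided transform of f_+(t) = e^{-t}; converges for Re(s) > -1.
    F_plus = sp.integrate(sp.exp(-t) * sp.exp(-s * t), (t, 0, sp.oo), conds='none')

    # f_-(-t) = e^{-t} has the same one-sided transform; replacing s by -s
    # gives the transform of f_-(t), which converges for Re(s) < 1.
    F_minus_of_minus_s = F_plus.subs(s, -s)

    # Bilateral transform F*(s) = F_-*(-s) + F_+*(s), valid on -1 < Re(s) < 1.
    print(sp.simplify(F_minus_of_minus_s + F_plus))   # 2/(1 - s**2)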
So far we have seen the effect of tagging our time function f(t) with the complex exponential e^{-st} and then compressing (integrating) over all such tagged functions to form a new function, namely, the transform F*(s). The purpose of the tagging was so that we could later "untransform" or, if you will, "unwind" the transform in order to obtain f(t) once again. In principle we know this is possible since a transform and its time function are uniquely related. So far, we have specified how to go in the one direction from f(t) to F*(s). Let us now discuss the problem of inverting the Laplace transform F*(s) to recover f(t). There are basically two methods for conducting this inversion: the inspection method and the formal inversion integral method. These two methods are very similar.
First let us discuss the inspection method, which is perhaps the most useful scheme for inverting transforms. Here, as with z-transforms, the approach is to rewrite F*(s) as a sum of terms, each of which can be recognized from the table of Laplace transform pairs. Then, making use of the linearity property, we may invert the transform term by term and then sum the result to recover f(t). Once again, the basic method for writing F*(s) as a sum of recognizable terms is that of the partial-fraction expansion. Our description of that method will be somewhat shortened here since we have discussed it at some length in the z-transform section. First, we will assume that F*(s) is a rational function of s, namely,

$$F^*(s) = \frac{N(s)}{D(s)}$$

where both the numerator N(s) and denominator D(s) are polynomials in s. Again, we assume that the degree of N(s) is less than the degree of D(s); if this is not the case, N(s) must be divided by D(s) until the remainder is of degree less than the degree of D(s), and then the partial-fraction expansion is carried out for this remainder, whereas the terms of the quotient resulting from the division will be simple powers of s, which may be inverted by appealing to Transform 4 in Table I.4.


" hard" part of the problem ha s been done , namely, that D (s) has been put in
facto red form
k

II (s + ai)m,
i :::lt l

D(s) =
.

(1.39)

Once F*(s) is in this form we may then express it as the following sum:

$$F^*(s) = \frac{B_{11}}{(s+a_1)^{m_1}} + \frac{B_{12}}{(s+a_1)^{m_1-1}} + \cdots + \frac{B_{1m_1}}{(s+a_1)} + \cdots + \frac{B_{k1}}{(s+a_k)^{m_k}} + \frac{B_{k2}}{(s+a_k)^{m_k-1}} + \cdots + \frac{B_{km_k}}{(s+a_k)} \qquad (1.40)$$

Once we have expressed F*(s) as above we are then in a position to invert each term in this sum by inspection from Table I.4. In particular, Pairs 8 (for simple poles) and 10 (for multiple poles) give us the answer directly. As before, the method for calculating the coefficients B_{ij} is given in general by

$$B_{ij} = \frac{1}{(j-1)!}\left[\frac{d^{j-1}}{ds^{j-1}}\,(s+a_i)^{m_i}\,\frac{N(s)}{D(s)}\right]_{s=-a_i} \qquad (1.41)$$

Thus we have a complete prescription for finding f(t) from F*(s) by inspection in those cases where F*(s) is rational and where D(s) has been factored as in Eq. (1.39). This method works very well in those cases where F*(s) is not overly complex.
To elucidate some of these principles let us carry out a simple example. Assume that F*(s) is given by

$$F^*(s) = \frac{8(s^2 + 3s + 1)}{(s+3)(s+1)^3} \qquad (1.42)$$

We have already written the denominator in factored form, and so we may proceed directly to expand F*(s) as in Eq. (1.40). Note that we have k = 2, a₁ = 3, m₁ = 1, a₂ = 1, m₂ = 3. Since the denominator degree (4) is greater than the numerator degree (2), we may immediately expand F*(s) as a partial fraction as given by Eq. (1.40), namely,

$$F^*(s) = \frac{B_{11}}{s+3} + \frac{B_{21}}{(s+1)^3} + \frac{B_{22}}{(s+1)^2} + \frac{B_{23}}{(s+1)}$$


Evaluation of the coefficients B_{ij} proceeds as follows. B₁₁ is especially simple since no differentiations are required, and we obtain

$$B_{11} = (s+3)F^*(s)\Big|_{s=-3} = \frac{8(9-9+1)}{(-2)^3} = -1$$

B₂₁ is also easy to evaluate:

$$B_{21} = (s+1)^3 F^*(s)\Big|_{s=-1} = \frac{8(1-3+1)}{2} = -4$$

For B₂₂ we must differentiate once, namely,

$$B_{22} = \frac{d}{ds}\left[\frac{8(s^2+3s+1)}{s+3}\right]_{s=-1} = 8\,\frac{(s+3)(2s+3) - (s^2+3s+1)(1)}{(s+3)^2}\bigg|_{s=-1} = 8\,\frac{s^2+6s+8}{(s+3)^2}\bigg|_{s=-1} = 8\,\frac{1-6+8}{(2)^2} = 6$$
Lastly, the calculation of B₂₃ involves two differentiations; however, we have already carried out the first differentiation, and so we take advantage of the form we derived for B₂₂ just prior to evaluation at s = -1; furthermore, we note that since j = 3, we have for the first time an effect due to the term (j-1)! from Eq. (1.41). Thus

$$B_{23} = \frac{1}{2!}\,\frac{d^2}{ds^2}\left[\frac{8(s^2+3s+1)}{s+3}\right]_{s=-1} = \frac{1}{2}\,(8)\,\frac{d}{ds}\left[\frac{s^2+6s+8}{(s+3)^2}\right]_{s=-1}$$

$$= 4\,\frac{(s+3)^2(2s+6) - (s^2+6s+8)(2)(s+3)}{(s+3)^4}\bigg|_{s=-1} = 4\,\frac{(2)^2(4) - (1-6+8)(2)(2)}{(2)^4} = 1$$


This completes the evaluation of the constants B_{ij} to give the partial-fraction expansion

$$F^*(s) = \frac{-1}{s+3} + \frac{-4}{(s+1)^3} + \frac{6}{(s+1)^2} + \frac{1}{(s+1)} \qquad (1.43)$$

This last form lends itself to inversion by inspection as we had promised. In particular, we observe that the first and last terms invert directly according to transform pair 8 from Table I.4, whereas the second and third terms invert directly from Pair 10 of that table; thus we have for t ≥ 0 the following:

$$f(t) = -e^{-3t} - 2t^2 e^{-t} + 6t\,e^{-t} + e^{-t} \qquad (1.44)$$

and, of course, f(t) = 0 for t < 0.
In the course of carrying out an inversion by partial-fraction expansion there are two natural points at which one can conduct a test to see if any errors have been made: first, once we have the partial-fraction expansion [as in our example, the result given in Eq. (1.43)], one can combine this sum of terms into a single term over a common denominator and check that this single term corresponds to the original given F*(s); the other check is to take the final form for f(t), carry out the forward transformation, and confirm that it gives the original F*(s) [of course, one then gets F*(s) expanded directly as a partial fraction].
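Both checks (and the expansion itself) are easy to mechanize. The following sketch of ours (assuming the sympy library is available) reproduces the partial-fraction expansion of Eq. (1.43) and the inversion of Eq. (1.44) symbolically:

    import sympy as sp

    s = sp.symbols('s')
    t = sp.symbols('t', positive=True)

    F = 8 * (s**2 + 3*s + 1) / ((s + 3) * (s + 1)**3)   # Eq. (1.42)

    # Partial-fraction expansion; should reproduce Eq. (1.43).
    print(sp.apart(F, s))

    # Term-by-term inversion; should reproduce Eq. (1.44) for t >= 0
    # (some sympy versions carry an explicit Heaviside(t) factor).
    print(sp.expand(sp.inverse_laplace_transform(F, s, t)))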
The second method for finding f(t) from F*(s) is to use the inversion integral

$$f(t) = \frac{1}{2\pi j} \int_{\sigma_c - j\infty}^{\sigma_c + j\infty} F^*(s)\,e^{st}\,ds \qquad (1.45)$$

for t ≥ 0 and σ_c > σ_a. The integration in the complex s-plane is taken to be a straight-line integration parallel to the imaginary axis and lying to the right of σ_a, the abscissa of absolute convergence for F*(s). The usual means for carrying out this integration is to make use of the Cauchy residue theorem as applied to the integral in the complex domain around a closed contour. The closed contour we choose for this purpose is a semicircle of infinite radius, as shown in Figure I.4. In this figure we see that the path of integration required for Eq. (1.45) is s₃–s₁ and the semicircle of infinite radius closing this contour is given as s₁–s₂–s₃. If the integral along the path s₁–s₂–s₃ is 0, then the integral along the entire closed contour will in fact give us f(t) from Eq. (1.45). To establish that this contribution is 0, we need


Figure I.4 Closed contour for inversion integral (s-plane; σ = Re(s), ω = Im(s)).

Jordan's Lemma If |F*(s)| → 0 as R → ∞ on the arc s₁–s₂–s₃, then

$$\int_{s_1 - s_2 - s_3} F^*(s)\,e^{st}\,ds \to 0 \qquad \text{for } t > 0$$

Thus, in order to carry out the complex inversion integral shown in Eq. (1.45), we must first express F*(s) in a form for which Jordan's lemma applies. Having done this we may then evaluate the integral around the closed contour C by calculating residues and using Cauchy's residue theorem. This is most easily carried out if F*(s) is in rational form with a factored denominator as in Eq. (1.39). In order for Jordan's lemma to apply, we will require, as we did before, that the degree of the numerator be strictly less than the degree of the denominator, and if this is not so, we must divide the rational function until the remainder has this property. That is all there is to the method. Let us carry this out on our previous example, namely that given in Eq. (1.42). We note this is already in a form for which Jordan's lemma applies, and so we may proceed directly with Cauchy's residue theorem. Our poles are located at s = -3 and s = -1. We begin by calculating the residue at s = -3, thus

$$r_{-3} = (s+3)F^*(s)\,e^{st}\Big|_{s=-3} = \frac{8(s^2+3s+1)\,e^{st}}{(s+1)^3}\bigg|_{s=-3} = \frac{8(9-9+1)\,e^{-3t}}{(-2)^3} = -e^{-3t}$$


Similarly, we must calculate the residue at s = -1, which requires the differentiations indicated in our residue formula, Eq. (1.27):

$$r_{-1} = \frac{1}{2!}\,\frac{d^2}{ds^2}\Big[(s+1)^3 F^*(s)\,e^{st}\Big]_{s=-1} = \frac{1}{2}\,\frac{d^2}{ds^2}\left[\frac{8(s^2+3s+1)\,e^{st}}{s+3}\right]_{s=-1}$$

The first quotient-rule differentiation yields

$$\frac{1}{2}\,\frac{d}{ds}\left[\frac{(s+3)\,8\big[(2s+3) + (s^2+3s+1)\,t\big]e^{st} - 8(s^2+3s+1)\,e^{st}}{(s+3)^2}\right]_{s=-1}$$

and carrying out the second differentiation and evaluating at s = -1 gives

$$r_{-1} = e^{-t} + 6t\,e^{-t} - 2t^2 e^{-t}$$

Combining these residues we have

$$f(t) = -e^{-3t} + e^{-t} + 6t\,e^{-t} - 2t^2 e^{-t}$$

Thus we see that our solution here is the same as in Eq. (1.44), as it must be; we have once again that f(t) = 0 for t < 0.
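The contour inversion of Eq. (1.45) can also be carried out numerically; for example, the mpmath library provides a routine for numerical Laplace-transform inversion. The sketch below (ours, assuming a recent mpmath is installed) compares such a numerical inversion of Eq. (1.42) against the closed form of Eq. (1.44) at a few time points:

    import mpmath as mp

    F = lambda s: 8 * (s**2 + 3*s + 1) / ((s + 3) * (s + 1)**3)    # Eq. (1.42)
    f = lambda t: -mp.exp(-3*t) + (1 + 6*t - 2*t**2) * mp.exp(-t)  # Eq. (1.44)

    for t in (0.5, 1.0, 2.0):
        approx = mp.invertlaplace(F, t, method='talbot')  # numerical inversion
        print(t, approx, f(t))   # the two values should agree closely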
In our earlier discussion of the (bilateral) Laplace transform we discussed functions of time f₋(t) and f₊(t) defined for the negative and positive real-time axis, respectively. We also observed that the transform for each of these functions was analytic in a left half-plane and a right half-plane, respectively, as measured from their appropriate abscissas of absolute convergence. Moreover, in our last inversion method [the application of Eq. (1.45)] we observed that closing the contour by a semicircle of infinite radius in a counterclockwise direction gave a result for t > 0. We comment now that had we closed the contour in a clockwise fashion to the right, we would have obtained the result that would have been applicable for t < 0, assuming that the contribution of this contour could be shown to be 0 by Jordan's lemma. In order to invert a bilateral transform, we proceed by obtaining first f(t) for positive values of t and then for negative values of t. For the first we take a path of integration within the convergence strip defined by σ₊ < σ_c < σ₋ and then close the contour with a counterclockwise semicircle; for t < 0, we take the same vertical contour but close it with a semicircle to the right.


As may be anticipated from our contour integration methods, it is sometimes necessary to determine exactly how many singularities of a function exist within a closed region. A very powerful and convenient theorem which aids us in this determination is given as follows:

Rouché's Theorem [GUIL 49] If f(s) and g(s) are analytic functions of s inside and on a closed contour C, and also if |g(s)| < |f(s)| on C, then f(s) and f(s) + g(s) have the same number of zeros inside C.
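As a small hypothetical illustration of ours (not from the text): on the contour |s| = 1 take f(s) = 3s and g(s) = s⁵ + 1, so that |g(s)| ≤ 2 < 3 = |f(s)| on C. Rouché's theorem then says s⁵ + 3s + 1 has the same number of zeros inside the unit circle as 3s does, namely one, which a numerical root finder confirms:

    import numpy as np

    # Zeros of s^5 + 3s + 1 (coefficients listed from highest power down).
    roots = np.roots([1, 0, 0, 0, 3, 1])
    print(sum(abs(r) < 1 for r in roots))   # prints 1, as Rouché predicts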

1.4. USE OF TRANSFORMS IN THE SOLUTION OF DIFFERENCE AND DIFFERENTIAL EQUATIONS

As we have already mentioned, transforms are extremely useful in the solution of both differential and difference equations with constant coefficients. In this section we illustrate that technique; we begin with difference equations using z-transforms and then move on to differential equations using Laplace transforms, preparing us for the more complicated differential-difference equations encountered in the text, for which we need both methods simultaneously.
Let us consider the following general Nth-order linear difference equation with constant coefficients:

$$a_N g_{n-N} + a_{N-1} g_{n-N+1} + \cdots + a_0 g_n = e_n \qquad (1.46)$$

where the a_i are known constant coefficients, the g_n are unknown functions to be found, and e_n is a given function of n. In addition, we assume we are given N boundary equations (e.g., initial conditions). As always with such equations, the solution we are seeking consists of both a homogeneous and a particular solution, namely,

$$g_n = g_n^{(h)} + g_n^{(p)}$$

just as with differential equations. We know that the homogeneous solution must satisfy the homogeneous equation

$$a_N g_{n-N} + a_{N-1} g_{n-N+1} + \cdots + a_0 g_n = 0 \qquad (1.47)$$

The general form of solution to Eq. (1.47) is

$$g_n^{(h)} = A\alpha^n$$

where A and α are yet to be determined. If we substitute the proposed solution into Eq. (1.47), we find

$$a_N A\alpha^{n-N} + a_{N-1} A\alpha^{n-N+1} + \cdots + a_0 A\alpha^n = 0 \qquad (1.48)$$


This Nth-order polynomial clearly has N solutions, which we will denote by α₁, α₂, ..., α_N, assuming for the moment that all the αᵢ are distinct. Associated with each such solution is an arbitrary constant Aᵢ which will be determined from the initial conditions for the difference equation (of which there must be N). By cancelling the common term Aα^{n-N} from Eq. (1.48) we finally arrive at the characteristic equation which determines the values αᵢ:

$$a_N + a_{N-1}\alpha + a_{N-2}\alpha^2 + \cdots + a_0\alpha^N = 0 \qquad (1.49)$$

Thus the search for the homogeneous solution is now reduced to finding the N roots of our characteristic equation (1.49). If all N of the αᵢ are distinct, then the homogeneous solution is

$$g_n^{(h)} = A_1\alpha_1^n + A_2\alpha_2^n + \cdots + A_N\alpha_N^n$$

In the case of nondistinct roots, we have a slightly different situation. In particular, let α₁ be a multiple root of order k; in this case the k equal roots will contribute to the homogeneous solution in the following form:

$$(A_{11} n^{k-1} + A_{12} n^{k-2} + \cdots + A_{1,k-1}\,n + A_{1k})\,\alpha_1^n$$

and similarly for any other multiple roots. As far as the particular solution g_n^{(p)} is concerned, we know that it must be found by an appropriate guess from the form of e_n.
Let us illustrate some of these principles by means of an example. Consider the second-order difference equation

$$6g_n - 5g_{n-1} + g_{n-2} = 6\left(\tfrac{1}{5}\right)^n \qquad n = 2, 3, 4, \ldots \qquad (1.50)$$

This equation gives the relationship among the unknown functions g_n for n = 2, 3, 4, .... Of course, we must give two initial conditions (since the order is 2) and we choose these to be g₀ = 0, g₁ = 6/5. In order to find the homogeneous solution we must form Eq. (1.49), which in this case becomes

$$6\alpha^2 - 5\alpha + 1 = 0$$

and so the two values of α which solve this equation are

$$\alpha_1 = \frac{1}{2} \qquad\qquad \alpha_2 = \frac{1}{3}$$

and thus we have the homogeneous solution

$$g_n^{(h)} = A_1\left(\tfrac{1}{2}\right)^n + A_2\left(\tfrac{1}{3}\right)^n$$


The particular solution must be guessed at, and the correct guess in this case is

$$g_n^{(p)} = B\left(\tfrac{1}{5}\right)^n$$

If we plug g_n^{(p)} as given back into our basic equation, namely, Eq. (1.50), we find that B = 1, and so we are convinced that the particular solution is correct. Thus our complete solution is given by

$$g_n = A_1\left(\tfrac{1}{2}\right)^n + A_2\left(\tfrac{1}{3}\right)^n + \left(\tfrac{1}{5}\right)^n$$

We use the initial conditions to solve for A₁ and A₂ and find A₁ = 8 and A₂ = -9. Thus our final solution is

$$g_n = 8\left(\tfrac{1}{2}\right)^n - 9\left(\tfrac{1}{3}\right)^n + \left(\tfrac{1}{5}\right)^n \qquad n = 0, 1, 2, \ldots \qquad (1.51)$$

This completes the standard way of solving our difference equation.
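A direct numerical check (a small Python sketch of ours) confirms that Eq. (1.51) satisfies both the initial conditions and the recurrence of Eq. (1.50):

    g = lambda n: 8 * (1/2)**n - 9 * (1/3)**n + (1/5)**n      # Eq. (1.51)

    assert abs(g(0) - 0) < 1e-12 and abs(g(1) - 6/5) < 1e-12  # g_0 = 0, g_1 = 6/5
    for n in range(2, 25):
        residual = 6*g(n) - 5*g(n-1) + g(n-2) - 6*(1/5)**n    # Eq. (1.50)
        assert abs(residual) < 1e-12
    print("Eq. (1.51) satisfies Eq. (1.50) and the initial conditions")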
Let us now describe the method of z-transforms for solving difference equations. Assume once again that we are given Eq. (1.46) and that it holds in the range n = k, k+1, .... Our approach begins by defining the following z-transform:

$$G(z) = \sum_{n=0}^{\infty} g_n z^n \qquad (1.52)$$

From our earlier discussion we know that once we have found G(z) we may then apply our inversion techniques to find the desired solution g_n. Our next step is to multiply the nth equation from Eq. (1.46) by zⁿ and then form the sum of all such multiplied equations from k to infinity; that is, we form

$$\sum_{n=k}^{\infty} \sum_{i=0}^{N} a_i\,g_{n-i}\,z^n = \sum_{n=k}^{\infty} e_n z^n$$

We then carry out the summations and attempt to recognize G(z) in this single equation. Next we solve for G(z) algebraically and then proceed with our inversion techniques to obtain the solution. This method does not require that we guess at the particular solution, and so in that sense is simpler than the direct method; however, as we shall see, it still has the basic difficulty that we must solve the characteristic equation [Eq. (1.49)], and in general this is the difficult part of the solution. However, even if we cannot solve for the roots αᵢ it is possible to obtain meaningful properties of the solution g_n from the perhaps unfactored form for G(z).


Let us solve our earlier example using the method of z-transforms. Accordingly we begin with Eq. (1.50), multiply by zⁿ and then sum; the sum will go from 2 to infinity since this is the applicable range for that equation. Thus

$$6\sum_{n=2}^{\infty} g_n z^n - 5\sum_{n=2}^{\infty} g_{n-1} z^n + \sum_{n=2}^{\infty} g_{n-2} z^n = 6\sum_{n=2}^{\infty} \left(\tfrac{1}{5}\right)^n z^n$$

We now factor out enough powers of z from each sum so that these powers match the subscript on g, thusly:

$$6\sum_{n=2}^{\infty} g_n z^n - 5z\sum_{n=2}^{\infty} g_{n-1} z^{n-1} + z^2\sum_{n=2}^{\infty} g_{n-2} z^{n-2} = 6\sum_{n=2}^{\infty} \left(\tfrac{1}{5}\right)^n z^n$$

Focusing on the first summation we see that it is almost of the form G(z) except that it is missing the terms for n = 0 and n = 1 [see Eq. (1.52)]; applying this observation to each of the sums on the left-hand side and carrying out the summation on the right-hand side directly, we find

$$6[G(z) - g_0 - g_1 z] - 5z[G(z) - g_0] + z^2 G(z) = \frac{6(1/5)^2 z^2}{1 - (1/5)z}$$

Observe how the first term in this last equation reflects the fact that our summation was missing the first two terms of G(z). Solving for G(z) algebraically we find

$$G(z) = \frac{6g_0 + 6g_1 z - 5g_0 z + (6/25)z^2/[1 - (1/5)z]}{6 - 5z + z^2}$$

If we now use our given values for g₀ and g₁, we have

$$G(z) = \frac{1}{5}\,\frac{z(6 - z)}{[1 - (1/3)z][1 - (1/2)z][1 - (1/5)z]}$$

Proceeding with a partial-fraction expansion of this last form we obtain

$$G(z) = \frac{-9}{1 - (1/3)z} + \frac{8}{1 - (1/2)z} + \frac{1}{1 - (1/5)z}$$

which by our usual inversion methods yields the final solution

$$g_n = 8\left(\tfrac{1}{2}\right)^n - 9\left(\tfrac{1}{3}\right)^n + \left(\tfrac{1}{5}\right)^n \qquad n = 0, 1, 2, \ldots$$

Note that this is exactly the same as Eq. (1.51) and so our method checks. We comment here that even were we not able to invert the given form for G(z) we could still have found certain of its properties; for example, we could find


that the sum of all terms is given immediately by G(1); that is,

$$\sum_{n=0}^{\infty} g_n = G(1) = \frac{15}{4}$$
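For instance, a quick numerical sketch of ours confirms that the partial sums of the g_n from Eq. (1.51) converge to G(1) = 15/4:

    g = lambda n: 8 * (1/2)**n - 9 * (1/3)**n + (1/5)**n   # Eq. (1.51)

    partial = sum(g(n) for n in range(200))                # truncated series
    G1 = -9/(1 - 1/3) + 8/(1 - 1/2) + 1/(1 - 1/5)          # G(z) at z = 1
    print(partial, G1)                                     # both 3.75 = 15/4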

Let us now consider the application of the Laplace transform to the solution of constant-coefficient linear differential equations. Consider an Nth-order equation of the following form:

$$a_N \frac{d^N f(t)}{dt^N} + a_{N-1} \frac{d^{N-1} f(t)}{dt^{N-1}} + \cdots + a_1 \frac{df(t)}{dt} + a_0 f(t) = e(t) \qquad (1.53)$$

Here the coefficients aᵢ are given constants, and e(t) is a given driving function. Along with this equation we must also be given N initial conditions in order to carry out a complete solution; these conditions typically are the values of the first N derivatives at some instant, usually at time zero. It is required to find the function f(t). As usual, we will have a homogeneous solution f^{(h)}(t), which solves the homogeneous equation [when e(t) = 0], as well as a particular solution f^{(p)}(t) that corresponds to the nonhomogeneous equation. The form for the homogeneous solution will be

$$f^{(h)}(t) = A e^{\alpha t}$$

If we substitute this into Eq. (1.53) we obtain

$$a_N A\alpha^N e^{\alpha t} + a_{N-1} A\alpha^{N-1} e^{\alpha t} + \cdots + a_1 A\alpha\,e^{\alpha t} + a_0 A e^{\alpha t} = 0$$

This equation will have N solutions α₁, α₂, ..., α_N, which must solve the characteristic equation

$$a_N \alpha^N + a_{N-1} \alpha^{N-1} + \cdots + a_1 \alpha + a_0 = 0$$

which is equivalent to Eq. (1.49) with a change in subscripts. If all of the αᵢ are distinct, then the general form for our homogeneous solution will be

$$f^{(h)}(t) = A_1 e^{\alpha_1 t} + A_2 e^{\alpha_2 t} + \cdots + A_N e^{\alpha_N t}$$

The evaluation of the coefficients Aᵢ is carried out making use of the initial conditions. In the case of multiple roots we have the following modification. Let us assume that α₁ is a repeated root of order k; this multiple root will contribute to the homogeneous solution in the following way:

$$(A_{11} t^{k-1} + A_{12} t^{k-2} + \cdots + A_{1k})\,e^{\alpha_1 t}$$


and in the case of more than one multiple root the modification is obvious. As usual, one must guess in order to find the particular solution f^{(p)}(t). The complete solution then is, of course, the sum of the homogeneous and particular solutions, namely,

$$f(t) = f^{(h)}(t) + f^{(p)}(t)$$

Let us apply this method to the solution of the following differential equation for illustrative purposes:

$$\frac{d^2 f(t)}{dt^2} - 6\frac{df(t)}{dt} + 9f(t) = 2t \qquad (1.54)$$

with the two initial conditions f(0⁻) = 0 and df(0⁻)/dt = 0. Forming the characteristic equation

$$\alpha^2 - 6\alpha + 9 = 0 \qquad (1.55)$$

we find the following multiple root:

$$\alpha_1 = \alpha_2 = 3$$

and so the homogeneous solution must be of the form

$$f^{(h)}(t) = (A_{11} t + A_{12})\,e^{3t}$$

Making an appropriate guess for the particular solution we try

$$f^{(p)}(t) = B_1 + B_2 t$$

Substituting this back into the basic equation (1.54) we find that B₁ = 4/27 and B₂ = 2/9. Thus our complete solution takes the form

$$f(t) = (A_{11} t + A_{12})\,e^{3t} + \frac{4}{27} + \frac{2}{9}\,t$$

Since our initial conditions state that both f(t) and its first derivative must be zero at t = 0⁻, we find that A₁₁ = 2/9 and A₁₂ = -4/27, which gives for our final and complete solution

$$f(t) = \frac{2}{9}\left(t - \frac{2}{3}\right)e^{3t} + \frac{2}{9}\left(t + \frac{2}{3}\right) \qquad t \ge 0 \qquad (1.56)$$

The Laplace transform provides an alternative method for solving constant-coefficient linear differential equations. The method is based upon Properties 11 and 12 of Table I.3, which relate the derivative of a time function to its Laplace transform. The approach is to make use of these properties to transform both sides of the given differential equation into an equation involving the Laplace transform of the unknown function f(t) itself, which we denote as usual by F*(s). This algebraic equation is then solved for F*(s), which is then inverted by any of our methods in order to immediately yield the complete solution for f(t).


No guess is required in order to find the particular solution, since it comes out of the inversion procedure directly.
Let us apply this technique to our previous example. We begin by transforming both sides of Eq. (1.54), which requires that we take advantage of our initial conditions as follows:

$$s^2 F^*(s) - s f(0^-) - f^{(1)}(0^-) - 6s F^*(s) + 6 f(0^-) + 9 F^*(s) = \frac{2}{s^2}$$

In carrying out this last operation we have taken advantage of Laplace transform pair 7 from Table I.4. Since our initial conditions are both zero, we may eliminate certain terms in this last equation and proceed directly to solve for F*(s) thusly:

$$F^*(s) = \frac{2/s^2}{s^2 - 6s + 9}$$

We must now factor this last equation, which is the same problem we faced in finding the roots of Eq. (1.55) in the direct method, and as usual forms the basically difficult part of all direct and indirect methods. Carrying this out we have

$$F^*(s) = \frac{2}{s^2 (s-3)^2}$$

We are now in a position to make a partial-fraction expansion, yielding

$$F^*(s) = \frac{2/9}{s^2} + \frac{4/27}{s} + \frac{2/9}{(s-3)^2} + \frac{-4/27}{s-3}$$

Inverting as usual we then obtain, for t ≥ 0,

$$f(t) = \frac{2}{9}\,t + \frac{4}{27} + \frac{2}{9}\,t\,e^{3t} - \frac{4}{27}\,e^{3t}$$

which is identical to our former solution given in Eq. (1.56).
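Symbolic solvers mechanize this entire procedure. The sketch below (ours, assuming sympy is available) solves Eq. (1.54) with its initial conditions and verifies agreement with Eq. (1.56):

    import sympy as sp

    t = sp.symbols('t', nonnegative=True)
    f = sp.Function('f')

    ode = sp.Eq(f(t).diff(t, 2) - 6*f(t).diff(t) + 9*f(t), 2*t)   # Eq. (1.54)
    sol = sp.dsolve(ode, f(t), ics={f(0): 0, f(t).diff(t).subs(t, 0): 0})

    closed = sp.Rational(2, 9)*(t - sp.Rational(2, 3))*sp.exp(3*t) \
           + sp.Rational(2, 9)*(t + sp.Rational(2, 3))            # Eq. (1.56)
    print(sp.simplify(sol.rhs - closed))                          # prints 0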


In our study of queueing systems we often encounter not only difference equations and differential equations but also the combination in the form of differential-difference equations. That is, if we refer back to Eq. (1.53) and replace the time functions by time functions that depend upon an index, say n, and if we then display a set of differential equations for various values of n, then we have an infinite set of differential-difference equations. The solution to such equations often requires that we take both the z-transform on the discrete index n and the Laplace transform on the continuous time parameter t. Examples of this type of analysis are to be found in the text itself.


REFERENCES

AHLF 66  Ahlfors, L. V., Complex Analysis, 2nd Edition, McGraw-Hill (New York), 1966.

CADZ 73  Cadzow, J. A., Discrete-Time Systems, Prentice-Hall (Englewood Cliffs, N.J.), 1973.

DOET 61  Doetsch, G., Guide to the Applications of Laplace Transforms, Van Nostrand (Princeton), 1961.

GUIL 49  Guillemin, E. A., The Mathematics of Circuit Analysis, Wiley (New York), 1949.

JURY 64  Jury, E. I., Theory and Application of the z-Transform Method, Wiley (New York), 1964.

SCHW 59  Schwartz, L., Théorie des Distributions, 2nd printing, Actualités scientifiques et industrielles Nos. 1245 and 1122, Hermann et Cie. (Paris), Vol. 1 (1957), Vol. 2 (1959).

WIDD 46  Widder, D. V., The Laplace Transform, Princeton University Press (Princeton), 1946.

APPENDIX II

Probability Theory Refresher

In this appendix we review selected topics from probability theory which are relevant to our discussion of queueing systems. Mostly, we merely list the important definitions and results with an occasional example. The reader is expected to be familiar with this material, which corresponds to a good first course in probability theory. Such a course would typically use one of the following texts that contain additional details and derivations: Feller, Volume I [FELL 68]; Papoulis [PAPO 65]; Parzen [PARZ 60]; or Davenport [DAVE 70].

Probability theory concerns itself with describing random events. A typical dictionary definition of a random event is an event lacking aim, purpose, or regularity. Nothing could be further from the truth! In fact, it is the extreme regularity that manifests itself in collections of random events that makes probability theory interesting and useful. The notion of statistical regularity is central to our studies. For example, if one were to toss a fair coin four times, one expects on the average two heads and two tails. Of course, there is one chance in sixteen that no heads will occur. As a consequence, if an unusual sequence came up (that is, no heads), we would not be terribly surprised, nor would we suspect the coin was unfair. On the other hand, if we tossed the coin a million times, then once again we expect approximately half heads and half tails, but in this case, if no heads occurred, we would be more than surprised; we would be indignant, and with overwhelming assurance could state that this coin was clearly unfair. In fact, the odds are better than 10⁸⁸ to 1 that at least 490,000 heads will occur! This is what we mean by statistical regularity, namely, that we can make some very precise statements about large collections of random events.
II.1. RULES OF THE GAME

We now describe the rules of the game for creating a mathematical model for probabilistic situations, which is to correspond to real-world experiments. Typically one examines three features of such experiments:

1. A set of possible experimental outcomes.
2. A grouping of these outcomes into classes called results.
3. The relative frequency of these classes in many independent trials of the experiment.

The relative frequency f_c of a class c is merely the number of times the experimental outcome falls into that class, divided by the number of times the experiment is performed; as the number of experimental trials increases, we expect f_c to approach a limit due to our notion of statistical regularity. The mathematical model we create also has three quantities of interest that are in one-to-one relation with the three quantities listed above in the experimental world. They are, respectively:
1. A sample space, which is a collection of objects which we denote by S. S corresponds to the set of mutually exclusive exhaustive outcomes of the model of an experiment. Each object (i.e., possible outcome) ω in the set S is referred to as a sample point.

2. A family of events ℰ denoted {A, B, C, ...} in which each event is a set of sample points {ω}. An event corresponds to a class or result in the real world.

3. A probability measure P, which is an assignment (mapping) of the events defined on S into the set of real numbers. P corresponds to the relative frequency in the experimental situation. The notation P[A] is used to denote the real number associated with the event A. This assignment must satisfy the following properties (axioms):

(a) For any event A, 0 ≤ P[A] ≤ 1.    (II.1)

(b) P[S] = 1.    (II.2)

(c) If A and B are "mutually exclusive" events [see (II.4) below], then P[A ∪ B] = P[A] + P[B].    (II.3)

It is appropriate at this point to define some set-theoretic notation [for example, the use of the symbol ∪ in property (c)]. Typically, we describe an event A as follows: A = {ω : ω satisfies the membership property for the event A}; this is read as "A is the set of sample points ω such that ω satisfies the membership property for the event A." We further define

A^c = {ω : ω not in A} = complement of A

A ∪ B = {ω : ω in A or B or both} = union of A and B

A ∩ B = AB = {ω : ω in A and B} = intersection of A and B

φ = S^c = null event (contains no sample points, since S contains all the points)


If AB = φ, then A and B are said to be mutually exclusive (or disjoint). A set of events whose union forms the sample space S is said to be an exhaustive set of events. We are therefore led to the definition of a set of mutually exclusive exhaustive events {A₁, A₂, ..., Aₙ}, which have the properties

A_i A_j = φ  for all i ≠ j

A₁ ∪ A₂ ∪ ⋯ ∪ Aₙ = S    (II.4)

We note further that A ∪ A^c = S, AA^c = φ, AS = A, Aφ = φ, A ∪ S = S, A ∪ φ = A, S^c = φ, and φ^c = S. Also, we comment that the union and intersection operators are commutative, associative, and distributive.

The triplet (S, ℰ, P) along with Axioms (II.1)-(II.3) forms a probability system. These three axioms are all that one needs in order to develop an axiomatic theory of probability whenever the number of events that can be defined on the sample space S is finite. [When the number of such events is infinite it is necessary to include an additional axiom which extends Axiom (II.3) to include the infinite union of disjoint events. This leads us to the notion of a Borel field and of infinite additivity of probability measures. We do not discuss the details further in this refresher.] Lastly, we comment that Axiom (II.2) is nothing more than a normalization statement, and the choice of unity for this normalization is quite arbitrary (but also very natural).
Two other definitions are now in order. The first is that of conditional probability. The conditional probability of the event A given that the event B occurred [denoted P[A | B]] is defined as

$$P[A \mid B] \triangleq \frac{P[AB]}{P[B]}$$

whenever P[B] ≠ 0. The introduction of the conditional event B forces us to restrict attention from the original sample space S to a new sample space defined by the event B; since this new constrained sample space must now have a total probability of unity, we magnify the probabilities associated with conditional events by dividing by the term P[B] as given above.
The second additional notion we need is that of statistical independence of events. Two events A, B are said to be statistically independent if and only if

$$P[AB] = P[A]\,P[B] \qquad (II.5)$$

For three events A, B, C we require that each pair of events satisfies Eq. (II.5) and in addition

$$P[ABC] = P[A]\,P[B]\,P[C]$$

This definition extends of course to n events, requiring the n-fold factoring of the probability expression as well as all the (n-1)-fold factorings, all the way down to all the pairwise factorings.


It is easy to see for two independent events A, B that P[A | B] = P[A], which merely says that knowledge of the occurrence of the event B in no way affects the probability of the occurrence of the independent event A.
The theorem of total probability is especially simple and important. It relates the probability of an event B and a set of mutually exclusive exhaustive events {A_i} as defined in Eq. (II.4). The theorem is

$$P[B] = \sum_{i=1}^{n} P[A_i B]$$

which merely says that if the event B is to occur it must occur in conjunction with exactly one of the mutually exclusive exhaustive events A_i. However, from the definition of conditional probability we may always write

$$P[A_i B] = P[A_i \mid B]\,P[B] = P[B \mid A_i]\,P[A_i]$$

Thus we have the second important form of the theorem of total probability, namely,

$$P[B] = \sum_{i=1}^{n} P[B \mid A_i]\,P[A_i]$$

This last equation is perhaps one of the most useful for us in studying queueing theory. It suggests the following approach for finding the probability of some complex event B: first, condition the event B on some event A_i in such a way that the calculation of the occurrence of event B given this condition is less complex; then multiply by the probability of the conditioning event A_i to yield the joint probability P[A_iB]; this having been done for a set of mutually exclusive exhaustive events {A_i}, we may then sum these probabilities to find the probability of the event B. Of course, this approach can be extended: we may wish to condition the event B on more than one event, then uncondition each of these events suitably (by multiplying by the probability of the appropriate condition), and then sum all possible forms of all conditions. We will use this approach many times in the text.
We now come to the well-known Bayes' theorem. Once again we consider a set of events {A_i}, which are mutually exclusive and exhaustive. The theorem says

$$P[A_i \mid B] = \frac{P[B \mid A_i]\,P[A_i]}{\sum_{j=1}^{n} P[B \mid A_j]\,P[A_j]}$$

This theorem permits us to calculate the probability of one event conditioned on a second by calculating the probability of the second conditioned on the first and other similar terms.


A simple example is in order here to illustrate some of these ideas. Consider that you have just entered a gambling casino in Las Vegas. You approach a dealer who is known to have an identical twin brother; the twins cannot be distinguished. It is further known that one of the twins is an honest dealer, whereas the second twin is a cheating dealer in the sense that when you play with the honest dealer you lose with probability one-half, whereas when you play with the cheating dealer you lose with probability p (if p is greater than one-half, he is cheating against you, whereas if p is less than one-half he is cheating for you). Furthermore, it is equally likely that upon entering the casino you will find one or the other of these two dealers. Consider that you now play one game with the particular twin whom you encounter, and further that you lose. Of course you are disappointed, and you would now like to calculate the probability that the dealer you faced was in fact the cheat, for if you can establish that this probability is close to unity, you have a case against the casino. Let D_H be the event that you play with the honest dealer and let D_C be the event that you play with the cheating dealer; further let L be the event that you lose. What we are then asking for is P[D_C | L]. It is not immediately obvious how to make this calculation; however, if we apply Bayes' theorem the calculation itself is trivial, for

$$P[D_C \mid L] = \frac{P[L \mid D_C]\,P[D_C]}{P[L \mid D_C]\,P[D_C] + P[L \mid D_H]\,P[D_H]}$$

In this application of Bayes' theorem the collection of mutually exclusive exhaustive events is the set {D_H, D_C}, for one of these two events must occur but both cannot occur simultaneously. Our problem is now trivial since each term on the right-hand side is easily calculated and leads us to

$$P[D_C \mid L] = \frac{p(1/2)}{p(1/2) + (1/2)(1/2)} = \frac{2p}{2p + 1}$$

This is the answer we were seeking, and we find that the probability of having faced a cheating dealer, given that we lost in one play, ranges from 0 (p = 0) to 2/3 (p = 1). Thus, even if we know that the cheating dealer is completely dishonest (p = 1), we can only say that with probability 2/3 we faced this dealer, given that we lost one play.
As a final word on elementary topics, let us remind the reader that the number of permutations of N objects taken K at a time is

$$\frac{N!}{(N-K)!} = N(N-1)\cdots(N-K+1)$$


whereas the number of combinations of N things taken K at a time is denoted by $\binom{N}{K}$ and is given by

$$\binom{N}{K} = \frac{N!}{K!\,(N-K)!}$$
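In Python 3.8 and later these two counts are available directly as math.perm and math.comb; a one-line check of ours:

    from math import comb, perm

    N, K = 52, 5
    print(perm(N, K))   # N!/(N-K)! = 311875200 ordered selections
    print(comb(N, K))   # N!/(K!(N-K)!) = 2598960 unordered selections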

II.2. RANDOM VARIABLES

So far we have described a probability system which consists of the triplet (S, ℰ, P), that is, a sample space, a set of events, and a probability assignment to the events of that sample space. We are now in a position to define the important concept of a random variable. A random variable is a variable whose value depends upon the outcome of a random experiment. Since the outcomes of our random experiments are represented as points ω ∈ S, then to each such outcome ω we associate a real number X(ω), which is in fact the value the random variable takes on when the experimental outcome is ω. Thus our (real) random variable X(ω) is nothing more than a function defined on the sample space, or if you will, a mapping from the points of the sample space into the (real) line.
As an example, let us consider the random experiment which consists of one play of a game of blackjack in Las Vegas. The sample space consists of all possible pairs of scores that can be obtained by the dealer and the player. Let us assume that we have grouped all such sample points into three (mutually exclusive) events of interest: lose (L), draw (D), or win (W). In order to complete the probability system we must assign probabilities to each of these events as follows*: P[L] = 3/8, P[D] = 1/4, P[W] = 3/8. Thus our probability system may be represented as in the Venn diagram of Figure II.1. The numbers in parentheses are of course the probabilities. Now for the random variable X(ω). Let us assume that if we win the game we win $5, if we draw we win $0, and if we lose we win -$5 (that is, we lose $5). Let our winnings on this single play of blackjack be the random variable X(ω). We may therefore define this variable as follows:

$$X(\omega) = \begin{cases} +5 & \omega \in W \\ 0 & \omega \in D \\ -5 & \omega \in L \end{cases}$$

Similarly, we may represent this random variable as the mapping shown in Figure II.2.

* This is the most difficult step in practice, that is, determining appropriate numbers to use in our model of the real world.

Figure II.1 The probability system for the blackjack example.


The domain of the random variable X(ω) is the set of events ℰ and the values it takes on form its range. We note in passing that the probability assignment P may itself be thought of as a random variable since it satisfies the definition; this particular assignment P, however, has further restrictions on it, namely, those given in Axioms (II.1)-(II.3).

We are mainly interested in describing the probability that the random variable X(ω) takes on certain values. To this end we define the following shorthand notation for events:

$$[X = x] \triangleq \{\omega : X(\omega) = x\} \qquad (II.6)$$

We may discuss the probability of this event, which we define as

P[X = x] = probability that X(ω) is equal to x

which is merely the sum of the probabilities associated with each point ω for which X(ω) = x. For our example we have

P[X = -5] = 3/8
P[X = 0] = 1/4    (II.7)
P[X = 5] = 3/8

Figure II.2 The random variable X(ω) as a mapping into the real line.

Another convenient form for expressing the probabilities associated with the random variable is the probability distribution function (PDF), also known


as the cumulative distribution function. For this purpose we define notation similar to that given in Eq. (II.6), namely,

$$[X \le x] = \{\omega : X(\omega) \le x\}$$

We then have that the PDF is defined as

$$F_X(x) \triangleq P[X \le x]$$

which expresses the probability that the random variable X takes on a value less than or equal to x. The important properties of this function are

F_X(x) ≥ 0    (II.8)

F_X(∞) = 1

F_X(-∞) = 0

F_X(b) - F_X(a) = P[a < X ≤ b]  for a < b    (II.9)

F_X(b) ≥ F_X(a)  for a ≤ b

Thus F_X(x) is a nonnegative, monotonically nondecreasing function with limits 0 and 1 at -∞ and +∞, respectively. In addition, F_X(x) is assumed to be continuous from the right. For our blackjack example we then have the function given in Figure II.3. We note that at points of discontinuity the PDF takes on the upper value (as indicated by the dot) since the function is piecewise continuous from the right. From Property (II.9) we may easily calculate the probability that our random variable lies in a given interval. Thus for our blackjack example, we may write P[-2 < X ≤ 6] = 5/8, P[1 < X ≤ 4] = 0, and so on.
Figure II.3 The PDF for the blackjack example.

For purposes of calculation it is much more convenient to work with a function closely related to the PDF rather than with the PDF itself. Thus we are led to the definition of the probability density function (pdf), defined as

$$f_X(x) \triangleq \frac{dF_X(x)}{dx} \qquad (II.10)$$

Of course, we are immediately faced with the question of whether or not such a derivative exists and, if so, over what interval. We temporarily avoid that question and assume that F_X(x) possesses a continuous derivative everywhere (which is false for our blackjack example). As we shall see later, it is possible to define the pdf even when the PDF contains jumps. We may "invert" Eq. (II.10) to yield

$$F_X(x) = \int_{-\infty}^{x} f_X(y)\,dy \qquad (II.11)$$

From this and Eq. (II.8) we have

$$f_X(x) \ge 0$$

Since F_X(∞) = 1, we have from Eq. (II.11)

$$\int_{-\infty}^{\infty} f_X(x)\,dx = 1$$

Thus the pdf is a function which, when integrated over an interval, gives the probability that the random variable X lies in that interval; namely, for a < b we have

$$P[a < X \le b] = \int_{a}^{b} f_X(x)\,dx$$

Since this holds for any a < b, the axiom stated in Eq. (II.1) shows once more that this last equation implies f_X(x) ≥ 0.
As an example, let us consider an exponentially distributed random variable, defined as one for which

$$F_X(x) = \begin{cases} 1 - e^{-\lambda x} & 0 \le x \\ 0 & x < 0 \end{cases} \qquad (II.12)$$

where λ > 0. The corresponding pdf is given by

$$f_X(x) = \begin{cases} \lambda e^{-\lambda x} & 0 \le x \\ 0 & x < 0 \end{cases} \qquad (II.13)$$

For this example, the probability that the random variable lies between the values a (>0) and b (>a) may be calculated in either of the two following ways:

$$P[a < X \le b] = F_X(b) - F_X(a) = e^{-\lambda a} - e^{-\lambda b}$$

$$P[a < X \le b] = \int_a^b f_X(x)\,dx = e^{-\lambda a} - e^{-\lambda b}$$
From our blackjack example we notice that the PDF has a derivative which is everywhere 0 except at the three critical points (x = -5, 0, +5). In order to complete the definition of the pdf when the PDF is discontinuous, we recognize that we must introduce a function such that when it is integrated over the region of the discontinuity it yields a value equal to the size of the discontinuous jump; that is, in the blackjack example the probability density function must be such that when integrated from -5-ε to -5+ε (for small ε > 0) it should yield a probability equal to 3/8. Such a function has already been studied in Appendix I and is, of course, the impulse function (or Dirac delta function). Recall that such a function u₀(x) is given by

$$u_0(x) = \begin{cases} \infty & x = 0 \\ 0 & x \ne 0 \end{cases} \qquad\qquad \int_{-\infty}^{\infty} u_0(x)\,dx = 1$$

and also that it is merely the derivative of the unit step function, as can be seen from

$$\int_{-\infty}^{x} u_0(y)\,dy = \begin{cases} 0 & x < 0 \\ 1 & x \ge 0 \end{cases}$$

Using the graphical notation in Figure I.3, we may properly describe the pdf for our blackjack example as in Figure II.4. We note immediately that this representation gives exactly the information we had in Eq. (II.7), and therefore the use of impulse functions is overly cumbersome for such problems. In particular, if we define a discrete random variable as one that takes on values over a discrete set (finite or countable), then the use of the pdf* is a bit heavy and unnecessary, although it does fit into our general definition in the obvious way. On the other hand, in the case of a random variable that takes on values over a continuum it is perfectly natural to use the pdf, and in the

* In the discrete case, the function P[X = x_k] is often referred to as the probability mass function. The generalization to the pdf leads one to the notion of a mass density function.

Figure II.4 The pdf for the blackjack example.

case where there is also a nonzero probability that the random variable takes on a specific value (i.e., that the PDF contains jumps), then the use of the pdf is necessary, as is the introduction of the impulse function to account for these points of accumulation. We are thus led to distinguish between a discrete random variable, a purely continuous random variable (one whose PDF is continuous and everywhere differentiable), and the third case of a mixed random variable, which contains some discrete as well as continuous portions.* So, for example, let us consider a random variable that represents the lifetime of an automobile. We will assume that there is a finite probability, say of value p, that the automobile will be inoperable immediately upon delivery, and therefore will have a lifetime of length zero. On the other hand, if the automobile is operable upon delivery then we will assume that the remainder of its lifetime is exponentially distributed as given in Eqs. (II.12) and (II.13). Thus for this automobile lifetime we have a PDF and a pdf as given in Figure II.5. Thus we clearly see the need for impulse functions in describing interesting random variables.

We have now discussed the notion of a probability system (S, ℰ, P) and the notion of a random variable X(ω) defined upon the sample space S. There is, of course, no reason why we cannot define many random variables on the same sample space. Let us consider the case of two random variables X and Y defined for some probability system (S, ℰ, P). In this case we have

* It can be shown that any PDF may be decomposed into a sum of three parts, namely, a pure jump function (containing only discontinuous jumps), a purely continuous portion, and a singular portion (which rarely occurs in distribution functions of interest and which will be considered no further in this text).

Figure II.5 PDF and pdf for automobile lifetime: (a) PDF; (b) pdf.

the natural extension of the PDF for two random variables, namely,

$$F_{XY}(x, y) \triangleq P[X \le x,\, Y \le y]$$

which is merely the probability that X takes on a value less than or equal to x at the same time that Y takes on a value less than or equal to y; that is, it is the sum of the probabilities associated with all sample points in the intersection of the two events {ω : X(ω) ≤ x}, {ω : Y(ω) ≤ y}. F_{XY}(x, y) is referred to as the joint PDF. Of course, associated with this function is a joint probability density function defined as

$$f_{XY}(x, y) \triangleq \frac{\partial^2 F_{XY}(x, y)}{\partial x\,\partial y}$$

Given a joint pdf, one naturally inquires as to the "marginal" density function for one of the variables, and this is clearly given by integrating over all possible values of the second variable; thus

$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dy \qquad (II.14)$$

We are now in a position to define the notion of independence between random variables. Two random variables X and Y are said to be independent if and only if

$$f_{XY}(x, y) = f_X(x)\,f_Y(y)$$

that is, if their joint pdf factors into the product of the one-dimensional pdf's. This is very much like the definition for two independent events as given in Eq. (II.5). However, for three or more random variables, the definition is essentially the same as for two; namely, X₁, X₂, ..., Xₙ are said to be independent random variables if and only if

$$f_{X_1 X_2 \cdots X_n}(x_1, x_2, \ldots, x_n) = f_{X_1}(x_1)\,f_{X_2}(x_2)\cdots f_{X_n}(x_n)$$

This last is a much simpler test than that required for multiple events to be independent.

With more than one random variable, we can now define conditional distributions and densities as follows. For example, we could ask for the PDF of the random variable X conditioned on some given value of the random variable Y, which would be expressed as P[X ≤ x | Y = y]. Similarly, the conditional pdf of X, given Y, is defined as

$$f_{X|Y}(x \mid y) = \frac{d}{dx}\,P[X \le x \mid Y = y] = \frac{f_{XY}(x, y)}{f_Y(y)}$$

much as in the definition for the conditional probability of events.

To review again, we see that a random variable is defined as a mapping from the sample space for some probability system into the real line, and from this mapping the PDF may easily be determined. Usually, however, a random variable is not given in terms of its sample space and the mapping, but rather directly in terms of its PDF or pdf.
It is possible to define one random variable Y in terms of a second random variable X, in which case Y would be referred to as a function of the random variable X. In its most general form we then have

$$Y = g(X) \qquad (II.15)$$

where g(·) is some given function of its argument. Thus, once the value for X is determined, then the value for Y may be computed; however, the value for X depends upon the sample point ω, and therefore so does the value of Y, which we may therefore write as Y = Y(ω) = g(X(ω)). Given the random variable X and its PDF, one should be able to calculate the PDF for the random variable Y once the function g(·) is known. In principle, the computation takes the following form:

$$F_Y(y) = P[Y \le y] = P[\{\omega : g(X(\omega)) \le y\}]$$

In general, this computation is rather complex.


One random variable may be a function of many other random variables rather than just one. A particularly important form which often arises is in fact the sum of a collection of independent random variables {X_i}, namely,

$$Y = \sum_{i=1}^{n} X_i \qquad (II.16)$$

Let us derive the distribution function of the sum of two independent random variables (n = 2). It is clear that this distribution is given by

$$F_Y(y) = P[Y \le y] = P[X_1 + X_2 \le y]$$

Figure II.6 The integration region for Y = X₁ + X₂ ≤ y.

We have the situation shown in Figure II.6. Integrating over the indicated region we have

$$F_Y(y) = \int_{-\infty}^{\infty}\int_{-\infty}^{y-x_2} f_{X_1 X_2}(x_1, x_2)\,dx_1\,dx_2$$

Due to the independence of X₁ and X₂ we then obtain the PDF for Y as

$$F_Y(y) = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{y-x_2} f_{X_1}(x_1)\,dx_1\right] f_{X_2}(x_2)\,dx_2 = \int_{-\infty}^{\infty} F_{X_1}(y - x_2)\,f_{X_2}(x_2)\,dx_2$$

Finally, forming the pdf from this PDF, we have

$$f_Y(y) = \int_{-\infty}^{\infty} f_{X_1}(y - x_2)\,f_{X_2}(x_2)\,dx_2$$

This last equation is merely the convolution of the density functions for X₁ and X₂ and, as in Eq. (1.36), we denote this convolution operator (which is both associative and commutative) by an asterisk enclosed within a circle. Thus

$$f_Y = f_{X_1} \circledast f_{X_2}$$


In a similar fashion, one easily shows for the case of arbitrary n that the pdf for Y as defined in Eq. (II.16) is given by the convolution of the pdf's for the X_i's, that is,

$$f_Y = f_{X_1} \circledast f_{X_2} \circledast \cdots \circledast f_{X_n} \qquad (II.17)$$
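As a numerical sketch of ours (assuming NumPy is available): convolving the exponential pdf of Eq. (II.13) with itself on a fine grid reproduces λ²te^{-λt}, the standard closed form for the density of the sum of two independent exponentially distributed variables:

    import numpy as np

    lam, dx = 1.0, 0.001
    x = np.arange(0, 20, dx)
    f = lam * np.exp(-lam * x)                 # pdf of each X_i, Eq. (II.13)

    f_sum = np.convolve(f, f)[:len(x)] * dx    # numerical f_X1 (*) f_X2
    erlang = lam**2 * x * np.exp(-lam * x)     # expected closed form
    print(np.max(np.abs(f_sum - erlang)))      # small discretization error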

II.3. EXPECTATION

In this section we discuss certain measures associated with the PDF and the pdf for a random variable. These measures will in general be called expectations, and they deal with integrals of the pdf. As we saw in the last section, the pdf involves certain difficulties in its definition, and these difficulties were handily resolved by the use of impulse functions. However, in much of the literature on probability theory, and in most of the literature on queueing theory, the use of impulses is either not accepted, not understood, or not known; as a result, special care and notation has been built up to get around the problem of differentiating discontinuous functions. The result is that many of the integrals encountered are Stieltjes integrals rather than the usual Riemann integrals with which we are most familiar. Let us take a moment to define the Stieltjes integral. A Stieltjes integral is defined in terms of a nondecreasing function F(x) and a continuous function φ(x); in addition, two sets of points {t_n} and {ξ_n} such that t_{n-1} < ξ_n ≤ t_n are defined, and a limit is considered where max |t_n - t_{n-1}| → 0. From these definitions, consider the sum

$$\sum_n \varphi(\xi_n)\,[F(t_n) - F(t_{n-1})]$$

This sum tends to a limit as the intervals shrink to zero, independent of the sets {t_n} and {ξ_n}, and the limit is referred to as the Stieltjes integral of φ with respect to F. This Stieltjes integral is written as

$$\int \varphi(x)\,dF(x)$$

Of course, we recognize that the PDF may be identified with the function F in this definition and that dF(x) may be identified with the pdf [say, f(x)] through

$$dF(x) = f(x)\,dx$$

by definition. Without the use of impulses the pdf may not exist; however, the Stieltjes integral will always exist, and therefore it avoids the issue of impulses. However, in this text we will feel free to incorporate impulse functions and therefore will work with both the Riemann and Stieltjes integrals; when impulses are permitted in the function f(x) we then have the


following identity:

$$\int \varphi(x)\,dF(x) = \int \varphi(x)\,f(x)\,dx$$

We will use both notations throughout the text in order to familiarize the student with the more common Stieltjes integral for queueing theory, as well as with the more easily manipulated Riemann integral with impulse functions.
Having said all this we may now introduce the definition of expectation. The expectation of a real random variable X(ω), denoted by E[X] and also by $\bar{X}$, is given by the following:

$$E[X] \triangleq \bar{X} \triangleq \int_{-\infty}^{\infty} x\,dF_X(x) \qquad (II.18)$$

This last is given in the form of a Stieltjes integral; in the form of a Riemann integral we have, of course,

$$E[X] = \bar{X} = \int_{-\infty}^{\infty} x\,f_X(x)\,dx$$

The expectation of X is also referred to as the mean or average value of X. We may also write

$$E[X] = \int_0^{\infty} [1 - F_X(x)]\,dx - \int_{-\infty}^{0} F_X(x)\,dx$$

which, upon integrating by parts, is easily shown to be equal to Eq. (II.18) so long as E[X] < ∞. Similarly, for X a nonnegative random variable, this form becomes

$$E[X] = \int_0^{\infty} [1 - F_X(x)]\,dx$$
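As a quick numerical illustration of ours (assuming SciPy is available), applying this tail formula to the exponential distribution of Eq. (II.12) with λ = 2 recovers the mean 1/λ:

    import math
    from scipy.integrate import quad

    lam = 2.0
    mean, _ = quad(lambda x: math.exp(-lam * x), 0, math.inf)  # 1 - F_X(x)
    print(mean, 1/lam)   # both 0.5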

In general, the expectation of a random variable is equal to the product of the value the random variable may take on and the probability it takes on this value, summed (integrated) over all possible values.

Now let us consider once again a new random variable Y, which is a function of our first random variable X, namely, as in Eq. (II.15),

$$Y = g(X)$$

We may define the expectation E_Y[Y] for Y in terms of its PDF just as we did for X; the subscript Y on the expectation is there to distinguish expectation with respect to Y as opposed to any other random variables (in this case X). Thus we have

$$E_Y[Y] = \int_{-\infty}^{\infty} y\,f_Y(y)\,dy$$


This last computation requires that we find either F_Y(y) or f_Y(y), which, as mentioned in the previous section, may be a rather complex computation. However, the fundamental theorem of expectation gives a much more straightforward calculation for this expectation in terms of the distribution of the underlying random variable X, namely,

$$E_Y[Y] = E_X[g(X)] = \int_{-\infty}^{\infty} g(x)\,f_X(x)\,dx$$

We may define the expectation of the sum of two random variables by the following obvious generalization of the one-dimensional case:

$$E[X + Y] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x + y)\,f_{XY}(x, y)\,dx\,dy$$

$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\,f_{XY}(x, y)\,dx\,dy + \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} y\,f_{XY}(x, y)\,dx\,dy$$

$$= \int_{-\infty}^{\infty} x\,f_X(x)\,dx + \int_{-\infty}^{\infty} y\,f_Y(y)\,dy = E[X] + E[Y] \qquad (II.19)$$

This may also be written as $\overline{X + Y} = \bar{X} + \bar{Y}$. In going from the second line to the third line we have taken advantage of Eq. (II.14) of the previous section, in which the marginal density was defined from the joint density. We have shown the very important result that the expectation of the sum of two random variables is always equal to the sum of the expectations of each; this is true whether or not these random variables are independent. This very nice property comes from the fact that the expectation operator is a linear operator. The more general statement of this property for any number of random variables, independent or not, is that the expectation of the sum is always equal to the sum of the expectations, that is,

$$E[X_1 + X_2 + \cdots + X_n] = E[X_1] + E[X_2] + \cdots + E[X_n]$$

A similar question may be asked about the product of two random variables, that is,

$$E[XY] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\,f_{XY}(x, y)\,dx\,dy$$

In the special case where the two random variables X and Y are independent, we may write the pdf for this joint random variable as the product of the pdf's for the individual random variables, thus obtaining

$$E[XY] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\,f_X(x)\,f_Y(y)\,dx\,dy = E[X]\,E[Y] \qquad (II.20)$$


This last equation (which may also be written as \overline{XY} = \bar{X}\bar{Y}) states that the
expectation of the product is equal to the product of the expectations if the
random variables are independent. A result similar to that expressed in Eq.
(II.20) applies also to functions of independent random variables. That is,
if we have two independent random variables X and Y and functions of
each denoted by g(X) and h(Y), then by arguments exactly the same as those
leading to Eq. (II.20) we may show

E[g(X)h(Y)] = E[g(X)]E[h(Y)]    (II.21)
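Before moving on to moments, it may help to see Eqs. (II.19) and (II.20) numerically. The following short Monte Carlo sketch (an illustration of ours, not part of the original text; the distributions chosen are arbitrary) checks that the sum rule survives dependence while the product rule does not:

```python
import random

# A minimal Monte Carlo sketch of Eqs. (II.19) and (II.20): the sum rule
# holds with or without independence; the product rule requires it.
random.seed(1)
n = 200_000
xs = [random.expovariate(1.0) for _ in range(n)]    # X ~ exponential(1)
ys = [random.uniform(0.0, 2.0) for _ in range(n)]   # Y ~ uniform(0, 2), independent of X

mean = lambda v: sum(v) / len(v)

# Independent pair: both rules hold.
print(mean([x + y for x, y in zip(xs, ys)]), mean(xs) + mean(ys))   # both ~2.0
print(mean([x * y for x, y in zip(xs, ys)]), mean(xs) * mean(ys))   # both ~1.0

# Fully dependent pair (Y = X): the sum rule still holds, but
# E[X*X] ~ 2 while E[X]*E[X] ~ 1 for the exponential.
print(mean([2 * x for x in xs]), 2 * mean(xs))
print(mean([x * x for x in xs]), mean(xs) ** 2)
```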

Often we are interested in the expectation of a power of a random variable.
In fact, this is so common that a special name has been coined: the
expected value of the nth power of a random variable is referred to as its nth
moment. Thus, by definition (really this follows from the fundamental
theorem of expectation), the nth moment of X is given by

E[X^n] \triangleq \overline{X^n} = \int_{-\infty}^{\infty} x^n f_X(x)\, dx

Furthermore, the nth central moment of this random variable is given as
follows:

\overline{(X - \bar{X})^n} = \int_{-\infty}^{\infty} (x - \bar{X})^n f_X(x)\, dx

The nth central moment may be expressed in terms of the first n moments
themselves; to show this we first write down the following identity making
use of the binomial theorem:

(X - \bar{X})^n = \sum_{k=0}^{n} \binom{n}{k} X^k (-\bar{X})^{n-k}

Taking expectations on both sides we then have

\overline{(X - \bar{X})^n} = \overline{\sum_{k=0}^{n} \binom{n}{k} X^k (-\bar{X})^{n-k}}
                           = \sum_{k=0}^{n} \binom{n}{k} \overline{X^k} (-\bar{X})^{n-k}    (II.22)

In going from the first to the second line in this last equation we have taken
advantage of the fact that the expectation of a sum is equal to the sum of the
expectations and that the expectation of a constant is merely the constant
itself. Now for a few observations. First we note that the 0th moment of a
random variable is just unity. Also, the 0th central moment must be one. The
first central moment must be 0 since

\overline{(X - \bar{X})} = \bar{X} - \bar{X} = 0


The second central moment is extremely important and is referred to as the
variance; a special notation has been adopted for the variance and is given by

\sigma_X^2 \triangleq \overline{(X - \bar{X})^2}
          = \overline{X^2} - (\bar{X})^2

In the second line of this last equation we have taken advantage of Eq.
(II.22) and have expressed the variance (a central moment) in terms of the
first two moments themselves. The square root of the variance, \sigma_X, is referred
to as the standard deviation. The ratio of the standard deviation to the mean
of a random variable is a most important quantity in statistics and also in
queueing theory; this ratio is referred to as the coefficient of variation and is
denoted by

C_X \triangleq \frac{\sigma_X}{\bar{X}}    (II.23)

II.4. TRANSFORMS, GENERATING FUNCTIONS, AND
CHARACTERISTIC FUNCTIONS

In probability theory one encounters a variety of functions (in particular,
expectations) all of which are close relatives of each other. Included in this
class are the characteristic function of a random variable, its moment generating
function, the Laplace transform of its probability density function, and its
probability generating function. In this section we wish to define and distin-
guish these various forms and to indicate a common central property that they
share.
The characteristic function of a random variable X, denoted by \phi_X(u), is
given by

\phi_X(u) \triangleq E[e^{juX}] = \int_{-\infty}^{\infty} e^{jux} f_X(x)\, dx

where j = \sqrt{-1} and where u is an arbitrary real variable. (Note that except
for the sign of the exponent, the characteristic function is the Fourier trans-
form of the pdf for X.) Clearly,

|\phi_X(u)| \le \int_{-\infty}^{\infty} |e^{jux}|\, |f_X(x)|\, dx

and since |e^{jux}| = 1, we have

|\phi_X(u)| \le \int_{-\infty}^{\infty} f_X(x)\, dx = 1

which shows that |\phi_X(u)| \le 1.


An important property of the characteristic function may be seen by expand-
ing the exponential in the integrand in terms of its power series and then
integrating each term separately as follows:

\phi_X(u) = \int_{-\infty}^{\infty} \left[ 1 + jux + \frac{(jux)^2}{2!} + \cdots \right] f_X(x)\, dx
          = 1 + ju\bar{X} + \frac{(ju)^2}{2!}\overline{X^2} + \cdots

From this expansion, we see that the characteristic function is expressed in
terms of all the moments of X. Now, if we set u = 0 we find that \phi_X(0) = 1.
Similarly, if we first form d\phi_X(u)/du and then set u = 0, we obtain j\bar{X}.
Thus, in general, we have

\phi_X^{(n)}(0) = j^n \overline{X^n}    (II.24)

This last important result gives a rather simple way for calculating a constant
times the nth moment of the random variable X.
Since this property is frequently used, we find it convenient to adopt the
following simplified notation (consistent with that in Eq. I.37) for the nth
derivative of an arbitrary function g(x), evaluated at some fixed value x = x_0:

g^{(n)}(x_0) \triangleq \left. \frac{d^n g(x)}{dx^n} \right|_{x = x_0}    (II.25)

Thus the result in Eq. (II.24) may be rewritten as

\phi_X^{(n)}(0) = j^n \overline{X^n}

The moment generating function, denoted by M_X(v), is given below along
with the appropriate differential relationship that yields the nth moment of X
directly:

M_X(v) \triangleq E[e^{vX}] = \int_{-\infty}^{\infty} e^{vx} f_X(x)\, dx

M_X^{(n)}(0) = \overline{X^n}

where v is a real variable. From this last property it is easy to see where the
name "moment generating function" comes from. The derivation of this
moment relationship is the same as that for the characteristic function.
Another important and useful function is the Laplace transform of the pdf
of a random variable X. We find it expedient to use a notation now in which
the PDF for a random variable is labeled in a way that identifies the random
variable without the use of subscripts. Thus, for example, if we have a


random variable X, which represents, say, the interarrival time between
adjacent customers to a system, then we define A(x) to be the PDF for X:

A(x) = P[X \le x]

where the symbol A is keyed to the word "Arrival." Further, the pdf for this
example would be denoted a(x). Finally, then, we denote the Laplace trans-
form of a(x) by A^*(s), and it is given by the following:

A^*(s) \triangleq E[e^{-sX}] = \int_{-\infty}^{\infty} e^{-sx} a(x)\, dx

where s is a complex variable. Here we are using the "two-sided" transform;
however, as mentioned in Section I.3, since most of the random variables we
deal with are nonnegative, we often write

A^*(s) = \int_{0^-}^{\infty} e^{-sx} a(x)\, dx

The reader should take special note that the lower limit 0 is defined as 0^-;
that is, the limit comes in from the left so that we specifically mean to include
any impulse functions at the origin. In a fashion identical to that for the
moment generating function and for the characteristic function, we may find
the moments of X through the following formula:

A^{*(n)}(0) = (-1)^n \overline{X^n}    (II.26)

For nonnegative random variables,

|A^*(s)| \le \int_{0^-}^{\infty} |e^{-sx}|\, |a(x)|\, dx

But the complex variable s consists of a real part Re(s) = \sigma and an imaginary
part Im(s) = \omega such that s = \sigma + j\omega. Then we have

|e^{-sx}| = |e^{-\sigma x} e^{-j\omega x}| = |e^{-\sigma x}|\, |e^{-j\omega x}| = |e^{-\sigma x}|

Moreover, for Re(s) \ge 0 and x \ge 0 we have |e^{-\sigma x}| \le 1, and so from these last
two equations and from \int_{0^-}^{\infty} a(x)\, dx = 1,

|A^*(s)| \le 1,    Re(s) \ge 0

It is clear that the three functions \phi_X(u), M_X(v), and A^*(s) are all close rela-
tives of each other. In particular, we have the following relationship:

\phi_X(js) = M_X(-s) = A^*(s)


Thus we are not surprised that the moment generating properties (by dif-
ferentiation) are so similar for each; this property is the central property that
we will take advantage of in our studies. Thus the nth moment of X is calcu-
lable from any of the following expressions:

\overline{X^n} = j^{-n} \phi_X^{(n)}(0) = M_X^{(n)}(0) = (-1)^n A^{*(n)}(0)

It is perhaps worthwhile to carry out an example demonstrating these
properties. Consider the continuous random variable X, which represents,
say, the interarrival time of customers to a system and which is exponentially
distributed, that is,

f_X(x) = a(x) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0 \\ 0 & x < 0 \end{cases}

By direct substitution into the defining integrals we find immediately that

\phi_X(u) = \frac{\lambda}{\lambda - ju}, \qquad M_X(v) = \frac{\lambda}{\lambda - v}, \qquad A^*(s) = \frac{\lambda}{\lambda + s}

It is always true that

\phi_X(0) = M_X(0) = A^*(0) = 1

and, of course, this checks out for our example as well. Using our expression
for the first moment we find through any one of our three functions that

\bar{X} = \frac{1}{\lambda}

and we may also verify that the second moment may be calculated from any
of the three to yield

\overline{X^2} = \frac{2}{\lambda^2}

and so it goes in calculating all of the moments.
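The moment relationships above can be verified numerically. The following sketch (ours, not from the text) differentiates the three exponential-case transforms at the origin by central differences; the step size h is an arbitrary choice:

```python
# Numeric sanity check that the three transforms of a(x) = lam*exp(-lam*x)
# generate the same moments via differentiation at the origin:
#   Xbar^n = j^{-n} phi^(n)(0) = M^(n)(0) = (-1)^n A*^(n)(0)
lam = 2.0
phi = lambda u: lam / (lam - 1j * u)   # characteristic function
M   = lambda v: lam / (lam - v)        # moment generating function
Ast = lambda s: lam / (lam + s)        # Laplace transform of the pdf

def derivs(f, h=1e-4):
    """First and second derivatives at 0 by central differences."""
    d1 = (f(h) - f(-h)) / (2 * h)
    d2 = (f(h) - 2 * f(0) + f(-h)) / h ** 2
    return d1, d2

for name, f, c1, c2 in [("phi", phi, 1 / 1j, -1),
                        ("M  ", M, 1, 1),
                        ("A* ", Ast, -1, 1)]:
    d1, d2 = derivs(f)
    print(name, complex(d1 * c1).real, complex(d2 * c2).real)
# every row prints ~0.5 and ~0.5, i.e. 1/lam and 2/lam^2 for lam = 2
```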


In the case of a discrete random variable described, for example, by

g_k = P[X = k]

we make use of the probability generating function, denoted by G(z), as follows:

G(z) \triangleq E[z^X] = \sum_k z^k g_k    (II.27)

where z is a complex variable. It should be clear from our discussion in
Appendix I that G(z) is nothing more than the z-transform of the discrete
sequence g_k. As with the continuous transforms, we have for |z| \le 1

|G(z)| \le \sum_k |z^k|\, |g_k| \le \sum_k g_k

and so

|G(z)| \le 1    for |z| \le 1    (II.28)

Note that the first derivative evaluated at z = 1 yields the first moment of X:

G^{(1)}(1) = \bar{X}    (II.29)

and that the second derivative yields

G^{(2)}(1) = \overline{X^2} - \bar{X}

in a fashion similar to that for continuous random variables.* Note that
in all cases

G(1) = 1
Let us apply these methods to the blackjack example considered earlier in
this appendix. Working either with Eq. (II.7), which gives the probability
of various winnings, or with the impulsive pdf given in Figure II.4, we find
that the probability generating function for the number of dollars won in
a game of blackjack is given by

G(z) = \frac{3}{8} z^{-5} + \frac{1}{4} + \frac{3}{8} z^{5}

We note here that, of course, G(1) = 1 and, further, that the mean winnings
may be calculated as

\bar{X} = G^{(1)}(1) = 0
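A short exact check of this calculation follows (an illustrative sketch of ours; the payoff probabilities 3/8, 1/4, 3/8 are those of the PGF reconstructed above):

```python
from fractions import Fraction as F

# Verify G(1) = 1 and that the mean winnings equal G'(1) = 0.
pmf = {-5: F(3, 8), 0: F(1, 4), 5: F(3, 8)}   # winnings k -> g_k

def G(z):
    return sum(g * z**k for k, g in pmf.items())            # G(z) = E[z^X]

def G1(z):
    return sum(g * k * z**(k - 1) for k, g in pmf.items())  # G'(z)

one = F(1)
print(G(one), G1(one))   # 1 and 0
```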

Let us now consider the sum of n independent random variables X_i, namely,
Y = \sum_{i=1}^{n} X_i, as defined in Eq. (II.16). If we form the characteristic function

* Thus we have that \sigma_X^2 = G^{(2)}(1) + G^{(1)}(1) - [G^{(1)}(1)]^2.

for Y, we have by definition

\phi_Y(u) \triangleq E[e^{juY}] = E[e^{ju \sum_{i=1}^{n} X_i}] = E[e^{juX_1} e^{juX_2} \cdots e^{juX_n}]

Now in Eq. (II.21) we showed that the expectation of the product of functions
of independent random variables is equal to the product of the expectations
of each function separately; applying this to the above we have

\phi_Y(u) = E[e^{juX_1}] E[e^{juX_2}] \cdots E[e^{juX_n}]

Of course the right-hand side of this equation is just a product of character-
istic functions, and so

\phi_Y(u) = \phi_{X_1}(u) \phi_{X_2}(u) \cdots \phi_{X_n}(u)    (II.30)

In the case where each of the X_i is identically distributed, then, of course,
the characteristic functions will all be the same, and so we may as well drop
the subscript on X_i and conclude

\phi_Y(u) = [\phi_X(u)]^n    (II.31)

We have thus shown that the characteristic function of a sum of n identically
distributed independent random variables is the nth power of the character-
istic function of the individual random variable itself. This important result
also applies to our other transforms, namely, the moment generating func-
tion, the Laplace transform, and the z-transform. It is this significant property
that accounts, in no small way, for the widespread use of transforms in
probability theory and in the theory of stochastic processes.
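A toy illustration of this property for z-transforms (ours, not from the text): multiplying PGF coefficient sequences is the same operation as convolving pmf's, per Eq. (II.17). Here we take the sum of two fair six-sided dice:

```python
# The PGF of a sum of iid variables is the nth power of the individual PGF,
# so polynomial multiplication of coefficients performs the convolution.
die = [0] + [1 / 6] * 6          # g_k for k = 0..6; P[X = k] = 1/6, k = 1..6

def poly_mul(a, b):
    """Coefficient-wise product of two PGFs (= convolution of pmfs)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

two_dice = poly_mul(die, die)    # pmf of X1 + X2
print(two_dice[7])               # 6/36 ~ 0.1667, the familiar answer
```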
Let us say a few more words now about sums of independent random
variables. We have seen in Eq. (II.17) that the pdf of a sum of independent
variables is equal to the convolution of the pdf for each; also, we have seen
in Eq. (II.30) that the transform of the sum is equal to the product of the
transforms for each. From Eq. (II.19) it is clear (regardless of the indepen-
dence) that the expectation of the sum equals the sum of the expectations,
namely,

\bar{Y} = \bar{X}_1 + \bar{X}_2 + \cdots + \bar{X}_n    (II.32)

For n = 2 we see that the second moment of Y must be


yo = (X ,

And also in this case

+ X 2)2 =

X, 2 + 2X,X2

+ X 22

11.4.

TRANSFOR.\1S AND CHARACTERISTIC FUNCTIONS

387

ming the va ria nce of Y and then using these last two eq ua tio ns we have

a y' =

y~

- (Y)"

= Xt = ax ,2

+ X .'

(Xt )'

+ ax + 2( X
,2

- (X,)2
IX.

+ 2( X tX2 -

XI X2)

v if XI and X 2 a re also independent, the n X IX2 =


tit
1

XtX2)

XIX2 ,

giving the final

similar fas hion it is easy to show th at the va ria nce of the sum of n

'pendent random va riab les is equal to the sum of the varia nces of eac h,
: is,

Continuing with sums of independent random variables, let us now assume
that the number of these variables that are to be summed together is itself a
random variable; that is, we define

Y = \sum_{i=1}^{N} X_i

where \{X_i\} is a set of identically distributed independent random variables,
each with mean \bar{X} and variance \sigma_X^2, and where N is also a random variable
with mean \bar{N} and variance \sigma_N^2, respectively; we assume that N is also
independent of the X_i. In this case, F_Y(y) is said to be a compound dis-
tribution. Let us now find Y^*(s), which is the Laplace transform of the pdf
for Y. By definition of the transform and due to the independence of all
the random variables we may write down

Y^*(s) = \sum_{n=0}^{\infty} E[e^{-sX_1}] \cdots E[e^{-sX_n}] P[N = n]

and since \{X_i\} is a set of identically distributed random variables, we have

Y^*(s) = \sum_{n=0}^{\infty} [X^*(s)]^n P[N = n]    (II.33)

where we have denoted the Laplace transform of the pdf for each of the X_i
by X^*(s). The final expression given in Eq. (II.33) is immediately recognized
as the z-transform for the random variable N, which we choose to denote by
N(z) as defined in Eq. (II.27); in Eq. (II.33), z has been replaced by X^*(s).
Thus we finally conclude

Y^*(s) = N(X^*(s))    (II.34)

Thus a random sum of identically distributed independent random variables
has a transform that is related to the transforms of the sum's random variables
and of the number of terms in the sum, as given above. Let us now find an
expression similar to that in Eq. (II.32); in that equation for the case of
identically distributed X_i we had \bar{Y} = n\bar{X}, where n was a given constant.
Now, however, the number of terms in the sum is a random quantity and we
must find the new mean \bar{Y}. We proceed by taking advantage of the moment
generating properties of our transforms [Eq. (II.26)]. Thus differentiating
Eq. (II.34), setting s = 0, and then taking the negative of the result we find

\bar{Y} = \bar{N}\bar{X}

which is a perfectly reasonable result. Similarly, one can find the variance
of this random sum by differentiating twice and then subtracting off the
mean squared to obtain

\sigma_Y^2 = \bar{N}\sigma_X^2 + (\bar{X})^2 \sigma_N^2

This last result perhaps is not so intuitive.
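Both random-sum results are easy to confirm by simulation. In the sketch below (ours, not from the text) N is Poisson with mean 4, sampled by inverting its CDF, and each X_i is exponential with mean 1, so the predictions are \bar{Y} = 4 and \sigma_Y^2 = 4(1) + (1)(4) = 8:

```python
import math, random

# Monte Carlo check of Ybar = Nbar*Xbar and
# sigma_Y^2 = Nbar*sigma_X^2 + Xbar^2*sigma_N^2.
random.seed(7)

def poisson(mean):
    """Poisson sample by CDF inversion."""
    u, k = random.random(), 0
    p = math.exp(-mean)
    c = p
    while u > c:
        k += 1
        p *= mean / k
        c += p
    return k

trials = 100_000
ys = [sum(random.expovariate(1.0) for _ in range(poisson(4.0)))
      for _ in range(trials)]

m = sum(ys) / trials
v = sum((y - m) ** 2 for y in ys) / trials
print(m, v)    # ~4.0 and ~8.0
```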
II.5. INEQUALITIES AND LIMIT THEOREMS

In this section we present some of the classical inequalities and limit
theorems in probability theory.
Let us first consider bounding the probability that a random variable
exceeds some value. If we know only the mean value of the random variable,
then the following Markov inequality can be established for a nonnegative
random variable X:

P[X \ge x] \le \frac{\bar{X}}{x}

Since only the mean value of the random variable is utilized, this inequality
is rather weak. The Chebyshev inequality makes use of the mean and variance
and is somewhat tighter; it states that for any x > 0,

P[|X - \bar{X}| \ge x] \le \frac{\sigma_X^2}{x^2}
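To see how loose these bounds can be, the following brief comparison (an illustration of ours; the exponential distribution and the point x = 4 are arbitrary choices) pits them against an exact tail probability:

```python
import math

# Markov and Chebyshev bounds versus the exact tail for X ~ exponential(1).
x = 4.0
exact     = math.exp(-x)          # P[X >= x] for exponential(1)
markov    = 1.0 / x               # Xbar / x, with Xbar = 1
chebyshev = 1.0 / (x - 1.0)**2    # sigma^2 / (x - Xbar)^2, since the event
                                  # {X >= x} implies {|X - Xbar| >= x - Xbar}
print(exact, markov, chebyshev)   # ~0.018 vs 0.25 vs ~0.11
```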
Other simple inequalities involve moments of two random variables, as
follows. First we have the Cauchy-Schwarz inequality, which makes a
statement about the expectation of a product of random variables in terms of
the second moments of each:

(\overline{XY})^2 \le \overline{X^2}\, \overline{Y^2}    (II.35)

A generalization of this last is Hölder's inequality, which states for \alpha > 1,
\beta > 1, \alpha^{-1} + \beta^{-1} = 1, and X > 0, Y > 0 that

\overline{XY} \le (\overline{X^\alpha})^{1/\alpha} (\overline{Y^\beta})^{1/\beta}

whenever the indicated expectations exist. Note that the Cauchy-Schwarz
inequality is the (important) special case in which \alpha = \beta = 2. The triangle
inequality relates the expectation of the absolute value of a sum to the sum
of the expectations of the absolute values, namely,

\overline{|X + Y|} \le \overline{|X|} + \overline{|Y|}

A generalization of the triangle inequality, which is known as the c_r-inequality,
is

\overline{|X + Y|^r} \le c_r (\overline{|X|^r} + \overline{|Y|^r})

where

c_r = \begin{cases} 1 & 0 < r \le 1 \\ 2^{r-1} & 1 < r \end{cases}

Next we bound the expectation of a convex function g of an arbitrary
random variable X (whose first moment \bar{X} is assumed to exist). A convex
function g(x) is one that lies on or below all of its chords; that is, for any
x_1, x_2 and 0 \le \alpha \le 1,

g(\alpha x_1 + (1 - \alpha)x_2) \le \alpha g(x_1) + (1 - \alpha)g(x_2)

For such convex functions g and random variables X, we have Jensen's
inequality as follows:

\overline{g(X)} \ge g(\bar{X})
When we deal with sums of random variables, we find that some very
nice limiting properties exist. Let us once again consider the sum of n
independent identically distributed random variables X_i, but let us now
divide that sum by the number of terms n, thusly:

W_n = \frac{1}{n} \sum_{i=1}^{n} X_i

This arithmetic mean is often referred to as the sample mean. We assume
that each of the X_i has a mean given by \bar{X} and a variance \sigma_X^2. From our
earlier discussion regarding means and variances of sums of independent

random variables we have

\bar{W}_n = \bar{X}, \qquad \sigma_{W_n}^2 = \frac{\sigma_X^2}{n}

If we now apply the Chebyshev inequality to the random variable W_n and
make use of these last two observations, we may express our bound in terms
of the mean and variance of the random variable X itself, thusly:

P[|W_n - \bar{X}| \ge x] \le \frac{\sigma_X^2}{n x^2}    (II.36)

This very important result says that the arithmetic mean of the sum of n
independent and identically distributed random variables will approach
its expected value as n increases. This is due to the decreasing value of
\sigma_X^2/nx^2 as n grows (\sigma_X^2/x^2 remains constant). In fact, this leads us directly to
the weak law of large numbers, namely, that for any \epsilon > 0 we have

\lim_{n \to \infty} P[|W_n - \bar{X}| \ge \epsilon] = 0

The strong law of large numbers states that

\lim_{n \to \infty} W_n = \bar{X}    with probability one
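The settling of the sample mean is easy to watch empirically. In the sketch below (ours, not from the text) the summands are uniform on (0, 1), so \bar{X} = 1/2:

```python
import random

# Watch the sample mean W_n drift toward Xbar = 0.5 as n grows.
random.seed(3)
total = 0.0
for n in range(1, 100_001):
    total += random.random()
    if n in (10, 1_000, 100_000):
        print(n, total / n)   # successive values approach 0.5
```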

Once again, let us consider the sum of n independent identically distributed
random variables X_i, each with mean \bar{X} and variance \sigma_X^2. The central limit
theorem concerns itself with the normalized random variable Z_n defined by

Z_n = \frac{\sum_{i=1}^{n} X_i - n\bar{X}}{\sigma_X \sqrt{n}}    (II.37)

and states that the PDF for Z_n tends to the standard normal distribution as
n increases; that is, for any real number x we have

\lim_{n \to \infty} P[Z_n \le x] = \Phi(x)

where

\Phi(x) \triangleq \int_{-\infty}^{x} \frac{1}{(2\pi)^{1/2}} e^{-y^2/2}\, dy

That is, the appropriately normalized sum of a large number of independent
random variables tends to a Gaussian, or normal, distribution. There are
many other forms of the central limit theorem that deal, for example, with
dependent random variables.
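An empirical look at the theorem (an illustrative sketch of ours; the uniform summands, n = 30, and the test point x = 1 are arbitrary choices) compares the simulated distribution of Z_n in Eq. (II.37) with \Phi(1) \approx 0.8413:

```python
import math, random

# Empirical CDF of the normalized sum Z_n for uniform(0,1) summands.
random.seed(11)
n, trials, x = 30, 50_000, 1.0
xbar, sigma = 0.5, math.sqrt(1.0 / 12.0)   # mean and std dev of uniform(0,1)

count = 0
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    z = (s - n * xbar) / (sigma * math.sqrt(n))
    if z <= x:
        count += 1
print(count / trials)   # ~0.84, close to Phi(1)
```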


A rather sophisticated means for bounding the tail of the sum of a large
number of independent random variables is available in the form of the
Chernoff bound. It involves an inequality similar to the Markov and Chebyshev
inequalities, but makes use of the entire distribution of the random variable
itself (in particular, the moment generating function). Thus let us consider
the sum of n independent identically distributed random variables X_i as
given by

Y = \sum_{i=1}^{n} X_i

From Eq. (II.31) we know that the moment generating function for Y,
M_Y(v), is related to the moment generating function for each of the random
variables X_i [namely, M_X(v)] through the relationship

M_Y(v) = [M_X(v)]^n    (II.38)

As with our earlier inequalities, we are interested in the probability that
our sum exceeds a certain value, and this may be calculated as

P[Y \ge y] = \int_y^{\infty} f_Y(w)\, dw    (II.39)

Clearly, for v \ge 0 we have that the unit step function [see Eq. (I.33)] is
bounded above by the following exponential:

u_{-1}(w - y) \le e^{v(w - y)}

Applying this inequality to Eq. (II.39) we have

P[Y \ge y] \le e^{-vy} \int_{-\infty}^{\infty} e^{vw} f_Y(w)\, dw    for v \ge 0

However, the integral on the right-hand side of this equation is merely the
moment generating function for Y, and so we have

P[Y \ge y] \le e^{-vy} M_Y(v),    v \ge 0    (II.40)

Let us now define the "semi-invariant" generating function

\gamma(v) \triangleq \log M(v)

(Here we are considering natural logarithms.) Applying this definition to
Eq. (II.38) we immediately have

\gamma_Y(v) = n\gamma_X(v)

and applying these last two to Eq. (II.40) we arrive at

P[Y \ge y] \le e^{-vy + n\gamma_X(v)},    v \ge 0


Since this last is good for any value of v (\ge 0), we should choose v to create
the tightest possible bound; this is simply carried out by differentiating the
exponent and setting it equal to zero. We thus find the optimum relationship
between v and y as

y = n\gamma_X^{(1)}(v)    (II.41)

Thus the Chernoff bound for the tail of a density function takes the final
form*

P[Y \ge n\gamma_X^{(1)}(v)] \le e^{n[\gamma_X(v) - v\gamma_X^{(1)}(v)]},    v \ge 0    (II.42)
It is perhaps worthwhile to carry out an example demonstrating the use of
this last bounding procedure. For this purpose, let us go back to the second
paragraph in this appendix, in which we estimated the odds that at least
490,000 heads would occur in a million tosses of a fair coin. Of course, that
calculation is the same as calculating the probability that no more than
510,000 heads will occur in the same experiment, assuming the coin is fair.
In this example the random variable X may be chosen as follows:

X = \begin{cases} 1 & \text{heads} \\ 0 & \text{tails} \end{cases}

Since Y is the sum of a million trials of this experiment, we have that n = 10^6,
and we now ask for the complementary probability that Y add up to 510,000
or more, namely, P[Y \ge 510{,}000]. The moment generating function for X is

M_X(v) = \frac{1}{2} + \frac{1}{2} e^v

and so

\gamma_X(v) = \log \frac{1}{2}(1 + e^v)

Similarly,

\gamma_X^{(1)}(v) = \frac{e^v}{1 + e^v}

From our formula (II.41) we then must have

n\gamma_X^{(1)}(v) = 10^6 \frac{e^v}{1 + e^v} = 510{,}000 = y

Thus we have

e^v = \frac{51}{49}

and

v = \log \frac{51}{49}

* The same derivation leads to a bound on the "lower tail" in which all three inequality
signs in Eq. (II.42) face thusly: \le. For example, v \le 0.


Thus we see typically how v might be calculated. Plugging these values
back into Eq. (II.42) we conclude

P[Y \ge 510{,}000] \le e^{10^6[\log(50/49) - 0.51 \log(51/49)]}

This computation shows that the probability of exceeding 510,000 heads in a
million tosses of a fair coin is less than 10^{-88} (this is where the number in
our opening paragraphs comes from). An alternative way of carrying out
this computation would be to make use of the central limit theorem. Let
us do so as an example. For this we require the calculation of the mean and
variance of X, which are easily seen to be \bar{X} = 1/2, \sigma_X^2 = 1/4. Thus from
Eq. (II.37) we have

Z_n = \frac{Y - 10^6(1/2)}{(1/2)10^3}

If we require Y to be greater than 510,000, then we are requiring that Z_n
be greater than 20. If we now go to a table of the cumulative normal distri-
bution, we find that

P[Z_n \ge 20] = 1 - \Phi(20) \approx 25 \times 10^{-90}

Again we see the extreme implausibility of such an event occurring. On the
other hand, the Chebyshev inequality, as given in Eq. (II.36), yields the
following:

P\left[ \left| W_n - \frac{1}{2} \right| \ge 0.01 \right] \le \frac{0.25}{10^6 (0.01)^2} = 25 \times 10^{-4}

This result is twice as large as it should be for our calculation since we have
effectively calculated both tails (namely, the probability that more than
510,000 or less than 490,000 heads would occur); thus the appropriate answer
for the Chebyshev inequality would be that the probability of exceeding
510,000 heads is less than or equal to 12.5 \times 10^{-4}. Note what a poor result
this inequality gives compared to the central limit theorem approximation,
which in this case is comparable to the Chernoff bound.
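All three estimates can be reproduced directly (a sketch of ours; the printed values come out of this computation rather than being quoted from the text):

```python
import math

# Three tail estimates for P[Y >= 510,000] with n = 10**6 fair coin tosses.
n = 10**6

# Chernoff bound, Eq. (II.42): exp(n[gamma(v) - v*gamma'(v)]) at e^v = 51/49.
v = math.log(51 / 49)
gamma = math.log(0.5 * (1 + math.exp(v)))
chernoff_log10 = n * (gamma - v * (51 / 100)) / math.log(10)
print("Chernoff: 10^%.1f" % chernoff_log10)           # roughly 10^-87

# Central limit theorem: P[Z >= 20] = 1 - Phi(20).
q20 = 0.5 * math.erfc(20 / math.sqrt(2))
print("CLT tail:", q20)                               # ~2.7e-89

# Chebyshev (two-sided bound, halved for the upper tail).
print("Chebyshev:", 0.25 / (n * 0.01**2) / 2)         # 1.25e-3
```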
II.6. STOCHASTIC PROCESSES

It is often said that queueing theory is part of the theory of applied sto-
chastic processes. As such, the main portion of this text is really the proper
sequel to this section on stochastic processes; here we merely state some of
the fundamental definitions and concepts.
We begin by considering a probability system (S, \mathscr{E}, P), which consists
of a sample space S, a set of events \{A, B, \ldots\}, and a probability measure P.
In addition, we have already introduced the notion of a random variable
X(\omega). A stochastic process may be defined as follows: For each sample
point \omega \in S we assign a time function X(t, \omega). This family of functions forms
a stochastic process; alternatively, we may say that for each t included in
some appropriate parameter set, we choose a random variable X(t, \omega).
This is a collection of random variables depending upon t. Thus a stochastic
process (or random function) is a function* X(t) whose values are random
variables. An example of a random process is the sequence of closing prices
for a given security on the New York Stock Exchange; another example is
the temperature at a given point on the earth as a function of time.
We are immediately confronted with the problem of completely specifying
a random process X(t). For this purpose we define, for each allowed t,
a PDF, which we denote by F_X(x, t) and which is given by

F_X(x, t) = P[X(t) \le x]

Further, we define for each of n allowable t, \{t_1, t_2, \ldots, t_n\}, a joint PDF
given by

F_{X_1 X_2 \cdots X_n}(x_1, x_2, \ldots, x_n;\, t_1, t_2, \ldots, t_n)
    = P[X(t_1) \le x_1,\, X(t_2) \le x_2,\, \ldots,\, X(t_n) \le x_n]

and we use the vector notation F_{\mathbf{X}}(\mathbf{x};\, \mathbf{t}) to denote this function.
A stochastic process X(t) is said to be stationary if all F_X(\mathbf{x};\, \mathbf{t}) are in-
variant to shifts in time; that is, for any given constant \tau the following holds:

F_X(\mathbf{x};\, \mathbf{t} + \tau) = F_X(\mathbf{x};\, \mathbf{t})

where the notation \mathbf{t} + \tau implies the vector (t_1 + \tau, t_2 + \tau, \ldots, t_n + \tau).
Of most interest in the theory of stochastic processes are these stationary
random functions.
In order to completely specify a stochastic process, then, one must give
F_X(\mathbf{x};\, \mathbf{t}) for all possible subsets of \{x_i\}, \{t_i\}, and all n. This is a monstrous
task in general! Fortunately, for many of the interesting stochastic processes,
it is possible to provide this specification in very simple terms.
Some other definitions are in order. The first is the definition of the pdf
for a stochastic process, and this is defined by

f_X(\mathbf{x};\, \mathbf{t}) = \frac{\partial^n F_X(\mathbf{x};\, \mathbf{t})}{\partial x_1\, \partial x_2 \cdots \partial x_n}

Second, we often discuss the mean value of a stochastic process, given by

\bar{X}(t) = E[X(t)] = \int_{-\infty}^{\infty} x f_X(x;\, t)\, dx

* Usually we denote X(t, \omega) by X(t) for simplicity.


Next, we introduce the autocorrelation of X(t), given by

R_{XX}(t_1, t_2) \triangleq E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2 f_{X_1 X_2}(x_1, x_2;\, t_1, t_2)\, dx_1\, dx_2

A large theory of stochastic processes has been developed, known as second-
order theory, in which these processes are classified and distinguished
only on the basis of their mean \bar{X}(t) and autocorrelation R_{XX}(t_1, t_2). In the
case of stationary random processes, we have

\bar{X}(t) = \bar{X}    (II.43)

and

R_{XX}(t_1, t_2) = R_{XX}(t_2 - t_1)    (II.44)

that is, R_{XX} is a function only of the time difference \tau = t_2 - t_1. In the
stationary case, then, random processes are characterized in the second-
order theory only by a constant (their mean \bar{X}) and a one-dimensional func-
tion R_{XX}(\tau). A random process is said to be wide-sense stationary if Eqs.
(II.43) and (II.44) hold. Note that all stationary processes are wide-sense
stationary, but not conversely.

Glossary of Notation

(Only the notation used often in this book is included below.*)

NOTATION†  :  DEFINITION

A_n(t) = A(t)  :  P[t_n \le t] \to P[\tilde{t} \le t]
A_n^*(s) = A^*(s)  :  Laplace transform of a(t)
a_k  :  kth moment of a(t)
a_n(t) = a(t)  :  dA_n(t)/dt = dA(t)/dt
B_n(x) = B(x)  :  P[x_n \le x] \to P[\tilde{x} \le x]
B_n^*(s) = B^*(s)  :  Laplace transform of b(x)
b_k  :  kth moment of b(x)
b_n(x) = b(x)  :  dB_n(x)/dx = dB(x)/dx
C_b^2  :  squared coefficient of variation for service time
C_n  :  nth customer to enter the system
C_n(u) = C(u)  :  P[u_n \le u]
C_n^*(s) = C^*(s)  :  Laplace transform of c_n(u) = c(u)
c_n(u) = c(u)  :  dC_n(u)/du = dC(u)/du
D  :  denotes deterministic distribution
d_k  :  P[\tilde{q} = k]
E[X] = \bar{X}  :  expectation of the random variable X
E_i  :  system state i
E_r  :  denotes r-stage Erlangian distribution
FCFS  :  first-come-first-served
F_X(x)  :  P[X \le x]
f_X(x)  :  dF_X(x)/dx
G  :  denotes general distribution
G(y)  :  busy-period distribution
G^*(s)  :  Laplace transform of g(y)
g_k  :  kth moment of the busy-period duration
g(y)  :  dG(y)/dy
H_R  :  denotes R-stage hyperexponential distribution
Im(s)  :  imaginary part of the complex variable s
I_n \to \tilde{I}  :  duration of the (nth) idle period
I^*(s)  :  Laplace transform of the idle-period density
\hat{I}^*(s)  :  Laplace transform of the idle-time density in the dual system
K  :  size of finite storage
LCFS  :  last-come-first-served
M  :  denotes exponential distribution
M  :  size of finite population
m  :  number of servers
N_q(t) \to N_q  :  number of customers in queue at time t
N(t) \to N  :  number of customers in system at time t
O(x)  :  \lim_{x \to 0} O(x)/x = K < \infty
o(x)  :  \lim_{x \to 0} o(x)/x = 0
P  :  matrix of transition probabilities
P[A]  :  probability of the event A
P[A | B]  :  probability of the event A conditioned on the event B
PDF  :  probability distribution function
pdf  :  probability density function
P_k(t)  :  P[N(t) = k]
p_{ij}  :  P[next state is E_j | current state is E_i]
p_{ij}(s, t)  :  P[X(t) = j | X(s) = i]
p_k  :  P[k customers in system]
Q(z)  :  z-transform of P[\tilde{q} = k]
q_{ij}(t)  :  transition rates at time t
q_n  :  number left behind by the departure (of C_n)
q_n'  :  number found by the arrival (of C_n)
Re(s)  :  real part of the complex variable s
r_{ij}  :  P[next node is j | current node is i]
r_k  :  P[\tilde{q}' = k]
S_n(y) \to S(y)  :  P[s_n \le y] \to P[\tilde{s} \le y]
S_n^*(s) \to S^*(s)  :  Laplace transform of s_n(y) \to s(y)
s  :  Laplace transform variable
s_n \to \tilde{s}  :  time in system (for C_n)
s_n(y) \to s(y)  :  dS_n(y)/dy \to dS(y)/dy
\bar{s}_n \to \bar{s} = T  :  average time in system (for C_n)
s_n^k  :  kth moment of s_n(y)
T  :  average time in system
t_n \to \tilde{t}  :  interarrival time (between C_{n-1} and C_n)
\bar{t}_n \to \bar{t} = 1/\lambda  :  average interarrival time
U(t)  :  unfinished work in system at time t
u_0(t)  :  unit impulse function
u_n \to \tilde{u}  :  u_n = x_n - t_{n+1} \to \tilde{u} = \tilde{x} - \tilde{t}
V(z)  :  z-transform of P[\tilde{v} = k]
\tilde{v}  :  number of arrivals during the service time (of C_n)
W  :  average time in queue
W_0  :  average remaining service time
\hat{W}(y)  :  complementary waiting time
W_n(y) \to W(y)  :  P[w_n \le y] \to P[\tilde{w} \le y]
W_n^*(s) \to W^*(s)  :  Laplace transform of w_n(y) \to w(y)
\bar{w}_n \to \bar{w} = W  :  average waiting time (for C_n)
w_n(y) \to w(y)  :  dW_n(y)/dy \to dW(y)/dy
w_n^k \to w^k  :  kth moment of w_n(y)
X(t)  :  state of the stochastic process X(t) at time t
x_n \to \tilde{x}  :  service time (of C_n)
x^k  :  kth moment of b(x)
\bar{x} = 1/\mu  :  average service time
Y  :  busy-period duration
z  :  z-transform variable
\alpha(t)  :  number of arrivals in (0, t)
\gamma_i  :  (external) input rate to node i
\delta(t)  :  number of departures in (0, t)
\lambda  :  average arrival rate
\lambda_k  :  birth (arrival) rate when N = k
\mu  :  average service rate
\mu_k  :  death (service) rate when N = k
\pi^{(n)} \to \pi  :  vector of state probabilities [\pi_k^{(n)}]
\pi_k^{(n)} \to \pi_k  :  P[system state (at nth step) is E_k]
\rho  :  utilization factor
\prod_{i=1}^{n} a_i  :  a_1 a_2 \cdots a_n (product notation)
\sigma  :  root for G/M/m
\sigma_a^2  :  variance of the interarrival time
\sigma_b^2  :  variance of the service time
\tau_n  :  arrival time of C_n
\Phi_+(s)  :  Laplace transform of W(y)
\Phi_-(s)  :  Laplace transform of W_-(y)
\triangleq  :  equals by definition
(0, t)  :  the interval from 0 to t
(y)^+  :  max[0, y]
\binom{n}{k}  :  n!/[k!(n - k)!] (binomial coefficient)
A/B/m/K/M  :  m-server queue with A(t) and B(x) identified by A and B, respectively, with storage capacity of size K and customer population of size M (if either of the last two descriptors is missing, it is assumed to be infinite)
F^{(n)}(a)  :  d^n F(y)/dy^n evaluated at y = a
f_{(k)}(x)  :  f(x) \circledast \cdots \circledast f(x), the k-fold convolution
\circledast  :  convolution operator
f \to g  :  input f gives output g
A \Leftrightarrow B  :  statement A implies statement B and conversely
f \Leftrightarrow F  :  f and F form a transform pair

* In those few cases where a symbol has more than one meaning, the context
(or a specific statement) resolves the ambiguity.
† The use of the notation y_n \to y is meant to indicate that y = \lim y_n as
n \to \infty, whereas y(t) \to y indicates that y = \lim y(t) as t \to \infty.

Summary of Important Results

Following is a collection of the basic results (those marked by ■ in the text)
in the form of a list of equations.

GENERAL SYSTEMS

\rho = \lambda\bar{x}    (G/G/1)
\rho \triangleq \lambda\bar{x}/m    (G/G/m)
T = \bar{x} + W
N = \lambda T    (Little's result)
N_q = \lambda W
N_q = N - \rho
dP_k(t)/dt = (flow rate into E_k) - (flow rate out of E_k)
p_k = r_k    (for Poisson arrivals)
r_k = d_k    [when N(t) makes unit changes]

MARKOV PROCESSES

For a summary of discrete-state Markov chains, see the table below.
POISSON PROCESSES

P_k(t) = \frac{(\lambda t)^k}{k!} e^{-\lambda t}
\bar{N}(t) = \lambda t
\sigma_{N(t)}^2 = \lambda t

BIRTH-DEATH SYSTEMS

p_k = p_0 \prod_{i=0}^{k-1} \frac{\lambda_i}{\mu_{i+1}},  \qquad k \ge 1    (equilibrium solution)

p_0 = \frac{1}{1 + \sum_{k=1}^{\infty} \prod_{i=0}^{k-1} \lambda_i/\mu_{i+1}}

M/M/1

P_k(t) = e^{-(\lambda+\mu)t} \left[ \rho^{(k-i)/2} I_{k-i}(at) + \rho^{(k-i-1)/2} I_{k+i+1}(at)
         + (1 - \rho)\rho^k \sum_{j=k+i+2}^{\infty} \rho^{-j/2} I_j(at) \right]

where a \triangleq 2\mu\sqrt{\rho}, i is the initial number of customers, and I_k is the
modified Bessel function of the first kind

p_k = (1 - \rho)\rho^k
\bar{N} = \frac{\rho}{1 - \rho}
\sigma_N^2 = \frac{\rho}{(1 - \rho)^2}
T = \frac{1/\mu}{1 - \rho}
W = \frac{\rho/\mu}{1 - \rho}
P[\ge k \text{ in system}] = \rho^k
s(y) = \mu(1 - \rho)e^{-\mu(1-\rho)y},  \qquad y \ge 0
S(y) = 1 - e^{-\mu(1-\rho)y},  \qquad y \ge 0
w(y) = (1 - \rho)u_0(y) + \lambda(1 - \rho)e^{-\mu(1-\rho)y},  \qquad y \ge 0
W(y) = 1 - \rho e^{-\mu(1-\rho)y},  \qquad y \ge 0
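The M/M/1 equilibrium results above are simple enough to package as a small calculator (an illustrative sketch of ours, not code from the text; the rates lam = 0.8 and mu = 1.0 are arbitrary):

```python
# M/M/1 equilibrium quantities from the summary above.
def mm1(lam, mu):
    rho = lam / mu                      # utilization factor
    assert rho < 1, "queue is unstable"
    N = rho / (1 - rho)                 # mean number in system
    T = (1 / mu) / (1 - rho)            # mean time in system
    W = T - 1 / mu                      # mean wait = T - xbar
    p = lambda k: (1 - rho) * rho**k    # p_k = (1 - rho) rho^k
    return rho, N, T, W, p

rho, N, T, W, p = mm1(lam=0.8, mu=1.0)
print(rho, N, T, W, p(0))    # 0.8, 4.0, 5.0, 4.0, 0.2
# Little's result checks out: N = lam * T = 0.8 * 5.0 = 4.0
```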

Summary of Discrete-State Markov Chains

DISCRETE-TIME, HOMOGENEOUS
  One-step transition probability:  p_{ij} = P[X_{n+1} = j | X_n = i]
  Matrix of one-step transition probabilities:  P = [p_{ij}]
  Multiple-step transition probabilities:  p_{ij}^{(m)} = P[X_{n+m} = j | X_n = i];  P^{(m)} = [p_{ij}^{(m)}]
  Chapman-Kolmogorov equation:  p_{ij}^{(m)} = \sum_k p_{ik}^{(m-q)} p_{kj}^{(q)};  P^{(m)} = P^{(m-q)}P^{(q)}
  Forward equation:  P^{(m)} = P^{(m-1)}P
  Backward equation:  P^{(m)} = PP^{(m-1)}
  Solution:  P^{(m)} = P^m
  State probability:  \pi_j^{(n)} = P[X_n = j];  \pi^{(n)} = [\pi_j^{(n)}]
  Forward equation and solution:  \pi^{(n)} = \pi^{(n-1)}P = \pi^{(0)}P^n
  Equilibrium solution:  \pi = \pi P
  Transform relationship:  [I - zP]^{-1} \Leftrightarrow P^n

DISCRETE-TIME, NONHOMOGENEOUS
  One-step transition probability:  p_{ij}(n, n+1) = P[X_{n+1} = j | X_n = i];  P(n) = [p_{ij}(n, n+1)]
  Multiple-step transition probabilities:  p_{ij}(m, n) = P[X_n = j | X_m = i];  H(m, n) = [p_{ij}(m, n)]
  Chapman-Kolmogorov equation:  p_{ij}(m, n) = \sum_k p_{ik}(m, q) p_{kj}(q, n);  H(m, n) = H(m, q)H(q, n)
  Forward equation:  H(m, n) = H(m, n-1)P(n-1)
  Backward equation:  H(m, n) = P(m)H(m+1, n)
  Solution:  H(m, n) = P(m)P(m+1) \cdots P(n-1)
  State probability and solution:  \pi^{(n)} = \pi^{(n-1)}P(n-1) = \pi^{(0)}P(0)P(1) \cdots P(n-1)

CONTINUOUS-TIME, HOMOGENEOUS
  Transition probability:  p_{ij}(t) = P[X(s + t) = j | X(s) = i];  H(t) = [p_{ij}(t)]
  Chapman-Kolmogorov equation:  p_{ij}(t) = \sum_k p_{ik}(t - s)p_{kj}(s);  H(t) = H(t - s)H(s)
  Transition-rate matrix:  Q = \lim_{\Delta t \to 0} [P(\Delta t) - I]/\Delta t
  Forward equation:  dH(t)/dt = H(t)Q
  Backward equation:  dH(t)/dt = QH(t)
  Solution:  H(t) = e^{Qt}
  State probability:  \pi_j(t) = P[X(t) = j];  \pi(t) = [\pi_j(t)]
  Forward equation and solution:  d\pi(t)/dt = \pi(t)Q;  \pi(t) = \pi(0)e^{Qt}
  Equilibrium solution:  \pi Q = 0
  Transform relationship:  [sI - Q]^{-1} \Leftrightarrow H(t)

CONTINUOUS-TIME, NONHOMOGENEOUS
  Transition probability:  p_{ij}(s, t) = P[X(t) = j | X(s) = i];  H(s, t) = [p_{ij}(s, t)]
  Chapman-Kolmogorov equation:  p_{ij}(s, t) = \sum_k p_{ik}(s, u)p_{kj}(u, t);  H(s, t) = H(s, u)H(u, t)
  Transition-rate matrix:  Q(t) = \lim_{\Delta t \to 0} [P(t, t + \Delta t) - I]/\Delta t
  Forward equation:  \partial H(s, t)/\partial t = H(s, t)Q(t)
  Backward equation:  \partial H(s, t)/\partial s = -Q(s)H(s, t)
  Solution:  H(s, t) = \exp\left[\int_s^t Q(u)\, du\right]
  State probability solution:  \pi(t) = \pi(0)\exp\left[\int_0^t Q(u)\, du\right]

P[\text{interdeparture time} \le t] = 1 - e^{-\lambda t},  \qquad t \ge 0

f_n = P[N_{bp} = n] = \frac{1}{2n - 1}\binom{2n - 1}{n} \frac{\rho^{n-1}}{(1 + \rho)^{2n-1}}    (number served in a busy period)

M/M/1/K

p_k = \begin{cases} \dfrac{1 - \lambda/\mu}{1 - (\lambda/\mu)^{K+1}} \left(\dfrac{\lambda}{\mu}\right)^k & k \le K \\ 0 & \text{otherwise} \end{cases}

M/M/1//M

p_k = p_0 \frac{M!}{(M - k)!} \left(\frac{\lambda}{\mu}\right)^k, \qquad p_0 = \left[\sum_{i=0}^{M} \frac{M!}{(M - i)!} \left(\frac{\lambda}{\mu}\right)^i\right]^{-1}

M/M/1 busy-period density

g(y) = \frac{1}{y\sqrt{\rho}}\, e^{-(\lambda+\mu)y} I_1(2y(\lambda\mu)^{1/2})

M/M/1 bulk arrival (with bulk-size generating function G(z))

P(z) = \frac{\mu(1 - \rho)(1 - z)}{\mu(1 - z) - \lambda z[1 - G(z)]}

M/M/1 bulk service

p_k = \left(1 - \frac{1}{z_0}\right)\left(\frac{1}{z_0}\right)^k, \qquad k = 0, 1, 2, \ldots

M/M/m

p_k = \begin{cases} p_0 \dfrac{(m\rho)^k}{k!} & k \le m \\ p_0 \dfrac{(m\rho)^m}{m!} \rho^{k-m} & k \ge m \end{cases}

p_0 = \left[\sum_{k=0}^{m-1} \frac{(m\rho)^k}{k!} + \frac{(m\rho)^m}{m!}\left(\frac{1}{1 - \rho}\right)\right]^{-1}

P[\text{queueing}] = \frac{\dfrac{(m\rho)^m}{m!}\left(\dfrac{1}{1 - \rho}\right)}{\sum_{k=0}^{m-1} \dfrac{(m\rho)^k}{k!} + \dfrac{(m\rho)^m}{m!}\left(\dfrac{1}{1 - \rho}\right)}    (Erlang C formula)

M/M/m/m

p_k = \frac{(\lambda/\mu)^k/k!}{\sum_{i=0}^{m} (\lambda/\mu)^i/i!}

p_m = \frac{(\lambda/\mu)^m/m!}{\sum_{i=0}^{m} (\lambda/\mu)^i/i!}    (Erlang's loss formula)

M/D/1

\bar{q} = \frac{\rho}{1 - \rho} - \frac{\rho^2}{2(1 - \rho)}

W = \frac{\rho\bar{x}}{2(1 - \rho)}

G(y) = \sum_{n=1}^{\lfloor y/\bar{x} \rfloor} \frac{(n\rho)^{n-1}}{n!} e^{-n\rho}    (busy-period distribution)

f_n = \frac{(n\rho)^{n-1}}{n!} e^{-n\rho}    (number served in a busy period)

E_r (r-stage Erlang distribution)

b(x) = \frac{r\mu(r\mu x)^{r-1} e^{-r\mu x}}{(r - 1)!}, \qquad x \ge 0

\sigma_b = \frac{1}{\mu r^{1/2}}

M/E_r/1

p_j = (1 - \rho)\sum_{i=1}^{r} A_i z_i^{-j}, \qquad j = 1, 2, \ldots, r
(in terms of the r roots z_i of the characteristic equation)

E_r/M/1

p_k = \begin{cases} 1 - \rho & k = 0 \\ \rho(z_0^r - 1)z_0^{-rk} & k > 0 \end{cases}

H_R (R-stage hyperexponential distribution)

b(x) = \sum_{i=1}^{R} \alpha_i \mu_i e^{-\mu_i x}, \qquad x \ge 0

C_b^2 \ge 1

MARKOVIAN NETWORKS

\lambda_i = \gamma_i + \sum_j \lambda_j r_{ji}

p(k_1, k_2, \ldots, k_N) = p_1(k_1)p_2(k_2) \cdots p_N(k_N)    (open)
where p_i(k_i) is the solution to the isolated M/M/m_i queue

p(k_1, \ldots, k_N) = \frac{1}{G(K)} \prod_{i=1}^{N} x_i^{k_i}    (closed)
where x_i = \lambda_i/\mu_i and G(K) normalizes over \sum_i k_i = K

LIFE AND RESIDUAL LIFE

\hat{f}_X(x) = \frac{x f(x)}{m_1}    (lifetime density of sampled interval)

\hat{f}(y) = \frac{1 - F(y)}{m_1}    (residual-life density)

\hat{F}^*(s) = \frac{1 - F^*(s)}{s m_1}    (residual-life transform)

r_n = \frac{m_{n+1}}{(n + 1)m_1}    (nth moment of residual life)

r_1 = \frac{m_2}{2m_1}    (mean residual life)

r(x) = \frac{f(x)}{1 - F(x)}    (failure rate)

M/G/1

\bar{v} = \rho
\overline{v^2} - \bar{v} = \lambda^2\overline{x^2}
V(z) = B^*(\lambda - \lambda z)

\bar{q} = \rho + \rho^2 \frac{1 + C_b^2}{2(1 - \rho)}    (P-K mean value formula)

\frac{W}{\bar{x}} = \rho \frac{1 + C_b^2}{2(1 - \rho)}    (P-K mean value formula)

W = \frac{W_0}{1 - \rho}, \qquad W_0 \triangleq \frac{\lambda\overline{x^2}}{2}

Q(z) = B^*(\lambda - \lambda z)\frac{(1 - \rho)(1 - z)}{B^*(\lambda - \lambda z) - z}    (P-K transform equation)

W^*(s) = \frac{s(1 - \rho)}{s - \lambda + \lambda B^*(s)}    (P-K transform equation)

S^*(s) = B^*(s)\frac{s(1 - \rho)}{s - \lambda + \lambda B^*(s)}

P[\tilde{I} \le y] = 1 - e^{-\lambda y}    (idle-period distribution)

G^*(s) = B^*(s + \lambda - \lambda G^*(s))    (busy-period transform)

G(y) = P[\tilde{Y} \le y] = \sum_{n=1}^{\infty} \int_0^y e^{-\lambda x}\frac{(\lambda x)^{n-1}}{n!} b_{(n)}(x)\, dx

g_1 = \frac{\bar{x}}{1 - \rho}

g_2 = \frac{\overline{x^2}}{(1 - \rho)^3}

g_3 = \frac{\overline{x^3}}{(1 - \rho)^4} + \frac{3\lambda(\overline{x^2})^2}{(1 - \rho)^5}

g_4 = \frac{\overline{x^4}}{(1 - \rho)^5} + \frac{10\lambda\overline{x^2}\,\overline{x^3}}{(1 - \rho)^6} + \frac{15\lambda^2(\overline{x^2})^3}{(1 - \rho)^7}

F(z) = zB^*(\lambda - \lambda F(z))    (transform for number served in a busy period)

P[N_{bp} = n] = \int_0^{\infty} \frac{(\lambda y)^{n-1}}{n!} e^{-\lambda y} b_{(n)}(y)\, dy

h_1 = \frac{1}{1 - \rho}

\sigma_{N_{bp}}^2 = \frac{\rho(1 - \rho) + \lambda^2\overline{x^2}}{(1 - \rho)^3}

\frac{\partial F(w, t)}{\partial t} = \frac{\partial F(w, t)}{\partial w} - \lambda F(w, t) + \lambda\int_0^w B(w - x)\, d_x F(x, t)    (Takács integrodifferential equation)

F^{**}(r, s): double transform of the unfinished-work distribution F(w, t)

Q(z) = B^*[\lambda - \lambda G(z)]\frac{(1 - \rho)(1 - z)}{B^*[\lambda - \lambda G(z)] - z}    (bulk arrival)
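The P-K mean value formula is worth evaluating once to see the cost of service-time variability (a sketch of ours, not from the text; the three C_b^2 values correspond to deterministic, exponential, and an arbitrary hyperexponential service):

```python
# W = rho * xbar * (1 + Cb^2) / (2 * (1 - rho)) for a common mean service time.
lam, xbar = 0.8, 1.0
rho = lam * xbar
for name, cb2 in [("M/D/1 ", 0.0), ("M/M/1 ", 1.0), ("M/H2/1", 4.0)]:
    W = rho * xbar * (1 + cb2) / (2 * (1 - rho))
    print(name, W)   # 2.0, 4.0, 10.0 -- the wait grows with variability
```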

408

SUMMARY OF IMPORTA NT RESULTS

M/G/ a:;

234

T= x

234

s(y) = b(y )

234

G/M/1

r_k = (1 - \sigma)\sigma^k, \qquad k = 0, 1, 2, \ldots

\sigma = A^*(\mu - \mu\sigma)

W(y) = 1 - \sigma e^{-\mu(1-\sigma)y}, \qquad y \ge 0
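The root \sigma of the functional equation above is easy to compute by direct iteration. The sketch below (ours, not from the text) does this for D/M/1, where the interarrival transform is A^*(s) = e^{-s\bar{t}}; the rates chosen are arbitrary:

```python
import math

# Solve sigma = A*(mu - mu*sigma) for D/M/1 by fixed-point iteration.
mu, tbar = 1.0, 1.25            # service rate; deterministic interarrival time
A = lambda s: math.exp(-s * tbar)

sigma = 0.5                     # any starting point in (0, 1) works here
for _ in range(100):
    sigma = A(mu - mu * sigma)
print(sigma)                          # root sigma ~ 0.628
print(sigma / (mu * (1 - sigma)))     # mean wait W = sigma / (mu(1 - sigma))
```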

G/M/m

\sigma = A^*(m\mu - m\mu\sigma)

P[\text{queue size} = n \mid \text{arrival queues}] = (1 - \sigma)\sigma^n, \qquad n \ge 0

Transition probabilities of the imbedded Markov chain (p_{ij} = 0 for j > i + 1):

p_{ij} = \binom{i+1}{j}\int_0^{\infty} [1 - e^{-\mu t}]^{i+1-j}(e^{-\mu t})^j\, dA(t)    (i + 1 \le m)

\beta_n = p_{i,\,i+1-n} = \int_0^{\infty} \frac{(m\mu t)^n}{n!} e^{-m\mu t}\, dA(t), \qquad 0 \le n \le i + 1 - m,\; m \le i

The boundary probabilities R_k (k = 0, 1, \ldots, m - 2) satisfy

R_k = \sum_{i=k-1}^{m-2} R_i p_{ik} + \sum_{i=m-1}^{\infty} \sigma^{i+1-m} p_{ik}

w(y \mid \text{arrival queues}) = (1 - \sigma)m\mu e^{-m\mu(1-\sigma)y}, \qquad y \ge 0

W(y) = 1 - J e^{-m\mu(1-\sigma)y}, \qquad y \ge 0

where the constant J is determined by \sigma and the boundary probabilities R_k

G/G/1

w_{n+1} = (w_n + u_n)^+, \qquad u_n = x_n - t_{n+1}

c(u) = a(-u) \circledast b(u)

W(y) = \begin{cases} \displaystyle\int_{-\infty}^{y} W(y - u)\, dC(u) & y \ge 0 \\ 0 & y < 0 \end{cases}    (Lindley's integral equation)

A^*(-s)B^*(s) - 1 = \frac{\Psi_+(s)}{\Psi_-(s)}    (spectrum factorization; the waiting-time
transform \Phi_+(s) = W^*(s)/s follows from this factorization)

W = \frac{\sigma_a^2 + \sigma_b^2 + \bar{t}^2(1 - \rho)^2}{2\bar{t}(1 - \rho)} - \frac{\overline{I^2}}{2\bar{I}}

W = -\frac{\overline{u^2}}{2\bar{u}} - \frac{\overline{I^2}}{2\bar{I}}

\tilde{w} = \sup_{n \ge 0} U_n, \qquad U_0 = 0,\; U_n = u_1 + u_2 + \cdots + u_n

W(y) = \pi\{C(y) \circledast W(y)\}    (Wendel projection)

W^*(s) may also be expressed through the idle-time transforms I^*(s) and
\hat{I}^*(s) of the system and of its dual

Index

Abel transform, 321


Abscissa of absolute convergence, 340, 349
Absorbing state, 28
Age, 170
Algebra, real commutative, 301
Algebra for queues, 229-303
Alternating sum property, 330
Analytic continuation, 287
Analytic function, 328, 337
  isolated singular point, 337
Analyticity, 328
  common region of, 287
Aperiodic state, 28
Approximations, 319
ARPANET, 320
Arrival rate, 4
  average, 13
Arrival time, 12
Arrivals, 15
Autocorrelation, 395
Availability, 9
Average value, 378
Axiomatic theory of probability, 365
Axioms of probability theory, 364
Backward Chapman-Kolmogorov equation, 42, 47, 49
Balking, 9
Ballot theorem, classical, 224-225
  generalization, 225
Barycentric coordinates, 34
Bayes' theorem, 366
Birth-death process, 22, 25, 42, 53-78
  assumption, 54
  equilibrium solution, 90-94
  existence of, 93-94
  infinitesimal generator, 54
  linear, 82
  probability transition matrix, 43

  summary of results, 401
  transitions, 55-56
Birth rate, 53
Borel field, 365
Bottleneck, 152
Bound, 319
  Chernoff, 391
Bribing, 9
Bulk arrivals, 134-136, 162-163, 235, 270
Bulk service, 137-139, 163, 236
Burke's theorem, 149
Capacity, 4, 5, 15
Catastrophe process, 267-269
Cauchy inequality, 143
Cauchy residue theorem, 337, 352
Cauchy-Riemann, 328
Central limit theorem, 390
Chain, 20
Channels, 3
Chapman-Kolmogorov equation, 41, 47, 51
Characteristic equation, 356, 359
Characteristic function, 321, 381
Cheating, 9
Chernoff bound, 391
Closed queueing network, 150
Closed subset, 28
Coefficient of variation, 381
Combinatorial methods, 223-226
Commodity, 3
Complement, 364
Complete solution, 357, 360
Complex exponentials, 322-324
Complex s-plane, 291
Complex variable, 325
Compound distribution, 387
Computer center example, 6
Computer-communication networks, 320
Computer network, 7
Conditional pdf, 375
Conditional PDF, 375
Conditional probability, 365
Continuous-parameter process, 20
Continuous-state process, 20
Contour, 293
Convergence strip, 354
Convex function, 389
Convolution, density functions, 376
  notation, 376
  property, 329, 344-345
Cumulative distribution function, 370
Cut, 5
Cyclic queue, 113, 156-158
D/D/1, example, 309
Death process, 245
Death rate, 54
Decomposition, 119, 323, 327
Defections, 9
Delta function, Dirac, 341
  Kronecker, 325
Departures, 16, 174-176
D/E_r/1 queueing system, 314
Difference equations, 355-359
  standard solution, 355-357
  z-transform solution, 357-359
Differential-difference equation, 57, 361
Differential equations, 359-361
  Laplace transform solution, 360-361
  linear constant coefficient, 324
  standard solution, 359-360
Differential matrix, 38
Diffusion approximation, 319
Dirac delta function, 341
Discouraged arrivals, 99
Discrete-parameter process, 20
Discrete-state process, 20
Disjoint, 365
Domain, 369
Duality, 304, 309
Dual queue, 310-311
E_2/M/1, 259
  spectrum factorization, 297
Eigenfunctions, 322-324
Engset distribution, 109
Equilibrium equation, 91
Equilibrium probability, 30
Ergodic Markov chain, 30, 52
  process, 94
  state, 30
Erlang, 119, 286
  B formula, 106
  C formula, 103
  distribution, 72, 124
  loss formula, 106
E_r/M/1, 130-133, 405
E_r (r-stage Erlang distribution), 405
Event, 364
Exhaustive set of events, 365
Expectation, 13, 377-381
  fundamental theorem of, 379
Exponential distribution, 65-71
  coefficient of variation, 71
  Laplace transform, 70
  mean, 69
  memoryless property, 66-67
  variance, 70
Exponential function, 340
FCFS, 8
Fig flow example, 5
Final value theorem, 330, 346
Finite capacity, 4
Flow, 58
  conservation, 91-92
  rate, 59, 87, 91
  system, 3
  time, 12
Fluid approximation, 319
Forward Chapman-Kolmogorov equation, 42, 47, 49, 90
Foster's criteria, 30
Fourier transform, 321, 381
Function of a random variable, 375, 380
Gaussian distribution, 390
Generating function, 321, 327
  probabilistic interpretation, 262
Geometric distribution, 39
Geometric series, 328
Geometric transform, 327
G/G/1, 19, 275-312
  defining equation, 277
  mean wait, 306
  summary of results, 409
  waiting time transform, 307, 310
G/G/m, 11
Global-balance equations, 155
Glossary of Notation, 396-399

G/M/1, 251-253
  dual queue, 311
  mean wait, 252
  spectrum factorization, 292
  summary of results, 408
  waiting time distribution, 252
G/M/2, 256-259
  distribution of number of customers, 258
  distribution of waiting time, 258
G/M/m, 241-259
  conditional pdf for queueing time, 250
  conditional queue length distribution, 249
  functional equation, 249
  imbedded Markov chain, 241
  mean wait, 256
  summary of results, 408-409
  transition probabilities, 241-246
  waiting-time distribution, 255
Gremlin, 261
Group arrivals, see Bulk arrivals
Group service, see Bulk service
Heavy traffic approximation, 319
Hippie example, 26-27, 30-38
Homogeneous Markov chain, 27
Homogeneous solution, 355
H_R (R-stage hyperexponential distribution), 141, 405
Idle period, 206, 305, 311
  isolating effect, 281
Idle time, 304, 309
Imbedded Markov chain, 23, 167, 169, 241
  G/G/1, 276-279
  G/M/m, 241-246
  M/G/1, 174-177
Independence, 374
Independent process, 21
Independent random variables, product of functions, 386
  sums, 386
Inequality, Cauchy-Schwarz, 388
  Chebyshev, 388
  c_r, 389
  Hölder, 389
  Jensen, 389
  Markov, 388
  triangle, 389
Infinitesimal generator, 48
Initial value theorem, 330, 346

Input-output relationship, 321
Input variables, 12
Inspection technique, 58
Integral property, 346
Interarrival time, 8, 12
Interchangeable random variables, 278
Interdeparture time distribution, 148
Intermediate value theorem, 330
Intersection, 364
Irreducible Markov chain, 28
Jackson's theorem, 150
Jockeying, 9
Jordan's lemma, 353
Kronecker delta function, 325
Labeling algorithm, 5
Ladder height, 224
Ladder index, 223
  ascending, 309
  descending, 311
Laplace transform, 321, 338-355
  bilateral, 339, 348
  one-sided, 339
  probabilistic interpretation, 264, 267
  table of properties, 346
  table of transform pairs, 347
  two-sided, 339
Laplace transform inversion, inspection method, 340, 349
  inversion integral, 352-354
Laplace transform of the pdf, 382
Laurent expansion, 333
Law of large numbers, strong, 390
  weak, 390
LCFS, 8, 210
Life, summary of results, 406
Lifetime, 170
Limiting probability, 90
Lindley's integral equation, 282-283, 314
Linearity, 332, 345
Linear system, 321, 322, 324
Liouville's theorem, 287
Little's result, 17
  generalization, 240
Local-balance equations, 155-160
Loss system, 105
Marginal density function, 374

Marking of customers, 261-267
Markov chain, 21
  continuous-time, 22, 44-53
    definition, 44
  discrete-time, 22, 26-44
    definition, 27
  homogeneous, 27, 51
  nonhomogeneous, 39-42
  summary of results, 402-403
  theorems, 29
  transient behavior, 32, 35-36
Markovian networks, summary of results, 405
Markovian population processes, 155
Markov process, 21, 25, 402-403
Markov property, 22
Max-flow-min-cut theorem, 5
M/D/1, 188
  busy period, number served, 219
    pdf, 219
  mean wait, 191
  summary of results, 405
M/E_2/1, 234
Mean, 378
Mean recurrence time, 28
Measures, 301
  finite signed, 301
Memoryless property, 45
M/E_r/1, 126-130
  summary of results, 405
Merged Poisson stream, 79
Method of stages, 119-126
Method of supplementary variables, 169
Method of z-transform, 74-75
M/G/1, 167-230
  average time in system, 190
  average waiting time, 190
  busy period, 206-216
    distribution, 226
    moments, 213-214, 217-218
    number served, 217
    transform, 212
  discrete time, 238
  dual queue, 312
  example, 308
  feedback system, 239
  idle-time distribution, 208
  interdeparture time, 238
  mean queue length, 180-191
  probabilistic interpretation, 264
  state description, 168
  summary of results, 406
  system time, moments, 202
    transform, 199
  time-dependent behavior, 264-267
  transition probabilities, 177-180
  waiting time, moments, 201
    pdf, 201
M/G/∞, 234
  summary of results, 408
  time-dependent behavior, 271
M/H2/1 example, 189, 205
M/M/∞, 101
  time-dependent behavior, 262
M/M/∞//M, 107-108
M/M/1, 73-78, 94-99, 401, 404
  busy period, number served, 218
    pdf, 215
  discrete time, 112
  example, 307
  feedback, 113
  mean number, 96
  mean system time, 98
  mean wait, 191
  spectrum factorization, 290
  summary of results, 401, 404
  system time pdf, 202
  transient analysis, 77, 84-85
  variance of number, 97
  waiting time pdf, 203
M/M/1/K, 103-105
M/M/1//M, 106-107
M/M/m, 102-103, 259
  summary of results, 404
M/M/m/K/M, 108-109
M/M/m/m, 105-106
M/M/2, 110
Moment generating function, 382
Moment generating properties, 384
Moments, 380
  central, 380
Multi-access computer systems, 320
Mutually exclusive events, 365
Mutually exclusive exhaustive events, 365-366
Nearest neighbors, 53, 58
Network, 4
  closed, 150
  computer, 7
  open, 149

Network flow theory, 5
Networks of Markovian queues, 147-160, 405
Non-nearest-neighbor, 116
No queue, 161-162, 315-316
Normal distribution, 390
Notation, 10-15, 396-399
Null event, 364
Number of customers, 11
Open queueing networks, 149
Paradox of residual life, 169-173
Parallel stages, 140-143
Parameter shift, 347
Partial-fraction expansion, 333-336, 349-352
Particular solution, 355
Periodic state, 28
Permutations, 367
Pineapple factory example, 4
Poisson, catastrophe, 267
  distribution, 60, 63
  process, 61-65
    mean, 62
    probabilistic interpretation, 262
    summary of results, 400
    variance, 62
  pure death process, 245
Pole, 291
  multiple, 350
  simple, 350
Pole-zero pattern, 292, 298
Pollaczek-Khinchin (P-K) mean value formula, 187, 191, 308
Pollaczek-Khinchin (P-K) transform equation, 194, 199, 200, 308
Power-iteration, 160
Priority queueing, 8, 319
Probabilistic arguments, 261
Probability density function (pdf), 13, 371, 374
Probability distribution function (PDF), 13, 369
Probability generating function, 385
Probability measure, 364
Probability system, 365
Probability theory, 10, 363-395
Processor-sharing algorithm, 320
Product notation, 334
Projection, 301
Pure birth process, 60, 72, 81
Pure death process, 72
Queueing discipline, 8
Queueing system, 8-9, 11
Queue size, average, 188
Random event, 363
Random sum, 388
Random variables, 368-377
  continuous, 373
  discrete, 373
  expectation of product, 379
  expectation of sum, 379
  mixed, 373
Random walk, 23, 25, 223-224
Range, 369
Recurrent, nonnull, 29
  null, 29, 94
  process, 24
  state, 28
Reducible, 28
Regularity, 363
Relative frequency, 364
Renewal, density, 173
  function, 173, 268
  process, 24, 25
  theorem, 174
  theory, 169-174
    integral equation, 174, 269
Residual life, 169, 170, 222
  density, 172, 231
  mean, 173, 305
  moments, 173
  summary of results, 406
Residual service time, 200
Responsive server, 101
Riemann integral, 377
Root, multiple, 356, 359
Rouché's theorem, 293, 355
Sample, mean, 389
  path, 40
  point, 364
  space, 364
Scale change, 332, 345
Schwartz's theory of distributions, 341
Second-order theory, 395
Semi-invariant generating function, 391
Semi-Markov process, 23, 25, 175

Series-parallel stages, 139-147
Series stages, 119-126, 140
Series sum property, 330
Service time, 8, 12
Set theoretic notation, 364
Sifting property, 343
Single channel, 4
Singularity, 355
Singularity functions, family of, 344
Spectrum factorization, 283, 286-290
  examples, 290-299
  solution, 290
Spitzer's identity, 302
Stable flow, 4
Stages, 119, 126
Standard deviation, 381
State space, 20
State-transition diagram, 30
State-transition-rate diagram, 58
State vector, 167
Stationary distribution, 29
Stationary process, 21
Statistical independence, 365
Steady flow, 4
Step function, 151, 181
Stieltjes integral, 377
Stochastic flow, 6
Stochastic processes, 20, 393-395
  classification, 19-26
  stationary, 394
Stochastic sequence, 20
Storage capacity, 8
Sub-busy period, 210
Supplementary variables, method of, 233
Sweep (probability), 300
System function, 325
System time, 12
Takács integrodifferential equation, 226-230
Tandem network, 147-149
Taylor-series expansion, 332
Time-diagram notation, 14
Time-invariant system, 322, 324
Time-shared computer systems, 319
Total probability, theorem of, 366
Traffic intensity, 18
Transfer function, 325
Transform, 321
  Abel, 321
  bilateral, 354
  Fourier, 381
  Hankel, 321
  Laplace, 338-355
  Mellin, 321
  method of analysis, 324
  two-sided, 383
  z-transform, 327-338
Transient process, 94
Transient state, 28
Transition probability, G/M/m, 241-246
  M/G/1, 177-180
  matrix, 31
  m-step, 27
  one-step, 27
Transition-rate matrix, 48
Translation, 332, 345
Unfinished work, 11, 206-208, 276
  time-dependent transform, 229
Union, 364
Unit advance, 332
Unit delay, 332
Unit doublet, 343
Unit function, 325, 328
Unit impulse function, 301, 341-344
Unit response, 325
Unit step function, 328, 341, 343
Unsteady flow, 4
Utilization factor, 18
Vacation, 239
Variance, 381
Vector transform, 35
Virtual waiting time, 206
  see also Unfinished work
Waiting time, 12
  complementary, 284
  transform, 290
Wendel projection, 301, 303
Wide-sense stationarity, 21, 395
Wiener-Hopf integral equation, 282
Work, 18
  see also Unfinished work
Zeroes, 291
z-Transform, 321, 327-338, 385
  inversion, inspection method, 333
    inversion formula, 336
    power-series method, 333
  method of, 74-75
  table of properties, 330
  table of transform pairs, 331
