
Markov chain

http://en.wikipedia.org/wiki/Markov_chain#Statistics
From Wikipedia, the free encyclopedia

This article is about Markov chains in discrete time. For Markov chains in continuous time, see continuous-time Markov chain.

A simple two-state Markov chain.

A Markov chain (discrete-time Markov chain or DTMC[1]), named after Andrey Markov, is a mathematical system that undergoes transitions from one state to another between a finite or countable number of possible states. It is a random process usually characterized as memoryless: the next state depends only on the current state and not on the sequence of events that preceded it. This specific kind of "memorylessness" is called the Markov property. Markov chains have many applications as statistical models of real-world processes.

Contents
1 Introduction
2 Formal definition
  2.1 Variations
  2.2 Example
3 Transient evolution
4 Properties
  4.1 Reducibility
  4.2 Periodicity
  4.3 Recurrence
    4.3.1 Mean recurrence time
    4.3.2 Expected number of visits
    4.3.3 Absorbing states
  4.4 Ergodicity
  4.5 Steady-state analysis and limiting distributions
    4.5.1 Steady-state analysis and the time-inhomogeneous Markov chain
5 Finite state space
  5.1 Time-homogeneous Markov chain with a finite state space
  5.2 Convergence speed to the stationary distribution
6 Reversible Markov chain
7 Bernoulli scheme
8 General state space
  8.1 Locally interacting Markov chains
9 Applications
  9.1 Physics
  9.2 Chemistry
  9.3 Testing
  9.4 Information sciences
  9.5 Queueing theory
  9.6 Internet applications
  9.7 Statistics
  9.8 Economics and finance
  9.9 Social sciences
  9.10 Mathematical biology
  9.11 Genetics
  9.12 Games
  9.13 Music
  9.14 Baseball
  9.15 Markov text generators
10 Fitting
11 History
12 See also
13 Notes
14 References
15 External links

Introduction

Russian mathematician Andrey Markov, the namesake.

A Markov chain is a stochastic process with the Markov property on a finite or countable state space. The term "Markov chain" refers to the sequence (or chain) of states such a process moves through. Usually a Markov chain is defined for a discrete set of times (i.e., a discrete-time Markov chain),[2] although some authors use the same terminology to refer to a continuous-time Markov chain.[3][4] The use of the term in Markov chain Monte Carlo methodology covers cases where the process is in discrete time (discrete algorithm steps) with a continuous state space. The following concentrates on the discrete-time, discrete-state-space case.

The changes of state of the system are called transitions, and the probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state or initial distribution across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state and the process goes on forever.

A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement; formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.

Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important.

A famous Markov chain is the so-called "drunkard's walk", a random walk on the number line where, at each step, the position may change by +1 or −1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the manner in which the position was reached. For example, the transition probabilities from 5 to 4 and 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. These probabilities are independent of whether the system was previously in 4 or 6.

Another example is the dietary habits of a creature who eats only grapes, cheese, or lettuce, and whose dietary habits conform to the following rules:

- It eats exactly once a day.
- If it ate cheese today, tomorrow it will eat lettuce or grapes with equal probability.
- If it ate grapes today, tomorrow it will eat grapes with probability 1/10, cheese with probability 4/10 and lettuce with probability 5/10.
- If it ate lettuce today, it will not eat lettuce again tomorrow but will eat grapes with probability 4/10 or cheese with probability 6/10.

"his creat!reFs eating ha#its can #e modeled with a Markov chain since its choice tomorrow depends solely on what it ate today, not what it ate yesterday or even farther in the past. Ine

statistical property that co!ld #e calc!lated is the e-pected percentage, over a long period, of the days on which the creat!re will eat grapes. % series of independent events &for e-ample, a series of coin flips* satisfies the formal definition of a Markov chain. Bowever, the theory is !s!ally applied only when the pro#a#ility distri#!tion of the ne-t step depends non$trivially on the c!rrent state. Many other e-amples of Markov chains e-ist.
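That long-run percentage is exactly the stationary distribution of the chain. A minimal sketch of the computation, assuming Python with NumPy (the tooling is an illustrative choice, not something the article prescribes):

```python
import numpy as np

# States: 0 = grapes, 1 = cheese, 2 = lettuce (rows follow the rules above)
P = np.array([[0.1, 0.4, 0.5],   # day after grapes
              [0.5, 0.0, 0.5],   # day after cheese
              [0.4, 0.6, 0.0]])  # day after lettuce

# Stationary distribution: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print(pi[0])  # long-run fraction of days spent eating grapes
```

For these particular numbers the stationary distribution works out to (1/3, 1/3, 1/3), so the creature eats grapes on about a third of all days in the long run.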

Formal definition
A Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that, given the present state, the future and past states are independent. Formally,

    Pr(X_{n+1} = x | X_1 = x_1, X_2 = x_2, ..., X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n),

provided both conditional probabilities are well defined, i.e., if Pr(X_1 = x_1, ..., X_n = x_n) > 0.

"he possi#le val!es of Xi form a co!nta#le set S called the state space of the chain. Markov chains are often descri#ed #y a directed graph, where the edges are la#eled #y the pro#a#ilities of going from one state to the other states.

Variations

Continuous-time Markov processes have a continuous index.

Time-homogeneous Markov chains (or stationary Markov chains) are processes where

    Pr(X_{n+1} = x | X_n = y) = Pr(X_n = x | X_{n−1} = y)

for all n. The probability of the transition is independent of n.

A Markov chain of order m (or a Markov chain with memory m), where m is finite, is a process satisfying

    Pr(X_n = x_n | X_{n−1} = x_{n−1}, ..., X_1 = x_1)
      = Pr(X_n = x_n | X_{n−1} = x_{n−1}, ..., X_{n−m} = x_{n−m})   for n > m.

In other words, the future state depends on the past m states. It is possible to construct a chain (Y_n) from (X_n) which has the "classical" Markov property by taking as state space the ordered m-tuples of X values, i.e., Y_n = (X_n, X_{n−1}, ..., X_{n−m+1}).

An additive Markov chain of order m is determined by an additive conditional probability,

    Pr(X_n = x_n | X_{n−1} = x_{n−1}, ..., X_{n−m} = x_{n−m}) = Σ_{r=1}^{m} f(x_n, x_{n−r}, r).

"he val!e f&xn,xn-r,r* is the additive contri#!tion of the varia#le xn-r to the conditional pro#a#ility.'7)'clarification needed)

Example

Main article: Examples of Markov chains

A state diagram for a simple example is shown in the figure on the right, using a directed graph to picture the state transitions. The states represent whether a hypothetical stock market is exhibiting a bull market, bear market, or stagnant market trend during a given week. According to the figure, a bull week is followed by another bull week 90% of the time, a bear week 7.5% of the time, and a stagnant week the other 2.5% of the time. Labelling the state space {1 = bull, 2 = bear, 3 = stagnant}, the transition matrix for this example is

    P = | 0.9   0.075  0.025 |
        | 0.15  0.8    0.05  |
        | 0.25  0.25   0.5   |

"he distri#!tion over states can #e written as a stochastic row vector x with the relation x&n G (* J x&n*P. So if at time n the system is in state 0 &#ear*, then three time periods later, at time n G 3 the distri#!tion is

Using the transition matrix it is possible to calculate, for example, the long-term fraction of weeks during which the market is stagnant, or the average number of weeks it will take to go from a stagnant to a bull market. Using the transition probabilities, the steady-state probabilities indicate that 62.5% of weeks will be in a bull market, 31.25% of weeks will be in a bear market and 6.25% of weeks will be stagnant, since:

    lim_{N → ∞} P^N = | 0.625  0.3125  0.0625 |
                      | 0.625  0.3125  0.0625 |
                      | 0.625  0.3125  0.0625 |
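These figures can be checked directly. The sketch below, assuming Python with NumPy, reproduces both the three-week distribution and the long-run proportions:

```python
import numpy as np

P = np.array([[0.9, 0.075, 0.025],
              [0.15, 0.8, 0.05],
              [0.25, 0.25, 0.5]])

x = np.array([0.0, 1.0, 0.0])             # state 2 (bear) at time n
print(x @ np.linalg.matrix_power(P, 3))   # (0.3575, 0.56825, 0.07425)

# Long-term behaviour: P^N approaches a matrix with identical rows
print(np.linalg.matrix_power(P, 50)[0])   # ≈ (0.625, 0.3125, 0.0625)
```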

A thorough development and many examples can be found in the on-line monograph Meyn & Tweedie 2005.[6] The appendix of Meyn 2007,[7] also available on-line, contains an abridged Meyn & Tweedie.

A finite state machine can be used as a representation of a Markov chain. Assuming a sequence of independent and identically distributed input signals (for example, symbols from a binary alphabet chosen by coin tosses), if the machine is in state y at time n, then the probability that it moves to state x at time n + 1 depends only on the current state.

Transient evolution
"he pro#a#ility of going from state i to state j in n time steps is

and the single$step transition is

For a time$homogeneo!s Markov chain:

and

"he n$step transition pro#a#ilities satisfy the 8hapmanOPolmogorov e.!ation, that for any k s!ch that A Q k Q n,

where S is the state space of the Markov chain. "he marginal distri#!tion 5r&Xn J x* is the distri#!tion over states at time n. "he initial distri#!tion is 5r&XA J x*. "he evol!tion of the process thro!gh one time step is descri#ed #y

Cote: "he s!perscript &n* is an inde- and not an e-ponent.
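In matrix form, the Chapman–Kolmogorov equation says the n-step matrix factors as P^(n) = P^(k) P^(n−k). A quick illustrative check on the stock-market matrix above (Python with NumPy assumed):

```python
import numpy as np

P = np.array([[0.9, 0.075, 0.025],
              [0.15, 0.8, 0.05],
              [0.25, 0.25, 0.5]])

n, k = 7, 3
lhs = np.linalg.matrix_power(P, n)
rhs = np.linalg.matrix_power(P, k) @ np.linalg.matrix_power(P, n - k)
print(np.allclose(lhs, rhs))  # True: P^(n) = P^(k) P^(n-k)
```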

"roperties[edit]
Reducibility
A state j is said to be accessible from a state i (written i → j) if a system started in state i has a non-zero probability of transitioning into state j at some point. Formally, state j is accessible from state i if there exists an integer n ≥ 0 such that

    Pr(X_n = j | X_0 = i) = p_{ij}^{(n)} > 0.

Allowing n to be zero means that every state is defined to be accessible from itself.

A state i is said to communicate with state j (written i ↔ j) if both i → j and j → i. A set of states C is a communicating class if every pair of states in C communicates with each other, and no state in C communicates with any state not in C. It can be shown that communication in this sense is an equivalence relation and thus that communicating classes are the equivalence classes of this relation. A communicating class is closed if the probability of leaving the class is zero, namely if i is in C but j is not, then j is not accessible from i.

A state i is said to be essential or final if for all j such that i → j it is also true that j → i. A state i is inessential if it is not essential.[8]

Finally, a Markov chain is said to be irreducible if its state space is a single communicating class; in other words, if it is possible to get to any state from any state.

"eriodicity[edit]
A state i has period k if any return to state i must occur in multiples of k time steps. Formally, the period of a state is defined as

    k = gcd{ n : Pr(X_n = i | X_0 = i) > 0 }

(where "gcd" is the greatest common divisor). Note that even though a state has period k, it may not be possible to reach the state in k steps. For example, suppose it is possible to return to the state in {6, 8, 10, 12, ...} time steps; k would be 2, even though 2 does not appear in this list. If k = 1, then the state is said to be aperiodic: returns to state i can occur at irregular times. In other words, a state i is aperiodic if there exists n such that for all n′ ≥ n,

    Pr(X_{n′} = i | X_0 = i) > 0.

Otherwise (k > 1), the state is said to be periodic with period k. A Markov chain is aperiodic if every state is aperiodic. An irreducible Markov chain only needs one aperiodic state to imply that all states are aperiodic. Every state of a bipartite graph has an even period.
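The gcd definition translates directly into a small finite-horizon computation. The sketch below (Python; the horizon cutoff is a simplifying assumption, since in principle all return times contribute to the gcd) scans the diagonals of successive powers of P:

```python
import numpy as np
from math import gcd
from functools import reduce

def periods(P, horizon=100):
    """Estimate each state's period: gcd of all n <= horizon with Pr(X_n = i | X_0 = i) > 0."""
    n_states = P.shape[0]
    returns = [[] for _ in range(n_states)]
    Pn = np.eye(n_states)
    for n in range(1, horizon + 1):
        Pn = Pn @ P
        for i in range(n_states):
            if Pn[i, i] > 1e-12:
                returns[i].append(n)
    return [reduce(gcd, r, 0) for r in returns]

# Two-state flip chain: every return takes an even number of steps
print(periods(np.array([[0.0, 1.0], [1.0, 0.0]])))  # [2, 2]
```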

Recurrence
A state i is said to be transient if, given that we start in state i, there is a non-zero probability that we will never return to i. Formally, let the random variable T_i be the first return time to state i (the "hitting time"):

    T_i = inf{ n ≥ 1 : X_n = i | X_0 = i }.

"he n!m#er

is the pro#a#ility that we ret!rn to state i for the first time after n steps. "herefore, state i is transient if

State i is recurrent (or persistent) if it is not transient. Recurrent states have finite hitting time with probability 1.

Mean recurrence time

Even if the hitting time is finite with probability 1, it need not have a finite expectation. The mean recurrence time at state i is the expected return time M_i:

    M_i = E[T_i] = Σ_{n=1}^{∞} n · f_{ii}^{(n)}.

State i is positive recurrent (or non-null persistent) if M_i is finite; otherwise, state i is null recurrent (or null persistent).

Expected number of visits

It can be shown that a state i is recurrent if and only if the expected number of visits to this state is infinite, i.e.,

    Σ_{n=0}^{∞} p_{ii}^{(n)} = ∞.

Absorbing states

A state i is called absorbing if it is impossible to leave this state. Therefore, the state i is absorbing if and only if

    p_{ii} = 1   and   p_{ij} = 0 for i ≠ j.

If every state can reach an absorbing state, then the Markov chain is an absorbing Markov chain.
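A structural test along these lines, as a sketch (the reachability computation via boolean matrix powers is an illustrative choice):

```python
import numpy as np

def is_absorbing_chain(P):
    """True if some state is absorbing and every state can reach one."""
    n = P.shape[0]
    absorbing = [i for i in range(n) if P[i, i] == 1.0]
    if not absorbing:
        return False
    # n-step reachability on the adjacency structure of P
    A = ((P > 0) | np.eye(n, dtype=bool)).astype(int)
    reach = np.linalg.matrix_power(A, n) > 0
    return all(any(reach[i, a] for a in absorbing) for i in range(n))

# Gambler's-ruin style chain: states 0 and 2 are absorbing
P = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])
print(is_absorbing_chain(P))  # True
```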

Ergodicity
A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if it is recurrent, has a period of 1 and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. It can be shown that a finite-state irreducible Markov chain is ergodic if it has an aperiodic state.

A model has the ergodic property if there is a finite number N such that any state can be reached from any other state in exactly N steps. In the case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1. A model with more than one state and just one outgoing transition per state cannot be ergodic.

Steady-state analysis and limiting distributions


If the Markov chain is a time-homogeneous Markov chain, so that the process is described by a single, time-independent matrix p_{ij}, then the vector π is called a stationary distribution (or invariant measure) if its entries π_j are non-negative, sum to 1, and satisfy

    π_j = Σ_{i ∈ S} π_i p_{ij}.

An irreducible chain has a stationary distribution if and only if all of its states are positive recurrent. In that case, π is unique and is related to the expected return time:

    π_j = C / M_j,

where C is the normalizing constant. Further, if the chain is both irreducible and aperiodic, then for any i and j,

    lim_{n → ∞} p_{ij}^{(n)} = C / M_j.

Note that there is no assumption on the starting distribution; the chain converges to the stationary distribution regardless of where it begins. Such a π is called the equilibrium distribution of the chain.

If a chain has more than one closed communicating class, its stationary distributions will not be unique (consider any closed communicating class in the chain; each one will have its own unique stationary distribution. Extending these distributions to the overall chain, setting all values to zero outside the communicating class, yields that the set of invariant measures of the original chain is the set of all convex combinations of the class-wise stationary distributions). However, if a state j is aperiodic, then

    lim_{n → ∞} p_{jj}^{(n)} = C / M_j

and for any other state i, let f_{ij} be the probability that the chain ever visits state j if it starts at i; then

    lim_{n → ∞} p_{ij}^{(n)} = C · f_{ij} / M_j.

If a state i is periodic with period k > 1 then the limit

    lim_{n → ∞} p_{ii}^{(n)}

does not exist, although the limit

    lim_{n → ∞} p_{ii}^{(kn + r)}

does exist for every integer r.

Steady-state analysis and the time-inhomogeneous Markov chain

A Markov chain need not necessarily be time-homogeneous to have an equilibrium distribution. If there is a probability distribution over states π such that

    π_j = Σ_{i ∈ S} π_i Pr(X_{n+1} = j | X_n = i)

for every state j and every time n, then π is an equilibrium distribution of the Markov chain. Such a situation can occur in Markov chain Monte Carlo (MCMC) methods where a number of different transition matrices are used, because each is efficient for a particular kind of mixing, but each matrix respects a shared equilibrium distribution.

Finite state space


If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)th element of P equal to

    p_{ij} = Pr(X_{n+1} = j | X_n = i).

Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix.

Time-homogeneous Markov chain with a finite state space


"his section incl!des a list of references, related reading or e-ternal links, #!t the sources of this section remain unclear $ecause it lacks inline citations. 5lease improve this article #y introd!cing more precise citations. !e"r#ary $%1$& +f the Markov chain is time$homogeneo!s, then the transition matri- " is the same after each step, so the k$step transition pro#a#ility can #e comp!ted as the k$th power of the transition matri-, "k. "he stationary distri#!tion ) is a &row* vector, whose entries are non$negative and s!m to (, that satisfies the e.!ation

In other words, the stationary distribution π is a normalized (meaning that the sum of its entries is 1) left eigenvector of the transition matrix associated with the eigenvalue 1.

Alternatively, π can be viewed as a fixed point of the linear (hence continuous) transformation on the unit simplex associated to the matrix P. As any continuous transformation in the unit simplex has a fixed point, a stationary distribution always exists, but it is not guaranteed to be unique in general. However, if the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. Additionally, in this case P^k converges to a rank-one matrix in which each row is the stationary distribution π, that is,

    lim_{k → ∞} P^k = 1 π,

where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, lim_{k → ∞} P^k is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below.

For some stochastic matrices P, the limit does not exist, as shown by this example:

    P = | 0  1 |,    P^{2k} = I,    P^{2k+1} = P.
        | 1  0 |

Because there are a number of different special cases to consider, the process of finding this limit, if it exists, can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an n×n matrix, and define Q = lim_{k → ∞} P^k. It is always true that

    Q P = Q.

Subtracting Q from both sides and factoring then yields

    Q (P − I_n) = 0_{n,n},

where I_n is the identity matrix of size n, and 0_{n,n} is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above together with the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each of the rows in P is 1, there are n + 1 equations for determining n unknowns, so it is computationally easier to replace one column of the system by the normalization constraint: substitute each element of one column of the coefficient matrix by one, substitute the corresponding element of the right-hand-side zero vector by one, and then left-multiply by the inverse of the transformed matrix to find Q. Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − I_n)]^{−1} exists, then[citation needed]

    Q = f(0_{n,n}) [f(P − I_n)]^{−1}.

Explanation: The original matrix equation is equivalent to a system of n×n linear equations in n×n variables, and there are n more linear equations from the fact that Q is a right stochastic matrix whose rows each sum to 1. Any n×n independent linear equations of the (n×n + n) equations suffice to solve for the n×n variables. In this method, the n equations from "Q multiplied by the right-most column of (P − I_n)" have been replaced by the n stochastic ones.

One thing to notice is that if P has an element P_{i,i} on its main diagonal that is equal to 1 and the i-th row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers P^k. Hence, the i-th row or column of Q will have the 1 and the 0's in the same positions as in P.
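Read literally, this recipe is only a few lines of code. A sketch, assuming Python with NumPy and assuming, as the text does, that the inverse exists:

```python
import numpy as np

def f(A):
    # Return A with its right-most column replaced by all ones
    B = A.copy()
    B[:, -1] = 1.0
    return B

def limiting_matrix(P):
    n = P.shape[0]
    # Q = f(0_{n,n}) [f(P - I_n)]^{-1}, valid when the inverse exists
    return f(np.zeros((n, n))) @ np.linalg.inv(f(P - np.eye(n)))

P = np.array([[0.9, 0.075, 0.025],
              [0.15, 0.8, 0.05],
              [0.25, 0.25, 0.5]])
print(limiting_matrix(P))  # each row ≈ (0.625, 0.3125, 0.0625)
```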

Convergence speed to the stationary distribution


"his section incl!des a list of references, related reading or e-ternal links, #!t the sources of this section remain unclear $ecause it lacks inline citations. 5lease improve this article #y introd!cing more precise citations. !e"r#ary $%1$& %s stated earlier, from the e.!ation , &if e-ists* the stationary &or steady state* distri#!tion ) is a left eigenvector of row stochastic matri- ". "hen ass!ming that " is diagonali,a#le or e.!ivalently that " has n linearly independent eigenvectors, speed of convergence is ela#orated as follows. For non$diagonali,a#le matrices, one may start with / ordan 8anonical Form/ &almo(t diagonal form* of " and proceed with a #it more involved set of arg!ments in a similar way.'?) >et - #e the matri- of eigenvectors &each normali,ed to having an >0 norm e.!al to (* where each col!mn is a left eigenvector of " and let . #e the diagonal matri- of left eigenval!es of ", i.e. . J diag&Z(,Z0,Z3,...,Zn*. "hen #y eigendecomposition

Let the eigenvalues be enumerated such that 1 = |λ1| > |λ2| ≥ |λ3| ≥ ... ≥ |λn|. Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector are unique too (because there is no other π which solves the stationary distribution equation above). Let u_i be the i-th column of U, i.e., u_i is the left eigenvector of P corresponding to λ_i. Also let x be an arbitrary length-n row vector in the span of the eigenvectors u_i, that is

    x = Σ_{i=1}^{n} a_i u_i^T

for some set of a_i. If we start multiplying P with x from the left and continue this operation with the results, in the end we get the stationary distribution π: x P^k approaches π as k goes to infinity. That means

    x P^k = x U Σ^k U^{−1},

since U U^{−1} = I, the identity matrix, and the power of a diagonal matrix is also a diagonal matrix where each entry is taken to that power. Hence

    x P^k = a_1 λ_1^k u_1^T + a_2 λ_2^k u_2^T + ... + a_n λ_n^k u_n^T,

since the eigenvectors are orthonormal. Then[10]

    π^{(k)} = x P^k = a_1 u_1^T + a_2 λ_2^k u_2^T + ... + a_n λ_n^k u_n^T,   using λ_1 = 1.

Since π = a_1 u_1^T, π^{(k)} approaches π as k goes to infinity with a speed on the order of |λ2/λ1|^k, i.e., exponentially fast. This follows because |λ2| ≥ |λ3| ≥ ... ≥ |λn|, so λ2/λ1 is the dominant term. Random noise in the state distribution π can also speed up this convergence to the stationary distribution.[11]
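The role of |λ2| can be seen numerically. A sketch (Python with NumPy assumed) comparing the actual distance to π with the |λ2|^k envelope for the stock-market matrix:

```python
import numpy as np

P = np.array([[0.9, 0.075, 0.025],
              [0.15, 0.8, 0.05],
              [0.25, 0.25, 0.5]])

evals, evecs = np.linalg.eig(P.T)           # left eigenvectors of P
order = np.argsort(-np.abs(evals))
pi = np.real(evecs[:, order[0]])            # Perron eigenvector (eigenvalue 1)
pi /= pi.sum()

lam2 = np.abs(evals[order[1]])              # second-largest eigenvalue modulus
x = np.array([0.0, 1.0, 0.0])               # start in state 2 (bear)
for k in (1, 5, 10, 20):
    err = np.abs(x @ np.linalg.matrix_power(P, k) - pi).sum()
    print(k, err, lam2**k)                  # error shrinks roughly like |λ2|^k
```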

Reversible Markov chain


A Markov chain is said to be reversible if there is a probability distribution over states, π, such that

    π_i Pr(X_{n+1} = j | X_n = i) = π_j Pr(X_{n+1} = i | X_n = j)

for all times n and all states i and j. This condition is also known as the detailed balance condition (some books call it the local balance equation). With a time-homogeneous Markov chain, Pr(X_{n+1} = j | X_n = i) does not change with time n and it can be written more simply as p_{ij}. In this case, the detailed balance equation can be written more compactly as

    π_i p_{ij} = π_j p_{ji}.

Summing the original equation over i gives

    Σ_i π_i Pr(X_{n+1} = j | X_n = i) = π_j,

so, for reversible Markov chains, π is always a steady-state distribution of Pr(X_{n+1} = j | X_n = i) for every n. If the Markov chain begins in the steady-state distribution, i.e., if Pr(X_0 = i) = π_i, then Pr(X_n = i) = π_i for all n and the detailed balance equation can be written as

    Pr(X_n = i, X_{n+1} = j) = Pr(X_n = j, X_{n+1} = i).

"he left$ and right$hand sides of this last e.!ation are identical e-cept for a reversing of the time indices n and n G (. PolmogorovFs criterion gives a necessary and s!fficient condition for a Markov chain to #e reversi#le directly from the transition matri- pro#a#ilities. "he criterion re.!ires that the prod!cts of pro#a#ilities aro!nd every closed loop are the same in #oth directions aro!nd the loop. 6eversi#le Markov chains are common in Markov chain Monte 8arlo &M8M8* approaches #eca!se the detailed #alance e.!ation for a desired distri#!tion ) necessarily implies that the Markov chain has #een constr!cted so that ) is a steady$state distri#!tion. 2ven with time$ inhomogeneo!s Markov chains, where m!ltiple transition matrices are !sed, if each s!ch transition matri- e-hi#its detailed #alance with the desired ) distri#!tion, this necessarily implies that ) is a steady$state distri#!tion of the Markov chain.

Bernoulli scheme
A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent even of the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process.

General state space


Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains. The main idea is to see if there is a point in the state space that the chain hits with probability one. Generally, this is not true for a continuous state space; however, we can define sets A and B along with a positive number ε and a probability measure ρ, such that two conditions hold: 1. ... 2. ... Then we could collapse the sets into an auxiliary point α, and a recurrent Harris chain can be modified to contain α. Lastly, the collection of Harris chains is a comfortable level of generality, which is broad enough to contain a large number of interesting examples, yet restrictive enough to allow for a rich theory.

Locally interacting Markov chains


Considering a collection of Markov chains whose evolution takes into account the state of other Markov chains leads to the notion of locally interacting Markov chains. This corresponds to the situation when the state space has a (Cartesian) product form. See interacting particle system and stochastic cellular automata. See, for instance, Interaction of Markov Processes[12] or.[13]

Applications
"his section does not cite any references or sources. 5lease help improve this section #y adding citations to relia#le so!rces. Dnso!rced material may #e challenged and removed. May $%1$& 6esearch has reported the application and !sef!lness of Markov chains in a wide range of topics s!ch as physics, chemistry, medicine, m!sic, game theory and sports.

"hysics[edit]
Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description.[citation needed]

Chemistry

Michaelis–Menten kinetics. The enzyme (E) binds a substrate (S) and produces a product (P). Each reaction is a state transition in a Markov chain.

Chemistry is often a place where Markov chains and continuous-time Markov processes are especially useful because these simple physical systems tend to satisfy the Markov property quite well. The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis–Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains.

An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products.[14] As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (i.e., it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds.

Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (e.g., whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due to steric effects, second-order Markov effects may also play a role in the growth of some polymer chains. Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains.[15]

Testing
Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets (samples) as a replacement for exhaustive testing. MCSTs also have uses in temporal state-based networks; Chilukuri et al.'s paper entitled "Temporal Uncertainty Reasoning Networks for Evidence Fusion with Applications to Object Detection and Tracking" (ScienceDirect) gives a background and case study for applying MCSTs to a wider range of applications.

Information sciences
Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy through Markov modeling of the English language. Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning.

Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics.

Queueing theory
Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917.[16] This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth).[7]

Internet applications

How Markov chains can be used in the computation of PageRank.

The PageRank of a webpage as used by Google is defined by a Markov chain.[17] It is the probability of being at page i in the stationary distribution of the following Markov chain on all (known) webpages. If N is the number of known webpages, and a page i has k_i links out of it, then the transition probability is

    α/k_i + (1 − α)/N

for all pages that are linked to, and (1 − α)/N for all pages that are not linked to. The parameter α is taken to be about 0.85.[18]
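A power-iteration sketch of this chain (Python with NumPy; the four-page web and the handling of link-less "dangling" pages are illustrative assumptions):

```python
import numpy as np

def pagerank(links, alpha=0.85, iters=100):
    """Power iteration on the PageRank chain; links[i] lists pages that page i links to."""
    n = len(links)
    P = np.full((n, n), (1 - alpha) / n)   # teleportation term (1 - alpha)/N
    for i, outs in enumerate(links):
        if outs:
            for j in outs:
                P[i, j] += alpha / len(outs)
        else:
            P[i, :] += alpha / n           # dangling page: jump uniformly (one common convention)
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        x = x @ P                          # follow the chain toward its stationary distribution
    return x

# Hypothetical 4-page web: 0->1, 0->2, 1->2, 2->0, 3->2
print(pagerank([[1, 2], [2], [0], [2]]))
```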

Markov models have also been used to analyze the web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.

Statistics
Markov chain methods have also become very important for generating sequences of random numbers that accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically.
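One standard MCMC construction is the Metropolis algorithm, whose transition rule defines a Markov chain whose equilibrium is the desired distribution. A minimal sketch (Python; the standard normal target is an illustrative assumption):

```python
import random, math

def metropolis(log_target, step, x0, n_samples):
    """Random-walk Metropolis sampler over a one-dimensional target."""
    x, samples = x0, []
    for _ in range(n_samples):
        y = x + random.gauss(0.0, step)
        # Accept with probability min(1, target(y)/target(x))
        if random.random() < math.exp(min(0.0, log_target(y) - log_target(x))):
            x = y
        samples.append(x)
    return samples

# Example: sample from a standard normal density (unnormalized log-density)
samples = metropolis(lambda z: -0.5 * z * z, 1.0, 0.0, 10_000)
print(sum(samples) / len(samples))  # ≈ 0 once the chain has mixed
```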

Economics and finance

Markov chains are used in finance and economics to model a variety of different phenomena, including asset prices and market crashes. The first financial model to use a Markov chain was from Prasad et al. in 1974.[19] Another was the regime-switching model of James D. Hamilton (1989), in which a Markov chain is used to model switches between periods of high volatility and low volatility of asset returns.[20] A more recent example is the Markov switching multifractal asset pricing model, which builds upon the convenience of earlier regime-switching models.[21] It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns.

Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting.[22]

Social sciences
Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's Das Kapital, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how, once a country reaches a specific level of economic development, the configuration of structural factors, such as the size of the commercial bourgeoisie, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from an authoritarian to a democratic regime.[23]

Mathematical biology
Markov chains also have many applications in biological modelling, particularly population processes, which are useful in modelling processes that are (at least) analogous to biological populations. The Leslie matrix is one such example, though some of its entries are not probabilities (they may be greater than 1). Another example is the modeling of cell shape in dividing sheets of epithelial cells.[24] Yet another example is the state of ion channels in cell membranes. Markov chains are also used in simulations of brain function, such as the simulation of the mammalian neocortex.[25]

Genetics
Markov chains have been used in population genetics to describe the change in gene frequencies in small populations affected by genetic drift, for example in the diffusion equation method described by Motoo Kimura.[citation needed]

Games
Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).

Music

Markov chains are employed in algorithmic music composition, particularly in software programs such as CSound, Max or SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce and output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric.[26]

1st-order matrix
  Note   A      C#     Eb
  A      0.1    0.6    0.3
  C#     0.25   0.05   0.7
  Eb     0.7    0.3    0

2nd-order matrix
  Notes  A      D      G
  AA     0.18   0.6    0.22
  AD     0.5    0.5    0
  AG     0.15   0.75   0.1
  DD     0      0      1
  DA     0.25   0      0.75
  DG     0.9    0.1    0
  GG     0.4    0.4    0.2
  GA     0.5    0.25   0.25
  GD     1      0      0

A second-order Markov chain can be introduced by considering the current state and also the previous state, as indicated in the second table. Higher, nth-order chains tend to "group" particular notes together, while "breaking off" into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense of phrasal structure, rather than the "aimless wandering" produced by a first-order system.[27] Markov chains can be used structurally, as in Xenakis's Analogique A and B.[28] Markov chains are also used in systems which use a Markov model to react interactively to music input.[29] Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.[30]
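The first-order table above translates directly into a note generator. A sketch in Python (the note spellings and the sampling routine are illustrative choices):

```python
import random

# First-order note transition matrix from the table above
transitions = {
    "A":  [("A", 0.1),  ("C#", 0.6),  ("Eb", 0.3)],
    "C#": [("A", 0.25), ("C#", 0.05), ("Eb", 0.7)],
    "Eb": [("A", 0.7),  ("C#", 0.3),  ("Eb", 0.0)],
}

def generate(start, length):
    note, out = start, [start]
    for _ in range(length - 1):
        notes, weights = zip(*transitions[note])
        note = random.choices(notes, weights)[0]  # sample the next note
        out.append(note)
    return out

print(generate("A", 16))  # e.g. ['A', 'C#', 'Eb', 'A', ...]
```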

Baseball
Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain state when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players and a team.[31] He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing and differences when playing on grass vs. AstroTurf.[32]

Markov text generators


Markov processes can also be used to generate superficially "real-looking" text given a sample document: they are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison,[33] Mark V Shaney[34][35]). These processes are also used by spammers to inject real-looking hidden paragraphs into unsolicited email and post comments in an attempt to get these messages past spam filters.
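A minimal word-level generator of this kind, as a sketch in Python (the file name sample.txt is a hypothetical stand-in for any sample document):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, order=2, length=30):
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

sample = open("sample.txt").read()  # any sample document
print(babble(build_chain(sample)))
```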

Fitting
When fitting a Markov chain to data, situations where parameters poorly describe the situation may highlight interesting trends.[36]

History
Andrey Markov produced the first results (1906) for these processes, purely theoretically. A generalization to countably infinite state spaces was given by Kolmogorov (1936). Markov chains are related to Brownian motion and the ergodic hypothesis, two topics in physics which were important in the early years of the twentieth century, but Markov appears to have pursued this out of a mathematical motivation, namely the extension of the law of large numbers to dependent events. In 1913, he applied his findings for the first time to the first 20,000 letters of Pushkin's Eugene Onegin.[citation needed] By 1917, more practical application of his work was made by Erlang to obtain formulas for call loss and waiting time in telephone networks.[16] Seneta provides an account of Markov's motivations and the theory's early development.[37] The term "chain" was used by Markov (1906).[38]

See also

Hidden Markov model
Telescoping Markov chain
Markov chain mixing time
Markov chain geostatistics
Quantum Markov chain
Markov process
Markov information source
Markov chain Monte Carlo
Markov network
Markov blanket
Semi-Markov process
Variable-order Markov model
Markov decision process

Notes
1. ^ Norris, James R. (1998). Markov Chains. Cambridge University Press.
2. ^ Everitt, B.S. (2002) The Cambridge Dictionary of Statistics. CUP. ISBN 0-521-81099-X
3. ^ Parzen, E. (1962) Stochastic Processes, Holden-Day. ISBN 0-8162-6664-6 (Table 6.1)
4. ^ Dodge, Y. (2003) The Oxford Dictionary of Statistical Terms, OUP. ISBN 0-19-920613-9 (entry for "Markov chain")
5. ^ Usatenko, O. V.; Apostolov, S. S.; Mayzelis, Z. A.; Melnik, S. S. (2010) Random Finite-Valued Dynamical Systems: Additive Markov Chain Approach. Cambridge Scientific Publisher. ISBN 978-1-904868-74-3
6. ^ S. P. Meyn and R. L. Tweedie, 2005. Markov Chains and Stochastic Stability. Second edition to appear, Cambridge University Press, 2008.
7. ^ a b S. P. Meyn, 2007. Control Techniques for Complex Networks, Cambridge University Press, 2007.
8. ^ Asher Levin, David (2009). Markov Chains and Mixing Times. p. 16. ISBN 978-0-8218-4739-8.
9. ^ Florian Schmitt and Franz Rothlauf, "On the Mean of the Second Largest Eigenvalue on the Convergence Rate of Genetic Algorithms", Working Paper 1/2001, Working Papers in Information Systems, 2001. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.28.6191
10. ^ Gene H. Golub, Charles F. Van Loan, Matrix Computations, Third Edition, The Johns Hopkins University Press, Baltimore and London, 1996.
11. ^ Franzke, Brandon; Kosko, Bart (1 October 2011). "Noise can speed convergence in Markov chains". Physical Review E 84 (4). doi:10.1103/PhysRevE.84.041112.
12. ^ Spitzer, Frank (1970). "Interaction of Markov Processes". Advances in Mathematics 5: 246–290.
13. ^ R. L. Dobrushin, V. I. Kryukov, A. L. Toom (1978). Stochastic Cellular Systems: Ergodicity, Memory, Morphogenesis.
14. ^ Kutchukian, Peter; Lou, David; Shakhnovich, Eugene (2009). "FOG: Fragment Optimized Growth Algorithm for the de Novo Generation of Molecules Occupying Druglike Chemical Space". Journal of Chemical Information and Modeling 49 (7): 1630–1642. doi:10.1021/ci9000458. PMID 19527020.
15. ^ Kopp, V. S.; Kaganer, V. M.; Schwarzkopf, J.; Waidick, F.; Remmele, T.; Kwasniewski, A.; Schmidbauer, M. (2011). "X-ray diffraction from nonperiodic layered structures with correlations: Analytical calculation and experiment on mixed Aurivillius films". Acta Crystallographica Section A: Foundations of Crystallography 68: 148. doi:10.1107/S0108767311044874.
16. ^ a b O'Connor, John J.; Robertson, Edmund F., "Markov chain", MacTutor History of Mathematics archive, University of St Andrews.
17. ^ U.S. Patent 6,285,999
18. ^ Page, Lawrence; Brin, Sergey; Motwani, Rajeev; Winograd, Terry (1999). The PageRank Citation Ranking: Bringing Order to the Web. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.1768
19. ^ Prasad, NR; RC Ender; ST Reilly; G Nesgos (1974). "Allocation of resources on a minimized cost basis". 1974 IEEE Conference on Decision and Control including the 13th Symposium on Adaptive Processes 13: 402–3. doi:10.1109/CDC.1974.270470.

20. ^ Hamilton, James (1989). "A new approach to the economic analysis of nonstationary time series and the business cycle". Econometrica 57 (2): 357–84. doi:10.2307/1912559. JSTOR 1912559.
21. ^ Calvet, Laurent; Adlai Fisher (2004). "How to forecast long-run volatility: regime-switching and the estimation of multifractal processes". Journal of Financial Econometrics 2: 49–83. doi:10.1093/jjfinec/nbh003.
22. ^ Brennan, Michael; Xia, Yihong. "Stock Price Volatility and the Equity Premium". Department of Finance, the Anderson School of Management, UCLA.
23. ^ Acemoglu, Daron; Georgy Egorov; Konstantin Sonin (2011). "Political model of social evolution". Proceedings of the National Academy of Sciences 108: 21292–21296.
24. ^ Gibson, Matthew C.; Patel, Ankit P.; Perrimon, Norbert (2006). "The emergence of geometric order in proliferating metazoan epithelia". Nature 442: 1038–1041. doi:10.1038/nature05014.
25. ^ George, Dileep; Hawkins, Jeff (2009). "Towards a Mathematical Theory of Cortical Micro-circuits". In Friston, Karl J. PLoS Comput Biol 5 (10): e1000532. doi:10.1371/journal.pcbi.1000532. PMC 2749218. PMID 19816557.
26. ^ K McAlpine, E Miranda, S Hoggar (1999). "Making Music with Algorithms: A Case-Study System". Computer Music Journal 23 (2): 19. doi:10.1162/014892699559733.
27. ^ Curtis Roads (ed.) (1996). The Computer Music Tutorial. MIT Press. ISBN 0-262-18158-4.
28. ^ Xenakis, Iannis; Kanach, Sharon (1992) Formalized Music: Mathematics and Thought in Composition, Pendragon Press. ISBN 1576470792
29. ^ Continuator[dead link]
30. ^ Pachet, F.; Roy, P.; Barbieri, G. (2011) "Finite-Length Markov Processes with Constraints", Proceedings of the 22nd International Joint Conference on Artificial Intelligence, IJCAI, pages 635–642, Barcelona, Spain, July 2011.
31. ^ Pankin, Mark D. "MARKOV CHAIN MODELS: THEORETICAL BACKGROUND". Retrieved 2007-11-26.
32. ^ Pankin, Mark D. "BASEBALL AS A MARKOV CHAIN". Retrieved 2009-04-24.
33. ^ Poet's Corner – Fieralingue
34. ^ Kenner, Hugh; O'Rourke, Joseph (November 1984). "A Travesty Generator for Micros". BYTE 9 (12): 129–131, 449–469.
35. ^ Hartman, Charles (1996). Virtual Muse: Experiments in Computer Poetry. Hanover, NH: Wesleyan University Press. ISBN 0-8195-2239-2
36. ^ Avery, P. J.; Henderson, D. A. (1999). "Fitting Markov Chain Models to Discrete State Series Such as DNA Sequences". Journal of the Royal Statistical Society 48 (1): 53–61. doi:10.1111/1467-9876.00139. JSTOR 2680818.
37. ^ Seneta, E. (1996). "Markov and the Birth of Chain Dependence Theory". International Statistical Review 64 (3): 255–263. doi:10.2307/1403785. JSTOR 1403785.
38. ^ Upton, G.; Cook, I. (2008). Oxford Dictionary of Statistics. OUP. ISBN 978-0-19-954145-4.

References

A.A. Markov. "Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot druga". Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete, 2-ya seriya, tom 15, pp. 135–156, 1906.
A.A. Markov. "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". Reprinted in Appendix B of: R. Howard. Dynamic Probabilistic Systems, volume 1: Markov Chains. John Wiley and Sons, 1971.
Classical text in translation: A. A. Markov, "An Example of Statistical Investigation of the Text Eugene Onegin Concerning the Connection of Samples in Chains", trans. David Link. Science in Context 19.4 (2006): 591–600. Online: http://journals.cambridge.org/production/action/cjoGetFulltext?fulltextid=637500
Leo Breiman. Probability. Original edition published by Addison-Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. ISBN 0-89871-296-3. (See Chapter 7.)
J. L. Doob. Stochastic Processes. New York: John Wiley and Sons, 1953. ISBN 0-471-52369-0.
S. P. Meyn and R. L. Tweedie. Markov Chains and Stochastic Stability. London: Springer-Verlag, 1993. ISBN 0-387-19832-6. Online: https://netfiles.uiuc.edu/meyn/www/spm_files/book.html . Second edition to appear, Cambridge University Press, 2009.
S. P. Meyn. Control Techniques for Complex Networks. Cambridge University Press, 2007. ISBN 978-0-521-88441-9. The appendix contains an abridged Meyn & Tweedie. Online: https://netfiles.uiuc.edu/meyn/www/spm_files/CTCN/CTCN.html
Booth, Taylor L. (1967). Sequential Machines and Automata Theory (1st ed.). New York: John Wiley and Sons, Inc. Library of Congress Card Catalog Number 67-25924. Extensive, wide-ranging book meant for specialists, written for both theoretical computer scientists as well as electrical engineers. With detailed explanations of state minimization techniques, FSMs, Turing machines, Markov processes, and undecidability. Excellent treatment of Markov processes pp. 449ff. Discusses Z-transforms, D transforms in their context.
Kemeny, John G.; Hazleton Mirkil; J. Laurie Snell; Gerald L. Thompson (1959). Finite Mathematical Structures (1st ed.). Englewood Cliffs, N.J.: Prentice-Hall, Inc. Library of Congress Card Catalog Number 59-12841. Classical text. cf. Chapter 6, Finite Markov Chains, pp. 384ff.
E. Nummelin. General Irreducible Markov Chains and Non-negative Operators. Cambridge University Press, 1984, 2004. ISBN 0-521-60494-X
Seneta, E. Non-negative Matrices and Markov Chains. 2nd rev. ed., 1981, XVI, 288 p., Softcover, Springer Series in Statistics. (Originally published by Allen & Unwin Ltd., London, 1973.) ISBN 978-0-387-29765-1
Kishor S. Trivedi, Probability and Statistics with Reliability, Queueing, and Computer Science Applications, John Wiley & Sons, Inc., New York, 2002. ISBN 0-471-33341-7.
K. S. Trivedi and R. A. Sahner, "SHARPE at the age of twenty-two", ACM SIGMETRICS Performance Evaluation Review, vol. 36, no. 4, pp. 52–57, 2009.
R. A. Sahner, K. S. Trivedi and A. Puliafito, Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package, Kluwer Academic Publishers, 1996. ISBN 0-7923-9650-2.
G. Bolch, S. Greiner, H. de Meer and K. S. Trivedi, Queueing Networks and Markov Chains, John Wiley, 2nd edition, 2006. ISBN 978-0-7923-9650-5.

External links

Hazewinkel, Michiel, ed. (2001), "Markov chain", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
Techniques to Understand Computer Simulations: Markov Chain Analysis
Markov Chains chapter in American Mathematical Society's introductory probability book (pdf)
Chapter 5: Markov Chain Models
