Dynamic Programming and Markov Processes by Ronald A. Howard (MIT Press, Cambridge, Mass.)
Paul A. Jensen
Markov Chain Transition Matrix (continued: 4-mo column, row sums, and state status)
State  4-mo  Row Sum  Status
New    0     1        Class-1
1-mo   0     1        Class-1
2-mo   0     1        Class-1
3-mo   0.3   1        Class-1
4-mo   0.2   1        Class-1
Economic Data
Type: DTMC Transition Cost Matrix
Title: Bulb

Index  State  State Cost  Combined Cost
0      New    2.5         2.5
1      1-mo   0.5         0.5
2      2-mo   0.5         0.5
3      3-mo   0.5         0.5
4      4-mo   0.5         0.5
(The transition cost entries are all zero.)
Markov Chain Transition Matrix
Type: DTMC (matrix analyzed)   Title: Bulb_A   Time step: Month   States: 5
Analysis: 1 recurrent state, 1 recurrent state class, 4 transient states

Index  State  New  1-mo  2-mo  3-mo  4-mo  Row Sum  Status
0      New    1    0     0     0     0     1        Class-1
1      1-mo   1    0     0     0     0     1        Transient
2      2-mo   1    0     0     0     0     1        Transient
3      3-mo   1    0     0     0     0     1        Transient
4      4-mo   1    0     0     0     0     1        Transient
Sum           5    0     0     0     0
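The Class-1/Transient labels come from classifying the chain's states: a state is recurrent exactly when every state reachable from it can reach back. A quick sketch (pure Python, reachability by boolean transitive closure), applied to the Bulb_A chain above, where every state jumps back to New:

```python
# Classify states of a finite Markov chain as recurrent or transient:
# a state is recurrent iff every state reachable from it can reach back.
def classify(P):
    n = len(P)
    reach = [[P[i][j] > 0 or i == j for j in range(n)] for i in range(n)]
    for k in range(n):            # transitive closure (Floyd-Warshall style)
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    status = []
    for i in range(n):
        recurrent = all(reach[j][i] for j in range(n) if reach[i][j])
        status.append("Recurrent" if recurrent else "Transient")
    return status

# Bulb_A: replace every month, so every state returns to New with probability 1
P_A = [[1, 0, 0, 0, 0]] * 5
print(classify(P_A))
# -> ['Recurrent', 'Transient', 'Transient', 'Transient', 'Transient']
```

The same routine applied to the optimal-policy chain (Bulb1, further below in this section) marks the four reachable states recurrent and 4-mo transient.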
Economic Data
Type: DTMC Transition Cost Matrix
Title: Bulb_A

Index  State  State Cost  Combined Cost
0      New    1.7         1.7
1      1-mo   0           0
2      2-mo   0           0
3      3-mo   0           0
4      4-mo   0           0
(The transition cost entries are all zero.)
DP Solver
Type: MDP   Title: Bulb   Goal: Min
States: 5   Actions: 2 (2 per state)   Events: 2 (2 per action)
Iteration Type: Value   Policy Steps: 10   Gain: 0
Max. Val. Difference: 1.2048   Sum Prob. Difference: 0.0782
Time Measure: Month   Economic Measure: Cost
Discount Rate: 1.000%   Step Interval: 1

State List
Index  State  State Cost  Decision Index  Action   State Value
1      New    2           1               Inspect  14.1604
2      1-mo   0           3               Inspect  12.1947
3      2-mo   0           3               Inspect  12.523
4      3-mo   0           4               Replace  12.5294
5      4-mo   0           4               Replace  12.5294

Next Value (states 1-2): 12.958, 10.99
State Prob.: 0.2598, 0.2728 (states 1-2); 0.1883, 0.0942, 0 (states 3-5)
Exp. Value: 1.31219

Transition Probability (2-mo, 3-mo, 4-mo columns and row sums; decisions in list order)
New / Inspect    0    0    0    1
New / Replace    0    0    0    1
1-mo / Inspect   0.7  0    0    1
1-mo / Replace   0    0    0    1
2-mo / Inspect   0    0.5  0    1
2-mo / Replace   0    0    0    1
3-mo / Inspect   0    0    0.3  1
3-mo / Replace   0    0    0    1
4-mo / Inspect   0    0    0.2  1
4-mo / Replace   0    0    0    1
Min Sum: 1   Max Sum: 1   Min Prob.: 0
(The companion transition cost matrix is all zeros.)
DP Solver
Type: MDP   Title: Bulb_DP1   Goal: Min
States: 5   Actions: 2 (2 per state)   Events: 2 (2 per action)
Iteration Type: Value   Iteration Steps: 0   Stop Dif.: 0.00001
Value Error: 99999   Prob. Error: 99999
Time Measure: Month   Economic Measure: Cost
Discount Rate: 1.000%   Discount Factor: 99.0%   Step Interval: 1

State List
Index  State  State Cost  Final Cost
1      New    2           0
2      1-mo   0           0
3      2-mo   0           0
4      3-mo   0           0
5      4-mo   0           0

Action List                   Event List
Index  Action   Cost          Index  Event    Cost    Prob.
1      Inspect  0.5           1      Survive  0       1
2      Replace  -0.3          2      Fail     0       0
3      Null     0             3      Null     0       0
4      NA       999999        4      NA       999999  0

Decision List: decisions 1-10 map to states 1, 1, 2, 2, 3, 3, 4, 4, 5, 5
Initial values: 0 0 0 0 0   Initial state probabilities: 1 0 0 0 0

Decision Transition Probabilities
Action  Decision        New  1-mo  2-mo  3-mo  4-mo  Row Sum
1       New / Inspect   0.4  0.6   0     0     0     1
2       New / Replace   1    0     0     0     0     1
1       1-mo / Inspect  0.3  0     0.7   0     0     1
2       1-mo / Replace  1    0     0     0     0     1
1       2-mo / Inspect  0.5  0     0     0.5   0     1
2       2-mo / Replace  1    0     0     0     0     1
1       3-mo / Inspect  0.7  0     0     0     0.3   1
2       3-mo / Replace  1    0     0     0     0     1
1       4-mo / Inspect  0.8  0     0     0     0.2   1
2       4-mo / Replace  1    0     0     0     0     1
Min Sum: 1   Max Sum: 1   Min Prob.: 0
(The companion transition cost matrix is all zeros.)
Final Step 10
Sum Prob.  Index  State  Action     State Value  State Prob.
6.97391    1      New    1 Inspect  14.1604      0.46317
3.30129    2      1-mo   1 Inspect  12.1947      0.27275
2.11101    3      2-mo   1 Inspect  12.523       0.18196
0.61378    4      3-mo   2 Replace  12.5294      0.08212
0          5      4-mo   2 Replace  12.5294      0
Initial next values: 0 for every state; initial probabilities (Last Prob.): 1, 0, 0, 0, 0

Value-iteration trace (per state: action, state value, state probability)
Step 0:  Replace 1.7 (1); Replace -0.3 (0); Replace -0.3 (0); Replace -0.3 (0); Replace -0.3 (0)
Step 1:  Inspect 2.99505 (0.4); Inspect 0.79703 (0.6); Inspect 1.19307 (0); Replace 1.38317 (0); Replace 1.38317 (0)
Step 2:  Inspect 4.15964 (0.34); Inspect 2.2165 (0.24); Replace 2.6654 (0.42); Replace 2.6654 (0); Replace 2.6654 (0)
Step 3:  Inspect 5.46411 (0.628); Inspect 3.58284 (0.204); Replace 3.81846 (0.168); Replace 3.81846 (0); Replace 3.81846 (0)
Step 4:  Inspect 6.79243 (0.3964); Inspect 4.76946 (0.3768); Inspect 5.09533 (0.1428); Replace 5.11001 (0.084); Replace 5.11001 (0)
Step 5:  Inspect 8.02341 (0.427); Inspect 6.04897 (0.23784); Inspect 6.3923 (0.26376); Replace 6.42517 (0.0714); Replace 6.42517 (0)
Step 6:  Inspect 9.27104 (0.57731); Inspect 7.3135 (0.2562); Replace 7.64397 (0.16649); Replace 7.64397 (0); Replace 7.64397 (0)
Step 7:  Inspect 10.5163 (0.39103); Inspect 8.55158 (0.34639); Inspect 8.87377 (0.17934); Replace 8.87924 (0.08324); Replace 8.87924 (0)
Step 8:  Inspect 11.745 (0.43324); Inspect 9.7738 (0.23462); Inspect 10.1018 (0.24247); Replace 10.1122 (0.08967); Replace 10.1122 (0)
Step 9:  Inspect 12.9577 (0.45459); Inspect 10.9899 (0.25995); Inspect 11.3204 (0.16423); Replace 11.3287 (0.12124); Replace 11.3287 (0)
Step 10: Inspect 14.1604 (0.46317); Inspect 12.1947 (0.27275); Inspect 12.523 (0.18196); Replace 12.5294 (0.08212); Replace 12.5294 (0)
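The trace above is the standard backward recursion v(s) = min over actions of [state cost + action cost + expected next value / (1 + i)], with i = 1% per month. A sketch reproducing it (model data transcribed from the lists above):

```python
# Reproduce the 10-step value-iteration trace for the bulb MDP (a sketch).
state_cost = [2.0, 0.0, 0.0, 0.0, 0.0]          # New, 1-mo, 2-mo, 3-mo, 4-mo
action_cost = {"Inspect": 0.5, "Replace": -0.3}
P = {"Inspect": [[0.4, 0.6, 0, 0, 0], [0.3, 0, 0.7, 0, 0], [0.5, 0, 0, 0.5, 0],
                 [0.7, 0, 0, 0, 0.3], [0.8, 0, 0, 0, 0.2]],
     "Replace": [[1, 0, 0, 0, 0]] * 5}
beta = 1 / 1.01                                  # 1% monthly discount rate

v = [0.0] * 5                                    # zero terminal values
for step in range(11):                           # steps 0 through 10
    v = [min(state_cost[s] + action_cost[a]
             + beta * sum(p * vj for p, vj in zip(P[a][s], v))
             for a in ("Inspect", "Replace"))
         for s in range(5)]

print([round(x, 4) for x in v])
# step-10 values; the solver reports 14.1604, 12.1947, 12.523, 12.5294, 12.5294
```

Steps 0-2 of this recursion match the trace digit for digit (e.g. step 1 for New: 2.5 + (0.4 * 1.7 + 0.6 * (-0.3)) / 1.01 = 2.99505), which pins down the discount convention the solver uses.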
Markov Chain Transition Matrix
Type: DTMC (matrix analyzed)   Title: Bulb1   Time step: Month   States: 5
Analysis: 4 recurrent states, 1 recurrent state class, 1 transient state

Index  State  New  1-mo  2-mo  3-mo  4-mo  Row Sum  Status
0      New    0.4  0.6   0     0     0     1        Class-1
1      1-mo   0.3  0     0.7   0     0     1        Class-1
2      2-mo   0.5  0     0     0.5   0     1        Class-1
3      3-mo   1    0     0     0     0     1        Class-1
4      4-mo   1    0     0     0     0     1        Transient
Sum           3.2  0.6   0.7   0.5   0
Economic Data
Type: DTMC Transition Cost Matrix
Title: Bulb1

Index  State  State Cost  Combined Cost
0      New    2.5         2.5
1      1-mo   0.5         0.5
2      2-mo   0.5         0.5
3      3-mo   -0.3        -0.3
4      4-mo   -0.3        -0.3
(The transition cost entries are all zero.)
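The long-run cost rate of this chain is its steady-state distribution times the combined state costs. A sketch (power iteration; the chain is aperiodic since p(New to New) = 0.4 > 0):

```python
# Steady-state distribution of the optimal-policy chain and its cost rate.
# Matrix and combined costs transcribed from the Bulb1 tables above.
P = [[0.4, 0.6, 0.0, 0.0, 0.0],
     [0.3, 0.0, 0.7, 0.0, 0.0],
     [0.5, 0.0, 0.0, 0.5, 0.0],
     [1.0, 0.0, 0.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0, 0.0]]
cost = [2.5, 0.5, 0.5, -0.3, -0.3]

pi = [1.0, 0.0, 0.0, 0.0, 0.0]           # start in New
for _ in range(200):                      # power iteration: pi <- pi P
    pi = [sum(pi[i] * P[i][j] for i in range(5)) for j in range(5)]

gain = sum(p * c for p, c in zip(pi, cost))
print([round(p, 4) for p in pi], round(gain, 4))
# -> [0.4484, 0.2691, 0.1883, 0.0942, 0.0] 1.3215
```

The resulting cost rate, 1.3215 per month, matches the gain reported by the policy-iteration solvers later in this section.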
DP Solver
Type: MDP   Title: Bulb1   Goal: Min
States: 5   Actions: 2 (2 per state)   Events: 2 (2 per action)
Iteration Type: Policy
Policy Steps: 2   Max. Val. Difference: 3E-14   Sum Prob. Difference: 0
Time Measure: Month   Economic Measure: Cost
Discount Rate: 1.000%   Step Interval: 1   Beta: 0.9901

State List
Index  State  State Cost  Decision Index  Decision Range  Action Index
1      New    2           1               2               1
2      1-mo   0           3               2               1
3      2-mo   0           3               2               1
4      3-mo   0           4               2               2
5      4-mo   0           4               2               2

Worksheet formulas:
=SUMPRODUCT(Bulb_DPDecProb,AH14:AH23)
=SUMPRODUCT(Bulb_DPStateNextValue,AH14:AL14)
=INDEX(Bulb_DPActionReward,Bulb_DPDecActionIndex)
=Bulb_DPDecReward+Bulb_DPDecTransReward+(Bulb_DPDecFuture/(1+Bulb_DPDiscount))

Gain: 0   Exp. Value: 1.3215
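The last worksheet formula values each decision as decision reward + transition reward + future value discounted by 1/(1+i). As a consistency check, the converged discounted values reported in this section (134.466 for New through 132.834 for 4-mo) should be a fixed point of the minimum-cost Bellman operator. A sketch, with the model data transcribed from the tables in this section:

```python
# Consistency check: the solver's converged discounted values should satisfy
# v(s) = min over actions of [state cost + action cost + E[v(next)]/(1+i)].
P = {"Inspect": [[0.4, 0.6, 0, 0, 0], [0.3, 0, 0.7, 0, 0], [0.5, 0, 0, 0.5, 0],
                 [0.7, 0, 0, 0, 0.3], [0.8, 0, 0, 0, 0.2]],
     "Replace": [[1, 0, 0, 0, 0]] * 5}
state_cost = [2.0, 0.0, 0.0, 0.0, 0.0]
action_cost = {"Inspect": 0.5, "Replace": -0.3}
beta = 1 / 1.01                       # discount factor for a 1% monthly rate

v = [134.466, 132.498, 132.827, 132.834, 132.834]   # solver's reported values
for s in range(5):
    bellman = min(state_cost[s] + action_cost[a]
                  + beta * sum(p * vj for p, vj in zip(P[a][s], v))
                  for a in ("Inspect", "Replace"))
    assert abs(bellman - v[s]) < 0.005   # fixed point to display precision
```

For example, at state New: 2.5 + (0.4 * 134.466 + 0.6 * 132.498) / 1.01 = 134.466, reproducing the reported value.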
Action List                   Event List
Index  Action   Cost          Index  Event    Cost    Prob.
1      Inspect  0.5           1      Survive  0       1
2      Replace  -0.3          2      Fail     0       0
3      Null     0             3      Null     0       0
4      NA       999999        4      NA       999999  0

State results
State  Action   Decision Value  Step Value  Value   Last Prob.
New    Inspect  132.47          2.5         134.47  0.4484
1-mo   Inspect  132.5           0.5         132.5   0.2691
2-mo   Inspect  132.83          0.5         132.83  0.1883
3-mo   Replace  132.83          -0.3        132.83  0.0942
4-mo   Replace  132.83          -0.3        132.83  0

Next Value:  134.47  132.5  132.83  132.83  132.83
State Prob.: 0.2601  0.2691  0.1883  0  0

Decision List (decision cost, expected transition cost, expected cost, decision value, probability, transition probabilities, row sum)
Index  Decision        Dec. Cost  Trans. Cost  Exp. Cost  Dec. Value  Prob.   New  1-mo  2-mo  3-mo  4-mo  Row Sum
1      New / Inspect   0.5        0            133.29     132.47      0.4484  0.4  0.6   0     0     0     1
2      New / Replace   -0.3       0            134.47     132.83      0       1    0     0     0     0     1
3      1-mo / Inspect  0.5        0            133.32     132.5       0.2691  0.3  0     0.7   0     0     1
4      1-mo / Replace  -0.3       0            134.47     132.83      0       1    0     0     0     0     1
5      2-mo / Inspect  0.5        0            133.65     132.83      0       0.5  0     0     0.5   0     1
6      2-mo / Replace  -0.3       0            134.47     132.83      0       1    0     0     0     0     1
7      3-mo / Inspect  0.5        0            133.98     133.15      0       0.7  0     0     0     0.3   1
8      3-mo / Replace  -0.3       0            134.47     132.83      0       1    0     0     0     0     1
9      4-mo / Inspect  0.5        0            134.14     133.31      0       0.8  0     0     0     0.2   1
10     4-mo / Replace  -0.3       0            134.47     132.83      0       1    0     0     0     0     1
Min Sum: 1   Max Sum: 1   Min Prob.: 0
Variables: V1 (New), V2 (1-mo), V3 (2-mo), V4 (3-mo), V5 (4-mo)

Decision  State  Action  Name            New      1-mo     2-mo     3-mo    4-mo    Immediate Cost
1         1      1       New / Inspect   0.604    -0.5941  0        0       0       2.5
2         1      2       New / Replace   0.0099   0        0        0       0       1.7
3         2      1       1-mo / Inspect  -0.297   1        -0.6931  0       0       0.5
4         2      2       1-mo / Replace  -0.9901  1        0        0       0       -0.3
5         3      1       2-mo / Inspect  -0.495   0        1        -0.495  0       0.5
6         3      2       2-mo / Replace  -0.9901  0        1        0       0       -0.3
7         4      1       3-mo / Inspect  -0.6931  0        0        1       -0.297  0.5
8         4      2       3-mo / Replace  -0.9901  0        0        1       0       -0.3
9         5      1       4-mo / Inspect  -0.7921  0        0        0       0.802   0.5
10        5      2       4-mo / Replace  -0.9901  0        0        0       1       -0.3
Linear Model
Name: Bulb1_LP   Type: LP1   Goal: Max   Profit: 134.47

Variables:      New      1-mo     2-mo     3-mo     4-mo
Values:         134.466  132.498  132.827  132.834  132.834
Lower Bounds:   -10000   -10000   -10000   -10000   -10000
Upper Bounds:   10000    10000    10000    10000    10000

Coefficients (3-mo and 4-mo columns):
Decisions 1-4: 0, 0   Decision 5: -0.495, 0   Decision 6: 0, 0
Decision 7: 1, -0.297   Decision 8: 1, 0   Decision 9: 0, 0.80198   Decision 10: 0, 1
Sensitivity Analysis for Worksheet Bulb1_LP
Constraint Analysis
Num.  Name  Value  Status  Shadow Price  Constraint Limit  Range Lower Limit  Range Upper Limit
1 New / Inspect 2.5 Upper 45.7095 2.5 2.4661 3.3146
2 New / Replace 1.3313 Basic 0 1.7 1.3313 ---
3 1-mo / Inspect 0.5 Upper 27.1541 0.5 0.4429 0.9656
4 1-mo / Replace -0.6359 Basic 0 -0.3 -0.6359 ---
5 2-mo / Inspect 0.5 Upper 18.8197 0.5 -518.0384 0.5084
6 2-mo / Replace -0.3076 Basic 0 -0.3 -0.3076 ---
7 3-mo / Inspect 0.1846 Basic 0 0.5 0.1846 ---
8 3-mo / Replace -0.3 Upper 9.3167 -0.3 -991.3403 -0.2831
9 4-mo / Inspect 0.023 Basic 0 0.5 0.023 ---
10 4-mo / Replace -0.3 Upper 0 -0.3 -1.362 0.2947
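The constraint rows above correspond to the standard LP form of a discounted MDP: each decision contributes one constraint v(s) - beta * (expected next value) <= immediate cost, with rows equal to the I - beta*P coefficients listed earlier. The objective appears to weight only the New state (the shadow prices sum to 101 = 1/(1-beta), consistent with a unit weight on New); treating that as an assumption, a sketch using scipy:

```python
# LP formulation of the discounted bulb MDP (a sketch using scipy.optimize.linprog).
# Maximize v(New) subject to v_s - beta * sum_j p_j v_j <= immediate cost,
# one constraint per decision; data transcribed from the coefficient table above.
from scipy.optimize import linprog

beta = 0.9901
# (state index, transition probabilities, immediate cost) for the 10 decisions
decisions = [
    (0, [0.4, 0.6, 0.0, 0.0, 0.0], 2.5),   # New / Inspect
    (0, [1.0, 0.0, 0.0, 0.0, 0.0], 1.7),   # New / Replace
    (1, [0.3, 0.0, 0.7, 0.0, 0.0], 0.5),   # 1-mo / Inspect
    (1, [1.0, 0.0, 0.0, 0.0, 0.0], -0.3),  # 1-mo / Replace
    (2, [0.5, 0.0, 0.0, 0.5, 0.0], 0.5),   # 2-mo / Inspect
    (2, [1.0, 0.0, 0.0, 0.0, 0.0], -0.3),  # 2-mo / Replace
    (3, [0.7, 0.0, 0.0, 0.0, 0.3], 0.5),   # 3-mo / Inspect
    (3, [1.0, 0.0, 0.0, 0.0, 0.0], -0.3),  # 3-mo / Replace
    (4, [0.8, 0.0, 0.0, 0.0, 0.2], 0.5),   # 4-mo / Inspect
    (4, [1.0, 0.0, 0.0, 0.0, 0.0], -0.3),  # 4-mo / Replace
]
A_ub, b_ub = [], []
for s, probs, cost in decisions:
    row = [-beta * p for p in probs]
    row[s] += 1.0                  # constraint row: e_s - beta * P_row
    A_ub.append(row)
    b_ub.append(cost)

res = linprog(c=[-1, 0, 0, 0, 0],  # maximize v(New)
              A_ub=A_ub, b_ub=b_ub,
              bounds=[(-10000, 10000)] * 5, method="highs")
print(round(-res.fun, 2))          # optimal v(New), close to the reported 134.47
```

Since v <= (cost + discounted expected future value) for every decision forces v below the true optimal values componentwise, maximizing v(New) recovers the optimal discounted value of the New state.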
DP Solver
Type: MDP   Title: Bulb_DP2   Goal: Min
States: 5   Actions: 2 (2 per state)   Events: 2 (2 per action)
Iteration Type: Fixed/Value   Iteration Steps: 37   Stop Dif.: 0.00001
Value Error: 8.73E-6   Prob. Error: 8.75E-8
Time Measure: Month   Economic Measure: Cost
Discount Rate: 1.000%   Discount Factor: 99.0%   Step Interval: 0
Gain: 0   Exp. Value: 1.32152

State List
Index  State  State Cost  Final Cost  Decision Index
1      New    2           0           1
2      1-mo   0           0           3
3      2-mo   0           0           5
4      3-mo   0           0           8
5      4-mo   0           0           10
Decision List (converged, step 37)
Range  Action     Decision Value  Step Value  Value    Last Prob.
2      1 Inspect  132.466         2.5         134.466  0.44843
2      1 Inspect  132.498         0.5         132.498  0.26906
2      1 Inspect  132.827         0.5         132.827  0.18834
2      2 Replace  132.834         -0.3        132.834  0.09417
2      2 Replace  132.834         -0.3        132.834  0
Decisions 1-10 map to states 1, 1, 2, 2, 3, 3, 4, 4, 5, 5.

Convergence diagnostics: per-state value differences are below 5E-8 and next-value differences below 2E-8 at the final step; the remaining error measures are on the order of 1E-5 or smaller.
Next Value:  134.5  132.5  132.8  132.8  132.8
State Prob.: 0.448  0.269  0.188  0.094  0

Intermediate steps (per state: action, discounted value, state probability)
Step 7: Inspect 9.968627 (0.44823); Inspect 7.981199 (0.25743); Inspect 8.360706 (0.18463); Replace 8.368699 (0.1097); Replace 8.368699 (0)
Step 8: Inspect 11.18928 (0.45854); Inspect 9.255527 (0.26894); Inspect 9.577884 (0.1802); Replace 9.569928 (0.09232); Replace 9.569928 (0)
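Iterated to convergence rather than a fixed horizon, the same recursion reproduces the values reported for Bulb_DP2. A sketch (pure Python; stop tolerance tightened so the limit is resolved to display precision):

```python
# Value iteration to convergence for the discounted bulb MDP (a sketch).
state_cost = [2.0, 0.0, 0.0, 0.0, 0.0]          # New, 1-mo, 2-mo, 3-mo, 4-mo
action_cost = {"Inspect": 0.5, "Replace": -0.3}
P = {"Inspect": [[0.4, 0.6, 0, 0, 0], [0.3, 0, 0.7, 0, 0], [0.5, 0, 0, 0.5, 0],
                 [0.7, 0, 0, 0, 0.3], [0.8, 0, 0, 0, 0.2]],
     "Replace": [[1, 0, 0, 0, 0]] * 5}
beta = 1 / 1.01

v = [0.0] * 5
while True:
    new_v = [min(state_cost[s] + action_cost[a]
                 + beta * sum(p * vj for p, vj in zip(P[a][s], v))
                 for a in ("Inspect", "Replace"))
             for s in range(5)]
    if max(abs(a - b) for a, b in zip(new_v, v)) < 1e-9:
        break
    v = new_v

print([round(x, 3) for x in v])
# close to the reported 134.466, 132.498, 132.827, 132.834, 132.834
```

Successive differences shrink geometrically at rate beta, so tightening the stop tolerance by a factor of ten costs only a few hundred extra sweeps.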
DP Solver
Type: MDP   Title: Bulb2   Goal: Min
States: 5   Actions: 2 (2 per state)   Events: 2 (2 per action)
Iteration Type: Policy
Policy Steps: 2   Max. Val. Difference: 2E-16   Sum Prob. Difference: 0
Time Measure: Month   Economic Measure: Cost
Discount Rate: 0.000%   Step Interval: 1
Gain: 1.3215   Exp. Value: 1.32152

State List
Index  State  State Cost  Action     Decision Value  Step Value  State Value  Last Prob.
1      New    2           1 Inspect  0.943           2.5         2.943        0.44843
2      1-mo   0           1 Inspect  0.9789          0.5         0.9789       0.26906
3      2-mo   0           1 Inspect  1.3108          0.5         1.3108       0.18834
4      3-mo   0           2 Replace  1.3215          -0.3        1.3215       0.09417
5      4-mo   0           2 Replace  1.3215          -0.3        1.3215       0

Decision List
Index  Decision        Cost  Last Prob.
1      New / Inspect   0.5   0.4484
2      New / Replace   -0.3  0
3      1-mo / Inspect  0.5   0.2691
4      1-mo / Replace  -0.3  0
5      2-mo / Inspect  0.5   0
6      2-mo / Replace  -0.3  0
7      3-mo / Inspect  0.5   0
8      3-mo / Replace  -0.3  0
9      4-mo / Inspect  0.5   0
10     4-mo / Replace  -0.3  0

Solution (relative values and gain)
h(New) = 1.6215, h(1-mo) = -0.3426, h(2-mo) = -0.0108, h(3-mo) = 0, h(4-mo) = 0
Gain = 1.3215
Const.:   1.6215  -0.3426  -0.0108  0  0
SS Prob.: 0.2601  0.2691   0.1883   0  0

Final Step 2
Index  State  Action     Step Value  Next Value  Last Prob.
1      New    1 Inspect  2.5         1.62152     0.44843
2      1-mo   1 Inspect  0.5         -0.3426     0.26906
3      2-mo   1 Inspect  0.5         -0.0108     0.18834
4      3-mo   2 Replace  -0.3        0           0.09417
5      4-mo   2 Replace  -0.3        0           0

Policy trace
Step 1 (replace everywhere):  Gain 1.7   Exp. Value 1.7
Action     Step Value  Next Value  Last Prob.
2 Replace  1.7         2           1
2 Replace  -0.3        0           0
2 Replace  -0.3        0           0
2 Replace  -0.3        0           0
2 Replace  -0.3        0           0
Step 2 (optimal policy):  Gain 1.32152   Exp. Value 1.32152
Action     Step Value  Next Value  Last Prob.
1 Inspect  2.5         1.62152     0.44843
1 Inspect  0.5         -0.3426     0.26906
1 Inspect  0.5         -0.0108     0.18834
2 Replace  -0.3        0           0.09417
2 Replace  -0.3        0           0
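With the discount rate at 0%, the policy-evaluation step solves Howard's average-cost equations g + h(s) = c(s) + sum over s' of p(s'|s) h(s'), with one relative value pinned to zero. A sketch that recovers the solution reported above (pure Python, with a small Gaussian-elimination solve; data from the Bulb2 tables):

```python
# Average-cost evaluation of the optimal policy (a sketch): solve
#   g + h(s) = c(s) + sum_s' p(s'|s) h(s')   with h(4-mo) = 0.
P = [[0.4, 0.6, 0.0, 0.0, 0.0],   # New: Inspect
     [0.3, 0.0, 0.7, 0.0, 0.0],   # 1-mo: Inspect
     [0.5, 0.0, 0.0, 0.5, 0.0],   # 2-mo: Inspect
     [1.0, 0.0, 0.0, 0.0, 0.0],   # 3-mo: Replace
     [1.0, 0.0, 0.0, 0.0, 0.0]]   # 4-mo: Replace
c = [2.5, 0.5, 0.5, -0.3, -0.3]   # combined state + action costs
n = 5

# Unknowns x = (g, h0, h1, h2, h3); h4 is pinned to 0.
A = [[1.0] + [(1.0 if j == i else 0.0) - P[i][j] for j in range(n - 1)]
     for i in range(n)]
b = c[:]

# Plain Gaussian elimination with partial pivoting.
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        A[r] = [x - f * y for x, y in zip(A[r], A[col])]
        b[r] -= f * b[col]
x = [0.0] * n
for r in range(n - 1, -1, -1):
    x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]

g, h = x[0], x[1:] + [0.0]
print(round(g, 4), [round(v, 4) for v in h])
# gain about 1.3215, relative values about 1.6215, -0.3426, -0.0108, 0, 0
```

The system is nonsingular because the chain under this policy has a single recurrent class, so the gain and relative values are uniquely determined once h(4-mo) is fixed.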
DP Solver
Type: MDP   Title: Bulb_DP3   Goal: Min
States: 5   Actions: 2 (2 per state)   Events: 3 (2 per action)
Iteration Type: Value   Iteration Steps: 0   Stop Dif.: 0.00001
Value Error: 99999   Prob. Error: 99999
Time Measure: Month   Economic Measure: Cost
Discount Rate: 1.000%   Discount Factor: 99.0%   Step Interval: 0
Gain: 0   Exp. Value: 1.7

State List
Index  State  State Cost  Final Cost  Decision Index
1      New    2           0           2
2      1-mo   0           0           4
3      2-mo   0           0           4
4      3-mo   0           0           4
5      4-mo   0           0           4
Transition Probabilities (optimal policy)
Decision        New  1-mo  2-mo  3-mo  4-mo
New / Inspect   0.4  0.6   0     0     0
1-mo / Inspect  0.3  0     0.7   0     0
2-mo / Inspect  0.5  0     0     0.5   0
3-mo / Replace  1    0     0     0     0
4-mo / Replace  1    0     0     0     0

Costs
State  State Cost
New    2.5
1-mo   0.5
2-mo   0.5
3-mo   0.5
4-mo   0.5

Costs
State  State Cost
New    1.7
1-mo   0
2-mo   0
3-mo   0
4-mo   0

State and action costs
State  State Cost  Action Cost
New    2           0.5
1-mo   0           0.5
2-mo   0           0.5
3-mo   0           -0.3
4-mo   0           -0.3

Transition Probabilities (all decisions)
Decision        New  1-mo  2-mo  3-mo  4-mo  Decision Cost
New / Inspect   0.4  0.6   0     0     0     0.5
New / Replace   1    0     0     0     0     -0.3
1-mo / Inspect  0.3  0     0.7   0     0     0.5
1-mo / Replace  1    0     0     0     0     -0.3
2-mo / Inspect  0.5  0     0     0.5   0     0.5
2-mo / Replace  1    0     0     0     0     -0.3
3-mo / Inspect  0.7  0     0     0     0.3   0.5
3-mo / Replace  1    0     0     0     0     -0.3
4-mo / Inspect  0.8  0     0     0     0.2   0.5
4-mo / Replace  1    0     0     0     0     -0.3

Transition Probabilities (optimal policy, with decision costs)
Decision        New  1-mo  2-mo  3-mo  4-mo  Decision Cost
New / Inspect   0.4  0.6   0     0     0     0.5
1-mo / Inspect  0.3  0     0.7   0     0     0.5
2-mo / Inspect  0.5  0     0     0.5   0     0.5
3-mo / Replace  1    0     0     0     0     -0.3
4-mo / Replace  1    0     0     0     0     -0.3
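The tables above contain everything policy iteration needs. A sketch of average-cost Howard policy iteration (pure Python; assumes each policy's chain has a single recurrent class so the evaluation system is nonsingular), starting from the replace-everywhere policy as the solvers do:

```python
# Average-cost policy iteration for the bulb model (a sketch).
# Transition rows and costs transcribed from the tables above.
P = {"Inspect": [[0.4, 0.6, 0, 0, 0], [0.3, 0, 0.7, 0, 0], [0.5, 0, 0, 0.5, 0],
                 [0.7, 0, 0, 0, 0.3], [0.8, 0, 0, 0, 0.2]],
     "Replace": [[1, 0, 0, 0, 0]] * 5}
state_cost = [2.0, 0.0, 0.0, 0.0, 0.0]
action_cost = {"Inspect": 0.5, "Replace": -0.3}
n = 5

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]; b = b[:]
    m = len(b)
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
            b[r] -= f * b[c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, m))) / A[r][r]
    return x

def evaluate(policy):
    """Solve g + h(s) = c(s,a) + sum p h(s'), pinning h(last state) = 0."""
    A = [[1.0] + [(1.0 if j == i else 0.0) - P[policy[i]][i][j]
                  for j in range(n - 1)] for i in range(n)]
    b = [state_cost[i] + action_cost[policy[i]] for i in range(n)]
    x = solve(A, b)
    return x[0], x[1:] + [0.0]        # gain, relative values

policy = ["Replace"] * n              # the solvers' starting policy
while True:
    g, h = evaluate(policy)
    improved = [min(("Inspect", "Replace"), key=lambda a:
                    state_cost[s] + action_cost[a]
                    + sum(p * hv for p, hv in zip(P[a][s], h)))
                for s in range(n)]
    if improved == policy:
        break
    policy = improved

print(policy, round(g, 4))
# -> ['Inspect', 'Inspect', 'Inspect', 'Replace', 'Replace'] 1.3215
```

Starting from replace-everywhere (gain 1.7), one improvement step already yields the inspect-until-3-months policy, and the next evaluation confirms it with gain 1.3215, matching the two policy steps the solver reports.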
Problem sizes
                   State  Action  Event  State   Decision  Transition
Problem            Vars   Vars    Vars   Blocks  Blocks    Blocks      States  Actions  Events  Decisions  Transitions
Cab - MDP          x      x       x      x       x         x           3       3        3       6          27
Baseball - MDP     4      1       1      2       8         26          33      5        14      78         402
Replacement - MDP  1      1       1      1       1         5           40      41       2       1640       3239
Sequence - MDP     6      1       1      0       2         1           729     7        2       2188       3646
Birth-Death-MC     1      0       2      1       0         3           11      0        2       0          32
Birth-Death-MDP    1      1       2      1       3         3           11      2        2       16         47
Investment DDP     2      1       0      2       0         1           100     5        0       372        372
Queue - MDP        2      1       1      0       2         2           55      3        3       143        416
Doors - MDP        2      1       1      3       0         0           17      5        4       73         227