distribution shows the results we would obtain if we took many probability samples and
computed the statistics for each sample. A table listing all possible values that a random
variable can take on together with the associated probabilities is called a probability
distribution.
The probability distribution of X, where X is the number of spots showing when a six-sided symmetric die is rolled, is given below:
X      1    2    3    4    5    6
f(X)  1/6  1/6  1/6  1/6  1/6  1/6
The probability distribution thus lists the probabilities f(X) associated with the different values taken by the random variable X.
Knowledge of the expected behaviour of a phenomenon, or of its expected frequency distribution, is of great help in a large number of practical problems. Expected distributions serve as benchmarks against which to compare observed distributions, and they act as substitutes for actual distributions when the latter are costly to obtain or cannot be obtained at all. We now introduce a few discrete and continuous probability distributions that have proved particularly useful as models for real-life phenomena. In every case the distribution will be specified by presenting the probability function of the random variable.
2.1 Uniform Distribution
A random variable X is said to have a discrete uniform distribution over k points if its probability function is
f(x; k) = 1/k   for x = 1, 2, ..., k
Example 2.1: Suppose that a plant is selected at random from a plot of 10 plants to record
the height. Each plant has the same probability 1/10 of being selected. If it is assumed that
the plants have been numbered in some way from 1 to 10, the distribution is uniform with
f(x;10) = 1/10 for x=1,...,10.
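As a quick check of this uniform model, the relative frequencies of repeated random selections can be simulated; the sketch below is illustrative only (the number of trials is an arbitrary choice, not part of the text).

```python
import random
from collections import Counter

# Simulate Example 2.1: each of k = 10 plants is equally likely to be chosen,
# so the relative frequency of every label should approach f(x; 10) = 1/10.
k = 10
trials = 100_000

counts = Counter(random.randint(1, k) for _ in range(trials))

for x in range(1, k + 1):
    print(x, round(counts[x] / trials, 3))
```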
2.2 Binomial Distribution
The binomial distribution is a probability distribution for one set of dichotomous alternatives, i.e. success or failure. More precisely, the binomial distribution refers to a sequence of events which possess the following properties:
- An experiment is performed under the same conditions for a fixed number of trials, say n.
- In each trial, there are only two possible outcomes of the experiment: success or failure.
- The probability of a success, denoted by p, remains constant from trial to trial.
- The trials are independent, i.e. the outcomes of any trial or sequence of trials do not affect the outcomes of subsequent trials.
When p is not equal to 1/2, the distribution is skewed: if p is less than 1/2, the distribution is positively skewed, and when p is more than 1/2, the distribution is negatively skewed.
If n is large and if neither p nor q is too close to zero, the binomial distribution can be closely approximated by a normal distribution with standardized variable given by
Z = (X − np) / √(npq)
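A minimal Python sketch of this approximation follows; the values n = 100, p = 0.3 and k = 35 are illustrative choices of ours, not taken from the text.

```python
import math

# Compare an exact binomial probability with the normal approximation
# based on Z = (X - np) / sqrt(npq).
n, p, k = 100, 0.3, 35
q = 1 - p

exact = math.comb(n, k) * p**k * q**(n - k)

mu, sigma = n * p, math.sqrt(n * p * q)

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Continuity correction: P(X = k) ~ P(k - 0.5 < Y < k + 0.5) for Y normal.
approx = phi((k + 0.5 - mu) / sigma) - phi((k - 0.5 - mu) / sigma)

print(exact, approx)   # the two values should be close for large n
```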
X              0    1    2    3    4    5    6    7    8    9   10
Frequency (f)  6   20   28   12    8    6    0    0    0    0    0
Step 1: Calculate the mean, x̄ = Σ fX / Σ f
The Standard Deviation (SD): The uncertainty (expressed as 1 SD) in the measurement of a number of random events equals the square root of the total number of events, i.e.
S.D. = √(total events)
Radioactive decay and its detection is used to illustrate this feature of the Poisson
distribution for two reasons. Most biologists have some experience with radioactivity
measurements; more important, radioactive decay is a true random process. In fact, it is
the only truly random process known in nature. For this latter reason, we can make
confident predictions about its behavior.
Suppose there is a radioactive sample that registers about 1000 cpm. The measurements
are to be reported along with an uncertainty expressed as a standard deviation (SD). We
could count the sample 10 times for one minute each and then calculate the mean and SD
of the 10 determinations. However, the important property of processes described by the
Poisson distribution is that the SD is the square root of the total counts registered. To
illustrate, the table shows the results of counting the radioactive sample for different time
intervals (with some artificial variability thrown in).
Time (min)   Total Counts   SD (counts)   Reported cpm   SD (in cpm)   Rel. Error as %
0.1                 98            10            980            100            10
1.0              1,020            32          1,020             32             3.1
10               9,980           100            998             10             1.0
100            101,000           318          1,010              3             0.3
Reported cpm is Total Counts/Time; SD (in cpm) is SD (counts)/Time;
and Relative Error is SD (in cpm)/Reported cpm, expressed as %.
Comparing every other line shows that a 100-fold increase in counting time increases SD,
but only by 10-fold. At the same time, the relative error decreases by 10-fold. The general
point here is that the experimenter can report the 1000 cpm value to any degree of
precision desired simply by choosing the appropriate time interval for measurement. There
is no advantage whatever in using multiple counting periods. Thus, counting error is
distinguished from experimental error in that the latter can only be estimated with multiple
measurements.
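The arithmetic behind the table can be written out in a few lines of Python; the helper name below is ours, not from the text.

```python
import math

def counting_stats(total_counts, minutes):
    """Poisson counting statistics: SD equals the square root of the counts."""
    sd_counts = math.sqrt(total_counts)
    cpm = total_counts / minutes
    sd_cpm = sd_counts / minutes
    rel_error_pct = 100 * sd_cpm / cpm
    return round(cpm), round(sd_cpm, 1), round(rel_error_pct, 2)

# Reproduce two rows of the table above.
print(counting_stats(1_020, 1.0))      # about 1020 cpm, 32 cpm, 3.1 %
print(counting_stats(101_000, 100.0))  # about 1010 cpm, 3.2 cpm, 0.31 %
```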
N(P1) = N(P0) × x̄ / 1
N(P2) = N(P1) × x̄ / 2, and so on,
where x̄ is the mean of the fitted Poisson distribution and N(P0) = N e^(−x̄).
Exercise 2.2: The following mutated DNA segments were observed in 325 individuals:

Mutated DNA segments     0     1     2     3     4
Number of individuals   211    90    19     5     0

Fit a Poisson distribution to the data.
Step 1: Calculate the mean
Step 2: Find the different terms N(P0), N(P1),... i.e. the expected frequencies.
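A short Python sketch of the two steps (it uses the Poisson probability function P(x) = e^(−m) m^x / x! directly rather than the recursion given above):

```python
import math

# Fit a Poisson distribution to the data of Exercise 2.2.
x = [0, 1, 2, 3, 4]            # mutated DNA segments
f = [211, 90, 19, 5, 0]        # number of individuals
N = sum(f)                     # 325

# Step 1: the mean of the observed distribution.
mean = sum(xi * fi for xi, fi in zip(x, f)) / N

# Step 2: expected frequencies N * P(x), with P(x) = e^(-m) m^x / x!
expected = [N * math.exp(-mean) * mean**xi / math.factorial(xi) for xi in x]

print(round(mean, 3), [round(e, 1) for e in expected])
```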
2.4 Negative Binomial Distribution
The negative binomial distribution is very similar to the binomial probability model. It is applicable when the following conditions hold:
- An experiment is performed under the same conditions until a fixed number of successes, say c, are achieved.
- In each trial, there are only two possible outcomes of the experiment: success or failure.
- The probability of a success, denoted by p, remains constant from trial to trial.
- The trials are independent, i.e. the outcome of any trial or sequence of trials does not affect the outcomes of subsequent trials.
The only difference between the binomial model and the negative binomial model lies in the first condition.
Consider a sequence of Bernoulli trials with p as the probability of success. In the sequence, successes and failures occur randomly, and in each trial the probability of success is p. Let us investigate how many trials are needed to reach the rth success. Here r is fixed; let the number of failures preceding the rth success be x (= 0, 1, ...). The total number of trials performed to reach the rth success will then be x + r, and the probability that the rth success occurs at the (x + r)th trial is
P(X = x) = C(x + r − 1, r − 1) p^r q^x ;   x = 0, 1, 2, ....
Example 2.3: Suppose that 30% of the items taken from the end of a production line are
defective. If the items taken from the line are checked until 6 defective items are found,
what is the probability that 12 items are examined?
Solution: Suppose the occurrence of a defective item is a success. Examining 12 items means that x = 12 − 6 = 6 failures precede the 6th success, so the required probability is
P(X = 6) = C(6 + 6 − 1, 6 − 1) (0.3)^6 (0.7)^6 = C(11, 5) (0.3)^6 (0.7)^6 ≈ 0.0396.
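The same calculation as a small Python sketch (the function name is ours):

```python
from math import comb

def nbinom_pmf(x, r, p):
    """Probability of x failures before the r-th success."""
    return comb(x + r - 1, r - 1) * p**r * (1 - p)**x

# Example 2.3: r = 6 defectives ("successes"), p = 0.3; examining 12 items
# means x = 12 - 6 = 6 failures precede the 6th success.
print(nbinom_pmf(6, 6, 0.3))   # about 0.0396
```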
Note: If we take r = 1, i.e. the first success, then P[X = x] = p q^x, x = 0, 1, 2, ..., which is the probability distribution of X, the number of failures preceding the first success. This distribution is called the geometric distribution.
2.5 Hypergeometric Distribution
The hypergeometric distribution occupies a place of great significance in statistical theory. It applies to sampling without replacement from a finite population whose elements can be classified into two categories - one which possesses a certain characteristic and another which does not. The categories could be male and female, employed and unemployed, etc. When n random selections are made without replacement from the population, each subsequent draw is dependent and the probability of a success changes on each draw. The following conditions characterise the hypergeometric distribution:
The result of each draw can be classified into one of the two categories.
The probability of a success changes on each draw.
Successive draws are dependent.
The drawing is repeated a fixed number of times.
P(r) = C(X, r) C(N − X, n − r) / C(N, n)   for r = 0, 1, 2, ..., [n, X]
where N is the population size and X is the number of units in the population possessing the characteristic.
The symbol [n, X] means the smaller of n or X. This distribution may be used to estimate the number of wild animals in forests or to estimate the number of fish in a lake. The hypergeometric distribution bears a very interesting relationship to the binomial distribution: when N increases without limit, the hypergeometric distribution approaches the binomial distribution. Hence, binomial probabilities may be used as approximations to hypergeometric probabilities when n/N is small.
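A minimal sketch of this probability function in Python (the numbers in the example call are our own illustration, in the spirit of the fish-in-a-lake application):

```python
from math import comb

def hypergeom_pmf(r, N, X, n):
    """P(r successes in a sample of n drawn without replacement from a
    population of N units, X of which possess the characteristic)."""
    return comb(X, r) * comb(N - X, n - r) / comb(N, n)

# Illustrative numbers: N = 1000 fish, X = 100 of them tagged, a catch of n = 20.
print(hypergeom_pmf(2, 1000, 100, 20))   # P(exactly 2 tagged fish in the catch)
```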
3. Normal Distribution
Fig. 3.1: Shape of normal distribution
Many quantitative characteristics have distributions similar in form to the normal distribution's bell shape. For example, the height and weight of people, the IQ of people, the height of trees, the length of leaves, etc. are the types of measurements that produce random variables which can be successfully approximated by a normal random variable. The values of such random variables are produced by a measuring process, and the measurements tend to cluster symmetrically about a central value.
Definition: A random variate X, with mean μ and variance σ², is said to have a normal distribution if its probability density function is given by
P(x) = [1 / (σ √(2π))] e^( −(x − μ)² / (2σ²) ),   −∞ < x < ∞
Fig. 3.2: Area representing P(a < X < b) for a normal random variable
The probability that X is between a and b (b > a) can be determined by computing the probability that Z is between (a − μ)/σ and (b − μ)/σ. It is possible to determine the area in Fig. 3.2 by using tables (for areas under the normal curve) rather than by performing any mathematical computations.
Probability associated with a normal random variable X can be determined from Table 1
given at the end.
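For readers who prefer computing to table look-up, a minimal sketch of the standardization described above (the numbers in the example call are our own):

```python
import math

def std_normal_cdf(z):
    """P(Z <= z) for the standard normal variable."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def normal_prob(a, b, mu, sigma):
    """P(a < X < b) for X ~ N(mu, sigma^2), obtained by standardizing."""
    return std_normal_cdf((b - mu) / sigma) - std_normal_cdf((a - mu) / sigma)

# Illustrative values: X ~ N(50, 10^2), P(45 < X < 60).
print(normal_prob(45, 60, 50, 10))
```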
Fig. 3.3: Areas under the normal curve within one, two and three standard deviations of the mean (μ ± σ, μ ± 2σ, μ ± 3σ)
As indicated in Fig. 3.3, for any normal distribution, 68.27% of the values lie within one standard deviation of the mean, 95.45% of the values lie within two standard deviations of the mean, and 99.73% of the values lie within three standard deviations of the mean. Using the fact that the normal distribution is symmetric about its mean (zero in this case) and that the total area under the curve is 1 (half to the left of zero and half to the right), probabilities of the standard normal variable of the form P(0 < Z < a) are provided in Table 1 at the end. Using this table, the probability that Z lies in any interval on the real line may be determined.
For certain variables the nature of the distribution is not known. For the study of such variables, it is convenient to scale them in such a way as to produce a normal distribution. This is indispensable in mental test work: it is reasonable to assume that a selected group of children of a given age would show a normal distribution of intelligence test scores.
Exercises
1. The average rainfall in a certain town is 50 cm with a standard deviation of 15 cm.
Find the probability that in a year the rainfall in that town will be between 75 and 85
cm.
2. The average fuelwood value of the subabul plant is found to be 4,800 Kcal/kg with a standard deviation of 240. Find the probability that a subabul plant selected at random has a fuelwood value greater than 5,280 Kcal/kg.
4. Sampling Distributions
The word population or universe in Statistics is used to refer to any collection of
individuals or of their attributes or of results of operations which can be numerically
specified. Thus, we may speak of the populations of weights, heights of trees, prices of
wheat, etc. A population with a finite number of individuals or members is called a finite population. For instance, the population of ages of twenty boys in a class is an example of a finite population. A population with an infinite number of members is known as an infinite population. The population of pressures at various points in the atmosphere is an example of an infinite population.
sample and the process of such selection is called sampling. Sampling is resorted to when
either it is impossible to enumerate the whole population or when it is too costly to
enumerate in terms of time and money or when the uncertainty inherent in sampling is
more than compensated by the possibilities of errors in complete enumeration. To serve a
useful purpose sampling should be unbiased and representative.
The aim of the theory of sampling is to get as much information as possible, ideally the
whole of the information about the population from which the sample has been drawn. In
particular, given the form of the parent population we would like to estimate the
parameters of the population or specify the limits within which the population parameters
are expected to lie with a specified degree of confidence. It is, however, to be clearly
understood that the logic of the theory of sampling is the logic of induction, that is we pass
from particular (i.e., sample) to general (i.e., population) and hence all results will have to
be expressed in terms of probability.
The fundamental assumption underlying most of the theory of sampling is random
sampling which consists in selecting the individuals from the population in such a way
that each individual of the population has the same chance of being selected.
The variance of a population of N values X1, X2, ..., XN with mean μ is defined as
σ² = (1/N) Σ (Xi − μ)²,   the sum running over i = 1, ..., N.
Definition: If a sample of n values x1, x2, ..., xn is taken from a population, the sample mean (x̄) is defined as
x̄ = (1/n) Σ xi
and the sample variance as
s² = (1/n) Σ (xi − x̄)²,
where the sums run over i = 1, ..., n.
Central limit theorem: Let x1, x2, ..., xn be a simple random sample of size n drawn from an infinite population with a finite mean μ and standard deviation σ. Then the random variable x̄ has a limiting distribution that is normal with mean μ and standard deviation σ/√n.
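A simulation sketch of this theorem (the exponential population, the sample size and the number of repetitions are our own choices):

```python
import random
import statistics

# Means of samples drawn from a non-normal (exponential) population cluster
# around the population mean with spread close to sigma / sqrt(n).
random.seed(1)
n, reps = 50, 2000            # exponential(1) population: mean = SD = 1

means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(reps)]

print(statistics.fmean(means))   # close to 1
print(statistics.stdev(means))   # close to 1 / sqrt(50), about 0.141
```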
4.1 Chi-square Distribution
A random variable with the chi-square (χ²) distribution with n degrees of freedom has the probability density function
f(x) = [1 / (2^(n/2) Γ(n/2))] e^(−x/2) x^((n/2) − 1),   0 ≤ x < ∞
If samples of size n are drawn repeatedly from a normal population with variance σ², and the sample variance s² is computed for each sample, we obtain the value of a statistic χ². The distribution of the random variable χ², called chi-square, defined by
χ² = (n − 1) s² / σ²
is referred to as the χ² distribution with n − 1 degrees of freedom.
The mean of the χ² distribution equals the number of degrees of freedom.
The variance is twice its mean.
The mode is n − 2.
Let α be the probability and let X have a χ² distribution with n degrees of freedom; then χ²α(n) is the number such that
P[X ≥ χ²α(n)] = α
Thus χ²α(n) is the 100(1 − α) percentile, or upper 100α per cent point, of the chi-square distribution with n degrees of freedom. The 100α percentile point is the number χ²(1−α)(n) such that
P[X ≥ χ²(1−α)(n)] = 1 − α, i.e. the probability to the right of χ²(1−α)(n) is 1 − α.
Properties of the χ² Variate
- The sum of independent χ²-variates is a χ²-variate.
- The χ² distribution tends to the normal distribution as n becomes large.
Table 2 gives values of χ²α(n) for various values of α and n. From Table 2, χ²0.05(7) = 14.07 and χ²0.95(7) = 2.167.
Example 4.1: Let X have a chi-square distribution with five degrees of freedom. Then, using Table 2, P(1.145 ≤ X ≤ 12.83) = F(12.83) − F(1.145) = 0.975 − 0.050 = 0.925 and P(X ≥ 15.09) = 1 − F(15.09) = 1 − 0.99 = 0.01.
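The same probabilities can be computed instead of read from Table 2; a sketch assuming the SciPy library is available:

```python
from scipy.stats import chi2

df = 5
print(chi2.cdf(12.83, df) - chi2.cdf(1.145, df))   # about 0.925
print(chi2.sf(15.09, df))                          # about 0.01 (upper tail)
```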
4.2 t - Distribution
If Z is a N(0,1) random variable, U is a χ²(v) variate and if Z and U are independent, then
T = Z / √(U/v)
has a t-distribution with v degrees of freedom; its probability density function is
f(x) = 1 / { √v B(1/2, v/2) [1 + x²/v]^((v+1)/2) },   −∞ < x < ∞
Mean = 0
Variance = v / (v − 2),   v > 2
2
Example 4.2: Let T have a t-distribution with 7 degrees of freedom. Then from Table 3 we have
P(T ≤ 1.415) = 0.90
P(T ≥ 1.415) = 1 − P(T ≤ 1.415) = 0.10
P(−1.895 ≤ T ≤ 1.415) = 0.90 − 0.05 = 0.85
Example 4.3: Let T have a t-distribution with a variance of 5/4 (v = 10). Then
P(−1.812 ≤ T ≤ 1.812) = 0.90
t0.05(10) = 1.812 ;  t0.01(10) = 2.764
t0.99(10) = −2.764.
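A sketch of the same look-ups done numerically, assuming SciPy is available:

```python
from scipy.stats import t

print(t.ppf(0.95, 10))                        # about 1.812 = t_0.05(10)
print(t.cdf(1.812, 10) - t.cdf(-1.812, 10))   # about 0.90
print(t.ppf(0.99, 10))                        # about 2.764 = t_0.01(10)
```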
4.3 F-Distribution
One of the most important distributions in applied statistics is the F distribution. F
distribution is defined to be the ratio of two independent chi-square variates, each divided
by their respective degrees of freedom.
F = (χ1²/v1) / (χ2²/v2),
where χ1² is the value of a chi-square variate with v1 degrees of freedom and χ2² is the value of a chi-square variate with v2 degrees of freedom. The mathematical form of the p.d.f. of the F distribution is
f(x) = (v1/v2)^(v1/2) x^((v1/2) − 1) / { B(v1/2, v2/2) [1 + (v1/v2) x]^((v1 + v2)/2) },   0 ≤ x < ∞
To obtain an f value, select a random sample of size n1 from a normal population having variance σ1² and compute s1²/σ1². An independent random sample of size n2 is then selected from a second normal population having variance σ2², and s2²/σ2² is computed. The ratio of the two quantities s1²/σ1² and s2²/σ2² produces an f value. The distribution of all possible f values is called the F distribution. The number of degrees of freedom associated with the sample variance in the numerator is stated first, followed by the number of degrees of freedom associated with the sample variance in the denominator. Thus the curve of the F distribution depends not only on the two parameters v1 and v2 but also on the order in which we state them. Once these two values are given we can identify the curve.
Let fα be the f value above which we find an area equal to α. Table 4 gives values of fα only for α = 0.05 and for various combinations of the degrees of freedom v1 and v2. Hence the f value with 6 and 10 degrees of freedom, leaving an area of 0.05 to the right, is f0.05 = 3.22.
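The same value can be obtained numerically; a sketch assuming SciPy is available:

```python
from scipy.stats import f

# Upper 5% point of F with v1 = 6 and v2 = 10 degrees of freedom.
print(f.ppf(0.95, 6, 10))   # about 3.22
```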
The F-distribution is applied primarily in the analysis of variance, where we wish to test the equality of several means simultaneously. The F-distribution is also used to make inferences concerning the variances of two normal populations.
Table 1
The Normal Probability Integral or Area under the Normal Curve
(Entries give the area under the standard normal curve between 0 and Z.)

Z      .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
0.0  .0000  .0040  .0080  .0120  .0159  .0199  .0239  .0279  .0319  .0359
0.1  .0398  .0438  .0478  .0517  .0557  .0596  .0636  .0675  .0714  .0753
0.2  .0793  .0832  .0871  .0910  .0948  .0987  .1026  .1064  .1103  .1141
0.3  .1179  .1217  .1255  .1293  .1331  .1368  .1406  .1443  .1480  .1517
0.4  .1554  .1591  .1628  .1664  .1700  .1736  .1772  .1808  .1844  .1879
0.5  .1915  .1950  .1985  .2019  .2054  .2088  .2123  .2157  .2190  .2224
0.6  .2257  .2291  .2324  .2357  .2389  .2422  .2454  .2486  .2518  .2549
0.7  .2580  .2611  .2642  .2673  .2704  .2734  .2764  .2794  .2823  .2852
0.8  .2881  .2910  .2939  .2967  .2995  .3023  .3051  .3078  .3106  .3133
0.9  .3159  .3186  .3212  .3238  .3264  .3289  .3315  .3340  .3365  .3389
1.0  .3413  .3438  .3461  .3485  .3508  .3531  .3554  .3577  .3599  .3621
1.1  .3643  .3665  .3686  .3708  .3729  .3749  .3770  .3790  .3810  .3830
1.2  .3849  .3869  .3888  .3907  .3925  .3944  .3962  .3980  .3997  .4015
1.3  .4032  .4049  .4066  .4082  .4099  .4115  .4131  .4147  .4162  .4177
1.4  .4192  .4207  .4222  .4236  .4251  .4265  .4279  .4292  .4306  .4319
1.5  .4332  .4345  .4357  .4370  .4382  .4394  .4406  .4418  .4430  .4441
1.6  .4452  .4463  .4474  .4485  .4495  .4505  .4515  .4525  .4535  .4545
1.7  .4554  .4564  .4573  .4582  .4591  .4599  .4608  .4616  .4625  .4633
1.8  .4641  .4649  .4656  .4664  .4671  .4678  .4686  .4693  .4699  .4706
1.9  .4713  .4719  .4726  .4732  .4738  .4744  .4750  .4756  .4762  .4767
2.0  .4772  .4778  .4783  .4788  .4793  .4798  .4803  .4808  .4812  .4817
2.1  .4821  .4826  .4830  .4834  .4838  .4842  .4846  .4850  .4854  .4857
2.2  .4861  .4865  .4868  .4871  .4875  .4878  .4881  .4884  .4887  .4890
2.3  .4893  .4896  .4898  .4901  .4904  .4906  .4909  .4911  .4913  .4916
2.4  .4918  .4920  .4922  .4925  .4927  .4929  .4931  .4932  .4934  .4936
2.5  .4938  .4940  .4941  .4943  .4945  .4946  .4948  .4949  .4951  .4952
2.6  .4953  .4955  .4956  .4957  .4959  .4960  .4961  .4962  .4963  .4964
2.7  .4965  .4966  .4967  .4968  .4969  .4970  .4971  .4972  .4973  .4974
2.8  .4974  .4975  .4976  .4977  .4977  .4978  .4979  .4980  .4980  .4981
2.9  .4981  .4982  .4983  .4983  .4984  .4984  .4985  .4985  .4986  .4986
3.0  .4986  .4987  .4987  .4988  .4988  .4989  .4989  .4989  .4990  .4990
3.1  .4990  .4991  .4991  .4991  .4992  .4992  .4992  .4992  .4993  .4993
Table 2
Values of χ² with probability P of being exceeded in random sampling
n = degrees of freedom

 n    P=0.99   P=0.95   P=0.50   P=0.30   P=0.20   P=0.10   P=0.05   P=0.01
 1    0.0002    0.004     0.46     1.07     1.64     2.71     3.84     6.64
 2     0.020    0.103     1.39     2.41     3.22     4.61     5.99     9.21
 3     0.115     0.35     2.37     3.66     4.64     6.25     7.82    11.34
 4      0.30     0.71     3.36     4.88     5.99     7.78     9.49    13.28
 5      0.55     1.14     4.35     6.06     7.29     9.24    11.07    15.09
 6      0.87     1.64     5.35     7.23     8.56    10.64    12.59    16.81
 7      1.24     2.17     6.35     8.38     9.80    12.02    14.07    18.48
 8      1.65     2.73     7.34     9.52    11.03    13.36    15.51    20.09
 9      2.09     3.32     8.34    10.66    12.24    14.68    16.92    21.67
10      2.56     3.94     9.34    11.78    13.44    15.99    18.31    23.21
11      3.05     4.58    10.34    12.90    14.63    17.28    19.68    24.72
12      3.57     5.23    11.34    14.01    15.81    18.55    21.03    26.22
13      4.11     5.89    12.34    15.12    16.98    19.81    22.36    27.69
14      4.66     6.57    13.34    16.22    18.15    21.06    23.68    29.14
15      5.23     7.26    14.34    17.32    19.31    22.31    25.00    30.58
16      5.81     7.96    15.34    18.42    20.46    23.54    26.30    32.06
17      6.41     8.67    16.34    19.51    21.62    24.77    27.59    33.41
18      7.02     9.39    17.34    20.60    22.76    25.99    28.87    34.80
19      7.63    10.12    18.34    21.69    23.90    27.20    30.14    36.19
20      8.26    10.85    19.34    22.78    25.04    28.41    31.41    37.57
21      8.90    11.59    20.34    23.86    26.17    29.62    32.67    38.93
22      9.54    12.34    21.34    24.94    27.30    30.81    33.92    40.29
23     10.20    13.09    22.34    26.02    28.43    32.01    35.17    41.64
24     10.86    13.85    23.34    27.10    29.55    33.20    36.42    42.98
25     11.52    14.61    24.34    28.17    30.68    34.38    37.65    44.31
26     12.20    15.38    25.34    29.25    31.80    35.56    38.88    45.64
27     12.88    16.15    26.34    30.32    32.91    36.74    40.11    46.96
28     13.56    16.93    27.34    31.39    34.03    37.92    41.34    48.28
29     14.26    17.71    28.34    32.46    35.14    39.09    42.56    49.59
30     14.95    18.49    29.34    33.53    36.25    40.26    43.77    50.89
Table 3
Values of |t| with probability P of being exceeded in random sampling
v = degrees of freedom

  v    P=0.50   P=0.10   P=0.05   P=0.02   P=0.01
  1     1.000     6.34    12.71    31.82    63.66
  2     0.816     2.92     4.30     6.96     9.92
  3     0.765     2.35     3.18     4.54     5.84
  4     0.741     2.13     2.78     3.75     4.60
  5     0.727     2.02     2.57     3.36     4.03
  6     0.718     1.94     2.45     3.14     3.71
  7     0.711     1.90     2.36     3.00     3.50
  8     0.706     1.86     2.31     2.90     3.36
  9     0.703     1.83     2.26     2.82     3.25
 10     0.700     1.81     2.23     2.76     3.17
 11     0.697     1.80     2.20     2.72     3.11
 12     0.695     1.78     2.18     2.68     3.06
 13     0.694     1.77     2.16     2.65     3.01
 14     0.692     1.76     2.14     2.62     2.98
 15     0.691     1.75     2.13     2.60     2.95
 16     0.690     1.75     2.12     2.58     2.92
 17     0.689     1.74     2.11     2.57     2.90
 18     0.688     1.73     2.10     2.55     2.88
 19     0.688     1.73     2.09     2.54     2.86
 20     0.687     1.72     2.09     2.53     2.84
 21     0.686     1.72     2.08     2.52     2.83
 22     0.686     1.72     2.07     2.51     2.82
 23     0.685     1.71     2.07     2.50     2.81
 24     0.685     1.71     2.06     2.49     2.80
 25     0.684     1.71     2.06     2.48     2.79
 26     0.684     1.71     2.06     2.48     2.78
 27     0.684     1.70     2.05     2.47     2.77
 28     0.683     1.70     2.05     2.47     2.76
 29     0.683     1.70     2.04     2.46     2.76
 30     0.683     1.70     2.04     2.46     2.75
 35     0.682     1.69     2.03     2.44     2.72
 40     0.681     1.68     2.02     2.42     2.71
 45     0.680     1.68     2.02     2.41     2.69
 50     0.679     1.68     2.01     2.40     2.68
 60     0.678     1.67     2.00     2.39     2.66
  ∞     0.674     1.64     1.96     2.33     2.58
Table 4
F-table (5%)
v1 = degrees of freedom for the numerator (columns); v2 = degrees of freedom for the denominator (rows)

 v2\v1     1      2      3      4      5      6      8     12     24      ∞
   2    18.51  19.00  19.16  19.25  19.30  19.33  19.37  19.41  19.45  19.50
   3    10.13   9.55   9.28   9.12   9.01   8.94   8.84   8.74   8.64   8.53
   4     7.71   6.94   6.59   6.39   6.26   6.16   6.04   5.91   5.77   5.63
   5     6.61   5.79   5.41   5.19   5.05   4.95   4.82   4.68   4.53   4.36
   6     5.99   5.14   4.76   4.53   4.39   4.28   4.15   4.00   3.84   3.67
   7     5.59   4.74   4.35   4.12   3.97   3.87   3.73   3.57   3.41   3.23
   8     5.32   4.46   4.07   3.84   3.69   3.58   3.44   3.28   3.12   2.93
   9     5.12   4.26   3.86   3.63   3.48   3.37   3.23   3.07   2.90   2.71
  10     4.96   4.10   3.71   3.48   3.33   3.22   3.07   2.91   2.74   2.54
  12     4.75   3.88   3.49   3.26   3.11   3.00   2.85   2.69   2.50   2.30
  14     4.60   3.74   3.34   3.11   2.96   2.85   2.70   2.53   2.35   2.13
  16     4.49   3.63   3.24   3.01   2.85   2.74   2.59   2.42   2.24   2.01
  18     4.41   3.55   3.16   2.93   2.77   2.66   2.51   2.34   2.15   1.92
  20     4.35   3.49   3.10   2.87   2.71   2.60   2.45   2.28   2.08   1.84
  25     4.24   3.38   2.99   2.76   2.60   2.49   2.34   2.16   1.96   1.71
  30     4.17   3.32   2.92   2.69   2.53   2.42   2.27   2.09   1.89   1.62
  40     4.08   3.23   2.84   2.61   2.45   2.34   2.18   2.00   1.79   1.51
  60     4.00   3.15   2.76   2.52   2.37   2.25   2.10   1.92   1.70   1.39
  80     3.96   3.11   2.72   2.49   2.33   2.21   2.06   1.88   1.65   1.32