
European Journal of Operational Research 170 (2006) 106-119

Stochastics and Statistics

The moments and central moments of a compound distribution

Robert W. Grubbström *, Ou Tang

Department of Production Economics, Linköping Institute of Technology, SE-581 83 Linköping, Sweden

Received 4 September 2002; accepted 4 June 2004
Available online 25 August 2004

* Corresponding author. Tel.: +46 13 281773; fax: +46 13 288975. E-mail addresses: rwg@ipe.liu.se (R.W. Grubbström), ou.tang@ipe.liu.se (O. Tang).
Abstract
The compound distribution is of interest for the study of production/inventory problems, since it provides a flexible description of the stochastic properties of the system. However, due to the difficulties involved in obtaining analytical results for the compound distribution, studies are usually limited to searching for a good approximation by replacing a more complex model with a simpler one applying only the first few moments as parameters.
This paper presents general closed-form formulae for the moments and central moments of any order of a compound distribution made up of non-negative stochastic variables. The Laplace and z-transform methods play an important role in this study. The importance of taking into consideration higher-order moments, when computing a safety factor for inventory control, is illustrated in a numerical example.
© 2004 Elsevier B.V. All rights reserved.
Keywords: Stochastic processes; Inventory; Moments; Central moments; Compound distribution
1. Introduction
Compound distributions are instances of mixtures. Mixtures are obtained when the density f of a stochastic variable W depends on a second stochastic variable X having a density g(x), which we write f(w|X). The mixture will then have the density $h(w) = \int_{-\infty}^{\infty}f(w|x)g(x)\,\mathrm{d}x$. When a stochastic number M of stochastic variables Y_k, k = 1, 2, ..., M, are added together, the mixture becomes a compound distribution. With $W = \sum_{k=1}^{M}Y_k$, we have $h(w) = \sum_{i=0}^{\infty}f(w|i)g_i$, where f(w|i) is the density of the sum $\sum_{k=1}^{i}Y_k$ conditional on M = i, and g_i is the probability that M = i.
An example of a compound distribution is the demand at a store, where the number of customers and the demand from each of them both are independent random variables. A second example is the total demand during the lead time for an acquisition, when the lead time and the demand rate both are stochastic.
As to general formulae for compound distributions, they appear to be scarce in the literature. Sir Maurice Kendall and Alan Stuart [1, p. 157] state an expression corresponding to $\phi_W(t) = \phi_M\bigl(\ln\phi_Y(t)/\sqrt{-1}\bigr)$ for the characteristic function of a compound distribution made up of a sum of M independent identically distributed variables Y_k, each having the characteristic function $\phi_Y(t)$, and where M is random having the characteristic function $\phi_M(t)$. A comparison with our approach is given in Section 4 below. A similar expression, instead using the related generating functions, is given in Feller [2, p. 269].
The compound distribution is of interest for solving production/inventory problems because it can describe the stochastic properties of the system under study in more detail than simpler approaches. Bagchi et al. [3] summarised analytical models of compound distributions for modelling production/inventory systems. However, the few analytical models that exist do not easily provide tractable results, and further analysis of the production/inventory system becomes very complex.
An alternative way to deal with a compound distribution is to use approximations, such as a normal distribution. However, such a simplification may not fit the actual distribution sufficiently well, and the consequent analysis may therefore contain substantial errors [4]. In order to avoid such disadvantages of approximations, Lau [4] and other researchers [5-7] adopted other types of distributions for approximation purposes, such as the Pearson family of distributions, with the aim of getting a better fit to real data. In this case, the moments of the distribution had to be calculated up to the fourth order to determine the parameters of the Pearson curve. It is claimed that inventory control parameters, such as the optimal reorder point, are easily obtained using this approach.
In the study of stochastic models, moment-generating functions play an important role. For our purposes, we will use the Laplace and z-transforms as moment-generating functions. The moments of non-negative continuous and discrete stochastic variables have a close association with properties of the Laplace and z-transforms of the corresponding density functions.
In this paper, we first discuss some basics of transforms and their use as moment-generating functions, including the connection between moments and central moments, and vice versa. We then develop general closed-form formulae for calculating moments of any order of a compound distribution. It is shown that the Laplace transform of the compound distribution can be written as a combination of the z-transform and the Laplace transform for the involved processes. Although transforms have been in use for moment-generating purposes for an extremely long time (since the days of Pierre Simon Laplace, 1749-1827), this combination appears to be novel, although, as a starting point, there is some resemblance with the formula provided by Kendall and Stuart [1]. As examples to illustrate the use of our closed-form expressions, calculations of the first five moments and central moments of a general compound distribution are provided. The importance of using higher-order moments in inventory control is illustrated by a numerical example in Section 6.
As always, it might be questioned what the use of a general formula, beyond the fourth moment and heading possibly into infinity, might be. Why think about four dimensions, or more, when we know that we are almost always satisfied with two? For all domestic purposes the Earth is essentially flat. The answer, of course, is that we, or someone in the future, might be interested in the higher-order results, and that here there is a procedure providing an answer in closed form. If this answer were not provided here, we, and they, might have to wait some additional time to find out. Obviously, some have been interested in moments up to the fourth order, but why did they happen to stop just there? In this paper, the authors did not.
2. Notation
$\mathcal{L}\{f(x)\} = \tilde f(s) = \int_0^{\infty} f(x)\,e^{-sx}\,\mathrm{d}x$: Laplace transform of any function f(x) of a continuous variable x, x ≥ 0, where s is the complex Laplace frequency.

$\mathcal{Z}\{g_m\} = \hat g(z) = \sum_{m=0}^{\infty} g_m z^{m}$: z-transform of a function g_m of a discrete variable m, m = 0, 1, .... A more conventional definition is $\mathcal{Z}\{g_m\} = \sum_{m=0}^{\infty} g_m z^{-m}$. However, it makes no difference in our developments to follow and this definition provides more compact expressions.

$\mu^{(k)}_X = E[X^k] = \int_{-\infty}^{\infty} x^k f(x)\,\mathrm{d}x$: kth moment of a random variable X having the probability density function f(x).

$\mu_X = E[X] = \int_{-\infty}^{\infty} x f(x)\,\mathrm{d}x$: first moment of a random variable X, also identified as the expectation or the mean of X.

$\mu'^{(k)}_X = E[(X-\mu_X)^k] = \int_{-\infty}^{\infty} (x-\mu_X)^k f(x)\,\mathrm{d}x$: kth central moment of a random variable X having the probability density function f(x), k = 1, 2, ....

$f^{(k)}(x) = \mathrm{d}^k f(x)/\mathrm{d}x^k$: short-hand notation for the kth derivative of a function f(x).
3. Transforms and basics of moments
According to Kendall and Stuart [1, p. 57], or Feller [2, p. 213], the moment of order r about a point a is defined as
$$\int_{-\infty}^{\infty}(x-a)^r\,\mathrm{d}F = \int_{-\infty}^{\infty}(x-a)^r f(x)\,\mathrm{d}x = E\bigl[(X-a)^r\bigr]. \tag{1}$$
The transformation between moments about a and moments about b is easily found to obey
$$\sum_{j=0}^{m}\binom{m}{j}(a-b)^{m-j}E\bigl[(X-a)^j\bigr] = E\Bigl[\sum_{j=0}^{m}\binom{m}{j}(a-b)^{m-j}(X-a)^j\Bigr] = E\bigl[(X-b)^m\bigr]. \tag{2}$$
In particular, when a = 0, or when b = 0, we obtain
$$E\bigl[(X-b)^m\bigr] = \sum_{j=0}^{m}\binom{m}{j}(-b)^{m-j}E[X^j] = \sum_{j=0}^{m}\binom{m}{j}(-b)^{m-j}\mu^{(j)}_X, \tag{3}$$
$$\mu^{(m)}_X = E[X^m] = \sum_{j=0}^{m}\binom{m}{j}a^{m-j}E\bigl[(X-a)^j\bigr]. \tag{4}$$
The moments about zero are simply called moments, $\mu^{(m)}_X = E[X^m]$, and moments about the mean $\mu_X = E[X]$ are called central moments, $\mu'^{(m)}_X = E[(X-\mu_X)^m]$. The first central moment is always zero, $\mu'^{(1)}_X = 0$. Thus we have
$$\mu'^{(m)}_X = \sum_{j=0}^{m}\binom{m}{j}(-\mu_X)^{m-j}\mu^{(j)}_X, \tag{5}$$
$$\mu^{(m)}_X = \sum_{j=0}^{m}\binom{m}{j}\mu_X^{\,m-j}\mu'^{(j)}_X. \tag{6}$$
It is well known that the Laplace transform can be used as a moment-generating function for non-negative stochastic variables X:
$$\tilde f^{(n)}(0) = \lim_{s\to 0}\int_0^{\infty}\frac{\mathrm{d}^n}{\mathrm{d}s^n}e^{-sx}f(x)\,\mathrm{d}x = (-1)^n\lim_{s\to 0}\int_0^{\infty}x^n e^{-sx}f(x)\,\mathrm{d}x = (-1)^n\int_0^{\infty}x^n f(x)\,\mathrm{d}x = (-1)^n\mu^{(n)}_X. \tag{7}$$
This method applies to continuous as well as discrete (and mixed) distributions.
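As a small illustration of (7), the following SymPy sketch checks the relation for an assumed exponential density f(x) = a e^{-ax}: the nth moment equals (-1)^n times the nth derivative of the Laplace transform at s = 0. The density and the parameter value are our own illustrative choices, not taken from the paper.

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)
a = sp.Rational(3, 2)                    # assumed rate parameter
f = a * sp.exp(-a * x)                   # assumed density of a non-negative variable X

f_tilde = sp.integrate(f * sp.exp(-s * x), (x, 0, sp.oo))   # Laplace transform ~f(s)

for n in range(1, 5):
    moment_from_transform = (-1)**n * sp.limit(sp.diff(f_tilde, s, n), s, 0)   # Eq. (7)
    moment_direct = sp.integrate(x**n * f, (x, 0, sp.oo))                      # definition
    print(n, sp.simplify(moment_from_transform - moment_direct) == 0)
```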
Similarly, for the discrete probability g_m of a stochastic variable M we may use the z-transform to generate its moments. Interpreting g_m, m = 0, 1, 2, ..., in continuous time as a sequence of coefficients of Dirac impulses $\delta(t)$, $\delta(t-T)$, $\delta(t-2T)$, ..., distanced by the constant interval T, the Laplace transform of $\sum_{m=0}^{\infty}g_m\delta(t-mT)$ will be $\mathcal{L}\{g_m\} = \sum_{m=0}^{\infty}g_m e^{-msT}$, and its moments, using the substitution $z = e^{-sT}$, may be calculated according to
$$\mu^{(n)}_M = (-1)^n\lim_{s\to 0}\sum_{m=0}^{\infty}\frac{\mathrm{d}^n}{\mathrm{d}s^n}e^{-smT}g_m = (-1)^n\lim_{s\to 0}\frac{\mathrm{d}^n}{\mathrm{d}s^n}\Bigl(\sum_{m=0}^{\infty}z^m g_m\Bigr) = (-1)^n\lim_{s\to 0}\frac{\mathrm{d}^n\hat g(z(s))}{\mathrm{d}s^n} = T^n\lim_{z\to 1}\;z\frac{\mathrm{d}}{\mathrm{d}z}\Bigl(z\frac{\mathrm{d}}{\mathrm{d}z}\Bigl(\cdots\Bigl(z\frac{\mathrm{d}\hat g}{\mathrm{d}z}\Bigr)\Bigr)\Bigr) = T^n\lim_{z\to 1}\frac{\mathrm{d}^n\hat g}{\mathrm{d}(\ln z)^n}, \tag{8}$$
since we have $\frac{\mathrm{d}}{\mathrm{d}s} = \frac{\mathrm{d}z}{\mathrm{d}s}\frac{\mathrm{d}}{\mathrm{d}z} = -Tz\frac{\mathrm{d}}{\mathrm{d}z} = -T\frac{\mathrm{d}}{\mathrm{d}\ln z}$, cf. [8]. Without loss of generality, in the following we define the interval as unity, T = 1.
Using a Taylor expansion around z = 1 and a binomial expansion, we may write the z-transform of any discrete function as
$$\hat g(z) = \sum_{i=0}^{\infty}\hat g^{(i)}(1)\frac{1}{i!}(z-1)^i = \sum_{i=0}^{\infty}\hat g^{(i)}(1)\frac{1}{i!}\sum_{j=0}^{i}\binom{i}{j}z^j(-1)^{i-j}. \tag{9}$$
The nth derivative of the z-transform with respect to ln z is therefore
$$\frac{\mathrm{d}^n\hat g(z)}{\mathrm{d}(\ln z)^n} = \sum_{i=0}^{\infty}\hat g^{(i)}(1)\frac{1}{i!}\sum_{j=0}^{i}\binom{i}{j}j^n e^{j\ln z}(-1)^{i-j}. \tag{10}$$
The nth moment of a discrete distribution of a discrete random variable M can thus be written as
$$\mu^{(n)}_M = \lim_{z\to 1}\frac{\mathrm{d}^n\hat g(z)}{\mathrm{d}(\ln z)^n} = \sum_{i=0}^{n}\hat g^{(i)}(1)\frac{1}{i!}\sum_{j=0}^{i}\binom{i}{j}j^n(-1)^{i-j} = \sum_{i=0}^{n}a_{ni}\,\hat g^{(i)}(1), \tag{11}$$
where the coefficients a_{ni} are defined as
$$a_{ni} = \frac{1}{i!}\sum_{j=0}^{i}\binom{i}{j}j^n(-1)^{i-j}. \tag{12}$$
The first set of these numbers is displayed in Table 1. These coefficients are numerically the same as those appearing in the z-transform of $t^k$ and have a close relationship with Bernoulli numbers. The sum in (11) above has been truncated at n since the coefficients vanish for i > n (for a proof, see [9]). For further values
and various properties of the a_{ni}, reference is made to [10], in which the a_{ni} are defined slightly differently. However, one simple property of the a_{ni} (adjusted to our definition) we repeat here, namely that
$$a_{ni} = a_{n-1,i-1} + i\,a_{n-1,i}, \tag{13}$$
which provides an easy recursive method for calculating all values of a_{ni}.
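A short numerical sketch of the recursion (13) and of its use in (11): the coefficients a_{ni} are built row by row, and the moments of an assumed discrete distribution g_m are recovered from the derivatives ĝ^(i)(1) of its z-transform (which are the factorial moments of M). The test probabilities g_m are illustrative assumptions.

```python
from math import prod

N = 6
a = [[0] * (N + 1) for _ in range(N + 1)]
a[0][0] = 1
for n in range(1, N + 1):
    for i in range(1, n + 1):
        a[n][i] = a[n - 1][i - 1] + i * a[n - 1][i]          # recursion (13)

g = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}                         # assumed probabilities g_m

def ghat_deriv(i):
    # ith derivative of sum_m g_m z^m at z = 1: sum_m m(m-1)...(m-i+1) g_m
    return sum(prod(m - j for j in range(i)) * p for m, p in g.items())

for n in range(1, N + 1):
    via_eq11 = sum(a[n][i] * ghat_deriv(i) for i in range(n + 1))   # Eq. (11)
    direct = sum(m**n * p for m, p in g.items())                    # E[M^n] directly
    print(n, round(via_eq11, 10) == round(direct, 10))
```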
Alternatively, we can describe the derivatives of the z-transform as functions of the moments. Letting $\hat g(z) = h(\ln z)$ and $h^{(i)}(\ln z) = \dfrac{\mathrm{d}^i\hat g(z)}{\mathrm{d}(\ln z)^i}$, and using a Taylor expansion for the logarithm, we obtain the following expression:
$$\hat g(z) = \sum_{k=0}^{\infty}\frac{h^{(k)}(0)}{k!}(\ln z)^k = h(0) + \sum_{k=1}^{\infty}\frac{h^{(k)}(0)}{k!}\Bigl(\sum_{j=1}^{\infty}(-1)^{j+1}\frac{(z-1)^j}{j}\Bigr)^k = h(0) + \sum_{k=1}^{\infty}\frac{h^{(k)}(0)}{k!}(-1)^k\sum_{j_1,j_2,\ldots,j_k\ge 1}\frac{(-1)^{j_1}(-1)^{j_2}\cdots(-1)^{j_k}}{j_1 j_2\cdots j_k}(z-1)^{j_1+j_2+\cdots+j_k}. \tag{14}$$
The limit of its ith derivative becomes
$$\hat g^{(i)}(1) = \lim_{z\to 1}\frac{\mathrm{d}^i\hat g(z)}{\mathrm{d}z^i} = i!\sum_{k=0}^{i}(-1)^{i-k}\frac{h^{(k)}(0)}{k!}\sum_{\substack{j_1,j_2,\ldots,j_k\ge 1\\ j_1+j_2+\cdots+j_k=i}}\frac{1}{j_1 j_2\cdots j_k} = i!\sum_{k=0}^{i}(-1)^{i-k}\frac{\mu^{(k)}_M}{k!}\sum_{\substack{j_1,j_2,\ldots,j_k\ge 1\\ j_1+j_2+\cdots+j_k=i}}\frac{1}{j_1 j_2\cdots j_k} = \sum_{k=0}^{i}b_{ik}\,\mu^{(k)}_M, \tag{15}$$
since all terms apart from those having $\sum_{l=1}^{k}j_l = i$ vanish in the limit, and where the coefficients b_{ik} are defined by
$$b_{ik} = (-1)^{i-k}\frac{i!}{k!}\sum_{\substack{j_1,j_2,\ldots,j_k\ge 1\\ j_1+j_2+\cdots+j_k=i}}\frac{1}{j_1 j_2\cdots j_k}. \tag{16}$$
Table 1
Values of the coefficients a_{ni} for n, i ≤ 10

n\i    0   1    2     3      4      5      6      7     8    9   10
0      1
1      0   1
2      0   1    1
3      0   1    3     1
4      0   1    7     6      1
5      0   1   15    25     10      1
6      0   1   31    90     65     15      1
7      0   1   63   301    350    140     21     1
8      0   1  127   966   1701   1050    266    28     1
9      0   1  255  3025   7770   6951   2646   462    36    1
10     0   1  511  9330  34105  42525  22827  5880   750   45    1
Comparing the two sets of coefficients a_{ni} and b_{ik} in (12) and (16), we find that they are orthogonal in the sense that
$$\sum_{i=0}^{n}a_{ni}b_{ik} = \begin{cases}1, & \text{if } n = k,\\ 0, & \text{if } n \ne k.\end{cases} \tag{17}$$
This means that when the coefficients a_{ni} and b_{ik} are arranged as two triangular matrices, they will be each other's inverses.
The multiple summations in (16) are computationally cumbersome. Fortunately, the coefficients b_{ik} have some simple properties making them easy to evaluate. In [10, p. 121, Eq. (9)] it is shown that $b_{i1}$ must be $(i-1)!(-1)^{i-1}$, for i ≥ 1 (adopting our definitions from above). Also we must have $b_{ii} = 1$ for all i, since the matrix of the a_{ni} is triangular with unit values along its main diagonal. The main property to be used for evaluating these coefficients is the following recursive formula which the b_{ik} obey:
$$b_{ik} = b_{i-1,k-1} - (i-1)b_{i-1,k}, \quad i, k \ge 1. \tag{18}$$
Applying this procedure beginning with $b_{00} = 1$, all coefficients b_{ik} are then directly obtained. The first few values of the b_{ik} are displayed in Table 2. A proof by mathematical induction of the recursive relation (18) is the following. Let $\delta_{ik}$ denote Kronecker's delta, i.e. $\delta_{ik} = 1$ when i = k and zero otherwise. We know that $\sum_{j=0}^{i}b_{ij}a_{jk} = \delta_{ik}$ for i, k ≤ n and need to show that $\sum_{j=0}^{n+1}b_{n+1,j}a_{jk} = \delta_{n+1,k}$, if $b_{n+1,j}$ is given by (18), i.e., if $b_{n+1,j} = b_{n,j-1} - nb_{nj}$, which determines the $b_{n+1,j}$ uniquely. Developing $\sum_{j=0}^{n+1}b_{n+1,j}a_{jk} = \sum_{j=0}^{n+1}\bigl(b_{n,j-1} - nb_{nj}\bigr)a_{jk}$ and using $a_{jk} = a_{j-1,k-1} + k\,a_{j-1,k}$ according to (13), we obtain
$$\sum_{j=0}^{n+1}b_{n+1,j}a_{jk} = \sum_{j=0}^{n+1}\bigl(b_{n,j-1} - nb_{nj}\bigr)a_{jk} = \sum_{j=1}^{n+1}b_{n,j-1}\bigl(a_{j-1,k-1} + k\,a_{j-1,k}\bigr) - n\sum_{j=0}^{n}b_{nj}a_{jk} - nb_{n,n+1}a_{n+1,k} = \delta_{n,k-1} + k\delta_{nk} - n\delta_{nk},$$
since the triangular form of the b_{ij} requires $b_{n,n+1} = 0$. Hence if n + 1 = k, then $\delta_{n,k-1} = 1$ and $\delta_{nk} = 0$, requiring $\sum_{j=0}^{n+1}b_{n+1,j}a_{jk} = 1$, and if n + 1 > k, then $\sum_{j=0}^{n+1}b_{n+1,j}a_{jk} = 0$, concluding our proof.
Further properties of the b_{ik} are that each row sum of the matrix formed by the b_{ik} is zero, apart from the first two rows. Also, the diagonal immediately below the main diagonal of this matrix consists of triangular numbers with alternating signs.
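A companion sketch for the recursion (18): building the b_{ik} row by row and checking the orthogonality relation (17) numerically, i.e. that the triangular matrices of the a_{ni} and the b_{ik} are each other's inverses. The truncation order N is an arbitrary choice.

```python
N = 8
a = [[0] * (N + 1) for _ in range(N + 1)]
b = [[0] * (N + 1) for _ in range(N + 1)]
a[0][0] = b[0][0] = 1
for n in range(1, N + 1):
    for i in range(1, n + 1):
        a[n][i] = a[n - 1][i - 1] + i * a[n - 1][i]            # Eq. (13)
        b[n][i] = b[n - 1][i - 1] - (n - 1) * b[n - 1][i]      # Eq. (18)

# Eq. (17): sum_i a_{ni} b_{ik} should equal 1 when n = k and 0 otherwise
ok = all(
    sum(a[n][i] * b[i][k] for i in range(N + 1)) == (1 if n == k else 0)
    for n in range(N + 1) for k in range(N + 1)
)
print(ok)   # expected: True
```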
Table 2
Values of the coefficients b_{ik} for i, k ≤ 10

i\k    0   1        2         3          4        5        6       7      8    9    10
0      1
1      0   1
2      0   -1       1
3      0   2        -3        1
4      0   -6       11        -6         1
5      0   24       -50       35         -10      1
6      0   -120     274       -225       85       -15      1
7      0   720      -1764     1624       -735     175      -21     1
8      0   -5040    13068     -13132     6769     -1960    322     -28    1
9      0   40320    -109584   118124     -67284   22449    -4536   546    -36   1
10     0   -362880  1026576   -1172700   723680   -269325  63273   -9450  870   -45   1
4. Moments of a compound distribution
As stated above, the following sum is defined as a stochastic variable having a compound distribution, cf. Feller [2, p. 213],
$$W = \sum_{k=1}^{M}Y_k, \tag{19}$$
where the Y_k are mutually independent random variables with a common distribution having the probability density function f(y), and where M is a random variable having a discrete distribution g_m, independent of the Y_k. The stochastic variables M and Y_k represent the number of events and the size (intensity) of each event, respectively. M has a discrete distribution, but the Y_k can be either discrete or continuous (or a mixture thereof).
The probability of W taking on a value between w and w + dw is therefore
$$\Pr(w \le W < w+\mathrm{d}w) = \sum_{l=1}^{\infty}g_l\Pr(w \le Y_1+Y_2+\cdots+Y_l < w+\mathrm{d}w) = \sum_{l=1}^{\infty}g_l\,(f*f*\cdots*f)\,\mathrm{d}w, \tag{20}$$
where the asterisks denote l convolution operations. We notice that these convolutions may be expressed in terms of transforms in a compact compound way:
$$\tilde w(s)\,\mathrm{d}w = \mathcal{L}\{\Pr(w \le W < w+\mathrm{d}w)\} = \sum_{l=1}^{\infty}g_l\bigl(\tilde f(s)\bigr)^l\,\mathrm{d}w = \mathcal{Z}\{g_l\}\Big|_{z=\tilde f(s)}\,\mathrm{d}w = \hat g\bigl(\tilde f(s)\bigr)\,\mathrm{d}w. \tag{21}$$
By using a Taylor expansion and knowing that $\tilde f(0) = 1$, this transform may be developed into the following series:
$$\tilde w(s) = \hat g\bigl(\tilde f(s)\bigr) = \sum_{i=0}^{\infty}\hat g^{(i)}(1)\frac{1}{i!}\bigl(\tilde f(s)-1\bigr)^i = \sum_{i=0}^{\infty}\hat g^{(i)}(1)\frac{1}{i!}\Bigl(\sum_{l=0}^{\infty}\frac{\tilde f^{(l)}(0)}{l!}s^l - 1\Bigr)^i = \sum_{i=0}^{\infty}\hat g^{(i)}(1)\frac{1}{i!}\Bigl(\sum_{l=1}^{\infty}\frac{\tilde f^{(l)}(0)}{l!}s^l\Bigr)^i = 1 + \sum_{i=1}^{\infty}\hat g^{(i)}(1)\frac{1}{i!}\sum_{l_1=1}^{\infty}\frac{\tilde f^{(l_1)}(0)}{l_1!}\sum_{l_2=1}^{\infty}\frac{\tilde f^{(l_2)}(0)}{l_2!}\cdots\sum_{l_i=1}^{\infty}\frac{\tilde f^{(l_i)}(0)}{l_i!}\,s^{l_1+l_2+\cdots+l_i}. \tag{22}$$
The derivatives of $\tilde w(s)$ can be written as
$$\tilde w^{(n)}(s) = 0^n + \sum_{i=1}^{\infty}\hat g^{(i)}(1)\frac{1}{i!}\sum_{\substack{l_1,l_2,\ldots,l_i\ge 1\\ l_1+l_2+\cdots+l_i\ge n}}\Bigl(\prod_{j=1}^{i}\frac{\tilde f^{(l_j)}(0)}{l_j!}\Bigr)\Bigl(\sum_{j=1}^{i}l_j\Bigr)\Bigl(\sum_{j=1}^{i}l_j - 1\Bigr)\cdots\Bigl(\sum_{j=1}^{i}l_j - n + 1\Bigr)s^{\sum_{j=1}^{i}l_j - n} \tag{23}$$
and have the limits
$$\tilde w^{(n)}(0) = 0^n + n!\sum_{i=1}^{n}\hat g^{(i)}(1)\frac{1}{i!}\sum_{\substack{l_1,l_2,\ldots,l_i\ge 1\\ l_1+l_2+\cdots+l_i=n}}\prod_{j=1}^{i}\frac{\tilde f^{(l_j)}(0)}{l_j!}. \tag{24}$$
From $\sum_{j=1}^{i}l_j = n$ and $l_j \ge 1$ for all j, we have $\sum_{j=1}^{i}l_j \ge i$ and $n \ge i$. Therefore all terms with i larger than n vanish. As an intermediate result, we obtain the nth moment of W expressed in terms of the moments of Y, the latter all being of an order less than or equal to n, and the z-transform derivatives $\hat g^{(i)}(1)$, i = 0, 1, 2, ..., n:
$$\mu^{(n)}_W = 0^n + n!\sum_{i=1}^{n}\hat g^{(i)}(1)\frac{1}{i!}\sum_{\substack{l_1,l_2,\ldots,l_i\ge 1\\ l_1+l_2+\cdots+l_i=n}}\prod_{j=1}^{i}\frac{\mu^{(l_j)}_Y}{l_j!}. \tag{25}$$
However, we prefer to use the coefficients $h^{(i)}(0) = \mu^{(i)}_M$ instead of the $\hat g^{(i)}(1)$, since the former represent moments of M. Inserting (15) into (25) gives us
$$E[W^n] = 0^n + n!\sum_{i=1}^{n}\Bigl(\frac{1}{i!}\sum_{k=1}^{i}b_{ik}E[M^k]\Bigr)\Bigl(\sum_{\substack{l_1,l_2,\ldots,l_i\ge 1\\ l_1+l_2+\cdots+l_i=n}}\prod_{j=1}^{i}\frac{E[Y^{l_j}]}{l_j!}\Bigr) = 0^n + n!\sum_{i=1}^{n}\Bigl(\sum_{k=1}^{i}(-1)^{i-k}\frac{\mu^{(k)}_M}{k!}\sum_{\substack{j_1,j_2,\ldots,j_k\ge 1\\ j_1+j_2+\cdots+j_k=i}}\frac{1}{j_1 j_2\cdots j_k}\Bigr)\Bigl(\sum_{\substack{l_1,l_2,\ldots,l_i\ge 1\\ l_1+l_2+\cdots+l_i=n}}\prod_{j=1}^{i}\frac{\mu^{(l_j)}_Y}{l_j!}\Bigr), \tag{26}$$
where the b_{ik} are defined by (16). Although this expression is rather complicated, we may note the two independent multiplicative factors, one depending on moments of M up to the order of i, the other on moments of Y up to the order of n. They can therefore be calculated separately for different values of n and i. Also we may see that the number of terms in the right-hand factor is the binomial coefficient $\binom{n-1}{i-1}$ and in the left-hand factor is $\sum_{k=1}^{i}\binom{i-1}{k-1} = 2^{i-1}$, if the b_{ik} are to be calculated, or i terms, if these coefficients are considered as known. For instance, for n = 5 and i = 3 we have $\binom{n-1}{i-1} = \binom{4}{2} = 6$ terms in the right-hand factor. Three of these terms contain the common factor $(\mu_Y)^2\mu^{(3)}_Y$ and the three others the factor $\mu_Y(\mu^{(2)}_Y)^2$. After adding common terms together, there are two terms left, see Eq. (32e) below, in which appears $4\bigl(3(\mu_Y)^2\mu^{(3)}_Y/3! + 3\mu_Y(\mu^{(2)}_Y/2!)^2\bigr) = 2(\mu_Y)^2\mu^{(3)}_Y + 3\mu_Y(\mu^{(2)}_Y)^2$, where the coefficient 4 comes from the left-hand factor.
We offer the following interpretations of these two factors. Concerning the first factor, we may interpret the intermediate sum $\frac{1}{i!}\sum_{k=1}^{i}b_{ik}\mu^{(k)}_M$ in the following way. Consider
$$E\Bigl[\sum_{k=0}^{i}b_{ik}M^k\Bigr] = E\Bigl[\sum_{k=0}^{i}\bigl(b_{i-1,k-1} - (i-1)b_{i-1,k}\bigr)M^k\Bigr],$$
where the recursive relation (18) and basic properties of the b_{ik} have been used. Letting i step down one unit at a time gives us
$$\sum_{k=0}^{i}b_{ik}\mu^{(k)}_M = E\Bigl[\prod_{j=0}^{i-1}(M-j)\Bigr] = i!\,E\biggl[\binom{M}{i}\biggr]. \tag{27}$$
Regarding the second factor, we make a slightly different expansion of the third member of (22), keeping the zeroth-order term (for l = 0),
$$\Bigl(\sum_{l=0}^{\infty}\frac{\tilde f^{(l)}(0)}{l!}s^l - 1\Bigr)^i = \sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}\Bigl(\sum_{l=0}^{\infty}\frac{\tilde f^{(l)}(0)s^l}{l!}\Bigr)^j = \sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}\sum_{l_1=0}^{\infty}\frac{\tilde f^{(l_1)}(0)}{l_1!}\sum_{l_2=0}^{\infty}\frac{\tilde f^{(l_2)}(0)}{l_2!}\cdots\sum_{l_j=0}^{\infty}\frac{\tilde f^{(l_j)}(0)}{l_j!}\,s^{l_1+l_2+\cdots+l_j},$$
providing the limit
$$\lim_{s\to 0}\frac{\mathrm{d}^n}{\mathrm{d}s^n}\Bigl(\sum_{l=0}^{\infty}\frac{\tilde f^{(l)}(0)}{l!}s^l - 1\Bigr)^i = \sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}\,n!\sum_{\substack{l_1,l_2,\ldots,l_j\ge 0\\ l_1+l_2+\cdots+l_j=n}}\prod_{k=1}^{j}\frac{\tilde f^{(l_k)}(0)}{l_k!} = \sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}\sum_{\substack{l_1,l_2,\ldots,l_j\ge 0\\ l_1+l_2+\cdots+l_j=n}}\binom{n}{l_1\,l_2\,\cdots\,l_j}\prod_{k=1}^{j}\tilde f^{(l_k)}(0) = \sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}\sum_{\substack{l_1,l_2,\ldots,l_j\ge 0\\ l_1+l_2+\cdots+l_j=n}}\binom{n}{l_1\,l_2\,\cdots\,l_j}E\Bigl[\prod_{k=1}^{j}(-1)^{l_k}Y_k^{l_k}\Bigr] = \sum_{j=0}^{i}\binom{i}{j}(-1)^{n+i-j}E\Bigl[\Bigl(\sum_{k=1}^{j}Y_k\Bigr)^n\Bigr], \tag{28}$$
where the Y_k are independent. Therefore, we end up with the following expression, interpreting $\sum_{k=1}^{j}Y_k$ to be zero for j = 0:
$$\mu^{(n)}_W = \sum_{i=0}^{n}E\biggl[\binom{M}{i}\biggr]\sum_{j=0}^{i}\binom{i}{j}(-1)^{i-j}E\Bigl[\Bigl(\sum_{k=1}^{j}Y_k\Bigr)^n\Bigr]. \tag{29}$$
The formulae (26) and (29) thus provide general closed-form expressions for the moments of a general compound distribution in terms of the moments of the number of events M and the moments of their intensities Y, where the b_{ik} are defined by (16). For n = 1, from (26) we immediately obtain the obvious result $\mu_W = b_{11}\mu_M\mu_Y = \mu_M\mu_Y$. Also, the right-hand factor collapses into the $a_{ni}$ given by Eq. (12), if Y is set to unity with probability one (the case of a constant intensity).
If we instead wish to express a relationship between the central moments involved, we may use the binomial expansions given by (5) and (6) and insert these into (26). This gives us expressions of a rather lengthy type, which appear difficult to simplify much further.
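To make (29) concrete, here is a minimal numerical sketch for an assumed compound Poisson-exponential case: M is Poisson with mean 4, so that E[C(M,i)] = 4^i/i!, and each Y_k is exponential with rate 0.5, so that Y_1 + ... + Y_j is Erlang distributed with known raw moments. The result of (29) is compared with a Monte Carlo estimate; all parameter values and names are illustrative assumptions, not part of the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

lam_M = 4.0    # assumed mean of the Poisson-distributed number of events M
rate_Y = 0.5   # assumed rate of the exponentially distributed intensity Y

def moment_W(n):
    """Eq. (29) specialised to the assumed Poisson-exponential case."""
    total = 0.0
    for i in range(n + 1):
        # For M ~ Poisson(lam_M), the binomial moment E[C(M, i)] equals lam_M**i / i!
        binom_moment = lam_M**i / math.factorial(i)
        inner = 0.0
        for j in range(i + 1):
            if j == 0:
                # the empty sum is interpreted as zero, so its nth power vanishes for n >= 1
                erlang_moment = 1.0 if n == 0 else 0.0
            else:
                # Y_1 + ... + Y_j is Erlang(j, rate): E[(.)^n] = (n+j-1)! / ((j-1)! rate**n)
                erlang_moment = math.factorial(n + j - 1) / (math.factorial(j - 1) * rate_Y**n)
            inner += math.comb(i, j) * (-1) ** (i - j) * erlang_moment
        total += binom_moment * inner
    return total

# Monte Carlo check of the first few raw moments of W = Y_1 + ... + Y_M
M = rng.poisson(lam_M, size=200_000)
W = np.array([rng.exponential(1.0 / rate_Y, size=m).sum() for m in M])
for n in range(1, 5):
    print(n, moment_W(n), (W ** n).mean())
```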
A comparison between our method and the formula $\phi_W(t) = \phi_M\bigl(\ln\phi_Y(t)/\sqrt{-1}\bigr)$ provided by Kendall and Stuart [1] is the following. The characteristic function of an individual Y_k is defined as $\phi_Y(t) = \int_{-\infty}^{\infty}e^{t\sqrt{-1}\,y}f(y)\,\mathrm{d}y$, where t is real. For the sum $\sum_{k=1}^{j}Y_k$, where j is given, its characteristic function will be $\bigl(\phi_Y(t)\bigr)^j = \bigl(\int_{-\infty}^{\infty}e^{t\sqrt{-1}\,y}f(y)\,\mathrm{d}y\bigr)^j$, since the Y_k are independent. Writing the density of M as a sequence of Dirac impulses $g(x) = \sum_{j=0}^{\infty}g_j\delta(x-j)$, where $\delta(x-j)$ is an impulse at x = j (cf. Section 3), the characteristic function of the distribution of M is found to be $\phi_M(t) = \int_{-\infty}^{\infty}e^{xt\sqrt{-1}}\sum_{j=0}^{\infty}g_j\delta(x-j)\,\mathrm{d}x = \sum_{j=0}^{\infty}g_j e^{jt\sqrt{-1}}$. The characteristic function of W is developed as
$$\phi_W(t) = \int_{-\infty}^{\infty}e^{wt\sqrt{-1}}h(w)\,\mathrm{d}w = \int_{-\infty}^{\infty}e^{wt\sqrt{-1}}\sum_{j=0}^{\infty}h(w|j)g_j\,\mathrm{d}w = \sum_{j=0}^{\infty}g_j\int_{-\infty}^{\infty}e^{wt\sqrt{-1}}h(w|j)\,\mathrm{d}w = \sum_{j=0}^{\infty}g_j\Bigl(\int_{-\infty}^{\infty}e^{yt\sqrt{-1}}f(y)\,\mathrm{d}y\Bigr)^j = \sum_{j=0}^{\infty}g_j\bigl(\phi_Y(t)\bigr)^j = \sum_{j=0}^{\infty}g_j e^{j\ln\phi_Y(t)} = \phi_M\bigl(\ln\phi_Y(t)/\sqrt{-1}\bigr),$$
which gives Kendall's and Stuart's formula. The Laplace transform of the density of our non-negative variable Y_k is $\tilde f(s) = \int_0^{\infty}e^{-sy}f(y)\,\mathrm{d}y = \phi_Y\bigl(-s/\sqrt{-1}\bigr)$, and the z-transform of the distribution of M is $\hat g(z) = \sum_{j=0}^{\infty}g_j z^j = \sum_{j=0}^{\infty}g_j e^{j\ln z} = \phi_M\bigl(\ln z/\sqrt{-1}\bigr)$. The Laplace transform of the density of W according to (21) is $\tilde w(s) = \hat g\bigl(\tilde f(s)\bigr) = h\bigl(\ln\tilde f(s)\bigr)$. Substituting $s = -t\sqrt{-1}$ into the right-hand member and using $h(\ln z) = \phi_M\bigl(\ln z/\sqrt{-1}\bigr)$ and $\phi_Y(t) = \tilde f\bigl(-t\sqrt{-1}\bigr)$, provides the left-hand member $\tilde w\bigl(-t\sqrt{-1}\bigr) = \phi_W(t)$, i.e. the Kendall-Stuart formula. This formula is an alternative bringing us to the starting point of our algebraic developments.
5. The first five moments as an example
We demonstrate how to use the above expressions by developing the explicit formulae for the first five moments of a compound distribution. From Eq. (11) and Table 1, we have the moments of any discrete distribution as a function of the derivatives of its z-transform:
$$\mu_M = \hat g^{(1)}(1), \tag{30a}$$
$$\mu^{(2)}_M = \hat g^{(1)}(1) + \hat g^{(2)}(1), \tag{30b}$$
$$\mu^{(3)}_M = \hat g^{(1)}(1) + 3\hat g^{(2)}(1) + \hat g^{(3)}(1), \tag{30c}$$
$$\mu^{(4)}_M = \hat g^{(1)}(1) + 7\hat g^{(2)}(1) + 6\hat g^{(3)}(1) + \hat g^{(4)}(1), \tag{30d}$$
$$\mu^{(5)}_M = \hat g^{(1)}(1) + 15\hat g^{(2)}(1) + 25\hat g^{(3)}(1) + 10\hat g^{(4)}(1) + \hat g^{(5)}(1). \tag{30e}$$
Alternatively, from Eq. (15) (using Table 2), we have
$$\hat g^{(1)}(1) = \mu_M, \tag{31a}$$
$$\hat g^{(2)}(1) = -\mu_M + \mu^{(2)}_M, \tag{31b}$$
$$\hat g^{(3)}(1) = 2\mu_M - 3\mu^{(2)}_M + \mu^{(3)}_M, \tag{31c}$$
$$\hat g^{(4)}(1) = -6\mu_M + 11\mu^{(2)}_M - 6\mu^{(3)}_M + \mu^{(4)}_M, \tag{31d}$$
$$\hat g^{(5)}(1) = 24\mu_M - 50\mu^{(2)}_M + 35\mu^{(3)}_M - 10\mu^{(4)}_M + \mu^{(5)}_M. \tag{31e}$$
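As a quick symbolic check of (30) and (31), consider an assumed Poisson-distributed M: its z-transform is ĝ(z) = exp(λ(z − 1)), so ĝ^(i)(1) = λ^i, and (30) should reproduce the raw moments obtained directly from the moment-generating function. The sketch below uses SymPy; the choice of a Poisson test case is ours, not the paper's.

```python
import sympy as sp

lam, z, t = sp.symbols('lambda z t', positive=True)
ghat = sp.exp(lam * (z - 1))                                  # z-transform of a Poisson M
gd = [sp.diff(ghat, z, i).subs(z, 1) for i in range(6)]       # g^(i)(1) = lambda**i

# Eq. (30): raw moments of M from the derivatives, using the Table 1 coefficients
mu = [sp.Integer(1),
      gd[1],
      gd[1] + gd[2],
      gd[1] + 3 * gd[2] + gd[3],
      gd[1] + 7 * gd[2] + 6 * gd[3] + gd[4],
      gd[1] + 15 * gd[2] + 25 * gd[3] + 10 * gd[4] + gd[5]]

# Raw moments computed directly from the moment-generating function E[exp(tM)]
mgf = sp.exp(lam * (sp.exp(t) - 1))
direct = [sp.expand(sp.diff(mgf, t, n).subs(t, 0)) for n in range(6)]

print([sp.simplify(mu[n] - direct[n]) == 0 for n in range(1, 6)])

# Eq. (31e): inverting with the Table 2 coefficients recovers g^(5)(1) = lambda**5
g5 = 24 * mu[1] - 50 * mu[2] + 35 * mu[3] - 10 * mu[4] + mu[5]
print(sp.simplify(g5 - lam**5) == 0)
```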
For a compound distribution, Eq. (26) gives us the moments of W as
$$\mu_W = \mu_M\mu_Y, \tag{32a}$$
$$\mu^{(2)}_W = \mu_M\mu^{(2)}_Y + \bigl(\mu^{(2)}_M - \mu_M\bigr)(\mu_Y)^2, \tag{32b}$$
$$\mu^{(3)}_W = \mu_M\mu^{(3)}_Y + 3\bigl(\mu^{(2)}_M - \mu_M\bigr)\mu_Y\mu^{(2)}_Y + \bigl(\mu^{(3)}_M - 3\mu^{(2)}_M + 2\mu_M\bigr)(\mu_Y)^3, \tag{32c}$$
$$\mu^{(4)}_W = \mu_M\mu^{(4)}_Y + \bigl(\mu^{(2)}_M - \mu_M\bigr)\bigl(4\mu_Y\mu^{(3)}_Y + 3(\mu^{(2)}_Y)^2\bigr) + 6\bigl(\mu^{(3)}_M - 3\mu^{(2)}_M + 2\mu_M\bigr)(\mu_Y)^2\mu^{(2)}_Y + \bigl(\mu^{(4)}_M - 6\mu^{(3)}_M + 11\mu^{(2)}_M - 6\mu_M\bigr)(\mu_Y)^4, \tag{32d}$$
$$\mu^{(5)}_W = \mu_M\mu^{(5)}_Y + 5\bigl(\mu^{(2)}_M - \mu_M\bigr)\bigl(\mu_Y\mu^{(4)}_Y + 2\mu^{(2)}_Y\mu^{(3)}_Y\bigr) + 5\bigl(\mu^{(3)}_M - 3\mu^{(2)}_M + 2\mu_M\bigr)\bigl(2(\mu_Y)^2\mu^{(3)}_Y + 3\mu_Y(\mu^{(2)}_Y)^2\bigr) + 10\bigl(\mu^{(4)}_M - 6\mu^{(3)}_M + 11\mu^{(2)}_M - 6\mu_M\bigr)(\mu_Y)^3\mu^{(2)}_Y + \bigl(\mu^{(5)}_M - 10\mu^{(4)}_M + 35\mu^{(3)}_M - 50\mu^{(2)}_M + 24\mu_M\bigr)(\mu_Y)^5. \tag{32e}$$
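A small sanity check of (32a)-(32c) (the higher orders follow the same pattern) against direct enumeration, for an assumed two-point example in which M takes the values 1 or 2 and Y the values 0 or 1, each with probability 1/2. The distributions are chosen only so that every quantity can be enumerated exactly; they are not from the paper.

```python
from itertools import product

pM = {1: 0.5, 2: 0.5}                                            # assumed distribution of M
pY = {0: 0.5, 1: 0.5}                                            # assumed distribution of Y

muM = [sum(m**k * p for m, p in pM.items()) for k in range(4)]   # raw moments of M
muY = [sum(y**k * p for y, p in pY.items()) for k in range(4)]   # raw moments of Y

# Raw moments of W = Y_1 + ... + Y_M by brute-force enumeration of all outcomes
w = [0.0] * 4
for m, pm in pM.items():
    for ys in product(pY, repeat=m):
        p = pm
        for y in ys:
            p *= pY[y]
        s = sum(ys)
        for n in range(4):
            w[n] += p * s**n

# Eqs. (32a)-(32c) written out explicitly
eq32a = muM[1] * muY[1]
eq32b = muM[1] * muY[2] + (muM[2] - muM[1]) * muY[1]**2
eq32c = (muM[1] * muY[3] + 3 * (muM[2] - muM[1]) * muY[1] * muY[2]
         + (muM[3] - 3 * muM[2] + 2 * muM[1]) * muY[1]**3)

print(round(eq32a, 12) == round(w[1], 12))
print(round(eq32b, 12) == round(w[2], 12))
print(round(eq32c, 12) == round(w[3], 12))
```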
Using (5) and (6) for replacing all moments by central moments in (32), we obtain the following expressions for the first set of central moments of a compound distribution:
$$\mu'^{(2)}_W = \mu_M\mu'^{(2)}_Y + \mu'^{(2)}_M(\mu_Y)^2, \tag{33a}$$
$$\mu'^{(3)}_W = \mu_M\mu'^{(3)}_Y + 3\mu'^{(2)}_M\mu_Y\mu'^{(2)}_Y + \mu'^{(3)}_M(\mu_Y)^3, \tag{33b}$$
$$\mu'^{(4)}_W = \mu_M\mu'^{(4)}_Y + 4\mu'^{(2)}_M\mu_Y\mu'^{(3)}_Y + 3\bigl(\mu'^{(2)}_M + \mu_M(\mu_M-1)\bigr)(\mu'^{(2)}_Y)^2 + 6\bigl(\mu'^{(3)}_M + \mu_M\mu'^{(2)}_M\bigr)(\mu_Y)^2\mu'^{(2)}_Y + \mu'^{(4)}_M(\mu_Y)^4, \tag{33c}$$
$$\mu'^{(5)}_W = \mu_M\mu'^{(5)}_Y + 5\mu'^{(2)}_M\mu_Y\mu'^{(4)}_Y + 10\bigl(\mu'^{(2)}_M + \mu_M(\mu_M-1)\bigr)\mu'^{(2)}_Y\mu'^{(3)}_Y + 10\bigl(\mu'^{(3)}_M + \mu_M\mu'^{(2)}_M\bigr)(\mu_Y)^2\mu'^{(3)}_Y + 15\bigl(\mu'^{(3)}_M + (2\mu_M-1)\mu'^{(2)}_M\bigr)\mu_Y(\mu'^{(2)}_Y)^2 + 10\bigl(\mu'^{(4)}_M + \mu_M\mu'^{(3)}_M\bigr)(\mu_Y)^3\mu'^{(2)}_Y + \mu'^{(5)}_M(\mu_Y)^5. \tag{33d}$$
Other parameters, such as the skewness and kurtosis, are readily obtained for further analysis straight from (33). The development of higher-order moments of a compound distribution is a complex task. In the literature, derivations up to the fourth-order moments appear to have been shown only in [6,11], where a different method was adopted. In that approach, a random variable was first separated into two components and the dependence between these clarified. Consequently, errors arose easily from improper assumptions of component independence. Nevertheless, our work verifies that only the results in [11] are correct.
6. Illustrations and numerical examples
Although not all distributions are fully characterised by the values of all their moments, in most cases, when concentrated on a finite interval, a distribution is given by its complete set of moments [12, pp. 222-224]. When estimating a true distribution from the real world, a better approximation of the true distribution requires a greater number of moments to be assigned correct values. This is similar to better approximations involving more terms in Taylor expansions. As pointed out above, the Taylor expansion of the Laplace transform of a probability density around s = 0 produces coefficients which indeed are the moments (except for sign).
Estimations using two-parameter distributions, such as the normal or the Gamma distribution, can adopt at most two chosen moments; all remaining moments have to follow suit once the two are decided. Furthermore, there may be other severe restrictions, such as all odd central moments above the first being zero-valued for any symmetric distribution. For a three-parameter distribution, such as the triangular distribution, similarly, at most three moments can be assigned values independently.
The question then arises as to what errors might be generated by choosing an approximation with a small number of free parameters. In an overwhelming number of cases in the literature, the normal distribution with its two parameters is chosen as an estimate, and subsequently as a base for taking decisions. In particular, in applications involving cycle service levels, the argument providing a given level of the cumulative distribution is requested. Even in simple cases, taking the argument of a normal approximation in lieu of something more correct can yield substantial errors.
Compound distributions are inherently complex. In Fig. 1 we have compared three simple cases and their normal approximations. In all cases, the intensity Y is uniformly distributed on the interval from 5 to 6. The discrete variable M is either uniformly distributed (a), or has a linearly increasing probability (b), or a linearly decreasing probability (c). M may take on integer values between 1 and 4. The moments and central moments of the variables in these cases are tabulated in Table 3 together with the normal approximation. There are obvious major differences between the values for the central moments of the compound distributions and their normal approximations.
As an illustration of the use of our formulae in a safety stock application, we provide an example dealing with an inventory system having a Gamma distributed random replenishment lead time M and a Poisson distributed random demand Y in each period. The aim is to determine the safety stock level SS so that the cycle service level is maintained at 95%. The means and the second to fourth central moments of the random variables are given as
$$\mu_M = 3,\quad \mu'^{(2)}_M = 5,\quad \mu'^{(3)}_M = 10,\quad \mu'^{(4)}_M = 100,$$
$$\mu_Y = 20,\quad \mu'^{(2)}_Y = 50,\quad \mu'^{(3)}_Y = 100,\quad \mu'^{(4)}_Y = 1200.$$
Applying (33), we obtain the mean and the second to fourth central moments of the lead time demand W as
$$\mu_W = 60,\quad \mu'^{(2)}_W = 2150,\quad \mu'^{(3)}_W = 95300,\quad \mu'^{(4)}_W = 19126100.$$
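These numbers can be reproduced directly from (33a)-(33c); the short sketch below simply codes those three formulae with the values given above (the variable names are ours, with mu1 for the mean and c2..c4 for the central moments).

```python
# Central moments of the lead time demand W from (33a)-(33c)
mu1_M, c2_M, c3_M, c4_M = 3, 5, 10, 100      # given moments of the lead time M
mu1_Y, c2_Y, c3_Y, c4_Y = 20, 50, 100, 1200  # given moments of the period demand Y

mu1_W = mu1_M * mu1_Y                                             # mean = 60
c2_W = mu1_M * c2_Y + c2_M * mu1_Y**2                             # Eq. (33a) = 2150
c3_W = mu1_M * c3_Y + 3 * c2_M * mu1_Y * c2_Y + c3_M * mu1_Y**3   # Eq. (33b) = 95300
c4_W = (mu1_M * c4_Y + 4 * c2_M * mu1_Y * c3_Y
        + 3 * (c2_M + mu1_M * (mu1_M - 1)) * c2_Y**2
        + 6 * (c3_M + mu1_M * c2_M) * mu1_Y**2 * c2_Y
        + c4_M * mu1_Y**4)                                        # Eq. (33c) = 19126100

print(mu1_W, c2_W, c3_W, c4_W)   # 60 2150 95300 19126100
```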
The safety factor k is defined as the safety stock SS divided by the standard deviation of the lead time demand:
$$SS = k\bigl(\mu'^{(2)}_W\bigr)^{0.5}. \tag{34}$$
If we assume that the lead time demand follows a normal distribution, only the mean and second central moment are considered. For a service level of 95%, the value k = 1.64 is obtained from a normal distribution table. Thus SS = 1.64 × 2150^{0.5} ≈ 76.
Fig. 1. Three simple cases of compound distributions (solid) and their normal approximations (dotted).
We now assume the true lead time demand distribution to belong to Pearson's system of distributions [13] and take into account the mean and second to fourth central moments. The skewness $\beta_1$ and kurtosis $\beta_2$ are defined as
$$\beta_1 = \frac{\bigl(\mu'^{(3)}_W\bigr)^2}{\bigl(\mu'^{(2)}_W\bigr)^3}, \tag{35}$$
$$\beta_2 = \frac{\mu'^{(4)}_W}{\bigl(\mu'^{(2)}_W\bigr)^2}, \tag{36}$$
which are evaluated as $\beta_1 = 0.914$ and $\beta_2 = 4.14$ in our example. From standard statistical tables (for example in [14]), we have k = 1.89, which leads to a safety stock level of SS = 1.89 × 2150^{0.5} ≈ 88 instead of the formerly calculated value of 76. This simple example illustrates a method for determining the safety stock level using the mean and the second to fourth central moments. In addition, it indicates that the computed safety stock level can differ by up to 16% when the conventional approach assuming a normal distribution is compared with the more accurate method involving the higher-order moments.
In order to illustrate which method provides the better result, we assign the above two safety stock levels and then use simulation to examine the actual service level. The simulation runs 10000 periods and contains 10 replications. The result indicates that the actual service levels are 95.2% and 93.5% when the Pearson distribution and the normal distribution methods are used, respectively.
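The computation behind this comparison can be summarised in a few lines. The standard-normal quantile is taken from SciPy, while k = 1.89 for the Pearson curve is the tabulated value quoted above (we do not recompute it here); this is a sketch of the arithmetic, not of the simulation study.

```python
from scipy.stats import norm

c2_W, c3_W, c4_W = 2150, 95300, 19126100     # central moments of W from (33)

sigma_W = c2_W**0.5                          # standard deviation of lead time demand
k_normal = norm.ppf(0.95)                    # about 1.645
beta1 = c3_W**2 / c2_W**3                    # Eq. (35), about 0.914
beta2 = c4_W / c2_W**2                       # Eq. (36), about 4.14

print(round(k_normal * sigma_W))             # normal approximation: SS about 76
print(round(1.89 * sigma_W))                 # Pearson-based (tabulated k): SS about 88
print(round(beta1, 3), round(beta2, 2))
```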
Table 3
Moments and central moments of the distributions in the examples of Fig. 1

                                                               Order of moment
                                                               1        2          3          4            5
Intensity, uniform distribution between       Moment           5.500    30.333     167.750    930.200      5171.833
5 and 6, same for the three cases below       Central moment   0.000    0.083      0.000      0.013        0.000

Case a
Discrete variable, uniformly distributed      Moment           2.500    7.500      25.000     88.500       325.000
on interval between 1 and 4                   Central moment   0.000    1.250      0.000      2.563        0.000
Compound                                      Moment           13.750   227.083    4169.688   81361.292    1647956.979
                                              Central moment   0.000    38.021     1.719      2392.249     358.574
Normal approximation                          Central moment   0.000    38.021     0.000      4336.751     0.000

Case b
Discrete variable, distribution linearly      Moment           3.000    10.000     35.400     130.000      489.000
increasing on interval between 1 and 4        Central moment   0.000    1.000      -0.600     2.200        -3.000
Compound                                      Moment           16.500   302.750    5903.425   119493.733   2479102.542
                                              Central moment   0.000    30.500     -98.450    2049.621     -15040.208
Normal approximation                          Central moment   0.000    30.500     0.000      2790.750     0.000

Case c
Discrete variable, distribution linearly      Moment           2.000    5.000      14.600     47.000       161.000
decreasing on interval between 1 and 4        Central moment   0.000    1.000      0.600      2.200        3.000
Compound                                      Moment           11.000   151.417    2435.950   43228.850    816811.417
                                              Central moment   0.000    30.417     101.200    2052.550     15572.333
Normal approximation                          Central moment   0.000    30.417     0.000      2775.521     0.000

A more comprehensive comparison has recently been made by Tang and Grubbström [15]. Using a simulation method, they have investigated more compound combinations, such as Poisson-normal, Poisson-lognormal, normal-gamma, normal-lognormal, etc. All the results confirm that a higher-order moments method is necessary for determining the safety stock in order to avoid severe errors, especially when a high service level is desired.
Besides the Pearson system of distributions, one may also apply the Schmeiser-Deutsch distribution, in which the mean and second to fourth central moments are considered [16]. A comparative study of these two approaches for modelling inventory systems is given by Kottas and Lau [6]. However, in that particular study, the third and fourth moments are miscalculated due to errors in their formulae for higher-order moments.
7. Summary
This article provides general closed-form formulae for computing higher-order moments of a compound distribution. Their derivation is straightforward owing to the fact that the probability distribution of a compound distributed variable can be described in terms of a combination of the Laplace and z-transforms. Formulae are also made available for relating moments to coefficients of the Laplace and z-transforms. The only limitation in these formulae is that the random variable needs to be non-negative.
The use of (26) is at least twofold. We can either use it to approximate a compound distribution from real data, or use it to analyse how accurate the simplifying assumption of normality is in circumstances where the individual distributions are known. Both methods are important for investigating properties of, for instance, production-inventory models. An application of a compound demand process in a production-inventory system can also be found in [17], where the objective is to minimise the average cost of a system in order to achieve an optimal production plan. A future application of our findings will be to adapt the results to fit the stockout function, cf. [17].
References
[1] M.G. Kendall, A. Stuart, The Advanced Theory of Statistics, fourth ed., vol. 1, Charles Griffin, London, 1977.
[2] W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1, John Wiley & Sons, New York, 1957.
[3] U. Bagchi, J.C. Hayya, J.K. Ord, Modeling demand during lead time, Decision Sciences 15 (1984) 157-176.
[4] H.-S. Lau, Toward an inventory control system under non-normal demand and lead-time uncertainty, Journal of Business Logistics 10 (1) (1989) 88-103.
[5] M. Keaton, Using the gamma distribution to model demand when lead time is random, Journal of Business Logistics 16 (1) (1995) 107-131.
[6] J.F. Kottas, H.-S. Lau, A realistic approach for modeling stochastic lead time distributions, AIIE Transactions 11 (1) (1979) 54-60.
[7] J.E. Tyworth, Modeling transportation-inventory trade-offs in a stochastic setting, Journal of Business Logistics 13 (2) (1992) 97-124.
[8] R.W. Grubbström, The fundamental equations of MRP theory in discrete time, Working Paper WP-254, Department of Production Economics, Linköping Institute of Technology, Sweden, December 1998.
[9] R.W. Grubbström, A closed-form expression for the net present value of a time-power cash flow function, Managerial and Decision Economics 12 (5) (1991) 305-316.
[10] R.W. Grubbström, The z-transform of t^k, The Mathematical Scientist 16 (1991) 118-129.
[11] W.-X. Wan, H.-S. Lau, Formulas for computing the moments of stochastic lead time demand, AIIE Transactions 13 (3) (1981) 281-282.
[12] J.K. Ord, Families of Frequency Distributions, Griffin, London, 1972.
[13] W. Feller, An Introduction to Probability Theory and Its Applications, vol. 2, John Wiley & Sons, New York, 1966.
[14] E.S. Pearson, H.O. Hartley, Biometrika Tables for Statisticians, vol. II, Cambridge University Press, 1972.
[15] O. Tang, R.W. Grubbström, On the necessity of using higher-order moments for stochastic inventory systems, Working Paper WP-317, Department of Production Economics, Linköping Institute of Technology, Sweden, 2003.
[16] B.W. Schmeiser, S.J. Deutsch, A versatile four parameter family of probability distributions suitable for simulation, AIIE Transactions 9 (2) (1977) 176-182.
[17] O. Tang, Application of transforms in a compound demand process, Promet-Traffic-Traffico 13 (6) (1999) 355-364.